\section{\bf Introduction} \par \medskip Let $f _{n}(X) = (1 + X) ^{n} + (-1)^{n}(X ^{n} + 1)$, for each $n \in \mathbb N$. Throughout this note, we assume that $f _{n}(X)$, $n \in \mathbb N$, are defined over a field $K$ of characteristic zero. If the order $n$ of $f _{n}(X)$ is an even number, then the degree deg$(f _{n})$ and the leading coefficient of $f _{n}(X)$ are equal to $n$ and $2$, respectively; when $n$ is odd, deg$(f _{n})$ and the leading coefficient of $f _{n}(X)$ are equal to $n - 1$ and $n$, respectively. In addition, it can be easily verified that $f _{n}(X)$ is divisible by the polynomial $X(X + 1)$, i.e. $f _{n}(0) = f _{n}(-1) = 0$, if and only if $n$ is odd. Similarly, one obtains by straightforward calculations that the polynomial $X ^{2} + X + 1$ divides $f _{n}(X)$ if and only if $n$ is not divisible by $3$. These observations establish the left-to-right implication in the following question: \par \medskip (1) Find whether $f _{m}(X)$ and $f _{n}(X)$ are relatively prime, for a pair of distinct positive integers $m$ and $n$, if and only if $mn$ is divisible by $6$. \par \medskip\noindent An affirmative answer to (1) would prove the following conjecture in the special case where $a = 1$: \par \medskip (2) Let $a$, $b$ and $c$ be pairwise distinct positive integers with gcd$(a, b, c) = 1$. Then the symmetric polynomials $X _{1} ^{a} + X _{2} ^{a} + X _{3} ^{a}$, $X _{1} ^{b} + X _{2} ^{b} + X _{3} ^{b}$ and $X _{1} ^{c} + X _{2} ^{c} + X _{3} ^{c}$ form a regular sequence in the polynomial ring $K[X _{1}, X _{2}, X _{3}]$ in three algebraically independent variables over the field $K$ if and only if the product $abc$ is divisible by $6$. \par \medskip\noindent Let us note that a set of $\sigma $ homogeneous polynomials in $\sigma $ algebraically independent variables is a regular sequence if and only if the associated polynomial system has only the trivial solution $(0, \dots , 0)$. Conjecture (2) has been suggested in \cite{CKW} (see also \cite{KM}).
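Both divisibility observations can be tested mechanically for small orders. The following sketch (an illustration only, not part of the formal argument) checks them for $2 \le n \le 30$ by exact integer division:

```python
# Illustrative check of the two divisibility observations for
# f_n(X) = (1 + X)^n + (-1)^n (X^n + 1); not part of the formal argument.
from math import comb

def f_coeffs(n):
    """Integer coefficients c[j] of X^j in f_n(X)."""
    c = [comb(n, j) for j in range(n + 1)]   # (1 + X)^n
    s = (-1) ** n
    c[0] += s                                # (-1)^n * 1
    c[n] += s                                # (-1)^n * X^n
    return c

def divides(divisor, poly):
    """True if the monic integer polynomial `divisor` divides `poly`.

    Both arguments are coefficient lists, lowest degree first."""
    r = list(poly)
    d = len(divisor) - 1
    for j in range(len(r) - 1, d - 1, -1):   # synthetic division, top down
        q = r[j]
        for i in range(d + 1):
            r[j - d + i] -= q * divisor[i]
    return all(x == 0 for x in r[:d])        # remainder must vanish

for n in range(2, 31):
    # X(X + 1) = X^2 + X divides f_n exactly when n is odd
    assert divides([0, 1, 1], f_coeffs(n)) == (n % 2 == 1)
    # X^2 + X + 1 divides f_n exactly when 3 does not divide n
    assert divides([1, 1, 1], f_coeffs(n)) == (n % 3 != 0)
```

The division routine relies on the divisors being monic, so no rational arithmetic is needed.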
The purpose of this note is to show that the answer to (1) is affirmative, for polynomials of orders at most equal to $100$. Formally, our main result can be stated as follows: \par \medskip \begin{theo} \label{theo1.1} Let $m$ and $n$ be distinct positive integers at most equal to $100$, and let $K$ be a field with {\rm char}$(K) = 0$. Then the polynomials $f _{m}(X), f _{n}(X) \in K[X]$ satisfy the equality {\rm gcd}$(f _{m}(X), f _{n}(X)) = 1$ if and only if $6 \mid mn$. \end{theo} \par \medskip It is clearly sufficient to prove Theorem \ref{theo1.1}, and to consider (1), in the special case where $K$ is the field $\mathbb Q$ of rational numbers. Our notation and terminology are standard, and missing definitions can be found in \cite{L}. \par \medskip \section{\bf Preliminaries} \par \medskip This Section begins with a brief account of some properties of the polynomials $f _{m}(X)$, $f _{n}(X)$, where $m$ and $n$ are distinct positive integers. These properties are frequently used in the sequel without an explicit reference. Some of the most frequently used facts can be presented as follows: \par \medskip (2.1) (a) Any complex root $\alpha _{n}$ of $f _{n}(X)$, for a given $n \in \mathbb N$, satisfies the following: $2\alpha _{n}$ is an algebraic integer, provided that $n$ is even; $n\alpha _{n}$ is an algebraic integer in case $n$ is odd; \par (b) If $m$ and $n$ are positive integers of different parity, then the common complex roots of $f _{m}(X)$ and $f _{n}(X)$ (if any) are algebraic integers; \par (c) $0$ and $-1$ are simple roots of $f _{n}(X)$, provided that $n \in \mathbb N$ and $n$ is odd; the same applies to the reduction of $f _{n}(X)$ modulo $2$; \par (d) Given an integer $n >0$ and a primitive cubic root of unity $\rho _{3}$ (lying in the field $\mathbb C$ of complex numbers), we have $f _{n}(\rho _{3}) = 0$ if and only if $n$ is not divisible by $3$.
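Theorem \ref{theo1.1} can also be confirmed by brute force for small orders. The sketch below (an illustration under the assumption $2 \le m < n \le 24$; the order $1$ is excluded since $f _{1}(X)$ vanishes identically) computes gcd$(f _{m}, f _{n})$ in $\mathbb Q[X]$ with the Euclidean algorithm over exact rationals:

```python
# Brute-force confirmation of Theorem 1.1 for small orders (illustration only):
# gcd(f_m, f_n) = 1 in Q[X] exactly when 6 divides m*n, for 2 <= m < n <= 24.
from fractions import Fraction
from math import comb

def f_poly(n):
    """Coefficients of f_n(X) over Q, lowest degree first, no leading zeros."""
    c = [Fraction(comb(n, j)) for j in range(n + 1)]
    s = (-1) ** n
    c[0] += s
    c[n] += s
    while c and c[-1] == 0:
        c.pop()
    return c

def poly_mod(a, b):
    """Remainder of a modulo b in Q[X] (coefficient lists, low to high)."""
    a = list(a)
    while len(a) >= len(b) > 0:
        q = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= q * b[i]
        a.pop()
        while a and a[-1] == 0:
            a.pop()
    return a

def coprime(m, n):
    """True if gcd(f_m, f_n) is a nonzero constant of Q[X]."""
    a, b = f_poly(m), f_poly(n)
    while b:
        a, b = b, poly_mod(a, b)
    return len(a) == 1

for m in range(2, 25):
    for n in range(m + 1, 25):
        assert coprime(m, n) == (m * n % 6 == 0)
```

Exact `Fraction` arithmetic avoids the rounding problems a floating-point Euclidean algorithm would have.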
\par \medskip \noindent Note also that the polynomial $f _{n}(X)$ has no real root, for any even integer $n$. \par \medskip (2.2) The roots of $f _{p}(X)$ are algebraic integers, for every prime $p$. \par \medskip Next we include a list of the polynomials $f _{n}(X)$, for some small values of $n$: \par \medskip (2.3) (a) $f _{10}(X) = (X ^{2} + X + 1) ^{2}g _{10}(X)$, where $g _{10}(X) = 2X ^{6} + 6X ^{5} + 27X ^{4} + 44X ^{3} + 27X ^{2} + 6X + 2$; \par (b) $f _{9}(X) = 3X(X + 1)g_{9}(X)$, where \par\noindent $g _{9}(X) = 3X ^{6} + 9X ^{5} + 19X ^{4} + 23X ^{3} + 19X ^{2} + 9X + 3$; \par (c) $f _{8}(X) = 2(X ^{2} + X + 1)g_{8}(X)$, where \par\noindent $g _{8}(X) = X ^{6} + 3X ^{5} + 10X ^{4} + 15X ^{3} + 10X ^{2} + 3X + 1$; \par (d) $f _{7}(X) = 7X(X + 1)(X ^{2} + X + 1) ^{2}$; \par (e) $f _{6}(X) = 2X ^{6} + 6X ^{5} + 15X ^{4} + 20X ^{3} + 15X ^{2} + 6X + 2$; \par (f) $f _{5}(X) = 5X(X + 1)(X ^{2} + X + 1)$; \par (g) $f _{4}(X) = 2(X ^{4} + 2X ^{3} + 3X ^{2} + 2X + 1) = 2(X ^{2} + X + 1) ^{2}$; \par (h) $f _{3}(X) = 3X(X + 1)$; \par (i) $f _{2}(X) = 2(X ^{2} + X + 1)$. \par \medskip The following lemma presents some well-known properties of Newton's binomial coefficients that are frequently used in the sequel: \par \medskip \begin{lemm} \label{lemm2.1} Assume that $p$ is a prime number, and $n, s$ are positive integers, such that $p$ does not divide $s$. Then the binomial coefficients $\binom{p^{n}s}{j}$, $j = 1, \dots , p ^{n} - 1$, are divisible by $p$; $\binom{p ^{n}s}{p ^{n}}$ is not divisible by $p$. \end{lemm} \par \medskip \begin{proof} The assertion is obvious if $n = 1$, so we assume further that $j \ge p$ (and $n \ge 2$). Suppose first that $j$ is not divisible by $p$ and denote by $y$ the greatest integer divisible by $p$ and less than $j$.
It is clear from the definition of Newton's binomial coefficients that the maximal power of $p$ dividing $\binom{p^{n}s}{j}$ is greater than the maximal power of $p$ dividing $\binom{p^{n}s}{y}$; in particular, this ensures that $p^{2}$ divides $\binom{p^{n}s}{j}$. Now fix a positive integer $u < p^{n-1}$ and denote by $C[p ^{n}s, pu]$ the product of the factors of the numerator of $\binom{p^{n}s}{pu}$ that are divisible by $p$, divided by the product of the factors of the denominator of $\binom{p^{n}s}{pu}$ that are divisible by $p$. It is easily verified that $C[p^{n}s, pu] = \binom{p^{n-1}s}{u}$. This allows us to complete the proof of Lemma \ref{lemm2.1}, arguing by induction on $n$. \end{proof} \par \medskip \begin{rema} \label{rema2.2} It follows from Lemma \ref{lemm2.1}, applied to $p = 2$, that $f _{2^{k}\cdot 3}(X)$ decomposes over the field $\mathbb Q _{2}$ of $2$-adic numbers into a product of three irreducible polynomials of degree $2^{k}$ each; one of these polynomials lies in the ring $\mathbb Z _{2}[X]$ and is $2$-Eisensteinian over the ring $\mathbb Z _{2}$ of $2$-adic integers. In view of our irreducibility criterion, see Section 4, this means that if $f _{2^{k}\cdot 3}(X)$ is reducible over $\mathbb Q$, then it decomposes into a product of three $\mathbb Q$-irreducible polynomials, say $h _{1}(X)$, $h _{2}(X)$ and $h _{3}(X)$ (in fact $h _{j}(X)$, $j = 1, 2, 3$, are irreducible even over $\mathbb Q_{2}$). More precisely, the action of the symmetric group Sym$_{3}$ on the set of roots of $f _{2^{k}\cdot 3}(X)$ induces bijections $y _{j}$, $j = 2, 3$, from the set of roots of $h _{1}(X)$ in $\mathbb Q_{2,{\rm sep}}$ onto the set of roots of $h _{j}(X)$. It is therefore clear that if gcd$(f _{2^{k}\cdot 3}(X), f _{n}(X)) \neq 1$, for some $n \in \mathbb N$, then $f _{n}(X)$ and $h _{u}(X)$ have a common root $\xi _{u} \in \mathbb Q _{2,{\rm sep}}$, for each $u \in \{1, 2, 3\}$.
Thus it turns out that if gcd$(f _{2^{k}\cdot 3}(X), f _{n}(X)) \neq 1$, then $f _{2^{k}\cdot 3}(X) \mid f _{n}(X)$; in particular, $f _{n}(X)$ has a complex root that is not an algebraic integer. \end{rema} \par \medskip Lemma \ref{lemm2.1} can be supplemented as follows: \par \medskip \begin{lemm} \label{lemm2.3} Let $n \in \mathbb N$ and $p \in \mathbb P$. Then the binomial coefficients $\binom{p^{n}}{j}$ are divisible by $p$, provided that $j \in \mathbb Z$ and $1 \le j < p^{n}$; in addition, if $n \ge 2$, then $\binom{p ^{n}}{j}$ is divisible by $p^{2}$ unless $j= p^{n-1}j_{0}$, $1 \le j_{0} \le p - 1$. \end{lemm} \par \medskip \begin{proof} The former part of our assertion is a special case of Lemma \ref{lemm2.1}, so we assume further that $n \ge 2$. Suppose first that $j$ is not divisible by $p$ and denote by $y$ the greatest integer divisible by $p$ and less than $j$. It is clear from the definition of Newton's binomial coefficients that the maximal power of $p$ dividing $\binom{p^{n}}{j}$ is greater than the maximal power of $p$ dividing $\binom{p^{n}}{y}$; in particular, this ensures that $p^{2} \mid \binom{p^{n}}{j}$. Now fix a positive integer $u < p^{n-1}$ and define $C[p ^{n}, pu]$ as in the proof of Lemma \ref{lemm2.1}. It is easily verified that $C[p^{n}, pu] = \binom{p^{n-1}}{u}$. This allows us to complete the proof of Lemma \ref{lemm2.3}, arguing by induction on $n$. \end{proof} \par \medskip \section{\bf Polynomials of even orders} \par \medskip This Section begins with a criterion for validity of the equality gcd$(f _{m}(X), f _{n}(X)) = 1$ in the special case where $m$ is divisible by $6$ and $f _{m}(X)$ is irreducible over $\mathbb Q$. \par \medskip \begin{prop} \label{prop3.1} Let $m$ and $n$ be positive integers with $2 \mid m$ and $6 \mid mn$, and put $f _{n}(X) = (1 + X)^{n} + (-1)^{n}(X ^{n} + 1)$. Suppose that $m < n$ and that the polynomial $f _{m}(X) = (1 + X) ^{m} + X ^{m} + 1$ is $\mathbb Q$-irreducible or $m = 3\cdot 2^{k}$, for some $k \in \mathbb N$.
Then {\rm gcd}$(f _{m}(X), f _{n}(X)) = 1$ except, possibly, under the following conditions: \par {\rm (a)} $n \equiv 1 \ ({\rm mod} \ m - 1)$ and $m \equiv n \ ({\rm mod} \ 2^{k+1})$, where $k$ is the greatest integer for which $2^{k}$ divides $m$; \par {\rm (b)} If $m$ is divisible by $4$, then $n/2 \equiv 1 \ ({\rm mod} \ m/2 - 1)$. \end{prop} \par \medskip \begin{proof} Suppose that gcd$(f _{m}(X), f _{n}(X)) \neq 1$, for some $n \in \mathbb N$. This means that $f _{m}(X)$ and $f _{n}(X)$ have a common root $\rho \in \mathbb C$. Note further that the irreducibility of $f _{m}(X)$ over $\mathbb Q$ and the assumption that $m$ is even indicate that the complex roots of $f _{m}(X)$ are not algebraic integers, so it follows from (2.1) (b) that $n \equiv 0 \ ({\rm mod} \ 2)$. Observing also that $f _{m}(X) \mid f _{n}(X)$ (in $\mathbb Z[X]$), one concludes that $f _{m}(1) \mid f _{n}(1)$ (in $\mathbb Z$), i.e. $2 ^{m} + 2 \mid 2 ^{n} + 2$. It is therefore clear that $2 ^{m-1} + 1 \mid 2 ^{n-1} + 1$, and since $m - 1$ and $n - 1$ are odd, this requires that $m - 1 \mid n - 1$, proving the former part of Proposition \ref{prop3.1} (a). The rest of the proof of Proposition \ref{prop3.1} relies on Lemma \ref{lemm2.1}, which allows us to prove that $f _{m}(X)$ and $f _{n}(X)$ have unique divisors $\theta _{m}(X)$ and $\theta _{n}(X)$, respectively, over the field $\mathbb Q _{2}$, with the following properties: $\theta _{m}$ and $\theta _{n}$ are $2$-Eisensteinian polynomials over the ring $\mathbb Z _{2}$; the degree of $\theta _{m}(X)$ equals the greatest power of $2$ dividing $m$, and the degree of $\theta _{n}$ equals the greatest power of $2$ dividing $n$. Observing also that $\theta _{m}(X)$ and $\theta _{n}(X)$ can be chosen so that their leading coefficients are equal to $1$, one obtains from the divisibility of $f _{n}(X)$ by $f _{m}(X)$ that $\theta _{m}(X) = \theta _{n}(X)$. This result completes the proof of Proposition \ref{prop3.1} (a).
We turn to the proof of Proposition \ref{prop3.1} (b), so we suppose further that $4 \mid m$. In view of Proposition \ref{prop3.1} (a), this means that $4 \mid n$, which shows that $f _{m}(\sqrt{-1}) = (2\sqrt{-1}) ^{m/2} + 2 = 2 ^{m/2} + 2$ and $f _{n}(\sqrt{-1}) = 2 ^{n/2} + 2$. As $f _{m}(X) \mid f _{n}(X)$, whence $f _{m}(\sqrt{-1}) \mid f _{n}(\sqrt{-1})$ (in the ring $\mathbb Z[\sqrt{-1}]$ of Gaussian integers), our calculations lead to the conclusion that $2 ^{(m/2)-1} + 1 \mid 2 ^{(n/2)-1} + 1$ (in $\mathbb Z$). Taking finally into account that $(m/2) - 1$ and $(n/2) - 1$ are odd positive integers, one obtains that $(m/2) - 1 \mid (n/2) - 1$. Proposition \ref{prop3.1} is proved. \end{proof} \par \medskip Our next result shows that the polynomials $f _{6}(X)$, $f _{12}(X)$, $f _{18}(X)$, $f _{30}(X)$, $f _{36}(X)$, $f _{54}(X)$, $f _{84}(X)$ and $f _{90}(X)$ are irreducible over $\mathbb Q$. \par \medskip \begin{prop} \label{prop3.2} The polynomial $f _{m}(X + 1)$ is $3$-Eisensteinian relative to the ring $\mathbb Z$ of integers, if $m = 3^{k} + 3^{l}$, for some positive integers $k$ and $l$; in particular, this holds, for $m = 6$, $12, 18, 30, 36, 54, 84, 90$. \end{prop} \par \medskip \begin{proof} Note first that the constant term of the polynomial $t _{m}(X) = f _{m}(X + 1)$ is divisible by $3$ but is not divisible by $9$. Indeed, this term is equal to $2 ^{m} + 2$, and since $6 \mid m$, we have $2 ^{m} \equiv 1 ({\rm mod} \ 9)$ and $2 ^{m} + 2 \equiv 3 ({\rm mod} \ 9)$. Therefore, using Lemma \ref{lemm2.1}, one sees that it suffices to show that the coefficient, say $a$, of the monomial $X ^{3^{l}}$ in the reduced presentation of $t _{m}(X)$ is divisible by $3$. The proof of this fact offers no difficulty because $a = (2 ^{3^{k}} + 1)\binom{m}{3 ^{l}}$ (the binomial coefficient $\binom{m}{3^{l}}$ is a positive integer not divisible by $3$ whereas $2 ^{3^{k}} + 1$ is divisible by $3$).
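The $3$-Eisenstein property can also be confirmed coefficient by coefficient for the listed orders; the sketch below (an illustration only) expands $t _{m}(X) = (2 + X)^{m} + (1 + X)^{m} + 1$ directly:

```python
# Illustrative check that t_m(X) = f_m(X+1) is 3-Eisenstein for the orders
# m = 3^k + 3^l listed in Proposition 3.2; not part of the formal proof.
from math import comb

def t_coeffs(m):
    """Coefficients of t_m(X) = (2+X)^m + (1+X)^m + 1 for even m, low first."""
    c = [comb(m, j) * (2 ** (m - j) + 1) for j in range(m + 1)]
    c[0] += 1                                # the trailing "+ 1"
    return c

def is_eisenstein(c, p):
    """Eisenstein criterion at p for the coefficient list c (low first)."""
    return (c[-1] % p != 0                   # leading coefficient prime to p
            and c[0] % p ** 2 != 0           # constant term not divisible by p^2
            and all(x % p == 0 for x in c[:-1]))

for m in (6, 12, 18, 30, 36, 54, 84, 90):
    assert is_eisenstein(t_coeffs(m), 3)
```

For comparison, $m = 8$ fails the test, as expected, since $8$ is not of the form $3^{k} + 3^{l}$.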
\end{proof} \par \medskip Our next result gives an affirmative answer to (1) in the special case where $m$ is a $2$-primary number. It proves the validity of Theorem \ref{theo1.1}, under the condition that $m \in \{2, 4, 8, 16, 32, 64\}$. \par \medskip \begin{prop} \label{prop3.3} For any $k, n \in \mathbb N$, {\rm gcd}$(f _{m}(X), f _{3n}(X)) = 1$, where $m = 2^{k}$. \end{prop} \par \medskip \begin{proof} Our argument relies on the fact that $f _{m}(X) = 2g _{m}(X)$, $g _{m}(X)$ being a polynomial with integer coefficients, such that $g_{m}(X) - X^{m} - X^{m/2} - 1 = 2h _{m}(X)$, for some $h _{m}(X) \in \mathbb Z[X]$. This ensures that if $\rho $ is a complex root of $g_{m}$, $K =\mathbb Q(\rho )$ and $O_{K}$ is the ring of algebraic integers in $K$, then the coset $\rho + P$ is a cubic root of unity in the field $O_{K}/P$, for any prime ideal $P$ of $O_{K}$ of $2$-primary norm (i.e. a prime ideal, such that $2 \in P$). The same holds whenever $K ^{\prime }/K$ is a finite extension, $O_{K'}$ is the ring of algebraic integers in $K ^{\prime }$, and $P ^{\prime }$ is a prime ideal in $O_{K'}$ of $2$-primary norm. The noted property of $\rho $ indicates that $f _{n}(\rho ) \equiv 3 \equiv 1 \ ({\rm mod} \ P ^{\prime })$ in case $n \in \mathbb N$ is divisible by $3$, which proves the non-existence of a common root of $f _{m}(X)$ and $f _{3n}(X)$, as claimed. \end{proof} \par \medskip The following statement provides an affirmative answer to (1), under the hypothesis that $m/2$ or $m/4$ is an odd primary number not divisible by $3$. \par \medskip \begin{prop} \label{prop3.4} For any prime number $p > 3$ and each pair of positive integers $k, n$, we have gcd$(f _{2p^{k}}(X), f _{3n}(X)) = {\rm gcd}(f _{4p^{k}}(X), f _{3n}(X)) = 1$. \end{prop} \par \medskip \begin{proof} We proceed by reduction modulo $p$. Then $\bar f _{2p^{k}}(X) = \bar f _{2}(X) ^{p^{k}} = 2 ^{p^{k}}(X ^{2} + X + 1) ^{p^{k}}$ and $\bar f _{4p^{k}}(X) = \bar f _{4}(X) ^{p^{k}} = 2 ^{p^{k}}(X ^{2} + X + 1) ^{2p^{k}}$.
This indicates that if $\hat \rho $ is a root of $\bar f _{2p^{k}}(X)$ or $\bar f _{4p^{k}}(X)$ in $(\mathbb Z/p\mathbb Z)_{{\rm sep}}$, then $\hat \rho $ is a cubic root of unity. Therefore, it is easily verified that $\bar f_{3n}(\hat \rho ) = -3 \neq 0$, provided that $n$ is odd. When $n$ is even, one obtains similarly that $\bar f_{3n}(\hat \rho ) = 3 \neq 0$. These calculations prove that gcd$(\bar f_{2p^{k}}(X), \bar f_{3n}(X)) = {\rm gcd}(\bar f_{4p^{k}}(X), \bar f_{3n}(X)) = 1$, for each $n \in \mathbb N$. Our conclusion means that gcd$(\bar f_{2p^{k}}(X)\cdot \bar f_{4p^{k}}(X), \bar f_{3n}(X)) = 1$, which can be restated by saying that $u(X)f_{2p^{k}}(X)f_{4p^{k}}(X) + v(X)f_{3n}(X) = 1 + pw(X)$, for some $u(X), v(X), w(X) \in \mathbb Z[X]$. Suppose now that $f _{3n}(\beta ) = f_{2p^{k}}(\beta )f_{4p^{k}}(\beta ) = 0$, for some $\beta \in \mathbb C$, put $O = \{r \in \mathbb Q\colon 2^{n(r)}r \in \mathbb Z$, for some integer $n(r) \ge 0\}$, and denote by $O _{\mathbb Q(\beta )} ^{\prime }$ the integral closure in $\mathbb Q(\beta )$ of the ring $O$. Since $2\beta $ is an algebraic integer and $1 + pw(\beta ) = 0$, one obtains consecutively that $\beta \in O _{\mathbb Q(\beta )} ^{\prime }$ and that $p$ is an invertible element of $O _{\mathbb Q(\beta )} ^{\prime }$; in particular, this requires that $1/p \in O _{\mathbb Q(\beta )} ^{\prime }$. Since, however, $O$ is an integrally closed subring of $\mathbb Q$, the obtained result leads to the conclusion that $1/p \in O$, which is not the case. The obtained contradiction is due to the assumption that $f _{3n}(X)$ and $f _{2p^{k}}(X)\cdot f_{4p^{k}}(X)$ have a common root, so Proposition \ref{prop3.4} is proved.
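The reduction identity underlying this argument can be observed concretely; a minimal sketch (assuming the sample choice $p = 5$, $k = 1$, which is not singled out in the text) compares $f _{10}$ modulo $5$ with the fifth power of $f _{2}$ modulo $5$:

```python
# Illustration (sample choice p = 5, k = 1): modulo p, f_{2p^k}(X) coincides
# with f_2(X)^{p^k}, by the Frobenius endomorphism of (Z/pZ)[X].
from math import comb

def f_coeffs(n):
    """Integer coefficients of f_n(X) = (1+X)^n + (-1)^n (X^n + 1)."""
    c = [comb(n, j) for j in range(n + 1)]
    s = (-1) ** n
    c[0] += s
    c[n] += s
    return c

def mul_mod(a, b, p):
    """Product of two coefficient lists, reduced modulo p."""
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    return r

p = 5
f2 = [x % p for x in f_coeffs(2)]        # 2(X^2 + X + 1) modulo 5
power = [1]
for _ in range(p):                       # f_2^{p^k} with k = 1
    power = mul_mod(power, f2, p)
assert power == [x % p for x in f_coeffs(2 * p)]
```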
\end{proof} \par \medskip Let $h(Z) \in \mathbb Z[Z]$ be the cubic polynomial defined so that $h(X + X ^{-1}) = g _{8}(X)/X ^{3}$, and let $\bar h(Z) \in (\mathbb Z/q\mathbb Z)[Z]$ be its reduction modulo a prime number $q > 2$ not dividing the discriminant $d(h)$. It is not difficult to see that the discriminant $d(\bar h)$ is a non-square in $(\mathbb Z/q\mathbb Z) ^{\ast }$ if and only if $\bar h(Z)$ has a unique zero lying in $\mathbb Z/q\mathbb Z$. When this holds, $\bar g _{8}(X)$, the reduction of $g _{8}(X) \in \mathbb Z[X]$ modulo $q$, decomposes over $\mathbb Z/q\mathbb Z$ into a product of three (pairwise relatively prime) quadratic polynomials irreducible over $\mathbb Z/q\mathbb Z$. For example, this applies to the case where $q = 5$ or $q = 7$, which is implicitly used for simplifying the proofs of the following two statements. \par \medskip \begin{prop} \label{prop3.5} The polynomials $f _{8\cdot 5^{k}}(X)$ and $f _{n}(X)$ satisfy the equality {\rm gcd}$(f _{8\cdot 5^{k}}(X), f _{n}(X)) = 1$, for each $k \in \mathbb N$, and any $n \in \mathbb N$ divisible by $3$ and not congruent to $6$ modulo $24$. \end{prop} \par \medskip \begin{proof} It is easily verified that $2 + \sqrt{2}$, $2 - \sqrt{2}$, $1 + 2\sqrt{2}$, $1 - 2\sqrt{2}$, $-2 - 2\sqrt{2}$ and $-2 + 2\sqrt{2}$ are pairwise distinct roots of $\bar f _{8}(X)$, the reduction of $f _{8}(X)$ modulo $5$. These roots are contained in a field $\mathbb F _{25}$ with $25$ elements. None of them is a primitive cubic root of unity: $(2 + \sqrt{2})^{3} = -\sqrt{2}$, $(2 - \sqrt{2})^{3} = \sqrt{2}$; $(1 + 2\sqrt{2})^{3} = 2\sqrt{2}$, $(1 - 2\sqrt{2})^{3} = -2\sqrt{2}$; $(-2 - 2\sqrt{2})^{3} = -1$, $(-2 + 2\sqrt{2})^{3} = -1$. In other words, the noted elements are roots of $\bar g _{8}(X)$, the reduction modulo $5$ of the polynomial $g_{8}(X) = (1/2)f _{8}(X)/(X ^{2} + X + 1)$ ($g_{8}(X) \in \mathbb Z[X]$ is irreducible over $\mathbb Q$).
Observe that the latter two roots of $\bar g_{8}(X)$ are primitive $6$-th roots of $1$, whereas the remaining roots of $\bar g_{8}(X)$ are generators of the multiplicative group $\mathbb F _{25} ^{\ast }$ of $\mathbb F_{25}$. Taking further into account that the elements $a\sqrt{2}$, $a \in \mathbb F _{5} ^{\ast }$, are precisely the primitive $8$-th roots of unity in $\mathbb F _{25}$ ($\mathbb F _{5}$ is the prime subfield of $\mathbb F_{25}$), and $f _{8}(X) = 2(X^{2} + X + 1)g_{8}(X)$, one concludes that $\bar f _{8}(X)$ and $\bar f _{3n}(X)$ do not possess a common root, for any odd $n \in \mathbb N$. These calculations yield gcd$(\bar f_{8\cdot 5^{k}}(X), \bar f _{3n}(X)) = 1$, which allows us to deduce by the method of proving Proposition \ref{prop3.4} that gcd$(f _{8\cdot 5^{k}}(X), f _{3n}(X)) = 1$ whenever $n$ is odd, and also, in the following two cases: $4 \mid n$; $n \equiv 6 ({\rm mod} \ 8)$. Thus Proposition \ref{prop3.5} is proved. \end{proof} \par \medskip \begin{prop} \label{prop3.6} The polynomials $f _{8\cdot 7^{k}}(X)$ and $f _{n}(X)$ satisfy the equality {\rm gcd}$(f _{8\cdot 7^{k}}(X), f _{n}(X)) = 1$ whenever $k, n \in \mathbb N$ and $n$ is divisible by $6$. \end{prop} \par \medskip \begin{proof} The reductions $\bar f _{8\cdot 7^{k}}(X)$ and $\bar f_{8}(X)$ modulo $7$ satisfy the equality $\bar f_{8\cdot 7^{k}}(X) = \bar f_{8}(X)^{7^{k}}$, so it is sufficient to show that $\bar f_{8}(X)$ and $\bar f_{n}(X)$ do not possess a common root in $(\mathbb Z/7\mathbb Z)_{\rm sep}$. Our argument relies on the fact that $\sqrt{-1} \notin \mathbb Z/7\mathbb Z$, and $3 + \sqrt{-1}$, $3 - \sqrt{-1}$, $1 + 2\sqrt{-1}$, $1 - 2\sqrt{-1}$, $-2 -2\sqrt{-1}$ and $-2 + 2\sqrt{-1}$ are all roots in $(\mathbb Z/7\mathbb Z)_{\rm sep}$ of the reduction $\bar g_{8}(X)$ of $g_{8}(X)$ modulo $7$. This ensures that, for each of these roots, say $\rho $, $\bar f _{n}(\rho ) = \rho _{1} + \rho _{2} + 1$ whenever $n \in \mathbb N$ is fixed and divisible by $6$.
Here $\rho _{1}$ and $\rho _{2}$ are $8$-th roots of unity in $(\mathbb Z/7\mathbb Z)_{\rm sep}$ depending on $n$ and $\rho $. We show that $\bar f _{n}(\rho ) \neq 0$. Consider an arbitrary primitive $8$-th root of unity $\varepsilon \in (\mathbb Z/7\mathbb Z)_{\rm sep}$. It is easily verified that $\varepsilon \in (\mathbb Z/7\mathbb Z)(\sqrt{-1}) \setminus \mathbb Z/7\mathbb Z$. More precisely, one obtains by straightforward calculations that $\varepsilon = -2(\varepsilon _{1} + \varepsilon _{2}\sqrt{-1})$, for some $\varepsilon _{j} \in \{-1, 1\}$, $j = 1, 2$. It is now easy to see that $\bar f _{n}(\rho ) \neq 0$, as claimed. Thus the assertion that gcd$(\bar f _{8\cdot 7^{k}}(X), \bar f _{n}(X)) = 1$, for every admissible pair $k, n$, becomes obvious, which completes the proof of Proposition \ref{prop3.6}. \end{proof} \par \medskip Proposition \ref{prop3.6} and our next result prove that gcd$(f _{56}(X), f_{n}(X)) = 1$, for each $n \in \mathbb N$ divisible by $3$. This, combined with Proposition \ref{prop3.4} and Corollaries \ref{coro5.6} and \ref{coro5.7}, proves the validity of Theorem \ref{theo1.1} in the special case where $m$ is an even biprimary number (see also Remark \ref{rema4.2} for the case of $m = 72$). \par \medskip \begin{prop} \label{prop3.7} The polynomials $f_{2^{k}q}(X)$ and $f_{n}(X)$ are relatively prime, provided that $k \in \mathbb N$, $q \in \{5, 7\}$ and $n \in \mathbb N$ is odd and divisible by $3$. \end{prop} \par \medskip \begin{proof} Denote by $\bar f_{2^{k}q}(X)$ the reduction of $f _{2^{k}q}(X)$ modulo $2$. It is not difficult to see that $\bar f _{2^{k}q}(X) = \bar f_{q}(X)^{2^{k}}$, where $\bar f_{q}$ is the reduction of $f_{q}(X)$ modulo $2$. This ensures that if $\alpha\in \mathbb C$ is an algebraic integer with $f_{2^{k}q}(\alpha ) = 0$, then $\bar f _{q}(\hat \alpha ) = 0$, where $\hat \alpha $ is the residue class of $\alpha $ modulo any prime ideal $P$ of $O_{\mathbb Q(\alpha )}$, such that $2 \in P$.
In particular, this is the case where $\alpha $ is a common root of $f _{2^{k}q}(X)$ and $f_{\nu }(X)$, for some odd $\nu \in \mathbb N$. Observing also that $\bar f_{5}(X) = X(X + 1)(X ^{2} + X + 1)$ and $\bar f_{7}(X) = X(X + 1)(X ^{2} + X + 1)^{2}$, one concludes that either $\hat \alpha \in \{0, -1\}$ or $\hat \alpha $ is a primitive cubic root of unity. The latter possibility is clearly ruled out, if $3 \mid \nu $. At the same time, since $\nu $ is odd, $0$ and $-1$ are simple roots of $\bar f_{\nu }(X)$, so it is easy to see that gcd$(\bar f_{2^{k}q}(X), \bar g_{\nu }(X)) = 1$, $\bar g _{\nu }(X)$ being the reduction modulo $2$ of the polynomial $g_{\nu }(X) \in \mathbb Z[X]$ defined by the equality $f _{\nu }(X) = X(X + 1)g _{\nu }(X)$. As $0$ and $-1$ are not roots of $f _{2^{k}q}(X)$, the obtained result yields consecutively gcd$(f _{2^{k}q}(X), g_{\nu }(X)) = {\rm gcd}(f_{2^{k}q}(X), f_{\nu }(X)) = 1$, as claimed. \end{proof} \par \medskip \begin{rema} \label{rema3.8} Let $\bar f _{63}(X)$, $\bar f _{9}(X)$, $\bar f _{70}(X)$ and $\bar f _{10}(X)$ be the reductions modulo $7$ of the polynomials $f _{63}(X)$, $f _{9}(X)$, $f _{70}(X)$ and $f _{10}(X)$, respectively. It is easily verified that $f _{9}(X)$ and $f _{10}(X)$ are divisible in $\mathbb Z[X]$ by polynomials $g _{9}(X)$ and $g _{10}(X)$, both of degree $6$, which are irreducible over $\mathbb Q$. One also sees that $\bar g _{9}(X)$ decomposes over $\mathbb Z/7\mathbb Z$ into a product of two cubic polynomials irreducible over $\mathbb Z/7\mathbb Z$, whereas $\bar g _{10}(X)$ is presentable as a product of three $(\mathbb Z/7\mathbb Z)$-irreducible quadratic polynomials. As $\bar f _{63}(X) = \bar f _{9}(X)^{7}$ and $\bar f _{70}(X) = \bar f _{10}(X)^{7}$, this yields gcd$(\bar f _{63}(X), \bar f _{70}(X)) = 1$, which implies the existence of integral polynomials $u(X)$, $v(X)$ and $h(X)$, such that $u(X)f _{63}(X) + v(X)f _{70}(X) = 1 + 7h(X)$.
We prove that gcd$(f _{63}(X), f _{70}(X)) = 1$, by assuming the opposite. Then $\mathbb C$ contains a common root $\beta $ of $f _{63}(X)$ and $f _{70}(X)$, and by (2.1) (b), $\beta $ must be an algebraic integer with $1 + 7h(\beta ) = 0$. This requires that $7$ be invertible in the ring of algebraic integers in $\mathbb Q(\beta )$, a contradiction proving that gcd$(f _{63}(X), f _{70}(X)) = 1$. \end{rema} \par \medskip It is likely that one could achieve more essential progress in the analysis of Question (1) (up to its full answer), by applying systematically other specializations of $f _{m}(X)$ and $f _{n}(X)$ than those used in the proof of Proposition \ref{prop3.1}. An example supporting this idea is provided by the proof of the following assertion. \par \medskip \begin{prop} \label{prop3.9} Let $n$ be a positive integer different from $6$. Then $f _{6}(X)$ and $f _{n}(X)$ have no common root except, possibly, in the case where $n \equiv 6 \ ({\rm mod} \ 1260)$. \end{prop} \par \medskip \begin{proof} Our starting point is the fact that $f _{6}(X)$ is irreducible over $\mathbb Q$ (see Proposition \ref{prop3.2}); this means that $f _{6}(X)$ and $f _{n}(X)$ have a common root, for a given $n \in \mathbb N$, if and only if $f _{6}(X)$ divides $f _{n}(X)$. Note also that the leading coefficient of $f_{6}(X)$ is equal to $2$, and that $f _{6}(X)$ is a primitive polynomial (i.e. its coefficients are integers and their greatest common divisor equals $1$). These observations show that the complex roots of $f_{6}(X)$ are not algebraic integers. On the other hand, by (2.1) (b), if $n$ is odd and $r$ is a common root of $f_{n }(X)$ and $f _{6}(X)$, then $r$ must be an algebraic integer. Therefore, one may assume in our further considerations that $n$ is even. Then it follows from Proposition \ref{prop3.1} that if $m \in \mathbb N$ is even and $f _{m}(X)$ divides $f _{n}(X)$ in $\mathbb Z[X]$, then $m - 1 \mid n - 1$ and $4 \mid n - m$.
Thus it becomes clear that $f _{6}(X) \nmid f _{n}(X)$ except, possibly, in the case where $n \equiv 6 \ ({\rm mod} \ 20)$. In order to complete the proof of Proposition \ref{prop3.9}, it remains to be seen that if $f _{6}(X) \mid f _{n}(X)$, then $n \equiv 6 \ ({\rm mod} \ 126)$ (by Proposition \ref{prop3.1}, $5 \mid n - 6$, so the divisibility of $n - 6$ by $126$ would imply $n \equiv 6$ modulo $126\cdot 5 = 630$; hence, by the congruence $n \equiv 6 \ ({\rm mod} \ 4)$, $1260 \mid n - 6$, as claimed by Proposition \ref{prop3.9}). \par Observe now that $f _{6}(3) = 4^{6} + 3^{6} + 1 = 4096 + 729 + 1 = 4826 = 2\cdot 2413 = 2\cdot 19\cdot 127$. Note also that $2^{7} = 128$ is congruent to $1$ modulo $127$, which implies $2^{7k'} \equiv 1 \ ({\rm mod} \ 127)$, for each $k' \in \mathbb N$. Now fix an integer $k \ge 0$ and put $S(k) = 4^{k} + 1$. It is verified by straightforward calculations that $S(0) = 2$, $S(1) = 5$, $S(2) = 17$, $S(3) = 65$, and $S(4)$, $S(5)$ and $S(6)$ are congruent to $3$, $9$ and $33$, respectively, modulo $127$. It is also clear that $S(l) \equiv S(k) \ ({\rm mod} \ 127)$ whenever $l$ and $k$ are non-negative integers with $l - k$ divisible by $7$. Thus it turns out that, as $k$ runs across $\mathbb N$, $S(k)$ takes at most $7$ values modulo $127$, the value being determined by the residue class of $k$ modulo $7$. \par The next step towards the proof of Proposition \ref{prop3.9} is to show that $3$ is a primitive root modulo $127$. Thereafter (in fact, almost simultaneously) we show that if $T(k) = 4^{k} + 3^{k} + 1$, for any integer $k \ge 0$, then $T(k)$ is divisible by $127$ if and only if $k \equiv 6 \ ({\rm mod} \ 126)$. This particular fact allows us to take the final step towards our proof, as it shows that if $k$ is even and $f _{6}(X)$ divides $f _{k}(X)$, then $f _{6}(3)$ divides $f _{k}(3)$, which requires that $k \equiv 6 \ ({\rm mod} \ 126)$.
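All the arithmetic modulo $127$ invoked in this step can be verified directly; the sketch below (a plain recomputation, not a replacement for the argument) checks the factorization of $f _{6}(3)$, the primitivity of $3$ modulo $127$, and the criterion for $127 \mid T(k)$:

```python
# Direct recomputation of the arithmetic modulo 127 used in the proof of
# Proposition 3.9; illustration only.
def T(k):
    # T(k) = 4^k + 3^k + 1; for even k this equals f_k(3)
    return 4 ** k + 3 ** k + 1

assert T(6) == 4826 == 2 * 19 * 127      # f_6(3) and its prime factorization
assert pow(2, 7, 127) == 1               # 2^7 = 128 is congruent to 1 mod 127
assert pow(3, 63, 127) == 126            # 3^63 is congruent to -1 mod 127

# 3 is a primitive root modulo 127: 3^(126/q) != 1 for every prime q | 126
assert all(pow(3, 126 // q, 127) != 1 for q in (2, 3, 7))

# the table s(j): 3^{s(j)} is congruent to -S(j) = -(4^j + 1) modulo 127
s = {0: 9, 1: 24, 2: 101, 3: 118, 4: 64, 5: 65, 6: 6}
for j, sj in s.items():
    assert pow(3, sj, 127) == (-(4 ** j + 1)) % 127

# 127 divides T(k) exactly when k is congruent to 6 modulo 126
for k in range(400):
    assert (T(k) % 127 == 0) == (k % 126 == 6)
```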
\par It is verified by direct calculations that $3^{63} \equiv -1 \ ({\rm mod} \ 127)$ (apply the quadratic reciprocity law). Direct calculations also show that $3^{6}$, $3^{7}$, $3^{14}$, $3^{21}$, $3^{42}$ are congruent modulo $127$ to $-33$, $28$, $22$ ($127$ divides $784 - 22 = 762$), $-19$ ($28\cdot 22 = 616$ is congruent to $-19$ modulo $127$), $-20$, respectively. Note also that $3^{9}$ and $3^{18}$ are congruent modulo $127$ to $-2$ and $4$, respectively. These calculations prove that $3$ is a primitive root modulo $127$, as claimed. \par The noted property of $3$ means that the residue classes modulo $127$ of the numbers $3^{g}\colon g = 1, \dots , 126$, form a permutation of the numbers $1, \dots , 126$. This ensures that for any $j = 0, \dots , 6$, there exists a unique $s(j)$ modulo $126$, such that $S(j) + 3^{s(j)}$ is divisible by $127$. In order to take the final step towards our proof, it suffices to verify that $s(j)$ is not congruent to $j$ modulo $7$, for any $j \neq 6$. The verification process specifies this as follows: $s(0) = 9$, $s(1) = 24$, $s(2) = 101$, $s(3) = 118$, $s(4) = 64$, $s(5) = 65$, $s(6) = 6$. The computational part of this process is facilitated by the observation that $17$, $-11$, $5$ and $16$ are congruent modulo $127$ to $144$, $3^{5} = 243$, $132 = 2^{2}\cdot 3\cdot 11$, and $143 = 11\cdot 13$, respectively. Proposition \ref{prop3.9} is proved. \end{proof} \par \medskip \section{\bf An irreducibility criterion for integral polynomials in one variable} \par \medskip The main result of this Section draws attention to the question of whether the polynomials $f _{6n}(X)$, $n \in \mathbb N$, are irreducible over $\mathbb Q$. It shows that this holds in several special cases (which, however, is crucial for the proof of Theorem \ref{theo1.1}). \par \medskip \begin{prop} \label{prop4.1} Let $f(X) \in \mathbb Z[X]$ be a polynomial of degree $n > 0$, and let $S$ be a finite set of prime numbers not dividing the discriminant $d(f)$ of $f$.
For each $p \in S$, denote by $n_{p}$ the greatest common divisor of the degrees of the irreducible polynomials over the field with $p$ elements, which divide the reduction of $f(X)$ modulo $p$. Then the degree of every polynomial $g(X) \in \mathbb Z[X]$ that is irreducible over the field $\mathbb Q$ of rational numbers and divides $f(X)$ is divisible by the least common multiple, say $\nu $, of the numbers $n _{p}\colon p \in S$; in particular, if $\nu = n$, then $f(X)$ is irreducible over $\mathbb Q$. \end{prop} \par \medskip \begin{proof} It is sufficient to observe that, by Hensel's lemma, for each $p \in S$, there is a degree-preserving bijection of the set of $\mathbb Q _{p}$-irreducible polynomials dividing $f(X)$ in $\mathbb Q _{p}[X]$ onto the set of $(\mathbb Z/p\mathbb Z)$-irreducible polynomials dividing the reduction of $f(X)$ modulo $p$ (in $(\mathbb Z/p\mathbb Z)[X]$). One should also note that every $\mathbb Q$-irreducible polynomial dividing $f(X)$ in $\mathbb Q[X]$ is presentable as a product of $\mathbb Q _{p}$-irreducible polynomials dividing $f(X)$ in $\mathbb Q _{p}[X]$. \end{proof} \par \medskip Let $f(X) \in \mathbb Z[X]$ be a $\mathbb Q$-irreducible polynomial of degree $n$, and let $G _{f}$ be the Galois group of $f(X)$ over $\mathbb Q$. It is worth mentioning that if the irreducibility of $f(X)$ can be deduced from Proposition \ref{prop4.1}, then $n$ divides the period of $G _{f}$. \par \medskip \begin{rema} \label{rema4.2} Using Proposition \ref{prop4.1} and a computer program for mathematical calculations, Junzo Watanabe proved that the polynomials $f _{42}(X)$, $f _{60}(X)$, $f _{66}(X)$, $f _{72}(X)$, $f _{78}(X)$ are irreducible over $\mathbb Q$. This result, combined with Proposition \ref{prop3.2}, yields gcd$(f _{m}(X), f _{n}(X)) = 1$, for every pair $m, n \in \mathbb N$ less than $100$, such that $m$ is odd and $6$ divides $n$.
Similarly, he proved that $f _{88}(X) = (X^{2} + X + 1) ^{2}g_{88}(X)$, where $g _{88}(X) \in \mathbb Z[X]$ has degree $84$ and is irreducible over $\mathbb Q$. The obtained result shows that the complex roots of $g _{88}(X)$ are not algebraic integers (note that the leading coefficient of $g _{88}(X)$ equals $2$), which implies gcd$(g _{88}(X), f _{n}(X)) = {\rm gcd}(f _{88}(X), f _{n}(X)) = 1$, for every odd $n \in \mathbb N$ divisible by $3$. \par It would be of interest to know whether the polynomials $f _{6n}(X)$, $n \in \mathbb N$, are $\mathbb Q$-irreducible, and whether this can be obtained by applying Proposition \ref{prop4.1}. \end{rema} \par \medskip \begin{coro} \label{coro4.3} The polynomials $f _{88}(X)$ and $f _{n}(X) \in \mathbb Z[X]$ are relatively prime, for each $n \in \mathbb N$ divisible by $3$ and less than $100$. \end{coro} \par \medskip \begin{proof} In view of (2.1) (b) and Remark \ref{rema4.2}, one may consider only the case of $6 \mid n$. Then our conclusion follows from Propositions \ref{prop3.1}, \ref{prop3.2} and Remark \ref{rema4.2}. \end{proof} \par \medskip Statements (2.1) (d), (2.2) and Remark \ref{rema4.2}, combined with Propositions \ref{prop3.1}, \ref{prop3.2} and Remark \ref{rema2.2}, lead to the conclusion that gcd$(f _{p}(X), f _{n}(X)) = 1$, for every $n \in \mathbb N$ with $6 \mid n$ and $n \le 100$, and for each prime $p < 100$. It is worth noting that there are $25$ prime numbers less than $100$. The set of primary composite odd numbers less than $100$ consists of $9$, $25$, $27$, $49$ and $81$. \par \medskip \section{\bf Polynomials of odd orders} \par \medskip Our next step towards the proof of Theorem \ref{theo1.1} aims at showing that \par\noindent gcd$(f _{m}(X), f _{n}(X)) = 1$, provided that $m, n \le 100$ and $m$ is an odd primary number. In view of the observations at the end of Section 4, one may consider only the case where $m$ is a power of a prime $p \in \{3, 5, 7\}$. This part of our proof relies on (2.1) (b) and the following result.
\par \medskip \begin{prop} \label{prop5.1} Let $p$ be a prime number and $\alpha _{n}$ a root of the polynomial $f _{p^{n}}(X)$, for some $n \in \mathbb N$. Suppose that $\alpha _{n}$ is an algebraic integer and set $\varphi _{p^{n}}(X) = p^{-1}f _{p^{n}}(X)$. Then $\varphi _{p}(\alpha _{n})^{p^{n-1}}$ lies in the ideal $pO_{\mathbb Q(\alpha_{n})}$ of the ring $O_{\mathbb Q(\alpha_{n})}$ of algebraic integers in $\mathbb Q(\alpha _{n})$. \end{prop} \par \medskip Proposition \ref{prop5.1} can be deduced from Lemma \ref{lemm2.3} and the following lemma. \par \medskip \begin{lemm} \label{lemm5.2} In the setting of Lemma \ref{lemm2.3}, when $n \ge 2$, the integers $p^{-1}\binom{p}{j_{0}}$ and $p^{-1}\binom{p^{n}}{p^{n-1}j_{0}}$ are congruent modulo $p$, for each $j_{0} \in \mathbb N$, $j_{0} < p$. \end{lemm} \par \medskip \begin{proof} It follows from the equality $C[p^{n}, pu] = \binom{p^{n-1}}{u}$, where $u$ is an integer with $1 \le u < p^{n-1}$, that $\binom{p^{n}}{pu} = \binom{p^{n-1}}{u}.u_{p,n}$, for some element $u_{p,n}$ of the local ring $\mathbb Z_{(p)} = \{r/s: r, s \in \mathbb Z, \ p$ {\rm does not divide} $s\}$, such that $u_{p,n} - 1 \in p\mathbb Z_{(p)}$. This enables one to obtain step-by-step that $p^{-1}\binom{p}{j_{0}} \equiv p^{-1}\binom{p^{\nu}}{p^{\nu -1}j_{0}} ({\rm mod} \ p\mathbb Z_{(p)})$, $\nu = 1, \dots , n$, and so to prove Lemma \ref{lemm5.2}. \end{proof} \par \medskip The proofs of the following results rely on the explicit definitions of the polynomials $f _{3}(X)$, $f _{5}(X)$ and $f _{7}(X)$ (see (2.3)). We also need Proposition \ref{prop5.1}. \par \medskip \begin{coro} \label{coro5.3} We have gcd$(f _{3^{m}}(X), f _{n}(X)) = 1$ whenever $m, n \in \mathbb N$ and $2 \mid n$. \end{coro} \par \medskip \begin{proof} Let $\alpha _{m} \in \mathbb C$ be a root of both $f _{3^{m}}(X)$ and $f _{n}(X)$, and $P _{3}$ be a maximal ideal of $O_{\mathbb Q(\alpha _{m})}$, such that $3 \in P _{3}$.
Then $\alpha _{m} \in O_{\mathbb Q(\alpha _{m})}$, so it follows from Proposition \ref{prop5.1} and equality (2.3) (h) that $\alpha _{m} ^{2} + \alpha _{m} \in P _{3}$, whence, $\alpha _{m} \in P _{3}$ or $\alpha _{m} + 1 \in P _{3}$. On the other hand, it is easy to see that $f _{n}(0)$ and $f _{n}(-1)$ are integers not divisible by $3$, which implies $f _{n}(\alpha _{m}) \notin P _{3}$. This conclusion, however, contradicts the assumption that $f _{n}(\alpha _{m}) = 0$, so Corollary \ref{coro5.3} is proved. \end{proof} \par \medskip \begin{coro} \label{coro5.4} The equalities gcd$(f _{5^{m}}(X), f _{n}(X)) = {\rm gcd}(f _{7^{m}}(X), f _{n}(X)) = 1$ hold, if $m, n \in \mathbb N$ and $n$ is divisible by $6$. \end{coro} \par \medskip \begin{proof} It is verified by straightforward calculations that $f _{n}(0) = f _{n}(-1) = 2$ and $f _{n}(\varepsilon _{3}) = 3$, for each primitive cubic root of unity $\varepsilon _{3} \in \mathbb C$. None of these values is divisible by $5$ or $7$, so it is not difficult to deduce (in the spirit of the proof of Corollary \ref{coro5.3}) from Proposition \ref{prop5.1} and the definitions of $f _{5}(X)$ and $f _{7}(X)$ that $f _{n}(X)$ has a common root with neither $f _{5^{m}}(X)$ nor $f _{7^{m}}(X)$. \end{proof} \par \medskip The following result proves the equality gcd$(f _{m}(X), f _{n}(X)) = 1$ in the case where $m$ is odd, $m < 99$, $m$ has precisely two different prime divisors, and $m$ is not divisible by $9$. Here we note that $45$, $63$ and $99$ are precisely the odd numbers less than $100$ which are divisible by $9$ and have exactly two different prime divisors. \par \medskip \begin{coro} \label{coro5.5} Let $q \in \{3, 5, 7\}$ and $p$ be a prime number different from $2$, $3$ and $q$. Then gcd$(f _{qp^{\nu }}(X), f _{n}(X)) = 1$ whenever $\nu \in \mathbb N$, $n \in \mathbb N$ and $6$ divides $qn$.
\end{coro} \par \medskip \begin{proof} This can be obtained by proceeding via reduction modulo $p$ and arguing as in the proofs of Corollaries \ref{coro5.3} and \ref{coro5.4}. We omit the details. \end{proof} \par \medskip The next two statements prove that gcd$(f _{m}(X), f _{n}(X)) = 1$, if $m \in \{40, 80\}$. \par \medskip \begin{coro} \label{coro5.6} The polynomials $f _{8.5^{k}}(X)$ and $f _{n}(X)$ satisfy the equality \par\noindent {\rm gcd}$(f _{8.5^{k}}(X), f _{n}(X)) = 1$ whenever $k \in \mathbb N$, $n \in \mathbb N$, $3 \mid n$ and $n \le 100$. \end{coro} \par \medskip \begin{proof} Proposition \ref{prop3.5} allows us to consider only the case of $n \equiv 6 ({\rm mod} \ 24)$. This amounts to assuming that $n$ equals $6$, $30$, $54$ or $78$. Then our calculations show that $2 + \sqrt{2}$, $2 - \sqrt{2}$, $1 + 2\sqrt{2}$, $1 - 2\sqrt{2}$, $-2 - 2\sqrt{2}$ and $-2 + 2\sqrt{2}$ are roots of $\bar f _{n}(X)$, which implies gcd$(\bar f _{8.5^{k}}(X), \bar f _{n}(X)) \neq 1$. Since, however, $f _{n}(X)$ is $\mathbb Q$-irreducible, for each admissible $n$ (see Proposition \ref{prop3.2} and Remark \ref{rema4.2}), it follows from Proposition \ref{prop3.1} (a) that gcd$(f _{8.5^{k}}(X), f _{n}(X)) = 1$, as required. \end{proof} \par \medskip \begin{coro} \label{coro5.7} The polynomials $f _{80}(X)$ and $f _{n}(X) \in \mathbb Z[X]$ are relatively prime, for every $n \in \mathbb N$ divisible by $3$ and less than $100$. \end{coro} \par \medskip \begin{proof} By Proposition \ref{prop3.7}, we have gcd$(f_{80}(X), f_{n}(X)) = 1$ in the case where $n \in \mathbb N$ is odd and divisible by $3$. Note further that the same equality holds under the condition that $n \in \mathbb N$, $6 \mid n$ and $n < 100$. If $n > 80$, i.e. $n = 84, 90$ or $96$, this follows from Proposition \ref{prop3.2} and Remark \ref{rema2.2}. When $n \le 80$, our assertion can be deduced from Propositions \ref{prop3.1}, \ref{prop3.2} and Remark \ref{rema4.2}.
\end{proof} \par \medskip Summing up the obtained results, one concludes that Theorem \ref{theo1.1} will be proved, if we show that gcd$(f _{m}(X), f _{n}(X)) = 1$, provided that $n \in \mathbb N$ is even, $n \le 100$, and $m \in \{45, 63, 99\}$. We achieve this goal on a case-by-case basis. \par \medskip \begin{coro} \label{coro5.8} For each even $n \in \mathbb N$, {\rm gcd}$(f _{45}(X), f _{n}(X)) = 1$. \end{coro} \par \medskip \begin{proof} Let $\bar f _{n}$ be the reduction of $f _{n}$ modulo $5$, for each $n \in \mathbb N$. With this notation, we have $\bar f_{45}(X) = \bar f_{9}(X)^{5}$ and \par\noindent $\bar f _{9}(X) = 4X(X + 1)(X - 1)^{2}(X - 2)^{2}(X - 3)^{2}$. On the other hand, it is easily verified that $\bar f _{n}(0) = \bar f _{n}(-1) = 2$ and $\bar f _{n}(j) \in \{-1, 1, 3\} \subseteq \mathbb Z/5\mathbb Z$, $j = 1, 2, 3$. Hence, $\bar f _{45}(X)$ and $\bar f _{n}(X)$ have no common root, which yields the claimed equality. \end{proof} \par \medskip \begin{coro} \label{coro5.9} The equality {\rm gcd}$(f _{63}(X), f _{n}(X)) = 1$ holds, for any even number $n > 0$ at most equal to $100$. \end{coro} \par \medskip \begin{proof} Suppose first that $6 \mid n$. Then our assertion can be proved by using Remark \ref{rema2.2} and combining Proposition \ref{prop3.1} (a) with Proposition \ref{prop3.2} and Remark \ref{rema4.2}. It remains to consider the case where $3 \nmid n$. When $n = 70$, our conclusion follows from Remark \ref{rema3.8}, and in case $n \neq 70$, the equality gcd$(f _{63}(X), f _{n}(X)) = 1$ is implied by Propositions \ref{prop3.3}, \ref{prop3.4} and \ref{prop3.7}. \end{proof} \par \medskip We are now in a position to complete the proof of Theorem \ref{theo1.1}. Note first that every $t \in \mathbb N$ with at least $4$ distinct prime divisors is greater than $100$; also, $70$ is the unique natural number less than $100$ and not divisible by $3$, which has $3$ distinct prime divisors.
Therefore, the preceding assertions lead to the conclusion that it is sufficient to prove the equality gcd$(f _{99}(X), f _{n}(X)) = 1$, for every even $n \in \mathbb N$, $n \le 100$ (the question of whether gcd$(f _{99}(X), f _{n}(X)) = 1$ whenever $n \in \mathbb N$ and $2 \mid n$, is open). Suppose first that $6 \mid n$. Then the claimed equality follows from the fact that the roots of $f _{n}(X)$ are not algebraic integers, whereas the common roots of $f _{99}(X)$ and $f _{n}(X)$ (if any) must be algebraic integers. Henceforth, we assume that $n \le 100$ and $n$ is not divisible by $3$. Applying Propositions \ref{prop3.3}, \ref{prop3.4} and \ref{prop3.7}, one reduces the rest of the proof of Theorem \ref{theo1.1} to its implementation in the special case where $n = 70$. Denote by $\bar f _{m}$ the reduction of $f _{m}$ modulo $7$, for each $m \in \mathbb N$. Note that $\bar f _{70}(X) = \bar f _{10}(X)^{7}$ and $\bar f _{10}(X) = (X ^{2} + X + 1)^{2}\bar g _{10}(X)$, where $\bar g _{10}(X) \in (\mathbb Z/7\mathbb Z)[X]$ is a degree $6$ polynomial decomposing into a product of three pairwise distinct quadratic polynomials lying in $(\mathbb Z/7\mathbb Z)[X]$ and irreducible over $\mathbb Z/7\mathbb Z$. Suppose now that gcd$(f _{99}(X), f _{70}(X)) \neq 1$ and fix a common root $\beta \in \mathbb C$ of $f _{99}(X)$ and $f _{70}(X)$. Then $\beta $ is an algebraic integer, and by the preceding observations on $\bar g _{10}(X)$, $\hat \beta ^{48} = 1$. Here $\hat \beta $ stands for the residue class of $\beta $ modulo some prime ideal $P$ in the ring $O_{\mathbb Q(\beta )}$ of algebraic integers in $\mathbb Q(\beta )$, chosen so that $7 \in P$. It is clear from the equality $\hat \beta ^{48} = 1$ that $\bar f _{99}(\hat \beta ) = (\hat \beta + 1)^{3} - \hat \beta ^{3} - 1 = 3(\hat \beta + 1)\hat \beta \neq 0$, unless $\hat \beta \in \{0, -1\}$.
On the other hand, the assumption that $f _{99}(\beta ) = f _{70}(\beta ) = 0$ requires that $\bar f _{99}(\hat \beta ) = 0$, which is possible only in case $\hat \beta \in \{0, -1\}$; this, however, is ruled out by the equalities $\bar f _{70}(0) = \bar f _{70}(-1) = 2^{7} \neq 0$ in $\mathbb Z/7\mathbb Z$. The obtained contradiction is due to the hypothesis that $f _{99}(\beta ) = f _{70}(\beta ) = 0$. Thus it follows that gcd$(f _{99}(X), f _{70}(X)) = 1$, which completes the proof of the equality gcd$(f _{99}(X), f _{n}(X)) = 1$, for each even $n \in \mathbb N$, $n \le 100$. Theorem \ref{theo1.1} is proved. \par \medskip \section{\bf Appendix} \par \medskip After posting the first version of this preprint, Junzo Watanabe informed me that he had confirmed the following by a program written in Mathematica: \par \medskip (0) $f_n(x)$ is irreducible, if $n = 6m$, for all $m = 1, \dots , 100$. \par \medskip (1) $f_n(x) / f_7(x)$ is irreducible, if $n = 6m + 1$, for all $m = 1, \dots , 100$. \par \medskip (2) $f_n(x) / f_2(x)$ is irreducible, if $n = 6m + 2$, for all $m = 1, \dots , 100$. \par \medskip (3) $f_n(x) / f_3(x)$ is irreducible, if $n = 6m + 3$, for all $m = 1, \dots , 100$. \par \medskip (4) $f_n(x) / f_4(x)$ is irreducible, if $n = 6m + 4$, for all $m = 1, \dots , 100$. \par \medskip (5) $f_n(x) / f_5(x)$ is irreducible, if $n = 6m + 5$, for all $m = 1, \dots , 100$. \par \medskip\noindent In view of (2.1) (c), (d) and Proposition \ref{prop3.1} (a), this confirmation gives an affirmative answer to (1), for pairs of distinct positive integers at most equal to $605$. It also answers in the affirmative the question posed at the end of Remark \ref{rema4.2}, for $n = 1, \dots , 100$. \par \vskip0.2truecm \emph{Acknowledgement.} A considerable part of the research presented in this note was done during my visits to Tokai University, Hiratsuka, Japan, in 2008/2009 and 2012. I would like to thank my colleagues at the Department of Mathematics for their hospitality. My thanks are also due to my host, Professor Junzo Watanabe, for drawing my attention to various aspects of Conjecture (2) and Question (1).
\medskip
\section{Introduction} Nash equilibrium (NE) is one of the most important concepts in game theory and captures a wide range of phenomena in engineering, economics, and finance~\cite{facchinei2007finite}. Consider a stochastic Nash game with $N$ players, each with an associated strategy set $X_i$ and a cost function $f_i(\bullet)$. Player $i$'s objective is to determine, for any collection of arbitrary strategies of the other players, i.e., $x^{(-i)}$, an optimal strategy $x^{(i)}$ that solves the stochastic minimization problem \begin{align}\label{nash} &\hbox{minimize} \qquad \mathbb{E}\left[f_i\left(\left(x^{(i)};x^{(-i)}\right),\xi\right)\right],\\ & \hbox{subject to} \qquad x^{(i)} \in X_i,\notag \end{align} where $f_i\left(\left(x^{(i)};x^{(-i)}\right),\xi\right)$ denotes the random cost function of the $i$th player that is parameterized in terms of the action $x^{(i)}$ of the player, the actions of the other players, denoted by $x^{(-i)}$, and a random variable $\xi$ representing the state of the game. An NE is characterized by observing that in a {\it stable} game, no player can lower their cost by unilaterally changing their action within their designated strategy set. If we define $X\triangleq \prod_{i=1}^N X_i$ and $G(x)\triangleq \left(\mathbb E\left[\nabla_{x^{(i)}}f_i(x,\xi)\right]\right)_{i=1}^N$, then problem \eqref{nash} is equivalent to a stochastic variational inequality (SVI) problem, denoted by $\mbox{SVI}(X,G)$, where the goal is to find an $x^*\in X$ such that $(y-x^*)^T G(x^*)\geq 0$ holds for all $y\in X$. We let SOL$(X,G)$ denote the solution set of $\mbox{SVI}(X,G)$. In many behavioral systems, the players are self-motivated and aim to optimize an individual objective function. As a result, the global performance of the system might be worse than in the case where a central authority can simply dictate a solution. One of the popular measures of the inefficiency of equilibria of a game is the \emph{price of stability} (PoS) \cite{nisan2007tardos}.
In particular, the PoS is defined as the ratio between the minimal cost over the set of Nash equilibria and the optimal cost, i.e., \begin{tcolorbox} \vspace{-0.1in} \begin{align}\label{pos}\mbox{PoS} \ \triangleq \ \frac{\min_{x\in {\rm SOL}(X,\mathbb E[F(\bullet,\xi)])}\mathbb E[f(x,\xi)]}{\min_{x\in X}\mathbb E[f(x,\xi)]}, \end{align} \end{tcolorbox} where $f:\mathbb R^n\times \mathbb{R}^d\to\mathbb R$ denotes the cost function, $X \subseteq \mathbb{R}^n$, $F:X\times \mathbb{R}^d\to \mathbb{R}^n$ is a real-valued mapping, and $\xi: \Omega\to\mathbb{R}^d$ denotes a random variable associated with the probability space $(\Omega, \mathcal{F},\mathbb{P})$. To estimate the PoS with a guaranteed confidence interval, we first need to evaluate the numerator of the right-hand side of \eqref{pos}, which involves a stochastic optimization problem with an SVI constraint. This leads to a challenging optimization problem. To address this challenge, our goal is to employ stochastic approximation (SA) schemes. SA is an iterative scheme used for solving problems in which the objective function is estimated via noisy observations. In the context of optimization problems, the function values and/or higher-order information are estimated from noisy samples in a Monte Carlo simulation procedure \cite{broadie2014multidimensional}. The SA scheme, first introduced by Robbins and Monro \cite{robbins51sa}, has been studied extensively over the past few decades and has been extended to incorporate averaging \cite{Polyak92acceleration}, nonsmoothness \cite{Farzad1}, variational inequality problems \cite{juditsky2011solving}, and Nash games \cite{koshal12regularized}. Most of the aforementioned approaches considered the cases where the functional constraints are in the form of inequalities, equalities, or easy-to-project sets.
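To make the SA idea concrete, the following minimal sketch (hypothetical, and not the scheme developed in this paper) applies a projected Robbins--Monro iteration with diminishing stepsizes to the toy problem $\min_{x\in[0,1]}\mathbb E[(x-\xi)^2]$; the noise model, stepsize choice, and all names below are illustrative assumptions of ours:

```python
import random

def projected_sa(steps=4000, seed=0):
    """Projected Robbins-Monro iteration for min over [0, 1] of E[(x - xi)^2],
    with xi = 0.7 + uniform(-0.5, 0.5) noise; the minimizer is x* = 0.7."""
    rng = random.Random(seed)
    x = 0.0
    for k in range(steps):
        xi = rng.uniform(0.2, 1.2)      # noisy sample with mean 0.7
        grad = 2.0 * (x - xi)           # stochastic gradient of (x - xi)^2
        x -= grad / (2.0 * (k + 1))     # diminishing stepsize gamma_k = 1/(2(k+1))
        x = min(max(x, 0.0), 1.0)       # Euclidean projection onto X = [0, 1]
    return x
```

With this particular stepsize the iterate essentially reduces to a projected running average of the samples, which illustrates why SA approaches the minimizer as the number of noisy observations grows.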
However, more complex problems emerging from recent advances of technologies in various areas, such as Nash games, machine learning, transportation, and control, lead to a pressing need for developing new iterative methods that can handle optimization problems with constraints complicated by VIs. To this end, in this paper, our primary interest lies in solving the following stochastic optimization problem, whose constraint set is characterized as the solution set of an SVI problem: \begin{tcolorbox} \vspace{-0.15in} \begin{align}\label{prob:sopt_svi} &\hbox{minimize} \qquad \mathbb{E}[f(x,\xi)]\\ & \hbox{subject to} \qquad x \in \mbox{SOL}(X, \mathbb{E}[F(\bullet,\xi)]),\notag \end{align} \end{tcolorbox} \noindent where $f:\mathbb R^n\times \mathbb{R}^d\to\mathbb R$ is a convex function and $X \subseteq \mathbb{R}^n$ is the Cartesian product of the component sets $X_i \subseteq \mathbb{R}^{n_i}$, where $\sum_{i=1}^N n_i =n$, i.e., $X \triangleq \prod\nolimits_{i=1}^NX_i$. We let $F(\bullet,\bullet)\triangleq \left(F_i(\bullet,\bullet)\right)_{i=1}^N$, where $F_i:\mathbb R^n\times \mathbb{R}^d\to \mathbb R^{n_i}$ for any $i\in [N]\triangleq \{1,\ldots,N\}$. For ease of presentation, throughout we define $f(x) \triangleq \mathbb{E}[f(x,\xi)]$ and $F(x) \triangleq \mathbb{E}[F(x,\xi)]$. The variational inequality problem has been widely studied in the literature due to its versatility in describing a wide range of problems, including optimization, equilibrium, and complementarity problems, amongst others \cite{facchinei2007finite}.
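For intuition about VI constraints of this type, the following self-contained sketch (the map, box set, stepsize, and iteration count are illustrative choices of ours, not taken from this paper) solves a small deterministic instance with the extra-gradient iteration recalled below: the merely monotone rotation map $F(x) = (x_2, -x_1)$ over $X = [-1,1]^2$, for which the VI solution set is $\{(0,0)\}$:

```python
def proj_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box X = [lo, hi]^2 (componentwise clipping)."""
    return [min(max(v, lo), hi) for v in x]

def F(x):
    """Skew-symmetric, hence merely (not strongly) monotone map."""
    return [x[1], -x[0]]

def extra_gradient(x, gamma=0.5, iters=300):
    for _ in range(iters):
        Fx = F(x)
        y = proj_box([x[i] - gamma * Fx[i] for i in range(2)])  # extrapolation step
        Fy = F(y)
        x = proj_box([x[i] - gamma * Fy[i] for i in range(2)])  # correction step
    return x

solution = extra_gradient([1.0, 1.0])   # approaches the VI solution (0, 0)
```

On this map the plain projected iteration $x \leftarrow \mathcal P_X(x - \gamma F(x))$ does not approach the solution, since the vector field merely rotates the iterate, while evaluating $F$ at the extrapolated point makes the two-step iteration contract; this is the weaker-assumption advantage of extra-gradient schemes referred to next.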
The extra-gradient method, initially proposed by Korpelevich \cite{korpelevich1976extragradient}, and its extensions \cite{censor2011subgradient,censor2012extensions,chen2017accelerated,iusem2011korpelevich,juditsky2011solving,nemirovski2004prox,yousefian2014optimal,yousefian2018stochastic} are classical methods for solving VI problems that require weaker assumptions than their gradient counterparts \cite{bertsekas1997nonlinear,sibony1970methodes}. In the stochastic regime, one of the earliest schemes for resolving stochastic variational inequalities via stochastic approximation was presented by Jiang and Xu \cite{jiang2008stochastic} for strongly monotone maps. Regularized variants were developed by Koshal et al. \cite{koshal12regularized} for merely monotone regimes, while Lipschitzian requirements were weakened by combining local smoothing with regularization in \cite{yousefian2013regularized,yousefian2017smoothing}. In the absence of regularization, extra-gradient approaches that rely on two projections per iteration provide an avenue for resolving merely monotone problems \cite{jalilzadeh2019proximal}. The per-iteration complexity can be reduced to a single projection via projected reflected gradient and splitting techniques, as examined in \cite{cui2016analysis,cui2021analysis} (see also Hsieh et al. \cite{hsieh2019convergence}). When the assumption on the mapping is relaxed to pseudomonotonicity and its variants, rate statements have been provided in \cite{iusem2017extragradient,kannan2014pseudomonotone,kannan2019optimal} via a stochastic extra-gradient framework. Despite these advances in VIs and SVIs, solving problem \eqref{prob:sopt_svi} remains challenging. One main approach to solving \eqref{prob:sopt_svi}, when the constraint set is the solution set of a deterministic VI and the objective function is also deterministic, is the sequential regularization (SR) approach, which is a two-loop framework (see \cite{facchinei2007finite}).
In each iteration of the SR scheme, a regularized VI is required to be solved, and convergence has been shown under the monotonicity of the mapping $F$ and the closedness and convexity of the set $X$. However, the iteration complexity of the SR algorithm is unknown, and it requires solving a series of increasingly more difficult VI problems. To resolve these shortcomings, most recently, Kaushik and Yousefian \cite{kaushik2021method} developed a more efficient first-order method called the averaging randomized block iteratively regularized gradient method. Non-asymptotic suboptimality and infeasibility convergence rates of $\mathcal O(1/K^{0.25})$ have been obtained, where $K$ is the total number of iterations. Here, we consider a more general problem with a stochastic objective function and a stochastic VI constraint. Using a novel iterative penalization technique, we propose an extra-(sub)gradient-based method and derive the same convergence results as in \cite{kaushik2021method}, despite the presence of stochasticity in both levels of the problem. {\bf Main contributions.} In this paper, we study a stochastic optimization problem with a nonsmooth and convex objective function and a monotone SVI constraint. We develop an efficient first-order method based on the extra-gradient method with block coordinate updates, called the Averaging Randomized Iteratively Penalized Stochastic Extra-Gradient Method (aR-IP-SeG). We establish convergence rates for the proposed method in terms of suboptimality and infeasibility. In particular, in Theorem \ref{prop:bounds}, we obtain an iteration complexity of the order $\epsilon^{-4}$, which appears to be the best known result for this class of constrained stochastic optimization problems. Moreover, combining the proposed extra-(sub)gradient-based method with a variance reduction technique, we derive a confidence interval with lower and upper errors of the order $\frac{1}{\sqrt{k}}$ and $\frac{1}{\sqrt[4]{k}}$, respectively.
Such guarantees appear to be new for estimating the PoS (see Theorem \ref{prop:pos_CI}). {\bf Outline of the paper.} Below, we introduce the notation that we use throughout the paper. In the next section, we precisely state the main definitions and assumptions that we need for the convergence analysis. In Section \ref{sec:alg}, we describe the aR-IP-SeG algorithm for solving problem \eqref{prob:sopt_svi}, and the complexity analysis is provided in Section \ref{sec:rate}. Additionally, in Section \ref{sec:pos}, we propose the VRE-PoS-RSeG algorithm to estimate the price of stability in \eqref{pos} with a guaranteed confidence interval. Finally, some empirical experiments are presented in Section \ref{sec:num} to show the benefit of our proposed scheme in comparison with competing schemes for estimating the PoS for a stochastic Nash Cournot competition over a network. {\bf Notation.} $\mathbb E[\bullet]$ denotes the expectation with respect to all the probability distributions under study. We use filtrations to take conditional expectations with respect to a subgroup of probability distributions. We denote the optimal objective value of problem~\eqref{prob:sopt_svi} by $f^*$. The Euclidean projection of a vector $x$ onto a convex set $X$ is denoted by $\mathcal P_X(x)$, where $\mathcal P_X(x)\triangleq \mbox{argmin}_{y\in X}\|y-x\|$. Throughout the paper, unless specified otherwise, $k$ denotes the iteration counter, while $K$ represents the total number of steps employed in the proposed methods. \section{Preliminaries and Background}\label{sec:assump} Throughout the paper, we consider the following assumptions on the map $F$, the objective function $f$, and the set $X$ in problem \eqref{prob:sopt_svi}. \begin{assumption}[Problem properties]\label{assum:problem} Consider problem \eqref{prob:sopt_svi}. Let the following hold. \noindent (i) Mapping $F(\bullet):\mathbb{R}^n \to \mathbb{R}^{n}$ is single-valued, continuous, and merely monotone on its domain.
\noindent (ii) Function $f(\bullet):\mathbb{R}^n \to \mathbb{R}$ is real-valued and merely convex on its domain. \noindent (iii) Set $X \subseteq \mbox{int}\left(\mbox{dom}(F)\cap\mbox{dom}(f)\right)$ is nonempty, compact, and convex. \end{assumption} \begin{remark}\label{rem:bounds} In view of Assumption \ref{assum:problem}, the subdifferential set $\partial f(x)$ is nonempty for all $x \in \mbox{int}(\mbox{dom}(f))$. Also, $f$ has bounded subgradients over $X$. Throughout, we let the scalars $D_X$ and $D_f$ be defined as $D_X\triangleq \sup_{x \in X} \|x\|$ and $D_f\triangleq \sup_{x \in X} |f(x)|$, respectively. Also, we let $C_F>0$ and $C_f>0$ be scalars such that $\|F(x)\|\leq C_F$ and $\|\tilde \nabla f(x)\|\leq C_f$ for all $\tilde \nabla f(x)\in \partial f(x)$ and all $x \in X$. \end{remark} \begin{definition}\label{def:hist} We denote the history of the method by $\mathcal{F}_k$ for $k \geq 0$, defined as \begin{align*} \mathcal{F}_k \triangleq \cup_{t=0}^k\{\tilde \xi_t,\tilde i_t, \xi_t, i_t\} \cup \{x_0,y_0\}. \end{align*} \end{definition} Next, we define the errors associated with the stochastic approximation of the objective function $f$ and the operator $F$, and with block coordinate sampling. In particular, we use the notations $w_{\bullet,k}$, $\tilde w_{\bullet,k}$ for the errors of the stochastic approximations involved in iteration $k$, and $e_{\bullet,k}$, $\tilde e_{\bullet,k}$ for the errors of block coordinate sampling.
\begin{definition}[Stochastic errors]\label{def:errs} For all $k \geq 0$ we define \begin{align*} & \tilde{w}_{F,k} \triangleq F(x_k,\tilde \xi_k) - F(x_k), \qquad \tilde{w}_{f,k} \triangleq \tilde \nabla f(x_k,\tilde \xi_k) -\tilde \nabla f(x_k),\\ & {w}_{F,k} \triangleq F(y_{k+1}, \xi_k) - F(y_{k+1}), \qquad {w}_{f,k} \triangleq \tilde \nabla f(y_{k+1}, \xi_k) -\tilde \nabla f(y_{k+1}),\\ & \tilde{e}_{F,k} \triangleq N\mathbf{U}_{\tilde i_k}F_{\tilde i_k}(x_k,\tilde \xi_k) - F(x_k,\tilde \xi_k), \qquad \tilde{e}_{f,k} \triangleq N\mathbf{U}_{\tilde i_k}\tilde \nabla_{\tilde i_k} f(x_k,\tilde \xi_k) -\tilde \nabla f(x_k,\tilde \xi_k) ,\\ & {e}_{F,k} \triangleq N\mathbf{U}_{ i_k}F_{ i_k}(y_{k+1},\xi_k) - F(y_{k+1}, \xi_k), \qquad {e}_{f,k} \triangleq N\mathbf{U}_{ i_k}\tilde \nabla_{ i_k} f(y_{k+1}, \xi_k) -\tilde \nabla f(y_{k+1}, \xi_k), \end{align*} where $\mathbf{U}_\ell \in \mathbb{R}^{n\times n_\ell}$ for $\ell \in [N]$ such that $[\mathbf{U}_1,\ldots,\mathbf{U}_N]=\mathbf{I}_n$ where $\mathbf{I}_n$ denotes the $n \times n$ identity matrix. \end{definition} Next, we impose a requirement on the conditional bias and the conditional second moment on the sampled subgradient $\tilde \nabla f(\bullet,\bullet)$ and sampled map $F(\bullet,\bullet)$ produced by the oracle. \begin{assumption}[Random samples]\label{assum:rnd_vars} \noindent (a) The random samples $\tilde \xi_k$ and $\xi_k$ are i.i.d., and $\tilde i_k$ and $i_k$ are i.i.d. from the range $\{1,\ldots,N\}$. Also, all these random variables are independent from each other. \noindent (b) For all $k\geq 0$ the stochastic mappings $F(\bullet,\tilde \xi_k)$ and $F(\bullet,\xi_k)$ are both unbiased estimators of $F(\bullet)$. Similarly, { $\tilde \nabla f(\bullet,\tilde \xi_k)$ and $\tilde \nabla f(\bullet,\xi_k)$ are both unbiased estimators of $\tilde \nabla f(\bullet)$}. 
\noindent (c) For all $k\geq 0$, there exist $\nu_{F},\nu_{f}>0$ such that $\mathbb{E}[\|F(x,\xi) - F(x)\|^2 \mid x] \leq\nu_{F}^2$ and $\mathbb{E}[\|\tilde \nabla f(x, \xi) - \tilde \nabla f(x)\|^2 \mid x] \leq\nu_{f}^2$. \end{assumption} Based on the above definition and assumption, we state some standard properties of the errors. \begin{lemma}[Properties of stochastic approximation and random blocks]\label{lemma:prop_rnd_blcks} Consider the stochastic errors given by Definition \ref{def:errs}. Let Assumption \ref{assum:rnd_vars} hold. Then, the following statements hold almost surely for all $k \geq 0$. \noindent (a) $\mathbb{E}[\tilde{w}_{F,k}\mid \mathcal{F}_{k-1}]= \mathbb{E}[\tilde{w}_{f,k}\mid \mathcal{F}_{k-1}]=\mathbb{E}[{w}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k\}]= \mathbb{E}[{w}_{f,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k\}]=0$.\\ \noindent (b) $\mathbb{E}[\|\tilde{w}_{F,k}\|^2 \mid \mathcal{F}_{k-1}] \leq\nu_{F}^2$, $\mathbb{E}[\|\tilde{w}_{f,k}\|^2\mid \mathcal{F}_{k-1}] \leq \nu_{f}^2$, $\mathbb{E}[\|{w}_{F,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k\}]\leq \nu_{F}^2$, and $\mathbb{E}[\|{w}_{f,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k\}]\leq \nu_{f}^2$.\\ \noindent (c) $\mathbb{E}[\tilde{e}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}]= \mathbb{E}[\tilde{e}_{f,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}]=\mathbb{E}[{e}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k,\xi_k\}]= \mathbb{E}[{e}_{f,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k,\xi_k\}]=0$.\\ \noindent (d) $\mathbb{E}[\|\tilde{e}_{F,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}] = (N-1)\|F(x_k,\tilde \xi_k)\|^2$, $\mathbb{E}[\|\tilde{e}_{f,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}] = (N-1)\|\tilde{\nabla} f(x_k,\tilde \xi_k)\|^2$, $\mathbb{E}[\|{e}_{F,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k,\xi_k\}]=(N-1)\|F(y_{k+1},
\xi_k)\|^2$, and $\mathbb{E}[\|{e}_{f,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k,\xi_k\}] =(N-1)\|\tilde{\nabla} f(y_{k+1}, \xi_k)\|^2$. \end{lemma} \begin{proof} (a) Using the fact that $\tilde \nabla f(\bullet,\tilde \xi)$ and $F(\bullet,\tilde \xi)$ are unbiased estimators of $\tilde \nabla f(\bullet)$ and $F(\bullet)$, respectively, we have that $\mathbb{E}[\tilde{w}_{F,k}\mid \mathcal{F}_{k-1}]= \mathbb{E}[\tilde{w}_{f,k}\mid \mathcal{F}_{k-1}]=0$. Moreover, from Assumption \ref{assum:rnd_vars} (a), since the random samples $\tilde \xi_k$ and $\tilde i_k$ are independent of $\xi_k$, one can show that $\mathbb{E}[{w}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k\}]=\mathbb{E}[{w}_{f,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k\}]=0$.\\ (b) Using the same argument as in part (a) and using Assumption \ref{assum:rnd_vars} (c), the results follow. \\ (c) $\tilde e_{F,k}$ is the error of block coordinate sampling with index $\tilde i_k$, and since $\tilde\xi_k$ and $\tilde i_k$ are independent, one can show that $\mathbb E[N\mathbf{U}_{\tilde i_k}F_{\tilde i_k}(x_k,\tilde \xi_k) \mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}]={1\over N}\sum_{i=1}^N N\mathbf{U}_{i}F_{i}(x_k,\tilde \xi_k)=F(x_k,\tilde \xi_k)$, hence $\mathbb{E}[\tilde{e}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}]=0$. Similarly, one can show that $\mathbb{E}[\tilde{e}_{f,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}]=0$.
Moreover, using the same argument and the fact that $i_k$ is independent of $\tilde\xi_k$, $\tilde i_k$ and $\xi_k$, one can get $\mathbb{E}[{e}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k,\xi_k\}]= \mathbb{E}[{e}_{f,k}\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k,\tilde{i}_k,\xi_k\}]=0$.\\ (d) $\mathbb{E}[\|\tilde{e}_{F,k}\|^2\mid \mathcal{F}_{k-1}\cup\{\tilde \xi_k\}]= \|F(x_k,\tilde\xi_k)\|^2+N\sum_{i=1}^N \|\mathbf{U}_iF_i(x_k,\tilde\xi_k)\|^2-2\sum_{i=1}^N \|F_i(x_k,\tilde\xi_k)\|^2=(N-1)\|F(x_k,\tilde\xi_k)\|^2.$ The other equalities in part (d) can be shown using the same approach. \end{proof} \begin{corollary}\label{cor:exp_terms} Consider the stochastic errors given by Definition \ref{def:errs}. Let Assumption \ref{assum:rnd_vars} hold. Then, the following statements hold for all $k \geq 0$. \noindent (a) $\mathbb{E}[\tilde{w}_{F,k}]= \mathbb{E}[\tilde{w}_{f,k}]=\mathbb{E}[{w}_{F,k}]= \mathbb{E}[{w}_{f,k}]=0$.\\ \noindent (b) $\mathbb{E}[\|\tilde{w}_{F,k}\|^2 ] \leq\nu_{F}^2$, $\mathbb{E}[\|\tilde{w}_{f,k}\|^2 ] \leq \nu_{f}^2$, $\mathbb{E}[\|{w}_{F,k}\|^2 ]\leq \nu_{F}^2$, and $\mathbb{E}[\|{w}_{f,k}\|^2 ]\leq \nu_{f}^2$.\\ \noindent (c) $\mathbb{E}[\tilde{e}_{F,k} ]= \mathbb{E}[\tilde{e}_{f,k}]=\mathbb{E}[{e}_{F,k}]= \mathbb{E}[{e}_{f,k}]=0$.\\ \noindent (d) $\mathbb{E}[\|\tilde{e}_{F,k}\|^2] \leq (N-1)C_F^2$, $\mathbb{E}[\|\tilde{e}_{f,k}\|^2] \leq (N-1)C_f^2$, $\mathbb{E}[\|{e}_{F,k}\|^2]\leq(N-1)C_F^2$, and $\mathbb{E}[\|{e}_{f,k}\|^2] \leq(N-1)C_f^2$. \end{corollary} \begin{proof} Parts (a)-(c) follow by taking expectations of the results in parts (a)-(c) of Lemma \ref{lemma:prop_rnd_blcks}. Part (d) also follows by taking expectations in part (d) of Lemma \ref{lemma:prop_rnd_blcks} and using the bounds in Remark \ref{rem:bounds}.
\end{proof} \section{Algorithm Outline}\label{sec:alg} In this section, we propose an Averaging Randomized Iteratively Penalized Stochastic Extra-Gradient (aR-IP-SeG) method, presented in Algorithm \ref{algorithm:aR-IP-SeG}. In particular, at each iteration $k$, we select indices $\tilde i_k$ and $i_k$ uniformly at random and update only the corresponding blocks of the variables $y_k$ and $x_k$, respectively, by taking a step in the negative direction of the sampled partial subgradient $\tilde\nabla_i f(\bullet,\tilde \xi_k)$ and sampled partial map $F_i(\bullet, \tilde \xi_k)$ for $i=\tilde i_k$ (and, analogously, $\tilde\nabla_i f(\bullet,\xi_k)$ and $F_i(\bullet,\xi_k)$ for $i=i_k$). Then, we project the updated blocks onto the sets $X_{\tilde i_k}$ and $X_{i_k}$, respectively. Here, $\gamma_k$ and $\rho_k$ denote the stepsize and the penalization parameter, respectively. Finally, the output of the proposed algorithm is a weighted average of the generated sequence $\{y_k\}_k$. \begin{algorithm} \caption{Averaging Randomized Iteratively Penalized Stochastic Extra-Gradient Method (aR-IP-SeG)} \label{algorithm:aR-IP-SeG} \begin{algorithmic}[1] \STATE \textbf{initialization:} Set random initial points $x_0, y_0\in X$, a stepsize $\gamma_0>0$, a penalty parameter $\rho_0>0$, a scalar $0\leq r<1$, $\bar{y}_0= y_0$, and $\Gamma_0=0$. \FOR {$k=0,1,\ldots,K-1$} \STATE Generate $i_k$ and $\tilde i_k$ uniformly from $\{1,\ldots,N\}$. \STATE Generate $\xi_k$ and $\tilde \xi_k$ as realizations of the random vector $\xi$. \STATE Update the variables $y_k$ and $x_k$ as follows.
\begin{align} y_{k+1}^{(i)}&:= \left\{\begin{array}{ll}\mathcal{P}_{X_i}\left(x_{k}^{(i)}-\gamma_k(\tilde \nabla_i f(x_k,\tilde \xi_k) + \rho_k F_{i}(x_{k},\tilde \xi_k))\right) &\hbox{if } i=\tilde i_k,\cr \hbox{} &\hbox{}\cr x_k^{(i)}& \hbox{if } i\neq \tilde i_k,\end{array}\right.\label{eqn:y_k_update_rule} \\ &\hbox{} \notag\\ x_{k+1}^{(i)}&:=\left\{\begin{array}{ll}\mathcal{P}_{X_i}\left(x_{k}^{(i)} - \gamma_k(\tilde \nabla_i f(y_{k+1}, \xi_k) + \rho_k F_{i}(y_{k+1}, \xi_k))\right) &\hbox{if } i=i_k,\cr \hbox{} &\hbox{}\cr x_k^{(i)}& \hbox{if } i\neq i_k.\end{array}\right.\label{eqn:x_k_update_rule} \end{align} \STATE Update $\Gamma_k$ and $\bar y_{k}$ using the following recursions. \begin{align} &\Gamma_{k+1}:=\Gamma_k+(\gamma_k\rho_k)^r,\label{eqn:averaging_eq1}\\ &\bar y_{k+1}:=\frac{\Gamma_k \bar y_k+(\gamma_k\rho_k)^r y_{k+1}}{\Gamma_{k+1}}.\label{eqn:averaging_eq2} \end{align} \ENDFOR \STATE Return $\bar y_{K}$. \end{algorithmic} \end{algorithm} In the following lemma, we show that the update rules \eqref{eqn:y_k_update_rule} and \eqref{eqn:x_k_update_rule} in Algorithm \ref{algorithm:aR-IP-SeG} can be written compactly based on the full subgradient $\tilde \nabla f$ and map $F$ using Definition \ref{def:errs}. \begin{lemma}[Compact representation of the scheme]\label{lemma:compact_alg} Consider Algorithm~\ref{algorithm:aR-IP-SeG}. The update rules \eqref{eqn:y_k_update_rule} and \eqref{eqn:x_k_update_rule} can be compactly written as \begin{align*} &y_{k+1} = \mathcal{P}_X\left(x_k-N^{-1}\gamma_k\left(\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right)\right)\\ &x_{k+1} = \mathcal{P}_X\left(x_k-N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)\right). 
\end{align*} \end{lemma} \begin{proof} Note that $ \mathcal{P}_X(\bullet)=[ \mathcal{P}_{X_i}(\bullet)]_{i=1}^N$. Then, the update rule \eqref{eqn:y_k_update_rule} can be written as $$y_{k+1}=\mathcal{P}_X\left(x_k-\gamma_k\mathbf{U}_{\tilde i_k}\left(\tilde \nabla_{\tilde i_k} f(x_k,\tilde \xi_k) + \rho_k F_{\tilde i_k}(x_{k},\tilde \xi_k)\right)\right),$$ and the result follows using Definition \ref{def:errs}. Similarly, one can obtain the compact form of the update rule \eqref{eqn:x_k_update_rule}. \end{proof} In our analysis, we use the following properties of the projection mapping. \begin{lemma}[Properties of the projection mapping \cite{bertsekas2003convex}]\label{lemma:proj} Let $X \subseteq \mathbb{R}^n$ be a nonempty closed convex set. \noindent (a) $\|\mathcal{P}_X(u) - \mathcal{P}_X(v)\| \leq \|u-v\|$ for all $u,v \in \mathbb{R}^n$. \noindent (b) $\left(\mathcal{P}_X(u)-u\right)^T\left(x-\mathcal{P}_X(u)\right)\geq 0$ for all $u \in \mathbb{R}^n$ and $x \in X$. \end{lemma} We will adopt the following standard gap function to measure the quality of the solution generated by Algorithm \ref{algorithm:aR-IP-SeG}. \begin{definition}[The dual gap function \cite{marcotte1998weak}]\label{def:dual_gap} Let $X\subseteq \mathbb{R}^n$ be a nonempty closed set and $F:X\rightarrow\mathbb{R}^n$ be a single-valued mapping. Then, for any $x \in X$, the dual gap function $\mathrm{Gap}^*:X\rightarrow \mathbb{R}\cup \{+\infty\} $ is defined as $ \mathrm{Gap}^*(x) \triangleq \sup_{y\in X} F(y)^T(x-y)$. \end{definition} \begin{remark} Note that when $X \neq \emptyset$, the dual gap function is nonnegative over $X$. It is also known that when $F$ is continuous and monotone and $X$ is closed and convex, $\mathrm{Gap}^*(x^*)=0$ if and only if $x^* \in \mbox{SOL}(X,F)$ (cf.~\cite{juditsky2011solving}). \end{remark} \begin{lemma}[Bounds on the harmonic series~\cite{kaushik2021method}]\label{lemma:harmonic_bnds} Let $0\leq \alpha <1$ be a given scalar.
Then, for any integer $K \geq 2^{\frac{1}{1-\alpha}}$, we have \begin{align*} \frac{K^{1-\alpha}}{2(1-\alpha)} \leq \sum_{k=0}^{K-1}\frac{1}{(k+1)^\alpha}\leq \frac{K^{1-\alpha}}{1-\alpha}. \end{align*} \end{lemma} \section{Performance analysis}\label{sec:rate} In this section, we develop a rate and complexity analysis for the aR-IP-SeG method. We begin by showing that $\bar y_k$ generated by Algorithm \ref{algorithm:aR-IP-SeG} is a well-defined weighted average. \begin{lemma}[Weighted averaging]\label{lemma:ave} Let $\{\bar y_k\}$ be generated by Algorithm \ref{algorithm:aR-IP-SeG}. Let us define the weights $\lambda_{k,K} \triangleq \frac{(\gamma_k\rho_k)^r}{\sum_{j=0}^{K-1} (\gamma_j\rho_j)^r}$ for $k \in \{0,\ldots, K-1\}$ and $K\geq 1$. Then, for any $K\geq 1$, we have $\bar{y}_{K} = \sum_{k=0}^{K-1} \lambda_{k,K} y_{k+1}$. Also, when $X$ is a convex set, we have $\bar y_K \in X$. \end{lemma} \begin{proof} We employ induction to show $\bar{y}_{K} = \sum_{k=0}^{K-1} \lambda_{k,K} y_{k+1}$ for any $K\geq 1$. For $K=1$ we have \begin{align*} \sum_{k=0}^0 \lambda_{k,1} y_{k+1} = \lambda_{0,1}y_{1} = y_{1}, \end{align*} where we used $\lambda_{0,1}=1$. Also, from the equations \eqref{eqn:averaging_eq1}--\eqref{eqn:averaging_eq2} and the initialization $\Gamma_0 =0$, we have \begin{align*} &\bar y_{1}:=\frac{\Gamma_0 \bar y_0+(\gamma_0\rho_0)^r y_{1}}{\Gamma_{1}}=\frac{0+(\gamma_0\rho_0)^r y_{1}}{\Gamma_{0}+(\gamma_{0}\rho_0)^r} = y_{1}. \end{align*} The preceding two relations imply that the hypothesis statement holds for $K=1$. Next, suppose the relation holds for some $K\geq 1$.
From the hypothesis, equations \eqref{eqn:averaging_eq1}--\eqref{eqn:averaging_eq2}, and that $\Gamma_{K}=\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r$ for all $K\geq 1$, we have \begin{align*} \bar{y}_{K+1} &= \frac{\Gamma_K\bar{y}_K + (\gamma_K\rho_K)^r y_{K+1}}{\Gamma_{K+1}} = \frac{\left(\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r\right)\sum_{k=0}^{K-1} \lambda_{k,K} y_{k+1}+ (\gamma_K\rho_K)^r y_{K+1}}{\Gamma_{K+1}}\\ &= \frac{\sum_{k=0}^{K}(\gamma_k\rho_k)^r y_{k+1}}{\sum_{j = 0}^{K}(\gamma_j\rho_j)^r} = \sum_{k=0}^{K}\left(\tfrac{(\gamma_k\rho_k)^r }{\sum_{j = 0}^{K}(\gamma_j\rho_j)^r}\right)y_{k+1}= \sum_{k=0}^{K} \lambda_{k,K+1} y_{k+1}, \end{align*} implying that the induction hypothesis holds for $K+1$. Thus, we conclude that the averaging formula holds for {all} $K\geq 1$. Note that since $\sum_{k=0}^{K-1} \lambda_{k,K}=1$, under the convexity of the set $X$, we have $\bar y_K \in X$. This completes the proof. \end{proof} Next, we prove a one-step lemma that provides an upper bound for $F(y)^T(y_{k+1}-y) + \rho_k^{-1}(f(y_{k+1})-f(y))$ in terms of consecutive iterates and error terms; this bound later helps us obtain upper bounds for the suboptimality of the objective function and the gap function in Proposition \ref{prop:bounds}. \begin{lemma}[An error bound]\label{lemma:main_ineq} Consider Algorithm~\ref{algorithm:aR-IP-SeG} for solving problem~\eqref{prob:sopt_svi}. Let Assumptions~\ref{assum:problem} and~\ref{assum:rnd_vars} hold. Let the auxiliary stochastic sequence $\{u_k\}$ be defined recursively as $$u_{k+1}\triangleq \mathcal{P}_X\left(u_k+N^{-1}\gamma_k({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k})\right),$$ where $u_0:=x_0$.
Then for any arbitrary $y \in X$ and $k \geq 0$ we have \begin{align}\label{eqn:main_07} & (\gamma_k\rho_k)^rF(y)^T(y_{k+1}-y) + (\gamma_k\rho_k)^{r}\rho_k^{-1}(f(y_{k+1})-f(y)) \notag\\ & \leq 0.5N(\gamma_k\rho_k)^{r-1}\left(\|x_k-y\|^2 -\|x_{k+1}-y\|^2+\|u_k-y\|^2-\|u_{k+1}-y\|^2\right) \notag\\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}). \end{align} \end{lemma} \begin{proof} Let $y \in X$ and $k\geq 0$ be arbitrary fixed values. From Lemma~\ref{lemma:compact_alg} we have \begin{align}\label{eqn:main_01} \|x_{k+1}-y\|^2 &= \|x_{k+1}-x_k\|^2+\|x_k-y\|^2 +2(x_{k+1}-x_k)^T(x_k-y)\notag\\ & = \|x_{k+1}-x_k\|^2+\|x_k-y\|^2 +2(x_{k+1}-x_k)^T(x_k-x_{k+1})+2(x_{k+1}-x_k)^T(x_{k+1}-y)\notag\\ & = \|x_k-y\|^2-\|x_{k+1}-x_k\|^2 +2(x_{k+1}-x_k)^T(x_{k+1}-y), \end{align} where the first equality is obtained by adding and subtracting $x_k$, the second by adding and subtracting $x_{k+1}$, and the third by simplification. In view of Lemma \ref{lemma:proj} (b), setting $$u:= x_k-N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right),$$ and $x:=y$, and noting that $x_{k+1} =\mathcal{P}_X(u)$ by Lemma~\ref{lemma:compact_alg}, we have \begin{align*} &0 \leq \left(x_{k+1} - \left(x_k-N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)\right)\right)^T(y-x_{k+1})\\ \Rightarrow \ & (x_{k+1}-x_k)^T(y-x_{k+1}) \\ &\leq N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-x_{k+1}).
\end{align*} Combining the preceding inequality with \eqref{eqn:main_01} we obtain \begin{align*} \|x_{k+1}-y\|^2 & \leq \|x_k-y\|^2-\|x_{k+1}-x_k\|^2 \notag\\ &+2N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-x_{k+1}). \end{align*} Note that we have \begin{align}\label{eqn:aux_seq} \|x_{k+1}-x_k\|^2 &= \|x_{k+1}-y_{k+1}\|^2 +\|y_{k+1}-x_k\|^2 + 2(x_{k+1}-y_{k+1})^T(y_{k+1}-x_k). \end{align} From the two preceding relations we obtain \begin{align}\label{eqn:main_02} \|x_{k+1}-y\|^2 & \leq \|x_k-y\|^2-\|x_{k+1}-y_{k+1}\|^2 -\|y_{k+1}-x_k\|^2 - 2(x_{k+1}-y_{k+1})^T(y_{k+1}-x_k) \notag\\ &+2N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-x_{k+1}). \end{align} Next, we find an upper bound on the term $- 2(x_{k+1}-y_{k+1})^T(y_{k+1}-x_k)$. In view of Lemma \ref{lemma:proj} (b), setting $$u:= x_k-N^{-1}\gamma_k\left(\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right),$$ and $x:=x_{k+1}$, and noting that $y_{k+1} =\mathcal{P}_X(u)$ by Lemma~\ref{lemma:compact_alg}, we have \begin{align*} &0 \leq \left(y_{k+1} - \left(x_k-N^{-1}\gamma_k\left(\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right)\right)\right)^T(x_{k+1}-y_{k+1})\\ \Rightarrow \ & - (x_{k+1}-y_{k+1})^T(y_{k+1}-x_k) \\ &\leq N^{-1}\gamma_k\left(\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right)^T(x_{k+1}-y_{k+1}).
\end{align*} From the preceding inequality and \eqref{eqn:main_02} we obtain \begin{align*} \|x_{k+1}-y\|^2 & \leq \|x_k-y\|^2-\|x_{k+1}-y_{k+1}\|^2 -\|y_{k+1}-x_k\|^2 \notag\\ & +2N^{-1}\gamma_k\left(\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right)^T(x_{k+1}-y_{k+1})\notag \\ &+2N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-x_{k+1}). \end{align*} We further obtain \begin{align*} \|x_{k+1}-y\|^2 & \leq \|x_k-y\|^2-\|x_{k+1}-y_{k+1}\|^2 -\|y_{k+1}-x_k\|^2 \notag\\ & +2N^{-1}\gamma_k\left(\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right.\notag\\ &\left.-\tilde \nabla f(y_{k+1})-{w}_{f,k}-e_{f,k}-\rho_kF(y_{k+1})-\rho_k{w}_{F,k}-\rho_k{e}_{F,k}\right)^T(x_{k+1}-y_{k+1})\notag \\ &+2N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-y_{k+1}). \end{align*} Recall that for any $a,b \in \mathbb{R}^n$, we have $2a^Tb \leq \|a\|^2+\|b\|^2$. We obtain \begin{align}\label{eqn:main_03} \|x_{k+1}-y\|^2 & \leq \|x_k-y\|^2 -\|y_{k+1}-x_k\|^2 \notag\\ & +N^{-2}\gamma_k^2\left\|\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right.\notag\\ &\left.-\tilde \nabla f(y_{k+1})-{w}_{f,k}-e_{f,k}-\rho_kF(y_{k+1})-\rho_k{w}_{F,k}-\rho_k{e}_{F,k}\right\|^2\notag \\ &+2N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-y_{k+1}). \end{align} Note that we can write \begin{align*} &\left\|\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k} \right.
\\ &\left.-\tilde \nabla f(y_{k+1})-{w}_{f,k}-e_{f,k}-\rho_kF(y_{k+1})-\rho_k{w}_{F,k}-\rho_k{e}_{F,k}\right\|^2 \\ &\leq 12\|\tilde \nabla f(x_k)\|^2+12\|\tilde \nabla f(y_{k+1})\|^2 + 12\rho_k^2\|F(x_k)\|^2+12\rho_k^2\|F(y_{k+1})\|^2+12\Delta_f +12\rho_k^2\Delta_F, \end{align*} where $\Delta_f \triangleq \|\tilde{w}_{f,k}\|^2+\|\tilde{e}_{f,k}\|^2+\|{w}_{f,k}\|^2+\|e_{f,k}\|^2$ and $\Delta_F \triangleq \|\tilde{w}_{F,k}\|^2+\|\tilde{e}_{F,k}\|^2+\|{w}_{F,k}\|^2+\|{e}_{F,k}\|^2$. In view of Remark~\ref{rem:bounds} we have \begin{align*} &\left\|\tilde \nabla f(x_k)+\tilde{w}_{f,k}+\tilde{e}_{f,k}+\rho_kF(x_k)+\rho_k\tilde{w}_{F,k}+\rho_k\tilde{e}_{F,k}\right. \\ &\left.-\tilde \nabla f(y_{k+1})-{w}_{f,k}-e_{f,k}-\rho_kF(y_{k+1})-\rho_k{w}_{F,k}-\rho_k{e}_{F,k}\right\|^2 \leq 24C_f^2+24\rho_k^2C_F^2+12\Delta_f +12\rho_k^2\Delta_F. \end{align*} From the preceding inequality and \eqref{eqn:main_03}, dropping the non-positive term $-\|y_{k+1}-x_k\|^2$ we have \begin{align}\label{eqn:main_04} \|x_{k+1}-y\|^2 & \leq \|x_k-y\|^2 +N^{-2}\gamma_k^2\left(24C_f^2+24\rho_k^2C_F^2+12\Delta_f +12\rho_k^2\Delta_F\right)\notag \\ &+2N^{-1}\gamma_k\left(\tilde \nabla f(y_{k+1})+{w}_{f,k}+e_{f,k}+\rho_kF(y_{k+1})+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-y_{k+1}). \end{align} Note that from the convexity of $f$ we have that $\tilde \nabla f(y_{k+1})^T(y-y_{k+1}) \leq f(y)-f(y_{k+1})$. Also, the monotonicity of $F$ implies that $F(y_{k+1})^T(y-y_{k+1}) \leq F(y)^T(y-y_{k+1})$. 
Multiplying both sides of~\eqref{eqn:main_04} by $0.5N$ and using the preceding two relations, for all $y \in X$ and $k \geq 0$ we have \begin{align}\label{eqn:main_05} \gamma_k\rho_kF(y)^T(y_{k+1}-y) + \gamma_k(f(y_{k+1})-f(y)) &\leq 0.5N\left(\|x_k-y\|^2 -\|x_{k+1}-y\|^2\right) \notag\\ &+N^{-1}\gamma_k^2\left(12C_f^2+12\rho_k^2C_F^2+6\Delta_f +6\rho_k^2\Delta_F\right)\notag \\ &+\gamma_k\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(y-y_{k+1}). \end{align} Let us now consider the auxiliary sequence $\{u_k\}$ given by Lemma~\ref{lemma:main_ineq}. Invoking Lemma~\ref{lemma:proj} (a) we can write \begin{align*} \|u_{k+1}-y\|^2 &= \left\|\mathcal{P}_X\left(u_k+N^{-1}\gamma_k({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k})\right)-\mathcal{P}_X(y)\right\|^2\\ &\leq \| u_k+N^{-1}\gamma_k({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k})-y\|^2\\ & = \|u_k-y\|^2+N^{-2}\gamma_k^2\|{w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\|^2\\ &+2N^{-1}\gamma_k({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k})^T(u_k-y)\\ &\leq \|u_k-y\|^2+4N^{-2}\gamma_k^2\|{w}_{f,k}\|^2+4N^{-2}\gamma_k^2\|{e}_{f,k}\|^2+4N^{-2}\gamma_k^2\rho_k^2\|{w}_{F,k}\|^2+4N^{-2}\gamma_k^2\rho_k^2\|{e}_{F,k}\|^2\\ &+2N^{-1}\gamma_k({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k})^T(u_k-y). \end{align*} Rearranging the terms in the preceding inequality and multiplying both sides by $0.5N$, we obtain \begin{align}\label{eqn:main_06} 0&\leq 0.5N\left(\|u_k-y\|^2-\|u_{k+1}-y\|^2\right) +2N^{-1}\gamma_k^2\|{w}_{f,k}\|^2+2N^{-1}\gamma_k^2\|{e}_{f,k}\|^2+2N^{-1}\gamma_k^2\rho_k^2\|{w}_{F,k}\|^2\notag\\ &+2N^{-1}\gamma_k^2\rho_k^2\|{e}_{F,k}\|^2+\gamma_k({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k})^T(u_k-y). \end{align} Summing the inequalities \eqref{eqn:main_05} and \eqref{eqn:main_06} we have \begin{align*} &\gamma_k\rho_kF(y)^T(y_{k+1}-y) + \gamma_k(f(y_{k+1})-f(y)) \leq 0.5N\left(\|x_k-y\|^2\right. \\ &\left.
-\|x_{k+1}-y\|^2+\|u_k-y\|^2-\|u_{k+1}-y\|^2\right) \notag\\ &+2N^{-1}\gamma_k^2\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}\gamma_k^2\rho_k^2\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\gamma_k\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}). \end{align*} Multiplying both sides of the preceding inequality by $(\gamma_k\rho_k)^{r-1}$, we obtain the inequality \eqref{eqn:main_07}. \end{proof} \begin{lemma}\label{lemma:aux_exp_zero} Consider the auxiliary sequence $\{u_k\}$ defined in Lemma~\ref{lemma:main_ineq}. Let Assumptions~\ref{assum:problem} and~\ref{assum:rnd_vars} hold. Then for any $k\geq 0$ we have \begin{align*} \mathbb{E}[\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1})]=0. \end{align*} \end{lemma} \begin{proof} Consider $\{u_k\}$ defined in Lemma~\ref{lemma:main_ineq}. From this definition and Algorithm~\ref{algorithm:aR-IP-SeG}, we observe that $u_k$ is $\mathcal{F}_{k-1}$-measurable. Also, note that $y_{k+1}$ is $\mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}$-measurable. We can write \begin{align}\label{eqn:lem_u_zero_1} &\mathbb{E}[\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1})\mid \mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}]\notag\\ &=\mathbb{E}[\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)\mid \mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}]^T(u_k-y_{k+1}). \end{align} Note that from Lemma \ref{lemma:prop_rnd_blcks} (a) we have \begin{align}\label{eqn:lem_u_zero_2} \mathbb{E}[{w}_{f,k}+\rho_k{w}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}]=0. \end{align} We also have from Lemma \ref{lemma:prop_rnd_blcks} (c) that \begin{align*} \mathbb{E}[e_{f,k}+\rho_k{e}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k,\xi_k\}]=0.
\end{align*} Taking conditional expectations on both sides of the preceding equation with respect to $\mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}$, i.e., averaging over $\xi_k$ and invoking the tower property, we obtain \begin{align*} \mathbb{E}[e_{f,k}+\rho_k{e}_{F,k}\mid \mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}]=0. \end{align*} Combining the preceding relation with \eqref{eqn:lem_u_zero_1} and \eqref{eqn:lem_u_zero_2} we have that \begin{align*} &\mathbb{E}[\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1})\mid \mathcal{F}_{k-1}\cup\{\tilde{\xi}_k,\tilde{i}_k\}]=0. \end{align*} Taking expectations on both sides of the preceding relation, we obtain the desired result. \end{proof} Now, we derive upper bounds for the suboptimality of the objective function and for the gap function associated with the SVI constraint. \begin{proposition}[Error bounds]\label{prop:bounds} Consider Algorithm~\ref{algorithm:aR-IP-SeG} for solving problem~\eqref{prob:sopt_svi}. Let Assumptions~\ref{assum:problem} and~\ref{assum:rnd_vars} hold. Suppose $\{\gamma_k\rho_k\}$ is nonincreasing, $\{\rho_k\}$ is nondecreasing, and $0\leq r<1$ is a scalar. The following results hold for any $K\geq 2$. \begin{align} &\mathbb{E}[f(\bar{y}_K)]- f^*\leq \frac{4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}\rho_{K-1}+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{1+r}\rho_k\left(\theta_F+\theta_f\rho_k^{-2}\right)}{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r},\label{prop:subopt_bound}\\ &\mathbb{E}[\mbox{Gap}^*(\bar{y}_K)]\leq \frac{4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\left(\theta_F\gamma_k\rho_k+\theta_f\gamma_k\rho_k^{-1}+2ND_f\rho_k^{-1}\right)}{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r},\label{prop:infeas_bound} \end{align} where $\theta_F\triangleq (7N-1)C_F^2+7\nu_F^2$ and $\theta_f\triangleq (7N-1)C_f^2+7\nu_f^2$. \end{proposition} \begin{proof} First, we show \eqref{prop:subopt_bound}. Consider the inequality \eqref{eqn:main_07}.
Let $y:=x^*$ where $x^* \in X$ is an optimal solution to the problem \eqref{prob:sopt_svi}. This implies that $x^* \in \mbox{SOL}(X, \mathbb{E}[F(\bullet,\xi(\omega))])$ and hence, $F(x^*)^T(y_{k+1}-x^*) \geq 0$. We obtain \begin{align}\label{eqn:prop_bounds_01} &(\gamma_k\rho_k)^{r}\rho_k^{-1}(f(y_{k+1})-f^*) \leq 0.5N(\gamma_k\rho_k)^{r-1}\left(\|x_k-x^*\|^2 -\|x_{k+1}-x^*\|^2\right. \notag\\ &\left. +\|u_k-x^*\|^2-\|u_{k+1}-x^*\|^2\right) \notag\\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}). \end{align} Multiplying both sides by $\rho_k$ and then adding and subtracting $$0.5N(\gamma_{k-1}\rho_{k-1})^{r-1}\rho_{k-1}\left(\|x_k-x^*\|^2+\|u_k-x^*\|^2 \right),$$ for all $k \geq 1$ we have \begin{align}\label{eqn:prop_bounds_02} (\gamma_k\rho_k)^{r}(f(y_{k+1})-f^*) &\leq 0.5N(\gamma_{k-1}\rho_{k-1})^{r-1}\rho_{k-1}\left(\|x_k-x^*\|^2 +\|u_k-x^*\|^2\right) \notag\\ &-0.5N(\gamma_{k}\rho_{k})^{r-1}\rho_{k}\left(\|x_{k+1}-x^*\|^2+\|u_{k+1}-x^*\|^2\right)\notag\\ &+ 0.5N\left((\gamma_{k}\rho_{k})^{r-1}\rho_k-(\gamma_{k-1}\rho_{k-1})^{r-1}\rho_{k-1}\right)\left(\|x_k-x^*\|^2 +\|u_k-x^*\|^2\right)\notag\\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-1}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\rho_k\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+(\gamma_k\rho_k)^r\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}).
\end{align} Note that since $r<1$, $\{\gamma_k\rho_k\}$ is nonincreasing, and $\{\rho_k\}$ is nondecreasing, we have $$(\gamma_{k}\rho_{k})^{r-1}\rho_k-(\gamma_{k-1}\rho_{k-1})^{r-1}\rho_{k-1} \geq 0.$$ Thus, in view of Remark~\ref{rem:bounds} we have \begin{align*} &0.5N\left((\gamma_{k}\rho_{k})^{r-1}\rho_k-(\gamma_{k-1}\rho_{k-1})^{r-1}\rho_{k-1}\right)\left(\|x_k-x^*\|^2 +\|u_k-x^*\|^2\right) \\ & \leq 4ND_X^2\left((\gamma_{k}\rho_{k})^{r-1}\rho_k-(\gamma_{k-1}\rho_{k-1})^{r-1}\rho_{k-1}\right). \end{align*} Substituting the preceding bound in \eqref{eqn:prop_bounds_02} and then summing the resulting inequality for $k=1,\ldots,K-1$, we obtain \begin{align}\label{eqn:prop_bounds_02a} &\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r}(f(y_{k+1})-f^*) \leq 0.5N(\gamma_{0}\rho_{0})^{r-1}\rho_{0}\left(\|x_1-x^*\|^2 +\|u_1-x^*\|^2\right) \notag\\ &+4ND_X^2\left((\gamma_{K-1}\rho_{K-1})^{r-1}\rho_{K-1}-(\gamma_{0}\rho_0)^{r-1}\rho_{0}\right)\notag\\ &+2N^{-1}\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-1}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}).
\end{align} From \eqref{eqn:prop_bounds_01} for $k=0$ we have \begin{align}\label{eqn:prop_bounds_03} (\gamma_0\rho_0)^{r}(f(y_{1})-f^*) &\leq 0.5N(\gamma_0\rho_0)^{r-1}\rho_0\left(\|x_0-x^*\|^2 -\|x_{1}-x^*\|^2+\|u_0-x^*\|^2-\|u_{1}-x^*\|^2\right) \notag\\ &+2N^{-1}(\gamma_0\rho_0)^{1+r}\rho_0^{-1}\left(6C_f^2+3\|\tilde{w}_{f,0}\|^2+3\|\tilde{e}_{f,0}\|^2+4\|{w}_{f,0}\|^2+4\|e_{f,0}\|^2\right)\notag \\ &+2N^{-1}(\gamma_0\rho_0)^{1+r}\rho_0\left(6C_F^2+3\|\tilde{w}_{F,0}\|^2+3\|\tilde{e}_{F,0}\|^2+4\|{w}_{F,0}\|^2+4\|{e}_{F,0}\|^2\right)\notag \\ &+(\gamma_0\rho_0)^r\left({w}_{f,0}+e_{f,0}+\rho_0{w}_{F,0}+\rho_0{e}_{F,0}\right)^T(u_0-y_{1}). \end{align} Summing the preceding two relations, we obtain \begin{align}\label{eqn:prop_bounds_04} &\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}(f(y_{k+1})-f^*) \leq 0.5N(\gamma_0\rho_0)^{r-1}\rho_0\left(\|x_0-x^*\|^2 +\|u_0-x^*\|^2\right) \notag\\ &+4ND_X^2\left((\gamma_{K-1}\rho_{K-1})^{r-1}\rho_{K-1}-(\gamma_{0}\rho_0)^{r-1}\rho_{0}\right)\notag\\ &+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-1}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}). \end{align} Note that from the convexity of $f$ and Lemma \ref{lemma:ave} we have \begin{align*} \frac{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^rf(y_{k+1})}{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r} &= \sum_{k=0}^{K-1}\left(\frac{(\gamma_k\rho_k)^r}{\sum_{j=0}^{K-1}(\gamma_j\rho_j)^r}\right)f(y_{k+1}) = \sum_{k=0}^{K-1}\lambda_{k,K}f(y_{k+1}) \geq f\left(\sum_{k=0}^{K-1} \lambda_{k,K} y_{k+1}\right) \\ & = f(\bar{y}_K).
\end{align*} Dividing both sides of \eqref{eqn:prop_bounds_04} by $\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r$, using the preceding relation together with $\|x_0-x^*\|^2 +\|u_0-x^*\|^2 \leq 8D_X^2$, we obtain \begin{align}\label{eqn:prop_bounds_05} f(\bar{y}_K) -f^*&\leq \left(\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r\right)^{-1}\left(4ND_X^2(\gamma_0\rho_0)^{r-1}\rho_0 +4ND_X^2\left((\gamma_{K-1}\rho_{K-1})^{r-1}\rho_{K-1}-(\gamma_{0}\rho_0)^{r-1}\rho_{0}\right)\right.\notag\\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-1}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\right.\notag \\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\right.\notag \\ &\left.+\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1})\right). \end{align} Taking expectations on both sides and applying Corollary \ref{cor:exp_terms} and Lemma \ref{lemma:aux_exp_zero}, we obtain \begin{align*} \mathbb{E}[f(\bar{y}_K)]- f^*&\leq \left(\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r\right)^{-1}\left(4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}\rho_{K-1}\right. \\ & \left. +2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-1}\left(6C_f^2+7\nu_f^2+7(N-1)C_f^2\right)\right.\notag \\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k\left(6C_F^2+7\nu_F^2+7(N-1)C_F^2\right)\right). \end{align*} This implies that the inequality \eqref{prop:subopt_bound} holds for all $K\geq 2$. Next, we show the inequality \eqref{prop:infeas_bound}. Consider the inequality \eqref{eqn:main_07} again for an arbitrary $y \in X$. In view of Remark~\ref{rem:bounds} we have $|f(y_{k+1})-f(y)| \leq 2D_f$.
Rearranging the terms in \eqref{eqn:main_07} and using the preceding bound, we obtain \begin{align} \label{eqn:prop_bound2_01} (\gamma_k\rho_k)^rF(y)^T(y_{k+1}-y) & \leq 0.5N(\gamma_k\rho_k)^{r-1}\left(\|x_k-y\|^2 -\|x_{k+1}-y\|^2+\|u_k-y\|^2-\|u_{k+1}-y\|^2\right) \notag\\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}) +2(\gamma_k\rho_k)^{r}\rho_k^{-1}D_f. \end{align} Adding and subtracting $0.5N(\gamma_{k-1}\rho_{k-1})^{r-1}\left(\|x_k-y\|^2+\|u_k-y\|^2 \right)$, for all $k \geq 1$ we have \begin{align}\label{eqn:prop_bound2_02} (\gamma_k\rho_k)^{r}F(y)^T(y_{k+1}-y) &\leq 0.5N(\gamma_{k-1}\rho_{k-1})^{r-1}\left(\|x_k-y\|^2 +\|u_k-y\|^2\right) \notag\\ &-0.5N(\gamma_k\rho_k)^{r-1}\left(\|x_{k+1}-y\|^2+\|u_{k+1}-y\|^2\right)\notag\\ &+ 0.5N\left((\gamma_k\rho_k)^{r-1}-(\gamma_{k-1}\rho_{k-1})^{r-1}\right)\left(\|x_k-y\|^2 +\|u_k-y\|^2\right)\notag\\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}) +2(\gamma_k\rho_k)^{r}\rho_k^{-1}D_f. \end{align} Note that because $r<1$ and $\{\gamma_k\rho_k\}$ is nonincreasing, we have $(\gamma_k\rho_k)^{r-1}-(\gamma_{k-1}\rho_{k-1})^{r-1} \geq 0$.
Thus, in view of Remark~\ref{rem:bounds} we have \begin{align*} 0.5N\left((\gamma_k\rho_k)^{r-1}-(\gamma_{k-1}\rho_{k-1})^{r-1}\right)\left(\|x_k-y\|^2 +\|u_k-y\|^2\right) \leq 4ND_X^2\left((\gamma_k\rho_k)^{r-1}-(\gamma_{k-1}\rho_{k-1})^{r-1}\right). \end{align*} Substituting the preceding bound in \eqref{eqn:prop_bound2_02} and then summing the resulting inequality for $k=1,\ldots,K-1$, we obtain \begin{align}\label{eqn:prop_bound2_03} &\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r}F(y)^T(y_{k+1}-y) \leq 0.5N(\gamma_{0}\rho_{0})^{r-1}\left(\|x_1-y\|^2 +\|u_1-y\|^2\right) \notag\\ & + 4ND_X^2\left((\gamma_{K-1}\rho_{K-1})^{r-1}-(\gamma_{0}\rho_{0})^{r-1}\right)\notag\\ &+2N^{-1}\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\sum_{k=1}^{K-1}\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}) +2D_f\sum_{k=1}^{K-1}(\gamma_k\rho_k)^{r}\rho_k^{-1}. \end{align} Consider \eqref{eqn:prop_bound2_01} for $k=0$.
Summing that relation with \eqref{eqn:prop_bound2_03}, we have \begin{align}\label{eqn:prop_bound2_04} &\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}F(y)^T(y_{k+1}-y) \leq 0.5N(\gamma_0\rho_0)^{r-1}\left(\|x_0-y\|^2 +\|u_0-y\|^2\right) \notag\\ & +4ND_X^2\left((\gamma_{K-1}\rho_{K-1})^{r-1}-(\gamma_{0}\rho_0)^{r-1}\right)\notag\\ &+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\notag \\ &+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\notag \\ &+\sum_{k=0}^{K-1}\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}) +2D_f\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\rho_k^{-1}. \end{align} Dividing both sides of \eqref{eqn:prop_bound2_04} by $\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r$, invoking Lemma~\ref{lemma:ave}, and using $\|x_0-y\|^2 +\|u_0-y\|^2 \leq 8D_X^2$, we obtain \begin{align}\label{eqn:prop_bound2_05} F(y)^T(\bar{y}_K-y) &\leq \left(\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r\right)^{-1}\left(4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}\right.\notag\\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\right.\notag \\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\right.\notag \\ &\left.+\sum_{k=0}^{K-1}\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}) +2D_f\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\rho_k^{-1}\right).
\end{align} Taking the supremum of both sides of \eqref{eqn:prop_bound2_05} with respect to $y$ over the set $X$ and invoking Definition~\ref{def:dual_gap}, we have \begin{align*} \mbox{Gap}^*(\bar{y}_K) &\leq \left(\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r\right)^{-1}\left(4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}\right.\notag\\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+3\|\tilde{w}_{f,k}\|^2+3\|\tilde{e}_{f,k}\|^2+4\|{w}_{f,k}\|^2+4\|e_{f,k}\|^2\right)\right.\notag \\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+3\|\tilde{w}_{F,k}\|^2+3\|\tilde{e}_{F,k}\|^2+4\|{w}_{F,k}\|^2+4\|{e}_{F,k}\|^2\right)\right.\notag \\ &\left.+\sum_{k=0}^{K-1}\gamma_k^r\rho_k^{r-1}\left({w}_{f,k}+e_{f,k}+\rho_k{w}_{F,k}+\rho_k{e}_{F,k}\right)^T(u_k-y_{k+1}) +2D_f\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\rho_k^{-1}\right). \end{align*} Taking expectations on both sides and applying Corollary \ref{cor:exp_terms} and Lemma \ref{lemma:aux_exp_zero}, we obtain \begin{align*} \mathbb{E}[\mbox{Gap}^*(\bar{y}_K)] &\leq \left(\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r\right)^{-1}\left(4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}\right. \\ & \left. +2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\rho_k^{-2}\left(6C_f^2+7\nu_f^2+7(N-1)C_f^2\right)\right.\notag \\ &\left.+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}\left(6C_F^2+7\nu_F^2+7(N-1)C_F^2\right)+2D_f\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\rho_k^{-1}\right). \end{align*} Hence, we obtain the infeasibility bound \eqref{prop:infeas_bound}. \end{proof} Now, we specify the stepsize $\gamma_k$ and the penalty parameter $\rho_k$ to obtain the iteration complexity of Algorithm \ref{algorithm:aR-IP-SeG}. \begin{theorem}[Rate statements and iteration complexity guarantees]\label{prop:bounds} Consider Algorithm~\ref{algorithm:aR-IP-SeG} applied to problem~\eqref{prob:sopt_svi}. Suppose $r \in [0,1)$ is an arbitrary scalar. Let Assumptions~\ref{assum:problem} and~\ref{assum:rnd_vars} hold.
Suppose that, for any $k\geq 0$, the stepsize and the penalty parameter are given by \begin{align*} \gamma_k \triangleq \frac{\gamma_0}{\sqrt[4]{(k+1)^3}} \quad \hbox{and } \quad \rho_k \triangleq \rho_0\sqrt[4]{k+1}. \end{align*} Then, for all $K\geq 2^{\frac{2}{1-r}}$, the following statements hold. \noindent {\bf (i)} The convergence rate in terms of the suboptimality is as follows. \begin{align*} \mathbb{E}[f(\bar{y}_K)]- f^*&\leq \left(\tfrac{D_X^2}{\gamma_0\rho_0}+\tfrac{\gamma_0\rho_0\left((7-N^{-1})C_F^2+7\nu_F^2N^{-1}+\tfrac{(7-N^{-1})C_f^2+7\nu_f^2N^{-1}}{\rho_0^2}\right)}{(1.5-r)N}\right)\frac{4\rho_0(2-r)N }{\sqrt[4]{K}}. \end{align*} \noindent {\bf (ii)} The convergence rate in terms of the infeasibility is as follows. \begin{align*} \mathbb{E}[\mbox{Gap}^*(\bar{y}_K)]&\leq \left(\tfrac{D_X^2}{\gamma_0\rho_0\sqrt[4]{K}}+\tfrac{\gamma_0\rho_0\left((7-N^{-1})C_F^2+7\nu_F^2N^{-1}+\tfrac{(7-N^{-1})C_f^2+7\nu_f^2N^{-1}}{\rho_0^2}\right)}{(1-r)N\sqrt[4]{K}}+\tfrac{D_fN^{-1}}{\rho_0(0.75-0.5r)}\right)\frac{4(2-r)N }{\sqrt[4]{K}}. \end{align*} \noindent {\bf (iii)} Let $K_\epsilon$ denote the number of iterations required to achieve $\mathbb{E}[f(\bar{y}_{K_\epsilon})]- f^* \leq \epsilon$ and $\mathbb{E}[\mbox{Gap}^*(\bar{y}_{K_\epsilon})] \leq \epsilon$. Then the total iteration complexity and the total sample complexity of Algorithm~\ref{algorithm:aR-IP-SeG} coincide and are both $\mathcal{O}(N^4\epsilon^{-4})$, where $N$ denotes the number of blocks.
\end{theorem} \begin{proof} (i) Substituting the update rules of $\gamma_k$ and $\rho_k$ in \eqref{prop:subopt_bound}, we obtain \begin{align*} &\mathbb{E}[f(\bar{y}_K)]- f^*\leq \frac{4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}\rho_{K-1}+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{1+r}\rho_k\left(\theta_F+\theta_f\rho_k^{-2}\right)}{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r}\\ &\leq \frac{4ND_X^2(\gamma_0\rho_0K^{-0.5})^{r-1}\rho_0K^{0.25}+2N^{-1}\rho_0\left(\theta_F+\theta_f\rho_0^{-2}\right)\sum_{k=0}^{K-1}(\gamma_0\rho_0(k+1)^{-0.5})^{1+r}(k+1)^{0.25}}{(\gamma_0\rho_0)^r\sum_{k=0}^{K-1}(k+1)^{-0.5r}}\\ &= \frac{4ND_X^2\rho_0(\gamma_0\rho_0)^{r-1}K^{0.75-0.5r}+2N^{-1}\rho_0\left(\theta_F+\theta_f\rho_0^{-2}\right)(\gamma_0\rho_0)^{1+r}\sum_{k=0}^{K-1}(k+1)^{-(0.25+0.5r)}}{(\gamma_0\rho_0)^r\sum_{k=0}^{K-1}(k+1)^{-0.5r}}. \end{align*} Because $0\leq r<1$, both exponents $0.25+0.5r$ and $0.5r$ are nonnegative and smaller than $1$. This implies that the conditions of Lemma~\ref{lemma:harmonic_bnds} are met. Employing the bounds provided by Lemma~\ref{lemma:harmonic_bnds}, from the preceding inequality we have \begin{align*} &\mathbb{E}[f(\bar{y}_K)]- f^*\\&\leq \frac{4ND_X^2\rho_0(\gamma_0\rho_0)^{r-1}K^{0.75-0.5r}+2N^{-1}\rho_0\left(\theta_F+\theta_f\rho_0^{-2}\right)(\gamma_0\rho_0)^{1+r}(0.75-0.5r)^{-1}K^{0.75-0.5r}}{0.5(1-0.5r)^{-1}(\gamma_0\rho_0)^rK^{1-0.5r}}\\ &=\frac{(2-r)\left(4ND_X^2\rho_0(\gamma_0\rho_0)^{-1}+2N^{-1}\rho_0\left(\theta_F+\theta_f\rho_0^{-2}\right)(\gamma_0\rho_0)(0.75-0.5r)^{-1}\right)}{K^{0.25}}. \end{align*} Substituting the values of $\theta_f$ and $\theta_F$ and rearranging the terms, we obtain the desired rate statement in (i). \noindent (ii) Next we derive the non-asymptotic rate statement in terms of the infeasibility.
Substituting the update rules of $\gamma_k$ and $\rho_k$ in \eqref{prop:infeas_bound}, and noting that $\gamma_k$ and $\rho_k^{-1}$ are nonincreasing, we obtain \begin{align*} &\mathbb{E}[\mbox{Gap}^*(\bar{y}_K)]\leq \frac{4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}+2N^{-1}\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\left(\theta_F\gamma_k\rho_k+\theta_f\gamma_k\rho_k^{-1}+2ND_f\rho_k^{-1}\right)}{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r}\\ &\leq \frac{4ND_X^2(\gamma_{K-1}\rho_{K-1})^{r-1}+2N^{-1}(\theta_F+\theta_f\rho_0^{-2})\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r+1}+4D_f\sum_{k=0}^{K-1}(\gamma_k\rho_k)^{r}\rho_k^{-1}}{\sum_{k=0}^{K-1}(\gamma_k\rho_k)^r}\\ &\leq \frac{4ND_X^2(\gamma_0\rho_0K^{-0.5})^{r-1}+2N^{-1}\left(\theta_F+\theta_f\rho_0^{-2}\right)(\gamma_0\rho_0)^{1+r}\sum_{k=0}^{K-1}(k+1)^{-0.5(1+r)}}{(\gamma_0\rho_0)^r\sum_{k=0}^{K-1}(k+1)^{-0.5r}}\\ &+\frac{4D_f(\gamma_0\rho_0)^{r}\rho_0^{-1}\sum_{k=0}^{K-1}(k+1)^{-0.5r-0.25}}{(\gamma_0\rho_0)^r\sum_{k=0}^{K-1}(k+1)^{-0.5r}}. \end{align*} Employing the bounds provided by Lemma~\ref{lemma:harmonic_bnds}, from the preceding inequality we have \begin{align*} \mathbb{E}[\mbox{Gap}^*(\bar{y}_K)]&\leq \frac{4ND_X^2(\gamma_0\rho_0)^{-1}K^{-0.5(r-1)}+2N^{-1}\left(\theta_F+\theta_f\rho_0^{-2}\right)(\gamma_0\rho_0)(1-0.5(1+r))^{-1}K^{1-0.5(1+r)}}{0.5(1-0.5r)^{-1}K^{1-0.5r}}\\ &+\frac{4D_f\rho_0^{-1}(1-0.5r-0.25)^{-1}K^{1-0.5r-0.25}}{0.5(1-0.5r)^{-1}K^{1-0.5r}}\\ & \leq (2-r)\frac{4ND_X^2(\gamma_0\rho_0)^{-1}+4N^{-1}\left(\theta_F+\theta_f\rho_0^{-2}\right)(\gamma_0\rho_0)(1-r)^{-1}}{K^{0.5}}\\ &+(2-r)\frac{4D_f\rho_0^{-1}(0.75-0.5r)^{-1}}{K^{0.25}}. \end{align*} The rate statement in (ii) is obtained by substituting the values of $\theta_f$ and $\theta_F$ and rearranging the terms. \noindent (iii) The result of part (iii) follows directly from the rate statements in parts (i) and (ii).
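The harmonic-sum bounds of Lemma~\ref{lemma:harmonic_bnds}, invoked in both parts of the proof above, can be sanity-checked numerically. The following Python sketch is an illustration only and not part of the analysis; the symbols $\beta$ and $K$ stand in for the generic exponent and iteration count, and it verifies $0.5(1-\beta)^{-1}K^{1-\beta} \leq \sum_{k=0}^{K-1}(k+1)^{-\beta} \leq (1-\beta)^{-1}K^{1-\beta}$ for $0\leq \beta<1$ and sufficiently large $K$.

```python
def harmonic_sum(K, beta):
    # S(K, beta) = sum_{k=0}^{K-1} (k+1)^(-beta), the sum bounded in the proof
    return sum((k + 1) ** (-beta) for k in range(K))

def bounds_hold(K, beta):
    # integral-type bounds: 0.5*K^(1-beta)/(1-beta) <= S <= K^(1-beta)/(1-beta)
    S = harmonic_sum(K, beta)
    lower = 0.5 * K ** (1.0 - beta) / (1.0 - beta)
    upper = K ** (1.0 - beta) / (1.0 - beta)
    return lower <= S <= upper

# exponents arising in the proof: beta = 0.5r, 0.25 + 0.5r, and 0.5(1 + r),
# for r in [0, 1); K = 4096 satisfies K >= 2^{2/(1-r)} for r up to 0.8
for r in (0.0, 0.5, 0.8):
    for beta in (0.5 * r, 0.25 + 0.5 * r, 0.5 * (1.0 + r)):
        assert bounds_hold(4096, beta)
```

The lower bound is what makes the denominators $0.5(1-0.5r)^{-1}K^{1-0.5r}$ in the displayed estimates valid, while the upper bound controls the numerator sums.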
\end{proof} \section{Estimation of price of stability}\label{sec:pos} Our goal in this section lies in devising a stochastic scheme for estimating the price of stability (PoS) in monotone stochastic Nash games. In particular, we are interested in characterizing the speed of the scheme in terms of confidence intervals built around the true value of the PoS. The proposed scheme includes three main steps, described as follows. \noindent {\bf (i)} Employing Algorithm~\ref{algorithm:aR-IP-SeG} for approximating a solution to the optimization problem \eqref{prob:sopt_svi}. \noindent {\bf (ii)} Employing a stochastic approximation method for approximating a solution to the nonsmooth stochastic optimization problem $\min_{x\in X}\mathbb{E}[f(x,\xi)]$. This can be done through a host of well-known methods, including stochastic subgradient and gradient methods~\cite{nemirovski_robust_2009,FarzadAut12} and their accelerated smoothed variants~\cite{jalilzadeh2018smoothed}. Another avenue for solving this class of problems is the family of stochastic extra-subgradient methods~\cite{juditsky2011solving,nemirovski2004prox,yousefian2014optimal,yousefian2018stochastic,iusem2017extragradient}. \noindent {\bf (iii)} Lastly, given the two approximate optimal solutions in (i) and (ii), we estimate the objective function value $\mathbb{E}[f(x,\xi)]$ at each solution. The PoS is then estimated by dividing the approximate optimal objective value of problem \eqref{prob:sopt_svi} by that of $\min_{x\in X}\mathbb{E}[f(x,\xi)]$. An example of this scheme is presented in Algorithm~\ref{algorithm:pos}. Here, the vectors $y_{k,1}$ and $x_{k,1}$ are generated by Algorithm~\ref{algorithm:aR-IP-SeG}, while $y_{k,2}$ and $x_{k,2}$ are generated by a standard stochastic extra-subgradient method for solving $\min_{x\in X}\mathbb{E}[f(x,\xi)]$. We make the following remark to clarify this method.
\begin{remark}\em As mentioned earlier, we have several options in employing a method for solving the canonical nonsmooth stochastic optimization problem $\min_{x\in X}\mathbb{E}[f(x,\xi)]$. Here, we use the stochastic extra-subgradient method, which is known to achieve a convergence rate of the order $\frac{1}{\sqrt{K}}$ when employing a suitable weighted averaging scheme specified by \eqref{eqn:averaging_pos_eq2} (cf.~\cite{yousefian2018stochastic}). We also note that Algorithm~\ref{algorithm:pos} could be presented more compactly by calling the two extra-subgradient schemes separately. However, several different groups of random samples are generated in Algorithm~\ref{algorithm:pos}, and the analysis of the confidence interval relies on the assumptions we make on these samples. As such, we explicitly present the two schemes in a unified form to be clear about the assumptions that play a key role in deriving the confidence interval. \end{remark} We make the following assumption about the random samples in Algorithm~\ref{algorithm:pos}. \begin{assumption}\label{assum:vars_alg_pos} Let the following statements hold. \noindent (i) The random samples $\{ \xi_{k,1}\}_{k=0}^{K-1}$, $\{\tilde \xi_{k,1}\}_{k=0}^{K-1}$, $\{\xi_{k,2}\}_{k=0}^{K-1}$, $\{\tilde \xi_{k,2}\}_{k=0}^{K-1}$, $\{ \xi_{t}^{M_K}\}_{t=0}^{M_K-1}$, $\{ \tilde{\xi}_{t}^{M_K}\}_{t=0}^{M_K-1}$ are i.i.d. associated with the probability space $(\Omega, \mathcal{F},\mathbb{P})$. Also, $\{i_{k,1}\}_{k=0}^{K-1}$, $\{\tilde i_{k,1}\}_{k=0}^{K-1}$, $\{i_{k,2}\}_{k=0}^{K-1}$, and $\{\tilde i_{k,2}\}_{k=0}^{K-1}$ are i.i.d. uniformly distributed within the range $\{1,\ldots,N\}$. Additionally, all the aforementioned random variables are independent from each other. \noindent (ii) $f(\bullet,\xi)$ is an unbiased estimator of the deterministic function $f(\bullet)$. \end{assumption} The main result in this section is presented in the following.
\begin{algorithm}[H] \caption{\footnotesize Variance-reduced Estimator of PoS using Randomized Stochastic Extragradient Methods (VRE-PoS-RSeG)} \label{algorithm:pos} \begin{algorithmic}[1] \STATE \textbf{initialization:} Set random initial points $x_{0,1},x_{0,2}, y_{0,1},y_{0,2}\in X$, initial stepsizes $\gamma_{0,1}, \gamma_{0,2}>0$, scalars $0 \leq r_1, r_2<1$, $\bar{y}_{0,1}:= y_{0,1}$, $\bar{y}_{0,2}:= y_{0,2}$, $\Gamma_{0,1}=\Gamma_{0,2}:=0$, $S_{0,1}=S_{0,2}:=0$, batch-size $M_K$. \FOR {$k=0,1,\ldots,K-1$} \STATE Generate $i_{k,1}$, $\tilde i_{k,1}$, $i_{k,2}$, and $\tilde{i}_{k,2}$ uniformly from $\{1,\ldots,N\}$. \STATE Generate $\xi_{k,1}$, $\tilde \xi_{k,1}$, $\xi_{k,2}$, and $\tilde \xi_{k,2}$ as realizations of the random vector $\xi$. \STATE Update the variables $y_{k,1}$, $x_{k,1}$, $y_{k,2}$, and $x_{k,2}$ as follows. \begin{align} y_{k+1,1}^{(i)}&:= \left\{\begin{array}{ll}\mathcal{P}_{X_i}\left(x_{k,1}^{(i)}-\gamma_{k,1}(\tilde \nabla_i f(x_{k,1},\tilde \xi_{k,1}) + \rho_k F_{i}(x_{k,1},\tilde \xi_{k,1}))\right) &\hbox{if } i=\tilde i_{k,1},\cr \hbox{} &\hbox{}\cr x_{k,1}^{(i)}& \hbox{if } i\neq \tilde i_{k,1},\end{array}\right.\label{eqn:y_k_update_rule_pos} \\ &\hbox{} \notag\\ x_{k+1,1}^{(i)}&:=\left\{\begin{array}{ll}\mathcal{P}_{X_i}\left(x_{k,1}^{(i)} - \gamma_{k,1}(\tilde \nabla_i f(y_{k+1,1}, \xi_{k,1}) + \rho_k F_{i}(y_{k+1,1}, \xi_{k,1}))\right) &\hbox{if } i=i_{k,1},\cr \hbox{} &\hbox{}\cr x_{k,1}^{(i)}& \hbox{if } i\neq i_{k,1},\end{array}\right.\label{eqn:x_k_update_rule_pos}\\ &\hbox{} \notag\\ y_{k+1,2}^{(i)}&:= \left\{\begin{array}{ll}\mathcal{P}_{X_i}\left(x_{k,2}^{(i)}-\gamma_{k,2}\tilde \nabla_i f(x_{k,2},\tilde \xi_{k,2}) \right) &\hbox{if } i=\tilde i_{k,2},\cr \hbox{} &\hbox{}\cr x_{k,2}^{(i)}& \hbox{if } i\neq \tilde i_{k,2},\end{array}\right.\label{eqn:y_k_update_rule_pos_2} \\ &\hbox{} \notag\\ x_{k+1,2}^{(i)}&:=\left\{\begin{array}{ll}\mathcal{P}_{X_i}\left(x_{k,2}^{(i)} - \gamma_{k,2} \tilde \nabla_i f(y_{k+1,2}, \xi_{k,2}) \right)
&\hbox{if } i=i_{k,2},\cr \hbox{} &\hbox{}\cr x_{k,2}^{(i)}& \hbox{if } i\neq i_{k,2}.\end{array}\right.\label{eqn:x_k_update_rule_pos_2} \end{align} \STATE Update $\Gamma_{k,1}$, $\Gamma_{k,2}$, $\bar y_{k,1}$, and $\bar y_{k,2}$ using the following recursions. \begin{align} &\Gamma_{k+1,1}:=\Gamma_{k,1}+(\gamma_{k,1}\rho_k)^{r_1}, \quad \bar y_{k+1,1}:=\frac{\Gamma_{k,1} \bar y_{k,1}+(\gamma_{k,1}\rho_k)^{r_1} y_{k+1,1}}{\Gamma_{k+1,1}},\label{eqn:averaging_pos_eq1}\\ &\Gamma_{k+1,2}:=\Gamma_{k,2}+\gamma_{k,2}^{r_2}, \quad \bar y_{k+1,2}:=\frac{\Gamma_{k,2} \bar y_{k,2}+\gamma_{k,2}^{r_2} y_{k+1,2}}{\Gamma_{k+1,2}}\label{eqn:averaging_pos_eq2}. \end{align} \ENDFOR \FOR {$t=0,1,\ldots,M_K-1$} \STATE Generate $\xi_{t}^{M_K}$ and $\tilde \xi_{t}^{M_K}$ as realizations of the random vector $\xi$. \STATE Do the following updates. \begin{align} &S_{t+1,1}:= S_{t,1} + f\left(\bar y_{K,1},\xi_{t}^{M_K}\right),\label{eqn:S1}\\ & S_{t+1,2}:= S_{t,2} + f\left(\bar y_{K,2},\tilde{\xi}_{t}^{M_K}\right).\label{eqn:S2} \end{align} \ENDFOR \STATE Return $\widehat{\mbox{PoS}}_K:= \frac{S_{M_K,1}}{S_{M_K,2}}$. \end{algorithmic} \end{algorithm} \begin{theorem}[Confidence interval for estimating PoS]\label{prop:pos_CI} Consider Algorithm~\ref{algorithm:pos} applied for estimating the PoS defined in~\eqref{prob:sopt_svi}. Let Assumptions~\ref{assum:problem},~\ref{assum:rnd_vars}, and~\ref{assum:vars_alg_pos} hold. Suppose $r_1, r_2 \in[0,1)$ are fixed scalars, and for any $k\geq 0$, define \begin{align*} \gamma_{k,1} \triangleq \frac{\gamma_{0,1}}{\sqrt[4]{(k+1)^3}}, \quad \rho_k \triangleq \rho_0\sqrt[4]{k+1}, \quad \gamma_{k,2} \triangleq \frac{\gamma_{0,2}}{\sqrt{k+1}}. \end{align*} Let $M_K:=K$ where $K$ is sufficiently large. Given $\alpha \in (0,1)$, the following statement holds.
\begin{align}\label{eqn:ci_pos} \mathbb{P}\left(\mbox{PoS}-\mathcal{O}\left(\frac{1}{\sqrt{K}}\right) \leq \widehat{\mbox{PoS}}_K \leq \mbox{PoS}+\mathcal{O}\left(\frac{1}{\sqrt[4]{K}}\right)\right) = (1-\alpha)^2. \end{align} \end{theorem} \begin{proof} Let us define $\bar{S}_{M_K,1}(x) = \tfrac{1}{M_K}\sum_{t=0}^{M_K-1}f\left(x,\xi_{t}^{M_K}\right)$ and $\bar{S}_{M_K,2}(x) = \tfrac{1}{M_K}\sum_{t=0}^{M_K-1}f\left(x,\tilde{\xi}_{t}^{M_K}\right)$. In view of Assumption~\ref{assum:vars_alg_pos}, we have \begin{align*} \mathbb{E}\left[\bar{S}_{M_K,1}(\bar y_{K,1})\right] = \mathbb{E}[f(\bar y_{K,1})]. \end{align*} From Theorem \ref{prop:bounds} we have \begin{align*} 0 \leq \mathbb{E}[f(\bar y_{K,1})] - f^*_{VI} \leq \frac{\theta N}{\sqrt[4]{K}}. \end{align*} Let us define $\sigma_1^2 \triangleq \mathbb{E}[\|\bar{S}_{M_K,1}(\bar y_{K,1}) - \mathbb{E}[f(\bar y_{K,1})] \|^2]$ as the variance of $\bar{S}_{M_K,1}(\bar y_{K,1})$. We have \begin{align*} \sigma_1^2 = \mathbb{E}[\|\bar{S}_{M_K,1}(\bar y_{K,1}) - \mathbb{E}[f(\bar y_{K,1})] \|^2] = \mathbb{E}[\mathbb{E}[\|\bar{S}_{M_K,1}(\bar y_{K,1}) - \mathbb{E}[f(\bar y_{K,1})] \|^2\mid \bar y_{K,1}]]\leq \mathbb{E}\left[\frac{\nu^2_f}{M_K}\right] =\frac{\nu^2_f}{M_K}. \end{align*} From the Central Limit Theorem, we can write \begin{align*} \mathbb{P}\left( -\frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}} \leq \bar{S}_{M_K,1}(\bar y_{K,1}) - \mathbb{E}[f(\bar y_{K,1})] \leq \frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}\right) = 1-\alpha. \end{align*} Combining the two preceding relations, we obtain \begin{align*} \mathbb{P}\left( -\frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}} \leq \bar{S}_{M_K,1}(\bar y_{K,1}) - f^*_{VI} \leq \frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}+\frac{\theta N}{\sqrt[4]{K}} \right) = 1-\alpha. \end{align*} Similarly, we have \begin{align*} \mathbb{P}\left( -\frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}} \leq \bar{S}_{M_K,2}(\bar y_{K,2}) - f^* \leq \frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}+\frac{\theta N}{\sqrt{K}} \right) = 1-\alpha.
\end{align*} Therefore, the following statement holds for a sufficiently large $M_K$. \begin{align*} \mathbb{P}\left(\frac{f^*_{VI}-\frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}}{f^*+ \frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}+\frac{\theta N}{\sqrt{K}}} \leq \frac{S_{M_K,1}}{S_{M_K,2}} \leq \frac{f^*_{VI}+ \frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}+\frac{\theta N}{\sqrt[4]{K}}}{f^*-\frac{z_{{\alpha}/{2}}\nu^2_f}{\sqrt{M_K}}}\right) = (1-\alpha)^2. \end{align*} This implies that \begin{align}\label{eqn:pos_1} \mathbb{P}\left(\frac{\mbox{PoS}-\frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}}{1+ \frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}+\frac{\theta N}{f^*\sqrt{K}}} \leq\widehat{\mbox{PoS}}_K \leq \frac{\mbox{PoS}+ \frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}+\frac{\theta N}{f^*\sqrt[4]{K}}}{1-\frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}}\right) = (1-\alpha)^2. \end{align} Recall that $M_K:=K$. Note that we have \begin{align}\label{eqn:pos_2} \frac{\mbox{PoS}-\frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}}{1+ \frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}+\frac{\theta N}{f^*\sqrt{K}}} =\mbox{PoS}\left(1-\frac{z_{\alpha/2}\nu_f^2+\theta N}{f^*\sqrt{K}+z_{\alpha/2}\nu_f^2+\theta N}\right)-\frac{z_{\alpha/2}\nu_f^2}{f^*\sqrt{K}+z_{\alpha/2}\nu_f^2+\theta N} =\mbox{PoS} -\mathcal{O}\left(\frac{1}{\sqrt{K}}\right), \end{align} and \begin{align}\label{eqn:pos_3} \frac{\mbox{PoS}+ \frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}+\frac{\theta N}{f^*\sqrt[4]{K}}}{1-\frac{z_{{\alpha}/{2}}\nu^2_f}{f^*\sqrt{M_K}}} &\leq \mbox{PoS}\left(1+\frac{z_{\alpha/2}\nu_f^2}{f^*\sqrt{K} -z_{\alpha/2}\nu_f^2}\right) + \frac{z_{\alpha/2}\nu_f^2}{f^*\sqrt{K} -z_{\alpha/2}\nu_f^2} +\frac{\theta N}{f^*\sqrt[4]{K} -z_{\alpha/2}\nu_f^2} \notag\\ &= \mbox{PoS} +\mathcal{O}\left(\frac{1}{\sqrt[4]{K}}\right). \end{align} From \eqref{eqn:pos_1}--\eqref{eqn:pos_3}, we obtain the statement \eqref{eqn:ci_pos}.
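The two deviation rates in \eqref{eqn:pos_2}--\eqref{eqn:pos_3} can be illustrated numerically. The following Python sketch is an illustration only; the constants PoS $=1.5$, $f^*=2$, $z_{\alpha/2}=1.645$, $\nu_f^2=1$, and $\theta N=3$ are hypothetical placeholders. It evaluates the two endpoints of the interval in \eqref{eqn:pos_1} with $M_K=K$ and confirms that the lower deviation shrinks as $1/\sqrt{K}$ while the upper deviation shrinks as $1/\sqrt[4]{K}$.

```python
def ci_endpoints(K, pos=1.5, f_star=2.0, z=1.645, nu_f_sq=1.0, theta_N=3.0):
    # endpoints of the interval in (eqn:pos_1) with M_K = K
    a = z * nu_f_sq / (f_star * K ** 0.5)   # z_{alpha/2} nu_f^2 / (f^* sqrt(M_K))
    b = theta_N / (f_star * K ** 0.5)       # theta N / (f^* sqrt(K))
    c = theta_N / (f_star * K ** 0.25)      # theta N / (f^* K^{1/4})
    lower = (pos - a) / (1.0 + a + b)
    upper = (pos + a + c) / (1.0 - a)
    return lower, upper

# deviations from the true PoS at two values of K
lo1, up1 = ci_endpoints(10 ** 4)
lo2, up2 = ci_endpoints(10 ** 8)
d_lo1, d_up1 = 1.5 - lo1, up1 - 1.5
d_lo2, d_up2 = 1.5 - lo2, up2 - 1.5

# lower deviation scales like 1/sqrt(K): multiplying K by 1e4 shrinks it ~100x
assert d_lo2 < d_lo1 / 50
# upper deviation scales like 1/K^{1/4}: multiplying K by 1e4 shrinks it ~10x
assert d_up2 < d_up1 / 5
```

The asymmetry of the interval reflects the two different error sources: the Monte-Carlo averaging error of order $1/\sqrt{M_K}$ on both sides, plus the $\mathcal{O}(K^{-1/4})$ suboptimality bound of Theorem~\ref{prop:bounds}, which enters only the upper endpoint.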
\end{proof} \section{Numerical Experiments}\label{sec:num} In this section, we present the performance of the proposed schemes in estimating the price of stability for a stochastic Nash Cournot competition over a network. The Cournot game is among the first and most popular economic models for formulating competition among multiple firms (see~\cite{JohariThesis,facchinei2007finite} for applications of Cournot models in imperfectly competitive power markets and in rate control in communication networks). The Cournot model is described as follows. Consider a collection of $N$ firms who compete over a network with $J$ nodes to sell a product. The strategy of firm $i \in\{1, \dots, N\}$ is characterized by the decision variables $y_{ij}$ and $s_{ij}$, denoting the generation and sales of firm $i$ at node $j$, respectively. Compactly, the decision variables of the $i^{th}$ firm are denoted by $x^{(i)} \triangleq \left(y_i; s_i\right) \in \mathbb{R}^{2J}$, where $y_i \triangleq \left(y_{i1}; \dots; y_{iJ}\right)$ and $s_i \triangleq \left(s_{i1}; \dots; s_{iJ}\right)$. The goal of the $i^{th}$ firm lies in minimizing the expected value of a net cost function $f_i\left(x^{(i)}, x^{(-i)},\xi\right)$ over the strategy set $X_i$. This optimization problem for firm $i$ is defined as follows. \begin{align*} \text{minimize} \qquad &\mathbb{E}\left[f_i\left(x^{(i)}, x^{(-i)},\xi\right)\right] \triangleq \mathbb{E}\left[\sum_{j=1}^{{J}} c_{ij} (y_{ij})- \sum_{j=1}^{{J}} s_{ij}p_j\left(\bar{s}_j,\xi\right)\right]\\ \text{subject to} \qquad & x^{(i)} \in X_i \triangleq \left \{ \left(y_i; s_i\right) \mid y_{ij} \leq \mathcal{B}_{ij},\sum_{j=1}^{J} y_{ij} = \sum_{j=1}^{J} s_{ij}, \quad y_{ij}, s_{ij} \geq 0,\ \text{ for all } j = 1, \dots, J \right \}.
\end{align*} Here, $\bar{s}_j \triangleq \sum_{i=1}^{N}s_{ij}$ denotes the aggregate sales of all the firms at node $j$, $p_j:\mathbb{R}\times \Omega\to \mathbb{R}$ denotes the price function characterized in terms of the aggregate sales at node $j$ and a random variable $\xi$, and $c_{ij}:\mathbb{R}\to \mathbb{R}$ denotes the production cost function of firm $i$ at node $j$. The price functions are given as $p_j \left(\bar{s}_j,\xi\right) \triangleq \alpha_j(\xi) - \beta_j \left(\bar{s}_j\right)^{\sigma}$, where $\alpha_j(\xi)$ is a positive random variable, $\beta_j$ is a positive scalar, and $\sigma \geq 1$. We assume that the cost functions are linear and the transportation costs are zero. The constraint $y_{ij} \leq \mathcal{B}_{ij}$ states that the generation is capacitated, where $\mathcal{B}_{ij}$ is a positive scalar for all $i$ and $j$. Similar to~\cite{kaushik2021method}, in defining a global objective function for the price of stability, we consider a Marshallian aggregate surplus function defined as $$\mathbb{E}[f(x,\xi)]\triangleq \sum_{i=1}^N \mathbb{E}\left[f_i\left( x^{(i)}, x^{(-i)},\xi \right)\right]. $$ It has been shown~\cite{KannanShanbhag2012} that when $\sigma \geq 1$, $f$ is convex, and that when either $\sigma =1$, or $1<\sigma\leq 3$ and $N\leq \frac{3\sigma-1}{\sigma-1}$, the mapping associated with the Cournot game, i.e., $F(x)\triangleq \left[\nabla_{x^{(1)}} \mathbb{E}[f_1(x,\xi)]; \ldots;\nabla_{x^{(N)}} \mathbb{E}[f_N(x,\xi)] \right]$, is merely monotone. \textbf{Experiments and set-up.} We compare the performance of Algorithm~\ref{algorithm:aR-IP-SeG} with that of two existing methods, namely (aRB-IRG) in~\cite{kaushik2021method} and the sequential regularization (SR) scheme (cf.~\cite{FacchineiPang2003,kaushik2021method}). Note that both the SR scheme and (aRB-IRG) can only use deterministic gradients.
To apply these two schemes, we use a sample average approximation scheme in which the deterministic gradient is approximated using a batch of $1000$ random samples. In Algorithm~\ref{algorithm:aR-IP-SeG}, however, we can use stochastic gradients (using a single sample $\xi$). In both Algorithm~\ref{algorithm:aR-IP-SeG} and (aRB-IRG), we employ a randomized block-coordinate scheme with $N$ blocks, where $N$ is the number of firms. We consider four different settings in our simulation results, which differ in the choices of the initial stepsize, the initial regularization parameter, and the initial penalty parameter. For each setting, we implement the three methods on two different Cournot games, one with $4$ players over a network with $5$ nodes, and another with $10$ players over a network with $10$ nodes. We assume that $\alpha_j(\xi)$ is uniformly distributed for all the agents. To compare the simulation results, we generate $15$ independent sample-paths for each scheme that is stochastic or randomized. \textbf{Results and insights.} The simulation results are presented in Figures~\ref{fig:comparison},~\ref{fig:comparison2}, and~\ref{fig:pos}. Several observations can be made: (i) As can be seen in Figures~\ref{fig:comparison} and~\ref{fig:comparison2}, Algorithm~\ref{algorithm:aR-IP-SeG} outperforms the other two methods in almost all the scenarios. (ii) Although Algorithm~\ref{algorithm:aR-IP-SeG} and (aRB-IRG) have the same convergence rates, Algorithm~\ref{algorithm:aR-IP-SeG} enjoys a better performance with respect to the run-time. This is because it uses stochastic gradients, which are cheaper to compute than the sample average gradients used in (aRB-IRG). (iii) We observe that as the size of the problem increases in terms of the number of players and the size of the network, the performance of all the schemes degrades.
However, Algorithm~\ref{algorithm:aR-IP-SeG} seems to stay robust across most settings and often outperforms the other two methods. (iv) In estimating the PoS in Figure~\ref{fig:pos}, the confidence intervals become tighter as the algorithm proceeds. This is indeed aligned with our theoretical statement provided in \eqref{eqn:ci_pos}. \begin{table}[H] \setlength{\tabcolsep}{0pt} \centering{ \begin{tabular}{c || c c c c} {\footnotesize {Setting}\ \ }& {\footnotesize (1)} & {\footnotesize (2)} & {\footnotesize (3) } & {\footnotesize (4) }\\ \hline\\ \rotatebox[origin=c]{90}{{\footnotesize {sample ave. gap}}} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_gap_1.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_gap_2.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_gap_3.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_gap_4.pdf} \end{minipage} \\ \hbox{}& & & &\\ \hline\\ \rotatebox[origin=c]{90}{{\footnotesize sample ave.
objective}} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_obj_1.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_obj_2.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_obj_3.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N5J4_obj_4.pdf} \end{minipage} \end{tabular}} \captionof{figure}{Simulation results for a stochastic Nash Cournot game with 4 players over a network with 5 nodes, comparing Algorithm~\ref{algorithm:aR-IP-SeG} with other existing methods for solving problem \eqref{prob:sopt_svi}.} \label{fig:comparison} \vspace{-.1in} \end{table} \begin{table}[H] \setlength{\tabcolsep}{0pt} \centering{ \begin{tabular}{c || c c c c} {\footnotesize {Setting}\ \ }& {\footnotesize (1)} & {\footnotesize (2)} & {\footnotesize (3) } & {\footnotesize (4) }\\ \hline\\ \rotatebox[origin=c]{90}{{\footnotesize {sample ave. gap}}} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_gap_1.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_gap_2.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_gap_3.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_gap_4.pdf} \end{minipage} \\ \hbox{}& & & &\\ \hline\\ \rotatebox[origin=c]{90}{{\footnotesize sample ave.
objective}} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_obj_1.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_obj_2.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_obj_3.pdf} \end{minipage} & \begin{minipage}{.22\textwidth} \includegraphics[scale=.175, angle=0]{figures/plot_N10J10_obj_4.pdf} \end{minipage} \end{tabular}} \captionof{figure}{Simulation results for a stochastic Nash Cournot game with 10 players over a network with 10 nodes, comparing Algorithm~\ref{algorithm:aR-IP-SeG} with other existing methods for solving problem \eqref{prob:sopt_svi}.} \label{fig:comparison2} \vspace{-.1in} \end{table} \begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/plot_N5J4_PoS_1.pdf} \caption{Cournot game with 4 players over a network with 5 nodes} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{figures/plot_N10J10_PoS_1.pdf} \caption{Cournot game with 10 players over a network with 10 nodes} \label{fig:sub2} \end{subfigure} \caption{Performance of Algorithm~\ref{algorithm:pos} in estimating the PoS. The 90\% confidence intervals become tighter as the scheme proceeds.} \label{fig:pos} \end{figure} \section{Acknowledgments} This work is supported by the National Science Foundation CAREER grant ECCS-1944500. \bibliographystyle{amsplain}
\section{Introduction} In this paper we consider several algorithmic problems that involve, explicitly or implicitly, a finite set of lines in three dimensions. The main problems that we consider are: \begin{description} \item[(i)] \emph{Ray shooting amid triangles in three dimensions.} We have a set $\T$ of $n$ triangles in $\reals^3$, and our goal is to preprocess $\T$ into a data structure that supports efficient ray-shooting queries, each of which specifies a ray $\rho$ and asks for the first triangle of $\T$ that is hit by $\rho$, if such a triangle exists. \item[(ii)] \emph{Intersection reporting, emptiness, and approximate counting queries amid triangles in three dimensions.} For a set $\T$ of $n$ triangles in $\reals^3$, we want to preprocess $\T$ into a data structure that supports efficient intersection reporting (resp., emptiness) queries, each of which specifies a line, ray, or segment $\rho$ and asks for reporting the triangles of $\T$ that $\rho$ intersects (resp., determining whether such a triangle exists). We want the queries to be output-sensitive, so that their cost is a small (sublinear) overhead plus a term that is nearly linear in the output size $k$. In the related problem of approximate counting queries, we want to preprocess $\T$ into a data structure, such that given a query $\rho$ as above, it efficiently computes the number of triangles of $\T$ that $\rho$ intersects, up to some prescribed small relative error. \item[(iii)] Compute the intersection of two nonconvex polyhedra. The complexity of the intersection can be quadratic in the complexities of the input polyhedra, and we therefore seek an output-sensitive solution, where the running time is a small (subquadratic) overhead plus a term that is nearly linear in $k$, where $k$ is the complexity of the intersection. \item[(iv)] Detect, count, or report intersections in a set of lines in 3-space. Again, in the reporting version we seek an output-sensitive solution, as above. 
\item[(v)] Output-sensitive construction of an arrangement of triangles in three dimensions. \end{description} All these problems, or variants thereof, have been considered in several works during the 1990s; see~\cite{AM,AgS,dB,BHOSK,CEGSS,MS,Pel} for a sample of these works. See also Pellegrini~\cite{Pel:surv} for a recent comprehensive survey of the state of the art in this area. Pellegrini~\cite{Pel} presents solutions to some of these problems, including efficient data structures (albeit less efficient than ours) for the ray-shooting problem, and also (a) an output-sensitive algorithm for computing the intersection of two nonconvex polyhedra in time $O(n^{8/5+\eps} + k\log k)$, for any $\eps>0$, where $n$ is the number of vertices, edges, and facets of the two polyhedra and $k$ is the (similarly defined) complexity of their intersection; (b) an output-sensitive algorithm for constructing an arrangement of $n$ triangles in 3-space in $O(n^{8/5+\eps} + k\log k)$ time, where $k$ is the output size; and (c) an algorithm that, in $O(n^{8/5+\eps})$ expected time, counts all pairs of intersecting lines, in a set of $n$ lines in 3-space. \paragraph{Background.} Algorithmic problems that involve lines in three dimensions have been studied for more than 30 years, covering the problems mentioned above and several others. An early study by McKenna and O'Rourke~\cite{MO} has developed some of the tools and techniques for tackling these problems. Various techniques for ray shooting, and for the related problems of computing and verifying depth orders and hidden surface removal have been studied in de Berg's dissertation~\cite{dB}, and later by de Berg et al.~\cite{BHOSK}. Another work that developed some of the infrastructure for these problems is by Chazelle et al.~\cite{CEGSS}, who presented several combinatorial and algorithmic results for problems involving lines in 3-space. 
Agarwal and Matou\v{s}ek~\cite{AM} reduced ray shooting problems, via parametric search, to segment emptiness problems (where the query is a segment and we want to determine whether it intersects any input object), and obtained efficient solutions via this reduction. See also~\cite{MS} and~\cite{AgS} for studies of some additional and special cases of the ray shooting problem. Most of the works cited above suffer from the `curse' of the four-dimensionality of (the parametric representation of) lines in space, which leads to algorithms whose complexity is inferior to those obtained in our work. Nevertheless, there are a few instances where better solutions can be obtained, such as in \cite{CEGS,CEGSS} and some other works. \paragraph{Our results.} Using the polynomial partitioning technique of \cite{Guth,GK}, we derive more efficient algorithms for the problems listed above. In our first main result, presented in Section~\ref{sec:shoot}, we tackle the ray-shooting problem, and construct a data structure on an input set of $n$ triangles, which requires $O(n^{3/2+\eps})$ storage and preprocessing, so that a ray shooting query can be answered in $O(n^{1/2+\eps})$ time, for any $\eps>0$. We then extend the technique, in Section~\ref{sec:seg}, to obtain an equally-efficient data structure for the segment-triangle intersection reporting, emptiness, and approximate counting problems, where in the case of approximate counting the query time bound has an additional term that is nearly linear in the output size. These are significant improvements over previous results, which, as already noted, have treated the lines supporting the edges of the input triangles and the line supporting the query ray (or segment) as points or surfaces in a suitable four-dimensional parametric space (in many of the earlier works, lines were actually represented as points on the \emph{Klein quadric} in five-dimensional projective space; see~\cite{BR,Hu,Pel:surv,St}). 
As a result, the algorithms obtained by these techniques were less efficient. A weakness, or rather an intriguing peculiarity, of our analysis is that it does not provide a desirably sharp tradeoff between storage and query time. To make this statement more precise, the tradeoff that the earlier solutions provide, say for the ray shooting problem for specificity, is that, for $n$ input triangles and with $s$ storage, for $s$ varying between $n$ and $n^4$, a ray-shooting query takes $O(n^{1+\eps}/s^{1/4})$ time; see, e.g.,~\cite{Pel:surv} (the `$4$' in the exponent comes from the fact that lines in 3-space are represented as objects in four-dimensional parametric space). Thus, with storage $O(n^{3/2+\eps})$, which is what our solution uses, the query time becomes about $O(n^{5/8})$, considerably weaker than our bound. An ambitious, and maybe unrealistic goal would be to improve the tradeoff so that the query time is only $O(n^{1+\eps}/s^{1/3})$. (This does indeed coincide with the bound that our main result gives, as the storage that it uses is $O(n^{3/2+\eps})$, but this coincidence only holds for this amount of storage.) Although not achieving this goal, still, combining our technique with the known, aforementioned `$4$-dimensional' tradeoff, we are able to obtain an `in between' tradeoff, which we present in Section~\ref{sec:trade}. Concretely, the tradeoff is that, with $s$ storage, the cost of a query is \begin{equation} \label{eq:trade1} Q(n,s) = \begin{cases} O\left( \frac{n^{5/4+\eps}}{s^{1/2}} \right) , & s = O(n^{3/2+\eps}) , \\ O\left( \frac{n^{4/5+\eps}}{s^{1/5}} \right) , & s = \Omega(n^{3/2+\eps}) . 
\end{cases} \end{equation} Note that this tradeoff contains our bounds $(s,Q) = \left(O(n^{3/2+\eps}), O(n^{1/2+\eps})\right)$ as a special case; that at the extreme ends $s=\Theta(n)$ and $s=\Theta(n^4)$ of the range of $s$ we get $Q = O(n^{3/4+\eps})$ and $Q = O(n^\eps)$, respectively,\footnote{The actual query time in the older tradeoff, with maximum storage, is $Q = O(\log n)$.} as in the older tradeoff; and that the new tradeoff is better for any in-between value of $s$. A comparison between the two tradeoffs is illustrated in Figure~\ref{tradeoff}. Our improved tradeoff applies to all the problems studied in this paper. In particular, it implies that, in all these problems, the overall cost of processing $m$ queries with $n$ input objects, including preprocessing cost, is \begin{equation} \label{eq:trade2} \max\Bigl\{ O(m^{2/3}n^{5/6+\eps} + n^{1+\eps}),\; O(n^{2/3}m^{5/6+\eps} + m^{1+\eps}) \Bigr\} , \end{equation} for any $\eps>0$; for the output-sensitive problems, this bounds the total overhead cost. The first (resp., second) bound dominates when $n\ge m$ (resp., $n\le m$). \begin{figure}[htb] \begin{center} \input{tradeoff.pdf_t} \caption{{\sf The old tradeoff (green) and the new tradeoff (red). The $x$-axis is the storage as a function of $n$, and the $y$-axis is the query cost. Both axes are drawn in a logarithmic scale.} } \label{tradeoff} \end{center} \end{figure} We then present, in Section~\ref{sec:other}, extensions of our technique for solving the other problems (iii), (iv) and (v) listed above. In all these applications, our algorithms are output-sensitive for the reporting versions, so that the query time bound, or the full processing cost bound, contains an additional term that is nearly linear in the output size. See Section~\ref{sec:other} for the concrete bounds that we obtain. \section{Ray shooting amid triangles} \label{sec:shoot} Let $\T$ be a collection of $n$ triangles in ${\reals}^3$.
We fix some sufficiently large constant parameter $D$, and construct a partitioning polynomial $f$ of degree $O(D)$ for $\T$, so that each of the $O(D^3)$ connected components $\tau$ of $\reals^3\setminus Z(f)$ (the cells of the partition) is crossed by at most $n/D^2$ triangle edges. We refer to triangles whose edge crosses $\tau$ as \emph{narrow triangles} (with respect to $\tau$), and refer to the remaining triangles that cross $\tau$ (but none of their edges do) as \emph{wide triangles}. We denote the set of narrow (resp., wide) triangles in $\tau$ by $\N_\tau$ (resp., $\W_\tau$). The existence of such a partitioning polynomial is implied, as a special case, by the general machinery developed in Guth~\cite{Guth}. An algorithm for constructing $f$ is given in a recent work of Agarwal et al.~\cite{AAEZ}. It runs in $O(n)$ time, for any constant value of $D$, where the constant of proportionality depends (polynomially) on $D$. For technical reasons, we want to turn any query ray into a bounded segment, and we do it by enclosing all the triangles of $\T$ by a sufficiently large bounding box $B_0$, and by clipping any query ray to its portion within $B_0$. For each (bounded) cell $\tau\subseteq B_0$ of the partition, we take the set $\W_\tau$ of wide triangles in $\tau$, and prepare a data structure for efficient segment-shooting queries into the triangles of $\W_\tau$, by segments that are fully contained in $\tau$. The nontrivial details of this procedure are given in Section~\ref{sec:wide}. As we show there, we can construct such a structure with storage and preprocessing $O(|\W_\tau|^{3/2+\eps}) = O(n^{3/2+\eps})$, for any $\eps > 0$ (where the choice of $D$ depends on $\eps$), and each segment-shooting query takes $O(|\W_\tau|^{1/2+\eps}) = O(n^{1/2+\eps})$ time. The preprocessing then recurses within each such cell $\tau$ of the partition, with the set $\N_\tau$ of the narrow triangles in $\tau$. 
The recursion terminates when the number of input triangles becomes smaller than the constant threshold $n_0 := O(D^2)$, in which case we simply output the list of triangles in the subproblem. A query, with a ray (now turned into a segment) $\rho$, emanating from a point $q$, is answered as follows. We first consider the case where $\rho$ (that is, the line containing $\rho$) is not fully contained in $Z(f)$, and discuss the (simpler, albeit still involved) case where $\rho\subset Z(f)$, later. \paragraph{The case where $\rho\not\subset Z(f)$.} We assume a standard model of algebraic computation, in which a variety of computations involving polynomials of constant degree, such as computing (some discrete representation of) the roots of such polynomials, performing comparisons and algebraic computations (of constant degree) with these roots, and so on, can be done exactly in time $C(\delta)$, where $\delta$ is the degree of the polynomial, and $C(\delta)$ is a constant that depends on $\delta$; see, e.g., \cite{AEZ,BPR}. Using this model, we first locate the cell of the partition that contains the starting endpoint $q$ of the segment $\rho$, in constant time (recalling that $D$ is a constant). One way of doing this is to construct the \emph{cylindrical algebraic decomposition (CAD)} of $Z(f)$ (see \cite{Col,SS2}), associate with each cell $\sigma$ of the CAD the cell of $\reals^3\setminus Z(f)$ that contains it (or an indication that $\sigma$ is contained in $Z(f)$), and then search with $q$ in the CAD, coordinate by coordinate (see, e.g.,~\cite{AAEZ} for more details concerning such an operation). We then find, in constant time, the $t=O(D)$ points of intersection of $\rho$ with $Z(f)$, and sort them into a sequence $P := (p_1,\ldots,p_t)$ in their order along $\rho$; we assume that $p_t\in\bd B_0$, and ignore the suffix of $\rho$ from $p_t$ onwards. 
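For concreteness, the root-finding step just described can be sketched on a toy instance, with a unit sphere standing in for $Z(f)$ (purely an illustrative assumption; for a general partitioning polynomial of degree $O(D)$ one would instead isolate, in the assumed algebraic model, the real roots of the degree-$O(D)$ univariate polynomial $f(\rho(t))$):

```python
import math

def ray_surface_hits(p, d, t_max):
    """Sorted parameter values t in (0, t_max] at which the ray
    rho(t) = p + t*d crosses the toy 'partitioning surface'
    f(x, y, z) = x^2 + y^2 + z^2 - 1 = 0 (a unit sphere standing in
    for Z(f)).  These are the crossing points p_1, ..., p_t of the
    text, already sorted along the ray."""
    # Substituting rho(t) into f yields a quadratic a*t^2 + b*t + c.
    a = sum(di * di for di in d)
    b = 2.0 * sum(pi * di for pi, di in zip(p, d))
    c = sum(pi * pi for pi in p) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []                      # the ray misses the surface
    r = math.sqrt(disc)
    roots = sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})
    # Keep only crossings ahead of the start and within the clipped range.
    return [t for t in roots if 0.0 < t <= t_max]

# A ray starting inside the cell crosses the boundary once; one
# starting outside may cross it twice.
hits_inside = ray_surface_hits((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 3.0)
hits_outside = ray_surface_hits((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0), 3.0)
```

The returned parameter values are exactly what is needed next: they partition the clipped query segment into the subsegments that are then processed cell by cell.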
The points in $P$ partition $\rho$ into a sequence of segments, each of which is a connected component of the intersection of $\rho$ with some cell. The first segment is $e_1 = qp_1$, the subsequent segments are $e_2 = p_1p_2$, $e_3 = p_2p_3,\ldots,e_t = p_{t-1}p_t$. We denote by $\tau_i$ the cell containing the $i$-th segment, for $i=1,\ldots,t$ (a cell can appear several times in this sequence). See Figure~\ref{cellbycell}. \begin{figure}[htb] \begin{center} \input{cellbycell.pdf_t} \caption{{\sf A two-dimensional rendering of the general structure of the ray-shooting mechanism.} } \label{cellbycell} \end{center} \end{figure} We now process the segments $e_i$ in order. For each segment $e_i$, we first perform a ray-shooting (or rather a segment-shooting) query in the structure for $\W_{\tau_i}$ with the segment $e_i$. As already mentioned (and as will be described in Section~\ref{sec:wide}), this step can be performed in $O(n^{1/2+\eps})$ time, with $O(n^{3/2+\eps})$ storage and preprocessing, for any $\eps > 0$. We then query with $e_i$ in the substructure recursively constructed for $\N_{\tau_i}$. If at least one of the two queries succeeds, i.e., outputs a point that lies on $e_i$, we report the point nearest to the starting point of $e_i$, and terminate the whole query. If both queries fail, we proceed to the next segment $e_{i+1}$ and repeat this step. If the mechanism fails for all the segments, we report that $\rho$ does not hit any triangle of $\T$. \paragraph{The case where $\rho\subset Z(f)$.} We use the cylindrical algebraic decomposition (CAD) of $Z(f)$ (see \cite{Col,SS2}), which has already been constructed for the earlier case.
One of its by-products is a \emph{stratification} of $Z(f)$, which is a decomposition of $Z(f)$ into pairwise disjoint relatively open patches of dimensions $0$, $1$, and $2$, called \emph{strata} (each stratum is a cell of the CAD), so that each of the two-dimensional strata is $xy$-monotone and its relative interior is free of any singularities of $Z(f)$, and $Z(f)$ is the union of the closures of these two-dimensional strata, excluding possible components of $Z(f)$ of dimension at most $1$, which we may ignore. We compute the intersection arcs $\gamma_\Delta := Z(f)\cap\Delta$, for $\Delta\in\T$, and distribute each arc amid the closures of the two-dimensional strata that it traverses. We then project the closure of each two-dimensional stratum $\sigma$ onto the $xy$-plane, including the portions of the arcs $\gamma_\Delta$ that the closure contains, and preprocess the resulting collection of $O(n)$ algebraic arcs, each of degree $O(D) = O(1)$, into a planar ray-shooting data structure, whose details are spelled out in Section~\ref{sec:onzf}.\footnote{This specific planar ray-shooting problem, amid constant-degree algebraic arcs, has not received full attention in the past, although several algorithms have been proposed, mostly with suboptimal solutions. Consult, e.g., Table 2 in Agarwal~\cite{Ag:rs}; see also \cite{AvKO-93,Kol}.} As we show there, we can answer a ray-shooting query in $O(n^{1/2+\eps})$ time, using $O(n^{3/2+\eps})$ storage, for any $\eps > 0$, where the constants of proportionality depend on $\eps$, as does the choice of $D$. The overall storage complexity, over all the (projected) strata of $Z(f)$, is thus $O(n^{3/2+\eps})$, and the overall query time, over all strata met by the query ray $\rho$, is $O(n^{1/2+\eps})$, for a larger constant of proportionality (that depends on $\eps$). Note what happens in the recursion when the query ray comes to lie on the zero set of the current partitioning polynomial.
When this happens, we solve the problem in this recursive instance using the (nonrecursive) procedure in Section~\ref{sec:onzf} and terminate the (current branch of the) recursion. Another way of saying this is that the leaves of the $D$-recursion tree represent either constant-size subproblems or subproblems on the zero set of the current partitioning polynomial, and the inner nodes represent subproblems of shooting within the partition cells. \paragraph{Analysis.} The correctness of the procedure is fairly easy to establish. Denote by $S(n)$ the maximum storage used by the structure for a set of at most $n$ triangles, and denote by $S_0(n)$ (resp., $S_1(n)$) the maximum storage used by the auxiliary structure for a set of at most $n$ wide triangles in a cell of the partition, as analyzed in Section~\ref{sec:wide} (resp., for a set of at most $n$ intersection arcs on $Z(f)$, which we process for planar ray-shooting in Section~\ref{sec:onzf}). Then $S(n)$ obeys the recurrence \begin{equation} \label{eq:storage} S(n) = O(D^3)S_0(n) + S_1(n) + O(D^3)S(n/D^2) , \end{equation} for $n > n_0$, and $S(n) = O(n)$ for $n\le n_0$, where $n_0 := c D^2$, for a suitable constant $c \ge 1$. We show, in the respective Sections~\ref{sec:wide} and~\ref{sec:onzf}, that $S_0(n) = O(n^{3/2+\eps})$ and $S_1(n) = O(n^{3/2+\eps})$, for any $\eps > 0$, where both constants of proportionality depend on $D$ and $\eps$, from which one can easily show that the solution of (\ref{eq:storage}) is $S(n) = O(n^{3/2+\eps})$, for a slightly larger, but still arbitrarily small $\eps>0$; to achieve this bound, we need to take $D$ to be $2^{\Theta(1/\eps)}$, as will follow from our analysis. 
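The convergence claim for recurrence~(\ref{eq:storage}) can also be checked numerically: with $S_0(m)=S_1(m)=m^{3/2+\eps}$, the overhead at recursion level $j$ is proportional to $n^{3/2+\eps}(a/D^{2\eps})^j$, a geometric series once $D^{2\eps}$ exceeds the branching constant $a$, which is what forces the choice $D=2^{\Theta(1/\eps)}$. The sketch below uses illustrative constants (the factor $a$ and the concrete values of $D$ and $\eps$ are stand-ins for the $D$-dependent constants of the text):

```python
def storage_unfolded(n, D, eps, a=1.0, max_levels=60):
    """Sum the nonrecursive terms of the storage recurrence
    S(m) = (overhead of order m^(3/2+eps)) + a*D^3 * S(m/D^2)
    over the recursion levels: level j has (a*D^3)^j subproblems of
    size n/D^(2j), so it contributes n^(3/2+eps) * (a/D^(2*eps))^j
    (times an illustrative constant 2 for the S0 + S1 terms)."""
    total = 0.0
    size, count = float(n), 1.0
    for _ in range(max_levels):
        if size < 1.0:
            break
        total += count * 2.0 * size ** (1.5 + eps)
        count *= a * D ** 3
        size /= D ** 2
    return total

# With D = 2^(Theta(1/eps)), so that D^(2*eps) > a, the total stays a
# constant multiple of n^(3/2+eps), uniformly in n:
eps, D = 0.25, 2 ** 8
ratios = [storage_unfolded(n, D, eps) / n ** (1.5 + eps)
          for n in (10 ** 6, 10 ** 9, 10 ** 12)]
```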
Regarding the bound on the preprocessing time $T(n)$, we obtain a similar recurrence as in~(\ref{eq:storage}), namely, \[ T(n) = O(n) + O(D^3)T_0(n) + T_1(n) + O(D^3)T(n/D^2) , \] where the non-recursive linear term is the time to compute the polynomial $f$, and $T_0(n)$, $T_1(n)$ are defined in an analogous manner as above, and have similar upper bounds as $S_0(n)$, $S_1(n)$ (see Sections~\ref{sec:wide} and~\ref{sec:onzf}). Similarly, denote by $Q(n)$ the maximum query time for a set of at most $n$ triangles, and denote by $Q_0(n)$ (resp., $Q_1(n)$) the maximum query time in the auxiliary structure for a set of at most $n$ wide triangles in a cell of the partition (resp., for a set of at most $n$ intersection arcs within $Z(f)$, when the query ray lies on $Z(f)$). Then $Q(n)$ obeys the recurrence \begin{equation} \label{eq:query} Q(n) = \max\left\{ O(D)Q_0(n) + O(D)Q(n/D^2) ,\; Q_1(n) \right\} , \end{equation} for $n > n_0$, and $Q(n) = O(n) = O(1)$ for $n\le n_0$. (This reflects the observation, made above, that the current branch of the recursion terminates when the query ray lies on the zero set of the current partitioning polynomial.) Again, the analysis in Sections~\ref{sec:wide} and~\ref{sec:onzf} shows that $Q_0(n) = Q_1(n) = O(n^{1/2+\eps})$, for any $\eps > 0$ (where the choice of $D$ depends on $\eps$, as above), from which one can easily show, using induction on $n$, that the solution of (\ref{eq:query}) is $Q(n) = O(n^{1/2+\eps})$, for a slightly larger but still arbitrarily small $\eps>0$. The main result of this section is therefore: \begin{theorem} \label{thm:trimain} Given a collection of $n$ triangles in three dimensions, and a prescribed parameter $\eps>0$, we can process the triangles into a data structure of size $O(n^{3/2+\eps})$, in time $O(n^{3/2+\eps})$, so that a ray shooting query amid these triangles can be answered in $O(n^{1/2+\eps})$ time. 
\end{theorem} \subsection{Ray shooting into wide triangles} \label{sec:wide} \paragraph{Preliminaries.} In this subsection we present and analyze our procedure for ray shooting in the set $\W_\tau$ of the wide triangles in a cell $\tau$ of the partition. We then present, in Section~\ref{sec:onzf}, a different approach that yields a procedure for ray shooting within $Z(f)$. Both procedures have the performance bounds stated in Theorem~\ref{thm:trimain}. The efficiency of our structures depends on $D$ being a constant, since the constants of proportionality depend polynomially (and rather poorly) on $D$. We thus focus now on ray shooting in a set of wide triangles within a three-dimensional cell $\tau$ of the partition. To appreciate the difficulty in solving this subproblem, we make the following observation. A simple-minded approach might be to replace each wide triangle $\Delta\in\W_\tau$ by the plane $h_\Delta$ supporting it. Denoting the set of these planes as $\H_\tau$, we could then preprocess $\H_\tau$ for ray-shooting queries, each of which specifies a query ray $\zeta$ and asks for the first intersection of $\zeta$ with the planes of $\H_\tau$. Using standard machinery (see, e.g.~\cite{Ag:rs}), this would result in an algorithm with the performance bounds that we want. However, this approach is problematic, since, even though $\Delta$ is wide in $\tau$, $h_\Delta$ could intersect $\tau$ in several connected components, some of which lie outside $\Delta$. See Figure~\ref{fig:horseshoe} for an illustration. In such cases, ray shooting amid the planes in $\H_\tau$ is not equivalent to ray shooting amid the triangles of $\W_\tau$, even for rays, or rather portions thereof, that are contained in $\tau$. 
\begin{figure}[htb] \begin{center} \input{horseshoe.pdf_t} \caption{{\sf Wide triangles cannot be replaced by their supporting planes for ray shooting within $\tau$.} } \label{fig:horseshoe} \end{center} \end{figure} Our solution is therefore more involved, and proceeds as follows. \paragraph{Canonical sets of wide triangles.} Consider first, for exposition sake, the case where the starting point of the shooting segment lies on $\bd\tau$ (the terminal point always lies on $\bd\tau$). As we will show, for each such segment query, the set of wide triangles in $\W_\tau$ that it intersects can be decomposed into a small collection of precomputed ``canonical'' subsets, where in each canonical set the wide triangles can be treated as planes (for that particular query segment). We show below that the overall size of these sets, over all possible segment queries, is $O(n^{3/2+\eps})$, for any $\eps > 0$. Actually, to prepare for the complementary case, where the starting point of the query segment lies inside $\tau$, we calibrate our algorithm, so that we control the storage that it uses, and consequently also the query time bound. To do so, we introduce a \emph{storage parameter} $s$, which can range between $n$ and $n^2$, as a second input to our procedure, and then require that the actual storage and preprocessing cost be both $O(s^{1+\eps})$, for any $\eps > 0$. This relaxed notion of storage offers some simplification in the analysis. (We will also allow larger values of $s$ when we discuss tradeoff between storage and query time, in Section~\ref{sec:trade}.) For each $\Delta\in\W_\tau$, let $\gamma_\Delta$ denote the intersection curve of $\Delta$ with $\bd\tau$. Note that $\gamma_\Delta$ does not have to be connected---it can have up to $O(D^2)$ connected components, by Harnack's curve theorem~\cite{Harnack} (applied on the plane containing $\Delta$). 
Note also that $\bd\tau$ does not have to be connected, so $\gamma_\Delta$ can have nonempty components on different connected components of $\bd\tau$, as well as several components on the same connected component of $\bd\tau$. We construct the locus $S_\tau$ of points on $\bd\tau$ that are either singular points of $Z(f)$ or points with $z$-vertical tangency. Since $D$ is constant, $S_\tau$ is a curve of constant degree (by B\'ezout's theorem, its degree is $O(D^2)$). We take a random sample $\R$ of $r_0$ triangles of $\W_\tau$, where the analysis dictates that we choose $r_0 = D^{\Theta(1/\eps)}$, for the arbitrarily small prescribed $\eps>0$. Since we have chosen $D$ to be $2^{\Theta(1/\eps)}$, the actual choice of $r_0$ is $2^{\Theta(1/\eps^2)}$. Let $\Gamma_\R = \{\gamma_\Delta \mid \Delta\in\R\}$, and let $\A_0 = \A(\Gamma_\R\cup\{S_\tau\})$ denote the arrangement of these curves within $\bd\tau$, together with $S_\tau$. By construction, each face of $\A_0$ is $xy$-monotone and does not cross any other branch of $Z(f)$ (at a singular point). We partition each face $\varphi$ of $\A_0$ into pseudo-trapezoids (called trapezoids for short), using a suitably adapted version of a two-dimensional vertical decomposition scheme. Let $\A_0^*$ denote the collection of these trapezoids on $\bd\tau$. The number of trapezoids in $\A_0^*$ is proportional to the complexity of $\A_0$, which is $O_D(r_0^2) = O(1)$ (we use the notation $O_D(\cdot)$ to indicate that the constant of proportionality depends on $D$, and recall that $r_0$ also depends on $D$). We assume that the trapezoids are relatively open. To cover all possible cases, we also include in the collection of trapezoids the relatively open subarcs of arcs in $\Gamma_\R$ that the partition generates, the vertical edges of the trapezoids, and the vertices of the partition, but, for exposition sake, we will only handle here the case of two-dimensional trapezoids. 
(The inclusion of lower-dimensional `trapezoids' is simpler to handle; it does not affect the essence of the forthcoming analysis, nor does it affect the asymptotic performance bounds.) Let $\psi_1$, $\psi_2$ be two distinct trapezoids of $\A_0^*$. Let $S(\psi_1,\psi_2)$ denote the collection of all segments $e$ such that one endpoint of $e$ lies in $\psi_1$, the other endpoint lies in $\psi_2$, and the relative interior of $e$ is fully contained in the open cell $\tau$. We can parameterize such a segment $e$ by four real parameters, so that two parameters specify the starting endpoint of $e$ (as a point in $\psi_1$, using, e.g., the $xy$-parameterization of the $xy$-monotone face containing $\psi_1$), and the other two parameters similarly specify the other endpoint. (Fewer parameters are needed when lower-dimensional trapezoids are involved.) Denote by $\F$ the corresponding (at most) four-dimensional parametric space. Since each of $\tau$, $\psi_1$, $\psi_2$ is of constant complexity, $S(\psi_1,\psi_2)$ is a semi-algebraic set in $\F$ of constant complexity. More specifically, we can write $S(\psi_1, \psi_2)$ as an (implicitly) quantified formula of the form \[ S(\psi_1, \psi_2) = \{ (p_1, p_2) \mid p_1 \in \psi_1, p_2 \in \psi_2, \; \mbox{and} \; p_1 p_2 \subset \tau \}, \] where $p_1 p_2$ denotes the line-segment connecting $p_1$ to $p_2$. Using the singly exponential quantifier-elimination algorithm in~\cite[Theorem 14.16]{BPR}, we can construct, in $O_D(1)$ time, a quantifier-free semi-algebraic representation of $S(\psi_1,\psi_2)$ of $O_D(1)$ complexity. Moreover, we can decompose $S(\psi_1, \psi_2)$ into its connected components, in $O_D(1)$ time as well. For each segment $e \in S(\psi_1,\psi_2)$, let $\T(e)$ denote the set of all wide triangles of $\W_\tau$ that $e$ crosses. We have the following technical lemma. 
\begin{lemma} \label{lem:cross} In the above notations, each connected component $C$ of $S(\psi_1,\psi_2)$ can be associated with a fixed set $\T_C$ of wide triangles of $\W_\tau$, none of which crosses $\psi_1\cup\psi_2$, so that, for each segment $e\in C$, $\T_C\subseteq \T(e)$, and each triangle in $\T(e)\setminus\T_C$ crosses either $\psi_1$ or $\psi_2$. \end{lemma} \medskip \noindent{\bf Proof.} Pick an arbitrary but fixed segment $e_0$ in $C$, and define $\T_C$ to consist of all the triangles in $\T(e_0)$ that do not cross $\psi_1\cup\psi_2$. See Figure~\ref{fig:tcee0} for an illustration. \begin{figure}[htb] \begin{center} \input{tcee0.pdf_t} \caption{{\sf The set $\T_C$ (consisting of the triangles depicted as black segments), and an illustration of the proof of Lemma~\ref{lem:cross}.}} \label{fig:tcee0} \end{center} \end{figure} Let $e$ be another segment in $C$. Since $C$ is connected, as a set in $\F$ (recall that this is a four-dimensional parametric space representing the segments), there exists a continuous path $\pi$ in $C$ that connects $e_0$ and $e$ (recall that each point on $\pi$ represents a segment with one endpoint on $\psi_1$ and the other on $\psi_2$, and $\pi$ represents a continuous variation of such a segment from $e_0$ to $e$). Let $\Delta$ be a triangle in $\T(e_0)$ that does not cross $\psi_1\cup\psi_2$ (that is, $\Delta\in\T_C$), and let $h_\Delta$ denote its supporting plane. As a segment $e'$ traverses $\pi$ from $e_0$ to $e$, the point $q_\Delta(e') := e'\cap h_\Delta$ is well defined and varies continuously in $\tau$, unless $e'$ comes to be contained in, or parallel to $h_\Delta$, a situation that, as we now argue, cannot arise. 
In what follows, we argue that the segment $e'$ could become detached from $\Delta$ only when either (i) the relative interior of $e'$ touches the boundary of $\Delta \cap \tau$, which cannot happen since then $e'$ would have to (partially) exit $\tau$ and meet its boundary, contrary to the assumption that $e'$ is fully contained in $\tau$ (recall that $\tau$ is open), or (ii) $\Delta \cap \tau$ touches an endpoint of $e'$, which again cannot happen because the endpoints of $e'$ lie on $\psi_1\cup\psi_2$, and $\Delta$ is assumed not to intersect $\psi_1\cup\psi_2$. More formally, we argue as follows. By assumption, $q_\Delta(e_0)$ lies in $\Delta$, and, as long as $q_\Delta(e')$ is defined (i.e., $e'$ intersects $\Delta$ at a unique point), $q_\Delta(e')$ cannot reach $\bd\Delta$ because the corresponding segment $e'$ is fully contained in the open cell $\tau$ and $\Delta$ is wide in $\tau$. (We may assume that $e'$ does not overlap $\Delta$, because, since $\Delta$ is wide, that would mean that both endpoints of $e'$ lie on $\Delta$, but then $\Delta$ crosses both $\psi_1$ and $\psi_2$, which we have assumed not to be the case.) We claim that $q_\Delta(e')$ must be nonempty throughout the motion of $e'$ along $\pi$, for otherwise $q_\Delta(e')$ would have to reach an endpoint of $e'$, which, by definition of $S(\psi_1,\psi_2)$, must lie on $\psi_1$ or on $\psi_2$. But then $\Delta$ would have to intersect either $\psi_1$ or $\psi_2$, contrary to assumption. It follows that $q_\Delta(e)$ also lies in $\Delta$, so $\Delta\in\T(e)$. This establishes the first assertion of the lemma. We next need to show that each triangle in $\T(e)\setminus\T_C$ must cross either $\psi_1$ or $\psi_2$, which is our second assertion. Let $\Delta$ be a triangle in $\T(e)\setminus\T_C$, and assume to the contrary that $\Delta$ does not cross $\psi_1\cup\psi_2$.
We run the preceding argument in reverse (moving from $e$ to $e_0$), and observe that, by assumption and by the same argument (and notations) as above, $q_\Delta(e')$ remains inside $e'$, for all intermediate segments $e'$ along the connecting path $\pi$, and does not reach $\bd{\Delta \cap \tau}$, so $\Delta\in \T(e_0)$ and thus also $\Delta\in \T_C$ (by definition of $\T_C$), contradicting our assumption. This establishes the second assertion, and thereby completes the proof. $\Box$ \medskip Lemma~\ref{lem:cross} and its proof show that, for each connected component $C$ of $S(\psi_1,\psi_2)$, the canonical set $\T_C$, of wide triangles that are crossed by all segments in $C$ and do not cross $\psi_1\cup\psi_2$, assigned to $C$, is unique and is independent of the choice of $e_0$. (This is because the sets $\T(e_0)$, for $e_0\in C$, differ from each other only in triangles that cross either $\psi_1$ or $\psi_2$.) The collection of all these sets $\T_C$, over all connected components $C$, and all pairs of trapezoids $(\psi_1,\psi_2)$, is part of the whole output collection of canonical sets over $\tau$; the rest of this collection is constructed recursively over the trapezoids $\psi$ of $\A_0^*$. \paragraph{The algorithm.} For each trapezoid $\psi$ of $\A_0^*$, the \emph{conflict list} $K_\psi$ of $\psi$ is the set of all wide triangles that cross $\psi$. By standard random sampling arguments~\cite{CEGS-91}, with high probability, the size of each conflict list is $O\left(\frac{n}{r_0}\log r_0\right)$, where the constant of proportionality depends on $D$. 
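The flavor of this conflict-list bound can be seen already in a one-dimensional toy analogue, chosen purely for illustration (points on a line instead of curves on $\bd\tau$, with the gaps between consecutive sample points playing the role of the trapezoids):

```python
import random

def max_conflict_size(n, r, seed=0):
    """Sample r of n random points in [0, 1]; the 'cells' are the gaps
    between consecutive sampled points, and the conflict list of a gap
    consists of the points falling strictly inside it.  Returns the
    largest conflict-list size, which the random-sampling theory
    predicts to be O((n/r) * log r) with high probability."""
    rng = random.Random(seed)
    points = sorted(rng.random() for _ in range(n))
    sample = sorted(rng.sample(points, r))
    boundaries = [0.0] + sample + [1.0]
    return max(sum(1 for p in points if lo < p < hi)
               for lo, hi in zip(boundaries, boundaries[1:]))
```

For instance, with $n = 20000$ and $r = 100$ the largest conflict list stays within a small constant factor of $(n/r)\log r$, far below the worst case $n - r$.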
Two extreme situations require special treatment: (i) $0$-dimensional trapezoids (vertices), where there is no bound on the number of triangles of $\W_\tau$ that can contain a vertex $\psi$, but it suffices to maintain just one of them in the structure, because if the starting (or the other) endpoint of a query segment lies at $\psi$, it does not matter which of these incident triangles we use; moreover, we do not recurse at $\psi$ (technically, it has no conflict list, as no triangle crosses it). (ii) Triangles that fully contain a two-dimensional trapezoid $\psi$; these triangles are contained in some planar component of $Z(f)$ (only the triangles that cross $\psi$ are processed recursively). We assume for simplicity that these triangles do not overlap one another. Since we are handling here rays that are not contained in $Z(f)$, such a ray $\rho$ can cross $Z(f)$ at only $O(D) = O(1)$ points, and it is easy to find, in $O(1)$ time, these crossing points, and then check, in $O(\log n)$ time (with linear storage), whether any of these points belongs to a triangle contained in $Z(f)$. (The case where the triangles are overlapping is also easy to handle. The performance bounds deteriorate, but are still within the overall bounds that we derive.) For every pair of trapezoids $\psi_1$, $\psi_2$, we compute $S(\psi_1,\psi_2)$ and decompose it into its connected components. We pick some arbitrary but fixed segment $e_0$ from each component $C$, compute the set $\T(e_0)$ of the wide triangles that cross $e_0$, and remove from it any triangle that crosses $\psi_1\cup\psi_2$, thereby obtaining the set $\T_C$. All this takes $O_D(r_0^4 n) = O_D(n)$ time, and the overall size of the produced canonical sets is also $O_D(n)$. Let $s$ be the storage parameter associated with the problem, as defined earlier, and recall that we require that $n\le s \le n^2$.
Each canonical set $\T_C$ is preprocessed into a data structure that supports ray shooting in the set of \emph{planes} $\H_C = \{h_\Delta \mid \Delta\in\T_C\}$, where $h_\Delta$ is the plane supporting $\Delta$. We construct these structures so that they use $O(s^{1+\eps})$ storage (and preprocessing), for any $\eps>0$, and a query takes $O(n\;{\rm polylog}(n)/s^{1/3})$ time (see, e.g.,~\cite{Ag:rs}). We now process recursively each conflict list $K_\psi$, over all trapezoids $\psi$ of $\A_0^*$. Each recursive subproblem uses the same parameter $r_0$, but now the storage parameter that we allocate to each subproblem is only $s/r_0^2$. We keep recursing until we reach conflict lists of size close to $n^2/s$. More precisely, after $j$ levels of recursion, we get a total of at most $(c_0 r_0^2)^j = c_0^jr_0^{2j}$ subproblems, each involving at most $\left(\frac{c_1\log r_0}{r_0}\right)^j n$ wide triangles, for some constants $c_0$, $c_1$ that depend on $D$, and thus on $\eps$ (specifically, $c_1$ depends on $c_0$ and $c_0 = O(D^2)$ by B\'ezout's theorem), but are considerably smaller than $r_0$, which, as already mentioned, we take to be $D^{\Theta(1/\eps)}$. We stop the recursion at the first level $j^*$ at which $(c_1r_0\log r_0)^{j^*} \ge s/n$. As a result, we have ${r_0}^{j^{*}} \le s/n$, and we get $c_0^{j^{*}} r_0^{2j^{*}} = O(s^2/n^{2-\eps})$ subproblems, for any $\eps > 0$, where the choice of $D$ (and therefore also of $c_0$, $c_1$ and $r_0$) depends, as above, on $\eps$. Each of these subproblems involves at most \[ \left(\frac{c_1\log r_0}{r_0}\right)^{j^*} n = \left(\frac{(c_1\log r_0)^2}{c_1r_0\log r_0 }\right)^{j^*} n \le (c_1\log r_0)^{2j^*}\cdot \frac{n^2}{s} = \frac{n^{2+\eps}}{s} \] triangles, for any $\eps>0$. For this estimate to hold, we choose $D = 2^{\Theta(1/\eps)}$.
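The bookkeeping in this calibration is easy to verify numerically. In the sketch below the constants $c_0$, $c_1$ and the sample size $r_0$ are illustrative stand-ins (the actual values depend on $D$):

```python
import math

def recursion_stats(n, s, r0, c0, c1):
    """First level j* with (c1*r0*log r0)^j >= s/n, together with the
    number of subproblems, (c0*r0^2)^j*, and the subproblem size,
    ((c1*log r0)/r0)^j* * n, at that level.  The constants c0, c1 are
    illustrative stand-ins for the D-dependent ones in the text."""
    growth = c1 * r0 * math.log(r0)
    jstar = math.ceil(math.log(s / n) / math.log(growth))
    count = (c0 * r0 ** 2) ** jstar
    size = (c1 * math.log(r0) / r0) ** jstar * n
    return jstar, count, size

# n <= s <= n^2; with these values the subproblem size is within a
# factor (c1*log r0)^(2*j*) of n^2/s, as in the displayed estimate:
n, s, r0, c0, c1 = 10 ** 6, 10 ** 10, 2 ** 10, 4, 2
jstar, count, size = recursion_stats(n, s, r0, c0, c1)
```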
Hence the overall size of the inputs, as well as of the canonical sets, at all the subproblems throughout the recursion, is ${\displaystyle O\left(\frac{s^2}{n^{2-\eps}}\right) \cdot \frac{n^{2+\eps}}{s} = O(sn^{2\eps})} = O(s^{1+\eps})$, for a slightly larger $\eps > 0$. Note that the canonical sets that we encounter when querying with a fixed segment $e$ are not necessarily pairwise disjoint. This is because the sets $K_{\psi_1}$ and $K_{\psi_2}$ are not necessarily disjoint (they are disjoint from $\T_C$, though). This does not pose a problem for ray shooting queries, but will be problematic for \emph{counting queries}; see Section~\ref{sec:seg}. At the bottom of the recursion, each subproblem contains at most $n^{2+\eps}/s$ wide triangles, which we merely store in the structure. As just calculated, the overall storage that this requires is $O(s^{1+\eps})$, for a slightly larger $\eps$, as above. We obtain the following recurrence for the overall storage $S(N_W,s_W)$ for the structure constructed on $N_W$ wide triangles, where $s_W$ denotes the storage parameter allocated to the structure (at the root $N_W = n$, $s_W = s$). \[ S(N_W,s_W) = \left\{ \begin{array}{ll} O_D(r_0^4 s_W^{1+\eps}) + c_0r_0^2 S\left(\frac{c_1N_W \log r_0}{r_0},\; \frac{s_W}{r_0^2}\right) & \mbox{for $N_W \ge n^{2+\eps}/s$,}\\[1mm] O(N_W) &\mbox{for $N_W < n^{2+\eps}/s$}. \end{array} \right\} \] (The overhead term is actually $O_D(r_0^4 N_W + r_0^4 s_W^{1+\eps})$, but the second term dominates.) Throughout the recursion we have $N_W \le s_W \le N_W^2$. Indeed, starting with $n$ and $s$, after $j$ recursive levels we have $N_W \le \left(\frac{c_1\log r_0}{r_0}\right)^{j} n$ and $s_W = s/r_0^{2j}$. Hence the right inequality continues to hold (for $s \le n^2$), and the left inequality holds as long as $\left(\frac{c_1\log r_0}{r_0}\right)^{j} n \le s/r_0^{2j}$, or $(c_1 r_0\log r_0)^{j} \le s/n$, which indeed holds up to the terminal level $j^*$, by construction.
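The bookkeeping of this recursion can be checked numerically. The sketch below uses purely illustrative values for $n$, $s$, $r_0$ and $c_1$ (not the actual constants of the construction), computes the terminal level $j^*$, and verifies the invariant $N_W \le s_W \le N_W^2$ at every level before $j^*$, as well as the fact that the terminal subproblem size is $n^2/s$ up to a polylogarithmic factor.

```python
import math

def recursion_levels(n, s, r0, c1):
    """Levels of the recursion on the wide triangles: at level j each
    subproblem has at most N_j = (c1*log(r0)/r0)^j * n triangles and storage
    parameter s_j = s/r0^(2j); we stop at the first level j* at which
    (c1*r0*log(r0))^j* >= s/n."""
    levels, j = [], 0
    while (c1 * r0 * math.log(r0)) ** j < s / n:
        levels.append(((c1 * math.log(r0) / r0) ** j * n, s / r0 ** (2 * j)))
        j += 1
    return levels, j                        # j is now the terminal level j*

n, s, r0, c1 = 10**6, 10**9, 100.0, 2.0     # illustrative values, with s = n^{3/2}
levels, jstar = recursion_levels(n, s, r0, c1)

# The invariant N_W <= s_W <= N_W^2 holds at every level before j*.
assert all(N <= sw <= N * N for N, sw in levels)

# For these values, the terminal subproblem size lies between n^2/s and
# (c1*log(r0))^(2j*) * n^2/s, i.e. it is n^2/s up to a polylog factor.
N_star = (c1 * math.log(r0) / r0) ** jstar * n
assert n * n / s <= N_star <= (c1 * math.log(r0)) ** (2 * jstar) * n * n / s
```

With these (hypothetical) values the recursion terminates at $j^* = 2$; changing $r_0$ or $s$ moves the terminal level, but the asserted relations persist as long as $n \le s \le n^2$.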
Unfolding the first recurrence up to the terminal level $j^*$, where $N_W < n^{2+\eps}/s$, the sum of the nonrecursive overhead terms $O_D(r_0^4 s_W^{1+\eps})$, over all nodes at a fixed level $j$, is \[ c_0^{j} r_0^{2j} \cdot O\left( \frac{s_W^{1+\eps}}{r_0^{2j(1+\eps)}} \right) = O\left(\frac{c_0^j}{r_0^{2j\eps}} s_W^{1+\eps}\right) = O\left( s_W^{1+\eps} \right) , \] by the choice of $r_0$. Hence, starting the recurrence at $(N_W,s_W) = (n,s)$, the overall contribution of the overhead terms (over the logarithmically many levels) is $O(s^{1+\eps})$, for a slightly larger $\eps$. At the bottom of the recurrence, we have, as already noted, $O(s^2/n^{2-\eps})$ subproblems, each with at most $O(n^{2+\eps}/s)$ triangles, so the sum of the terms $N_W$ at the bottom of the recurrence is also $O(s^{1+\eps})$. In other words, the overall storage used by the data structure is $O(s^{1+\eps})$. Using similar considerations, one can show that the overall preprocessing time is $O(s^{1+\eps})$ as well, since the time obeys essentially the same recurrence. \paragraph{Answering a query.} To perform a query with a segment $e$ that starts at a point $a$ (that lies anywhere inside $\tau$), we extend $e$ from $a$ backwards, find the first intersection point $a'$ of the resulting backward ray with $\bd\tau$, and denote by $e'$ the segment that starts at $a'$ and contains $e$. See Figure~\ref{inside} for an illustration. This takes $O_D(1)$ time. This step is vacuous when $e$ starts on $\bd\tau$, in which case we have $e' = e$. \begin{figure}[htb] \begin{center} \input{inside.pdf_t} \caption{\sf{Segment shooting from inside the cell $\tau$: Extending the segment backwards and the resulting canonical set of triangles.} } \label{inside} \end{center} \end{figure} We find the pair of trapezoids $\psi_1$, $\psi_2$ that contain the endpoints of $e'$, find the connected component $C\subseteq S(\psi_1,\psi_2)$ that contains $e'$, and retrieve the canonical set $\T_C$.
We then perform segment shooting along $e$ from $a$ in the structure constructed for $\H_C$, and then continue recursively in the subproblems for $K_{\psi_1}$ and $K_{\psi_2}$. We output the triangle that $e$ hits at a point nearest to $a$, or, if no such point is produced, report that $e$ does not hit any wide triangle inside $\tau$. In case both endpoints of $e'$ lie in the same trapezoid $\psi$ (that is, $\psi_1 = \psi_2$), we set $\T_C$ to be empty at this step (it is easy to verify that this indeed must be the case), and then continue processing $e'$ (and thus $e$) in the recursion on $K_\psi$. The correctness of the procedure follows from the fact that $e'$ intersects all the triangles of $\T_C$, and thus replacing these triangles by their supporting planes cannot produce any new (false) intersection of any of these triangles with $e$, and any other wide triangle that $e$ hits must belong to $K_{\psi_1}\cup K_{\psi_2}$. The query time $Q(N_W,s_W)$ satisfies the recurrence \[ Q(N_W,s_W) = \left\{ \begin{array}{ll} O_D(1) + O\left(\frac{N_W {\rm polylog}(N_W)} {s_W^{1/3}} \right) + 2Q\left(\frac{c_1 N_W \log r_0}{r_0},\; \frac{s_W}{r_0^2} \right) & \mbox{for $N_W \ge n^{2+\eps}/s$,}\\[1mm] O(N_W) & \mbox{for $N_W < n^{2+\eps}/s$} . \end{array} \right\} \] Unfolding the first recurrence, we see that when we pass from some recursive level to the next one, we get two descendant subproblems from each recursive instance, and the term $N_W {\rm polylog}(N_W)/s_W^{1/3}$ is replaced in each of them by the (upper bound) term \[ \frac{ \frac{c_1 N_W \log r_0}{r_0} }{ \left(\frac{s_W}{r_0^2} \right)^{1/3} } \cdot {\rm polylog}(N_W) = \frac{c_1 \log r_0}{r_0^{1/3}} \cdot \frac{N_W {\rm polylog}(N_W) }{s_W^{1/3}} . 
\] Hence the overall bound for the nonrecursive overhead terms in the unfolding, starting from $(N_W,s_W) = (n,s)$, is at most \[ O\left( \sum_{j\ge 0} \left( \frac{2c_1 \log r_0}{r_0^{1/3}} \right)^{j} \right) \cdot \frac{n \ {\rm polylog}(n)}{s^{1/3}} = O\left( \frac{n \ {\rm polylog}(n)}{s^{1/3}} \right) , \] provided that $r_0$ is sufficiently large. Adding the cost at the $2^{j^*}$ subproblems at the bottom level $j^*$ of the recursion, where the cost of each subproblem is at most $n^{2+\eps}/s$, gives an overall bound for the query time of \begin{equation} \label{eq:qns} Q(n,s) = O\left( \frac{n \ {\rm polylog}(n) }{s^{1/3}} + \frac{n^{2+\eps}}{s} \right) . \end{equation} Starting with $s=n^{3/2}$, the query time is $O(n^{1/2+\eps})$. We thus obtain \begin{proposition} \label{prop:inside} For a (bounded) cell $\tau$ of the polynomial partition, and a set $\W$ of $n$ wide triangles in $\tau$, one can construct a data structure of size and preprocessing cost $O(n^{3/2+\eps})$, so that a segment-shooting query within $\tau$, from any starting point, can be answered in $O(n^{1/2+\eps})$ time, for any $\eps > 0$. \end{proposition} The case where the query ray is contained in $Z(f)$ is discussed in detail in the following Section~\ref{sec:onzf}, culminating in Proposition~\ref{prop:onzf}, with the same performance bounds. Thus, combined with the results in this subsection, Theorem~\ref{thm:trimain} follows. \subsection{Ray shooting within $Z(f)$} \label{sec:onzf} We now consider the case where the (line supporting the) query ray is contained in the zero set $Z(f)$ of $f$. 
We present our result in a more general form, in which we are given a collection $\Gamma$ of $n$ constant-degree algebraic arcs in the plane,\footnote{Recall that we project each stratum of $Z(f)$ onto the $xy$-plane.} and preprocess it into a data structure of size $O(n^{3/2+\eps})$, which can be constructed in $O(n^{3/2+\eps})$ time, and which supports ray-shooting queries in time $O(n^{1/2+\eps})$ per query, for any $\eps > 0$. \paragraph{Ray shooting amid arcs in the plane.} Let $\Gamma$ be a set of $n$ algebraic arcs of constant degree in the plane. We may assume, after breaking each arc into a constant number of subarcs, if needed, that each arc is $x$-monotone, has a smooth relative interior, and is either convex or concave. For concreteness, and with no loss of generality, we assume in what follows that all the arcs of $\Gamma$ are convex. That is, the tangent directions turn counterclockwise along each arc as we traverse it from left to right. We present the solution in four steps. We first discuss the problem of detecting an intersection between a query line and the input arcs. We then extend this machinery to detecting intersection with a query ray, and finally to detecting intersection with a query segment. Once we have such a procedure, we can use the parametric-search technique of Agarwal and Matou\v{s}ek~\cite{AM} (this is our fourth step) to perform ray shooting, with similar performance bounds. The reason for this gradual presentation of the technique is that each step uses the structure from the previous step as a substructure. We remind the reader, again, that the problem of ray shooting in the plane amid a general collection of constant-degree algebraic arcs, which is the problem considered in this section, does not seem to have a solution with sharp performance bounds; see Table 2 in \cite{Ag:rs} and~\cite{AvKO-93,Kol}.
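The preliminary decomposition into $x$-monotone convex or concave subarcs can be illustrated on a circle, which splits at its two $x$-extreme points into a concave upper subarc and a convex lower one. The following sketch (a hypothetical sampled representation, used only to verify the claimed properties numerically) performs this split and checks $x$-monotonicity and convexity via chord slopes.

```python
import math

def split_circle(cx, cy, r, samples=360):
    """Split the circle of radius r centered at (cx, cy) into two x-monotone
    subarcs, cut at the x-extreme points (cx - r, cy) and (cx + r, cy):
    the upper semicircle (concave) and the lower one (convex), both sampled
    in order of increasing x."""
    xs = [cx + r * math.cos(math.pi * k / samples) for k in range(samples, -1, -1)]
    upper = [(x, cy + math.sqrt(max(r * r - (x - cx) ** 2, 0.0))) for x in xs]
    lower = [(x, cy - math.sqrt(max(r * r - (x - cx) ** 2, 0.0))) for x in xs]
    return upper, lower

def x_monotone(arc):
    return all(a[0] < b[0] for a, b in zip(arc, arc[1:]))

def chord_slopes(arc):
    return [(b[1] - a[1]) / (b[0] - a[0]) for a, b in zip(arc, arc[1:])]

upper, lower = split_circle(0.0, 0.0, 1.0)
assert x_monotone(upper) and x_monotone(lower)

# Convexity of the lower subarc: chord slopes strictly increase from left to
# right; concavity of the upper subarc: they strictly decrease.
sl, su = chord_slopes(lower), chord_slopes(upper)
assert all(a < b for a, b in zip(sl, sl[1:]))
assert all(a > b for a, b in zip(su, su[1:]))
```

For a general constant-degree arc the cut points would instead be the $x$-extrema and inflection points of its Zariski closure, of which there are only $O_d(1)$.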
\subsubsection{Detecting line intersection with the arcs} \label{sec:line} Our approach is to transform the line-intersection problem to a planar point-location problem, by mapping the lines to points and the arcs $\gamma \in \Gamma$ to semi-algebraic sets (whose complexity depends on the complexity of $\gamma$). Our mapping is based on \emph{quantifier elimination}, and proceeds as follows.\footnote{There is another, more direct approach to solving this problem, which is easier to visualize but involves several levels of range searching structures. The running time of this approach, which we do not detail here, is asymptotically the same as the bound that we get here.} Fix an arc $\gamma \in \Gamma$, and recall that it is a constant-degree algebraic arc in the plane, and, by assumption, $\gamma$ is convex, smooth and $x$-monotone. Consider the smallest affine variety (curve) $V_\gamma$ that contains $\gamma$, known as the \emph{Zariski closure} of $\gamma$~\cite{CLO}, and let $F(x,y)$ be the bivariate polynomial whose zero set is $V_\gamma$. $F$ is a polynomial of constant degree, which we denote by $d$. Consider a line $\ell$, given by the equation $y = ax + b$, where $a, b$ are real coefficients. (Vertical lines are easier to handle, and we ignore this special case in what follows.) Then $\ell$ intersects $\gamma$ if and only if there exists $x\in\reals$ such that $(x,ax+b)\in\gamma$. This can be expressed as a quantified Boolean algebraic predicate of constant complexity (i.e., involving a constant number of variables, and a constant number of polynomial equalities and inequalities of constant degrees); one of the clauses of the predicate is $F(x,ax+b) = 0$ and the others restrict $(x,ax+b)$ to lie in $\gamma$.
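As a concrete toy instance of this mapping (for illustration only), take the arc $\gamma = \{(x,x^2) \mid 0 \le x \le 1\}$, so that $F(x,y) = y - x^2$ and the clause $F(x,ax+b)=0$ becomes $x^2 - ax - b = 0$. Eliminating the quantified variable $x$ by hand yields a quantifier-free predicate in $(a,b)$ alone, i.e., an explicit description of the set $G_\gamma$, which the sketch below validates against direct root computation.

```python
import math, itertools

def G_gamma(a, b):
    """Quantifier-free membership test for the region G in the ab-plane:
    the line y = a*x + b meets the arc {(x, x^2) : 0 <= x <= 1} iff
    g(x) = x^2 - a*x - b has a root in [0,1].  Since g opens upward, this
    holds iff g changes sign on [0,1], or both endpoint values are positive
    and the vertex x = a/2 lies in [0,1] with g(a/2) <= 0."""
    g = lambda x: x * x - a * x - b
    if g(0.0) * g(1.0) <= 0.0:
        return True
    return g(0.0) > 0.0 and g(1.0) > 0.0 and 0.0 <= a / 2.0 <= 1.0 and g(a / 2.0) <= 0.0

def line_meets_arc(a, b):
    """Reference test: solve x^2 - a*x - b = 0 directly."""
    disc = a * a + 4.0 * b
    if disc < 0.0:
        return False
    return any(0.0 <= (a + s * math.sqrt(disc)) / 2.0 <= 1.0 for s in (-1.0, 1.0))

grid = [k / 2.0 for k in range(-4, 5)]          # a, b in {-2, -1.5, ..., 2}
assert all(G_gamma(a, b) == line_meets_arc(a, b)
           for a, b in itertools.product(grid, grid))
```

For arcs of higher degree the same elimination is done mechanically by the algorithm of~\cite[Theorem 14.16]{BPR}, producing a predicate of $O_d(1)$ complexity rather than this hand-derived one.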
Using the singly exponential quantifier-elimination algorithm in~\cite[Theorem 14.16]{BPR} (also used earlier in Section~\ref{sec:wide}), we can construct, in $O_d(1)$ time, a quantifier-free semi-algebraic set $G := G_{\gamma}$ in the $ab$-parametric plane, whose overall complexity is $O_d(1)$ as well, such that the quantified predicate is true if and only if $(a,b) \in G$; see, e.g.,~\cite{AM-94} for a concrete construction of such a set for the problem of intersection detection between lines and spheres in $\reals^3$. We have thus mapped the setting of our problem to a planar point location problem amid a collection $\G$ of $n$ semi-algebraic regions of constant complexity. Using standard techniques based on $\eps$-cuttings (see, e.g.,~\cite{CF-90} for such constructions), one can construct, using overall storage and preprocessing time of $O(n^{3/2+\eps})$, for any $\eps > 0$, a data structure that supports point-location queries in the arrangement $\A(\G)$ of these regions in time $O(n^{1/2})$ per query;\footnote{We comment that we need to exploit our model of computation in which root extraction, and manipulations of these roots, can be performed in constant time, for univariate polynomials of constant degree.} note that the factor $n^\eps$ does not appear in the query time bound, but only in the preprocessing time bound. Briefly, to do so, we construct a $(1/\sqrt{n})$-cutting of $\A(\G)$ in $O(n^{3/2})$ time. This is a partition of the plane into $O(n)$ pseudo-trapezoids, each crossed by $O(n^{1/2})$ boundaries of the regions in $\G$ (see \cite{CF-90}). Each pseudo-trapezoid (trapezoid for short) has a conflict list $\G_\tau$ of the set of regions whose boundaries cross $\tau$, and another list $\G^{(0)}_\tau$ of regions that fully contain $\tau$. The lists $\G_\tau$ are stored explicitly at the respective regions $\tau$, as their overall size is $O(n^{3/2})$.
The overall size of the lists $\G^{(0)}_\tau$ is $O(n^2)$, so we store them implicitly in a persistent data structure, based on some tour of the trapezoids of the cutting, using the fact that $\G^{(0)}_\tau$ changes by only $O(n^{1/2})$ regions as we pass from $\tau$ to an adjacent trapezoid $\tau'$ (we gloss here over certain technical issues involved in the construction of such a tour). For the problem of detecting line intersection, it suffices to test whether the query point $(a,b)$ (representing a query line $\ell$) is contained in any of the input semi-algebraic regions. To do so, we locate $(a,b)$ in the cutting. If its containing trapezoid $\tau$ has a nonempty list $\G^{(0)}_\tau$, we report that $\ell$ intersects an arc of $\Gamma$ and stop. Otherwise we go over the conflict list $\G_\tau$ and test explicitly whether $\ell$ intersects any of the associated arcs of $\Gamma$, in $O(n^{1/2})$ time. Moreover, this point-location machinery can also return a compact representation of the set of the arcs from $\Gamma$ that intersect $\ell$, as a disjoint union of $O(n^{1/2})$ precomputed canonical subsets of $\Gamma$ (namely, the set $\G^{(0)}_\tau$ of the trapezoid $\tau$ containing $(a,b)$, and the $O(n^{1/2})$ singleton sets corresponding to those regions in $\G_\tau$ that contain $(a,b)$). This latter property is useful for the extensions of this procedure for detecting intersections of rays or segments with the given arcs, described below. \subsubsection{Detecting ray and segment intersections with the arcs} \label{sec:ray} We next augment the data structure so that it can test whether a query ray $\rho$ intersects any arc in $\Gamma$. A similar, somewhat more involved approach, which is spelled out later in this section, allows us also to test whether a query segment $s$ intersects any arc in $\Gamma$.
Using the parametric-search machinery of Agarwal and Matou\v{s}ek~\cite{AM}, this latter structure allows us to answer ray shooting queries (finding the first arc of $\Gamma$ hit by a query ray $\rho$) with similar performance bounds. We comment that in principle we could have simply used an extended version of the quantifier-elimination technique used in the previous subsection. However, such an extension requires more parameters to represent a ray (three parameters) or a segment (four parameters). As a consequence, the space in which we need to perform the search becomes three- or four-dimensional, and the performance of the solution deteriorates. We therefore use a different, more explicit approach to these extended versions. We also comment that the analysis presented next only applies to nonvertical rays and segments. Handling vertical rays is much simpler, and amounts, with some careful modifications, to point location of the apex of the ray in the arrangement of the given arcs, which can be implemented with standard techniques, with performance bounds that match the ones that we obtain for the general problem. We therefore assume in what follows that the query rays and segments are nonvertical. So let $\rho$ be a query ray, let $q$ be the apex of $\rho$, and let $\ell$ be the line supporting $\rho$. We assume, without loss of generality, that $\rho$ is directed to the right (for rays directed to the left, a symmetric set of conditions applies). We have: \begin{lemma} \label{lem:ray-arc} Let $\rho$, $q=(q_x,q_y)$ and $\ell$ be as above. Then $\rho$ intersects a convex $x$-monotone arc $\gamma$, oriented from left to right, if and only if $\ell$ intersects $\gamma$, and one of the following conditions holds, where $u$ and $v$ are the left and right endpoints of $\gamma$, and where $a$ is the slope of $\ell$. \begin{description} \item[(a)] $q$ lies to the left of $u$. See Figure~\ref{rho-cross}(a).
\item[(b)] $q$ lies between $u$ and $v$ and below $\gamma$, and the tangent direction to $\gamma$ at $q_x$ is smaller than $a$. See Figure~\ref{rho-cross}(b). \item[(c)] $q$ lies between $u$ and $v$ and above $\gamma$, and $v$ lies above $\ell$. See Figure~\ref{rho-cross}(c). \end{description} \end{lemma} \begin{figure}[htb] \begin{center} \input{ray-arc.pdf_t} \caption{\sf{Scenarios where a ray $\rho$ intersects a convex $x$-monotone arc $\gamma$: (a) $q$ lies to the left of $u$. (b) $q$ lies between $u$ and $v$ and below $\gamma$, and the tangent direction to $\gamma$ at $q_x$ is smaller than $a$. (c) $q$ lies between $u$ and $v$ and above $\gamma$, and $v$ lies above $\ell$.}} \label{rho-cross} \end{center} \end{figure} \noindent{\bf Proof.} The `only if' part of the lemma is simple, and we only consider the `if' part. We are given that $\ell$ intersects $\gamma$. If $q$ lies to the left of $u$ then clearly $\rho$ also intersects $\gamma$ (this is Case (a), where we actually have $\ell \cap \gamma = \rho \cap \gamma$), and if $q$ lies to the right of $v$ then clearly $\rho$ does not intersect $\gamma$. Assume then that $q$ lies between $u$ and $v$. If $q$ lies above $\gamma$, the ray intersects $\gamma$ if and only if $v$ lies above $\ell$, as is easily checked, which is Case (c). If $q$ lies below $\gamma$ then, given that $\ell$ intersects $\gamma$, $\rho$ intersects $\gamma$ if and only if $q$ lies to the left of the left intersection point in $\ell\cap\gamma$, and this happens if and only if the slope of the tangent to $\gamma$ at $q_x$ is smaller than the slope of $\ell$. This is Case (b), and thus the proof is completed. $\Box$ \medskip Our data structure is constructed by taking the structure of Section~\ref{sec:line} and augmenting it with additional levels, in three different ways, each testing for one of the conditions (a), (b), (c) in Lemma~\ref{lem:ray-arc}. 
Testing for Condition (a) is easily done with a single additional level based on a one-dimensional search tree on the left endpoints of the arcs. Testing for Condition (b) requires three more levels. The first level is a segment tree on the $x$-spans of the arcs, which we search with $q_x$, to retrieve all the arcs whose $x$-span contains $q_x$, as the disjoint union of $O(\log n)$ canonical sets. The second level extracts those arcs that lie above $q$. As in the line-intersection structure given in Section~\ref{sec:line} (except that the plane in which we perform the search is the actual $xy$-plane and not the parametric $ab$-plane), this level requires $O(n^{3/2+\eps})$ storage and preprocessing (where $n$ is the size of the present canonical set), and answers a query in $O(n^{1/2})$ time. In the third level we consider the tangent directions of the arcs of $\Gamma$ as partial functions of $x$, and construct their lower envelope (see~\cite[Corollary 6.2]{SA}). We can then test whether $(q_x,a)$ lies above the envelope in logarithmic time. Finally, testing for Condition (c) also requires three more levels, where the first two levels are as in case (b), and the third level tests whether there is any right endpoint $v$ of an arc in the present canonical set that lies above $\ell$, by constructing, in nearly linear time, the upper convex hull of the right endpoints and by testing, in logarithmic time, whether $\ell$ does not pass fully above the hull (see, e.g.,~\cite{DK}). It is easily verified that the overall data structure has the desired performance bounds, namely, $O(n^{3/2+\eps})$ storage and preprocessing cost, and $O(n^{1/2+\eps})$ query time, for any $\eps > 0$. \paragraph{Detecting segment intersection.} The same mechanism works when $\rho$ is a segment, rather than a ray, except that the conditions for intersection with an arc of $\Gamma$ are more involved.
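The intersection criteria of Lemma~\ref{lem:ray-arc} can be validated numerically on a concrete convex arc. The sketch below uses the hypothetical arc $\gamma = \{(x,x^2) \mid -1\le x\le 1\}$ (whose tangent slope at $x$ is $2x$), and compares the lemma's conditions against direct root computation, for random rightward rays.

```python
import math, random

UX, VX = -1.0, 1.0                      # gamma = {(x, x^2) : UX <= x <= VX}

def roots_on_arc(a, b):
    """x-coordinates of the intersections of the line y = a*x + b with gamma,
    i.e. real roots of x^2 - a*x - b lying in [UX, VX]."""
    disc = a * a + 4.0 * b
    if disc < 0.0:
        return []
    return [r for s in (-1.0, 1.0)
            for r in [(a + s * math.sqrt(disc)) / 2.0] if UX <= r <= VX]

def ray_hits_by_lemma(qx, qy, a):
    """Conditions (a)-(c) of the lemma for the rightward ray with apex
    q = (qx, qy) and slope a; b is determined by q lying on the line."""
    b = qy - a * qx
    if not roots_on_arc(a, b):          # the supporting line must meet gamma
        return False
    if qx < UX:                                           # (a) q left of u
        return True
    if UX <= qx <= VX and qy < qx * qx and 2.0 * qx < a:  # (b) below, tangent < a
        return True
    if UX <= qx <= VX and qy > qx * qx and VX * VX > a * VX + b:  # (c) above, v above line
        return True
    return False

def ray_hits_directly(qx, qy, a):
    b = qy - a * qx
    return any(r >= qx for r in roots_on_arc(a, b))

rng = random.Random(1)
for _ in range(2000):
    qx, qy, a = rng.uniform(-2, 2), rng.uniform(-2, 2), rng.uniform(-3, 3)
    assert ray_hits_by_lemma(qx, qy, a) == ray_hits_directly(qx, qy, a)
```

(Degenerate configurations, such as the apex lying exactly on $\gamma$, occur with probability zero here and are handled in the structure by standard perturbation arguments.)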
To simplify the presentation, we reduce the problem to the ray-intersection detection problem just treated, thereby avoiding explicit enumeration of quite a few subcases that need to be tested. We associate with each arc $\gamma\in\Gamma$ the semi-unbounded region \[ \kappa = \kappa(\gamma) = \{ (x,y) \mid u_x \le x \le v_x \text{ and $(x,y)$ lies strictly above $\gamma$} \} . \] That is, $\kappa$ is bounded on the left by the upward vertical ray emanating from $u$, bounded on the right by the upward vertical ray emanating from $v$, and bounded from below by $\gamma$; see Figure~\ref{seg-cross}. Then we have the following extension of Lemma~\ref{lem:ray-arc}: \begin{lemma} \label{lem:seg-arc} Let $\gamma$, $u$, $v$, and $\kappa$ be as above. Let $s$ be a segment, with left endpoint $p=(p_x,p_y)$, right endpoint $q=(q_x,q_y)$, and slope $a$, and let $\ell$ be the line supporting $s$. Let $\rho_p$ be the ray that starts at $p$ and contains $s$ (so $\rho_p$ is rightward directed), and let $\rho_q$ be the ray that starts at $q$ and contains $s$ (so $\rho_q$ is leftward directed). Then $s$ intersects $\gamma$ if and only if all the following conditions hold: \begin{description} \item[(a)] $\ell$ intersects $\gamma$. \item[(b)] At least one of $p$, $q$ lies outside $\kappa$. \item[(c)] $\rho_p \cap \gamma \neq \emptyset$ and $\rho_q \cap \gamma \neq \emptyset$. \end{description} \end{lemma} \medskip See Figure~\ref{seg-cross} for an illustration. 
\medskip \begin{figure}[htb] \begin{center} \begin{tabular}{c c} {\input{segment_arc_disjoint.pdf_t} } &\quad {\input{segment_arc_1.pdf_t} } \\ {\small (a)} & \quad\quad {\small (b)} \end{tabular} \end{center} \caption{\sf{A segment $s$ and a convex $x$-monotone arc $\gamma$, where the line $\ell$ containing $s$ intersects $\gamma$: (a) Both endpoints of $s$ lie inside $\kappa$ and therefore $s$ is disjoint from $\gamma$, although in this case $\rho_p$, $\rho_q$ (depicted by the dashed arrows in the figure) both intersect $\gamma$. (b) Several different positions of $s$, in each of which there is an endpoint of $s$ outside $\kappa$. In this case $s$ intersects $\gamma$ if and only if both $\rho_p$ and $\rho_q$ intersect $\gamma$ (these rays are drawn, for illustration, for only two of the segments in the subfigure). } } \label{seg-cross} \end{figure} \noindent{\bf Proof.} Here too, the `only if' part of the lemma is simple, and we only consider the `if' part. Condition (a) and the convexity of $\gamma$ imply that $\ell\cap \gamma$ consists of one or two points. Assume first that $\ell\cap \gamma$ consists of two points $\xi$, $\eta$. By Condition (b) at least one of $p$, $q$ lies outside the (open) interval $\xi\eta$. Assume, without loss of generality, that $p$ lies outside $\xi\eta$. If $p$ lies to the right of that interval, $\rho_p$ misses $\gamma$, contradicting Condition (c). Thus $p$ lies to the left of $\xi\eta$. Then the only way in which $s\cap\gamma$ is empty is when $q$ also lies to the left of $\xi\eta$, but then $\rho_q\cap\gamma$ would be empty, again contradicting Condition (c). Assume next that $\ell\cap \gamma$ consists of one point $\xi$. By Condition (c), $p$ must lie to the left of $\xi$ and $q$ must lie to the right of $\xi$, implying that $s$ meets $\gamma$. See Figure~\ref{seg-cross}(b) for an illustration. $\Box$ The description of the data structure is fairly straightforward given the criteria for intersection in Lemma~\ref{lem:seg-arc}.
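As a numerical sanity check of Lemma~\ref{lem:seg-arc}, consider again the hypothetical concrete arc $\gamma = \{(x,x^2) \mid -1\le x\le 1\}$; in this sketch the ray conditions of item (c) are tested directly by root computation, and the conjunction of the three conditions is compared against a direct segment-intersection test on random segments.

```python
import math, random

UX, VX = -1.0, 1.0                      # gamma = {(x, x^2) : UX <= x <= VX}

def arc_roots(a, b):
    """x-coordinates of the intersections of the line y = a*x + b with gamma."""
    disc = a * a + 4.0 * b
    if disc < 0.0:
        return []
    return [r for s in (-1.0, 1.0)
            for r in [(a + s * math.sqrt(disc)) / 2.0] if UX <= r <= VX]

def in_kappa(x, y):
    """The region kappa(gamma): between the vertical rays through u and v,
    strictly above the arc."""
    return UX <= x <= VX and y > x * x

def seg_hits_by_lemma(px, py, qx, qy):
    a = (qy - py) / (qx - px)           # slope of the supporting line (px < qx)
    b = py - a * px
    R = arc_roots(a, b)
    return (bool(R)                                            # (a) line meets gamma
            and (not in_kappa(px, py) or not in_kappa(qx, qy)) # (b) an endpoint outside kappa
            and any(r >= px for r in R)                        # (c) rho_p meets gamma...
            and any(r <= qx for r in R))                       #     ...and so does rho_q

def seg_hits_directly(px, py, qx, qy):
    a = (qy - py) / (qx - px)
    return any(px <= r <= qx for r in arc_roots(a, py - a * px))

rng = random.Random(2)
for _ in range(2000):
    px, qx = sorted((rng.uniform(-2, 2), rng.uniform(-2, 2)))
    py, qy = rng.uniform(-2, 2), rng.uniform(-2, 2)
    assert seg_hits_by_lemma(px, py, qx, qy) == seg_hits_directly(px, py, qx, qy)
```

The actual data structure, described next, tests the same three conditions with range-searching machinery instead of explicit root computation.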
That is, we construct a multi-level data structure, where in the first level we test Condition (a), obtaining the set of arcs that $\ell$ intersects, as the disjoint union of canonical sets of arcs. At the next levels we test Condition (b). For an arc $\gamma$, with endpoints $u$ and $v$, a point $p$ lies outside $\kappa = \kappa(\gamma)$ when $p$ lies either below $\gamma$, or to the left of $u$, or to the right of $v$. Collecting the arcs $\gamma$ that satisfy this property is easily done using data structures similar to those described for the case of ray-intersection queries, where we first extract those arcs that lie above $p$ and then the arcs that lie above $q$. We then use a one-dimensional search tree on the left (resp., right) endpoints of the arcs, to collect those arcs that lie to the right of $p$ (resp., to the left of $q$). Overall this requires $O(n^{3/2+\eps})$ storage and preprocessing and the query time is $O(n^{1/2+\eps})$. To test for Condition (c), we build a multi-level data structure, over each final canonical set, that tests whether Conditions (a)--(c) of Lemma~\ref{lem:ray-arc} are satisfied for the rightward-directed ray $\rho_p$, and that their symmetric counterparts are satisfied for the leftward-directed ray $\rho_q$. The overall performance bounds remain the same: $O(n^{3/2+\eps})$ storage and preprocessing cost, and $O(n^{1/2+\eps})$ query time. As noted above, the parametric-search approach in~\cite{AM} yields a procedure for answering ray-shooting queries, given a procedure for answering segment-intersection queries, as well as a parallel procedure for the same task. The preprocessing that constructs the structure is performed only sequentially, as above. The query procedure, for detecting segment intersection, is easy to parallelize, since it is essentially a multi-level tree traversal. By allocating a processor to each node that the query visits, we can perform the traversal in parallel, in polylogarithmic parallel time.
In other words, the parallel time to answer a segment-intersection detection query is $O(\polylog{n})$, using at most $O(n^{3/2+\eps})$ processors. Integrating these bounds with~\cite[Theorem 2.1]{AM}, we obtain that ray-shooting queries in a planar collection of arcs can be answered using the same data structure for segment-intersection queries, where the query time for the former problem is only within a polylogarithmic factor of the time for the latter one, a factor that is hidden in our $\eps$-notation, by slightly increasing $\eps$. A simple modification of the segment-intersection query procedure allows us to report an arc $\gamma$ intersecting (i.e., containing) the endpoint of the query segment, when the segment is otherwise empty. The easy details are omitted. In conclusion, we have shown the following general result, which we believe to be of independent interest. \begin{proposition} \label{prop:ray-curves} A collection $\Gamma$ of $n$ constant-degree algebraic arcs in the plane can be preprocessed, in time and storage $O(n^{3/2+\eps})$, for any $\eps>0$, into a data structure that supports ray shooting queries in $\Gamma$, in $O(n^{1/2+\eps})$ time per query. \end{proposition} As a corollary, we thus obtain: \begin{proposition} \label{prop:onzf} For a partitioning polynomial $f$ of sufficiently large constant degree, and a set $\W$ of $n$ triangles, one can construct a data structure of size and preprocessing cost $O(n^{3/2+\eps})$, so that a segment-shooting query with a segment that lies in $Z(f)$, can be answered in $O(n^{1/2+\eps})$ time, for any $\eps > 0$. \end{proposition} As already concluded, Proposition~\ref{prop:onzf}, combined with Proposition~\ref{prop:inside} of the previous subsection, completes the proof of Theorem~\ref{thm:trimain}.
\section{Segment-triangle intersection reporting, emptiness, and approximate counting queries} \label{sec:seg} \subsection{Segment-triangle intersection reporting and emptiness} We extend the technique presented in Section~\ref{sec:shoot} to answer intersection reporting queries amid triangles in $\reals^3$. Here too we have a set $\T$ of $n$ triangles in $\reals^3$, and our goal is to preprocess $\T$ into a data structure that supports efficient intersection queries, each of which specifies a line, ray or segment $\rho$ and asks for reporting the triangles of $\T$ that $\rho$ intersects. In particular, this data structure also supports \emph{segment emptiness} queries, in which we want to determine whether the query segment meets any input triangle. Unfortunately, for technical reasons, the method does not extend to segment-triangle intersection (exact) \emph{counting} queries, in which we want to find the (exact) number of triangles that intersect a query segment (or a line or a ray). This issue will be discussed later on in this section, and a partial solution, which supports queries that approximately count the number of intersections, up to any prescribed relative error $\eps>0$, will be presented in Section~\ref{sec:apxct}. \begin{theorem} \label{thm:seg_intersection} Given a collection of $n$ triangles in three dimensions, and a prescribed parameter $\eps>0$, we can process the triangles into a data structure of size $O(n^{3/2+\eps})$, in time $O(n^{3/2+\eps})$, so that a segment-intersection reporting (resp., emptiness) query amid these triangles can be answered in $O(n^{1/2+\eps}+k\log n)$ (resp., $O(n^{1/2+\eps})$) time, where $k$ is the number of triangles that the query segment crosses. \end{theorem} \noindent{\bf Proof.} The algorithm that we develop here is a fairly straightforward adaptation of those given in the previous section, except for one significant issue, noted below.
The preprocessing is almost identical, except that now we preprocess the sets $\T_C$ of wide triangles for line- (ray-, or segment-)intersection reporting queries in a set of planes in $\reals^3$ (namely, the corresponding planes in $\H_C$); in this case the reporting query time is ${\displaystyle O\left(\frac{n \polylog{n} }{s^{1/3}} + k\right)}$, where $n$ is the number of wide triangles, $s$ is the amount of storage allocated, and $k$ is the output size; see~\cite[Theorem 3.3]{AM}. To answer a query with a segment (ray, or line) $\rho$, we trace $\rho$ through the cells and subcells that it crosses, as before, but do not abort the search at cells where an intersection has been found, and instead follow the search to completion. We take each canonical set $\T_C$ that the query collects, and access it via the intersection-reporting mechanism that we have constructed for it. At the bottom level we examine all the triangles in the subproblem, and report those that are crossed by $\rho$. Recall that the canonical sets that we construct are not necessarily pairwise disjoint, even those that are encountered when querying with a fixed segment $e$. Thus, the triangles that we report may be reported multiple times, which we want to avoid. To this end, at each level of the recursion (on the trapezoidal cells $\psi$) within a single cell $\tau$ of the polynomial partition, we take the outputs of the two recursive calls (recall the details in Section~\ref{sec:shoot}), and keep only one copy of each triangle that is reported twice. (Recall that the sets of triangles in the recursive calls are disjoint from the canonical set $\T_C$ but may share triangles between themselves.) Repeating this step at all levels of recursion guarantees that the reported triangles are all distinct. The overall cost of this overhead is $O(k\log n)$, where $k$ is the output size. 
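The deduplication step can be modeled schematically as follows; triangle ids and explicit sets stand in for the actual canonical sets and conflict lists (all hypothetical), with each node's canonical set disjoint from everything stored below it, while the two children may share triangles.

```python
def report(node, e):
    """Collect the triangles hit by e: the node's own canonical-set hits
    (disjoint from everything below) plus the deduplicated union of the two
    children's reports.  The children may share triangles, so we keep one
    copy of each id when merging their outputs; over the logarithmically
    many levels this adds O(k log n) to the query time."""
    if node is None:
        return []
    own = sorted(t for t in node['canonical'] if t in e)   # hits in T_C
    left = report(node['left'], e)
    right = report(node['right'], e)
    merged = sorted(set(left) | set(right))                # dedup the children
    return own + merged                                    # own is disjoint from merged

# Hypothetical toy instance: a "query" e is just the set of ids it crosses.
# The two children share triangle 4, mimicking overlapping conflict lists.
tree = {'canonical': {1, 2},
        'left':  {'canonical': {3, 4}, 'left': None, 'right': None},
        'right': {'canonical': {4, 5}, 'left': None, 'right': None}}
e = {2, 4, 5}
assert report(tree, e) == [2, 4, 5]     # triangle 4 is reported only once
```

In the real structure the merge is done on sorted output lists rather than hash sets, which is where the $O(k\log n)$ overhead term comes from.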
Note that this non-disjointness of the outputs makes it difficult to convert this procedure to one that counts the number of triangles that the query object crosses, and at the moment we do not know how to perform intersection-counting queries with the same performance bounds. (It is also conceivable that the upper bound on the cost of a counting query is larger; see, e.g., \cite{AM} for similar phenomena.) A similar adaptation is applied for the substructure that handles queries that are contained in the zero set of the partitioning polynomial, and its easy details are omitted. (In fact, in this case, due to the nature of our range searching mechanism, the conflict lists comprising the answer to a single query are pairwise disjoint. Therefore in this case, we do obtain a range counting mechanism with similar asymptotic performance bounds). This completes the description of the required adaptations, and establishes Theorem~\ref{thm:seg_intersection}. $\Box$ \subsection{Approximate segment-intersection counting} \label{sec:apxct} Let $T$ be as above, and let $\delta>0$ be a prescribed parameter. We want to preprocess $T$ into a data structure that supports \emph{approximate counting} queries of the following form: Given a query segment $e$, count how many triangles of $T$ are crossed by $e$, up to a relative error of $1\pm \delta$. To do so, we use the notion of a \emph{relative $(p,\delta)$-approximation}, as developed and analyzed in Har-Peled and Sharir~\cite{HPS}. We recall this notion and its basic properties. Consider the range space $(T,\R)$, where $\R$ is the collection of all subsets of $T$ of the form $T(e) = \{\Delta\in T \mid \Delta\cap e\ne\emptyset\}$, where $e$ is a segment. It is easily shown that $(T,\R)$ has finite VC-dimension $\eta$. (A brief argument for this latter property follows by bounding the \emph{primal shatter function} of the range space, as a function of $|T|$. 
This is done by representing the lines supporting the triangle edges as surfaces in $4$-space, and by observing that, for each of the $O(|T|^4)$ cells $C$ of their arrangement, all the lines whose Pl\"ucker images lie in $C$ meet the same subset $T_C$ of triangles of $T$. For a segment $e$, we take the cell $C$ containing the image of the line supporting $e$, and argue that there are only polynomially many subsets of $T_C$ that can be crossed by such a segment $e$.) For a segment $e$ and a subset $X\subseteq T$, write $\oX(e) := \frac{|\{\Delta\in X \mid \Delta\cap e\ne\emptyset\}|}{|X|}$; this is the ``relative size'', within $X$, of the range induced by $e$. Let $0<p,\;\delta < 1$ be given parameters. A subset $Z\subseteq T$ is called a \emph{relative $(p,\delta)$-approximation} if, for every segment $e$, we have: \begin{gather*} (1-\delta)\oT(e) \le \oZ(e) \le (1+\delta)\oT(e), \quad\text{if $\oT(e) \ge p$} \\ \oT(e) - \delta p \le \oZ(e) \le \oT(e) + \delta p, \quad\text{if $\oT(e) \le p$} . \end{gather*} As shown in \cite{HPS}, a random sample $Z$ of $T$ of size ${\displaystyle \frac{c}{\delta^2 p} \left( \eta \log \frac{1}{p} + \log \frac{1}{q} \right)}$ is a relative $(p,\delta)$-approximation with probability at least $1-q$, where $c$ is some absolute constant. We use this notion as follows. Using Theorem~\ref{thm:seg_intersection} we construct our data structure for exact segment intersection \emph{reporting} queries, with $O(n^{3/2+\eps})$ storage and preprocessing and query time $O(n^{1/2+\eps} + k\log n)$, for any $\eps>0$, where $k$ is the output size. We also construct a relative $(p,\delta)$-approximation $Z$ for $T$, by an appropriate random sampling mechanism, where the value of $p$ will be determined shortly. An approximate counting query with a segment $e$ is answered as follows. We first query with $e$ in the data structure for exact segment intersection reporting, but stop the procedure as soon as we collect more than $np$ triangles.
If the output size $k$ does not exceed this bound, we output $k$ (as an exact count) and are done. The cost of the query so far is $O(n^{1/2+\eps} + np\log n)$. If we detect that $k > np$, we compute $\oZ(e)$ by brute force, in $O(|Z|)$ time, and output the value $k_{\rm apx} := n\oZ(e)$. By the properties of relative approximations, we have, since $\oT(e) \ge p$, \[ (1-\delta) k \le k_{\rm apx} \le (1+\delta) k , \] so $k_{\rm apx}$ satisfies the desired approximation property. The overall (deterministic) cost of the query is \[ O\left(n^{1/2+\eps} + np\log n + |Z|\right) = O\left(n^{1/2+\eps} + np\log n + \frac{1}{\delta^2 p} \left( \eta \log \frac{1}{p} + \log \frac{1}{q} \right) \right) . \] We ignore the effect of $q$, and balance the terms by choosing ${\displaystyle p := \frac{1}{\delta n^{1/2}}}$, making the query cost \[ O\left(n^{1/2+\eps} + \frac{n^{1/2}}{\delta} \log n \right) . \] As long as $\delta$ is not too small (but we can still choose $\delta$ to be $1/n^{\eps'}$, for some $\eps' < \eps$), the first term dominates the bound, which is thus asymptotically the same as the bound for reporting queries. The storage is $O\left( n^{3/2+\eps} + |Z|\right) = O\left( n^{3/2+\eps}\right)$, as long as $\delta$ is not chosen too small. We thus conclude: \begin{theorem} \label{thm:approximate_counting} Given a collection of $n$ triangles in three dimensions, and prescribed parameters $\eps, \delta>0$, where $\delta = \omega(1/n^\eps)$, we can process the triangles, using random sampling, into a data structure of size $O(n^{3/2+\eps})$, in time $O(n^{3/2+\eps})$, so that, for a query segment $e$, the number of intersections of $e$ with the input triangles can be approximately computed, up to a relative error of $1\pm\delta$, with very high probability, in $O(n^{1/2+\eps})$ time. 
\end{theorem} \section{Tradeoff between storage and query time} \label{sec:trade} In this section we extend the technique in Sections~\ref{sec:shoot} and \ref{sec:seg} to obtain a tradeoff between storage (and preprocessing) and query time. A similar tradeoff holds for the other problems studied in Section~\ref{sec:other}. For a quick overview of our approach, consider the ray-shooting structure of Section~\ref{sec:shoot}, and let $s$ be the storage parameter that we allocate to the structure, which now satisfies $n\le s\le n^4$. We modify the procedure for ray shooting inside a cell $\tau$ by (i) potentially stopping the $r_0$-recursion at some earlier `premature' level, and (ii) modifying the structure at the bottom of recursion so that it uses the (weaker) ray-shooting technique of Pellegrini~\cite{Pel} instead of a brute-force scanning of the triangles (the current cost of $O(n^2/s)$, a consequence of this brute-force approach, is too expensive when $s$ is small). A similar adaptation is applied to the procedure of ray shooting on the zero set of the partitioning polynomial. With additional care we obtain the performance bounds (\ref{eq:trade1}) and (\ref{eq:trade2}) announced in the introduction. We now present the technique in detail. Consider the ray-shooting structure of Section~\ref{sec:shoot}, and let $s$ be the storage parameter that we allocate to the structure, which satisfies $n\le s\le n^4$. As before, we use this notation to indicate that the actual storage (and preprocessing) that the structure uses may be $O(s^{1+\eps})$, for any $\eps>0$. We comment that in the preceding sections $s$ is assumed to be at most $n^2$. Handling larger values of $s$ requires some care, as detailed below. For the time being, we continue to assume that $s\le n^2$, and will later show how to extend the analysis to larger values. Consider first the subprocedure for handling ray shooting for rays that are not contained in the zero set of the partitioning polynomial.
When $s = O(n^{3/2})$, we run the recursive preprocessing described in Section~\ref{sec:shoot} up to some `premature' level $k$, and when $s = \Omega(n^{3/2})$, we run it all the way down. With a suitable choice of parameters, we obtain $O(D^{3k+\eps})$ subproblems at the bottom level of recursion, each involving at most $n/D^{2k}$ (narrow) triangles. Except for the bottom level, we build, at each node $\tau$ of the recursion, the same structure on the set $\W_\tau$ of wide triangles in $\tau$, with one (significant) difference. First, since we start the recursion with storage parameter $s$, we allocate to each subproblem, at any level $j$, the storage parameter $s/D^{3j}$, thus ensuring that the storage used by the structure is $O(s^{1+\eps})$. However, the cost of a query, even at the first level of recursion, given in (\ref{eq:qns}), has the term $O(n^{2+\eps}/s)$, which is the cost of a naive, brute-force processing of the conflict lists at the bottom instances of the $r_0$-recursion within the partition cells. This is fine for $s = \Omega(n^{3/2+\eps})$ but kills the efficiency of the procedure when $s$ is smaller. For example, for $s=n$ we get (near) linear query time, much more than what we aim to have. We therefore improve the performance at the bottom-level nodes of the $r_0$-recurrence (within a cell), by constructing, for each respective conflict list, the ray-shooting data structure of Pellegrini~\cite{Pel}, which, for $N$ triangles and with storage parameter $s$, answers a query in time $O(N^{1+\eps}/s^{1/4})$. Since at the bottom of the $r_0$-recursion, both the number of triangles and the storage parameter are $n^{2+\eps}/s$, the cost of a query at the bottom of the recursion is $O((n^2/s)^{3/4+\eps})$. That is, the modified (improved) cost of a query at such a node is \begin{equation} \label{eq:qns1} Q(n,s) = O\left( \frac{n \ {\rm polylog}(n) }{s^{1/3}} + \frac{n^{3/2+\eps}}{s^{3/4}} \right) . 
\end{equation} At each of the $O(D^{3k+\eps})$ bottom-level cells $\tau$, we take the set $\N_\tau$ of (narrow) triangles that have reached $\tau$, whose size is now at most $n/D^{2k}$, allocate to it the storage parameter $s/D^{3k}$, and preprocess $\N_\tau$ using the aforementioned technique of Pellegrini~\cite{Pel}, which results in a data structure, with storage parameter $s/D^{3k}$, which supports ray shooting queries in time \[ O\left(\frac{|\N_\tau|^{1+\eps}}{(s/D^{3k})^{1/4}}\right) = O\left(\frac{(n/D^{2k})^{1+\eps}}{(s/D^{3k})^{1/4}}\right) = O\left(\frac{n^{1+\eps}}{s^{1/4}D^{(5/4+2\eps)k}} \right) . \] Multiplying this bound by the number $O(D^{k+\eps})$ of cells that the query ray crosses, the cost of the query at the bottom-level cells is \[ Q_{\rm bot}(n,s) = O\left(\frac{n^{1+\eps}}{s^{1/4}D^{(1/4+\eps)k}} \right) . \] The cost of a query at the inner recursive nodes of some depth $j<k$ is the number, $O(D^{j+\eps})$, of $j$-level cells that the ray crosses, times the cost of accessing the data structure for the wide triangles at each visited cell. Since we have allocated to each of the $O(D^{3j+\eps})$ cells at level $j$ the storage parameter $O(s/D^{3j})$, the cost of accessing the structure for wide triangles in a $j$-level cell is, according to (\ref{eq:qns1}), at most \[ Q_{\rm inner}(n,s) = O\left( \frac{(n/D^{2j}){\rm polylog}(n)}{(s/D^{3j})^{1/3}} + \frac{(n/D^{2j})^{3/2+\eps}}{(s/D^{3j})^{3/4}} \right) = O\left( \frac{n\;{\rm polylog}(n)}{ D^{j} s^{1/3} } + \frac{n^{3/2+\eps}}{D^{(3/4+2\eps)j}s^{3/4}} \right) . \] Summing over all $j$-level cells, for all $j$, and then adding the bottom-level cost, and the cost of traversing the structure with the query segment, the overall cost of a query is (we remind the reader that so far we only consider the case where $s\le n^2$) \begin{equation} \label{qtrade} O\left( D^{k+\eps} + \frac{n^{3/2+\eps} D^{k/4} }{s^{3/4}} + \frac{n^{1+\eps}}{s^{1/3}} + \frac{n^{1+\eps}}{s^{1/4}D^{(1/4+\eps)k}} \right) . 
\end{equation} We choose $k$ to (roughly) balance the second and the last terms; specifically, we choose \[ D^k = \frac{s}{n} . \] Since $D^{k+\eps}$ should not exceed $O(n^{1/2+\eps})$, we require for this choice of $k$ that $s = O(n^{3/2+\eps})$. In this case it is easily verified that the second and last terms, which are $O(n^{5/4+\eps}/s^{1/2})$, dominate both the first and third terms (recall that we assume $s \ge n$), and the query time is therefore \[ O(n^{5/4+\eps}/s^{1/2}) . \] For larger values of $s$, that is, when $s = \Omega(n^{3/2+\eps})$ (but we still assume $s\le n^2$), we balance the first term with the last term, so we choose \[ D^k = \frac{n^{4/5}}{s^{1/5}} . \] Note that in this range we indeed have that $D^{k+\eps} = O(n^{1/2+\eps})$. Moreover, in this case the first and last terms dominate the second and third terms, as is easily verified. Therefore the query time is \[ O(n^{4/5+\eps}/s^{1/5}) . \] As already promised, the case where the query ray lies on the current zero set will be presented later. It remains to handle the range $n^2 < s\le n^4$. Informally, at each cell $\tau$ of the polynomial partition, at any level $j$ of the $D$-recursion, we have $n_\tau\le n/D^{2j}$ wide triangles and storage parameter $s_\tau = s/D^{3j}$. Since $s\ge n^2$, we also have $s_\tau \ge n_\tau^2$. With such `abundance' of storage, we run the $r_0$-recursion until we reach subproblems of constant size, in which case we simply store the list of wide triangles at each bottom-level node, and the query simply inspects all of them, at a constant cost per subproblem. Hence the cost of a query at $\tau$ is $O(n_\tau^{1+\eps}/s_\tau^{1/3})$. To be precise, this is the case as long as $s_\tau \le n_\tau^3$. If $n^2\le s\le n^3$ there will be some level $j$ at whose cells $\tau$ the storage parameter $s_\tau = s/D^{3j}$ becomes larger than $(n/D^{2j})^3 \ge n_\tau^3$, and then the cost becomes $O(n_\tau^\eps)$.
When $n^3 < s \le n^4$ the cost becomes $O(n_\tau^\eps)$ right away (and stays so). That is, the cost of a query in the structure for wide triangles at a cell $\tau$ at level $j$ is \begin{align*} O\left( \frac{(n/D^{2j})^{1+\eps}} {(s/D^{3j})^{1/3}} \right) = O\left( \frac{n^{1+\eps}} {s^{1/3} D^{j(1+2\eps)}} \right) & \qquad\text{for $s \le \frac{n^3}{D^{3j}}$} \\ O\left( n^\eps \right) & \qquad\text{for $s > \frac{n^3}{D^{3j}}$} . \end{align*} Since a query visits $O(D^{j+\eps})$ cells $\tau$ at level $j$, the overall cost of searching amid the wide triangles, over all levels, is easily seen to be \begin{align*} O\left( \frac{n^{1+\eps}} {s^{1/3}} \right) & \qquad\text{for $n^2\le s \le n^3$} \\ O\left( D^k n^\eps \right) & \qquad\text{for $n^3< s \le n^4$} , \end{align*} where $k$ is the depth of the $D$-recursion. Querying amid the narrow triangles is done again as in Section~\ref{sec:shoot} (once again, recall that we now consider the case where $s > n^2$, whereas earlier in this section we assumed $s \le n^2$). At each node $\tau$ at the bottom level $k$ of the $D$-recursion we use Pellegrini's data structure~\cite{Pel}, which, with at most $n/D^{2k}$ narrow triangles and storage parameter $s/D^{3k}$, answers a query in time \[ O\left( \frac{(n/D^{2k})^{1+\eps}} {(s/D^{3k})^{1/4}} \right) = O\left( \frac{n^{1+\eps}}{D^{(5/4+2\eps)k}s^{1/4}} \right) . \] We multiply by the number of cells that the query visits, namely $O(D^{k+\eps})$, and add the cost $O(D^{k+\eps})$ of traversing these cells, for a total of \[ O\left( D^{k+\eps} + \frac{n^{1+\eps}}{s^{1/3}} + \frac{n^{1+\eps}}{D^{(1/4+2\eps)k}s^{1/4}} \right) . \] In other words, we get the same bound as in (\ref{qtrade}), except for the second term which is missing now (this term corresponds to querying at the bottom-level nodes of the $r_0$-recursion on the wide triangles, which is not needed when $s > n^2$, since these bottom-level subproblems now have constant size). 
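As a numerical sanity check on the term-balancing carried out above for the regime $n\le s\le n^{3/2}$ (with the choice $D^k = s/n$), one can evaluate the four terms of (\ref{qtrade}) directly. The following sketch drops the polylog factors and the $\eps$ terms, so the values are indicative only:

```python
def qtrade_terms(n, s, Dk):
    # The four terms of (qtrade), with polylog and eps factors dropped:
    # cell traversal, bottom of the r0-recursion on the wide triangles,
    # query amid the wide triangles, and the bottom-level structure on
    # the narrow triangles.
    return (Dk,
            n**1.5 * Dk**0.25 / s**0.75,
            n / s**(1.0 / 3.0),
            n / (s**0.25 * Dk**0.25))

n = 10.0**6
s = n**1.3          # a storage value in the range n <= s <= n^{3/2}
Dk = s / n          # the balancing choice D^k = s/n
t = qtrade_terms(n, s, Dk)
# the second and last terms coincide (both equal n^{5/4}/s^{1/2}) ...
assert abs(t[1] - t[3]) / t[1] < 1e-9
assert abs(t[1] - n**1.25 / s**0.5) / t[1] < 1e-9
# ... and dominate the first and third terms in this regime
assert t[1] >= t[0] and t[1] >= t[2]
```

The same computation with $D^k = n^{4/5}/s^{1/5}$ confirms the dominance pattern claimed for the regime $n^{3/2}\le s\le n^2$.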
Repeating the same analysis as above, we get the same bound for the query cost. \paragraph{Handling the zero set.} We next analyze the case where the query ray lies on the zero set. In order to obtain the trade-off bounds for ray shooting within $Z(f)$, we recall the multi-level data structure presented in Section~\ref{sec:onzf}. Each level in this data structure is either a one- or a two-dimensional search tree, where the dominating levels are those where we need to apply a planar decomposition over a set of planar regions (or in an arrangement of algebraic arcs) and preprocess it into a structure that supports point-location queries. A standard property of multi-level range searching data structures is that the overall complexity of their storage (resp., query time) is governed by the level with dominating storage (resp., query time) bound, up to a polylogarithmic factor~\cite{AE99}. Recall that in each level of our data structure we form a collection of canonical sets of the arcs in $\Gamma$, which are passed on to the next level for further processing. Our approach is to keep forming these canonical sets, where at the very last level we apply the ray-shooting data structure of Pellegrini~\cite{Pel}, as described above. Therefore the overall query cost (resp., storage and preprocessing complexity) is the sum of the query (resp., storage and preprocessing time) bounds over all canonical sets of arcs that the query reaches (resp., all the sets) at the last level. We now sketch the analysis in more detail. In order to simplify the presentation, we consider one of the dominating levels, and describe the ray-shooting data structure at that level. As stated above, we build this data structure only at the very last level, but the analysis for the dominating level subsumes the bounds for the last level, and thus for the entire multi-level data structure, up to a polylogarithmic factor. 
In such a scenario we have a set of algebraic arcs (or graphs of functions, or semi-algebraic regions represented by their bounding arcs), which we need to preprocess for planar point location. This is done using the technique of $(1/r)$-cuttings (see~\cite{CF-90}), which forms a decomposition of the plane into $O(r^2)$ pseudo-trapezoidal cells, each meeting at most $n/r$ arcs (the ``conflict list'' of the cell). The overall storage complexity is thus $O(nr)$. More precisely, to achieve preprocessing time close to $O(nr)$, one needs to use so-called \emph{hierarchical-cuttings} (see~\cite{Mat} and also \cite{AES}), in which we construct a hierarchy of cuttings using a constant value $r_0$ as the cutting parameter, instead of the nonconstant $r$. Using this approach, both storage and preprocessing cost are $O(n r^{1+\eps})$ for any $\eps>0$. Let $s$ be our storage parameter as above, so we want to choose $r$ such that $s = r n$. Thus we obtain that each cell of the cutting meets at most $n^2/s$ arcs. Following our approach above, for each cell of the cutting, the amount of allocated storage is $s/r^2 = n^2/s$. We are now ready to apply Pellegrini's data structure, leading to a query time of $O\left(\frac{n^{3/2+\eps}}{s^{3/4}}\right)$. Integrating this bound into the query time in~(\ref{qtrade}), we recall that at each level $0 \le j \le k$ the actual storage parameter is $O(s/D^{3j})$, and the number of triangles at hand is $O(n/D^{2j})$. We now need to sum the query bound over all $O(D^j)$ cells reached by the query at the $j$th level, and over all $j$. We thus obtain an overall bound of $$ O\left(D^k \frac{(n/D^{2k})^{3/2+\eps}}{(s/D^{3k})^{3/4}} \right) = O\left(\frac{n^{3/2+\eps} D^{k/4} }{s^{3/4}} \right) . $$ This is exactly the second term in~(\ref{qtrade}). Therefore adding the query time for ray shooting on $Z(f)$ does not increase the asymptotic bound in~(\ref{qtrade}). 
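The exponent bookkeeping in the last display can be verified mechanically with exact rational arithmetic; this is an illustrative check only, not part of the algorithm:

```python
from fractions import Fraction as F

# Exponents of D^k, n, and s in D^k * (n/D^{2k})^{3/2} / (s/D^{3k})^{3/4}:
exp_Dk = F(1) - 2 * F(3, 2) + 3 * F(3, 4)   # 1 - 3 + 9/4 = 1/4
exp_n  = F(3, 2)
exp_s  = -F(3, 4)
# the product indeed equals n^{3/2} * D^{k/4} / s^{3/4}
assert (exp_Dk, exp_n, exp_s) == (F(1, 4), F(3, 2), -F(3, 4))
```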
We comment that the overall storage and preprocessing time is $O(s^{1+\eps})$ (see our discussion below). We also comment that the query bound we obtained applies when $n \le s \le n^2$. When $s$ exceeds $n^2$, every cell of the cutting has a conflict list of $O(1)$ elements, which the query can handle by brute force. This immediately brings the query time, for queries on the zero set, to $O(n^{\eps})$. \paragraph{Wrapping up.} In summary, our analysis implies that the query bound $Q(n,s)$ satisfies: \begin{equation} \label{eq:trade-q} Q(n,s) = \begin{cases} O\left( \frac{n^{5/4+\eps}}{s^{1/2}} \right) , & s = O(n^{3/2+\eps}) , \\ O\left( \frac{n^{4/5+\eps}}{s^{1/5}} \right) , & s = \Omega(n^{3/2+\eps}) . \end{cases} \end{equation} We recall that the overall storage (and preprocessing) is $O(s^{1+\eps})$: we allocate to each subproblem, at any level $j$, the storage parameter $s/D^{3j}$, so at each fixed level the total storage (and preprocessing) complexity is $O(s^{1+\eps})$, and since there are only logarithmically many levels, the overall storage (and preprocessing) is $O(s^{1+\eps})$ as well, for a slightly larger $\eps$. Note that for the threshold $s\approx n^{3/2}$, both bounds yield a query cost of $O(n^{1/2+\eps})$. Note also that in the extreme cases $s = n^4$, $s = n$ (extreme for the older `four-dimensional' tradeoff), we get the respective older bounds $O(n^\eps)$ and $O(n^{3/4+\eps})$ for the query time. In both extreme cases $s = n$ and $s = n^4$ we have $D^{k} = O(1)$, implying that we handle all the narrow triangles at the root of the recursion tree, that is, we use the technique of Pellegrini~\cite{Pel} just once. Informally, the bound in~(\ref{eq:trade-q}) `pinches' the tradeoff curve and pushes it down. The closer $s$ is to $\Theta(n^{3/2+\eps})$, the more significant the improvement becomes. See Figure~\ref{tradeoff}.
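Writing $s = n^\sigma$ and dropping the $\eps$ factors, the tradeoff curve in (\ref{eq:trade-q}) becomes a piecewise-linear function of $\sigma$ in the exponent; the following sketch (illustrative only) checks that the two branches agree at the threshold $\sigma = 3/2$ and reproduce the older endpoint bounds:

```python
def query_exponent(sigma):
    # Exponent of n in Q(n, s) for s = n^sigma (eps dropped):
    # n^{5/4}/s^{1/2} below the threshold s = n^{3/2}, and
    # n^{4/5}/s^{1/5} above it, as in (eq:trade-q).
    if sigma <= 1.5:
        return 1.25 - 0.5 * sigma
    return 0.8 - 0.2 * sigma

# the two branches agree at the threshold, where the query cost is n^{1/2}
assert abs(query_exponent(1.5) - 0.5) < 1e-12
assert abs((0.8 - 0.2 * 1.5) - 0.5) < 1e-12
# endpoints reproduce the older bounds: n^{3/4} at s = n, n^0 at s = n^4
assert abs(query_exponent(1.0) - 0.75) < 1e-12
assert abs(query_exponent(4.0)) < 1e-12
```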
\paragraph{Processing $m$ queries.} The improved tradeoff in (\ref{eq:trade-q}) implies that the overall cost of processing $m$ queries with $n$ input triangles, including preprocessing cost, is \[ O(s^{1+\eps} + mQ(n,s)) = \begin{cases} O\left( s^{1+\eps} + \frac{mn^{5/4}}{s^{1/2}} \right) , & s = O(n^{3/2+\eps}) , \\ O\left( s^{1+\eps} + \frac{mn^{4/5}}{s^{1/5}} \right) , & s = \Omega(n^{3/2+\eps}) . \end{cases} \] To balance the terms in the first case we choose $s = m^{2/3}n^{5/6}$; this choice satisfies $s = O(n^{3/2+\eps})$ when $m\le n$. To balance the terms in the second case we choose $s = m^{5/6}n^{2/3}$; this choice satisfies $s = \Omega(n^{3/2+\eps})$ when $m\ge n$. Recall also that $s$ has to be in the range between $n$ and $n^4$. So in the first case we must have $m^{2/3}n^{5/6} \ge n$, or $m\ge n^{1/4}$. Similarly, in the second case we must have $m^{5/6}n^{2/3} \le n^4$, or $m\le n^{4}$. We adjust the bounds, allowing also values of $m$ outside this range, by adding the near-linear terms, which dominate the bound for such off-range values of $m$. We thus get \begin{corollary} \label{cor:queries} We can process $m$ ray-shooting queries on $n$ triangles so that the total cost is \begin{equation} \label{eq:trade-queries} \max\Bigl\{ O(m^{2/3+\eps}n^{5/6+\eps} + n^{1+\eps}),\; O(n^{2/3+\eps}m^{5/6+\eps} + m^{1+\eps}) \Bigr\} . \end{equation} \end{corollary} \section{Other applications} \label{sec:other} \subsection{Detecting, counting or reporting line intersections in $\reals^3$} It is more convenient, albeit not necessary, to consider the bichromatic version of the problem, in which we are given a set $R$ of $n$ red lines and a set $B$ of $n$ blue lines in $\reals^3$, and the detection problem asks whether there exists a pair of intersecting lines in $R\times B$. Similarly, the counting problem asks for the number of such intersecting pairs, and the reporting problem asks for reporting all these pairs. 
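For testing purposes, the detection problem also has an immediate brute-force $O(n^2)$ baseline, based on the standard coplanarity test for two lines in $\reals^3$. The sketch below is ours, purely for illustration (in particular, the representation of a line as an integer (point, direction) pair is an assumption, not taken from the paper):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lines_meet(l1, l2):
    # Two lines in R^3, each given as a (point, direction) pair with
    # integer coordinates, meet iff they are coplanar and not parallel.
    # Parallel distinct lines are disjoint; identical lines meet.
    (p1, d1), (p2, d2) = l1, l2
    w = tuple(b - a for a, b in zip(p1, p2))
    c = cross(d1, d2)
    if c == (0, 0, 0):                    # parallel directions
        return cross(w, d1) == (0, 0, 0)  # same line?
    return dot(w, c) == 0                 # coplanar and non-parallel

def detect(red, blue):
    # O(n^2) brute-force red-blue intersection detection.
    return any(lines_meet(r, b) for r in red for b in blue)
```

For example, with the $x$-axis as the single red line, a blue line through $(0,1,0)$ with direction $(0,1,0)$ is detected as intersecting, while a blue line through $(0,1,1)$ with the same direction is skew and is not.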
An algorithm that solves the detection problem in $O(n^{3/2+\eps})$ time is easily obtained by regarding the problem as a special degenerate (and much simpler) instance of the ray shooting problem, in which we regard, say, the red lines as degenerate triangles (unbounded and of zero area), construct the data structure of Section~\ref{sec:shoot}, and query it with each of the blue lines. There exists a red-blue pair of intersecting lines if and only if at least one query has a positive outcome---the corresponding blue query line hits a red line. Since there are no wide triangles in this special variant, there is no need to construct the auxiliary data structure for wide triangles, as in Section~\ref{sec:wide}, and we simply construct the recursive hierarchy of polynomial partitions, where each cell in each subproblem is associated with the set of red lines that cross it. A blue query line $\ell$ is propagated through the cells that it crosses until it reaches bottom-level cells, and we check, in each such cell, whether $\ell$ intersects any of the $O(1)$ red lines associated with the cell. Handling lines that lie fully in the zero set $Z(f)$ is also an easy task (which can be performed using planar segment-intersection range searching, which also supports counting queries); further details are omitted. Both correctness and runtime analysis follow easily, as special and simpler instances of the analysis in Section~\ref{sec:shoot}. Note that here we do not face the issue of non-disjointness of canonical sets of wide triangles, which has prevented us from extending the technique to segment-triangle intersection counting problems; see Section~\ref{sec:seg}. \subsection{Computing the intersection of two polyhedra} \label{2poly} Let $K_1$ and $K_2$ be two polyhedra in 3-space, not necessarily convex, each with $n$ edges (so the number of vertices and faces of each of them is $O(n)$). The goal is to compute their intersection $K := K_1\cap K_2$ in an output-sensitive manner.
We note that computing the union $K_1\cup K_2$ can be done using a very similar approach, within the same time bound. While there are additional steps in the algorithm that construct a representation of $K$ as a three-dimensional body, we will restrict here the presentation to the part that computes $\bd K$ from $\bd K_1$ and $\bd K_2$. Each face of $\bd K$ is a connected portion of a face of $\bd K_1$ or of $\bd K_2$, each edge is either a connected portion of an edge of $\bd K_1$ or of $\bd K_2$, or a connected portion of the intersection of a face of $\bd K_1$ and a face of $\bd K_2$. Finally, each vertex of $\bd K$ is either a vertex of $\bd K_1$ or of $\bd K_2$, or an intersection of an edge of one of these polyhedra with a face of the other. Note that not every vertex of $K_1$ or of $K_2$ is necessarily a vertex of $K$, but every edge-face intersection is a vertex of $K$. The main step of the algorithm is to compute the vertices of $\bd K$, from which the other features of $\bd K$ are fairly standard to construct, see, e.g.,~\cite{Pel} where the graph of the edges of $K$ is constructed by a tracing procedure~\cite{MS-85}, given the vertices of $\bd K$. We iterate over the edges of $\bd K_1$, and compute the intersections of each such edge with the faces of $\bd K_2$, using the algorithm in Theorem~\ref{thm:seg_intersection}. We apply a symmetric procedure to compute the intersections of each edge of $\bd K_2$ with the faces of $\bd K_1$. Collectively, these intersections are the vertices of $K$ of the second type (edge-face intersection vertices). The cost of this step is $O(n^{3/2+\eps} + k\log n)$, where $k$ is the number of edge-face intersections: we preprocess the $O(n)$ faces of, say $K_1$, and query with the $O(n)$ edges of $K_2$, which overall takes $O(n\cdot n^{1/2+\eps} + k\log n) = O(n^{3/2+\eps} + k\log n)$ time. Then, applying the tracing procedure in~\cite{MS-85} takes an additional cost of $O(k \log{k})$. 
This gives us all the edge-face intersection vertices. The other vertices of $K$ are vertices of $K_1$ or of $K_2$, and finding these vertices is done as follows. If such a vertex $v$, say of $K_1$, is incident to an edge $e$ of $K_1$ that intersects some face of $K_2$, then it is easy to determine whether $v\in K_2$ (and thus in $K$). Otherwise, we collect, by a simple graph traversal, a maximal cluster of vertices of $K_1$ that are connected by edges that have no intersection with $\bd K_2$. The vertices in such a cluster are either all inside $K_2$ (and in $K$) or all outside $K_2$ (and thus not in $K$). If the cluster consists of all vertices of $K_1$ then either $K_1$ and $K_2$ are disjoint, or one contains the other. In such a case, we only need to test, in $O(n)$ time, whether there is a vertex of one polyhedron that is contained in the other. Otherwise, we determine the status of the cluster (inside / outside $K$) by examining the edges that connect vertices from the cluster to vertices not in the cluster. By iteratively repeating this step, we construct all such clusters, from which we obtain all the vertices of $K_1$ and of $K_2$ that lie in $K$. In summary, we obtain: \begin{corollary} \label{cor:intersection_poly} Given two arbitrary polyhedra $K_1$ and $K_2$, each of complexity $O(n)$, the intersection $K_1 \cap K_2$ can be computed in time $O(n^{3/2+\eps} + k \log n)$, where $k$ is the size of the intersection. \end{corollary} As discussed in the introduction, the overhead term in Pellegrini's algorithm~\cite{Pel} is $O(n^{8/5+\eps})$. \subsection{Output-sensitive construction of an arrangement of triangles} Let $\T$ be a set of $n$ possibly intersecting triangles in $\reals^3$, let $\A = \A(\T)$ denote their arrangement, and let $k$ denote its complexity, which, as in Section~\ref{2poly}, we measure by the number of its vertices, as the number of its other features (edges, faces, and cells) is proportional to $k$.
The goal is to construct $\A$ in an output-sensitive manner with a small, subquadratic overhead. Pellegrini~\cite{Pel} gave such an algorithm that runs in $O(n^{8/5+\eps} + k\log k)$ time, and the algorithm that we present here reduces the overhead term to $O(n^{3/2+\eps})$. As in the previous subsection, we focus on the main step of the algorithm, which constructs the features of $\A$ (vertices, edges, and faces) on each triangle of $\T$. We will only briefly discuss the complementary part, which constructs the three-dimensional cells of $\A$ and establishes the connections between the various features on the boundary of each cell. Albeit not trivial, this latter step uses standard techniques, follows the approach in \cite{Pel} and in other works, and does not increase the overhead cost of the algorithm. Fix a triangle $\Delta\in\T$. We first construct the set of intersection segments $\Delta\cap\Delta'$, for $\Delta'\in\T\setminus\{\Delta\}$. We observe that, for any such segment $e = \Delta\cap\Delta'$, each endpoint of $e$ is either a vertex of $\Delta$, or an intersection of an edge of one triangle with the other triangle. We therefore take the collection of the $3n$ edges of the triangles of $\T$, and, for each such edge $e$, apply Theorem~\ref{thm:seg_intersection}, which reports all $k_e$ triangles that $e$ meets. This identifies all the intersection segments $\Delta\cap\Delta'$. We then take all the intersection segments within a fixed triangle $\Delta$, and run a sweepline procedure within $\Delta$ to obtain the portion of $\A$ on $\Delta$. Gluing these portions to each other, together with some additional steps, completes the construction of $\A$. \section{Conclusion} In this paper we have managed to improve the performance of ray shooting amid triangles in three dimensions, as well as of several related problems. The improvement is based on the polynomial partitioning technique of Guth.
The improvement is most significant when the storage is about $n^{3/2}$ and the query takes about $n^{1/2}$ time, but one gets an improvement for all values of the storage between $n$ and $n^4$, except at the very ends of this range. This is a significant improvement, the first in nearly 30 years, in this basic problem. There are several open questions that our work raises. First, the improvement for the special values of $O(n^{3/2+\eps})$ storage and $O(n^{1/2+\eps})$ query time seems too specialized, and one would like to get similar improvements for all possible values of the storage, ideally obtaining query time of $O(n^{1+\eps}/s^{1/3})$, where $s$ is the storage allocated to the structure, as in the case of ray shooting amid planes. Alternatively, can one establish a lower-bound argument that shows the limitations of our technique? Another open issue follows from our current inability to extend the technique to counting queries, due to the fact that the canonical sets that we collect during a query are not necessarily pairwise disjoint. It would be interesting to obtain such an extension, or, alternatively, to establish a gap between the performances of the counting and reporting versions of the segment intersection query problem. Finally, could one obtain similar bounds for non-flat input objects? for shooting along non-straight curves? It would also be interesting to find additional applications of the general technique developed in this paper. \paragraph{Acknowledgements.} We wish to thank Pankaj Agarwal for the useful interaction concerning certain aspects of the range searching problems encountered in this work.
2,869,038,156,031
arxiv
\section{Introduction} Energy functions that depend on thousands of binary variables and decompose according to a graphical model \cite{cowell-2007,koller-2009,lauritzen-1996,wainwright-2008} into potential functions that depend on subsets of all variables have been used successfully for pattern analysis, e.g.~in the seminal works \cite{besag-1986,boycov-2001,geman-1984,mceliece-1998}. An important problem is the minimization of the sum of potentials, i.e.~the search for an assignment of zeros and ones to the variables that minimizes the energy. This problem can be solved efficiently by dynamic programming if the graph is acyclic \cite{pearl-1988} or its treewidth is small enough \cite{lauritzen-1996}, and by finding a minimum s-t-cut \cite{boycov-2001} if the energy function is (permutation) submodular \cite{kolmogorov-2004,schlesinger-2007}. In general, the problem is NP-hard \cite{kolmogorov-2004}. For moderate problem sizes, exact optimization is sometimes tractable by means of Mixed Integer Linear Programming (MILP) \cite{schrijver-1986,schrijver-2003}. Contrary to popular belief, some practical computer vision problems can indeed be solved to optimality by modern MILP solvers (cf.~Section~\ref{section:experiments}). However, all such solvers are eventually overburdened when the problem size becomes too large. In cases where exact optimization is intractable, one has to settle for approximations. While substantial progress has been made in this direction, a deterministic non-redundant search algorithm that constrains the search space based on the topology of the graphical model has not been proposed before. This article presents a depth-limited exhaustive search algorithm, the Lazy Flipper, that does just that. The Lazy Flipper starts from an arbitrary initial assignment of zeros and ones to the variables that can be chosen, for instance, to minimize the sum of only the first order potentials of the graphical model. 
Starting from this initial configuration, it searches for flips of variables that reduce the energy. As soon as such a flip is found, the current configuration is updated accordingly, i.e.~in a greedy fashion. In the beginning, only single variables are flipped. Once a configuration is found whose energy can no longer be reduced by flipping of \emph{single} variables, all those subsets of two and successively more variables that are connected via potentials in the graphical model are considered. When a subset of more than one variable is flipped, all smaller subsets that are affected by the flip are revisited. This allows the Lazy Flipper to perform an exhaustive search over all subsets of variables whose flip potentially reduces the energy. Two special data structures described in Section \ref{section:data-structures} are used to represent each subset of connected variables precisely once and to exclude subsets from the search whose flip cannot reduce the energy due to the topology of the graphical model and the history of unsuccessful flips. These data structures, the Lazy Flipper algorithm and an experimental evaluation of state-of-the-art optimization algorithms on higher-order graphical models are the main contributions of this article. \enlargethispage{1mm} Overall, the new algorithm has four favorable properties: (i) It is strictly convergent. While a global minimum is found when searching through all subgraphs (typically not tractable), approximate solutions with a guaranteed quality certificate (Section \ref{section:algorithm}) are found if the search space is restricted to subgraphs of a given maximum size. The larger the subgraphs are allowed to be, the tighter the upper bound on the minimum energy becomes. This allows for a favorable trade-off between runtime and approximation quality. (ii) Unlike in brute force search, the runtime of lazy flipping depends on the topology of the graphical model. 
It is exponential in the worst case but can undercut brute force search by a factor that is exponential in the number of variables. It is approximately linear in the size of the model for a fixed maximum search depth. (iii) The Lazy Flipper can be applied to graphical models of any order and topology, including but not limited to the more standard grid graphs. Directed Bayesian Networks and undirected Markov Random Fields are processed in the exact same manner; they are converted to factor graph models \cite{kschischang-2001} before lazy flipping. (iv) Only trivial operations are performed on the graphical model, namely graph traversal and evaluations of potential functions. These operations are cheap compared, for instance, to the summation and minimization of potential functions performed by message passing algorithms, and require only an implicit specification of potential functions in terms of program code that computes the function value for any given assignment of values to the variables. Experiments on simulated and real-world problems, submodular and non-submodular functions, grids and irregular graphs (Section \ref{section:experiments}) assess the quality of Lazy Flipper approximations and their convergence, as well as the dependence of the runtime of the algorithm on the size of the model and the search depth. The results are put into perspective by a comparison with Iterated Conditional Modes (ICM) \cite{besag-1986}, Belief Propagation (BP) \cite{pearl-1988,kschischang-2001}, Tree-reweighted BP \cite{wainwright-2005,wainwright-2008} and a Dual Decomposition ansatz using sub-gradient descent methods \cite{komodakis-2010,kappes-2010}. \section{Related Work} The Lazy Flipper is related in at least four ways to existing work. First of all, it generalizes Iterated Conditional Modes (ICM) for binary variables \cite{besag-1986}.
While ICM leaves all variables except one fixed in each step, the Lazy Flipper can optimize over larger (for small models: all) connected subgraphs of a graphical model. Furthermore, it extends Block-ICM \cite{frey-2005} that optimizes over specific subsets of variables in grid graphs to irregular and higher-order graphical models. Naive attempts to generalize ICM and Block-ICM to optimize over subgraphs of size $k$ would consider all sequences of $k$ connected variables and ignore the fact that many of these sequences represent the same set. This causes substantial problems because the redundancy is large, as we show in Section~\ref{section:data-structures}. The Lazy Flipper avoids this redundancy, at the cost of storing one unique representative for each subset. Compared to randomized algorithms that sample from the set of subgraphs \cite{jung-2009,swendsen-1987,wolff-1989}, this is a memory intensive approach. Up to 8~GB of RAM are required for the optimizations shown in Section~\ref{section:experiments}. Now that servers with much larger RAM are available, it has become a practical option. Second, the Lazy Flipper is a deterministic alternative to the randomized search for tighter bounds proposed and analyzed in 2009 by Jung et al.~\cite{jung-2009}. Exactly as in \cite{jung-2009}, sets of variables that are connected via potentials in the graphical model are considered and variables flipped if these flips lead to a smaller upper bound on the sum of potentials. In contrast to \cite{jung-2009}, unique representatives of these sets are visited in a deterministic order. Both algorithms maintain a current best assignment of values to the variables and are thus related with the Swendsen-Wang algorithm \cite{swendsen-1987,barbu-2003} and Wolff algorithm \cite{wolff-1989}. 
Third, lazy flipping with a limited search depth as a means of approximate optimization competes with message passing algorithms \cite{kschischang-2001,minka-2001,globerson-2007,wainwright-2008} and with algorithms based on convex programming relaxations of the optimization problem \cite{globerson-2007,werner-2007,kohli-2008,kumar-2009}, in particular with Tree-reweighted Belief Propagation (TRBP) \cite{wainwright-2005,wainwright-2008,kolmogorov-2006} and sub-gradient descent \cite{komodakis-2010,kappes-2010}. Fourth, the Lazy Flipper guarantees that the best approximation found with a search depth $n_{\textnormal{max}}$ is optimal within a Hamming distance $n_{\textnormal{max}}$. A similar guarantee known as the Single Loop Tree (SLT) neighborhood \cite{weiss-2001} is given by BP in case of convergence. The SLT condition states that in any alteration of an assignment of values to the variables that leads to a lower energy, the altered variables form a subgraph in the graphical model that has at least two loops. The fact that Hamming optimality and SLT optimality differ can be exploited in practice. We show in one experiment in Section~\ref{section:experiments} that BP approximations can be further improved by means of lazy flipping. \section{The Lazy Flipper Data Structures} \label{section:data-structures} Two special data structures are crucial to the Lazy Flipper. The first data structure that we call a \emph{connected subgraph tree (CS-tree)} ensures that only \emph{connected} subsets of variables are considered, i.e.~sets of variables which are connected via potentials in the graphical model. Moreover, it ensures that every such subset is represented precisely once (and not repeatedly) by an ordered sequence of its variables, cf.~\cite{moerkotte-2006}. 
The rationale behind this concept is the following: If the flip of one variable and the flip of another variable not connected to the first one do not reduce the energy then it is pointless to try a simultaneous flip of both variables because the (energy increasing) contributions from both flips would sum up. Furthermore, if the flip of a disconnected set of variables reduces the energy then the same and possibly better reductions can be obtained by flipping connected subsets of this set consecutively, in any order. All disconnected subsets of variables can therefore be excluded from the search if the connected subsets are searched ordered by their size. Finding a unique representative for each connected subset of variables is important. The alternative would be to consider all sequences of pairwise distinct variables in which each variable is connected to at least one of its predecessors and to ignore the fact that many of these sequences represent the same set. Sampling algorithms that select and grow connected subsets in a randomized fashion do exactly this. However, the redundancy is large. As an example, consider a connected subset of six variables of a 2-dimensional grid graph as depicted in Fig.~\ref{figure:cs-tree}a. Although there is only one connected set that contains all six variables, $208$ out of the $6! = 720$ possible sequences of these variables meet the requirement that each variable is connected to at least one of its predecessors. This 208-fold redundancy hampers the exploration of the search space by means of randomized algorithms; it is avoided in lazy flipping at the cost of storing one unique representative for every connected subgraph in the CS-tree. The second data structure is a \emph{tag list} that prevents the repeated assessment of unsuccessful flips. 
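The 208-fold redundancy can be reproduced by brute force, assuming (our reading of Fig.~\ref{figure:cs-tree}a) that the six variables form a $2 \times 3$ block of the grid:

```python
from itertools import permutations

# Our reading of Fig. 1a: six variables in a 2x3 block, laid out as
#   0 1 2
#   3 4 5
edges = {(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)}

def adjacent(a, b):
    return (a, b) in edges or (b, a) in edges

def grows_connected(seq):
    """True iff every vertex after the first is adjacent to a predecessor."""
    return all(any(adjacent(v, p) for p in seq[:i])
               for i, v in enumerate(seq) if i > 0)

count = sum(grows_connected(s) for s in permutations(range(6)))
print(count)  # 208 of the 6! = 720 sequences represent this one set
```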
The idea is the following: If some variables have been flipped in one iteration (and the current best configuration has been updated accordingly), it suffices to revisit only those sets of variables that are connected to at least one variable that has been flipped. All other sets of variables are excluded from the search because the potentials that depend on these variables are unaffected by the flip and have been assessed in their current state before. The tag list and the connected subgraph tree are essential to the Lazy Flipper and are described in the following sections, \ref{section:cs-tree} and \ref{section:tag-lists}. For a quick overview, the reader can however skip these sections, take for granted that it is possible to efficiently enumerate all connected subgraphs of a graphical model, ordered by their size, and refer directly to the main algorithm (Section \ref{section:algorithm} and Alg.~\ref{algorithm:lazy-flipper}). All non-trivial sub-functions used in the main algorithm are related to tag lists and the CS-tree and are described in detail now. \subsection{Connected Subgraph Tree (CS-tree)} \label{section:cs-tree} \begin{figure} \centering \includegraphics[width=\textwidth]{grid-model-and-subgraph-tree} \caption{All connected subgraphs of a graphical model (a) can be represented uniquely in a connected subgraph tree (CS-tree) (b). Every path from a node in the CS-tree to the root node corresponds to a connected subgraph in the graphical model. While there are $2^6 = 64$ subsets of variables in total in this example, only 40 of these subsets are connected.} \label{figure:cs-tree} \end{figure} The CS-tree represents subsets of connected variables uniquely. Every node in the CS-tree except the special root node is labeled with the integer index of one variable in the graphical model. The same variable index is assigned to several nodes in the CS-tree unless the graphical model is completely disconnected. 
The CS-tree is constructed such that every connected subset of variables in the graphical model corresponds to precisely one path in the CS-tree from a node to the root node, the node labels along the path indicating precisely the variables in the subset, and vice versa, there exists precisely one connected subset of variables in the graphical model for each path in the CS-tree from a node to the root node. In order to guarantee by construction of the CS-tree that each subset of connected variables is represented precisely once, the variable indices of each subset are put in a special order, namely the lexicographically smallest order in which each variable is connected to at least one of its predecessors. The following definition of these sequences of variable indices is recursive and therefore motivates an algorithm for the construction of the CS-tree for the Lazy Flipper. A small grid model and its complete CS-tree are depicted in Fig.~\ref{figure:cs-tree}. \begin{definition}[CSR-Sequence] \label{definition:subset-representing-sequence} Given an undirected graph $G = (V, E)$ whose $m \in \mathbb{N}$ vertices $V = \{1,\ldots,m\}$ are integer indices, every sequence that consists of only one index is called \emph{connected subset representing (CSR)}. Given $n \in \mathbb{N}$ and a CSR-sequence $(v_1, \ldots, v_n)$, a sequence $(v_1, \ldots, v_n, v_{n+1})$ of $n+1$ indices is called a \emph{CSR-sequence} precisely if the following conditions hold: (i) $v_{n+1}$ is not among its predecessors, i.e.~$\forall j \in \{1, \ldots, n\}: v_j \not= v_{n+1}$. (ii) $v_{n+1}$ is connected to at least one of its predecessors, i.e.~$\exists j \in \{1, \ldots, n\}: \{v_j, v_{n+1}\} \in E$. (iii) $v_{n+1} > v_1$. (iv) If $n \geq 2$ and $v_{n+1}$ could have been added at an earlier position $j \in \{2,\ldots,n\}$ to the sequence, fulfilling (i)--(iii), all subsequent vertices $v_j, \ldots, v_n$ are smaller than $v_{n+1}$, i.e. 
\begin{equation} \forall j \in \{2,\ldots,n\} \left( \{v_{j-1}, v_{n+1}\} \in E \Rightarrow \left( \forall k \in \{j,\ldots,n\}: v_k < v_{n+1} \right) \right) \enspace . \end{equation} \end{definition} Based on this definition, three functions are sufficient to recursively build the CS-tree $T$ of a graphical model $G$, starting from the root node. The function $q$ = \emph{growSubset}($T,G,p$) appends to a node $p$ in the CS-tree the smallest variable index that is not yet among the children of $p$ and fulfills (i)--(iv) for the CSR-sequence of variable indices on the path from $p$ to the root node. It returns the appended node, or the empty set if no suitable variable index exists. The function $q$ = \emph{firstSubsetOfSize}($T,G,n$) traverses the CS-tree on the current deepest level $n-1$, calling the function \emph{growSubset} for each leaf until a node can be appended and thus the first subset of size $n$ has been found. Finally, the function $q$ = \emph{nextSubsetOfSameSize}($T,G,p$) starts from a node $p$, finds its parent and traverses from there in level order, calling \emph{growSubset} for each node to find the length-lexicographic successor of the CSR-sequence associated with the node $p$, i.e.~the representative of the next subset of the same size. These functions are used by the Lazy Flipper (Alg.~\ref{algorithm:lazy-flipper}) to \emph{construct} the CS-tree. In contrast, the \emph{traversal} of already constructed parts of the CS-tree (when revisiting subsets of variables after successful flips) is performed by functions associated with tag lists which are defined in the following section. \subsection{Tag Lists} \label{section:tag-lists} Tag lists are used to tag variables that are affected by flips. A variable is affected by a flip either because it has been flipped itself or because it is connected (via a potential) to a flipped variable.
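Returning briefly to the CS-tree: the net effect of \emph{growSubset}, \emph{firstSubsetOfSize} and \emph{nextSubsetOfSameSize} -- visiting every connected subset exactly once, ordered by size -- can be mimicked at small scale by a much simpler set-based sketch. Hashed frozensets stand in for the CSR-sequences (trading the compact memory layout of the CS-tree for brevity); the graph is our reading of Fig.~\ref{figure:cs-tree}a:

```python
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]  # 2x3 block
nbrs = {v: set() for v in range(6)}
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

def connected_subsets_by_size(nbrs):
    """Enumerate every connected subset exactly once, ordered by size,
    using hashed frozensets in place of the CSR-sequences."""
    levels = [{frozenset([v]) for v in nbrs}]
    while True:
        grown = {s | {w} for s in levels[-1] for v in s for w in nbrs[v] - s}
        if not grown:
            return levels
        levels.append(grown)

sizes = [len(level) for level in connected_subsets_by_size(nbrs)]
print(sizes, sum(sizes))  # [6, 7, 10, 10, 6, 1] 40
```

The level sizes sum to the 40 connected subsets from the caption of Fig.~\ref{figure:cs-tree}.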
The tag list data structure comprises a Boolean vector in which each entry corresponds to a variable, indicating whether or not this variable is affected by recent flips. As the total number of variables can be large ($10^6$ is not exceptional) and possibly only a few variables are affected by flips, a list of all affected variables is maintained in addition to the vector. This list allows the algorithm to untag all tagged variables without re-initializing the entire Boolean vector. The two fundamental operations on a tag list $L$ are \emph{tag}$(L,x)$ which tags the variable with the index $x$, and \emph{untagAll}$(L)$. For the Lazy Flipper, three special functions are used in addition: Given a tag list $L$, a (possibly incomplete) CS-tree $T$, the graphical model $G$, and a node $s \in T$, $\textit{tagConnectedVariables}(L,T,G,s)$ tags all variables on the path from $s$ to the root node in $T$, as well as all nodes that are connected (via a potential in $G$) to at least one of these nodes. The function $s = \textit{firstTaggedSubset}(L,T)$ traverses the first level of $T$ and returns the first node $s$ whose variable is tagged (or the empty set if all variables are untagged). Finally, the function $t = \textit{nextTaggedSubset}(L,T,s)$ traverses $T$ in level order, starting with the successor of $s$, and returns the first node $t$ for which the path to the root contains at least one tagged variable. These functions, together with those of the CS-tree, are sufficient for the Lazy Flipper, Alg.~\ref{algorithm:lazy-flipper}. \section{The Lazy Flipper Algorithm} \label{section:algorithm} In the main loop of the Lazy Flipper (lines 2--26 in Alg.~\ref{algorithm:lazy-flipper}), the size $n$ of subsets is incremented until the limit $n_{\textnormal{max}}$ is reached (line 24). Inside this main loop, the algorithm falls into two parts, the \emph{exploration part} (lines 3--11) and the \emph{revisiting part} (lines 12--23). 
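The tag list itself is simple; a minimal sketch of the structure just described (method names and details are our choices):

```python
class TagList:
    """Boolean vector plus an index list, so that untagging touches only
    the tagged entries instead of re-initializing all m entries."""
    def __init__(self, m):
        self.flag = [False] * m
        self.indices = []            # variables currently tagged

    def tag(self, x):
        if not self.flag[x]:         # double tagging is absorbed
            self.flag[x] = True
            self.indices.append(x)

    def untag_all(self):
        for x in self.indices:       # O(number of tagged variables)
            self.flag[x] = False
        self.indices.clear()

L = TagList(1000)
L.tag(3); L.tag(3); L.tag(7)
print(L.indices)  # [3, 7]
L.untag_all()     # resets exactly two entries of the Boolean vector
```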
In the exploration part, flips of previously unseen subsets of $n$ variables are assessed. The current best configuration $c$ is updated in a greedy fashion, i.e.~whenever a flip yields a lower energy. At the same time, the CS-tree is grown, using the functions defined in Section \ref{section:cs-tree}. In the revisiting part, all subsets of sizes $1$ through $n$ that are affected by recent flips are assessed iteratively until no flip of any of these subsets reduces the energy (line 14). The indices of affected variables are stored in the tag lists $L_1$ and $L_2$ (cf.~Section \ref{section:tag-lists}). In practice, the Lazy Flipper can be stopped at any point, e.g.~when a time limit is exceeded, and the current best configuration $c$ taken as the output. It eventually reaches configurations for which it is guaranteed that no flip of $n$ or less variables can yield a lower energy because all such flips that could potentially lower the energy have been assessed (line 14). Such configurations are therefore guaranteed to be optimal within a Hamming radius of $n$: \begin{definition}[Hamming-$n$ bound] Given a function $E: \{0,1\}^m \rightarrow \mathbb{R}$, a configuration $c \in \{0,1\}^m$, and $n \in \mathbb{N}$, $E(c)$ is called a \emph{Hamming-}$n$ \emph{upper bound} on the minimum of $E$ precisely if $\forall c' \in \{0,1\}^m ( |c' - c|_1 \leq n \Rightarrow E(c) \leq E(c') )$. \end{definition} \begin{algorithm}[h!] 
\caption{Lazy Flipper} \label{algorithm:lazy-flipper} \KwIn{$G$: graphical model with $m \in \mathbb{N}$ binary variables, $c \in \{0,1\}^m$: initial configuration, $n_{\textnormal{max}} \in \mathbb{N}$: maximum size of subgraphs to be searched} \KwOut{$c \in \{0,1\}^m$ (modified): configuration corresponding to the smallest upper bound found ($c$ is optimal within a Hamming radius of $n_{\textnormal{max}}$).} $n \leftarrow 1$; CS-Tree $T \leftarrow \{$root$\}$; TagList $L_1 \leftarrow \emptyset$, $L_2 \leftarrow \emptyset$\; \Forever{ $s \leftarrow$ firstSubsetOfSize$(T, G, n)$\; \lIf{$s = \emptyset$}{% \Break \; } \While{$s \not= \emptyset$}{ \If{energyAfterFlip$(G, c, s) <$ energy($G, c$)}{ $c \leftarrow$ flip$(c, s)$\; tagConnectedVariables$(L_1, T, G, s)$\; } $s \leftarrow$ nextSubsetOfSameSize$(T, G, s)$\; } \Forever{ $s_2 \leftarrow$ firstTaggedSubset$(L_1, T)$\; \lIf{$s_2 = \emptyset$}{% \Break\; } \While{$s_2 \not= \emptyset$}{ \If{energyAfterFlip($G, c, s_2$) $<$ energy($G, c$)}{ $c \leftarrow$ flip$(c, s_2)$\; tagConnectedVariables$(L_2, T, G, s_2)$\; } $s_2 \leftarrow$ nextTaggedSubset$(L_1, T, s_2)$\; } untagAll$(L_1)$; swap$(L_1, L_2)$\; } \lIf{$n = n_{\textnormal{max}}$}{% \Break\; } $n \leftarrow n + 1$\; } \end{algorithm} \section{Experiments} \label{section:experiments} For a comparative assessment of the Lazy Flipper, four optimization problems of different complexity are considered, two simulated problems and two problems based on real-world data. For the sake of reproducibility, the simulations are described in detail and the models constructed from real data are available from the authors as supplementary material. The first problem is a ferromagnetic Ising model that is widely used in computer vision for foreground vs.~background segmentation \cite{boycov-2001}. Energy functions of this model consist of first and second order potentials that are submodular. The global minimum can therefore be found via a graph cut. 
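At toy scale, both the exact minimum and the Hamming-$n$ certificate of the previous section can be checked by brute force. The sketch below uses a tiny $3\times 3$ Ising grid with hypothetical random potentials (our illustrative stand-in; real instances are solved by a graph cut) and runs the $n_{\textnormal{max}} = 1$ special case of lazy flipping, i.e.~greedy single flips:

```python
import itertools
import random

random.seed(0)
side = 3                      # toy 3x3 Ising grid, 9 binary variables
pairs = [(r * side + c, r * side + c + 1)
         for r in range(side) for c in range(side - 1)] + \
        [(r * side + c, (r + 1) * side + c)
         for r in range(side - 1) for c in range(side)]
e0 = [random.random() for _ in range(side * side)]  # E_j(0); E_j(1) = 1 - E_j(0)
alpha = 0.5

def energy(x):
    unary = sum(e0[j] if x[j] == 0 else 1.0 - e0[j] for j in range(len(x)))
    return unary + alpha * sum(x[j] != x[k] for j, k in pairs)

exact = min(energy(x) for x in itertools.product((0, 1), repeat=side * side))

# Greedy single flips from the unary-optimal start (n_max = 1):
c = [0 if e <= 0.5 else 1 for e in e0]
improved = True
while improved:
    improved = False
    for j in range(len(c)):
        f = c[:]
        f[j] ^= 1
        if energy(f) < energy(c):
            c, improved = f, True

# c is now Hamming-1 optimal: no single flip lowers the energy,
# although energy(c) may still exceed the exact minimum.
```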
We simulate random instances of this model in order to measure how the runtime of lazy flipping depends on the size of the model and the coupling strength, and to compare Lazy Flipper approximations to the global optimum (Section~\ref{section:ising-model}). The second problem is a problem of finding optimal subgraphs on a grid. Energy functions of this model consist of first and fourth order potentials, of which the latter are not permutation submodular. We simulate difficult instances of this problem that cannot be solved to optimality, even when allowing several days of runtime. In this challenging setting, Lazy Flipper approximations and their convergence are compared to those of BP, TRBP and DD as well as to the lower bounds on local polytope relaxations obtained by DD (Section~\ref{section:optimal-subgraph-model}). The third problem is a graphical model for removing excessive boundaries from image over-segmentations that is related to the model proposed in \cite{zhang-2010}. Energy functions of this model consist of first, third and fourth order potentials. In contrast to the grid graphs of the Ising model and the optimal subgraph model, the corresponding factor graphs are irregular but still planar. The higher-order potentials are not permutation submodular but the global optimum can be found by means of MILP in approximately 10 seconds per model using one of the fastest commercial solvers (IBM ILOG CPLEX, version~12.1). Since CPLEX is closed-source software, the algorithm is not known in detail and we use it as a black box. The general method used by CPLEX for MILP is a branch-and-bound algorithm \cite{dakin-1965,land-1960}. 
100 instances of this model obtained from the 100 natural test images of the Berkeley Segmentation Database (BSD) \cite{martin-2001} are used to compare the Lazy Flipper to algorithms based on message passing and linear programming in a real-world setting where the global optimum is accessible (Section~\ref{section:2d-segmentation-model}). The fourth problem is identical to the third, except that instances are obtained from 3-dimensional volume images of neural tissue acquired by means of Serial Block Face Scanning Electron Microscopy (SBFSEM) \cite{denk-2004}. Unlike in the 2-dimensional case, the factor graphs are no longer planar. Whether exact optimization by means of MILP is practical depends on the size of the model. In practice, SBFSEM datasets consist of more than 2000$^3$ voxels. To be able to compare approximations to the \emph{global} optimum, we consider 16 models obtained from 16 SBFSEM volume sub-images of only 150$^3$ voxels for which the global optimum can be found by means of MILP within a few minutes (Section~\ref{section:3d-segmentation-model}). \subsection{Ferromagnetic Ising model} \label{section:ising-model} The ferromagnetic Ising model consists of $m \in \mathbb{N}$ binary variables $x_1,\ldots,x_m \in \{0,1\}$ that are associated with points on a 2-dimensional square grid and connected via second order potentials $E_{jk}(x_j, x_k) = 1-\delta_{x_j,x_k}$ ($\delta$: Kronecker delta) to their nearest neighbors. First order potentials $E_j(x_j)$ relate the variables to observed evidence in underlying data. The total energy of this model is the following sum in which $\alpha \in \mathbb{R}_0^+$ is a weight on the second order potentials, and $j \sim k$ indicates that the variables $x_j$ and $x_k$ are adjacent on the grid: \begin{equation} \forall x \in \{0,1\}^m: \quad E(x) = \sum_{j=1}^{m}{ E_j(x_j) } + \alpha \sum_{j=1}^{m}{ \sum_{ \substack{k=j+1 \\ k \sim j} }^{m}{ E_{jk}(x_j, x_k) } } \enspace . 
\end{equation} For each $\alpha \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$, an ensemble of ten simulated Ising models of $50 \cdot 50 = 2500$ variables is considered. The first order potentials $E_j$ are initialized randomly by drawing $E_j(0)$ uniformly from the interval $[0,1]$ and setting $E_j(1) := 1-E_j(0)$. The exact global minimum of the total energy is found via a graph cut. For each model, the Lazy Flipper is initialized with a configuration that minimizes the sum of the first order potentials. Upper bounds on the minimum energy found by means of lazy flipping converge towards the global optimum as depicted in Fig.~\ref{figure:ising-results}. Color scales and gray scales in this figure respectively indicate the maximum size and the total number of distinct subsets that have been searched, averaged over all models in the ensemble. It can be seen from this figure that upper bounds on the minimum energy are tightened significantly by searching larger subsets of variables, independent of the coupling strength $\alpha$. It takes the Lazy Flipper less than 100~seconds (on a single CPU of an Intel Quad Xeon E7220 at 2.93GHz) to exhaustively search all connected subsets of 6~variables. The amount of RAM required for the CS-tree (in bytes) is 24~times as high as the number of subsets (approximately 50~MB in this case) because each subset is stored in the CS-tree as a node consisting of three 64-bit integers: a variable index, the index of the parent node and the index of the level order successor (Section~\ref{section:cs-tree})\footnote{The size of the CS-tree becomes limiting for very large problems. However, for regular graphs, implicit representations can be envisaged that overcome this limitation.}. For $n_{\textnormal{max}} \in \{1,6\}$, configurations corresponding to the upper bounds on the minimum energy are depicted in Fig.~\ref{figure:ising-states}. 
It can be seen from this figure that all connected subsets of falsely set variables are larger than $n_{\textnormal{max}}$. For a fixed maximum subgraph size $n_{\textnormal{max}}$, the runtime of lazy flipping scales approximately linearly with the number of variables in the Ising model (cf.~Fig.~\ref{figure:runtime-scaling}). \begin{figure} \includegraphics[width=0.49\textwidth]{ising-2d-a01-curves} \includegraphics[width=0.49\textwidth]{ising-2d-a03-curves} \vspace{2ex}\\ \includegraphics[width=0.49\textwidth]{ising-2d-a05-curves} \includegraphics[width=0.49\textwidth]{ising-2d-a07-curves} \vspace{2ex}\\ \includegraphics[width=0.49\textwidth]{ising-2d-a09-curves} \includegraphics[width=0.45\textwidth]{ising-2d-rel} \caption{Upper bounds on the minimum energy of a graphical model can be found by flipping subsets of variables. The deviation of these upper bounds from the minimum energy is shown above for ensembles of ten random Ising models (Section~\ref{section:ising-model}). Compared to optimization by ICM where only one variable is flipped at a time, the Lazy Flipper finds significantly tighter bounds by flipping also larger subsets. The deviations increase with the coupling strength $\alpha$.
Color scales and gray scales indicate the size and the total number of searched subsets.} \label{figure:ising-results} \end{figure} \begin{figure} \centering \begin{tabular}{cccccc} $n_{\textnormal{max}}$ & $\alpha=0.1$ & $\alpha=0.3$ & $\alpha=0.5$ & $\alpha=0.7$ & $\alpha=0.9$\\ $1$ & \includegraphics[width=2cm]{d-alpha-1e-1-subgraph-1.png} & \includegraphics[width=2cm]{d-alpha-3e-1-subgraph-1.png} & \includegraphics[width=2cm]{d-alpha-5e-1-subgraph-1.png} & \includegraphics[width=2cm]{d-alpha-7e-1-subgraph-1.png} & \includegraphics[width=2cm]{d-alpha-9e-1-subgraph-1.png} \\ $6$ & \includegraphics[width=2cm]{d-alpha-1e-1-subgraph-6.png} & \includegraphics[width=2cm]{d-alpha-3e-1-subgraph-6.png} & \includegraphics[width=2cm]{d-alpha-5e-1-subgraph-6.png} & \includegraphics[width=2cm]{d-alpha-7e-1-subgraph-6.png} & \includegraphics[width=2cm]{d-alpha-9e-1-subgraph-6.png} \\ $\infty$ & \includegraphics[width=2cm]{gc-alpha-1e-1.png} & \includegraphics[width=2cm]{gc-alpha-3e-1.png} & \includegraphics[width=2cm]{gc-alpha-5e-1.png} & \includegraphics[width=2cm]{gc-alpha-7e-1.png} & \includegraphics[width=2cm]{gc-alpha-9e-1.png} \\ \end{tabular} \caption{The configurations found by the Lazy Flipper converge to a global optimum as the search depth $n_{\textnormal{max}}$ increases. For Ising models with different coupling strengths $\alpha$ (columns), deviations from the global optimum ($n_{\textnormal{max}} = \infty$) are depicted in blue (false 0) and orange (false 1), for $n_{\textnormal{max}} \in \{1,6\}$. As the Lazy Flipper is greedy, these approximate solutions highly depend on the initialization and on the order in which subsets are visited.} \label{figure:ising-states} \end{figure} \begin{figure} \centering \includegraphics[height=6cm]{runtime_scaling} \caption{For a fixed maximum subgraph size ($n_{\textnormal{max}} = 6$), the runtime of lazy flipping scales only slightly more than linearly with the number of variables in the Ising model. 
It is measured for coupling strengths $\alpha=0.25$ (upper curve) and $\alpha=0.75$ (lower curve). Error bars indicate the standard deviation over 10 random models, and lines are fitted by least squares. Lazy flipping takes longer ($0.0259$ seconds per variable) for $\alpha=0.25$ than for $\alpha=0.75$ ($0.0218$~s/var) because more flips are successful and thus initiate revisiting.} \label{figure:runtime-scaling} \end{figure} \subsection{Optimal Subgraph Model} \label{section:optimal-subgraph-model} The optimal subgraph model consists of $m \in \mathbb{N}$ binary variables $x_1,\ldots,x_m \in \{0,1\}$ that are associated with the edges of a 2-dimensional grid graph. A subgraph is defined by those edges whose associated variables attain the value 1. Energy functions of this model consist of first order potentials, one for each edge, and fourth order potentials, one for each node $v \in V$ in which four edges $(j,k,l,m) = \mathcal{N}(v)$ meet: \begin{equation} \forall x \in \{0,1\}^m: \quad E(x) = \sum_{j=1}^{m}{ E_j(x_j) } \ + \hspace{-6mm} \sum_{(j,k,l,m) \in \mathcal{N}(V)}{ \hspace{-6mm} E_{j k l m}(x_j, x_k, x_l, x_m) } \enspace . \end{equation} All fourth order potentials are equal, penalizing dead ends and branches of paths in the selected subgraph: \begin{equation} E_{j k l m}(x_j, x_k, x_l, x_m) = \begin{cases} 0.0 & \textnormal{if}\ s = 0\\ 100.0 & \textnormal{if}\ s = 1\\ 0.6 & \textnormal{if}\ s = 2\\ 1.2 & \textnormal{if}\ s = 3\\ 2.4 & \textnormal{if}\ s = 4 \end{cases} \quad \textnormal{with} \quad s = x_j + x_k + x_l + x_m\ . \end{equation} An ensemble of 16 such models is constructed by drawing the unary potentials at random, exactly as for the Ising models. Each model has 19800 variables, the same number of first order potentials, and 9801 fourth order potentials. 
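The fourth order potential depends only on $s$, the number of selected edges at a node; a direct transcription of the table above (the function name is ours):

```python
def fourth_order_potential(xj, xk, xl, xm):
    """Penalty as a function of s = xj + xk + xl + xm, the number of
    selected edges at a node: dead ends (s = 1) are penalized heavily,
    branchings (s = 3, 4) mildly."""
    table = {0: 0.0, 1: 100.0, 2: 0.6, 3: 1.2, 4: 2.4}
    return table[xj + xk + xl + xm]

print(fourth_order_potential(1, 1, 0, 0))  # 0.6: a path passing through the node
```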
Approximate optimal subgraphs are found by Min-Sum Belief Propagation (BP) with parallel message passing \cite{pearl-1988,kschischang-2001} and message damping \cite{murphy-1999}, by Tree-reweighted Belief Propagation (TRBP) \cite{wainwright-2008}, by Dual Decomposition (DD) \cite{komodakis-2010,kappes-2010} and by lazy flipping (LF). DD affords also lower bounds on the minimum energy. Details on the parameters of the algorithms and the decomposition of the models are given in Appendix~\ref{section:parameters}. Bounds on the minimum energy converge with increasing runtime, as depicted in Fig.~\ref{figure:subgraph-problem-curves}. It can be seen from this figure that Lazy Flipper approximations converge fast, reaching a smaller energy after 3~seconds than the other approximations after 10000~seconds. Subgraphs of up to 7 variables are searched, using approximately 2.2~GB of RAM for the CS-tree. A gap remains between the energies of all approximations and the lower bound on the minimum energy obtained by DD. Thus, there is no guarantee that any of the problems has been solved to optimality. However, the gaps are upper bounds on the deviation from the global optimum. They are compared at $t=10000$~s in Fig.~\ref{figure:subgraph-problem-curves}. For any model in the ensemble, the energy of the Lazy Flipper approximation is less than 4\% away from the global optimum, a substantial improvement over the other algorithms for this particular model. \begin{figure} \centering \begin{tabular}{ll} \includegraphics[height=5.6cm]{subgraph-2d-bp} & \includegraphics[height=5.6cm]{subgraph-2d-trbp} \\ \includegraphics[height=5.6cm]{subgraph-2d-dd} & \includegraphics[height=5.6cm]{subgraph-2d-lf} \end{tabular} \includegraphics[height=4.8cm]{subgraph-2d-res} \caption{Approximate solutions to the optimal subgraph problem (Section~\ref{section:optimal-subgraph-model}) are found by BP, TRBP, DD and the Lazy Flipper (LF). 
Depicted are the median, minimum and maximum (over 16 models) of the corresponding energies. DD affords also lower bounds on the minimum energy. The mean search depth of LF ranges from 1 (yellow) to 7 (red). At $t=10^4$~s, the energies of LF approximations come close to the lower bounds obtained by DD and thus, to the global optimum.} \label{figure:subgraph-problem-curves} \end{figure} \subsection{Pruning of 2D Over-Segmentations} \label{section:2d-segmentation-model} The graphical model for removing excessive boundaries from image over-segmen\-tations contains one binary variable for each boundary between segments, indicating whether this boundary is to be removed (0) or preserved (1). First order potentials relate these variables to the image content, and non-submodular third and fourth order potentials connect adjacent boundaries, supporting the closedness and smooth continuation of preserved boundaries. The energy function is a sum of these potentials: $\forall x \in \{0,1\}^m$ \begin{equation} E(x) = \sum_{j=1}^{m}{ E_j(x_j) } + \hspace{-3mm} \sum_{(j,k,l) \in J}{ \hspace{-3mm} E_{jkl}(x_j, x_k, x_l) } + \hspace{-3mm} \sum_{(j,k,l,p) \in K}{ \hspace{-4mm} E_{jklp}(x_j, x_k, x_l, x_p) } \enspace . \end{equation} We consider an ensemble of 100 such models obtained from the 100 BSD test images \cite{martin-2001}. On average, a model has $8845 \pm 670$ binary variables, the same number of unary potentials, $5715 \pm 430$ third order potentials and $98 \pm 18$ fourth order potentials. Each variable is connected via potentials to at most six other variables, a sparse structure that is favorable for the Lazy Flipper. BP, TRBP, DD and the Lazy Flipper solve these problems approximately, thus providing upper bounds on the minimum energy. The differences between these bounds and the global optimum found by means of MILP are depicted in Fig.~\ref{figure:2d-seg-problem-curves}. 
It can be seen from this figure that, after 200 seconds, Lazy Flipper approximations provide a tighter upper bound on the global minimum in the median than those of the other three algorithms. BP and DD have a better peak performance, solving one problem to optimality. The Lazy Flipper reaches a search depth of 9 after around 1000 seconds for these sparse graphical models using roughly 720~MB of RAM for the CS-tree. At $t=5000$~s and on average over all models, its approximations deviate by only 2.6\% from the global optimum. \begin{figure} \centering \begin{tabular}{ll} \includegraphics[height=5.6cm]{seg-2d-bp} & \includegraphics[height=5.6cm]{seg-2d-trbp} \\ \includegraphics[height=5.6cm]{seg-2d-dd} & \includegraphics[height=5.6cm]{seg-2d-lf} \end{tabular} \includegraphics[height=4.8cm]{seg-2d-res} \caption{Approximate solutions to the problem of removing excessive boundaries from over-segmentations of natural images. The search depth of the Lazy Flipper, averaged over all models in the ensemble, ranges from 1 (orange) to 9 (red). At $t=5000$~s, $3 \cdot 10^7$ subsets are stored in the CS-tree.} \label{figure:2d-seg-problem-curves} \end{figure} \subsection{Pruning of 3D Over-Segmentations} \label{section:3d-segmentation-model} The model described in the previous section is now applied in 3D to remove excessive boundaries from the over-segmentation of a volume image. In an ensemble of 16 such models obtained from 16 SBFSEM volume images, models have on average $16748 \pm 1521$ binary variables (and first order potentials), $26379 \pm 2502$ potentials of order~3, and $5081 \pm 482$ potentials of order~4. For BP, TRBP, DD and Lazy Flipper approximations, deviations from the global optimum are shown in Fig.~\ref{figure:3d-seg-problem-curves}. It can be seen from this figure that BP performs exceptionally well on these problems, providing approximations whose energies deviate by only 0.4\% on average from the global optimum. 
One reason is that most variables influence many (up to 60) potential functions, and BP can propagate local evidence from all these potentials. Variables are connected via these potentials to as many as 100 neighboring variables, which hampers the exploration of the search space by the Lazy Flipper: it reaches only a search depth of 5 after 10000 seconds, using 4.8~GB of RAM for the CS-tree, and yields worse approximations than BP, TRBP and DD for these models. In practical applications where volume images and the corresponding models are several hundred times larger and can no longer be optimized exactly, it matters whether one can further improve upon the BP approximations. Dashed lines in the first plot in Fig.~\ref{figure:3d-seg-problem-curves} show the result obtained when initializing the Lazy Flipper with the BP approximation at $t=100$~s. This reduces the deviation from the global optimum at $t=50000$~s from 0.4\% on average over all models to 0.1\%. \begin{figure} \centering \begin{tabular}{ll} \includegraphics[height=5.6cm]{seg-3d-bp} & \includegraphics[height=5.6cm]{seg-3d-trbp} \\ \includegraphics[height=5.6cm]{seg-3d-dd} & \includegraphics[height=5.6cm]{seg-3d-lf} \end{tabular} \includegraphics[height=4.8cm]{seg-3d-res} \caption{Approximate solutions to the problem of removing excessive boundaries from over-segmentations of 3-dimensional volume images. The search depth of the Lazy Flipper, averaged over all models in the ensemble, ranges from 1 (yellow) to 5 (purple). After $50000$~s, $2 \cdot 10^8$ subsets are stored in the CS-tree. Dashed lines in the first plot show the result obtained when initializing the Lazy Flipper with the BP approximation at $t=100$~s.
This reduces the deviation from the global optimum at $t=50000$~s from 0.4\% on average over all models to 0.1\%.} \label{figure:3d-seg-problem-curves} \end{figure} \section{Conclusion} The optimum of a function of binary variables that decomposes according to a graphical model can be found by an exhaustive search over only the connected subgraphs of the model. We implemented this search, using a CS-tree to efficiently and uniquely enumerate the subgraphs. The C++ source code is available from \url{http://hci.iwr.uni-heidelberg.de/software.php}. Our algorithm is guaranteed to converge to a global minimum when searching through all subgraphs, which is typically intractable. With limited runtime, approximations can be found by restricting the search to subgraphs of a given maximum size. Simulated and real-world problems exist for which these approximations compare favorably to those obtained by message passing and sub-gradient descent. For large-scale problems, the applicability of the Lazy Flipper is limited by the memory required for the CS-tree. However, for regular graphs, this limit can be overcome by an implicit representation of the CS-tree that is the subject of future research. \section*{Acknowledgments} Acknowledgement pending approval by the acknowledged individuals.
\section{Introduction}\label{s:intro} One of the motivating questions of 3--dimensional contact topology is to characterize those (closed, oriented) 3--manifolds which admit (positive, cooriented) tight contact structures. This question was answered recently for Seifert fibered 3--manifolds~\cite{Duke}, but the general case is still wide open. A particular family of 3--manifolds is given by those which can be presented as surgery along a knot in $S^3$. It seems reasonable to expect that the use of the invariant $\widehat{\mathcal L}(\kappa)$ for Legendrian knots in $S^3$ defined in~\cite{LOSS} together with contact geometric constructions might provide a way to find tight examples on many such surgeries. Some justification for such an expectation is provided by a result of Sahamie \cite{bijan}, showing that if the Legendrian invariant $\widehat{\mathcal L}(\kappa)$ of a Legendrian knot $\kappa$ vanishes, then the contact Ozsv\'ath--Szab\'o invariant $c(\xi_{+1}(\kappa))$ of the result of contact $(+1)$--surgery along $\kappa$ is also zero. On the other hand, for a Legendrian knot $\kappa$ in the standard contact $S^3$ satisfying $\tb(\kappa )=2g_s(\kappa )-1>0$ (where $g_s(\kappa )$ denotes the smooth 4--ball genus of the knot type of $\kappa$) it was shown in \cite{LS1} that the result of contact $(+1)$--surgery has nonvanishing contact Ozsv\'ath--Szab\'o invariant, implying in particular tightness for the contact structure. (The nonvanishing of this invariant implies tightness, while a contact structure with vanishing invariant might be either tight or overtwisted.) In this paper we extend this nonvanishing result to knots with other properties, allowing contact surgeries with higher coefficients. Given a knot type $K\subset S^3$, let the \emph{maximal self--linking number} of $K$ be the largest self--linking number of a transverse representative of $K$ (with respect to the standard contact structure $\xi_{st}$). Also, denote the Seifert genus of $K$ by $g(K)$.
\begin{thm}\label{t:mainsl} Let $K\subset S^3$ be a knot type with maximal self--linking number equal to $2g(K)-1$. Then, for $r\geq 2g(K)$ the 3--manifold $S^3_{r}(K)$ carries tight contact structures. \end{thm} Examples of knots satisfying the assumptions of the above theorem are provided by strongly quasi-positive, fibered knots in $S^3$. In particular, iterated torus knots $K((p_1,q_1), \ldots, (p_k, q_k))$ with all $p_i, q_i>0$ are such examples. According to \cite{EH2} the (2,3)--cable $K_{2,3}$ of the (2,3) torus knot $T_{2,3}$ provides an example of a knot for which Theorem~\ref{t:mainsl} applies while the previous result from \cite{LS1} does not: the maximal self--linking of $K_{2,3}$ is equal to 7 (which is equal to $2g(K_{2,3}) -1$) while the maximal Thurston--Bennequin number of $K_{2,3}$ is 6. By taking the connected sum of $n$ copies of this knot, the difference between the maximal self--linking and the maximal Thurston--Bennequin number can be made, in fact, arbitrarily large. Related, prime knot examples for the same phenomenon are provided by $(p,q)$--cables ($q>p\geq 1$) $K_{p,q}$ of the (2,3) torus knot $T_{2,3}$: according to \cite{bulent} the maximal self-linking number of $K_{p,q}$ (which again coincides with $2g(K_{p,q})-1$) is equal to $pq+q-p$, while the maximal Thurston--Bennequin number of $K_{p,q}$ is $pq$. \bigskip We found it convenient to organize the surgery theoretic information about a Legendrian (and about a transverse) knot into an invariant which takes its values in Heegaard Floer homology groups (and ultimately in the inverse limit of some of these groups). 
Although the resulting surgery invariant $\tilde c$ shares a number of properties with the Legendrian (and transverse) knot invariants introduced in \cite{LOSS}, we found a vanishing result for $\tilde c$ (given in Theorem~\ref{t:main}) which is, according to a recent result of Vela-Vick \cite{SVV}, in sharp contrast with the corresponding behaviour of the Legendrian invariant $\widehat{\mathcal L}$ of \cite{LOSS}. In order to state our results we need some preliminary notation. Let $Y$ be a closed, oriented 3--manifold and $K\subset Y$ a knot type. Let $\mathcal F_K$ be the set of framed isotopy classes of framed knots in the (unframed) knot type $K$. We will follow the usual practice of referring to the elements of $\mathcal F_K$ as the ``framings'' of $K$. Recall that for $K$ null--homologous $\mathcal F_K$ is an affine $\mathbb Z$--space, and that even if $K$ is not null--homologous this is still true if $Y$ is not of the form $Y'\# S^1\times S^2$~\cite{Ch, HP}. For $k\in\mathbb Z$ and $f\in\mathcal F_K$, we shall denote the result of $k$ acting on $f$ by $f+k$. When $\mathcal F_K$ is an affine $\mathbb Z$--space, it inherits a natural linear order from $\mathbb Z$: if $f, g\in\mathcal F_K$ with $f=g+k$, $k\in\mathbb Z$, then $f\geq g$ if and only if $k\geq 0$. We will denote by $Y_f(K)$ the 3--manifold resulting from surgery on $Y$ along $K$ with framing $f$. Given a contact 3--manifold $(Y,\xi)$, a {\em framed Legendrian knot} in $(Y,\xi)$ is a pair $(\kappa,f)$, where $\kappa\subset (Y,\xi)$ is a Legendrian knot and $f\in\mathcal F_K$ is a framing of the topological type $K$ of $\kappa$. A {\em framed transverse knot} in $(Y,\xi)$ is a pair $(\tau,f)$, where $\tau\subset (Y,\xi)$ is a transverse knot and $f\in\mathcal F_K$ is a framing of the topological type $K$ of $\tau$. Denote by $\mathbb T(Y,\xi,K,f)$ the set of transverse isotopy classes of framed transverse knots $(\tau,f)$ in $(Y,\xi)$ with $\tau$ in the topological type $K$.
Let $\Cont(Y)$ be the set of isomorphism classes of contact structures on $Y$. Fix a transverse knot $\tau \subset (Y, \xi )$ in the knot type $K$. By considering a Legendrian approximation $\kappa$ of $\tau$, and by applying appropriate contact surgery along $\kappa$ (where the exact meaning of `appropriate' will be clarified in Subsection~\ref{ss:cosu}), a contact structure $\tilde I(\xi , \tau , f)$ can be defined on the 3--manifold $Y_f(K)$. \begin{thm}\label{t:maintransverse} Let $Y$ be a closed, oriented 3--manifold and $K$ a knot type in $Y$. Suppose that either $K$ is null--homologous or $Y$ is not of the form $Y'\# S^1\times S^2$. Given a contact structure $\xi$ on $Y$ and a framing $f$ on $K\subset Y$, there is a well--defined map \[ \begin{matrix} \mathbb T(Y,\xi,K,f) & \longrightarrow & \Cont(Y_f(K))\\ {[(\tau, f)]} & \longmapsto & \tilde I(\xi,\tau,f) \end{matrix} \] \end{thm} In~\cite{OSz-cont} Ozsv\'ath and Szab\'o associated an element of the Heegaard Floer group $\widehat{HF}(-Y)$ to every contact 3--manifold $(Y,\xi)$. By fixing an identification between the diffeomorphic 3--manifolds $Y_f(\tau)$ and $Y_f(K)$ we get a family of Heegaard Floer elements \begin{equation*} \tilde c(\xi,\tau,f):= c(\tilde I(\xi,\tau,f))\in \widehat{HF}(-Y_f(K)),\quad f\in\mathcal F_K \end{equation*} for every transverse knot $\tau \subset (Y, \xi )$ representing the knot type $K$. (In this paper we always consider Heegaard Floer homology with $\mathbb Z/2\mathbb Z$ coefficients.) The elements themselves might depend on the chosen identification of $Y_f(\tau)$ with $Y_f(K)$; their vanishing/nonvanishing properties, on the other hand, are independent of this choice. Since in the following we will exclusively focus on vanishing/nonvanishing questions, we shall not mention the above identification again. The invariant $\tilde c$ is non--trivial.
In fact, in Example~\ref{exa:ketto} we show, using the main result of~\cite{LS1}, that if $\tau$ is the link of an isolated curve singularity in the standard contact 3--sphere $(S^3,\xi_{\rm st})$ then $\tilde c(\xi_{\rm st},\tau, f_S+2g_s(K))\neq 0$, where $f_S$ is the framing defined by a Seifert surface of $K$, and $g_s(K)$ is the slice genus of $K$. In Section~\ref{s:def} we also show (Corollary~\ref{c:defin}) that if $\tilde c(\xi,\tau,f)\neq 0$ then $\tilde c(\xi,\tau,g)\neq 0$ for every $g\geq f$. By using appropriate cobordisms and maps induced by them, an inverse limit $H(Y,K)$ of Heegaard Floer groups of results of surgeries of $Y$ along $K$ can be defined, and we show that the family $\left(\tilde c (\xi , \tau , f)\right)_{f\in\mathcal F_K}$ defines a single element $\tilde c (\xi , \tau )$ in this limit (see Proposition~\ref{p:invlimit}). Notice that $\tilde c$ is defined for a transverse knot $\tau$ through its Legendrian approximations, a feature similar to the definition of the transverse invariant $\widehat {\mathcal {T}}$ of \cite{LOSS} (resting on the corresponding Legendrian invariant $\widehat{\mathcal L}$). For $\tilde c$, however, we have the following vanishing result which shows, in particular, that $\tilde c(\xi,\tau )$ behaves quite differently from the transverse invariant $\widehat {\mathcal {T}}$ of~\cite{LOSS}. \begin{thm}\label{t:main} Let $\Sigma$ be an oriented surface with boundary and $\phi\co\Sigma\to\Sigma$ an orientation--preserving diffeomorphism which restricts to the identity on a collar around $\partial\Sigma$. Let $(Y,\xi_{(\Sigma,\phi)})$ be the contact 3--manifold compatible with the open book decomposition induced by $(\Sigma,\phi)$. Suppose that $b_1(Y)=0$ and $c(\xi_{(\Sigma,\phi)}) = 0$, and let $\tau\subset (Y,\xi_{(\Sigma,\phi)})$ be a component of the boundary of $\Sigma$ viewed as the binding of the open book. Then, $\tilde c(\xi_{(\Sigma,\phi)},\tau)=0$.
\end{thm} Theorem~\ref{t:main} should be contrasted with the main result of~\cite{SVV}, which says that the transverse invariant $\widehat {\mathcal {T}}$ of~\cite{LOSS} is nonvanishing for a binding of an open book. (See also \cite{ESVV} for the case of disconnected bindings.) The following nonvanishing result provides the desired construction of tight contact structures on certain surgered 3--manifolds, and will serve as the main ingredient in the proof of Theorem~\ref{t:mainsl}. \begin{thm}\label{t:main2} Suppose that the open book decomposition induced by $(\Sigma,\phi)$ is compatible with the standard contact structure $\xi_{st}$ on $S^3$. Let $\tau\subset S^3$ be a binding component of the open book decomposition having knot type $K$ and self--linking number $\sel(\tau)=2g(K)-1$, where $g(K)$ is the Seifert genus of $K$. Then, $\tilde c(\xi_{st},\tau )\neq 0$. In fact, if $f_S$ denotes the Seifert framing of $\tau$, the Heegaard Floer homology element $\tilde c(\xi_{st},\tau, f)$ is nonzero for each $f\geq f_S+2g(K)$. \end{thm} The paper is organized as follows. In Section~\ref{s:contact} we establish the properties of contact surgeries that we use to define the transverse invariants. In Section~\ref{s:def} we define the invariants, thus establishing Theorem~\ref{t:maintransverse}, and we prove their basic properties. In Section~\ref{s:main} we prove the vanishing Theorem~\ref{t:main} while in Section~\ref{s:main2} we give the proofs of the nonvanishing result given by Theorem~\ref{t:main2} and ultimately we prove Theorem~\ref{t:mainsl}. {\bf Acknowledgements:} We would like to thank John Etnyre and Matt Hedden for stimulating discussions, and the anonymous referee for useful suggestions which helped to improve the presentation. Part of this work was carried out while the authors visited the Mathematical Sciences Research Institute, Berkeley, as participants of the `Homology theories for knots and links' special semester. 
The present work is part of the authors' activities within CAST, a Research Network Program of the European Science Foundation. PL was partially supported by PRIN 2007, MIUR. AS was partially supported by OTKA Grant NK81203 and by the \emph{Lend\"ulet program}. \section{Contact surgeries and stabilizations}\label{s:contact} \label{sec:second} In this section we establish the properties of certain contact surgeries which will allow us to define the invariants and study their basic properties. Let $K\subset Y$ be a knot type in the closed 3--manifold $Y$. Let $\xi$ be a contact structure on $Y$ and $\kappa\subset(Y,\xi)$ a Legendrian knot belonging to $K$. Recall that, given a non--zero rational number $r\in\mathbb Q$, one can perform~\emph{contact $r$--surgery} along $\kappa$ to obtain a new contact 3--manifold $(Y',\xi')$~\cite{DG}. When $r=\pm 1$ the contact structure $\xi'$ is uniquely determined, therefore in this case we can safely use the notation $\xi_{\pm 1}(\kappa)$ for $\xi'$. In general, there are several possible choices for $\xi'$. According to~\cite[Proposition~7]{DG}, for $\frac{p}{q}>1$ every contact $\frac{p}{q}$--surgery on $\kappa$ is equivalent to a contact $(+1)$--surgery on $\kappa$ followed by a contact $\frac{p}{q-p}$--surgery on a Legendrian pushoff copy of $\kappa$. Moreover, by~\cite[Proposition~3]{DG} (see~\cite{DGS} as well) every contact $r$--surgery along $\kappa\subset (Y,\xi)$ with $r<0$ is equivalent to a Legendrian (i.e. $(-1)$--) surgery along a Legendrian link $\mathbb {L} =\cup_{i=0}^m L_i$ belonging to a set determined via a simple algorithm by the Legendrian knot $\kappa$ and the contact surgery coefficient $r$. The algorithm to obtain the set of $\mathbb {L}$'s is the following. Let \[ [a_0,\ldots,a_m]:= a_0 - \cfrac{1}{a_1 - \cfrac{1}{\ddots - \cfrac{1}{a_m} }}, \quad a_0,\ldots a_m\geq 2, \] be the continued fraction expansion of $1-r$. 
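The continued fraction notation can be evaluated mechanically. The following Python sketch (ours, for illustration only, not part of the paper) computes $[a_0,\ldots,a_m]$ with exact rational arithmetic and confirms the expansion $\frac{2n-1}{n-1}=[3,2,\ldots,2]$, which is the one relevant below for contact $n$--surgeries:

```python
from fractions import Fraction

def continued_fraction(coeffs):
    """Evaluate [a_0, ..., a_m] = a_0 - 1/(a_1 - 1/(... - 1/a_m)).
    With all coefficients >= 2 the running value stays > 1, so no
    division by zero can occur."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a - 1 / value
    return value

# For contact n-surgery (n a positive integer) the expansion of
# 1 - n/(1-n) = (2n-1)/(n-1) is [3, 2, ..., 2] with n-2 twos:
for n in range(2, 20):
    assert continued_fraction([3] + [2] * (n - 2)) == Fraction(2 * n - 1, n - 1)
```

For instance, $[3]=3=\frac{3}{1}$ and $[3,2]=3-\frac{1}{2}=\frac{5}{2}$, matching the cases $n=2$ and $n=3$.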
To obtain $L_0$, stabilize $a_0-2$ times a Legendrian push--off of $\kappa$ in every possible way. Then, stabilize $a_1-2$ times a Legendrian push--off of $L_0$ in every possible way. Repeat the above scheme for each of the remaining pivots of the continued fraction expansion. We are interested in contact $n$--surgeries, where $n$ is a positive integer. In this case, since \[ 1-\frac{n}{1-n}=\frac{2n-1}{n-1}=[3,\overbrace{2,\ldots,2}^{n-2}], \] there are only two choices for the stabilizations of $\kappa$, because the choice of the first one determines all the others. An orientation of $\kappa$ allows one to specify unambiguously such a choice, because it specifies the~\emph{negative stabilization} $\kappa_-$ and the~\emph{positive stabilization} $\kappa_+$ of $\kappa$. In a standard neighborhood $\mathbb R/\mathbb Z\times \mathbb R^2$ of $\kappa$ with coordinates $(\th,x,y)$ the contact structure is given by $\xi=\ker(dx+yd\th)$, and the $(\th,x)$--projections of $\kappa$, $\kappa_-$ and $\kappa_+$ are illustrated in Figure~\ref{f:stab}. \begin{figure}[h!] \labellist \small\hair 2pt \pinlabel $x$ at 13 140 \pinlabel $\kappa$ at 74 114 \pinlabel $\theta$ at 109 90 \pinlabel $x$ at 235 179 \pinlabel $\kappa_-$ at 330 166 \pinlabel $\theta$ at 360 130 \pinlabel $\theta$ at 352 17 \pinlabel $x$ at 234 70 \pinlabel $\kappa_+$ at 283 58 \endlabellist \centering \includegraphics[scale=0.5]{fig-1} \caption{Negative and positive Legendrian stabilizations} \label{f:stab} \end{figure} {From} now on we shall assume that every Legendrian knot $\kappa$ is oriented. \begin{defn}\label{d:xi} We denote by $\xi_n^-(\kappa)$ (respectively $\xi_n^+(\kappa)$) the contact structure corresponding to the choice of the~\emph{negative} stabilization $\kappa_-$ (respectively the~\emph{positive} stabilization $\kappa_+$) of the oriented Legendrian knot $\kappa$ (see Figure~\ref{f:stab}). \end{defn} Observe that $\kappa_+$ and $\kappa_-$ inherit an orientation from $\kappa$ in a natural way. 
We shall always assume that $\kappa_+$ and $\kappa_-$ are given the orientation induced by $\kappa$. \begin{lem}\label{l:changesign} Let $\kappa\subset (Y,\xi)$ be an oriented Legendrian knot. Then, for each $n>0$ we have \[ \xi_n^+(\kappa) = \xi_n^-(-\kappa). \] \end{lem} \begin{proof} The statement follows from the definition of contact surgery together with the easily checked fact that $(-\kappa)_-=-(\kappa_+)$ for every oriented Legendrian knot $\kappa$. \end{proof} We want to study the contact structure $\xi^-_n(\kappa)$ when $\kappa$ is a stabilization. The following lemma was proved in greater generality in~\cite{Ozb} using the main result of~\cite{HKM}. Here we give a simple and constructive proof. \begin{lem}\label{l:overtwisted} Let $\kappa\subset (Y,\xi)$ be an oriented Legendrian knot. Then, $\xi_n^-(\kappa_+)$ is an overtwisted contact structure for each $n>0$. \end{lem} \begin{proof} Ozbagci~\cite[Proposition~13]{Ozb} shows that for $r>0$ any contact $r$--surgery on a positive stabilization in which the Legendrian pushoffs are all negative stabilizations is overtwisted, by constructing a non--right veering compatible open book and appealing to the results of~\cite{HKM}. On the other hand, the lemma can be easily checked directly as follows. The left--hand side of Figure~\ref{f:otdisk} illustrates the contact surgery yielding $\xi_n^-(\kappa_+)$ in a standard neighborhood of $\kappa$. \begin{figure}[h!] \labellist \small\hair 2pt \pinlabel {\scriptsize $+1$} at 19 63 \pinlabel $\kappa_+$ at 171 62 \pinlabel {\scriptsize $-1$} at 209 88 \pinlabel {\scriptsize $-1$} at 223 114 \pinlabel $\kappa$ at 243 17 \pinlabel {\scriptsize $+1$} at 451 81 \pinlabel {\scriptsize $-1$} at 519 93 \pinlabel {\scriptsize $-1$} at 536 120 \pinlabel $\kappa_+$ at 573 62 \pinlabel $\tilde\kappa$ at 573 17 \endlabellist \centering \includegraphics[scale=0.5]{fig-2} \caption{The overtwisted disk in $\xi_n^-(\kappa_+)$. 
Notice that the knots labeled by $\kappa$ on the left and $\tilde\kappa$ on the right are not necessarily isotopic in the surgered manifold. Nevertheless, $\kappa _+$ and $\tilde\kappa$ provide the shaded annulus, which then caps off to an overtwisted disk with boundary equal to $\tilde\kappa$.} \label{f:otdisk} \end{figure} The right--hand side of the picture shows how the $n-1$ push--offs of $(\kappa_+)_-$ can be Legendrian isotoped until one can see the shaded overtwisted disk. \end{proof} The following proposition gives the key property of $\xi_n^-(\kappa)$ which yields transverse invariants. \begin{prop}\label{p:stab} Let $\kappa _-$ denote the negative Legendrian stabilization of the oriented Legendrian knot $\kappa \subset (Y,\xi)$. Then, for each $n>0$ the contact structures $\xi_{n+1}^-(\kappa_-)$ and $\xi_n^-(\kappa)$ are isomorphic. \end{prop} Before proving the proposition we recall the lantern relation. Let $A$ be a surface with boundary homeomorphic to a twice punctured annulus. If we denote by $\delta_i$ the positive Dehn twist along a curve parallel to the $i$--th boundary component of $A$, and by $\delta_{ij}$ the positive Dehn twist along a curve encircling the boundary components $i$ and $j$, the \emph{lantern relation} reads $\delta_1\delta_2\delta_3\delta_4=\delta_{12}\delta_{13}\delta_{23}$. In the proof of Proposition~\ref{p:stab} we are going to use the equivalent relation $\delta_{12}^{-1}\delta_1\delta_2\delta_3 = \delta_{13}\delta_{23}\delta_4^{-1}$. Figure~\ref{f:lantern} provides a graphical representation of this relation. In fact, whenever the twice punctured annulus embeds into a surface, the Dehn twists corresponding to the images of the curves on the diagram satisfy the lantern relation. \begin{figure}[h!] 
\labellist \footnotesize\hair 2pt \pinlabel $1$ at 114 184 \pinlabel $1$ at 369 185 \pinlabel $2$ at 80 122 \pinlabel $2$ at 338 122 \pinlabel $3$ at 78 70 \pinlabel $3$ at 338 70 \pinlabel $4$ at 112 3 \pinlabel $4$ at 388 8 \pinlabel $+$ at 14 147 \pinlabel $+$ at 49 131 \pinlabel $+$ at 374 135 \pinlabel $-$ at 15 96 \pinlabel $+$ at 53 81 \pinlabel $+$ at 366 93 \pinlabel $-$ at 334 16 \endlabellist \centering \includegraphics[scale=0.7]{fig-3} \caption{The relation $\delta_{12}^{-1}\delta_1\delta_2\delta_3 = \delta_{13}\delta_{23}\delta_4^{-1}$. Signs on the curves indicate whether right-handed $(+)$ or left-handed $(-)$ Dehn twists are to be performed.} \label{f:lantern} \end{figure} \begin{proof}[Proof of Proposition~\ref{p:stab}] Consider an open book for $\xi$ with a page which contains $\kappa$ and such that the page framing induced on $\kappa$ is equal to the contact framing of $\kappa$. After two Giroux stabilizations we can accommodate $\kappa$, $\kappa_-$ and $(\kappa_-)_-$ on the same page of the resulting open book, still with equal page and contact framings (see e.g.~\cite{Et0}). After performing a negative Dehn twist along $\kappa_-$ and positive Dehn twists along $n$ parallel copies of $(\kappa_-)_-$ we obtain an open book for $\xi^-_{n+1}(\kappa_-)$, as illustrated in Figure~\ref{f:isotopy1}. \begin{figure}[ht!]
\centering \subfloat[Open book for $\xi_{n+1}^-(\kappa_-)$]{ \labellist \small\hair 2pt \pinlabel $n$ at 480 72 \pinlabel $\kappa$ at 109 15 \pinlabel $\kappa_-$ at 113 67 \pinlabel $(\kappa_-)_-$ at 150 137 \pinlabel $+$ at 255 164 \pinlabel $+$ at 294 164 \pinlabel $+$ at 317 192 \pinlabel $+$ at 256 84 \pinlabel $-$ at 309 90 \pinlabel {\Large $\}$} at 465 72 \endlabellist \includegraphics[scale=0.5]{fig-4} \label{f:isotopy1}}\\ \subfloat[Open book for $\xi_{n+1}^-(\kappa_-)$ after applying the relation]{ \labellist \small\hair 2pt \pinlabel $+$ at 314 197 \pinlabel $+$ at 288 165 \pinlabel $+$ at 270 116 \pinlabel $+$ at 315 45 \pinlabel $n-1$ at 500 73 \pinlabel $-$ at 128 12 \pinlabel {\Large $\}$} at 464 73 \endlabellist \includegraphics[scale=0.45]{fig-5} \label{f:isotopy2}}\\ \subfloat[Open book for $\xi_n^-(\kappa)$]{ \labellist \small\hair 2pt \pinlabel $+$ at 318 194 \pinlabel $+$ at 289 164 \pinlabel $+$ at 290 105 \pinlabel $n-1$ at 496 72 \pinlabel $-$ at 106 13 \pinlabel $\kappa$ at 366 13 \pinlabel $\kappa_-$ at 390 69 \pinlabel {\Large $\}$} at 466 72 \endlabellist \includegraphics[scale=0.5]{fig-6} \label{f:isotopy4}} \caption{Isomorphism between $\xi_{n+1}^-(\kappa_-)$ and $\xi_n^-(\kappa)$} \end{figure} In Figure~\ref{f:isotopy2} we see what happens to the open book for $\xi_{n+1}^-(\kappa_-)$ when we apply the relation of Figure~\ref{f:lantern} inside the twice punctured annulus visible in the picture. The dashed arc of Figure~\ref{f:isotopy2} shows that the open book can be Giroux destabilized, yielding Figure~\ref{f:isotopy4}, which is an open book for $\xi_n^-(\kappa)$. \end{proof} The following corollary can be viewed as a generalization of~\cite[Theorem 1]{Et}. \begin{cor}\label{c:stab} Let $\kappa_1, \kappa_2\subset (Y,\xi)$ be two Legendrian knots. If after negatively stabilizing the same number of times $\kappa_1$ and $\kappa_2$ become Legendrian isotopic, then $\xi^-_n(\kappa_1)$ is isomorphic to $\xi^-_n(\kappa_2)$ for each $n>0$. 
\end{cor} \begin{proof} Suppose that $\kappa'_1$ and $\kappa'_2$ are Legendrian isotopic Legendrian knots obtained by negatively stabilizing $\kappa_1$ and $\kappa_2$ $m$ times. Then, for each $n>0$, the contact structure $\xi^-_{n+m}(\kappa'_1)$ is isotopic to $\xi^-_{n+m}(\kappa'_2)$. Applying Proposition~\ref{p:stab} $m$ times we conclude that $\xi^-_{n+m}(\kappa'_1)$ is isomorphic to $\xi^-_n(\kappa_1)$ and $\xi^-_{n+m}(\kappa'_2)$ is isomorphic to $\xi^-_n(\kappa_2)$. Therefore $\xi^-_n(\kappa_1)$ and $\xi^-_n(\kappa_2)$ are isomorphic for each $n>0$. \end{proof} Lemma~\ref{l:overtwisted} and Proposition~\ref{p:stab} admit slight refinements and alternative proofs, which potentially apply to more general situations (see Remark~\ref{r:propappl} below). We provide the alternative proofs in the following proposition, which is not used in the rest of the paper. \begin{prop}\label{p:stab2} Let $\kappa_-$, respectively $\kappa_+$, denote the negative, respectively positive, Legendrian stabilization of the oriented Legendrian knot $\kappa\subset (Y,\xi)$. Then, for each $n>0$ we have: \begin{enumerate} \item $\xi_{n+1}^-(\kappa_-)$ is isotopic to $\xi_n^-(\kappa)$; \item $\xi_n^-(\kappa_+)$ is overtwisted. \end{enumerate} \end{prop} \begin{proof} This simple proposition can be deduced using the foundational results of Ko Honda from~\cite{Ho}. We refer the reader to~\cite{Ho} for the necessary background in what follows. Let us quickly go over the details of the contact surgery construction. The contact framing together with the orientation on $\kappa$ determine an oriented basis $\mu,\lambda$ of the first integral homology group of the boundary of a standard neighborhood $\nu(\kappa)$ of $\kappa$. The basis determines identifications \[ \partial(\nu(\kappa))\cong \mathbb R^2/\mathbb Z^2,\quad -\partial(S^3\setminus\nu(\kappa))\cong\mathbb R^2/\mathbb Z^2. 
\] The surgery is determined by a gluing prescribed, with respect to the above identifications, by the matrix \[ A= \begin{pmatrix} n & -1\\ 1 & 0\\ \end{pmatrix}. \] The pull--back of the dividing set is determined by \[ A^{-1} \begin{pmatrix} 0\\1 \end{pmatrix} = \begin{pmatrix} 1\\ n \end{pmatrix}, \] so it has slope $n$ on $\partial(\nu(\kappa))$. Applying a diffeomorphism of the solid torus $\nu(\kappa)$, this slope can be changed to $n/(1+nh)$ for any $h\in\mathbb Z$. Therefore we can normalize it to lie between $-1$ and $-\infty$, obtaining slope $-n/(n-1)$. By~\cite{Ho} there are exactly two choices of tight contact structures on the solid torus with this boundary slope, corresponding to the two possible choices (positive or negative) of a basic slice with boundary slopes $-1$ and $-n/(n-1)$. With our conventions, choosing the negative basic slice gives rise to the contact structure $\xi_n^-$. The knots $\kappa_+$ and $\kappa_-$ can both be realized inside the neighborhood $\nu(\kappa)$. If $\nu(\kappa_\pm)\subset\nu(\kappa)$ is a standard neighborhood of $\kappa_\pm$, the (closure of the) difference $\nu(\kappa)\setminus\nu(\kappa_\pm)$ is a basic slice, which is positive for $\kappa_+$ and negative for $\kappa_-$~\cite{Ho}. Moreover, its boundary slopes with respect to the basis $\mu, \lambda$ are $-1$ on $\partial\nu(\kappa_\pm)$ and $\infty$ on $\partial\nu(\kappa)$. For each $n\geq 0$, we can perform contact $(n+1)$--surgery along $\kappa_\pm$ viewed as a Legendrian knot inside $\nu(\kappa)$, obtaining another contact solid torus $\mathbb T$ with convex boundary in standard form. $H_1(\partial\nu(\kappa_-);\mathbb Z)$ has a basis $\mu'$, $\lambda'$ such that, with the obvious identifications, $\mu'=\mu$ and $\lambda'=\lambda-\mu$.
Thus, since $\lambda=\mu'+\lambda'$, the identity \[ \begin{pmatrix} n+1 & -1\\ 1 & 0 \end{pmatrix}^{-1} \begin{pmatrix} 0 & 1\\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1\\ n+1 & n \end{pmatrix} \] implies that, up to applying a diffeomorphism of $\mathbb T$, the slopes of $\partial\nu(\kappa)$ and $\partial\nu(\kappa_\pm)$ can be assumed to be, respectively, $-n/(n-1)$ and $-(n+1)/n$. This shows that $\mathbb T$ can be decomposed as \[ \mathbb T = N\cup B, \] where $N$ is a standard neighborhood of a Legendrian curve with slope $-1$, and $B\cong T^2\times [0,1]$ has boundary slopes $-1$ and $-n/(n-1)$ and can be written as a union of two basic slices $B=B_1\cup B_2$, where $B_1$ has boundary slopes $(-1,-(n+1)/n)$ and $B_2=\nu(\kappa)\setminus\nu(\kappa_\pm)$, with the boundary slopes given above. Since $B_2$ is a basic slice, $B$ is a basic slice (i.e.~it is tight) if and only if $B_1$ and $B_2$ have the same sign as basic slices~\cite{Ho}. By definition, $\xi^-_{n+1}(\kappa_\pm)$ is the contact structure obtained by taking $B_1$ to be a~\emph{negative} basic slice. Since $B_2$ is positive for $\kappa_+$ and negative for $\kappa_-$, the analysis above proves (1) and (2) of the statement simultaneously. \end{proof} \begin{rem}\label{r:propappl} While the proof of Proposition~\ref{p:stab} only holds, as written, for closed contact 3--manifolds, both the statement and the proof of Proposition~\ref{p:stab2} remain valid even if $(Y,\xi)$ is open or has non--empty boundary. This allows one, at least in principle, to apply the approach of this paper in situations which are more general than the ones considered here. We hope to return to this issue in a future paper. \end{rem} \begin{rem}\label{r:posstab} Let $\xi_n^+(\kappa)$ be the contact structure corresponding to the choice of the~\emph{positive} stabilization $\kappa_+$ of $\kappa$.
Then, an argument analogous to that of the proof of Proposition~\ref{p:stab2} shows that $\xi_{n+1}^+(\kappa_+)$ is isotopic to $\xi_n^+(\kappa)$ and $\xi_{n+1}^+(\kappa_-)$ is overtwisted. Of course, this also follows from the fact that $(-\kappa)_-=-\kappa_+$ and Proposition~\ref{p:stab2}. In fact, \[ \xi_{n+1}^+(\kappa_+) = \xi_{n+1}^-(-\kappa_+) = \xi_{n+1}^-((-\kappa)_-) = \xi_n^-(-\kappa) = \xi_n^+(\kappa) \] and \[ \xi_{n+1}^+(\kappa_-) = \xi_{n+1}^-(-\kappa_-) = \xi_{n+1}^-((-\kappa)_+), \] which is overtwisted by Proposition~\ref{p:stab2}(2). \end{rem} \section{The invariants: definition and basic properties}\label{s:def} In this section we define the invariants, prove some of their properties, and present some examples. \subsection{Definition of the geometric invariant $I(\xi , \kappa , f)$} \label{ss:cosu} Let $K$ be the knot type of a Legendrian knot $\kappa\subset (Y,\xi)$, and let $t:=\tb(\kappa)\in\mathcal F_K$ be the Thurston--Bennequin invariant of $\kappa$, i.e.~the contact framing of $\kappa$. Suppose that either $K$ is null--homologous or $Y$ has no $S^1\times S^2$--summand. For each positive integer $n$, the contact structure $\xi_n^-(\kappa)$ lives on the closed 3--manifold $Y_{t+n}(K)$ obtained by performing topological surgery along $K$ corresponding to the framing $t+n$. Let $(\kappa,f)$ be a framed, oriented Legendrian knot in the contact 3--manifold $(Y,\xi)$, and let $\kappa'\subset (Y,\xi)$ be a Legendrian knot obtained by negatively stabilizing $\kappa$ sufficiently many times, so that $\tb(\kappa')<f$. In view of Proposition~\ref{p:stab}, the isomorphism class of the contact structure $\xi_{f-\tb(\kappa')}^-(\kappa')$ on $Y_f(K)$ does not depend on the choice of $\kappa'$ as long as $\tb(\kappa')<f$, therefore we can introduce the following: \begin{defn}\label{d:Ileginvariant} Assume that either the knot type $K$ is null--homologous or $Y$ is not of the form $Y'\# S^1\times S^2$.
Let $(\kappa,f)$ be a framed, oriented Legendrian knot in the contact 3--manifold $(Y,\xi)$ such that $\kappa$ has topological type $K$. Define $I(\xi,\kappa,f)$ to be the isomorphism class of the contact structure $\xi_{f-\tb(\kappa')}^-(\kappa')$ on $Y_f(K)$, where $\kappa'\subset (Y,\xi)$ is any negative stabilization of $\kappa$ such that $\tb(\kappa')<f$. \end{defn} \begin{prop}\label{p:sufflarge} Assume that either $K$ is null--homologous or $Y$ is not of the form $Y'\# S^1\times S^2$. Let $(\kappa,f)$ be a framed, oriented Legendrian knot in the contact 3--manifold $(Y,\xi)$ such that $\kappa$ has topological type $K$. Then, $I(\xi,\kappa,f)$ is overtwisted for each $f\leq\tb(\kappa)$. \end{prop} \begin{proof} By definition, $I(\xi,\kappa,f)$ is the isomorphism class of the contact structure $\xi^-_{f-\tb(\kappa')}(\kappa')$, where $\kappa'$ is any negative stabilization of $\kappa$ such that $\tb(\kappa')<f$. If $f\leq\tb(\kappa)$ we can choose $\kappa'$ so that $\tb(\kappa')=f-1$. We have $\kappa'=\kappa''_-$ for some oriented Legendrian knot $\kappa''$. Then, \[ \xi^-_{f-\tb(\kappa')}(\kappa')=\xi^-_1(\kappa''_-)=\xi^-_1(-(-\kappa'')_+) \] is overtwisted by Lemma~\ref{l:overtwisted}. \end{proof} Recall that transverse knots admit a preferred orientation and can be approximated, uniquely up to negative stabilization, by oriented Legendrian knots~\cite{EFM, EH}. Fix a transverse knot $\tau\subset (Y,\xi)$, and let $\kappa$ be a Legendrian approximation of $\tau$. Then, by~\cite{EFM, EH}, up to negative stabilizations the Legendrian knot $\kappa$ only depends on the transverse isotopy class of $\tau$. It follows immediately from Proposition~\ref{p:stab} that if $\kappa'\subset (Y,\xi)$ is a negative stabilization of the oriented Legendrian knot $\kappa\subset (Y,\xi)$, then for each framing $f$ we have $I(\xi,\kappa',f)=I(\xi,\kappa,f)$.
This observation allows us to give the following: \begin{defn}\label{d:Itransvinvariant} Assume that either $K$ is null--homologous or $Y$ is not of the form $Y'\# S^1\times S^2$. Let $(\tau,f)$ be a framed transverse knot in the contact 3--manifold $(Y,\xi)$ such that $\tau$ has topological type $K$. Define $\tilde I(\xi,\tau,f):=I(\xi,\kappa,f)$, where $\kappa$ is any Legendrian approximation of $\tau$. \end{defn} \begin{proof}[Proof of Theorem~\ref{t:maintransverse}] Since the choice of $\kappa$ is unique up to negative stabilization, the repeated application of Proposition~\ref{p:stab} verifies the result. \end{proof} \subsection{Heegaard Floer invariants} We now apply the Heegaard Floer contact invariant defined by Ozsv\'ath and Szab\'o \cite{OSz-cont}. \begin{defn}\label{d:cleginvariant} Let $Y$ be a closed, oriented 3--manifold, $K\subset Y$ a knot type and $f\in\mathcal F_K$. Assume that either $K$ is null--homologous or $Y$ is not of the form $Y'\# S^1\times S^2$. Given an oriented Legendrian knot $\kappa\subset (Y,\xi)$, define \[ c(\xi,\kappa,f):=c(I(\xi,\kappa,f))\in \widehat{HF}(-Y_f(K)), \] and given a transverse oriented knot $\tau\subset (Y,\xi)$, define \[ \tilde c(\xi,\tau,f):=c(\tilde I(\xi,\tau,f))\in \widehat{HF}(-Y_f(K)). \] \end{defn} \begin{rems} \begin{itemize} \item It follows immediately from the definition, Lemma~\ref{l:overtwisted} and Proposition~\ref{p:stab} that, for each $f\in\mathcal F_K$, $c(\xi,\kappa_-,f)=c(\xi,\kappa,f)$ and $c(\xi,\kappa_+,f)=0$. \item It follows from Proposition~\ref{p:sufflarge} that $c(\xi,\kappa,f)=0$ for each $f\leq \tb(\kappa)$. \item If the complement of a Legendrian knot $\kappa$ in $(Y,\xi)$ is overtwisted or has positive Giroux torsion, the same holds for $\xi_n^-(\kappa')$ for some stabilization $\kappa'$ of $\kappa$. Therefore, it follows from the results of~\cite{GHV, OSz-cont} that $c(\xi,\kappa,f)=0$ for each $f\in\mathcal F_K$. 
\end{itemize} \end{rems} The following examples show that the invariant $c(\xi,\kappa,f)$ is non--trivial. \begin{exa}\label{exa:egy} {\rm Consider the Legendrian unknot $\kappa\subset (S^3,\xi_{st})$ with Thurston--Bennequin number $-1$. (In this case $\kappa=-\kappa$, so we do not need to specify the orientation). Since the result of contact $(+1)$--surgery along $\kappa$ is the unique Stein fillable contact structure on $S^1\times S^2$, we get that $c(\xi_{st},\kappa,\tb(\kappa)+1)\neq 0$ (cf.~\cite[Lemma~5]{LS0}).} \end{exa} \begin{exa}\label{exa:ketto} {\rm Let $\kappa\subset (S^3,\xi_{st})$ be an oriented Legendrian knot with knot type $K$ such that \begin{equation}\label{e:tb} \tb(\kappa) = f_S+2g_s(K)-1>0, \end{equation} where $f_S$ is the framing defined by a Seifert surface of $K$, and $g_s(K)$ is the slice genus of $K$. Then, by~\cite[Proof of Theorem~1.1]{LS1}, $c(\xi_{st},\kappa,\tb(\kappa)+1)\neq 0$. As remarked in~\cite{LS1}, the knot types containing Legendrian knots which satisfy Condition~\eqref{e:tb} include all non--trivial algebraic knots, i.e.~non--trivial knots which are links of isolated curve singularities, as well as negative twist knots.} \end{exa} \subsection{The inverse limit construction} The invariants $\tilde c(\xi , \tau , f)$ can be conveniently organized as a single element in the inverse limit of certain Heegaard Floer homology groups. In the rest of this section we spell out the details of this construction. Let $Y$ be a closed, oriented 3--manifold and $K$ a knot type in $Y$. To each framing $f\in\mathcal F_K$ one can naturally associate a triangle of 3--manifolds and cobordisms (cf.~\cite[pp.~933--935]{LS1}). Let $Y_{f-1}(K)$ be the 3--manifold resulting from surgery along $K$ with framing $f-1$. The first manifold in the triangle is $Y$, the second is $Y_{f-1}(K)$ and the third one is $Y_f(K)$.
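Purely as a schematic summary of the triangle just described, one may display it as
\[
Y\longrightarrow Y_{f-1}(K)\longrightarrow Y_f(K)\longrightarrow Y,
\]
where each arrow stands for one of the cobordisms of the triangle; the cobordism from $Y_{f-1}(K)$ to $Y_f(K)$ is the $W_f$ constructed next.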
A cobordism $W_f$ from $Y_{f-1}(K)$ to $Y_f (K)$ can be given by considering a normal circle $N$ to $K$ in $Y$, equipping it with framing $f_S-1$ and, after the surgery on $K$ with framing $f-1$ has been performed, attaching a 4--dimensional 2--handle along $N$ with the chosen framing. Simple Kirby calculus shows that $W_f$ is indeed a cobordism between $Y_{f-1}(K)$ and $Y_f(K)$. When viewed upside down, $W_f$ induces a map \[ \widehat F_{\overline W_f}\co \widehat{HF}(-Y_f(K))\to \widehat{HF}(-Y_{f-1}(K)). \] Given framings $f\geq g$, define $\varphi_{g,f}$ to be the identity on $\widehat{HF}(-Y_f(K))$ if $f=g$, and the composition \[ \widehat{HF}(-Y_f(K))\xrightarrow{\widehat F_{\overline W_f}}\widehat{HF}(-Y_{f-1}(K))\longrightarrow\cdots \xrightarrow{\widehat F_{\overline W_{g+1}}}\widehat{HF}(-Y_g(K)) \] if $f>g$. Then, it is easy to check that the set \[ \left\{\left(\widehat{HF}(-Y_f(K)), \varphi_{g,f}\right)\right\} \] is an inverse system of $\mathbb Z/2\mathbb Z$--vector spaces and linear maps over the set $\mathcal F_K$, so we can form the inverse limit $\mathbb Z/2\mathbb Z$--vector space \[ H(Y,K):= \varprojlim\widehat{HF}(-Y_f(K)), \] which is the subspace \[ \{(x_f)\in\prod_{f\in\mathcal F_K}\widehat{HF}(-Y_f(K))\ |\ x_g=\varphi_{g,f}(x_f)\ \text{for $g\leq f$}\} \subset\prod_{f\in\mathcal F_K}\widehat{HF}(-Y_f(K)). \] We define $\tilde c(\xi,\tau)$ as the vector $\left(c(\tilde I(\xi,\tau,f))\right)_{f\in\mathcal F_K}$, which is, \emph{a priori}, an element of $\prod_{f\in\mathcal F_K}\widehat{HF}(-Y_f(K))$. \begin{prop}\label{p:invlimit} The invariant $\tilde c(\xi,\tau)$ is in $H(Y,K)$. \end{prop} \begin{proof} Choose $f\in\mathcal F_K$ and a negative stabilization $\kappa'$ of $\kappa$, with $\tb(\kappa')<f-1$. By~\cite{DG}, performing contact $(+1)$--surgery on an extra push--off copy of $\kappa'_-$ in the contact surgery presentation for $\xi_{f-\tb(\kappa')}^-(\kappa')$ gives $\xi_{f-1-\tb(\kappa')}^-(\kappa')$.
The corresponding 2--handle attachment gives an oriented 4--dimensional cobordism from $Y_f(K)$ to $Y_{f-1}(K)$, and it is easy to check that reversing the orientation of that cobordism gives exactly the oriented cobordism $\overline W_f$. By~\cite[Theorem~2.3]{OSz-cont} (see also~\cite[Theorem~2.2]{LS1}), we have \[ \widehat F_{\overline W_f}(c(\xi_{f-\tb(\kappa')}^-(\kappa'))) = c(\xi_{f-1-\tb(\kappa')}^-(\kappa')), \] i.e. \[ \varphi_{f-1,f}(c(I(\xi,\kappa,f)))=c(I(\xi,\kappa,f-1)). \] Since this holds for each $f\in\mathcal F_K$, and for $g\leq f$ we have \[ \varphi_{g,f} = \varphi_{g,g+1}\circ\cdots\circ\varphi_{f-1,f}, \] the statement is proved. \end{proof} Proposition~\ref{p:invlimit} immediately gives the following: \begin{cor}\label{c:defin} If $c(\xi,\kappa,g)\neq 0$ then $c(\xi,\kappa,f)\neq 0$ for every $f\geq g$. \qed \end{cor} \section{Proof of Theorem~\ref{t:main}}\label{s:main} Let $\Sigma$ be an oriented surface-with-boundary and $\phi\co\Sigma\to\Sigma$ an orientation--preserving diffeomorphism which restricts to the identity on a collar around $\partial\Sigma$. Let $(Y,\xi_{(\Sigma,\phi)})$ be a contact 3--manifold compatible with the open book decomposition induced by $(\Sigma,\phi)$. Let $\tau\subset (Y,\xi_{(\Sigma,\phi)})$ be a component of the boundary of $\Sigma$ viewed as the binding of the open book and let $f_\Sigma$ be the framing induced on $\tau$ by $\Sigma$.
\begin{prop}\label{p:cobident} There exists a Legendrian approximation $\kappa\subset (Y,\xi)$ to $\tau$ such that $\tb(\kappa)=f_\Sigma-1$ and, for each $n>0$, the contact structure $\xi^-_n(\kappa)$ admits a compatible open book with a binding component $\tau'$ having the following properties: \begin{itemize} \item Capping--off $\tau'$ gives back the open book $(\Sigma,\phi)$; \item Let $Z$ be the cobordism corresponding to capping--off $\tau'$, and let $X_{\kappa, n}$ be the topological cobordism obtained by attaching a 4--dimensional 2--handle along $\kappa$ with framing $\tb(\kappa)+n$. Then, $\overline Z=-X_{\kappa,n}$, i.e.~$Z$ is obtained from $X_{\kappa, n}$ by viewing it upside--down and reversing its orientation. \end{itemize} \end{prop} \begin{comment} We determine an open book compatible with the contact structure obtained by contact $(+n)$--surgery along a Legendrian approximation of $\tau$. We can express the monodromy $\phi$ as a product $\phi'$ of positive Dehn twists times $R^{-h}_{\partial\Sigma}$ for some $h>0$, where $R_{\partial\Sigma}$ is a positive Dehn twist along a simple curve parallel to $\partial\Sigma$. The pair $(\Sigma,\phi)$ is illustrated by the left--hand side of Figure~\ref{f:approx}, where the factor $\phi'$ is omitted. \end{comment} \begin{proof} As shown in~\cite[Lemma~3.1]{SVV}, any open book decomposition can be Giroux stabilized so that the page of the new open book $(\Sigma',\phi')$ contains a Legendrian approximation $\kappa$ of $\tau$ as a curve sitting on a page $\Sigma'$ and parallel to (a component of) $\partial\Sigma'$, with $\tb(\kappa)=f_{\Sigma'}=f_\Sigma-1$. This is illustrated in Figure~\ref{f:approx}a--b. \begin{figure}[ht!] 
\labellist \small\hair 2pt \pinlabel $\Sigma$ at 10 241 \pinlabel $\Sigma'$ at 607 233 \pinlabel $\Sigma''$ at 616 73 \pinlabel $\tau$ at 147 292 \pinlabel $\tau$ at 331 294 \pinlabel $\tau$ at 205 154 \pinlabel $+$ at 540 260 \pinlabel $+$ at 344 96 \pinlabel $+$ at 521 91 \pinlabel $\kappa$ at 385 252 \pinlabel $\kappa$ at 324 53 \pinlabel $\kappa_-$ at 199 90 \pinlabel $\text{\scriptsize (a)}$ at 83 175 \pinlabel $\text{\scriptsize (b)}$ at 450 175 \pinlabel $\text{\scriptsize (c)}$ at 365 4 \endlabellist \centering \includegraphics[scale=0.5]{fig-7} \caption{Legendrian approximation of the binding. Once again, the signs $\pm $ on the curves indicate whether right-- or left--handed Dehn twists are to be performed on the given curve.} \label{f:approx} \end{figure} As shown in Figure~\ref{f:approx}b--c, after a further Giroux stabilization both $\kappa$ and its negative stabilization $\kappa_-$ can be Legendrian realized on the page of an open book $(\Sigma'',\phi'')$. The knot $\kappa_-$ is parallel on $\Sigma''$ to a boundary component which coincides with $\tau$, as indicated in Figure~\ref{f:approx}c. Performing contact $(+n)$--surgery along $\kappa$ is equivalent to a contact $(+1)$--surgery along $\kappa$ plus contact $(-1)$--surgeries along $n-1$ parallel copies of $\kappa_-$. Therefore, the resulting contact structure is supported by the open book obtained by composing $\phi''$ with a negative Dehn twist along the curve corresponding to $\kappa$, as well as positive Dehn twists along $n-1$ parallel copies of the curve corresponding to $\kappa_-$. This is illustrated in Figure~\ref{f:surg}a. \begin{figure}[ht!] 
\labellist \small\hair 2pt \pinlabel {\scriptsize $+$} at 93 257 \pinlabel {\scriptsize $+$} at 93 225 \pinlabel {\scriptsize $+$} at 253 240 \pinlabel {\scriptsize $+$} at 416 237 \pinlabel {\scriptsize $+$} at 323 82 \pinlabel {\scriptsize $-$} at 191 200 \pinlabel $\tau'$ at 72 279 \pinlabel $\text{\scriptsize n-1}$ at 170 269 \pinlabel $\Sigma'$ at 402 74 \pinlabel $\Sigma''$ at 492 245 \pinlabel $\text{\scriptsize (a)}$ at 254 170 \pinlabel $\text{\scriptsize (b)}$ at 254 13 \endlabellist \centering \includegraphics[scale=0.5]{fig-8} \caption{Contact $n$--surgery and capping--off.} \label{f:surg} \end{figure} As shown in Figure~\ref{f:surg}a--b, capping--off the binding component denoted $\tau'$ in the picture yields the open book $(\Sigma',\phi')$. (Notice the cancellation of the left--handed Dehn twist of Figure~\ref{f:surg}a after the capping--off.) This proves the first part of the statement. In order to control what happens at the level of 4--dimensional 2--handles, we represent the surgeries inside a standard Legendrian neighborhood of $\kappa$, as illustrated in Figure~\ref{f:neighbor}a. \begin{figure}[ht!] \labellist \small\hair 2pt \pinlabel {\Huge $\}$} at 306 354 \pinlabel $n-1$ at 340 355 \pinlabel {\Huge $\}$} at 298 105 \pinlabel $n-1$ at 332 106 \pinlabel $\tau'$ at 306 404 \pinlabel $\tau'$ at 298 148 \pinlabel {\scriptsize $-2$} at 267 382 \pinlabel {\scriptsize $-2$} at 267 324 \pinlabel {\scriptsize $+1$} at 237 311 \pinlabel {\scriptsize $0$} at 264 155 \pinlabel {\scriptsize $-1$} at 264 133 \pinlabel {\scriptsize $-1$} at 264 75 \pinlabel {\scriptsize $+2$} at 215 62 \pinlabel {\scriptsize (a)} at 141 263 \pinlabel {\scriptsize (b)} at 141 15 \pinlabel $-1$ at 177 367 \endlabellist \centering \includegraphics[scale=0.55]{fig-9} \caption{Contact $(+n)$--surgery along $\kappa$ in a standard neighborhood.} \label{f:neighbor} \end{figure} \begin{figure}[ht!] 
\labellist \small\hair 2pt \pinlabel $\tau'$ at 226 369 \pinlabel {\scriptsize $-2$} at 167 365 \pinlabel {\scriptsize $-2$} at 153 321 \pinlabel {\scriptsize $-2$} at 231 313 \pinlabel {\scriptsize $-2$} at 510 343 \pinlabel {\scriptsize $-2$} at 510 313 \pinlabel {\scriptsize $-1$} at 125 330 \pinlabel {\scriptsize $-1$} at 563 355 \pinlabel {\scriptsize $-1$} at 503 297 \pinlabel {\scriptsize $+1$} at 231 276 \pinlabel {\scriptsize $+1$} at 595 294 \pinlabel {\scriptsize $0$} at 139 91 \pinlabel {\scriptsize $0$} at 521 91 \pinlabel {\small $n$} at 214 71 \pinlabel {\small $n+1$} at 588 73 \pinlabel {\scriptsize (a)} at 125 236 \pinlabel {\scriptsize (b)} at 530 236 \pinlabel {\scriptsize (c)} at 530 13 \pinlabel {\scriptsize (d)} at 125 13 \endlabellist \centering \includegraphics[scale=0.58]{fig-10} \caption{Kirby moves in a standard neighborhood. In~\ref{f:kirby}a there are $n-2$ small $(-2)$--framed circles and a further long $(-2)$--framed circle. In~\ref{f:kirby}b we have $n-2$ $(-2)$--framed circles.} \label{f:kirby} \end{figure} The framing coefficients appearing in the picture have the following significance. Observe that each curve isotopic to the core of the solid torus has a canonical framing coming from the identification of the solid torus with $S^1\times D^2$. We have chosen the identification of a neighborhood of $\kappa$ with $S^1\times D^2$ so that the canonical framing of the core corresponds to the contact framing of $\kappa$. With this convention, the framing induced by $\Sigma''$ on $\tau'$ would be denoted by ``$-1$'' in Figure~\ref{f:neighbor}a. Figure~\ref{f:neighbor}b describes the surgery when we change the solid torus identification by a ``right--handed twist''. Then, the cobordism $Z$ corresponding to capping--off $\tau'$ is obtained by attaching a 4--dimensional 2--handle along $\tau'$ with framing $0$, as shown in Figure~\ref{f:neighbor}b.
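Recall that a full right--handed twist of the solid torus changes the framing of a curve of winding number $w$ by $+w^2$. Since the surgery curves appearing in Figure~\ref{f:neighbor} are isotopic to the core of the solid torus, so that $w=1$, each coefficient of Figure~\ref{f:neighbor}a simply increases by one under the change of identification:
\[
-2\mapsto -1,\qquad +1\mapsto +2,\qquad -1\mapsto 0,
\]
the last one being the framing induced by $\Sigma''$ on $\tau'$, in agreement with the coefficients shown in Figure~\ref{f:neighbor}b.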
After sliding $\tau '$ over one of the $(-1)$--framed circles in Figure~\ref{f:neighbor}b, then repeatedly sliding the $(-1)$--framed circles over each other, and finally blowing up the last two curves, we arrive at Figure~\ref{f:kirby}a. A handle slide, followed by a blow--down gives Figure~\ref{f:kirby}b, and further blow--downs give Figure~\ref{f:kirby}c. Applying a ``left--handed twist'' to the solid torus neighborhood (which just undoes the ``right--handed twist'' we applied earlier) gives Figure~\ref{f:kirby}d. This shows that $Z$ can be viewed as the cobordism obtained by attaching a 2--handle along a meridian to the original curve $\kappa$, with framing $0$ with respect to the meridian disk. The picture shows that $Z$ coincides precisely with $-\overline{X_{\kappa,n}}$, where $X_{\kappa,n}$ is the cobordism obtained by attaching a 2--handle along $\kappa$ with framing $+n$ with respect to the contact framing, the minus sign denotes orientation--reversal and the overline means viewing the cobordism upside--down. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:main}] By Baldwin's theorem~\cite[Theorem~1.2]{Ba}, for each positive integer $n$ there is a Spin$^c$ structure $\mathbf s_0$ on the cobordism $\overline Z$ such that \[ F_{{\overline Z},\mathbf s_0}(c(\xi)) = c(\xi^-_n(\kappa)). \] This equation shows that if $c(\xi)=0$ then $\tilde c(\xi,\tau,f)=0$ for each $f\geq f_\Sigma$. Since $\tilde c(\xi,\tau)$ belongs to $H(Y,K)$ by Proposition~\ref{p:invlimit}, its remaining components are images under the maps $\varphi_{g,f}$ of vanishing ones, hence the element $\tilde c(\xi,\tau)\in H(Y, K)$ has only vanishing components and $\tilde c(\xi,\tau)=0$. \end{proof} \section{Proofs of Theorem~\ref{t:mainsl} and Theorem~\ref{t:main2}}\label{s:main2} Let $Y$ be a closed, oriented rational homology 3--sphere, and let $\xi$ be a contact structure on $Y$. Let $\kappa\subset (Y,\xi)$ be an oriented Legendrian knot. Suppose that $\kappa=\partial\Sigma$, where $\Sigma\subset Y$ is an embedded oriented surface.
Let $X$ be the oriented 4--dimensional cobordism obtained by attaching a 4--dimensional 2--handle $H$ to $Y$ along $\kappa$. Let $[\Sigma\cup D]\in H_2(X;\mathbb Z)$ denote the homology class supported by the union of $\Sigma$ and the core $D$ of $H$. \begin{lem}\label{l:spincunique} If $\mathbf s_1$ and $\mathbf s_2$ are Spin$^c$ structures on $X$ having the same restriction to $Y$ and satisfying \[ \langle c_1(\mathbf s_1),[\Sigma\cup D]\rangle = \langle c_1(\mathbf s_2),[\Sigma\cup D]\rangle, \] then $\mathbf s_1=\mathbf s_2$. \end{lem} \begin{proof} The set of Spin$^c$ structures on $X$ which restrict to $Y$ as a fixed Spin$^c$ structure is an affine space on $H^2(X,Y)$ (with integral coefficients). Since $Y$ is a rational homology sphere and $X$ is obtained up to homotopy by attaching a 2--disk to $Y$, by excision we have $H_2(X,Y)\cong\mathbb Z$ and $H^2(X,Y)\cong\mathbb Z$. The exact homology sequence of the pair $(X,Y)$ shows that the map $i_*\co H_2(X)\to H_2(X,Y)$ is injective, and therefore $H_2(X)\cong\mathbb Z$, with generator $[\Sigma\cup D]$. The exact cohomology sequence shows that the restriction map $i^*\co H^2(X,Y)\to H^2(X)$ is injective, and the free part of $H^2(X)$ has rank $1$. The evaluation map \[ H^2(X)\to \mathrm{Hom}(H_2(X),\mathbb Z)\cong\mathbb Z,\quad \beta\mapsto\langle\beta,[\Sigma\cup D]\rangle \] is surjective. Therefore, the composition of $i^*$ with the evaluation map is injective. If $\mathbf s_1$, $\mathbf s_2$ are two Spin$^c$ structures on $X$ with coinciding restriction to $Y$, then $\mathbf s_1-\mathbf s_2=\alpha$, where $\alpha$ belongs to the image of $i^*$, and $c_1(\mathbf s_1)-c_1(\mathbf s_2) = 2\alpha$. Therefore, if the evaluation map takes the same value on $c_1(\mathbf s_1)$ and $c_1(\mathbf s_2)$, it follows that $\alpha=0$, hence $\mathbf s_1=\mathbf s_2$.
\end{proof} Let tb denote the Thurston--Bennequin number of $\kappa$ with respect to $\Sigma$, and let rot be the rotation number of $\kappa$ with respect to $\Sigma$. Fix $n>0$, and let $X_{\kappa,n}$ be the oriented 4--dimensional cobordism obtained by attaching a 4--dimensional 2--handle to $Y$ along $\kappa$ with framing tb$+n$. \begin{prop}\label{p:spincvalue} There exists a Spin$^c$ structure $\mathbf s$ on $X_{\kappa,n}$ such that: \begin{enumerate} \item $\mathbf s$ extends the Spin$^c$ structures induced on $\partial X_{\kappa,n}$ by $\xi_{st}$ and $\xi^-_n(\kappa)$; \item $\frac14(c_1(\mathbf s)^2-3\sigma(X_{\kappa,n})-2\chi(X_{\kappa,n}))+1 = d_3(\xi^-_n(\kappa)) - d_3(\xi_{st})$; \item $\langle c_1(\mathbf s),[\Sigma\cup D]\rangle ={\mbox {rot}}+n-1$. \end{enumerate} \end{prop} \begin{proof} We can view Figure~\ref{f:neighbor}a (ignoring the knot $\tau'$) as $S^3$ with $n$ 4--dimensional 2--handles attached. The sequence of Figures~\ref{f:kirby}a,~\ref{f:kirby}b and~\ref{f:kirby}c shows that in fact Figure~\ref{f:neighbor}a represents $\widehat X_{\kappa,n}:=X_{\kappa,n}\#(n-1)\overline{{\mathbb C}{\mathbb P}}^2$. By e.g.~\cite[Section~3]{DGS}, $\widehat X_{\kappa,n}\#{\mathbb C}{\mathbb P}^2$ carries an almost complex structure $J$ inducing 2--plane fields homotopic to $\xi_{st}$ and $\xi^-_n(\kappa)$ on its boundary. We define $\mathbf s_J$ to be the associated $\mathrm{Spin}^c$ structure, and $\mathbf s:={\mathbf s_J}|_{X_{\kappa,n}}$. By construction, $\mathbf s$ extends the Spin$^c$ structures induced on $\partial X_{\kappa,n}$ by $\xi_{st}$ and $\xi^-_n(\kappa)$. This proves Part (1) of the statement. By~\cite{DGS} we have \begin{equation}\label{e:d3formula} \frac14(c_1(\mathbf s_J|_{\widehat X_{\kappa,n}})^2-3\sigma(\widehat X_{\kappa,n})-2\chi(\widehat X_{\kappa,n})) + 1 = d_3(\xi^-_n(\kappa)) - d_3(\xi_{st}).
\end{equation} Figure~\ref{f:neighbor}b gives a natural basis $(\beta,x_1,\ldots,x_{n-1})$ of $H_2(\widehat X_{\kappa,n};\mathbb Z)$ satisfying $\beta\cdot x_1=1$ and $x_i\cdot x_{i+1}=1$ for $i=1,\ldots,n-2$. By construction and~\cite{DGS}, the values of $c_1(\mathbf s_J)$ on this basis are given by $\langle c_1(\mathbf s_J),\beta\rangle ={\mbox {rot}}$ and $\langle c_1(\mathbf s_J), x_i\rangle ={\mbox {rot}}-1$, $i=1,\ldots, n-1$. We want to express the generator $[\Sigma\cup D]$ of $H_2(X_{\kappa,n};\mathbb Z)\cong\mathbb Z$ in terms of $\beta$ and $x_1,\ldots, x_{n-1}$. In Figure~\ref{f:kirby}b, the classes represented by the framed circles (except the $-1$--framed one corresponding to $\tau$) give us the new basis $(\beta,\beta-x_1,x_1-x_2,\ldots,x_{n-2}-x_{n-1})$. If we define the classes $e_1,\ldots, e_{n-1}$ by setting \[ e_1:=\beta-x_1,\quad e_{i+1}:=e_i+x_i-x_{i+1},\quad i=1,\ldots, n-2, \] it is easy to check that \[ [\Sigma\cup D]=\beta+e_1+\cdots+e_{n-1} \] and $\langle c_1(\mathbf s_J),e_i\rangle = 1$ for $i=1,\ldots, n-1$. Thus, \[ \langle c_1(\mathbf s), [\Sigma\cup D]\rangle = \langle c_1(\mathbf s_J), [\Sigma\cup D]\rangle = {\mbox {rot}}+n-1. \] This proves Part (3) of the statement. Finally, the values $\langle c_1(\mathbf s_J),e_i\rangle = 1$ imply that \[ c_1(\mathbf s_J|_{\widehat X_{\kappa,n}})^2-3\sigma(\widehat X_{\kappa,n})-2\chi(\widehat X_{\kappa,n}) = c_1(\mathbf s)^2-3\sigma(X_{\kappa,n})-2\chi(X_{\kappa,n}). \] Thus, Equation~\eqref{e:d3formula} implies Part (2) of the statement. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:main2}] Let $Z$ be the cobordism of Proposition~\ref{p:cobident} corresponding to an integer $n$. By~\cite[Theorem~1.2]{Ba}, there is a Spin$^c$ structure $\mathbf s_0$ on the cobordism ${\overline {Z}}$ such that \begin{equation}\label{e:bald} F_{{\overline Z},\mathbf s_0}(c(\xi_{st})) = c(\xi^-_n(\kappa)), \end{equation} where $\kappa$ is the Legendrian approximation of $\tau$ described in Proposition~\ref{p:cobident}.
Since $c(\xi_{st})\neq 0$, we will prove that $\tilde c(\xi_{st},\tau,f_S+2g(K))\neq 0$ by showing that for an appropriate choice of $n$ the map $F_{{\overline Z},\mathbf s_0}$ is injective. By Proposition~\ref{p:cobident}, the cobordism ${\overline Z}$ is exactly the cobordism induced by $(-p)$--surgery along the mirror image knot $\overline{K}$, where $p=\tb(\kappa)+n$. Now we choose $n$ so that $p=2g(K)$. Then, the assumption $\sel(\tau)=\tb(\kappa)-\rot(\kappa)=2g(K)-1$ implies \[ \rot(\kappa)+n-1=\tb(\kappa)-2g(K)+1+2g(K)-\tb(\kappa)-1=0. \] Therefore, the $\mathrm{Spin}^c$ structure $\mathbf s$ of Proposition~\ref{p:spincvalue} satisfies $c_1(\mathbf s)=0$. By Equation~\eqref{e:bald} and the identification $\overline{Z}=-X_{\kappa,n}$, the $\mathrm{Spin}^c$ structure $\mathbf s_0$ satisfies \begin{equation}\label{e:degshift} \frac14(c_1(\mathbf s_0)^2-3\sigma(-X_{\kappa,n})-2\chi(-X_{\kappa,n}))=-d_3(\xi^-_n(\kappa))+d_3(\xi_{st}). \end{equation} Since $\sigma(-X_{\kappa,n})=-\sigma(X_{\kappa,n})$, Equation~\eqref{e:degshift} together with Proposition~\ref{p:spincvalue}(2) implies $c_1(\mathbf s_0)^2=-c_1(\mathbf s)^2=0$, therefore $c_1(\mathbf s_0)=0$. Let $\t_0$ denote the restriction of $\mathbf s_0$ to $S^3_{-2g(K)}(\overline{K})$.
By~\cite[Theorem~9.19 and Remark 9.20]{OSzF2}, there is a surjective map \[ Q\colon \mathrm{Spin} ^c (S_0^3(\overline{K}))\to \mathrm{Spin}^c (S_{-2g(K)}^3(\overline{K})) \] and an exact triangle \begin{equation}\label{e:triangle} \begin{graph}(6,2) \graphlinecolour{1}\grapharrowtype{2} \textnode {A}(1,1.5){$\widehat{HF} (S^3)$} \textnode {B}(5, 1.5){$\widehat{HF} (S^3_{-2g(K)}(\overline{K}),\t_0)$} \textnode {C}(3, 0){$\widehat{HF} (S^3_0(\overline{K}), [\t_0])$} \diredge {A}{B}[\graphlinecolour{0}] \diredge {B}{C}[\graphlinecolour{0}] \diredge {C}{A}[\graphlinecolour{0}] \freetext (2.4,1.8){$F$} \end{graph} \end{equation} where \[ \widehat{HF} (S_0^3(\overline{K}), [\t_0]):= \bigoplus _{\t\in Q^{-1}(\t_0)}\widehat{HF} (S_0^3(\overline{K}),\t). \] We claim that for each $\t\in Q^{-1}(\t_0)\subset \mathrm{Spin} ^c (S_0^3({\overline {K}}))$ we have \[ \vert \langle c_1 (\t), h\rangle \vert \geq 2g(K), \] where $h$ is a homology class generating $H_2(S^3 _0(\overline{K});\mathbb Z)$. In fact, since $\mathbf s_0$ extends $\t_0$, by~\cite[Lemma~7.10]{abs}, \[ 0=\langle c_1(\mathbf s_0),[\Sigma\cup D]\rangle \equiv -2g(K) + \langle c_1(\t),h\rangle \qquad \bmod 4g(K), \] which immediately implies the claim. Notice that $h$ can be represented by a genus-$g(K)$ surface (by adding the core of the 2--handle to a Seifert surface of $\overline{K}$), therefore the adjunction formula of \cite[Theorem~7.1]{OSzF2} implies $\widehat{HF}(S_0^3 ({\overline {K}}),[\t_0])=\{0\}$. This shows that the horizontal map $F$ of~\eqref{e:triangle} is an isomorphism. Since $c(\xi _{st})$ generates $\widehat{HF} (S^3)$, we have $F(c(\xi_{st}))\neq 0$. In view of Equation~\eqref{e:bald}, to prove that $\tilde c(\xi_{st},\tau,f_S+2g(K))\neq 0$ it suffices to show that $F=F_{\overline{Z},\mathbf s_0}$. From the general theory we know that \[ F=\sum _{\{\mathbf s\in \mathrm{Spin} ^c(\overline{Z})\ \vert\ \mathbf s|_{S^3 _{-2g(K)}(\overline{K})}=\t_0\}} F_{\overline{Z},\mathbf s}.
\] Since $F$ is an isomorphism on $\widehat{HF} (S^3)=\mathbb Z /2\mathbb Z$, all the $\mathrm{Spin}^c$ structures contributing nontrivially to the sum have the same degree shift. Moreover, if $\overline{\mathbf s}$ denotes the $\mathrm{Spin}^c$ structure conjugate to $\mathbf s$, by~\cite[Theorem~3.6]{OSz-4d} we have \[ F_{\overline{Z},\overline{\mathbf s}}(c(\xi_{st})) = J F_{\overline{Z},\mathbf s}(c(\xi_{st})), \] where $J$ is the identification between $\widehat{HF} (Y, \mathbf s)$ and $\widehat{HF} (Y, {\overline {\mathbf s}})$ defined in~\cite{OSzF1}. Since $J$ preserves the absolute grading, this implies that each $\mathrm{Spin}^c$ structure $\mathbf s$ with $\mathbf s\neq\overline{\mathbf s}$ contributes trivially to $F(c(\xi_{st}))$. Finally, from $c_1(\mathbf s_0)=0$ we know that $\mathbf s_0=\overline{\mathbf s_0}$, therefore by Lemma~\ref{l:spincunique} $\mathbf s_0$ is the only self--conjugate $\mathrm{Spin}^c$ structure on $\overline{Z}$ which extends $\t_0$. This implies $F=F_{\overline{Z},\mathbf s_0}$ and proves $\tilde c(\xi_{st},\tau,f_S+2g(K))\neq 0$. Applying Corollary~\ref{c:defin}, it follows that $\tilde c(\xi_{st},\tau,f)\neq 0$ for each $f\geq f_S+2g(K)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:mainsl}] Let $\tau$ be a transverse knot in the knot type $K$ with $\sel (\tau )=2g(K)-1$. According to~\cite[Lemma~6.5]{BEVM}, there is an open book decomposition of $S^3$ compatible with $\xi _{st}$ having one binding component equal to $\tau$. Applying Theorem~\ref{t:main2}, the result follows for each integer surgery coefficient $n\geq 2g(K)$. In particular, Theorem~\ref{t:main2} gives a contact structure on $S^3_{2g(K)}(K)$ with nonvanishing contact invariant. By the algorithm for contact surgeries described in Section~\ref{sec:second}, for $r\in\mathbb Q$ with $r\geq 2g(K)$ a contact structure on $S^3_r(K)$ can be given by performing an appropriate sequence of Legendrian surgeries on the contact structure previously constructed on $S^3_{2g(K)}(K)$.
Since under Legendrian surgery the nonvanishing property of the contact invariant is preserved, the 3--manifolds $S^3_r (K)$ with $r\geq 2g(K)$ all carry contact structures with nonvanishing contact Ozsv\'ath-Szab\'o invariants. Since $c(Y,\xi )\neq 0$ implies tightness for $(Y, \xi)$, the proof of the theorem is complete. \end{proof}
\section{Introduction} \label{introduction} Shear viscosity is a common transport property of many substances, from macroscopic matter such as water, oil, honey, and air to microscopic and quantum matter such as hot dense quark matter \cite{Rev1,Rev2,Rev3,Gao,Tang,Huang,Shen,Cao,KEE15}, and it has been studied for a few decades in nuclear physics \cite{GB78,PD84,LS03,AM04,KSS05}. An interesting behavior is that the ratio of shear viscosity to entropy density ($\eta/s$) of many substances reaches a minimum value at the corresponding critical temperature \cite{RAL07}. Using a string theory method, Kovtun, Son, and Starinets found that $\eta/s$ has a limiting value of $\hbar/4\pi$, which is called the KSS bound \cite{KSS05}. In relativistic heavy ion collisions, experimental data indicated that the $\eta/s$ of the quark gluon plasma (QGP) is close to the KSS bound, which means that QGP matter is an almost perfect fluid \cite{BC05,PC05,SC05,Heinz,Song,Shen,Reining}. So far there are several approaches to calculate shear viscosity \cite{Ma_book}. From the theoretical viewpoint, one can obtain shear viscosity from the Chapman-Enskog and the relaxation time approaches \cite{PD84,LS03,SP12,AW12,XJ13}. From the transport simulation viewpoint, one can extract shear viscosity in a periodic box \cite{AM04,CJ08,JA18}. The shear viscosity can also be estimated from the mean free path of nucleons~\cite{DQ14,LiuHL}. Moreover, one can extract shear viscosity from the width and energy of the giant dipole resonance (GDR) \cite{NA09,ND11,GuoCQ,DM17,GDR2}, from fragment production \cite{SP10}, or from a fitting formula \cite{ZhouCL1,DXG16}. There are two general methods, namely the Green-Kubo formula and the SLLOD algorithm, to calculate shear viscosity, which have been extensively used in molecular dynamics simulations \cite{GP05,MM12,ZY15,ZhouCL1,GuoCQ}. One of the motivations of the present work is to give a new form of the Green-Kubo formula.
By comparison with the traditional Green-Kubo formula, a new form of the Green-Kubo formula is presented, and the validity of these methods is examined against the SLLOD algorithm. The paper is organized as follows: In Sec. \ref{ThreeMethods}, we introduce the simulation model and analysis methods. In Sec. \ref{resultsAA}, we discuss the shear viscosity obtained by different approaches with and without the Pauli blocking at different densities. Conclusions are given in Sec. \ref{summary}. \section{Nuclear system and analysis methods} \label{ThreeMethods} \subsection{ImQMD model} \label{ImQMDModel} Generally there are two types of transport models for describing heavy-ion collisions at low and intermediate energies, i.e. the Boltzmann-Uehling-Uhlenbeck type and the quantum molecular dynamics type~\cite{GF88,Aichelin,LiBA,Ono,XuJ2,XJ16,ZYX18}, which have had numerous applications \cite{LiBA2,Colonna,Bonasera,MaCW,LiSX,HeWB,Wei,Zhang,Yu,Yan,WangSS,HeYJ}. In this work, an improved quantum molecular dynamics (ImQMD) model is utilized \cite{ZYX06}. The potential energy density without the spin-orbit term in the ImQMD model reads \cite{ZYX06,WN16}: \begin{equation} \begin{split} V_{loc}&= \frac{\alpha}{2}\frac{\rho^{2}}{\rho_{0}} + \frac{\beta}{\gamma+1}\frac{\rho^{\gamma+1}}{\rho_{0}^{\gamma}} +\frac{g_{sur}}{2\rho_{0}}(\bigtriangledown\rho)^{2} \\ &+g_{\tau} \frac{\rho^{\eta +1}}{\rho_{0}^{\eta}} +\frac{g_{sur,iso}}{\rho_{0}}[\nabla(\rho_{n}-\rho_{p})]^2+\frac{C_s}{2\rho_{0}}\rho^{2} \delta^{2}, \end{split} \label{QMDpotential} \end{equation} where $\rho, \rho_{n}$ and $\rho_{p}$ are the nucleon, neutron and proton densities, respectively. Here the saturation density of nuclear matter is $\rho_{0} \approx 0.16~\mathrm{fm}^{-3}$, and $\delta = (\rho_{n}-\rho_{p})/(\rho_{n}+\rho_{p})$ is the isospin asymmetry. For simplicity, we investigate the shear viscosity of infinite nuclear matter without the mean field, and a periodic box is constructed within the framework of the ImQMD model as done in Ref.~\cite{ZYX18}.
The conditions for box initialization are the particle number $A$, the density $\rho$ and the temperature $T_{0}$. For the simulations, the particle number $A$ is fixed at 600 and the total nucleon-nucleon cross section ($\sigma_{NN}$) is fixed at 40 mb. The box size then depends on the nuclear density. We consider only symmetric nuclear matter (i.e., $\rho_{n} = \rho_{p}$) here. From the simulations with the ImQMD model, the shear viscosity is extracted by different approaches. \subsection{Pauli blocking effect on the distribution of the system} \label{PauliBlock} In the present work, we consider the effects of Pauli blocking on shear viscosity. One quantity that is obviously affected by the Pauli blocking is the momentum distribution of the system, as shown in Fig.~\ref{fig:fig1}. The Pauli blocking determines which distribution the system follows, e.g., the Fermi-Dirac distribution or the classical (Boltzmann) one. \begin{figure}[htb] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{8pt} \includegraphics[scale=0.59]{./figures/fig1-fp-10rho6MeV.pdf} \caption{(Color online) Time evolution of momentum distributions at density 1.0$\rho_{0}$ and temperature $T$ = 6 MeV. (a) without and (b) with the Pauli blocking. } \label{fig:fig1} \end{figure} In this figure, we consider the cases without the Pauli blocking in Fig.~\ref{fig:fig1}(a) as well as with the Pauli blocking in Fig.~\ref{fig:fig1}(b). Here the momenta of the particles are initialized according to the Fermi-Dirac distribution for a given set of density ($\rho$), temperature ($T$) and chemical potential ($\mu$). The Fermi-Dirac distribution has the form \begin{equation} \begin{split} f(\epsilon)= \frac{1}{\exp(\frac{\epsilon-\mu}{T})+1}, \end{split} \label{FermiDiracDis} \end{equation} where $\epsilon$ = $p^{2}/(2m)$ for the non-relativistic case and $\epsilon$ = $\sqrt{p^{2} + m^{2}}$ for the relativistic case.
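The Fermi-Dirac initialization described above can be sketched as a simple rejection sampler for the momentum magnitudes. This is an illustrative Python sketch, not the actual ImQMD code; the chemical potential, temperature, mass, and momentum cutoff values are placeholder assumptions.

```python
import numpy as np

def fermi_dirac(eps, mu, T):
    """Occupation probability f(eps) = 1/(exp((eps - mu)/T) + 1) (Eq. FermiDiracDis)."""
    return 1.0 / (np.exp((eps - mu) / T) + 1.0)

def sample_momenta(n, mu, T, m=938.0, p_max=600.0, rng=None):
    """Rejection-sample n momentum magnitudes (MeV/c) from p^2 f(p^2/(2m)),
    i.e. the non-relativistic Fermi-Dirac momentum distribution."""
    rng = rng or np.random.default_rng(0)
    # envelope maximum estimated on a fine grid
    p_grid = np.linspace(0.0, p_max, 1000)
    w = p_grid**2 * fermi_dirac(p_grid**2 / (2 * m), mu, T)
    w_max = w.max()
    out = []
    while len(out) < n:
        p = rng.uniform(0.0, p_max)
        if rng.uniform() < p**2 * fermi_dirac(p**2 / (2 * m), mu, T) / w_max:
            out.append(p)
    return np.array(out)
```

Directions of the sampled momenta would then be drawn isotropically; with the Pauli blocking active, this Fermi-Dirac shape is preserved in time, while without it the distribution relaxes toward the Boltzmann one.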
The initial momentum distributions are also shown with the red lines in these figures. In Fig.~\ref{fig:fig1}(a) without the Pauli blocking, the shape of the momentum distribution tends to the Boltzmann one as time increases. It is worth mentioning that in the standard ImQMD model, the occupation probability in the Pauli blocking algorithm is calculated from the Wigner density in a phase space cell \cite{ZYX18}. This method underestimates the occupation probability in nuclear matter and results in a larger collision rate than the analytically evaluated one. In this work, we calculate the occupation probability by using the Fermi-Dirac distribution with the calculated density and temperature at each spatial point. As we checked, the collision rate then becomes reasonable compared to the analytical one. \subsection{Two forms of the Green-Kubo formula} \label{GreenKubo-for} One of the approaches used in this work to calculate shear viscosity is the Green-Kubo formula, which can be derived from linear response theory. This method has been extensively discussed in molecular dynamics simulations \cite{GP05,MM12,ZY15}. The Green-Kubo formula expresses the shear viscosity as an integral of the stress autocorrelation function (SACF) \cite{RK66,DJE08,AH84}: \begin{equation} \begin{split} \eta_{nor} = \frac{V}{T} \int_{t_{0}}^{\infty} C(t) dt, \end{split} \label{GKubo1} \end{equation} where $V$ and $t_{0}$ are the system volume and the equilibration time, respectively. The subscript of $\eta$ in Eq.~\ref{GKubo1} denotes the normal (or traditional) form of the Green-Kubo formula. In recent decades, it has been extensively used in molecular dynamics simulations. However, it has mostly been applied to classical systems rather than to systems obeying the Fermi-Dirac distribution. Thus in this work, we re-derive and give a new form of the Green-Kubo formula for the calculation of shear viscosity, i.e.
\begin{equation} \begin{split} \eta_{new} = \frac{VNm}{\langle \sum_{i}^{N}p_{ix}^{2} \rangle} \int_{t_{0}}^{\infty} C(t) dt, \end{split} \label{GKubo1-1} \end{equation} where $N$ is the total particle number and $m$ is the particle mass. The derivation can be found in Appendix~\ref{APPA}, and a more general form for the shear viscosity is given in Eq.~\ref{DRIEQ-52}. One can see that the new form of the Green-Kubo formula involves the particle momenta instead of the temperature. In both Eq.~\ref{GKubo1} and Eq.~\ref{GKubo1-1}, the autocorrelation function $C(t)$ is the same and reads \begin{equation} \begin{split} C(t) = \langle P_{\alpha\beta}(t)P_{\alpha\beta}(t_{0}) \rangle {\quad} \alpha, \beta=x, y, z, \end{split} \label{GKubo2} \end{equation} where $P_{\alpha\beta}(t)$ is an off-diagonal element of the stress tensor. The bracket $\langle...\rangle$ denotes an average over the equilibrium ensemble (i.e., over simulation events). Fig.~\ref{fig:fig2} shows the autocorrelation function $C(t)$ as a function of time for different tensor components at densities of $0.4\rho_{0}$ and $1.0\rho_{0}$ at $T_{0}$ = 30 MeV. Here the temperature index $T_{0}$ means the initial temperature we set for the box system. \begin{figure}[htb] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{8pt} \includegraphics[scale=0.56]{./figures/fig2-Cttime-0410rho30MeV.pdf} \caption{(Color online) The evolution of autocorrelation function for different components at $T_{0}$ = 30 MeV and different densities of $0.4\rho_{0}$ (a) or $1.0\rho_{0}$ (b) without mean field.} \label{fig:fig2} \end{figure} One can see that $C(t)$ tends to zero with increasing time, which indicates that the integral of $C(t)$ is convergent. However, $C(t)$ at the lower density has a longer correlation time in comparison with the higher one, as shown in Fig.~\ref{fig:fig2}(a) and (b).
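The two forms of the Green-Kubo formula, Eq.~\ref{GKubo1} and Eq.~\ref{GKubo1-1}, differ only in the prefactor multiplying the integrated autocorrelation function. A minimal Python sketch (not the ImQMD implementation; the infinite integral is approximated by a trapezoid rule over a finite correlation window) could look like:

```python
import numpy as np

def sacf(pxy):
    """Stress autocorrelation function C(t) (Eq. GKubo2), estimated by
    averaging P_xy(t0 + t) * P_xy(t0) over time origins t0."""
    n = len(pxy)
    return np.array([np.mean(pxy[:n - k] * pxy[k:]) for k in range(n // 2)])

def _integrate(c, dt):
    # trapezoidal rule over the finite correlation window
    return dt * (np.sum(c) - 0.5 * (c[0] + c[-1]))

def eta_normal(pxy, V, T, dt):
    """Traditional form, Eq. GKubo1: eta = (V/T) * int C(t) dt."""
    return V / T * _integrate(sacf(pxy), dt)

def eta_new(pxy, V, N, m, px2_sum, dt):
    """New form, Eq. GKubo1-1: eta = V*N*m / <sum_i p_ix^2> * int C(t) dt."""
    return V * N * m / px2_sum * _integrate(sacf(pxy), dt)
```

For a classical (Boltzmann) system $\langle \sum_i p_{ix}^2 \rangle = NmT$ and the two estimates coincide exactly; they differ once the momentum distribution is degenerate, which is the regime discussed below.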
The macroscopic momentum flux in the volume $V$ is given by \begin{equation} \begin{split} P_{\alpha\beta}(t) = \frac{1}{V}\int d^{3}r P_{\alpha\beta}(\vec{r}, t), \end{split} \label{GKubo3} \end{equation} and the local stress tensor is defined as \begin{equation} \begin{split} &P_{\alpha\beta}(\vec{r},t) = \sum_{i}^{N} \frac{p_{{i}\alpha}p_{{i}\beta}}{m_{i}}\rho_{i}(\vec{r},t) \\ & + \frac{1}{2} \sum_{i}^{N} \sum_{i\neq j}^{N} F_{ij\alpha}R_{ij\beta} \rho_{j}(\vec{r},t) \\ & + \frac{1}{6} \sum_{i}^{N} \sum_{i\neq j}^{N}\sum_{i\neq j \neq k}^{N} (F_{ijk\alpha}R_{ik\beta}+ F_{jik\alpha}R_{jk\beta} )\rho_{k}(\vec{r},t) \\ &+\cdots, \end{split} \label{GKubo4} \end{equation} where $F_{ij}$ and $\vec{R}_{ij} = \vec{r}_{j} - \vec{r}_{i}$ are the interaction force and the relative position of particles $i$ and $j$, respectively. The first term on the right-hand side is the momentum term, the second is the two-body interaction term, and the third is the three-body interaction term. The mean field is not included in our simulations, but the interaction terms are kept here for completeness. Based on the ImQMD model, the density distribution of the $i$-th particle is given by \begin{equation} \begin{split} \rho_{i}(\vec{r}) = \frac{1}{(2\pi \sigma^{2})^{3/2}} \exp[-\frac{(\vec{r}-\vec{r}_{i})^{2}}{2\sigma^{2}}], \end{split} \label{GKubo5} \end{equation} where $\sigma$ is the wave-packet width, which is taken as $2.0~\mathrm{fm}$ in the present work. \subsection{The Gaussian thermostated SLLOD algorithm} \label{GTSLLOD} Another approach is the SLLOD algorithm, named by Evans and Morriss \cite{GP06}, which introduces dynamics with an (artificial) strain rate $\dot{\gamma}$. It is a non-equilibrium molecular dynamics (NEMD) method and has been extensively applied to predict the rheological properties of real fluids \cite{GP06}.
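With the mean field switched off, only the momentum term of Eq.~\ref{GKubo4} survives. A sketch of the volume-averaged $P_{xy}$ (with $\delta$-function wave packets, so the spatial integral of Eq.~\ref{GKubo3} is trivial) and of the Gaussian wave-packet density of Eq.~\ref{GKubo5}; this is illustrative only, assuming equal masses:

```python
import numpy as np

def stress_xy_kinetic(p, m, V):
    """Kinetic (momentum) part of the volume-averaged stress tensor,
    Eqs. GKubo3-GKubo4 without the interaction terms:
    P_xy = (1/V) * sum_i p_ix * p_iy / m, with p an (N, 3) array."""
    return np.sum(p[:, 0] * p[:, 1]) / (m * V)

def gaussian_density(r, r_i, sigma=2.0):
    """Wave-packet density of particle i, Eq. GKubo5 (sigma in fm)."""
    d2 = np.sum((r - r_i) ** 2)
    return np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** 1.5
```

The time series of `stress_xy_kinetic` over the simulation is what enters the autocorrelation function $C(t)$ above.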
The SLLOD algorithm for shear viscosity is applied to a planar Couette flow field at shear rate $\dot{\gamma}$ = $\partial v_{x}/\partial y$, which is the change of the streaming velocity $v_{x}$ in the $x$-direction with vertical position $y$. The shear viscosity at shear rate $\dot{\gamma}$ is \begin{equation} \begin{split} \eta = -\frac{\langle P_{xy} \rangle }{\dot{\gamma}}. \end{split} \label{GKubo6} \end{equation} With the shear rate added to the system, the dynamical equations of motion are rewritten as \begin{align} &\frac{d\vec{r}_{i}}{dt}=\frac{\vec{p}_{i}}{m_{i}} + \dot{\gamma}y_{i} \hat{x} \label{GKubo77} \\ &\frac{d\vec{p}_{i}}{dt}=\vec{F}_{i}-\dot{\gamma}p_{yi} \hat{x}-h \vec{p}_{i}. \label{GKubo88} \end{align} Eq.~\ref{GKubo77} and Eq.~\ref{GKubo88} are called the SLLOD equations. In order to keep the kinetic energy conserved, one needs a `thermostat'; thus a multiplier $h$ is applied to the equations of motion. From the conservation of the kinetic energy, one gets \begin{equation} \begin{split} h = \frac{\sum_{i}(\vec{F}_{i}\cdot \vec{p}_{i}/m_{i}-\dot{\gamma}p_{xi}p_{yi}/m_{i})}{\sum_{i}p_{i}^{2}/m_{i}}. \end{split} \label{GKubo9} \end{equation} Note that the shear rate can be neither too large nor too small. If it is too small, the flow field cannot be imposed on the system. \begin{figure}[htb] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{8pt} \includegraphics[scale=1.1]{./figures/fig3-Pxytime-04rho.pdf} \caption{(Color online) Stress tensor as a function of time at different temperatures and density of 0.4$\rho_{0}$ without the mean field.} \label{fig:fig3} \end{figure} On the other hand, when the shear rate is too large, the energy cannot be conserved when the dynamical equations of motion are solved with a finite time step. As Fig.~\ref{fig:fig3} shows, the extracted shear viscosity decreases at large shear rates $\dot{\gamma}$ due to the energy loss.
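A single explicit Euler step of the thermostatted SLLOD equations (Eqs.~\ref{GKubo77}, \ref{GKubo88} and \ref{GKubo9}) can be sketched as follows. This is an illustrative integrator assuming equal masses, not the propagation scheme actually used in ImQMD:

```python
import numpy as np

def sllod_step(r, p, F, m, gdot, dt):
    """One explicit Euler step of the Gaussian-thermostatted SLLOD
    equations; r, p, F are (N, 3) arrays, shear in the x-y plane,
    all particles with the same mass m (so m cancels in h)."""
    # thermostat multiplier h (Eq. GKubo9), chosen so that d(KE)/dt = 0
    h = np.sum(np.sum(F * p, axis=1) - gdot * p[:, 0] * p[:, 1]) / np.sum(p * p)
    dr = p / m * dt
    dr[:, 0] += gdot * r[:, 1] * dt          # streaming term  gdot * y * x_hat
    dp = (F - h * p) * dt
    dp[:, 0] -= gdot * p[:, 1] * dt          # SLLOD term     -gdot * p_y * x_hat
    return r + dr, p + dp
```

By construction of $h$, the kinetic energy is conserved to first order in the time step; the residual drift is $\mathcal{O}(dt^2)$ per step, which is why too large a shear rate or time step spoils energy conservation, as noted above.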
So in our simulations, different temperatures correspond to different values of the shear rate. Here $\dot{\gamma}$ = 0.0003 $c/fm$, $\dot{\gamma}$ = 0.0005 $c/fm$ and $\dot{\gamma}$ = 0.002 $c/fm$ are taken for $T_{0}$ = 4 MeV, $T_{0}$ = 6 MeV and $T_{0} \geq $10 MeV, respectively. \begin{figure*} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{8pt} \includegraphics[scale=1.1]{./figures/fig4-shearTrho4rho10.pdf} \caption{(Color online) Different calculations of shear viscosity as a function of temperature in cases of w/ or w/o Pauli blocking (PB) at 0.4 $\rho_0$ and 1.0$\rho_0$. } \label{fig:fig4} \end{figure*} \begin{figure}[htb] \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{8pt} \includegraphics[scale=1.1]{./figures/fig5-diff.pdf} \caption{(Color online) The relative difference between the shear-rate method and the normal or the new Green-Kubo formula as a function of temperature.} \label{fig:fig5} \end{figure} \subsection{The linear Boltzmann equation set} \label{BoltzmannEquation} How well do the methods mentioned above perform when applied to nuclear matter? As a comparison, a linear Boltzmann equation set which can be used to calculate the shear viscosity of uniform nuclear matter is presented here. One obtains the expression by solving the Boltzmann equation \cite{PD84,LS03,BB19}: \begin{equation} \begin{split} \eta& = \frac{5T}{9}\frac{\Big{(}\int d^{3}p{\,\,}p^{2}f(p)\Big{)}^{2}}{\int d^{3}p_{1}d^{3}p_{2}d\Omega {\,} v_{12}q_{12}^{4}\sin^{2}{\theta}\frac{d\sigma_{NN}}{d\Omega} f_{1}f_{2}\tilde{f}_{3}\tilde{f}_{4}}, \end{split} \label{BoltzmannSet} \end{equation} where $v_{12} = |\vec{v}_{1}-\vec{v}_{2}|$ and $q_{12} = |\vec{p}_{1}-\vec{p}_{2}|/2$ are the relative velocity and the relative momentum, respectively, $\sigma_{NN}$ is the total nucleon-nucleon cross section, and $\tilde{f}_{3}\tilde{f}_{4}$ = [1-$f(p_{3})$][1-$f(p_{4})$] is the Pauli blocking term.
In Eq.~\ref{BoltzmannSet}, the conservation of momentum and energy between $p_{1}, p_{2}$ and $p_{3}, p_{4}$ must be taken into account. \section{Shear viscosity with different approaches} \label{resultsAA} Using the different approaches mentioned above, the shear viscosity is calculated at different temperatures and densities with and without the Pauli blocking, as shown in Fig.~\ref{fig:fig4}. One sees in Fig.~\ref{fig:fig4}(a) and Fig.~\ref{fig:fig4}(c) that without the Pauli blocking, the shear viscosity increases with increasing temperature at both densities of $0.4\rho_{0}$ and $1.0\rho_{0}$. As the temperature increases, the exchange of momentum among the particles, i.e., a `stopping' effect between two flow layers, increases, which indicates that the shear viscosity increases. As seen from Fig.~\ref{fig:fig4}(b) and Fig.~\ref{fig:fig4}(d) with the Pauli blocking, unlike in Fig.~\ref{fig:fig4}(a) and Fig.~\ref{fig:fig4}(c), the shear viscosity in the low temperature region increases with decreasing temperature. This is due to a stronger Pauli blocking effect in the low temperature region. At lower temperature, the stronger Pauli blocking indicates that a good Fermi sphere is formed, and energy and momentum transport become quite efficient \cite{YK16}. As the temperature increases, the Pauli blocking effect weakens, and the shear viscosity decreases; as the temperature increases further, the collision number increases, so the shear viscosity rises again. Comparing these three approaches, we find that the shear viscosity determined by the Boltzmann type equation is lower than those calculated by the shear rate and the Green-Kubo methods.
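The elastic two-body kinematics required in the collision integral of Eq.~\ref{BoltzmannSet}, i.e. the conservation of momentum and energy between $p_1, p_2$ and $p_3, p_4$, together with the Pauli blocking factor $\tilde f_3 \tilde f_4$, can be sketched as follows. The isotropic angular sampling and the parameter values are illustrative assumptions, not the actual evaluation procedure of the paper.

```python
import numpy as np

def elastic_outgoing(p1, p2, rng):
    """Outgoing momenta of an elastic equal-mass collision: the relative
    momentum q = (p1 - p2)/2 is rotated onto a random direction, which
    automatically conserves total momentum and (non-relativistic) energy."""
    P, q = p1 + p2, 0.5 * (p1 - p2)
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)            # isotropic unit vector
    qp = np.linalg.norm(q) * u        # rotated relative momentum q'
    return 0.5 * P + qp, 0.5 * P - qp

def pauli_factor(p3, p4, mu, T, m=938.0):
    """Blocking factor (1 - f(p3)) * (1 - f(p4)) entering Eq. BoltzmannSet,
    with a non-relativistic Fermi-Dirac occupation f."""
    f = lambda p: 1.0 / (np.exp((np.dot(p, p) / (2 * m) - mu) / T) + 1.0)
    return (1.0 - f(p3)) * (1.0 - f(p4))
```

At low temperature the blocking factor is small for most final states, which is what suppresses the collision rate and drives the rise of the viscosity seen in Fig.~\ref{fig:fig4}(b) and (d).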
At very low densities such as 0.05$\rho_0$, however, we found that the result of the Boltzmann type equation (Eq.~\ref{BoltzmannSet}) is consistent with the other methods, which indicates that the low density approximation for Eq.~\ref{BoltzmannSet} could be valid there; one would then expect the shear viscosities determined by the shear rate and the Green-Kubo approaches to be the same. As displayed in Fig.~\ref{fig:fig4}(a) and Fig.~\ref{fig:fig4}(c) without the Pauli blocking, the shear viscosities from the shear rate method (the SLLOD algorithm), the normal form of the Green-Kubo formula, and the new form of the Green-Kubo formula are consistent with each other. However, when the Pauli blocking is taken into account, as shown in Fig.~\ref{fig:fig4}(b) and Fig.~\ref{fig:fig4}(d), one sees that in the low temperature region the shear viscosity from the normal form of the Green-Kubo formula is higher than those from both the SLLOD algorithm and the new form of the Green-Kubo formula, while the shear viscosities from the SLLOD algorithm and the new form of the Green-Kubo formula are almost the same. The relative difference is shown in Fig.~\ref{fig:fig5}. One can see that the difference between the normal form of the Green-Kubo formula and the SLLOD algorithm (or the new form of the Green-Kubo formula) is larger at lower temperatures. Evidently, the standard GK formula leads to a larger shear viscosity in a fermionic system, especially at low temperatures. \section{Conclusions} \label{summary} In summary, shear viscosities are obtained by the SLLOD algorithm and the Green-Kubo formula in the framework of ImQMD simulations and are compared with the Boltzmann equation method in the present work. Comparing the different calculation methods, it is found that the shear viscosity from the Boltzmann equation method is smaller than those from the SLLOD algorithm and the Green-Kubo formula.
More interestingly, a new form of the Green-Kubo formula (Eq.~\ref{GKubo1-1}) is presented for the shear viscosity calculation. By comparison with the SLLOD algorithm, we found that the standard GK formula leads to a larger shear viscosity in fermionic systems, especially at low temperatures, while the new form of the Green-Kubo formula is consistent with the SLLOD algorithm for both classical and non-classical systems. \begin{acknowledgments} We thank P. Danielewicz and H. Lin for helpful discussions. This work was partially supported by the National Natural Science Foundation of China under Contract Nos. 11947217, 11890710 and 11890714, the China Postdoctoral Science Foundation under Grant No. 2019M661332, the Postdoctoral Innovative Talent Program of China under No. BX20200098, the Strategic Priority Research Program of the CAS under Grant No. XDB34000000, and the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008. \end{acknowledgments} \begin{appendix} \section{The Green-Kubo formula for shear viscosity} \label{APPA} For simplicity, the Gaussian wave-packet is replaced by a $\delta$-function in the particle density below. The full derivation of the Green-Kubo formula for shear viscosity is tedious; here we give a brief outline based on Ref.~\cite{DJE08}. More details can be found in Refs.~\cite{DJE96,DJE08}.
\subsection{$\vec{\bf{r}}$ and $\vec{\bf{k}}$-space representations} The streaming velocity, mass density, momentum density and stress tensor at position $\vec{\bf{r}}$ and time $t$ can be written as: \begin{align} \label{DRIEQ-1} &{\bf u} (\vec{\bf{r}},t)=\frac {\sum_{i}^{N} m_{i} {\dot{\vec{\bf{r}}}_{\it{i}}} \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i})}{\sum_{i}^{N} m_{i} \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i})}, \\ \label{DRIEQ-2} &\rho(\vec{\bf{r}},t)=\sum_{i}^{N} m_{i} \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i}),\\ \label{DRIEQ-3} &J(\vec{\bf{r}},t)=\rho(\vec{\bf{r}},t){\bf u} (\vec{\bf{r}},t)=\sum_{i}^{N} m_{i} {\dot{\vec{\bf{r}}}_{\it{i}}} \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i}), \\ \label{DRIEQ-4} &P_{\alpha\beta}(\vec{\bf{r}},t)=\sum_{i}^{N} \frac{p_{{i}\alpha}p_{{i}\beta}}{m_{i}} \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i}) \notag \\ &+\frac{1}{2} \sum_{i}^{N} \sum_{i\neq j}^{N} F_{ij\alpha}R_{ij\beta} \delta (\vec{\bf{r}}-\vec{\bf{r}}_{j}) \\ \notag &+\cdots , \end{align} where `$i$' and `$j$' are indices of particles and $N$ is the particle number. For Eq.~(\ref{DRIEQ-4}), the momentum density conservation law \begin{align} \label{DRIEQ-1-1} \frac{\partial J(\vec{\bf{r}},t)}{\partial t } = -\nabla_{\bf r} \cdot \vec{P}, \end{align} is needed, together with \begin{align} \label{DRIEQ-1-2} &\frac{\partial}{\partial \vec{\bf{r}}_{\it{i}} } \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i})= - \frac{\partial}{\partial \vec{\bf{r}} } \delta (\vec{\bf{r}}-\vec{\bf{r}}_{i}), \\ \label{DRIEQ-1-3} &\delta (\vec{\bf{r}}-\vec{\bf{r}}_{i}) -\delta (\vec{\bf{r}}-\vec{\bf{r}}_{j})\approx \vec{R}_{ij}\frac{\partial}{\partial \vec{\bf{r}} } \delta (\vec{\bf{r}}-\vec{\bf{r}}_{j}).
\end{align} One can define the Fourier transform and its inverse in three dimensions by \begin{align} \label{DRIEQ-5} &f(\vec{\bf{k}}) = \int d^{3}{\bf r} f(\vec{\bf{r}}) {\rm exp} [ {\rm i} \vec{{\bf k}}\cdot \vec{\bf r}], \\ \label{DRIEQ-6} &f(\vec{\bf{r}}) = \frac{1}{(2\pi)^{3}}\int d^{3}{\bf k} f(\vec{\bf{k}}) {\rm exp} [- {\rm i}\vec{\bf{k}}\cdot \vec{\bf{r}} ], \end{align} where ${\rm i}$ is the imaginary unit. Then the mass density, momentum density and stress tensor in $\vec{\bf{k}}$-space are given by \begin{align} \label{DRIEQ-7} &\rho(\vec{\bf{k}},t)={\sum_{i}^{N}} {m_{i}} {\rm exp} [ {\rm i} \vec{{\bf k}}\cdot \vec{\bf r}_{i} ], \\ \label{DRIEQ-8} &J(\vec{\bf{k}},t)=\sum_{i}^{N} m_{i} {\dot{\vec{\bf{r}}}_{\it{i}}} {\rm exp} [ {\rm i} \vec{{\bf k}}\cdot \vec{\bf r}_{i} ], \\ \label{DRIEQ-9} &P_{\alpha\beta}(\vec{\bf{k}},t)=\sum_{i}^{N} \frac{p_{{i}\alpha}p_{{i}\beta}}{m_{i}} {\rm exp} [ {\rm i} \vec{{\bf k}}\cdot \vec{\bf r}_{i} ] \\ \notag &+\frac{1}{2} \sum_{i}^{N} \sum_{i\neq j}^{N} F_{ij\alpha}R_{ij\beta} {\rm exp} [ {\rm i} \vec{{\bf k}}\cdot \vec{\bf r}_{j} ]+\cdots \; . \end{align} \subsection{Shear viscosity and strain rate} The stress tensor is related to the strain rate (for simplicity, only the $x$-$y$ component is considered) by \begin{equation} \begin{split} P_{xy}=-\eta \gamma(t), \end{split} \label{DRIEQ-10} \end{equation} where $\eta$ is the static shear viscosity. This is analogous to Eq.~(\ref{GKubo6}), but the strain rate here is time dependent. The most general linear relation between the strain rate and the shear stress can be written in the time domain as \begin{equation} \begin{split} P_{xy}(t)=-\int_{0}^{t} ds \; \eta_{M}(t-s)\gamma(s), \end{split} \label{DRIEQ-11} \end{equation} where $\eta_{M}(t)$ is called the Maxwell memory function.
The memory function expresses the fact that the shear stress at time $t$ is not simply linearly proportional to the strain rate at the current time $t$, but depends on the entire strain rate history over times $0 \leqslant s \leqslant t$. The frequency-dependent Maxwell viscosity is \begin{equation} \begin{split} \tilde{\eta}_{M}(\omega)= \frac{\eta}{1+{\rm i} \omega \tau_{M}}, \end{split} \label{DRIEQ-12} \end{equation} where $\tau_{M}$ is the Maxwell relaxation time, which controls the transition frequency between low frequency viscous behavior and high frequency elastic behavior. In Eq.~\ref{DRIEQ-12}, $\tilde{\eta}_{M}$ is the Fourier-Laplace transform, defined as \begin{equation} \begin{split} \tilde{\eta}(\omega)= \int_{0}^{\infty} dt\; {\rm exp}[-{\rm i}\omega t] \eta(t). \end{split} \label{DRIEQ-13} \end{equation} \subsection{Shear viscosity of the Green-Kubo formula} We separate the momentum density into longitudinal (${\bf J}^{||}$) and transverse (${\bf J}^{\bot}$) parts. Considering a transverse momentum density ${\bf J}^{\bot}$($\vec{\bf k}$,t), for simplicity we choose the coordinate system in which $\vec{\bf k}$ is in the $y$-direction and ${\bf J}^{\bot}$ is in the $x$-direction: \begin{align} \label{DRIEQ-28} J_{x}(k_{y},t) = \sum_{i}mv_{xi}(t){\rm exp}[{\rm i}k_{y}y_{i}(t)]. \end{align} According to Eq.~(\ref{DRIEQ-9}), one can get \begin{align} \label{DRIEQ-29} \dot{J}_{x}(k_{y},t) = {\rm i}k_{y}P_{xy}(k_{y},t). \end{align} In Ref.~\cite{DJE08}, the Mori-Zwanzig formalism gives the time-dependent shear viscosity $\eta(t)$, i.e. \begin{align} \label{DRIEQ-51} \eta(t) = \frac{VNm}{\langle J_{x}(k_{y})J_{x}^{\ast}(k_{y})\rangle}\langle P_{xy}(t)P_{xy}(0) \rangle.
\end{align} By the Fourier-Laplace transform of $\eta(t)$, one gets \begin{align} \label{DRIEQ-51b} \tilde{\eta}(\omega) = \frac{VNm}{\langle J_{x}(k_{y}=0)J_{x}^{\ast}(k_{y}=0)\rangle} \\ \times \int_{0}^{\infty}\langle P_{xy}(t)P_{xy}(0) \rangle {\rm exp}[-{\rm i}\omega t]dt. \notag \end{align} As in Eq.~(\ref{DRIEQ-12}), the static shear viscosity corresponds to $\omega \rightarrow 0$. Then one gets \begin{align} \label{DRIEQ-52} \eta = \frac{VNm}{\langle J_{x}(k_{y} = 0) J_{x}^{\ast}(k_{y} = 0)\rangle} \int_{0}^{\infty}\langle P_{xy}(t)P_{xy}(0) \rangle dt. \end{align} Here it should be noted that \begin{align} \label{DRIEQ-53-0} P_{xy}(t) = \lim_{k_{y}\rightarrow0}\frac{P_{xy}(k_{y},t) }{V}, \\ \label{DRIEQ-53-1} P_{xy}(0) = \lim_{k_{y}\rightarrow0}\frac{P_{xy}(k_{y},0) }{V}, \end{align} where $V$ is the system volume. For the norm of the transverse current, one can get \begin{align} \label{DRIEQ-54} &\langle J_{x}(k_{y}=0) J_{x}^{\ast}(k_{y}=0)\rangle \\ &=\langle \sum_{i}^{N}p_{xi} \sum_{j}^{N}p_{xj}\rangle \notag\\ &=\langle \sum_{i}^{N}p_{xi}^{2}\rangle +N(N-1)\langle p_{1x}p_{2x}\rangle. \notag \end{align} At equilibrium, $ p_{1x}$ is independent of $p_{2x}$, so the second term on the right-hand side of Eq.~(\ref{DRIEQ-54}) is zero. Then we obtain a new form of the Green-Kubo formula for shear viscosity \begin{align} \label{DRIEQ-55} \eta_{new} = \frac{VNm}{\langle \sum_{i}^{N}p_{xi}^{2}\rangle} \int_{0}^{\infty}\langle P_{xy}(t)P_{xy}(0) \rangle dt, \end{align} where $\langle \cdots \rangle$ denotes the ensemble average. For an equilibrium system obeying the Boltzmann distribution, one can get \begin{align} \label{DRIEQ-56} \langle \sum_{i}^{N}p_{xi}^{2} \rangle = \langle \frac{1}{3}\sum_{i}^{N}p_{i}^{2} \rangle = Nmk_{B}T, \end{align} where $T$ is the temperature and $k_{B}$ ($k_{B}$ = 1) is the Boltzmann constant.
Then the normal Green-Kubo formula for shear viscosity is recovered: \begin{align} \label{DRIEQ-57} \eta_{nor} = \frac{V}{T} \int_{0}^{\infty}\langle P_{xy}(t)P_{xy}(0) \rangle dt. \end{align} \end{appendix}
\section{INTRODUCTION} Imaging Fabry-Perot (FP) spectrophotometers have become powerful tools for studying the dynamics of the central regions of globular clusters. The samples of individual stellar velocities that have been obtained with a FP are some of the largest available for globular clusters (Gebhardt {\it et~al.}\ 1994, 1995, hereafter Paper~1 and Paper~2, respectively). In addition, FP observations of the integrated light at the center of a cluster provide a two-dimensional velocity map of that region with unprecedented detail. With the dramatic increase in the quality of the kinematic data available for globular clusters, more powerful analysis techniques become possible. Non-parametric techniques can provide an unbiased estimate of the dynamical state of a cluster (Merritt 1993a,b), such as the rotation and velocity dispersion properties (Paper~1 and 2). Combining the dispersion and surface brightness profiles with the Jeans equation directly yields the mass density profile and the mass function (Gebhardt \& Fischer 1994, hereafter GF, Merritt \& Tremblay 1994). The distribution of mass determined by this technique can be used to test the predictions of Fokker-Planck and N-body simulations. As one of the densest nearby objects known, M15 has significant implications for the formation and evolution of dense stellar systems. It has, therefore, been the target of intense observations and theoretical modeling. Images taken with the refurbished Hubble Space Telescope (Guhathakurta {\it et~al.}\ 1996) have shown that the luminosity density may rise all of the way into the center and do not support the core with a radius of about 2\arcsec\ previously proposed by Lauer {\it et~al.}\ (1991). Measurements of the dynamical state of M15 have also produced controversy. 
Is there a central cusp in the dispersion profile (Peterson, Seitzer, \& Cudworth 1989), or does the profile remain flat near the center at 11 $\rm {km}~\rm s^{-1}$\ (Paper~1, Dubath \& Meylan 1994, Dull {\it et~al.}\ 1996)? Somewhat conflicting amounts of rotation have been measured near the center by Peterson (1993) and Paper~1. GF derived a non-parametric mass model for M15 using the velocities from Paper~1, but this has not been compared with the results of either Fokker-Planck (Grabhorn {\it et~al.}\ 1992, Dull {\it et~al.}\ 1996, Einsel \& Spurzem 1996) or N-body (Makino \& Aarseth 1992, McMillan \& Aarseth 1993) evolutionary models. The strongest constraints on the models come from comparing them with the kinematics in the central few arcseconds, since this is where the effects of mass segregation, core collapse, and a possible central massive black hole are the most pronounced. Unfortunately, the uncertainties in and disagreements between the observations in this region have made comparisons difficult. Improving our knowledge of the significance of the rotation and of the value of the dispersion within 2\arcsec\ of the center of M15 requires observations with higher angular resolution. The tools of crowded-field photometry can be directly applied to the two-dimensional data from a FP, making them easier to interpret for dense stellar systems than slit spectra and yielding the largest possible number of stellar velocities. Using a FP on telescopes and with instruments that provide the best seeing is a promising approach to resolving the mystery of the center of M15. This paper is the first reporting results from our study of globular cluster dynamics using data taken at the Canada-France-Hawaii Telescope (CFHT). These data have better seeing than those of Papers~1 and 2, which were taken at CTIO. The rest of this paper is organized as follows. Sec.~2 discusses the Fabry-Perot observations and the reductions.
Sec.~3 describes measurements of the first and second moments of the velocity distribution. Sec.~4 presents the mass modeling and the mass function estimates. Sec.~5 discusses the results. \section{THE DATA} \subsection{Instrumentation} We used an imaging Fabry-Perot spectrophotometer with the Sub-arcsecond Imaging Spectrograph (SIS) at the CFHT on May 20--23, 1994, and May 12--17, 1995. The optical design of SIS is described in Morbey (1992) and Le Fevre {\it et~al.}\ (1994). Light is fed to the spectrograph by a tip-tilt mirror which is driven at about 20~Hz using the signal from a quadrant detector at the focal plane in order to compensate for image motion. We placed the Rutgers narrow etalon, which has a spectral resolution of 0.8~\AA\ FWHM at 6500~\AA, in the collimated beam. This etalon is fully compatible with the CFHT etalon controller and its coatings are suitable for work at the H$\alpha$ line. The order-selecting filter was in the collimated beam below the etalon and was tilted to eliminate ghost images due to filter reflections. Before being tilted, the filter had a 16\AA\ FWHM centered at 6569\AA\ and a peak transmission of 83\%. It was borrowed from the Dominion Astrophysical Observatory. The SIS+FP setup provides a $3\arcmin\times 3\arcmin$ field of view. The front-side illuminated Loral3 CCD imaged this field, and we binned $2\times 2$ during readout to obtain 1024x1024 pixels at a scale of 0.173\arcsec\ per pixel. The SIS guide probe is mounted on a 45\arcsec\ wide arm that runs across the entire field. Fortunately, the probe's field of view is $4\arcmin\times 3\arcmin$; with the rich star fields of globular clusters, we were able to choose V~=~13~--~14 guide stars that resulted in little or no vignetting of our images by the probe. Sampling the position of the star at 40~Hz seemed adequate to follow the image motion seen on a real-time display. Fast guiding produced an approximately 15\% improvement in the FWHM of the images. 
Thus, we were able to reduce the area of star images by about 30\%, which significantly increased the number of stars for which we could obtain velocities. \subsection{Observations and Data Reduction} The FP observing and reduction procedures were the same as those described in Paper~1. We took a series of exposures stepped by 0.33~\AA\ across the H$\alpha$ absorption line. Projector flats were obtained in the evening or morning for every wavelength setting of the etalon used that night. Approximately hourly exposures of a deuterium lamp, which provided both H$\alpha$ and D$\alpha$ emission lines, monitored the wavelength zero-point and provided the primary wavelength calibration. This calibration was supplemented by exposures of several neon lines taken throughout the run. A small offset between the focal planes of the guide probe and the SIS collimator resulted in reflections from the etalon being so out of focus that they were not detectable. We thus made no corrections for reflected light. We obtained 17 15-minute exposures of M15 in 1994 and 15 such exposures in 1995. Frame-to-frame normalizations for non-photometric conditions, determined as described in Paper~1, were as large as a factor of two for the 1994 data. For the 1995 data, the normalizations only varied by 20\%. The average FWHM of the stellar images was 0.8\arcsec\ (the 1995 run) and 1.1\arcsec\ (1994). The overall throughput of the system (telescope+FP+CCD) was about 15\%, and for a V=16 star we were able to obtain a signal-to-noise of 18 in a 15~minute exposure under photometric conditions. A significant change from our previous reduction procedure was employing HST images directly to assist in the photometry of the crowded central regions of M15. This was made possible by using ALLFRAME (Stetson 1994), which measures positions and brightnesses for all of the stars in a set of images simultaneously. 
Coordinate transformations with six coefficients map a location in one frame to the corresponding location in another. ALLFRAME makes maximum use of the positional information present in the set of frames, improving the photometry in every frame. Our initial ALLFRAME reduction used HST frames from Guhathakurta~{\it et~al.}\ (1996) and all of the frames from the 1994 and 1995 CFHT runs. Unfortunately, the line profiles resulting from this photometry were very noisy. We believe that this resulted from faint stars which are isolated in the HST images, but very blended with a brighter neighbor in the CFHT images. The partitioning of light among the crowded stars varied wildly from CFHT frame to CFHT frame. The ``sky'' level determined in the CFHT frames also tended to be incorrect. These problems are clearly related to the large difference in resolution between the CFHT and HST images. A different initial list of stars to be reduced or some other change of procedure might have solved these problems. Instead, we chose to photometer each CFHT frame individually using ALLSTAR, but holding the positions of the stars fixed at those found from the ALLFRAME reduction. This allowed ALLSTAR to determine, based on a uniform chi-square criterion, whether there was adequate information in an individual frame to determine a star's magnitude with acceptable precision. The line profiles produced by this procedure were smoother, but crowding probably still introduces some additional uncertainties into our velocities inside a radius of about 0.3\arcmin. This is discussed in somewhat more detail below. We also used the HST photometry to determine which stars in the CFHT frames were sufficiently contaminated by light from neighbors that their measured velocity could be affected. The criterion is based on Monte Carlo simulations and is the same as used in Paper~1.
Due to the 0.8\arcsec\ FWHM seeing of the CFHT SIS frames, only eight stars were excluded from the final sample. \subsection{Velocity Zero-Point and Uncertainties} Radial velocities for M15 stars were previously obtained by Peterson {\it et~al.}\ (1989, hereafter PSC), who used the echelle spectrograph and intensified Reticon detector on the MMT, and by us (Paper~1). The present study includes 66 stars in common with PSC and 188 in common with Paper~1. Based on this overlap, both the 1994 and 1995 raw velocities are $7.4\pm 0.4$~$\rm {km}~\rm s^{-1}$\ more negative than those of Paper~1, which were adjusted to the zero-point of PSC. While a zero-point offset between radial velocities measured with different instruments is not surprising, this is the largest offset that we have seen for FP data. In order to investigate this further, we observed the radial velocity standards HD107328 and HD182572 during the 1995 run. These data were taken and reduced in a fashion very similar to that used for the globular cluster stars. With only one star in the field we could not derive transparency normalizations for the individual frames, but conditions were photometric and the exposure times were only 15~s. Observations of HD107328 with the star near the center of the field on one night and 370~pixels from the center on another yielded velocities of $29.85\pm 0.03$~$\rm {km}~\rm s^{-1}$\ and $30.35\pm 0.03$~$\rm {km}~\rm s^{-1}$, respectively. An observation of HD182572 on a third night with the star near the center of the field produced a velocity of $-106.79\pm 0.05$~$\rm {km}~\rm s^{-1}$. The quoted uncertainties are simply the uncertainties in the fitted line centroids, and may be underestimated since we do not include noise from possible transparency fluctuations. The standard velocity for HD107328 is 36.6~$\rm {km}~\rm s^{-1}$\ and for HD182572 is $-100.3$~$\rm {km}~\rm s^{-1}$\ (Latham \& Stefanik 1991).
The differences between the three raw FP velocities and the standard velocities are $-6.75$, $-6.49$, and $-6.25$~$\rm {km}~\rm s^{-1}$. The mean difference is $-6.5$~$\rm {km}~\rm s^{-1}$, with an RMS scatter around this mean of 0.25~$\rm {km}~\rm s^{-1}$. This zero-point offset is probably caused by some combination of small wavelength, flatfielding, and normalization errors and a difference between the centroid of the stellar H$\alpha$ line measured by our line-fitting program and the tabulated laboratory wavelength of H$\alpha$. The scatter between the three velocity differences is satisfyingly small and argues that, whatever its cause, the velocity offset will not significantly increase the scatter within our cluster sample. The PSC velocity zero-point was based on spectra of the twilight sky, so we consider the agreement between the zero-points derived from the standards and from the M15 stars to be acceptable. We adopt the latter for the remainder of this paper. Figure 1 plots V$_{\rm Paper 1}$--V$_{\rm FP95}$ vs. V$_{\rm Paper 1}$. The comparison with PSC is similar and not shown here. Several binary star candidates are apparent in Fig.~1, but discussion of these is deferred to a future paper. The RMS difference measured with a bi-weight estimator, which is insensitive to a few outliers, is 5.0~$\rm {km}~\rm s^{-1}$\ after correcting the zero-point of the 1995 data. This implies a typical velocity uncertainty of 3.5~$\rm {km}~\rm s^{-1}$. Comparing this to the average internal velocity uncertainty calculated from fitting the H$\alpha$ line profile suggests that we need to add 1~$\rm {km}~\rm s^{-1}$\ in quadrature to these internal uncertainties. This additional uncertainty could be due to the fits to the line profiles underestimating the uncertainties or to the velocity ``jitter'' of luminous cluster giants (Gunn \& Griffin 1979, Mayor {\it et~al.}\ 1983).
There are 461 stars with velocities from both the 1994 and 1995 data and this provides a large, homogeneous sample with which to explore the FP measurement uncertainties. We determined what value, added in quadrature to the internal uncertainties, produced the most uniform distribution of chi-square probabilities. Probabilities smaller than 5\% are excluded to remove the effect of real velocity variables. This method was first employed by Duquennoy \& Mayor (1991) and the approach that we used to judge uniformity is described in Armandroff~{\it et~al.}\ (1995). The best additive uncertainty was zero for the entire sample, though values as large as 1.0~$\rm {km}~\rm s^{-1}$\ are acceptable. About 10\% of the stars have probabilities under a few percent, which would be a surprisingly large fraction of real variables. The stars with smaller average measurement uncertainties had more low probabilities, suggesting that the brighter stars should have larger additive uncertainties. Limiting the sample to the 118 stars with approximate $V$ magnitudes (described in the next section) less than 15 produced an additive uncertainty of 1.4~$\rm {km}~\rm s^{-1}$. This bright subsample also has about 10\% of the stars with probabilities below a few percent and we noticed that a larger than expected number of these were at small radii. Focusing on the 393 stars outside of a radius of 0.4\arcmin, the most uniform probability distribution resulted from an additive uncertainty of 0.4~$\rm {km}~\rm s^{-1}$\ for stars with any magnitude and from a value of 1.1~$\rm {km}~\rm s^{-1}$\ for the 91 stars brighter than magnitude 15. This difference very likely reflects the well-known velocity jitter of luminous cluster giants and suggests that much of the additional uncertainty reflected in the comparisons with the Paper~1 and PSC velocities, which are mostly for brighter stars, comes from this source. 
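The search for the best additive uncertainty can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the repeat-pair data are synthetic (the pair count, internal uncertainties, and hidden 1~km/s term are invented), and uniformity is judged with a simple Kolmogorov-Smirnov distance rather than the specific criterion of Armandroff {\it et~al.}\ (1995).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic repeat measurements standing in for the 1994/1995 pairs:
# internal uncertainties e1, e2 plus a hidden extra term of 1 km/s.
n = 461
v_true = rng.normal(-107.8, 11.0, n)
e1, e2 = np.full(n, 2.0), np.full(n, 2.5)
hidden = 1.0
v1 = v_true + rng.normal(0.0, np.hypot(e1, hidden))
v2 = v_true + rng.normal(0.0, np.hypot(e2, hidden))

def uniformity(s):
    """KS distance of the chi-square probabilities from uniformity, after
    adding s in quadrature to both epochs and cutting P < 0.05 (variables)."""
    chi2 = (v1 - v2) ** 2 / (e1**2 + e2**2 + 2.0 * s**2)
    p = stats.chi2.sf(chi2, df=1)
    p = p[p > 0.05]
    # under the null, the surviving probabilities are uniform on (0.05, 1)
    return stats.kstest(p, "uniform", args=(0.05, 0.95)).statistic

grid = np.linspace(0.0, 3.0, 61)
best = grid[np.argmin([uniformity(s) for s in grid])]
```

In practice the grid would be refined, and the cut on small probabilities iterated, once candidate variables are identified.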
For the 68 stars with both 1994 and 1995 velocities inside a radius of 0.4\arcmin, the most uniform probability distribution results for an additional uncertainty of 3.2~$\rm {km}~\rm s^{-1}$. With this value, there is no evidence for an excess of stars with low probabilities. Most of these stars are brighter than a magnitude of 15.8 because of crowding, but it is still clear that the FP velocities suffer from additional uncertainty at small radii. Errors in the stellar photometry induced by crowding are almost certainly the cause. The line profiles at small radii show more scatter around the fitted profiles than do those at large radii. The PSC stars in this region are probably the brightest and least crowded. Comparing the PSC and 1995 velocities for the 26 stars in this region required an additional uncertainty for the CFHT velocities of only 0.7~$\rm {km}~\rm s^{-1}$. All of the additional uncertainties described above are much smaller than the cluster dispersion and, thus, have little effect on the dynamical analyses. We have thus adopted the simple prescription of adding 1~$\rm {km}~\rm s^{-1}$\ in quadrature to the uncertainties derived from the line profile fits for the 1994 and 1995 velocities. We have performed our dynamical analyses using an additional uncertainty of 3~$\rm {km}~\rm s^{-1}$\ instead of 1~$\rm {km}~\rm s^{-1}$ and find the same results. We have also performed the analyses both including and excluding those stars with small chi-square probabilities and find no difference in the results. The results presented below include these stars. \section{RESULTS} \subsection{The Individual Stellar Velocities} We have combined five sets of M15 velocities: the PSC measurements and Fabry-Perot measurements from 1991 (Paper~1), 1992 (Paper~1), 1994 (this paper), and 1995 (this paper). 
Each star for which there was any ambiguity in the matching among datasets (either in the position or in the velocities) was examined carefully to ascertain the accuracy of its identification. Table~1 presents the mean stellar velocity for every measured M15 star. Col.~1 is the star ID; Cols.~2 and 3 are the x and y offset from the cluster center in arcseconds (measured eastward and northward, respectively, from the center given in Guhathakurta {\it et~al.}\ 1996); Cols.~4 and 5 present the mean velocity and its uncertainty; Col.~6 gives the FP magnitude (estimated from the fitted continuum of our 3~\AA\ of spectrum); and Col.~7 lists either the probability of the chi-square from the multiple measurements exceeding the observed value -- a low probability suggests a variable velocity, though see \S~2.3 -- or a note which is explained below. The average velocity given was calculated by weighting each measurement by the inverse of its variance, and the inverse of the uncertainty is the square root of the average of the individual inverse variances. The uncertainties include the 1.0~$\rm {km}~\rm s^{-1}$\ added in quadrature that was discussed in the previous section. The wavelength gradient with radius in the FP images (Papers~1 and 2) causes a bias in the velocity sample due to the limited wavelength coverage of the FP data. Stars at large radii and with velocities sufficiently more positive than the cluster mean will have incomplete wavelength coverage of the line, which makes the velocity impossible to measure. We therefore have to choose a radius from the center of the FP field where this effect may begin to affect the dynamical analysis. The SIS optics at CFHT produce a gradient of only 4~\AA\ as compared to 6~\AA\ at CTIO. As a result, we obtain a larger unbiased field at the CFHT than at CTIO for the same number of frames: 1.2\arcmin\ in radius vs. 1.0\arcmin, respectively.
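A minimal sketch of the epoch combination is given below. It implements the textbook inverse-variance weighted mean (the text's description of the uncertainty convention reads slightly differently, with an average rather than a sum of inverse variances, so treat this as the standard form rather than the paper's exact prescription); the three epochs are made-up numbers, not a star from Table~1.

```python
import numpy as np

def combine(v, sig):
    """Inverse-variance weighted mean of repeat velocity measurements.

    Standard formula: weights w_i = 1/sig_i^2, mean = sum(w v)/sum(w),
    combined uncertainty given by 1/sig^2 = sum_i 1/sig_i^2.
    """
    v, sig = np.asarray(v, float), np.asarray(sig, float)
    w = 1.0 / sig**2
    mean = np.sum(w * v) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# e.g. three hypothetical epochs of one star (km/s)
m, e = combine([-106.2, -108.0, -107.1], [2.0, 3.0, 1.5])
```

The most precise epoch dominates the mean, and the combined uncertainty is smaller than any single epoch's.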
The stars that are beyond this radius are not used in the dynamical analysis and have ``bias'' in the final column of Table~1. Stars determined to be non-members based on their radial velocity have ``non'' in the final column of Table~1. The stars with a low probability (P$<0.01$) in Col.~7 are good candidates for stars whose velocity varies due to jitter or binary orbital motion. However, identifying binary candidates is difficult because of the complex situation with the velocity uncertainties discussed in \S~2.3. We, therefore, postpone discussion of the binary candidates and the binary frequency to a future paper. The individual velocity measurements from the 1994 and 1995 CFHT runs and from the previously published samples are given in Table~2. Col.~1 is the star ID; Cols.~2 and 3 are the velocity and uncertainty from the 1995 CFHT data; Cols.~4 and 5 are from the 1994 CFHT data; Cols.~6 and 7 are from the 1992 CTIO data (Paper~1); Cols.~8 and 9 are from the 1991 CTIO data (Paper~1); and Cols.~10 and 11 are from PSC. The uncertainties listed in Table~2 do not contain the additional uncertainty that is used in Table~1. Recently, Dull~{\it et~al.}\ (1996) reported velocities for 132 member stars in the central 1\arcmin\ of M15. We have repeat velocity measurements for all of their stars, and our average uncertainty is around 1.5~$\rm {km}~\rm s^{-1}$, while theirs is 4~$\rm {km}~\rm s^{-1}$. Because they do not list the uncertainties for the individual stars, we are unable to combine their velocities with our dataset in the same rigorous fashion. Since our uncertainties are smaller for their stars and since our dataset contains over a factor of ten more velocity measurements, the dynamical analysis will not be affected by not including their velocities. Tables~1 and 2 contain only stars with velocity uncertainties smaller than 10~$\rm {km}~\rm s^{-1}$. 
We have determined that it is worthwhile to use stars with uncertainties as large as this, even though the velocity dispersion of the cluster is around 11~$\rm {km}~\rm s^{-1}$. We formed velocity samples that excluded stars with uncertainties larger than values as small as 4~$\rm {km}~\rm s^{-1}$. Dynamical analyses of these samples showed no significant change in the results other than an increase in the size of the confidence bands for the measured quantities when fewer stars were used. Thus, we employed the complete velocity sample in the analysis presented in this paper. Only four stars were found to be non-members due to their large velocity difference from the cluster mean velocity; they were excluded. An additional 22 stars were not used because they are at large radii where the dynamics may be biased by the limited spectral coverage. Of the remaining 1575 stars with uncertainties less than 10~$\rm {km}~\rm s^{-1}$, 1100 have uncertainties less than 5~$\rm {km}~\rm s^{-1}$ and 685 less than 3~$\rm {km}~\rm s^{-1}$. Inside of 0.1\arcmin\ we were able to measure velocities for 80\% (71 of 89) of the stars brighter than V=18. Figure 2 plots the individual stellar velocities and their uncertainties against the distance from the center of M15. This figure suggests that the local mean velocity for M15 becomes more positive in the central 0.1\arcmin. The weighted mean velocity of the 71 innermost stars is $-105.6\pm 1.3$~$\rm {km}~\rm s^{-1}$, while the mean velocity of the whole sample is $-107.8\pm 0.3$~$\rm {km}~\rm s^{-1}$. Therefore, the apparent change in velocity is significant at the 2$\sigma$ level. We have looked for calibration and/or normalization errors and are confident that these cannot produce such a large velocity shift. Comparison with previous authors also increases our confidence that our velocity measurements are not biased. Dubath \& Meylan (1995) have measured velocities for 14 stars in the central 0.1\arcmin\ of M15. 
The mean velocity for these 14 stars is $-103\pm 4$~$\rm {km}~\rm s^{-1}$, consistent with our measurement. We feel that the variation of the mean velocity with radius is simply due to statistical noise. \subsection{Rotation} We detect rotation in M15. Using maximum likelihood techniques to fit a sinusoid to the entire sample of individual stellar velocities as a function of their position angle yields a rotation amplitude of $2.1\pm 0.4$~$\rm {km}~\rm s^{-1}$. The position angle of the maximum positive rotation velocity, measured from north through east, is $197\arcdeg\pm 10\arcdeg$. Both of these values agree with our previous measurement of $1.4\pm 0.8$~$\rm {km}~\rm s^{-1}$\ and $205\arcdeg \pm 33\arcdeg$ (Paper~1). Monte Carlo simulations show that a 2.1~$\rm {km}~\rm s^{-1}$\ rotation amplitude is expected by chance when no rotation is present less than 0.1\% of the time. We can study the two-dimensional structure of the projected rotation by using a thin-plate, smoothing spline (Wahba 1980, Bates {\it et~al.}\ 1986) to produce a map of the mean velocity as a function of location on the sky. This method allows us to check for twisting position angles and other unusual properties which might otherwise be missed using a one-dimensional representation of the data. The smoothing spline is fit to the velocity data with the smoothing length chosen by generalized cross-validation (GCV).\footnote{GCV uses a jackknife approach. For a given smoothing, one point is removed from the sample and the spline derived from the remaining $N-1$ points is used to estimate the value for the removed point. This procedure is repeated for each point, and the optimal smoothing is that which minimizes the sum of squared differences between the actual data points and the estimated points. 
A good explanation of GCV can be found in Craven \& Wahba (1979) and Wahba (1990) (see Gebhardt {\it et~al.}\ 1996 for an application to a one-dimensional problem).} Figure~3 shows the resulting isovelocity contours for M15. The points represent the positions of the stars that have a measured velocity. The spacing of the contours in Fig.~3 is 1~$\rm {km}~\rm s^{-1}$; given that the cluster dispersion is about 11~$\rm {km}~\rm s^{-1}$, the estimate of the two-dimensional velocity map may be noisy, despite the large number of radial velocities. The contours do, however, suggest a possible twisting of the rotation axis position angle (PA) with radius, as we discuss below. To get a higher S/N estimate of the two-dimensional velocity structure we will assume axisymmetry. This assumption is invalid if the PA twists. However, we will regard the possible twisting in Fig.~3 as a perturbation to the global velocity structure and use the PA measured for the whole sample as the symmetry axis. We can then reflect stellar positions about both the rotation axis and a line perpendicular to that axis and re-estimate the velocity map, a procedure that will effectively increase the number of data points by a factor of four. Figure~4 plots the resulting axisymmetric isovelocity contours in a single quadrant, calculated using the same smoothing spline as above. Here the y and x axes are along and perpendicular to the rotation axis, respectively. The projected rotation of a spherical system with solid-body internal rotation (i.e., rotation velocity constant on cylinders) will have isovelocity contours parallel to the rotation axis. This is guaranteed because, along the line-of-sight at a fixed projected distance from the axis, the linear decrease in the radial velocity due to the changing projection of the rotation on the line-of-sight is offset by the linear increase in the internal solid-body rotation profile.
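The leave-one-out prescription described in the footnote above can be illustrated in one dimension. This sketch is an assumption-laden stand-in: the data, noise level, and smoothing grid are invented, a scipy smoothing spline replaces the thin-plate spline, and true GCV replaces the explicit leave-one-out loop with a closed-form trace identity.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Synthetic 1-D stand-in for the velocity field: smooth trend plus noise.
x = np.sort(rng.uniform(0.0, 10.0, 80))
y = 2.0 * np.sin(x / 2.0) + rng.normal(0.0, 0.5, x.size)

def loo_score(s):
    """Ordinary leave-one-out score, as in the footnote: drop each point,
    fit the spline to the remaining N-1 points, predict the dropped point."""
    total = 0.0
    for i in range(x.size):
        keep = np.arange(x.size) != i
        spl = UnivariateSpline(x[keep], y[keep], s=s)
        total += float(y[i] - spl(x[i])) ** 2
    return total

grid = [2.0, 5.0, 10.0, 20.0, 40.0, 80.0]
best_s = min(grid, key=loo_score)
```

Too little smoothing chases the noise and predicts left-out points poorly; too much flattens the trend; the cross-validation score is minimized in between.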
The vertical contours of Fig.~4 suggest that the rotation in M15 is consistent with an internal solid-body rotation profile. A similar non-parametric estimate of the symmetrized rotation map for 47~Tuc shows the rotation decreasing below that expected for a solid-body form beyond a radius of 2\arcmin\ (Paper 2). The half-light radius of M15 is 2.9$\times$ smaller than that of 47~Tuc (Trager {\it et~al.}\ 1993; primarily reflecting the larger distance of M15), so the two clusters may have real differences in the form of their rotation. More velocities are needed at large radii in M15 to be sure of this, however. Figure~3 suggests that the rotation PA changes from north-south at small radii to more east-west at large radii. We can check the significance of this result by radially binning the individual stars and estimating the rotation properties in each bin. We use a maximum likelihood estimator that yields the amplitude and phase for a sinusoidal variation of the mean velocity with position angle, and the dispersion around the sinusoid, while holding the mean velocity of the sample fixed at the cluster mean. The results are given in Table~3 and plotted in Fig.~5. The solid points in the two panels of Fig.~5 are the rotation amplitude and position angle as a function of radius for five bins containing 300 stars and a final outer bin of 74 stars. The amplitude of the rotation from Fig.~5 will not match that of Fig.~4 since, unlike Fig.~5, Fig.~4 is derived under the assumption of axisymmetry. The position angle of the rotation does increase beyond a radius of 1\arcmin, consistent with the twisting of the isovelocity contours in Fig.~3. However, this result depends somewhat on the binning adopted and more velocities at large radii are needed to confirm this feature.
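When the per-star errors are nearly equal, the maximum-likelihood sinusoid fit reduces to linear least squares on cosine/sine components. The sketch below applies that reduced form to synthetic stars drawn with the global values quoted in \S~3.2 (amplitude 2.1~km/s at PA 197\arcdeg, dispersion 11~km/s); the star count and noise realization are assumptions, and the paper's actual estimator also handles unequal errors and confidence intervals.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stars drawn with the global rotation values quoted in the text.
n = 1500
theta = rng.uniform(0.0, 2.0 * np.pi, n)      # position angles (radians, N->E)
v0 = -107.8                                    # cluster mean (held fixed)
v = v0 + 2.1 * np.cos(theta - np.radians(197.0)) + rng.normal(0.0, 11.0, n)

# Equal-error ML fit == least squares on v - v0 = a cos(theta) + b sin(theta).
X = np.column_stack([np.cos(theta), np.sin(theta)])
a, b = np.linalg.lstsq(X, v - v0, rcond=None)[0]
amp = np.hypot(a, b)                           # rotation amplitude (km/s)
pa = np.degrees(np.arctan2(b, a)) % 360.0      # PA of maximum velocity (deg)
sigma = np.std(v - v0 - X @ np.array([a, b]))  # dispersion about the sinusoid
```

With 1500 stars and an 11~km/s dispersion, the recovered amplitude carries an uncertainty of roughly $\sigma\sqrt{2/N}\approx 0.4$~km/s, which matches the $\pm 0.4$~km/s quoted in the text.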
The two innermost solid points in Fig.~5 suggest that the position angle of the rotation also changes at small radii, though the rotation amplitude of even the larger innermost point is small enough that it could occur by chance about 13\% of the time. This feature might not appear as a twisting of the contours in Fig.~3 because of the smoothing. To obtain a more detailed picture of the rotation profile at small radii, we turn to the procedure of using the integrated light developed in Paper~1. The Fabry-Perot observations provide two-dimensional images that yield an integrated-light spectrum at every sufficiently bright pixel and, hence, the two-dimensional velocity structure of the cluster. The integrated light provides a better estimate of the rotation properties than the individual velocities because it effectively samples a larger number of stars, which will limit the noise due to the cluster dispersion. We azimuthally bin and average velocities from the integrated-light velocity map and find the best-fit sinusoid. These results are also listed in Table~3 and plotted as open circles in Fig.~5. The results from the 1994 and 1995 CFHT data are similar and we show only the latter since the seeing was better in 1995. The integrated light map only has adequate S/N within 20\arcsec\ of the center; beyond that, we have to rely on the stellar velocities to determine the rotation properties. In the region where there is significant overlap, the rotation properties measured by the integrated light and the stars agree in both amplitude and position angle. The integrated-light results presented here are reasonably consistent with the corresponding results from Paper~1 for the region within a radius of 15\arcsec\ (with a mean radius of 8\arcsec), which were an amplitude of $1.7\pm 0.3$~$\rm {km}~\rm s^{-1}$\ and a position angle for the maximum positive rotation velocity of $220\arcdeg\pm 11\arcdeg$. 
Our new results support a change in the position angle of the rotation axis at small radii and suggest that the rotation amplitude increases as well. However, in the central regions, seeing and sampling noise become an important consideration for interpreting the integrated-light rotation properties. Peterson (1993) reported a streaming motion with a full range of 15~$\rm {km}~\rm s^{-1}$\ in the central 1\arcsec\ of M15. Dubath \& Meylan (1995), using CORAVEL with slits at various positions in the center of M15, argued that Peterson's result may have been due to sampling noise. The two brightest stars in the central region (AC212 and AC215) have velocities differing by 30~$\rm {km}~\rm s^{-1}$\ and are aligned along the direction for which the streaming motion was reported. Using our two-dimensional velocity map, we find a result similar to that of Dubath \& Meylan. By synthesizing the slit position and size of Peterson's observation, our velocity map shows the same change in the mean velocity with position as Peterson measured; however, the result is due to contamination from the wings of the two bright stars, emphasizing the importance of understanding the sampling noise (Dubath~1993). Our integrated-light rotation measurements will suffer from the same problem. For seeing of 0.8\arcsec, we estimate that the radius at which two stars may cause a rotation signature is around 2\arcsec. The vertical dashed line in Fig.~5 is at this radius. However, the PA changes smoothly from larger radii to the smallest radius, giving us confidence that the increase in the rotation amplitude at small radii seen in Fig.~5 is a physical effect, and not due to sampling. We can further check the significance of the central increase in the rotation by exploiting the two-dimensional data of the FP. We subtract the five brightest stars in the central 4\arcsec, re-calculate the velocity map, and estimate the rotation from the map as before. 
This procedure may show the influence of the bright stars on the inferred central rotation. The rotation obtained in this way does not show the characteristics seen in Fig.~5; in fact, there is no detectable rotation when the five bright stars are subtracted. However, this result may be due mainly to noise in the subtraction. Inspection of the line profiles for the integrated light before and after subtraction demonstrates that the subtraction added significant noise, making determination of the line centroid uncertain. Another method, instead of subtracting the bright stars, is to ignore the pixels out to some radius underlying those stars. This procedure gives similar results to those in Fig.~5, but it is questionable since it depends on the radius chosen. Yet another check is to measure the rotation from the stellar velocities inside of 2\arcsec. For the 12 stars inside of 2\arcsec, we measure a rotation amplitude of $8.5\pm3.8$~$\rm {km}~\rm s^{-1}$\ and a PA of $303\arcdeg\pm 26\arcdeg$, consistent with the integrated light profile. We conclude that the rotation inside of 2\arcsec\ appears significant. However, better-seeing data are needed to reduce the sampling uncertainties and we, therefore, have chosen a radius of 2\arcsec\ as the boundary inside of which sampling noise may have affected our results. \subsection{Velocity Dispersion} Fig.~6 plots for our M15 sample the absolute value of the deviation of each stellar velocity from the cluster mean velocity vs. radius. The points are the velocity measurements, and the solid and dashed lines are the LOWESS estimates of both the velocity dispersion and the 90\% confidence band, respectively (see Paper~2 for details). The dispersion does not increase significantly inside a radius of 0.4\arcmin. This dispersion profile is consistent not only with that of Paper~1 but also, outside of a radius of 1\arcsec, with the profile increasing towards smaller radii which was found by PSC.
However, in the central 1\arcsec, PSC reported a cusp in the velocity dispersion based on integrated-light measurements. Our larger sample, which has four stars within 1\arcsec\ of the center, shows no evidence for a central cusp. PSC's high velocity dispersion measurement may have been due to sampling noise (Dubath 1993, Zaggia~{\it et~al.}\ 1993). Our estimate of the dispersion uses a smoothing length, which prevents our estimated profile from changing quickly. However, the dispersion of the four stars in the central 1\arcsec\ is 11.6~$\rm {km}~\rm s^{-1}$, consistent with the LOWESS estimate of Fig.~6; this estimate is also consistent with the results of Dubath \& Meylan (1994) and Dull~{\it et~al.}\ (1996). \section{MASS MODELING} \subsection{Non-Parametric Estimate} We have used the non-parametric mass modeling technique of GF to estimate the mass density profile for M15. Under the assumptions of spherical symmetry and an isotropic distribution of velocities at all points, the Abel equations provide estimates of the deprojected luminosity density and velocity dispersion profiles, given the projected quantities. With the same assumptions, the Jeans equation then yields a unique mass density profile. We use the velocity dispersion profiles from Fig.~6, and the surface brightness profiles from Grabhorn~{\it et~al.}\ (1992) and Guhathakurta~{\it et~al.}\ (1996). Figure~7 plots the resulting non-parametric estimates of the mass density and the mass-to-light ratio (M/L) profiles. The solid lines are the bias-corrected estimates and the dotted lines are the 90\% confidence bands. The bias is estimated through bootstrap resamplings (see GF). The dashed line in the upper panel of Fig.~7 has a slope of $-2.23$, which is the theoretical prediction for the region outside of the core in a core-collapse cluster (Cohn 1980). Theory and observation are clearly in good agreement. Fig.~8 plots the logarithmic slope of the mass density profile and its 90\% confidence band.
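For reference, the deprojection and mass steps of \S~4.1 can be written schematically in the standard spherical, isotropic forms (our notation, not copied from GF): the Abel inversion of the surface brightness $I(R)$ and of $I\sigma_p^2(R)$,

```latex
\[
  \nu(r) \;=\; -\frac{1}{\pi}\int_r^\infty
      \frac{dI}{dR}\,\frac{dR}{\sqrt{R^2-r^2}}, \qquad
  \nu\sigma_r^2(r) \;=\; -\frac{1}{\pi}\int_r^\infty
      \frac{d(I\sigma_p^2)}{dR}\,\frac{dR}{\sqrt{R^2-r^2}},
\]
% followed by the isotropic Jeans equation for the enclosed mass:
\[
  M(r) \;=\; -\frac{r\,\sigma_r^2}{G}
      \left(\frac{d\ln\nu}{d\ln r}+\frac{d\ln\sigma_r^2}{d\ln r}\right).
\]
```

Here $\nu$ and $\sigma_r$ are the deprojected luminosity density and radial velocity dispersion; the mass density follows by differentiating $M(r)$.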
Since derivatives are inherently noisier, the uncertainties on the slope are large; particularly at large radii the slope is highly uncertain. Note that the confidence bands depend on the amount of smoothing which has been introduced in the velocity dispersion estimate. Also, the uncertainties in the surface brightness profile are not considered here and may significantly increase the uncertainties in the slope estimate. The confidence band shown in Fig.~8 only reflects the uncertainty based on the smoothing used for Fig.~7; a more rigorous treatment would have to include the surface brightness uncertainties and try different velocity dispersion smoothings. Phinney~(1992, 1993) used measurements of pulsar accelerations to estimate the central density in M15 and found a value of greater than $2\times10^6~{\rm M}_\odot$/pc$^3$. We are able to follow M15's mass density over five decades of density (two of radius). In the central 1.4\arcsec\ (0.07 parsecs) the inferred density reaches $2\times 10^6~{\rm M}_\odot$/pc$^3$ with 1$\sigma$ uncertainties under 30\%, in concordance with Phinney's value. The estimated M/L$_{\rm V}$ for M15 increases from about 1.2 at a radius of 10\arcsec\ (0.5~pc) to nearly 3 in the central 0.05 parsecs. There is a similar, but more statistically significant, increase in the outer 8 parsecs. We attribute these increases to a concentration of heavy stellar remnants in the central regions and of low-mass stars in the outer parts. Rotation has not been included in this analysis. However, the dispersion about the average rotation velocity of 2.0~$\rm {km}~\rm s^{-1}$\ for our M15 sample is 9.8~$\rm {km}~\rm s^{-1}$. The ratio of the squares of these quantities, which measures the dynamical significance of rotation, is 0.04. In regions where the rotation velocity is as large as 4~$\rm {km}~\rm s^{-1}$ (see Fig.~5), this ratio approaches 0.2.
Still, ignoring the rotation inflates our dispersion values in Fig.~6, which approximately mimics the effect of rotation in the Jeans equations, so our mass distribution calculated without including rotation will likely be a good approximation. GF also presented a mass density profile for M15 and their profile differs from the one presented here. In GF, there was a shoulder in the mass density profile at a radius of 2.0 parsecs (about 0.8\arcmin), which is not present in Fig.~7. The main reason for the difference is the increase in the size of our velocity sample by a factor of two, which obviously reduces the noise in our estimate. A related effect is the increased radial coverage of the CFHT data, which spans more than two decades compared to one for the CTIO data in GF. This larger coverage makes it easier to distinguish noise from trends and so choose a realistic smoothing. Choosing a smoothing presents more of a problem for non-parametric techniques that impose smoothing on the projected quantities, as we do here, than for ones that impose smoothing in the space of the desired function (see Merritt \& Tremblay 1994), since the noise in the projected quantities will be amplified upon deprojection. We feel that the mass density profile for M15 in GF may have been undersmoothed. \subsection{Black Hole Models} We can also invert the above procedure by assuming a mass density profile for M15 and calculating the projected velocity dispersion implied by the isotropic Jeans equation. Figure~9 shows the projected dispersion profiles that result from assuming that M15 has a uniform stellar M/L and a central black hole with various values for the mass. The dashed and dotted lines are the estimated velocity dispersion profile and the 90\% confidence band from Fig.~6. The solid lines are the expected velocity dispersion profiles assuming a stellar M/L of 1.7 and black hole masses of 0, 500, 1000, 3000, and 6000~${\rm M}_\odot$.
Note that all of these models with an M/L that does not change with radius are just excluded at 90\% confidence because they rise too steeply between a radius of 100\arcsec\ and 10\arcsec. We will discuss the possibility of a black hole at the center of M15 in \S~5. \subsection{Mass Functions} We can estimate the present-day mass function of M15 using the technique described in GF. The Jeans equation relates the cluster potential and the number density and velocity dispersion profiles of a tracer population. We calculate the cluster potential from the mass density determined in \S~4.1. If we know the velocity dispersion profile for some population, then we can calculate the number density profile for that population up to a multiplicative constant. As in GF, we will assume local thermodynamic equilibrium (LTE); LTE allows us to relate the observed dispersion profile of the giants and turnoff stars to the profiles of objects with other masses. This method will only yield an approximate result, since LTE is not strictly valid at any radius and is, moreover, a poor approximation for low-mass objects at large radii. The mass density profiles for the individual mass components must sum to the total density profile. We determine the necessary multipliers for each number density profile using maximum penalized likelihood. To keep the derived mass function reasonably smooth, we use the second derivative of the mass function as the penalty function (equations 7--9 of GF). Figure~10 plots the resulting mass functions for M15. The different lines represent different radial ranges and their 68\% confidence bands: the solid lines are the mass function for the inner 25\% of the mass (0.0--2.0~pc); the dashed lines that for the 25--50\% mass range (2.0--5.0~pc); and the dotted lines that for the 50--75\% mass range (5.0--8~pc). The solid points are located at the masses used in the fitting. Both luminous and non-luminous objects contribute to the mass functions in Fig.~10. 
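Schematically, the LTE assumption used above amounts to local energy equipartition among the mass species at each radius, so that a population of mass $m$ is assigned a dispersion profile scaled from the observed one (our shorthand for the method; GF give the full treatment): $$m\,\sigma_{m}^{2}(r) = m_{\rm obs}\,\sigma_{\rm obs}^{2}(r) \quad\Longrightarrow\quad \sigma_{m}(r) = \sigma_{\rm obs}(r)\sqrt{m_{\rm obs}/m},$$ where $m_{\rm obs}$ and $\sigma_{\rm obs}(r)$ refer to the observed giants and turnoff stars. As noted above, this scaling is only approximate, particularly for low-mass objects at large radii.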
The global cluster M/L provides a constraint on the relative contributions, but exploiting this requires knowing the uncertain light-to-mass ratios (L/M) for the main-sequence and giant stars in the various mass groups. We begin by calculating the cluster M/L assuming that all of the stars with masses below the turn-off are luminous. Multiplying the L/M's of the various mass groups by the mass function yields the total cluster luminosity, which we may then divide into the dynamically-estimated total cluster mass. The L/M values that we used are based on direct estimates from color-magnitude diagrams (see Pryor {\it et~al.}\ 1991). The values (with ${\rm m}$ the stellar mass in solar units) are: zero for ${\rm m}<0.16$ and ${\rm m}>0.8$; 0.014 for $0.16<{\rm m}<0.25$; 0.026 for $0.25<{\rm m}<0.40$; 0.14 for $0.40<{\rm m}<0.63$; and 4.6 for $0.63<{\rm m}<0.8$ (including giant stars). The ratio of this population M/L to the average dynamical M/L derived from the normalization of the velocity dispersion profile is approximately the fractional contribution of the main-sequence and giant stars to the total mass. The M/L derived from the velocity dispersion normalization is 1.7 (\S 4.2), and the M/L derived from the mass function is 0.3. The ratio implies that 85\% of the mass is in non-luminous objects not included in the L/M values, presumably stellar remnants. The mass functions in Fig.~10 argue that much of the mass of M15 is in the form of 0.6--0.7~${\rm M}_\odot$ objects, presumably white dwarfs. GF reached the same conclusion for M15 and three other clusters. This result may be in conflict with the mass of 0.5~${\rm M}_\odot$ found by Richer {\it et~al.}\ (1995) for white dwarfs in the globular cluster M4 based on their location in the color-magnitude diagram. However, some heavier white dwarfs are also expected from the evolution of massive main-sequence stars early in the history of the cluster.
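The remnant-fraction arithmetic above can be reproduced in a few lines (a sketch using the rounded M/L values quoted in the text):

```python
# Fraction of M15's mass in non-luminous remnants, from the comparison of
# the population M/L (mass function + stellar L/M values) with the
# dynamical M/L (velocity dispersion normalization, Sec. 4.2).
ml_dynamical = 1.7    # M/L from the velocity dispersion normalization
ml_population = 0.3   # M/L assuming all sub-turnoff stars are luminous

# The ratio approximates the mass fraction in main-sequence and giant stars.
luminous_fraction = ml_population / ml_dynamical
remnant_fraction = 1.0 - luminous_fraction
print(f"luminous: {luminous_fraction:.0%}, remnants: {remnant_fraction:.0%}")
# With these rounded inputs the remnant share comes out near 82%,
# consistent with the ~85% quoted in the text.
```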
The constraints on the numbers of lower-mass objects ($<$ 0.3 M$_\odot$) are not as firm due to the uncertainty in the velocity dispersion at large radii, as reflected in the larger confidence band, and due to the assumption of LTE. A more reasonable approach would be to properly estimate the relation between the velocity dispersion profiles for different masses using Fokker-Planck simulations. However, we find relatively few objects with masses below $0.3~{\rm M}_\odot$. This last result contradicts GF, who found mass functions increasing at the smallest masses for all of the clusters that they studied, including M15. The different result yielded by the larger velocity dataset presented here demonstrates the sensitivity of the mass function estimate to noise in the data. \section{SUMMARY AND DISCUSSION} Knowledge of the present-day stellar mass function in globular clusters is crucial for understanding the initial mass function. The latter contains important information about star formation in the early galaxy and plays an important role in cluster dynamical evolution (e.g., Angeletti \& Giannone 1979; Chernoff \& Weinberg 1990). Dull~{\it et~al.}\ (1996) have produced an extensive set of Fokker-Planck models for M15 and find that the projected dispersion profile is well fitted by a post-collapse model. From this model, they estimate that there are a few $\times$ $10^4$ objects more massive than 1~${\rm M}_\odot$ in the central 6\arcsec\ (0.3~pc). Integrating our estimated mass function (Fig.~10) over objects more massive than 1~${\rm M}_\odot$ gives about 2000 objects, which are all in the innermost 25\% of the mass (radii smaller than 2.0~pc). Both models find a few $\times$ $10^5$ objects with masses of about 0.7~${\rm M}_\odot$, which are presumably a combination of main-sequence stars and stellar remnants (white dwarfs). The largest disagreement between their mass function and ours is for the numbers of low-mass objects.
Dull~{\it et~al.}\ find a few $\times 10^5$ stars with masses less than 0.3~${\rm M}_\odot$, which is higher than our 95\% confidence level for these masses. It needs to be stressed that both of the above dynamical estimates of the present-day mass function are uncertain: our results will be inherently noisy since we are trying to estimate the quantities directly from the data, while Dull~{\it et~al.}\ require assumptions about the initial mass function and the relation between the initial stellar mass and the final remnant mass, both of which may have large uncertainties. The strongest conclusions about the M15 mass function are those reached by both approaches: that a large number (2000 to $10^4$) of 1.4~${\rm M}_\odot$ objects are present in the central region and that most of the cluster mass is in 0.5--0.7~${\rm M}_\odot$ objects. These conclusions can be compared to those of Heggie \& Hut (1996), who find that approximately 50\% of 47~Tuc's mass may be in the form of white dwarfs or lower-mass stars. Our results suggest that white dwarfs, rather than lower-mass stars, constitute most (about 85\%) of the mass of M15. Strengthening our dynamical estimates of M15's mass function requires a larger number of velocities in the outer regions ($R>2$\arcmin) of M15. Low-mass stars are expected to populate the regions at large radii, where photometric studies suggest that they may be present in significant numbers (Hesser {\it et~al.}\ 1987, Richer {\it et~al.}\ 1990). Since photometric estimates of the number of low-mass stars require large completeness corrections and depend on the uncertain main-sequence mass-luminosity relation, dynamical estimates provide valuable additional constraints. The theory of core collapse appears to pass a very basic test: the mass density profile for M15 shown in Fig.~7 is remarkably well fit by a power law with an index of --2.2 throughout most of its radial extent, as predicted (Cohn 1980).
There is a shallow ``shoulder'' in the profile at a radius of about 0.2\arcmin\ which is suggestive of a transition between most of the density being provided by massive stellar remnants and most by stars with the turnoff mass (the latter having a shallower profile). Unfortunately, Dull {\it et~al.}\ (1996) did not show the spatial density profiles of their Fokker-Planck models. The projected density profiles (their Fig.~10) do suggest that this transition should occur at about that radius. However, our measured projected velocity dispersion profile in Fig.~9 is somewhat flatter than the profiles predicted by the Dull {\it et~al.}\ models (see their Fig.~6), which increase from a value of 9.5~$\rm {km}~\rm s^{-1}$\ at 30\arcsec\ to 12~$\rm {km}~\rm s^{-1}$\ at 3\arcsec. Rotation has generally not been included in calculations of globular cluster evolution and we stress the importance of doing so in light of our results for M15. The radial profiles of the rotation PA and amplitude, such as are shown in Fig.~5, could provide significant constraints for such models. Recently, Einsel \& Spurzem (1996) have produced Fokker-Planck simulations including rotation. Their projected rotation curve is similar to what we measure using the stellar velocities, presented as the solid points in Fig.~5: there is a peak in the rotation amplitude close to the half-mass radius. Unfortunately, we have too few velocities at large radii to allow a detailed comparison there to the results of Einsel \& Spurzem. Unlike the broad agreement at larger radii, in the central region we find a possible increase in the rotation amplitude which is not seen in their results. One explanation could be the existence of a central mass concentration not included in the models. Since the models contain only a single-mass stellar component, possibilities are both heavy remnants and a massive central black hole.
We also see a large change in the rotation position angle at small radii, which is even harder to understand theoretically. Both the theory and the data need to be improved for the region near the center of the cluster. Fig.~9 suggests that the maximum allowed black hole mass is around 3000~${\rm M}_\odot$. However, this number is uncertain due to our assumptions of isotropy and constant M/L. Fully 2-integral, and possibly 3-integral, models are necessary to adequately constrain the maximum allowed black hole mass from the velocity dispersion data alone. Whether or not M15 contains a central black hole remains an open question. Fig.~9 suggests that the best isotropic model with a constant stellar M/L is one that contains a black hole of 1000~${\rm M}_\odot$. However, if the M/L is allowed to vary, then the velocity dispersion data imply an increase from 1.7 at large radii to about 3 in the central region (Fig.~7). This barely significant change could be caused by just the modest number of heavy remnants predicted in the central region of old globular clusters by Fokker-Planck simulations such as those of Dull~{\it et~al.}\ (1996). Both models -- a 1000 ${\rm M}_\odot$ central black hole and an increase of the M/L to 3 -- have the same projected velocity dispersion profile. Thus, even with additional velocity data in the central regions, it will be difficult to discriminate between them. One alternative is to use the rotation properties to discriminate between the models. For instance, a central mass concentration could cause an increase in the rotation amplitude at smaller radii, as has been observed in Fig.~5. Determining the strength of such a test will require detailed modeling including a central mass concentration and more sophisticated Fokker-Planck simulations. In summary, even excellent ground-based seeing ($\sim0.8\arcsec$) still imposes significant uncertainties on the rotation and velocity dispersion profiles of M15 at small radii.
Better data would reduce the present uncertainties in these profiles in the central region and might reveal behavior that cannot be produced by the change in M/L resulting from mass segregation. For example, the velocity dispersion produced by a 3000~${\rm M}_\odot$ central black hole would lead to an increase of the inferred M/L to around 8 at a radius of 0.03~pc, which would be difficult to obtain with any reasonable remnant population. The data to test such predictions will require adaptive optics or HST. \acknowledgments KG would like to thank the Sigma Xi foundation, which provided support for travel and accommodations during the observations. We also thank Jennifer Gieber for reducing the radial velocity standard star data and the staff at the CFHT who made two very difficult runs work smoothly. Partial support for this research was provided by grant AST90-20685 from the National Science Foundation.
\section{Background} There is strong motivation to create large artificial neural networks (ANNs) to increase task performance, and there are efforts to develop new methods of training networks containing up to trillions of parameters.\cite{rajbh2019zero} However, there are a number of challenges to training and using large ANNs. Some of these challenges have found solutions in new types of layers or architectural designs. For example, some networks are too large for the task they are being trained to perform, resulting in poor task performance relative to a smaller network trained to perform the same task. This issue is resolved with skip or residual connections, which may require depth rescaling using another neuron layer.\cite{he2015deep} Another major issue is overtraining/memorization, a common problem in nearly all modern neural networks. Overtraining causes ANNs to memorize inputs rather than learn a set of rules that generalize to new data, and it has been shown that modern networks are highly capable of memorizing randomized image labels.\cite{zhang2016understanding} To combat overtraining and memorization, a variety of techniques and layers are used, including data augmentation,\cite{hernandez2018} dropout layers,\cite{srivastava14} and even the creation of new neural networks in the case of adversarial neural network training.\cite{antoniou2017data} Thus, the drive to create larger ANNs is often compounded by further increases in ANN size by using architecture components designed to combat the problems of large networks, which makes the deployment of these networks at scale a challenge. The current state of ANN research reflects a mindset analogous to that of human intelligence research prior to the 1990s. The general opinion in human intelligence research was that an individual with high intelligence was more capable of performing tasks with high performance because his or her brain would be more capable of recruiting large numbers of neurons.
This changed in the 1990s with multiple publications from Haier \textit{et al},\cite{haier1988,haier1992b} one of which showed individuals with high intelligence had higher scores on Tetris but had lower brain metabolism while playing the game.\cite{haier1992a} This finding led to the formulation of the neural efficiency hypothesis, which states that a key factor in intelligence is the capacity of the brain to perform a task by using the smallest amount of neural activity.\cite{neubauer2009} In the context of human intelligence and neural efficiency, the current drive to increase neural network task accuracy by increasing the size and complexity of a network may be interpreted as the development of less intelligent neural networks. This is supported anecdotally in the literature by the tendency of most neural networks to memorize images with randomized labels during training.\cite{zhang2016understanding} Inspired by the discovery of the neural efficiency hypothesis from human intelligence research, this work describes a metric for assessing neural efficiency in neural network layers. The artificial intelligence quotient (aIQ) is defined as a combination of neural network efficiency and model performance, so that a neural network with ``high intelligence'' uses a small number of neurons to make accurate predictions. \section{Approach} \subsection{State Space} Prior attempts to increase the efficiency of a neural network were based on removal of weights (pruning) according to weight magnitude or gradients of weights during backpropagation \cite{NIPS1989_250,NIPS1992_647} or analysis of firing frequency.\cite{hu2016network} In this manuscript, the state space of a single layer is analyzed, where a single state is the collective output of a neural layer given a single set of inputs.
Since the output values of all neurons for a given set of inputs are generally passed to the subsequent layer, it may be beneficial to analyze how neurons fire as a collective rather than analyzing individual neurons. If the output of a neuron layer defines a single state, then the state space is the frequency with which each state of a layer occurs as all images in the train or test data pass through the network. When one image is passed through a convolutional neural network, convolutional layers will generate multiple states per image. In contrast, dense layers will generate only one state per image. In this manuscript, neuron outputs are quantized as either firing (output is greater than zero) or non-firing (output is less than or equal to zero). However, even with quantization the state space could still be unmanageably large since the number of possible states in a layer after quantization will be $2^{N_l}$, where $N_l$ is the number of neurons in a layer. For most ANNs, the number of neurons in a layer frequently exceeds 64, meaning most computers would be incapable of creating a memory address for each layer state. In reality it might be expected that significantly fewer states are actually generated, so bins for a layer state are only created when observed. \subsection{Neural Efficiency} Neural efficiency is defined here as utilization of state space, and it can be measured by entropic efficiency. If all possible states are recorded for data fed into the network, then the probability, $p_s$, of a state $s$ occurring can be used to calculate Shannon's entropy, $E_l$, of network layer $l$: $$E_{l} = -\sum_{s} p_{s} \log_2(p_{s})$$ Intuitively, $E_l$ is an estimation of the minimum number of neurons required to encode the information exported by the neural layer if the output information could be perfectly encoded.
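The entropy calculation just described is straightforward to implement. The sketch below is our own illustration (the array shapes, the use of a Counter, and the toy layer are assumptions, not the authors' code); it quantizes a layer-output array, bins only the observed states, applies Shannon's formula, and also shows the efficiency normalization $E_l/N_l$ defined in the next subsection:

```python
import numpy as np
from collections import Counter

def layer_entropy(outputs):
    """Shannon entropy (bits) of a layer's quantized state space.

    outputs: array of shape (n_samples, n_neurons); each row is one set of
    raw activations, quantized to firing (> 0) / non-firing (<= 0).
    """
    states = np.asarray(outputs) > 0         # quantize to binary states
    counts = Counter(map(tuple, states))     # bins created only when observed
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()                             # state probabilities
    return float(-(p * np.log2(p)).sum())

# Toy check: 1000 samples through a hypothetical 3-neuron layer with
# symmetric random activations should approach the 3-bit maximum.
rng = np.random.default_rng(0)
E = layer_entropy(rng.normal(size=(1000, 3)))
eta = E / 3                                  # neural efficiency, E_l / N_l
print(f"E_l = {E:.2f} bits, eta = {eta:.2f}")
```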
The maximum theoretical entropy of the layer will occur when all states occur the same number of times, and the entropy value will be equal to the number of neurons in the layer, $N_l$. Neural efficiency, $\eta_{l}$, can then be defined as the entropy of the observed states relative to the maximum entropy: $$\eta_{l} = \frac{E_l}{N_l}$$ Thus, neural efficiency, $\eta_{l}$, is defined as state space efficiency using Shannon's entropy with a range of 0--1. Layers with neural efficiency values close to zero likely have more neurons than needed to process the information in the layer, while neuron layers with neural efficiency close to one are making maximum usage of the available state space. Alternatively, high neural efficiency could also mean too few neurons are in the layer. \subsection{Artificial Intelligence Quotient} Neural efficiency is a characteristic of intelligence, but so is task performance. Therefore, an intelligent algorithm should perform a task with high accuracy and efficiency. Using $\eta_{l}$ as layer efficiency, the neural network efficiency, $\eta_N$, can be calculated as the geometric mean of all layer efficiencies in a network containing $L$ number of layers: $$\eta_{N} = \Bigl(\prod_{l=1}^{L} \eta_{l}\Bigr)^\frac{1}{L}$$ Then, the artificial intelligence quotient (aIQ) can be defined as: $$aIQ = \Bigl(P^{\beta}\,\eta_{N}\Bigr)^\frac{1}{\beta+1}$$ where $P$ is the performance metric and $\beta$ is a tuning parameter to give more or less weight to performance at the cost of $\eta_N$. \section{Experiments} \subsection{Exhaustive LeNet Training} To evaluate neural efficiency and aIQ, two types of neural networks were trained on the MNIST digits data set.\cite{lecun98} The first network (LeNet-300-100) consists of two densely connected layers followed by a classification layer.
The second network (LeNet-5) consists of two convolutional layers, each followed by a max pooling layer (2x2 pooling with stride 2), and a densely connected layer followed by a classification layer. All layers used exponential linear unit (ELU) activation,\cite{clevert2015fast} L2 weight regularization (0.0005), and no batch normalization or dropout was used. Standard stochastic gradient descent with Nesterov updates was used with a static learning rate of 0.001 and momentum of 0.9. Training was stopped when the training accuracy did not increase within five epochs of a maximum value. For each neural network architecture, the number of neurons in every layer was varied from 2 to 1024 by powers of 2. All combinations of layer sizes were trained, with eleven replicates using different random seeds to determine variability resulting from different initializations. This resulted in a total of 1,100 different LeNet-300-100 trained networks, and 11,000 different LeNet-5 trained networks. Additional models were trained to identify the specific architecture with the highest aIQ, so that the total number of LeNet-300-100 models was 2,575 and the total number of LeNet-5 models was 26,269. Models were constructed and trained using Tensorflow 2.1, and networks were trained in parallel on two GPU servers, each with 8 NVIDIA Quadro RTX 8000s. These networks serve as a baseline for comparison in subsequent experiments. Once models were trained, the entropy of each neuron layer in a network was calculated based on the distribution of all possible layer states generated by passing all test data through the network (training data was also evaluated separately). Then, aIQ was calculated with $\beta = 2$ to give a nominal preference for higher accuracy networks. \subsection{Dropout and Batch Normalization} In the context of neural layer efficiency, batch normalization was hypothesized to improve efficiency while dropout was hypothesized to decrease it.
The rationale for batch normalization improving neural efficiency is that neuron activation is driven toward the center of the distribution of neuron outputs. As the firing frequency of each neuron approaches 50\%, the entropy is more likely to obtain the maximum value. In contrast, dropout was hypothesized to decrease the available state space during training by dropping the outputs of neurons, effectively decreasing the maximum entropy value. This concept is also in line with the original dropout paper, which claimed that dropout creates redundancies within the network, and redundancies are inefficient. To test the effect of dropout and batch normalization, neural networks were trained with the same number of neurons as described in the Exhaustive LeNet Training section, except a dropout layer ($p = 50\%$) or batch normalization layer was added after every hidden layer. For networks with batch normalization or dropout layers, only 3 replicates were trained instead of 11. \subsection{Memorization and Generalization Tests} If aIQ provides an assessment of capacity to learn general rules rather than memorize training inputs, it might be expected that network architectures with high aIQ perform well on data sets with randomized labels. The reason for this is that for low aIQ networks with low $\eta_N$, training inputs are memorized because the state space is likely much larger than the space of observed states. This means new data may be classified correctly or incorrectly based on how similar an image was to one of the memorized inputs. However, for high aIQ networks there is insufficient bandwidth to create a special state for an input with a randomized label. To test this, network architectures were trained from scratch where 25\%, 50\%, 75\% or 100\% of the training labels were randomized. Then, accuracy and neural efficiency were measured for both the test and train data sets without randomized labels.
To test the capacity of high aIQ networks to generalize to new data, trained networks were used to evaluate the EMNIST data set,\cite{cohen2017emnist} which contains 280,000 additional digit images in the same formats as the original 70,000 digit images contained in MNIST. For both memorization and generalization tests, models were trained as previously described with batch normalization layers, but only three replicates were trained per model. \section{Results} \subsection{Accuracy, aIQ, and Neural Efficiency} \begin{figure}[htb!] \centering \includegraphics[width=\linewidth]{Figure1.png} \caption{Trends in Accuracy (a), aIQ (b), and Neural Efficiency for hidden layer 1 (c) and hidden layer 2 (d) in the LeNet-300-100 model. Data shown is for test data. All values range from 0 to 1, and values are the mean $\pm$ 95\% confidence interval of 11 replicates.} \label{fig:01} \end{figure} The LeNet-300-100 network permits easy visualization of trends in model accuracy, aIQ, and the efficiency of each layer due to the network only containing two hidden layers. For the training data (not shown), accuracy increased monotonically with the number of neurons added in each layer, but the test data showed a slight decrease in accuracy once hidden layer 1 ($N_{1}$) contained more than 128 neurons (Figure \ref{fig:01}a). In contrast, aIQ ($\beta=2$) values reached a local maximum when $N_1 = 8$ and $N_2 = 4$ (Figure \ref{fig:01}b). The decrease in aIQ values is due to the trend in neural efficiency to decrease as the number of neurons in a layer increases (Figure \ref{fig:01}c-d). While it may be expected that decreasing the number of neurons in a layer would increase the neural efficiency, it was unexpected how strongly changing the number of neurons in other layers could affect neural efficiency.
For example, for networks where $N_{1} = 8$, the neural efficiency of layer 1, $\eta_1$, generally increased as $N_2$ increased, except there was a local minimum at $N_2 = 16$. Local minima can be observed in other cases where the number of neurons in one layer is held constant and the efficiency of that layer is tracked as the other layer changes (e.g. when $N_2 = 8$, a local minimum occurs at $N_1 = 32$). Similar trends were observed in the LeNet-5 models, where changes in the number of neurons in one layer affected the efficiency of other layers. \begin{table}[htb!] \caption{The top three network architectures by accuracy and aIQ.} \label{table:01} \centering \begin{tabular}{cccccc} \toprule \multicolumn{3}{c}{LeNet-300-100 ($N_{l}$)} \\ \cmidrule(r){1-3} Layer 1 & Layer 2 && Accuracy (\%)$^{\dagger}$ & aIQ ($\beta=2$)$^{\dagger}$ & Parameters (fold decrease) \\ \midrule 128 & 1024 & & \textbf{97.58 $\pm$ 0.04} & 32.70 $\pm$ 0.01 & 242,826 (1x) \\ 256 & 1024 & & 97.54 $\pm$ 0.06 & 29.12 $\pm$ 0.01 & 474,378 (0.5x) \\ 64 & 1024 & & 97.53 $\pm$ 0.08 & 36.67 $\pm$ 0.02 & 127,050 (1.9x) \\ \addlinespace[0.25em] 11 & 4 & & 92.91 $\pm$ 0.19 & \textbf{86.41 $\pm$ 0.71} & 8,733 (27.8x) \\ 11 & 5 & & 93.59 $\pm$ 0.29 & 85.93 $\pm$ 1.04 & 8,755 (27.7x) \\ 7 & 4 & & 90.76 $\pm$ 0.25 & 85.90 $\pm$ 1.00 & 5,577 (43.5x) \\ \midrule \multicolumn{3}{c}{LeNet-5 ($N_{l}$)} \\ \cmidrule(r){1-3} Layer 1 & Layer 2 & Layer 3 & Accuracy (\%)$^{\dagger}$ & aIQ ($\beta=2$)$^{\dagger}$ & Parameters (fold decrease) \\ \midrule 1024 & 1024 & 1024 & \textbf{99.16 $\pm$ 0.02} & 24.28 $\pm$ 0.004 & 43,030,538 (1x) \\ 1024 & 128 & 512 & 99.15 $\pm$ 0.03 & 33.02 $\pm$ 0.008 & 4,357,770 (9.9x) \\ 1024 & 512 & 1024 & 99.14 $\pm$ 0.03 & 26.22 $\pm$ 0.005 & 21,534,218 (2.0x) \\ \addlinespace[0.25em] 3 & 9 & 4 & 96.84 $\pm$ 0.18 & \textbf{88.28 $\pm$ 0.46} & 1,392 (30,912.9x) \\ 3 & 5 & 5 & 96.72 $\pm$ 0.16 & 88.18 $\pm$ 1.52 & 923 (46,620.3x) \\ 3 & 4 & 4 & 95.37 $\pm$ 0.19 & 88.02 $\pm$ 0.64 & 692
(62,182.9x) \\ \bottomrule \multicolumn{6}{l}{$\dagger$ Accuracy and aIQ values are mean $\pm$ 95\% CI (n=11). aIQ values are x100.} \\ \multicolumn{6}{l}{Data shown is for metrics calculated on the test data set.} \\ \end{tabular} \end{table} Additional networks were trained to identify networks with the highest aIQ for each architecture. Analysis of the top three neural networks for accuracy or aIQ for both LeNet models is shown in Table \ref{table:01}. For the LeNet-300-100 models, the model with the highest test accuracy ($N_1 = 128$, $N_2 = 1024$) achieved an accuracy of 97.58\% $\pm$ 0.04\% with an aIQ of 32.7 $\pm$ 0.01 (values are mean $\pm$ 95\% CI, n=11). The highest aIQ model ($N_1 = 11$, $N_2 = 4$) had an accuracy of 92.91\% $\pm$ 0.19\% with an aIQ of 86.41 $\pm$ 0.71. Thus, the highest aIQ network was 4.67\% less accurate but contained $\sim$27.8 times fewer parameters. The differences between the highest accuracy and highest aIQ networks were even more drastic for the LeNet-5 models. The highest accuracy network ($N_1 = N_2 = N_3 = 1024$) had an accuracy of 99.16\% $\pm$ 0.02\% and an aIQ of 24.28 $\pm$ 0.004 (values are mean $\pm$ 95\% CI, n=11). However, the highest aIQ network ($N_1 = 3$, $N_2 = 9$, $N_3 = 4$) had an accuracy of 96.84\% $\pm$ 0.18\% and aIQ of 88.28 $\pm$ 0.46. The highest aIQ network had a lower accuracy by 2.32\% but contained 30,912.9 times fewer parameters. \subsection{Batch Normalization and Dropout as Neural Efficiency Modifiers} \subsubsection{Batch Normalization} Batch normalization generally increased the accuracy and $\eta_N$ for both LeNet-300-100 and LeNet-5 networks, resulting in a rise in aIQ for most networks (Table \ref{table:02}). For LeNet-300-100 networks, 59.26\% of network architectures with batch normalization had a mean $\eta_N$ (n=3 replicates) higher than the mean $\eta_N$ of corresponding networks trained without batch normalization (n=11).
The reason why all networks with batch normalization do not have higher efficiency than networks without batch normalization can be explained by state space limits. For fully connected layers, the largest number of observable states would be equal to the number of input examples. This was confirmed by looking at the state space of the last dense layer to verify only 60,000 states (i.e. the number of training examples) were observed for large neuron layers when evaluating the training data. Thus, batch normalization could not increase the entropy of large layers. When only small networks are considered ($N_1 \leq 16$, $N_2 \leq 16$), 74.13\% of networks with batch normalization had a mean $\eta_N$ higher than the same network trained without batch normalization. In addition to higher network efficiency, neural networks trained with batch normalization also had higher accuracy on test data, where 77.78\% of networks with batch normalization achieved higher accuracies than the same networks without batch normalization. Since both accuracy and $\eta_N$ increase with batch normalization, it is unsurprising that aIQ increased in 74.07\% of networks where all layers had 16 or fewer neurons. For LeNet-5 networks, 74.2\% of all networks with batch normalization had a mean $\eta_N$ (n=3) higher than corresponding networks without batch normalization (n=11). However, batch normalization caused the accuracy of networks to decrease on average, with only 32.60\% of networks achieving a higher accuracy than corresponding networks without batch normalization. When considering only small networks where $N_l \leq 16$ for all layers, 92.6\% of networks with batch normalization had a higher accuracy. This discrepancy can be explained by batch normalization increasing the likelihood of large networks memorizing training inputs, leading to worse performance on the test data (i.e. overtraining).
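The state-space ceiling invoked above is easy to quantify: a dense layer generates one state per input example, so with 60,000 MNIST training images its observed entropy cannot exceed $\log_2 60000$ bits (a quick check, our own illustration):

```python
import math

# A dense layer produces one quantized state per input example, so the
# number of observable states is capped by the size of the data set.
n_train = 60_000                 # MNIST training examples
cap_bits = math.log2(n_train)    # upper bound on observed layer entropy
print(f"entropy cap = {cap_bits:.2f} bits")

# Any dense layer with 16 or more neurons therefore cannot reach its
# theoretical maximum entropy of N_l bits on this training set, no matter
# how batch normalization redistributes the activations.
```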
For $\eta_N$, 71.10\% of all networks with batch normalization had a higher $\eta_N$ relative to their corresponding networks without batch normalization, and this fraction decreased to 64.35\% for networks where all layers had fewer than 16 neurons. Overall, 74.20\% of all networks had a higher aIQ when trained with batch normalization. These results confirm the hypothesis that batch normalization generally acts to improve $\eta_N$ in addition to improving classification accuracy (Table \ref{table:02}), making neural networks ``more intelligent'' as assessed by aIQ. \begin{table}[htb!] \caption{The top network architectures by aIQ when batch normalization (BatchNorm), dropout (Dropout), or neither (None) are added to every layer.} \label{table:02} \centering \begin{tabular}{ccccccc} \toprule \multicolumn{3}{c}{LeNet-300-100 ($N_{l}$)} \\ \cmidrule(r){1-3} Layer 1 & Layer 2 && Modifier & Accuracy (\%)$^{\dagger}$ & aIQ ($\beta=2$)$^{\dagger}$ & $\eta_N$ (\%) \\ \midrule 11 & 4 && \textbf{None} & 92.91 $\pm$ 0.19 & \textbf{86.41 $\pm$ 0.71} & \textbf{74.77 $\pm$ 1.72} \\ & && BatchNorm & \textbf{93.50 $\pm$ 0.36} & 82.10 $\pm$ 1.44 & 63.33 $\pm$ 3.07 \\ & && Dropout & 78.41 $\pm$ 3.82 & 74.36 $\pm$ 4.06 & 66.89 $\pm$ 4.47 \\ \addlinespace[0.25em] 10 & 6 && None & 93.52 $\pm$ 0.14 & 83.16 $\pm$ 2.52 & 66.20 $\pm$ 5.75 \\ & && \textbf{BatchNorm} & \textbf{93.65 $\pm$ 0.44} & \textbf{87.76 $\pm$ 0.44} & \textbf{77.07 $\pm$ 1.16} \\ & && Dropout & 83.63 $\pm$ 0.63 & 75.62 $\pm$ 0.93 & 61.83 $\pm$ 1.49 \\ \addlinespace[0.25em] 7 & 5 && None & 91.53 $\pm$ 0.31 & 83.88 $\pm$ 1.75 & 70.65 $\pm$ 4.21 \\ & && BatchNorm & \textbf{91.91 $\pm$ 0.25} & \textbf{86.42 $\pm$ 1.20} & \textbf{76.43 $\pm$ 2.76} \\ & && \textbf{Dropout} & 80.63 $\pm$ 2.96 & 78.31 $\pm$ 2.44 & 73.88 $\pm$ 1.47 \\ \midrule \multicolumn{3}{c}{LeNet-5 ($N_{l}$)} \\ \cmidrule(r){1-3} Layer 1 & Layer 2 & Layer 3 & Modifier & Accuracy (\%)$^{\dagger}$ & aIQ ($\beta=2$)$^{\dagger}$ & $\eta_N$ (\%) \\ \midrule 3 & 9 &
4 &\textbf{ None } & 96.84 $\pm$ 0.18 & \textbf{88.28 $\pm$ 0.46} & 73.38 $\pm$ 1.22 \\ & & & BatchNorm &\textbf{ 96.95 $\pm$ 0.58} & 82.44 $\pm$ 3.08 & 59.73 $\pm$ 6.47 \\ & & & Dropout & 79.72 $\pm$ 4.96 & 77.82 $\pm$ 2.91 & \textbf{74.23 $\pm$ 0.99} \\ \addlinespace[0.25em] 2 & 8 & 8 & None & 97.83 $\pm$ 0.10 & 85.58 $\pm$ 2.28 & 65.81 $\pm$ 5.03 \\ & & & \textbf{BatchNorm} & \textbf{98.05 $\pm$ 0.20} & \textbf{88.12 $\pm$ 1.68} & 71.21 $\pm$ 4.30 \\ & & & Dropout & 88.06 $\pm$ 2.00 & 82.75 $\pm$ 0.38 & \textbf{73.16 $\pm$ 4.21} \\ \addlinespace[0.25em] 3 & 4 & 7 & None & 97.17 $\pm$ 0.22 & 83.71 $\pm$ 2.65 & 62.55 $\pm$ 5.74 \\ & & & BatchNorm &\textbf{ 97.27 $\pm$ 0.34} & 84.91 $\pm$ 2.16 & 64.76 $\pm$ 4.45 \\ & & & \textbf{Dropout} & 90.15 $\pm$ 1.66 & \textbf{85.45 $\pm$ 1.47} & \textbf{76.82 $\pm$ 3.32} \\ \bottomrule \multicolumn{7}{l}{$\dagger$ Accuracy, aIQ, and Efficiency values are mean $\pm$ 95\% CI (n=11 for None, n=3 otherwise).} \\ \multicolumn{7}{l}{aIQ values are x100. Data shown is for metrics calculated on the test data set.} \\ \end{tabular} \end{table} \subsubsection{Dropout} Dropout had different effects on the LeNet-300-100 and LeNet-5 architectures. Dropout generally decreased $\eta_N$ in LeNet-300-100 networks and increased $\eta_N$ in the LeNet-5 networks, but decreased aIQ in nearly all networks due to a drop in accuracy for nearly all networks (Table \ref{table:02}). For LeNet-300-100 networks, none of the networks trained with dropout had an accuracy higher than corresponding networks trained without dropout, and only 5.59\% of networks had a higher efficiency. Accordingly, no networks with dropout had a higher aIQ relative to corresponding networks without dropout. The trends were surprisingly different for LeNet-5 networks. While none of the networks with dropout had an accuracy higher than corresponding networks without dropout, 60.60\% of dropout networks had a higher $\eta_N$. 
When considering neural networks with 16 or fewer neurons in each layer, 99.33\% of networks with dropout had a higher $\eta_N$ than their corresponding networks without dropout. However, only 15.60\% of dropout networks had a higher aIQ than corresponding networks without dropout, meaning the increase in efficiency was not sufficient to offset the decrease in accuracy in the majority of networks. The general conclusion from these results is that dropout generally decreases accuracy, leading to a drop in aIQ. The reason accuracy dropped in all networks may be due to a variety of factors, including the dropout rate being too high ($p = 50\%$) or the placement of dropout in every layer of the network. The explanation for why dropout appeared to decrease $\eta_N$ in LeNet-300-100 networks but increased $\eta_N$ in LeNet-5 networks may be that LeNet-300-100 has only dense layers while LeNet-5 contains convolutional layers. It is known that dropout can have a negligible or detrimental impact on network performance because dropout layers add noise to the network.\cite{park2017} As a result, the increased $\eta_N$ may be due to increased noise in the convolutional layers leading to higher entropy. A better assessment of the effect of dropout for convolutional layers may be to use dropout layers with different dropout rates or dropout layer types constructed specifically for convolutional layers, such as DropBlock or spatial dropout (a survey of different dropout types was performed by Labach \textit{et al}.\cite{labach2019survey}). \subsection{Memorization and Generalization} \begin{table}[htb!]
\caption{Memorization tests on the networks with the highest aIQ or accuracy.} \label{table:03} \centering \begin{tabular}{cccccccc} \toprule \multicolumn{3}{c}{LeNet-300-100 ($N_{l}$)} & \multicolumn{5}{c}{Accuracy (\%)} \\ \cmidrule(r){1-3} \cmidrule(r){4-8} Layer 1 & Layer 2 && 0\% Rand & 25\% Rand & 50\% Rand & 75\% Rand & 100\% Rand \\ \midrule 1024& 1024 && 98.03 & 85.91 & 63.43 & 36.99 & 9.48 \\ \addlinespace[0.25em] 10 & 6 && 93.65 & 91.56 & 89.59 & 84.68 & 10.89 \\ \midrule \multicolumn{3}{c}{LeNet-5 ($N_{l}$)} \\ \cmidrule(r){1-3} Layer 1 & Layer 2 & Layer 3 & 0\% Rand & 25\% Rand & 50\% Rand & 75\% Rand & 100\% Rand \\ \midrule 1024& 1024 & 1024 & 99.11 & 93.99 & 76.07 & 44.24 & 10.34 \\ \addlinespace[0.25em] 2 & 8 & 8 & 98.05 & 96.73 & 95.69 & 92.51 & 7.49 \\ \bottomrule \multicolumn{8}{l}{The \% Rand indicates the percentage of labels that were randomized prior to training.} \\ \end{tabular} \end{table} \begin{table}[htb!] \caption{Generalization tests on the networks with the highest aIQ or accuracy.} \label{table:04} \centering \begin{tabular}{ccccccc} \toprule \multicolumn{3}{c}{LeNet-300-100 ($N_{l}$)} & \multicolumn{2}{c}{MNIST Accuracy (\%)} & \multicolumn{2}{c}{EMNIST Accuracy (\%)} \\ \cmidrule(r){1-3} \cmidrule(r){4-5} \cmidrule(r){6-7} Layer 1 & Layer 2 && 0\% Rand & 75\% Rand & 0\% Rand & 75\% Rand \\ \midrule 1024& 1024 && 98.03 & 36.99 & 76.21 & 24.93 \\ \addlinespace[0.25em] 10 & 6 && 93.65 & 84.68 & 58.43 & 55.46 \\ \midrule \multicolumn{3}{c}{LeNet-5 ($N_{l}$)} \\ \cmidrule(r){1-3} Layer 1 & Layer 2 & Layer 3 & 0\% Rand & 75\% Rand & 0\% Rand & 75\% Rand \\ \midrule 1024& 1024 & 1024 & 99.11 & 44.24 & 90.56 & 34.16 \\ \addlinespace[0.25em] 2 & 8 & 8 & 98.05 & 92.51 & 89.36 & 74.67 \\ \bottomrule \multicolumn{7}{l}{The \% Rand indicates the percentage of labels that were randomized prior to training on MNIST.} \\ \end{tabular} \end{table} To test the resistance of networks to overfitting/memorization, a percentage of image labels were 
randomized before training as previously described by Zhang \textit{et al}.\cite{zhang2016understanding} Overfitting occurs when incorrect, random labels are learned and is analogous to memorizing the label for an image. Table \ref{table:03} shows the results for the neural networks with the highest aIQ and the highest accuracy from the batch normalization tests for both the LeNet-300-100 and LeNet-5 networks. When none of the labels are randomized (0\%), the accuracy of the largest network is higher than that of the best aIQ network for both LeNet-300-100 and LeNet-5. When 25\%--75\% of the labels were randomized, the network with the highest aIQ had the best accuracy. Even when 75\% of the labels were randomly assigned, the high aIQ networks were able to achieve accuracies of 84.68\% and 92.51\% for the LeNet-300-100 and LeNet-5 networks, respectively. These numbers are considerably better than those of the neural networks with the highest accuracies in the batch normalization tests, which had test accuracies of 36.99\% and 44.24\% when trained on data with 75\% of the labels randomized. Both types of networks performed poorly, regardless of original aIQ or accuracy values, when all labels were randomly assigned (100\%). This result demonstrates that high aIQ networks are resistant to overtraining and memorization, learning the correct classification weights even when a majority of the input labels are incorrect. To test the capacity of networks to generalize to a larger, more diverse data set, the highest aIQ or accuracy networks were trained on the MNIST data set and then classification accuracy was measured on the EMNIST data set. The EMNIST digits data set has a similar format to MNIST, except it contains 280,000 more samples.\cite{cohen2017emnist} The general trend was that the highest accuracy networks performed better on EMNIST than the highest aIQ networks, with significant differences between the LeNet-300-100 and LeNet-5 networks (Table \ref{table:04}).
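The label-randomization procedure underlying the memorization tests above (after Zhang \textit{et al}.) can be sketched as follows; note that a resampled label can coincide with the original, so the effective noise level is slightly below the nominal fraction:

```python
import random

def randomize_labels(labels, fraction, n_classes=10, seed=0):
    # Replace `fraction` of the labels with uniformly random classes
    # before training, as in the memorization tests. A resampled
    # label may equal the original with probability 1/n_classes.
    rng = random.Random(seed)
    labels = list(labels)
    n_random = int(round(fraction * len(labels)))
    for i in rng.sample(range(len(labels)), n_random):
        labels[i] = rng.randrange(n_classes)
    return labels
```

Training proceeds normally on the corrupted labels, while test accuracy is always measured against the true labels.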
For the LeNet-300-100 network with the highest accuracy, performance dropped substantially from MNIST (98.03\%) to EMNIST (76.21\%), while the highest aIQ network had an even larger decrease from MNIST to EMNIST (93.65\% to 58.43\%, respectively). In contrast, the differences between the highest accuracy and aIQ networks were less drastic for the LeNet-5 networks. The EMNIST accuracy for the highest accuracy network was 90.56\% and was 89.36\% for the highest aIQ network. These experiments were repeated after training the same networks on MNIST with 75\% of the labels randomized, and as expected the highest accuracy networks had a significant decrease in performance on EMNIST while the highest aIQ networks performed considerably better for both LeNet-300-100 and LeNet-5 networks. These data demonstrate that high aIQ convolutional neural networks do not considerably underperform in comparison to much larger networks when performance is assessed on a larger, more diverse data set, but dense networks with high aIQ may not generalize as well. \section{Discussion} The major contribution of this work is the establishment of neuron layer state space and the capacity to use state space as a means of evaluating neuron layer utilization. Quantizing neurons into on/off positions that encode information collectively as a neural layer state provides a different perspective on how data is processed in the network. One prevailing thought is that neurons are discrete units that encode individual features, but the work here suggests that the information an individual neuron encodes may have value in the context of the other neurons it fires with. One advantage of conceptualizing the flow of information between layers using state space is the number of tools that become available for network analysis.
In this work, a rudimentary metric was created to understand layer utilization, but many other methods of analyzing state space could be used, such as the relative entropy of a single layer or the mutual information between two layers. Further investigation of state space may help to further compress the size of the network without significant decrease in accuracy, and may even permit learning the number of neurons in a layer during training. While neural architecture search has become a topic of interest to search for an ideal network computational cell, few if any of these methods include parameters to learn layer sizes. There are a few deficits in the current approach that should be resolved in future work. First is the issue of class imbalances. If there are class imbalances in either the training or test data, it would be expected that the efficiency metrics would be skewed. Class imbalances in the training data might cause more neurons in the network to be dedicated to classification of the most common class, while imbalances in the test data might over-represent the states that do occur. A second issue is the underlying assumption that maximum efficiency is achieved when all states occur at the same frequency. It might be expected that some states occur far more frequently than others, so that an ideal distribution of states might look more like an exponential distribution. This was superficially confirmed by examining the distribution of states for LeNet-300-100 networks with $N_1 = 8$ around the local minimum at $N_2 = 16$ (see Figure \ref{fig:01}). Therefore, an efficiency metric that accounts for a non-uniform ideal distribution of states might be a better measure of efficiency. A third issue is implementation. Recording and processing data collected in state space is expensive.
For some networks, the number of neurons in a convolutional layer can be 128 or more, meaning that at least $2^{128}$ layer states are possible and the states for every location in an image should be tracked to calculate the entropy. The current implementation of calculating entropy is not practical for larger networks that contain many more layers and layers with larger numbers of neurons (such as AlexNet with 4,096 neurons in the dense layers).\cite{alex2012} One potential solution to this might be to create a method of approximating the entropy by collecting sufficient information on a layer-by-layer basis to capture the shape of the distribution rather than recording every observed state. Finally, one topic not investigated here is how data augmentation impacts efficiency and aIQ. Data augmentation is generally used to help improve accuracy and generalization, likely because it helps to mitigate memorization. Networks with high aIQ were small and were fairly resistant to memorization (see Table \ref{table:03}), therefore it might be expected that certain types of augmentation (e.g. random cropping) might not improve performance when training a high aIQ network, but other types of augmentation might help (e.g. image flipping). \section{Conclusions} This work introduced the concept of state space, and demonstrated how analysis of state space is a useful tool for assessing neural layer efficiency. High aIQ networks were shown to have desirable properties, such as resistance to overtraining and comparable general performance on EMNIST to much larger networks. Future work with state space should establish better metrics of layer efficiency and more practical methods of computing efficiency, so that networks larger than the two models presented in this paper can be evaluated. State space may provide insight into sizing neural network layers to vastly decrease the size of existing neural networks.
\newpage \section*{Broader Impact} One benefit of understanding neural networks in terms of state space is that guidelines on how many neurons to place in a layer can be established knowing only superficial information about the training data. The current thought is that more neurons lead to higher accuracy, but this does not appear to provide improved generalization for the small convolutional models tested here (see Table \ref{table:04}). Using the concept of state space, the number of states in a dense layer cannot exceed the number of input images. If there are $X$ training examples, then $\lceil \log_{2} X \rceil$ neurons are sufficient to memorize every training image. As an example, the ImageNet data set has \textasciitilde14 million images (leading to \textasciitilde2\textsuperscript{24} states), which is considerably smaller than the available state space of AlexNet's dense layers with 4096 neurons.\cite{alex2012} This means that the state space of AlexNet is \textasciitilde6$\times$10\textsuperscript{1225} times larger than the number of available training examples in ImageNet. Due to random initialization, it is doubtful that a dense layer would memorize every training image if it contains exactly the number of neurons required to memorize all inputs. However, assuming the network is generating general rules for classification, the entire bandwidth of the channel should never need to be used. An analogous guideline could be applied to convolutional layers, where all combinations of pixel intensities of an 8-bit, grayscale image in a $3\times3$ grid could be perfectly represented by 72 neurons ($256^9 = 2^{72}$), so the first convolutional layer of a network should never contain more than 72 neurons when analyzing 8-bit grayscale images. Thus, simple upper limits on the number of neurons in different layers can be inferred from the implications of state space, where the upper limits may be considerably smaller than the number of neurons currently observed in some networks such as AlexNet.
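The upper limits described above follow directly from the state-space arithmetic; a sketch (the helper names are illustrative, not from the original work):

```python
import math

def dense_neuron_ceiling(n_training_examples):
    # ceil(log2 X) neurons provide enough distinct on/off states
    # to memorize X training inputs in a dense layer.
    return math.ceil(math.log2(n_training_examples))

def conv_neuron_ceiling(bits_per_pixel, kernel_h, kernel_w):
    # Neurons needed to represent every possible image patch exactly:
    # (2^bits)^(h*w) patch values = 2^(bits*h*w) states.
    return bits_per_pixel * kernel_h * kernel_w
```

MNIST's 60,000 training images need only 16 neurons to memorize, ImageNet's \textasciitilde14 million need 24, and an 8-bit $3\times3$ patch needs 72, matching the limits quoted above.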
Using the guidelines laid out above, it is reasonable to say that most networks that are created contain many more neurons than needed, which may explain why most neural networks are prone to attack vectors. For example, adversarial networks can be trained to add imperceptible amounts of noise to an image to cause the neural network to misclassify the image.\cite{goodfellow2014explaining} It is plausible that these attack vectors take advantage of noise in an overparameterized network, a problem that a smaller neural network may not face. Thus, use of state space and neural efficiency may help to make networks more resistant to such attacks. \section*{Acknowledgements and Disclosure of Funding} \bibliographystyle{unsrt}
\section{ INTRODUCTION } Dwarf novae are a subclass of the cataclysmic variables $-$ interacting binary stars with orbital periods of several hours in which a Roche lobe filling K or M dwarf secondary transfers matter at a rate ${\dot M}_T$ into an accretion disk surrounding a white dwarf (WD) primary. Dwarf novae are characterized observationally by having outbursts of several magnitudes which recur on time scales of days to decades and which can last from $\sim1$ day to several weeks (Warner 1995). The accretion disk limit cycle model has been developed and refined to account for the outbursts (Meyer \& Meyer-Hofmeister 1981; for recent reviews see Cannizzo 1993a and Osaki 1996). In this model, material is stored up at large radii in a relatively inviscid accretion disk as neutral gas, and then dumped onto the central star as ionized gas when a certain critical surface density is achieved somewhere in the disk. Warner (1987) presented a thorough systematic study of the dwarf novae. He noted several interesting correlations between various attributes associated with the outbursts. Of particular interest is his finding of a relation between the peak absolute magnitude of dwarf nova outbursts and their orbital periods: $M_V$(peak)$=5.64-0.259P_{\rm orbital}$(h). In deriving this relation, Warner made corrections for distance and inclination for $\sim20$ systems that were well enough studied to have reliable values for $M_V$(peak) and orbital period $P_{\rm orbital}$. The distance determinations were made primarily on the basis of infrared fluxes of the secondary stars, as explained in Warner (1987). In this work we quantify the theoretical relation between $M_V$(peak) and $P_{\rm orbital}$ by running computations of our time dependent code which describes the accretion disk limit cycle model for a range in values of $r_{\rm outer}$, the outer radius of the accretion disk. We compare our results to Warner's empirical relation. 
\vfil\eject \section {BACKGROUND } The physical basis for the limit cycle lies within the vertical structure of the accretion disk. In particular, the change in the functional form of the opacity at $\sim10^4$ K coincident with the transition from neutral to ionized gas leads to a hysteresis relation between the effective temperature $T_{\rm eff}$ and surface density $\Sigma$ in vertical structure computations carried out at a given annulus. This hysteresis leads to local maxima $\Sigma_{\rm max}$ and minima $\Sigma_{\rm min}$ in $\Sigma$ when the locus of solutions is plotted as $T_{\rm eff}$ vs. $\Sigma$. The resultant series of steady state solutions resembles an ``S''. Instability analyses of the equilibrium vertical structure solutions and their linearly perturbed states show that the upper and lower branches of the ``S'' curve, those portions where $d\log T_{\rm eff}/d\log\Sigma > 0$, are viscously and thermally stable and can accommodate physically attainable states during quiescence and outburst. The ``middle'' part of the ``S'' where $d\log T_{\rm eff}/d\log\Sigma < 0$ is unstable and physically unattainable. During quiescence material accumulates at large radii in the disk because the viscosity is so low that there is little inward transport of the gas. The disk is far from steady state. When $\Sigma > \Sigma_{\rm max}$ somewhere in the disk, a heating instability is triggered which begins an outburst; and when $\Sigma < \Sigma_{\rm min}$, a cooling transition front is started which controls the decay of the outburst from maximum light. As the disk goes from quiescence to outburst, the matter in the disk is redistributed from large to small radii, and the local ${\dot M}(r)$ profile becomes close to steady state, except for a strong outflow at large radii.
The local maximum in $\Sigma$ determined from the vertical structure computations is \begin{equation} \Sigma_{\rm max} = 11.4 \ {\rm g} \ {\rm cm}^{-2} \ r_{10}^{1.05} \ m_1^{-0.35} \ \alpha_{\rm cold}^{-0.86}, \end{equation} where $r_{10}$ is the radius in units of $10^{10}$ cm, $m_1$ is the WD primary mass in solar units $M_1/1M_\odot$, and $\alpha_{\rm cold}$ is the value of the Shakura \& Sunyaev (1973) viscosity parameter on the lower (or neutral) stable branch of the $S-$ curve (Cannizzo \& Wheeler 1984). (To avoid confusion, we use the subscript ``max'' to refer to properties of the accretion disk associated with $\Sigma_{\rm max}$, and ``peak'' to refer to either the peak flux or peak accretion rate during an outburst.) The values for $\Sigma_{\rm max}$ found by other workers are similar, reflecting slight differences in the equations of state, opacities, or treatments of the boundary conditions. The largest uncertainties may turn out to be due to the treatment of convection in the vertical structure. Within the confines of standard mixing length theory which is used in the vertical structure computations, convection may in fact be less important in dwarf novae accretion disks than was previously thought (Gu \& Vishniac 1998). Nevertheless, even if convection is set to zero, one still obtains the hysteresis between $T_{\rm eff}$ and $\Sigma$ (see Fig. 4 of Ludwig et al 1994). Cannizzo (1993b, hereafter C93b) commented briefly on Warner's finding of the empirical relation between the peak flux level in a dwarf nova outburst and the orbital period: $M_V$(peak)$=5.64-0.259P_{\rm orbital}$(h). C93b used the following argument to derive a scaling law for the rate of accretion onto the WD during the peak of a dwarf nova outburst. 
One can express the mass of the disk at the end of the quiescence interval during which time material accumulates in the disk as $f M_{\rm max}$, where $ M_{\rm max} = \int 2\pi r dr \Sigma_{\rm max}$ is the ``maximum mass'' that the disk could have reached in quiescence if the disk were filled up to the level $\Sigma_{\rm max}$ at every radius. Once the outburst has begun and progressed for a short time, the surface density profile adjusts from one which was non-steady state in quiescence to one which is in quasi-steady state in outburst. This basically involves a sloshing around of the gas from large to small radii, so that $\Sigma_{\rm quiescence}\propto r$ and $\Sigma_{\rm outburst}\propto r^{-1}$. If this time interval is short, one may equate the mass of the disk at the end of the quiescence interval to the mass of the disk at the beginning of the outburst. Therefore if one integrates a Shakura-Sunyaev scaling for $\Sigma_{\rm outburst}=$ $\Sigma(r,\alpha,{\dot M})$ over the disk as was done for the quiescent state by taking $ M_{\rm outburst} = \int 2\pi r dr \Sigma_{\rm outburst}$, sets this equal to $f M_{\rm max}$, and then inverts this expression to obtain ${\dot M}$, one derives an approximation for the peak ${\dot M}$ value in the disk during outburst \begin{equation} {\dot M}_{\rm peak} = 1.1\times 10^{-8} \ M_\odot \ {\rm yr}^{-1} \ \left(\alpha_{\rm hot}\over 0.1\right)^{1.14} \left(\alpha_{\rm cold}\over 0.02\right)^{-1.23} \left(r_{\rm outer}\over 4\times 10^{10} \ {\rm cm}\right)^{2.57} \left(f\over 0.4\right)^{1.43}, \end{equation} (C93b), where $\alpha_{\rm hot}$ is the alpha value on the upper (or ionized) stable branch, and $r_{\rm outer}$ is the outer radius of the accretion disk (which is set by the orbital period). The values of the parameters entering into eqn. (2) have been scaled to the values which C93b found to be relevant for SS Cygni, a dwarf nova with $P_{\rm orbital}=$ 6.6 h. 
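Eqns. (1) and (2) are simple power laws and can be evaluated directly; the sketch below uses cgs units with the parameter defaults scaled to the SS Cyg values of C93b, and is illustrative only:

```python
def sigma_max(r10, m1=1.0, alpha_cold=0.02):
    # Eqn (1): critical surface density in g/cm^2; r10 = r / 1e10 cm.
    return 11.4 * r10 ** 1.05 * m1 ** -0.35 * alpha_cold ** -0.86

def mdot_peak(alpha_hot=0.1, alpha_cold=0.02, r_outer=4e10, f=0.4):
    # Eqn (2): peak accretion rate during outburst, in Msun/yr,
    # with each factor scaled to its SS Cyg reference value.
    return (1.1e-8
            * (alpha_hot / 0.1) ** 1.14
            * (alpha_cold / 0.02) ** -1.23
            * (r_outer / 4e10) ** 2.57
            * (f / 0.4) ** 1.43)
```

Doubling $r_{\rm outer}$ raises ${\dot M}_{\rm peak}$ by $2^{2.57} \approx 5.9$, the steep fuel-supply scaling that underlies the $M_V$(peak)--$P_{\rm orbital}$ relation.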
From Kepler's law, $P_{\rm orbital}^2\propto a^3$, where $a$ is the orbital separation. So if $r_{\rm outer}\propto a$ we basically have ${\dot M}_{\rm peak}$ scaling as $P_{\rm orbital}^{1.7}$, assuming that the other parameters in eqn. (2) do not vary with orbital period. If the visual flux were to scale linearly with ${\dot M}$ in the disk, then this would imply $M_V$(peak) $\propto -0.68 \log P_{\rm orbital}$, a different functional dependence from that observed. One additional consideration which comes into play for dwarf novae at increasingly longer orbital periods is that $r_{\rm outer}$ varies nonlinearly with $a$ in the regime where the secondary star transitions from being less massive than the primary star to being of comparable mass. Warner (1995, see his Fig. 3.10) over-plotted eqn. (2) with the data taken from Warner (1987) to show that the analytical expression does a reasonable job of characterizing the observations. The analytical expression for ${\dot M}_{\rm peak}$ derived above is only useful in comparing with observations, however, if the variables appearing in eqn. (2) do not vary appreciably with radius. Of particular concern is the variable $f$ which gives $M_{\rm outburst}/M_{\rm max}$ at the time of burst onset. One might expect $f$ to vary significantly with orbital period $P_{\rm orbital}$ or secondary mass transfer rate ${\dot M}_T$, in which case the dependence on $r_{\rm outer}$ which appears in eqn. (2) would be misleading. Our aim in this work is to understand the function $f$ by running time-dependent models for a range in values of $r_{\rm outer}$ and ${\dot M}_T$. \vfil\eject \section {THE MODELS } We use the computer model described in previous works (C93b, Cannizzo 1994, Cannizzo et al. 1995, hereafter CCL). This is a one-dimensional, time-dependent numerical model which solves explicitly for the evolution of surface density and midplane temperature in the accretion disk.
We carry this out by solving the mass and energy conservation equations written in cylindrical coordinates and averaged over disk thickness. The scalings which characterize the steady state relationship between $T_{\rm eff}$ and $\Sigma$ were taken from the vertical structure calculations (Cannizzo \& Wheeler 1984, C93b). For the WD mass we adopt $M_1=1M_\odot$, for the inner disk radius we take $r_{\rm inner}=5\times10^8$ cm, and for the number of grid points $N=300$. Our grid spacing is such that $\Delta r\propto\sqrt{r}$. We compute the visual flux as described in C93b by assuming a face-on disk and taking the standard Johnson $V$-band filter. C93b presents many tests of the numerical model to assess systematic effects. For the $\alpha$ parameter which characterizes the strength of viscous dissipation and angular momentum transport within the accretion disk we utilize the form given in CCL, $\alpha=\alpha_0(h/r)^n$, where $n\simeq1.5$. CCL quantified the use of this form based on the observed ubiquitous exponential decays seen in soft X-ray transients and dwarf novae, but they were not the first to use it; it was introduced by Meyer \& Meyer-Hofmeister (1984). The normalization constant $\alpha_0\simeq 50$ is based on the magnitude of the $e-$folding time constant, and the exponent $n$ determines the functional form of the decay: $n<1.5$ gives a faster-than-exponential decay, and $n>1.5$ gives a slower-than-exponential decay. The reason for this particular functional form being the preferred one, based on a detailed examination of the departure from steady state conditions within the hot part of the accretion disk, has been recently provided by Vishniac \& Wheeler (1996). There is one shortcoming in adopting this form for dwarf novae which must be addressed. This shortcoming can be rectified if one imposes an upper limit on $\alpha$, which seems physically reasonable.
Figure 7 of CCL shows the failure of the form $\alpha\propto(h/r)^{1.5}$ to effect a change in the $e-$folding decay time in an outburst with outer disk radius, or equivalently, orbital period. For dwarf novae, however, it is well known that there exists a linear relation between the $e-$folding time constant associated with the decay of the dwarf nova outbursts and the orbital period (Bailey 1975). Therefore, the CCL $\alpha$ value cannot be applicable to dwarf novae without modification. Recent computations applying the form $\alpha=\alpha_0(h/r)^{1.5}$ to integrations of the vertical structure show that, if $\alpha$ is computed self-consistently within this formalism for dwarf novae, one derives large values of $\alpha$ in the ``outburst state'' $-$ meaning the point on the upper branch of the $\log T_{\rm eff} - \log\Sigma$ curve which lies at the vertical extrapolation of the $\Sigma_{\rm max}$ value (Gu \& Vishniac 1998). In fact, the values are considerably larger than can be tolerated within the framework derived from the theoretical constraint imposed by the Bailey relation (Smak 1984, Cannizzo 1994) $-$ this constraint being that $\alpha_{\rm hot}$ cannot exceed $\sim0.1-0.2$. This limit will be reached, unfortunately, when $h/r \simeq 0.016 - 0.025$ if we strictly take $\alpha=\alpha_0(h/r)^{1.5}$. Since this value of $h/r$ is exceeded on the upper stable branch of steady state solutions for dwarf novae, we conclude that the physical mechanism responsible for generating the viscous dissipation and transporting angular momentum in accretion disks must saturate to some $\alpha_{\rm limit}\sim 0.1-0.2$ so as to give the Bailey relation (Smak 1984). Disks tend to be thinner in systems with larger central masses, generally speaking, so systems such as X-ray novae (and AGN) do not run into this limit (Gu \& Vishniac 1998). This explains why X-ray novae do not show a ``Bailey relation'' (see Fig. 3 of Tanaka \& Shibazaki 1996). 
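The saturated viscosity prescription described above can be sketched as follows; the saturation thresholds $h/r \simeq 0.016-0.025$ quoted in the text are recovered directly:

```python
def alpha_viscosity(h_over_r, alpha_0=50.0, n=1.5, alpha_limit=0.2):
    # CCL form alpha = alpha_0 (h/r)^n, saturated at alpha_limit
    # so that alpha_hot never exceeds the ~0.1-0.2 bound required
    # by the Bailey relation for dwarf novae.
    return min(alpha_0 * h_over_r ** n, alpha_limit)
```

Inverting the uncapped law, $\alpha = 0.1$ is reached at $h/r = (0.1/50)^{2/3} \approx 0.016$ and $\alpha = 0.2$ at $h/r \approx 0.025$.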
In light of these considerations, we utilize the CCL $\alpha$ form in our computations, but in the limit of large $\alpha$ we do not allow it to exceed 0.2. The fact that we see a system as a dwarf nova means that the mass transfer rate into the outer accretion disk from the secondary star cannot be too great or else the disk would be in permanent outburst (Smak 1983). This reasoning must apply to the systems used by Warner (1987) in his compilation. Shafter (1992) considered the relative frequency of dwarf novae as a fraction of all cataclysmic variables in different period bins longward of the $2-3$ h period gap in an attempt to understand the variation of the rate of mass transfer ${\dot M}_T$ from the secondary star (feeding into the outer accretion disk) with orbital period which we observe in the dwarf novae. Shafter concluded that, by restricting our attention solely to dwarf novae, we are probably not sampling the long-term ${\dot M}_T (P_{\rm orbital})$ value which characterizes the secular evolution of cataclysmic variables as a whole (Kolb 1993). Therefore we need not feel hesitant in adopting an ${\dot M}_T (P_{\rm orbital})$ law which serves only to ensure ${\dot M}_T < {\dot M}_{\rm crit}$, where ${\dot M}_{\rm crit}$ is the value of the secondary mass transfer ${\dot M}_T$ which must be exceeded for the entire disk to be stable in the high state of the ``S'' curve. Stated another way, our concern in this work is to adopt a law for ${\dot M}_T (P_{\rm orbital})$ which will produce dwarf nova outbursts. This law is not related to that which characterizes the entire class of cataclysmic variables. From a practical standpoint, we find that our computed values of $M_V$(peak) are relatively insensitive to the specific ${\dot M}_T$ at a given orbital period.
We run computations for five values of $r_{\rm outer}/10^{10}$ cm $-$ 2, 3, 4, 5, and 6 $-$ while at the same time scaling the value of the rate of mass transfer ${\dot M}_T$ feeding into the outer disk so that ${\dot M}_T <$ ${\dot M}_{\rm crit}=$ ${\dot M}(\Sigma_{\rm min}) \simeq$ $10^{16}$ g s$^{-1}$ $ (r_{\rm outer}/10^{10} \ {\rm cm})^{2.6} m_1^{-0.87}$ (Cannizzo \& Wheeler 1984). For our canonical SS Cyg model, we take $r_{\rm outer}=4\times10^{10}$ cm and ${\dot M}_T=6.3\times 10^{16}$ g s$^{-1}$ (C93b, Cannizzo 1996). At this value, the system is about a factor of 4 below ${\dot M}_{\rm crit}$. Therefore, to determine ${\dot M}_T$ for other $r_{\rm outer}$ values, we scale the SS Cyg ${\dot M}_T$ by $r_{\rm outer}^{2.6}$. We also ran a second series of computations with the normalization on ${\dot M}_T$ a factor of two smaller. Figure 1 shows the computations from the first series of models. In Figure 1a we give the light curves, and in Figure 1b we give the accretion disk masses, shown in units of $M_{\rm max}$ for the relevant $r_{\rm outer}$ value. The assumption implicit in eqn. (2) that $f$ is constant with orbital period appears to be quite good: there is no noticeable drifting of the disk mass relative to $M_{\rm max}$ as $r_{\rm outer}$ changes. Figure 2 shows the computations from the second series. The results are similar to those of the first series. The equilibrium disk masses shift to slightly lower values, as do the $M_V$(peak) values. Figure 3 collects the $M_V$(peak) versus orbital period results from the previous figures. The results from the first series are indicated by the hatched area with hatched lines inclined at $\pm45^{\circ}$ with respect to the $x-$axis, whereas those from the second series have hatched lines inclined at $90^{\circ}$ and $180^{\circ}$.
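A short numerical sketch of the critical-rate scaling and the ${\dot M}_T \propto r_{\rm outer}^{2.6}$ normalization described above; the primary mass $m_1 = 1$ here is a placeholder (the quoted factor of $\sim$4 below ${\dot M}_{\rm crit}$ depends on the adopted $m_1$):

```python
def mdot_crit(r_outer, m1=1.0):
    """Cannizzo & Wheeler (1984): Mdot(Sigma_min) in g/s, with
    r_outer in cm and m1 the primary mass in solar units
    (m1 = 1 is an assumed placeholder)."""
    return 1e16 * (r_outer / 1e10) ** 2.6 * m1 ** -0.87

def mdot_transfer(r_outer):
    """Scale the canonical SS Cyg value (6.3e16 g/s at
    r_outer = 4e10 cm) by r_outer^2.6, as in the first series."""
    return 6.3e16 * (r_outer / 4e10) ** 2.6

# Every model in the series should sit below the critical rate:
for r10 in (2.0, 3.0, 4.0, 5.0, 6.0):
    r = r10 * 1e10
    assert mdot_transfer(r) < mdot_crit(r)
```

Because both rates scale as $r_{\rm outer}^{2.6}$, the margin below ${\dot M}_{\rm crit}$ is the same at every outer radius in the series.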
The conversion between $r_{\rm outer}$ and $P_{\rm orbital}$ was carried out using the fitting formula given in Eggleton (1983), assuming a secondary mass $M_2 = 0.1M_\odot P_{\rm orbital}$(h). The hatched area accompanying each series shows the range allowed by taking the disk to fill between 0.7 and 0.8 of the Roche lobe of the primary. \section{DISCUSSION AND CONCLUSION} We have run models using the accretion disk limit cycle model for dwarf novae in an attempt to understand Warner's relation for dwarf nova outbursts $M_V$(peak)$= 5.64 - 0.259 P_{\rm orbital}$(h). As noted in C93b, the observed upper limit is a natural consequence of the ``maximum mass'' of the accretion disk that is allowed by the critical surface density $\Sigma_{\rm max}$. This value increases steeply with orbital period; therefore, the amount of fuel available in a dwarf nova outburst also scales with orbital period. An important finding is that, for a constant value of ${\dot M}_T/{\dot M}_{\rm crit}$, the value of $f$, the ratio of the accretion disk mass at outburst onset to the ``maximum mass,'' is relatively constant with orbital period. This gives some confidence in the analytical estimate given by C93b. The fact that the theoretical variation of $M_V$(peak) with orbital period is flatter than might have been expected from C93b's scaling is due in part to the variation of the orbital period with outer disk radius in the limit where the mass of the secondary star starts to become comparable to the primary mass. This effect contributes to the arc-like shape of the shaded regions shown in Fig. 3. The mean level of $M_V$(peak) in our models exceeds Warner's empirical line by $\sim0.3-0.5$ mag over most of the range shown. It is probable that our method for computing the $V$ band flux is too crude to expect consistency with observations at this level of detail $-$ for instance we do not include limb darkening in the models, an effect mentioned by Warner (1987).
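As a concrete illustration of the $r_{\rm outer}\rightarrow P_{\rm orbital}$ conversion used above, the following sketch combines the Eggleton (1983) fitting formula, $R_L/a = 0.49q^{2/3}/[0.6q^{2/3}+\ln(1+q^{1/3})]$, with Kepler's third law and the $M_2 = 0.1M_\odot P_{\rm orbital}$(h) secondary mass law from the text, together with Warner's empirical relation. The primary mass $m_1 = 1.2$ and the disk filling fraction 0.75 are assumed placeholders, not values taken from the text:

```python
import math

G = 6.674e-8        # gravitational constant, cgs
MSUN = 1.989e33     # solar mass, g

def eggleton_rl(q):
    """Eggleton (1983) Roche-lobe radius in units of the separation."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def disk_radius(p_hr, m1=1.2, fill=0.75):
    """Outer disk radius (cm) at orbital period p_hr (hours), taking
    M2 = 0.1 Msun * P(h) as in the text and the disk filling a
    fraction `fill` of the primary's Roche lobe (both m1 and fill
    are assumed placeholders)."""
    m2 = 0.1 * p_hr                      # secondary mass, Msun
    p_s = p_hr * 3600.0
    a = (G * (m1 + m2) * MSUN * p_s ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return fill * a * eggleton_rl(m1 / m2)

def period_for_radius(r_outer, m1=1.2, fill=0.75):
    """Invert disk_radius by bisection on P in [1, 24] h."""
    lo, hi = 1.0, 24.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if disk_radius(mid, m1, fill) < r_outer:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mv_peak_warner(p_hr):
    """Warner's (1987) empirical relation quoted in the text."""
    return 5.64 - 0.259 * p_hr
```

Under these assumptions, the canonical $r_{\rm outer} = 4\times10^{10}$ cm model lands at a period of a few hours, in the range occupied by the dwarf novae modeled here.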
Also, we have utilized Planckian flux distributions for the disk. Wade (1988) has shown that, to varying degrees, both Planckian distributions and stellar or Kurucz type distributions fail to represent adequately the observations. Although Wade found that the Planckian distributions can account for both UV flux and UV color in a sample of nova-like variables, the failure of the model in other respects leads to the conclusion that ``one cannot rely on model fitting to give the correct luminosities or mass-transfer rates (within an order of magnitude).'' Unless there are systematic effects which depend strongly on orbital period, however, our computed $M_V$(peak)$-P_{\rm orbital}$ slope should have some physical relevance. When one goes beyond this to considering the normalization level of $M_V$(peak), it would seem that a better flux model must be utilized. It is also interesting to note the relative insensitivity of $M_V$(peak) to ${\dot M}_T$ at a given orbital period. This would imply that the observed scatter at a given orbital period must be due largely to variations in distance and inclination. We varied the normalization constant on ${\dot M}_T$ by a factor of two between our two series in this work, and found the resulting $M_V$(peak) values to differ by $\la0.3$ mag. The soft X-ray transients $-$ interacting binary stars containing a neutron star or black hole as the accreting object $-$ also seem to obey a relation between $M_V$(peak) and $P_{\rm orbital}$ for outbursts. Van Paradijs \& McClintock (1994) noted a relation between $M_V$ and $(L_X/L_{\rm Edd})^{1/2}$ $P_{\rm orbital}^{2/3}$, where $L_X/L_{\rm Edd}$ is the X-ray luminosity in units of the Eddington luminosity. Irradiation is a complicating factor in these systems, and by comparing the $M_V$ values between Figure 1 of Warner (1987) and Figure 2 of van Paradijs \& McClintock (1994), one can see that the X-ray binaries are $\sim1-7$ mag brighter than the dwarf novae at maximum light. 
Clearly most of the optical flux is reprocessed X-ray radiation coming from large radii in the disk. For the black hole X-ray binaries, however, a large fraction of the difference comes from having larger disks due to the larger orbital separations, at a given orbital period (Cannizzo 1998). For the neutron star systems (in which the primary mass is $\sim1M_\odot$ as in dwarf novae), the difference is entirely due to the irradiation. We thank the following people for allowing us generous use of CPU time on their DEC AXP workstations: Thomas Cline and Johnson Hays in the Laboratory for High Energy Astrophysics at Goddard; Clara Hughes, Ron Polidan, and George Sonneborn in the Laboratory for Astronomy and Solar Physics at Goddard; and Laurence Taff, Alex Storrs, and Ben Zellner at the Space Telescope Science Institute. JKC was supported through the visiting scientist program under the Universities Space Research Association (USRA contract NAS5-32484) in the Laboratory for High Energy Astrophysics at Goddard Space Flight Center.
\section{Conclusion} Future generations of AI systems may offer meaningful benefits to older adults, such as helping them maintain physical independence and facilitating social connection, in addition to the variety of AI-powered applications aimed at the general public. The success of such interactions may depend on whether the data sets that are created to train and test AI systems represent older adults. In this work, we explored 92 face data sets as a case study to investigate whether the age categories represented in these data sets reflect the fast-changing age distribution of the population at large. We highlight that older adults are under-represented in these data sets and that ensuring representation of various age demographics poses many challenges. Informed by our findings, we suggest more standardized practices for documenting and annotating subjects' age in these data sets and call for our field's continual efforts to curate representative and inclusive AI data sets, including with attention to age. \section{Discussion} Ensuring that older adults are represented in the data sets used to train and/or test AI can help ensure that emerging AI tools will work well for this important and growing population. But creating more representative data sets for older adults should start from understanding how well they are represented right now. In this section, we synthesize what we learned from studying the 92 face data sets and discuss the extent of older adults' representation. We provide concrete suggestions towards better representation of this population.
\subsection{Older Adults Are Under-Represented} That older adults are under-represented may not be surprising; they are often not the target user population for new technology, their data is less available on the web for scraping, and their data contributions may be less readily accessible to university researchers compared to convenience samples such as college students. But the extent to which they are under-represented is cause for concern. Less than half of the data sets whose documentation provided the maximum age of their subjects had at least one person older than 65 years old, and only one data set out of the 92 that we studied in this work explicitly had at least one person older than 85, the starting age for the fast-growing oldest-old adult category. On the other hand, younger people, particularly those of undergraduate age, are much more heavily represented in the data sets, as the researchers who curate these data sets often recruit their subjects from the academic institutions that they are a part of. This skew towards the younger generation is highly reminiscent of psychology's long reliance on undergraduate samples, a practice that has raised growing concerns about the generalizability of research findings beyond the university population \cite{74_Hanel, 75_Henrich}. \subsection{Challenges of Older Adults' Representation} We note that there are unique challenges in creating representative data sets in terms of the subjects' age when compared to other demographic categories. First, the age distribution in the general population is fast-changing, with the average lifespan of individuals increasing across different parts of the world \cite{77_UN, AARP2010}. It is possible that in the future, even the definition of the oldest-old may need to be revisited to include those who are much older than 85 years old (e.g. aged 100+).
So although the data sets we studied continue to be used by the research community, their representativeness in terms of age will worsen as time advances. Further complicating matters, recent literature on aging suggests that as life expectancy increases, the way we age changes as well \cite{76_Jones}, making it potentially non-trivial to account for the current generation of older adults with the previous generation's data even if varied age categories are represented. Intersectionality is also an important concern when creating age-representative data sets. For instance, are all the older adults in the data sets of certain genders or races? The older adult population is unevenly distributed across different countries, races, and gender identities \cite{77_UN, AARP2010}. Even if we make efforts to represent older adults in our data sets, if older adults are under-represented in certain intersectional demographics, we might repeat and propagate intersectional biases, an issue pointed out by the Gender Shades study, which showed that AI systems' performance disparity can be particularly severe for certain intersectional groups \cite{Buolamwini2018}. Given these challenges in ensuring representative AI data sets in terms of age, we see the core message of our work not merely as evidence that older adults are overlooked in the creation of such data sets today. Rather, our work also implies that successful representation of older adults is a complex moving target, one that requires our community to make continuous efforts to understand the changing demographic makeup of our society and adapt. Thus we see our core contribution not only as determining whether we are succeeding in representing older adults, but as starting a conversation about age representation that will inform future efforts to create more representative AI data sets.
\subsection{Need for a Standardized Approach} More concretely, what should we as a community of researchers and engineers creating future AI data sets strive towards? One important theme that arose from our results is the lack of a standardized approach for documenting age-related information. To start, we found that only about a quarter of the data sets we studied included some form of age-related information in their documentation (26\%), while a fifth did in their metadata (19\%). Although collecting such information can be challenging in certain scenarios, especially when the data is not directly sourced from the subjects and ground-truth labels are unattainable, the importance of making a conscious effort to create an age-representative data set needs to be highlighted. Beyond this, we found that the ways age was categorized were inconsistent across different data sets, with some using unevenly spaced age categories and some using more abstract categories like ``young'' and ``old.'' This makes interpreting and comparing age representation across different data sets challenging. To remedy this, researchers could consider collecting the raw ages of their subjects if the data collection is taking place in person. In cases where raw ages are not available or collecting them poses privacy concerns for the subjects, one possible approach is to adhere to the age categories as presented in a large-scale census (e.g. a government-curated census), which would provide standardization as well as enable comparisons regarding representation. If different categories of age are used, we suggest that they be motivated and defined. Finally, it should be noted that some methods for inferring a subject's age (e.g. estimating age by observing the subject's appearance) can reflect societal bias and other forms of inaccuracy.
For instance, a recent work showed that the age attribute is often labeled in a gender-dependent way, with crowd workers more likely to rate faces of men as ``Young'' and faces of women as ``Old'' \cite{Ramaswamy2020}. If such methods need to be employed to annotate the ages of subjects, we suggest that the annotation procedure be clearly documented. \section{Limitations and Future Directions} Our work presents a focused but poignant illustration of the under-representation of older adults in 92 face data sets used in the academic literature, and it is relevant to the greater discourse around AI, its ethics, and fairness. However, we note important limitations to our work that suggest opportunities for future research. First, we focused on the representation of older adults in AI data sets, but we did not directly measure the performance of models trained with data sets that under-represent older adults. We see data representativeness as a worthwhile goal to pursue, as what is used to train and test a model often directly correlates with the performance of that model when used by different demographics. Future studies can build on our findings by exploring how the under-representation of older adults translates to AI systems' performance for this population in practice. A recent report from NIST \cite{Grother2019} suggests that such performance disparities exist, noting an increase in false positives in face recognition among older adults. Second, our exploration of face data sets should be considered a case in point to illustrate the possible under-representation of older adults not only in face data sets but also in other forms of AI data sets. For instance, does the audio data for training voice recognition systems account for older adults with slower speech, and does the motion data for identifying moving pedestrians account for older adults with limited mobility?
We hope our study motivates future efforts to investigate and improve age representation in other types of AI data. Additionally, it is worth noting that our findings, which highlight the uniquely transitory challenge of representing age in AI data sets, resonate with the fluid manner in which people increasingly view other demographic categories such as gender and race. Subsequent work should continue to explore how to bridge this diverse and ever-developing demographic landscape with better representation in AI technology. Finally, there are important normative questions around what representative training data sets and AI models would mean for older adults: what are the right use cases for the AI models that can be trained with the data sets studied here, and how can we ensure that older adults actually reap the benefits of these systems? For instance, facial recognition technology has drawn concerns over the years that it might be vulnerable to abuse, especially when used in contexts such as automated surveillance \cite{Ruha2019}. In such cases, could better representation in AI data sets sometimes generate harm for marginalized communities, and if so, how can we prevent or mitigate such harm? For the scope of this paper, we did not directly engage with these fundamental normative questions, but we believe that they should be continually discussed as we refine our AI models. \section{Introduction} Our society is getting older. Today, more than 15\% of the U.S. population is 65 years old or older \cite{ACL2017}, and by 2050 this proportion will be matched globally \cite{UN2019}. Additionally, studies suggest the ``oldest-old'' population, defined as those who are at least 85 years old \cite{Campion1994, Cohen2013, Loi2021}, will see the greatest rate of increase \cite{Pollack2005}.
This will mark a significant change in the makeup of our society, and it will become increasingly important to ensure that emerging AI-infused systems are inclusive of this large and growing population, who may benefit from the power that AI brings to both general-purpose and health-related domains. A key component of creating inclusive AI systems that work for a diverse group of users is ensuring representation of diverse populations in the data used to train and test ML models. A growing number of evaluations have explored AI systems' performance disparities for people with marginalized demographic attributes, disparities that often originate from biased data sets used to train them \cite{3_Whittaker2019}. These works have investigated how commonly-used AI systems such as facial analysis or speech recognition fail to achieve the same level of performance on the basis of gender \cite{Buolamwini2018, Lohr2018, Rodger2004}, race \cite{Buolamwini2018, Lohr2018}, socioeconomic status \cite{FB2019}, and disability status \cite{33_Guo2020, 3_Whittaker2019}, but found that such disparities can often be mitigated by updating the model using a more balanced data set across different demographics \cite{18_Puri2018}. In this work, we extend this line of effort to discuss whether the AI data sets used today represent the older adult population, a group that has been subjected to negative societal attitudes and stereotypes in the form of ageism \cite{Butler1969}. In particular, we use facial analysis systems as a case study for observing how the older adult population is represented in the data sets that are used to train such AI systems. We focus on facial analysis systems as they are particularly relevant for understanding how our identity is operationalized in today's AI systems \cite{Scheuerman2020}, but we suggest that our work is a first step towards investigating age-related AI performance that should be expanded into other areas of AI.
We started our work by drawing from a list of 92 face image data sets based on 277 academic publications that the authors of a prior work compiled to study how people's genders were classified in the training data of facial analysis systems \cite{Scheuerman2020}. Using the publicly available documentation of these data sets (n=92), we analyze how and why the data was collected, particularly focusing on the presence of age-relevant metadata for the subjects and the data sets' coverage of the older adults' age bracket. Additionally, we selected 31 of the data sets in the list that are still publicly downloadable with clear terms of service that complied with our institution's IRB to further inspect how age was actually codified in these data sets. Specifically, we ask the following research questions: \begin{enumerate} \item Is the age-related information of the subjects included in the metadata of the data set or its documentation? If so, how was the age binned (i.e. did the data set include the specific age of the subject, or were numerical brackets or broad age descriptors used, and if so, how were such brackets defined)? \item What was the process for annotating the subjects' age (i.e. was it directly sourced from the subjects when their photos were taken or was it derived afterward, such as by third-party labelers)? \item What was the goal of creating the data set, and how did this interact with whether age was included in the metadata (i.e. was age included to train algorithms for age-related classification tasks like age estimation)? \item Is the older adult population (aged 65+) represented proportionally to the population at large? Does this representation, or lack thereof, extend to the oldest-old adult population (aged 85+)? \end{enumerate} We find that only 26\% of the 92 face image data sets and their documentation contain any age-related metadata about their subjects.
Furthermore, for these data sets, the norms for reporting subjects' ages are highly inconsistent, with some data sets simply documenting age in a binary category of ``young'' and ``old,'' and others using unevenly spaced brackets where the older age brackets encompass a much wider range than the younger ones. In addition, the age metadata in these data sets rarely acknowledges older adults, with the highest age bracket often ending at ``50 and older,'' or even lower. The few exceptions were specialized data sets collected to train algorithms for age estimation, but even in these data sets the age distribution includes only a small number of older adults and very few (or zero) oldest-old adults. Finally, we find that even in those data sets where age was included, this information was often neither verified nor sourced directly from the subjects, but instead was annotated by crowdworkers who guessed the subjects' age based on their appearance or inferred it using publicly available information (such as the date of birth for celebrities). The contribution we make in this work is focused but poignant; our findings suggest that the representation of older adults aged 65+ in popular data sets used to train AI systems for facial analysis is severely lacking, while that of the oldest-old adults aged 85+ is almost nonexistent. This lack of representation is, though not necessarily surprising, more severe than one might expect -- only five out of 92 data sets explicitly included an age bracket that covers older adults, and only one included an age bracket that covers oldest-old adults. Taking the face data sets as one example of the lack of representation for older adults, there is cause for concern as to whether other classes of AI training data are representative of diverse ages and whether newer AI-infused technologies will work well for this fast-growing population.
Given this, we highlight the need for better representation of older adults in AI data sets, and the need for standardized procedures for documenting age metadata. \section{Method} Our aim is to conduct a broad investigation of how face image data sets represent age-relevant metadata for their subjects by analyzing the data sets and the documentation offered in their relevant academic publications. To this end, we take advantage of a recently compiled list of face image data sets referenced in research publications and conduct our analysis on those data sets and publications. In this section, we briefly summarize how the list was compiled and explain our methods of analysis. \subsection{Collecting the Data Sets} In a recent 2020 study, Scheuerman et al. investigated how people's gender and race were codified into face image data sets by gathering an extensive list of such data sets that were published by academic researchers \cite{Scheuerman2020}. This list was made public as a part of the 2020 study.\footnote{The list is available for download through the following DOI: 10.5281/zenodo.3735400} We used the data sets included in this list as the basis for our study. Scheuerman et al. generated this list by taking the following approach \cite{Scheuerman2020}: They first gathered a corpus of research papers that were published by two of the largest associations for computing research, the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). This was done by scraping or downloading manuscripts from the respective communities' digital archives. For the papers published by the ACM, Scheuerman et al. scraped 18,661 manuscripts from the ACM Digital Library's (ACM DL) search results for ``facial recognition,'' and for those published by the IEEE, Scheuerman et al.
used IEEE's Xplore library to export 4,000 manuscripts using the search terms ``facial recognition'' and ``facial classification.'' This corpus was then narrowed down by filtering the author-provided keywords for ``facial recognition,'' ``face recognition,'' and ``face classification,'' and finally by restricting the publication period to the range from 2014 to 2019. This resulted in 277 manuscripts, from which a final list of 92 image data sets with publicly available documentation was compiled. Given this list \cite{Scheuerman2020}, we further identified the data sets that are still publicly downloadable (as of October 2020) and have terms of service. For the 31 data sets that were still available in October 2020 and also complied with our institution's IRB data set onboarding process, we proceeded to download the data sets in order to inspect how age information is represented in their metadata. For the remaining 61 data sets, we only analyze the documentation included in the original academic publications that introduced the data sets or the main download pages that illustrate the contents of the data sets and how they were collected. We report findings from our analysis of the documentation of all 92 image data sets, and use our findings from inspecting the metadata of the 31 data sets we were able to download to illustrate the trends we find. \input{content/table/dataset_info} \subsection{Analyzing the Data Sets} By studying the data sets and their documentation, we aimed to find out whether 1) age-related information about the subjects is included in the data sets or in their documentation, 2) older adults are represented proportionally to the population at large, 3) the goal of the data set interacted with whether and how age was codified, and 4) how the age of the subjects was annotated.
In order to quantify our observations, we iteratively developed a codebook to codify our data, as described below: \subsubsection{Age-related information} To summarize whether age-related information is included in the data sets, we coded a data set with ``present'' if it includes any information about a person's age. This included those data sets that documented either the age distribution or the raw ages of their subjects. \subsubsection{Older adult representation by bracket} Prior literature that connects age and technology notes that ``older adult'' is a broad term used to categorize those of age 65 and older and that it can be subdivided further: the youngest old (65--74), the middle old (75--84), and the oldest old (85+) \cite{Campion1994, Cohen2013, Loi2021}. We used these three age categories in our coding. \subsubsection{The goal of a data set} Scheuerman et al.'s findings noted that the face data sets in the list had three broad categories of use cases \cite{Scheuerman2020}: 1)~for individual face recognition or verification, 2)~for image labeling or classification, and 3)~for adding diversity to training and evaluation data. In this work, we are interested in understanding an additional goal of these data sets, i.e., whether the creators anticipated any uses or issues with respect to age that motivated the inclusion of age-related information in the metadata. We coded data sets with any explanations their documentation provided for including the subjects' age and recorded the emergent themes that arose. \subsubsection{Age annotation scheme} Finally, among the data sets that contained some form of age-related information, we coded how the age of the subjects was annotated. Our main focus was to distinguish the following three types of annotation schemes: 1)~recording the actual age of the subjects provided by the subjects themselves, 2)~inferring the subjects' age using other metadata, and 3)~estimating the subjects' age by observing their appearance.
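The bracket coding above can be sketched directly. A minimal illustration of coding raw ages into the three subdivisions from the literature (the catch-all label for ages below 65 is ours, for illustration only):

```python
def older_adult_category(age):
    """Code a raw age into the subdivisions used in our codebook:
    youngest old (65-74), middle old (75-84), oldest old (85+).
    Ages below 65 receive a single catch-all label for illustration."""
    if age >= 85:
        return "oldest old"
    if age >= 75:
        return "middle old"
    if age >= 65:
        return "youngest old"
    return "under 65"

def category_counts(ages):
    """Tally a data set's subjects by category, so that older adult
    representation can be compared across data sets."""
    counts = {}
    for a in ages:
        key = older_adult_category(a)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Applied to a data set's age column, a tally like this immediately reveals whether, for example, the 85+ category is empty.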
\section{Related Work} We summarize prior work that investigated how older adults may benefit from AI technology. We then cover the growing concern around biases and performance disparities that AI systems exhibit in relation to users' demographic traits and how the lack of representation of certain groups of people in training data sets can aggravate such outcomes. \subsection{AI Bias and Under-Representation} An important ongoing challenge in AI and ethics has been that of bias and performance disparities of AI systems for people with historically marginalized demographic attributes. A growing number of studies have shown that one's race \cite{Buolamwini2018, Lohr2018}, gender \cite{8_Nicol2002, 10_Tatman2017, Scheuerman2020}, socioeconomic status \cite{FB2019}, and disability status \cite{3_Whittaker2019, Park2021, 1_Guo2020} can lower the performance of AI systems like facial recognition or natural language processing systems. The source of this challenge has often been attributed to the under-representation of marginalized populations in AI training data sets: the models learn to perceive the world based on what they are given, and if a certain demographic population is missing, they will inevitably fail to recognize that population \cite{3_Whittaker2019}. For instance, the Gender Shades study showed that commercial AI systems that are used for binary gender classification based on one's appearance often fail for women of darker skin color \cite{Buolamwini2018}. Following this, the developers of such systems updated their models with a more balanced training data set to remedy these shortcomings, reducing the error rate by nearly ten-fold when tested against a data set similar to the one used in the Gender Shades study \cite{18_Puri2018}. In response, there have been calls for action and efforts to create more balanced data sets in terms of race, gender, socioeconomic status, and disability status.
However, the dimension of age, which is the focus of our work, has received very little attention in the context of AI data representation despite its importance. \subsection{Aging Population and Technology} The global population is aging as life expectancy rises. The United Nations reports that this trend, which first emerged among developed countries, is now observed in virtually all developing countries \cite{77_UN}. Globally, a large increase is expected among older adults (defined as those at least 65 years of age), a group that is expected to nearly double by 2050 \cite{UN2019}, while in the U.S., a particularly significant increase is expected among the oldest-old adults (defined as those at least 85 years of age), a group that is expected to represent 4.3\% of the nation's population by 2050 \cite{AARP2010}. While we expect that many in the older age bracket will remain healthy and productive, many will also experience physical and cognitive impairment at a higher rate than those who are younger \cite{Hebert2003}. Physical impairment \cite{Berkman1993}, age-related mobility disability due to decreased strength \cite{Guralnik2000}, and sensory deficits \cite{Berkman1993, Jeste2013} commonly accompany aging, and the rate of clinical depression and loneliness rises as older adults grieve losses and their social ties decline \cite{Adams2004}. AI-infused assistive technologies and other AI tools present opportunities to significantly support the needs of older adults and help them age on their own terms \cite{Jeste2013, Zuckerman2020, Faber2001, Montross2006}.
As older adults increasingly prefer to age in place, living in their homes rather than in long-term care \cite{AARP2018}, AI can offer them greater mobility through smart navigation or robotics \cite{NSF2018}, control over their living spaces for more independence through smart homes \cite{Trajkova2020, Koshy2021}, and access to on-demand medical expertise through powerful medical recommender systems \cite{Angulu2018}. AI can also be used to augment existing technologies and online communities that older adults use. For example, facial recognition systems for unlocking phones can remove or reduce barriers for older people with limited motor control and/or cognitive decline, so that they do not need to type on a small phone keyboard or remember passwords \cite{xfinity2019}. AI can also support natural language interfaces, removing barriers to keyboard or keypad input \cite{Trajkova2020, Rieland2017}, thereby making technologies easier and more natural to use even for those older adults with low computer literacy. Such advances can not only allow older adults to use digital communication to engage with their social networks \cite{Angelini2016, Ballagas2010, Davis2007}, but also combat the stereotypical premise that older adults lack the desire to use technologies \cite{Loi2021, Coleman2010}. However, without proper representation of older adults in data sets used to train and test AI models, it is difficult to ensure that the new generation of AI technology will work for this population. Here, we highlight that age is a demographic category that is difficult to balance but potentially highly impactful and worth considering for our research community. In the remainder of the paper, we take the first step towards considering age representation in AI data sets by studying how age is represented in 92 face data sets that are used to train facial recognition and analysis systems.
Our focus on face data sets is motivated in large part by the fact that facial analysis technologies are “particularly pertinent to understanding how identity is operationalized in new technical systems” \cite{Scheuerman2020}. But our study on face data sets is also a case study that should inspire similar explorations on other forms of AI data sets. \section{Results} In this section, we present our findings, organized by our research questions presented at the beginning of this paper. \subsection{Age-Related Information} We find that a majority of the face image data sets in our study did not include any age-related information. Of the 92 face data sets whose documentation we studied, only 24 mentioned some form of collecting age-related information of their subjects (26\%). Similarly, of the 31 data set downloads that we studied, only 6 of the data sets contained age-related metadata (19\%). It is worth noting that all data sets with such metadata also included information about their subjects' age in their documentation. Of those that included age information, only 20\% of the documentation and 33\% of the data sets included or mentioned the subjects’ raw age or date/year of birth. The rest codified the subjects’ age in aggregate form but without any consistent standards across them. Among the documentation of data sets that contained age-related information, the two most common ways of aggregating the subjects' ages were to simply provide an overall range for the ages included in the data set (29\%) and to bin by age group categories with the range of each age group numerically defined (25\%). However, the age groups used to report the subjects’ age were inconsistent both in terms of the range of each age category and the starting and ending age for the age field. For example, the 10K U.S. Adult Faces Database that contains over ten thousand images of U.S.
adults used age group categories that included 20-30, 30-45, 45-60, and over 60 years old \cite{66_Bainbridge}, whereas CAS-PEAL with images of 1,040 individuals used categories that included 18-44, 45-59 and 60-74 years old \cite{67_Gao}. Meanwhile, some documentation simply reported the age distribution of the subjects, for instance in the form of the average and standard deviation (21\%), or noted the subjects’ age requirement for participation (4\%). Of the 6 data sets that we were able to download which also had age-related metadata, we found that 33\% of them used more abstract categories that give only a rough approximation of the subjects’ age, are weakly defined, and are open to the interpretation of annotators. For example, the Large Age-Gap data set that contains images of 1,010 celebrities, each with images from when they were young and old, simply codified the subjects’ age with a binary field of “young” and “old” in its metadata without numerical definitions \cite{69_Bianco}. Similarly, CIFAR-100, which contains a large number of annotated images, included categories of “baby,” “boy,” “girl,” “man,” and “woman” without providing precise definitions that distinguish a boy from a man and a girl from a woman \cite{68_Krizhevsky}. \subsection{Annotating Subjects’ Age} We found that the method for annotating subjects’ age could be summarized using three categories: (1)~to record the age of the subjects as provided by the subjects themselves, (2)~to infer subjects' age using other data sources, and~(3) to estimate subjects' age by observing their appearance. Of these methods, recording the subjects’ age-related information during an in-person data collection process (e.g. inviting participants to a studio and taking photos) was the most prevalent (58\%), though this often resulted in a smaller number of unique subjects represented in the data set with an average of 314.8 (std=386.5) people per data set.
Other methods for annotating subjects’ age provided a more scalable means to annotate ages. For instance, multiple data sets used publicly available dates for when a photo was taken and the subjects’ date of birth to infer their ages in that particular photo at a larger scale (21\%). This method was particularly common with data sets that contained images of celebrities as they appeared in movies or other public events, since the date of capture is given by the release date of the movie and celebrities’ dates of birth are easily accessible. This method provided the data sets with a relatively precise estimate of their subjects’ ages, although there could be some deviations as it usually takes some time for a movie to be released after it is filmed. A related (but less precise) method for annotating subjects’ age was to search for celebrities’ names on an online search engine, followed by a descriptor such as “young” and “old” to retrieve young and old-looking images of the same celebrity \cite{69_Bianco}. Such data sets that used other metadata to infer a subject’s age included the Indian Movie Face Database and Cross-Age Celebrity data set and represented an average of 169,367.5 (std=406,955.0) people per data set. Finally, some data sets employed crowdworkers on Amazon Mechanical Turk or students to study the appearance of the photos' subjects and manually annotate them based on how old the subjects looked (13\%). The training and annotation procedure for the annotators, if there was any, was not made clear in any of the data set documentation except for the categories with which the subjects were labeled. As covered above, there were no standards running through the categories used across these data sets. For instance, the CIFAR-100 data set labeled its subjects with weakly-defined categories such as “baby,” “boy,” “girl,” “man,” and “woman” \cite{68_Krizhevsky} while the 10K U.S.
Adult Faces Database labeled them with unevenly spaced age categories that started from 20-30, 30-45, 45-60, and ended with over 60 years of age \cite{66_Bainbridge}. These data sets contained an average of 5,384.1 (std=6,765.6) unique subjects, but it is unclear how well the annotation reflects the ground-truth age of the subjects.\footnote{The average and standard deviation in this sentence were calculated without the value for the Real-World Affective Faces Database, which was only documented to include thousands of unique individuals without a precise value.} \subsection{Goal of Gathering Age} A total of 14 of the 24 data sets we studied that included age-related information did not specify why the subjects’ ages were collected (58\%), but rather noted in the documentation that the demographics of the subjects were included as a part of the data set distribution. For example, the documentation of CMU’s Multi-Pie Face Database writes: “As part of the distribution we make the following demographic information available: gender, year of birth, race and whether the subject wears glasses” \cite{72_Gross}. On the other hand, some data sets gave broad reasoning for including age metadata, suggesting that the goal is to provide a data set for face recognition or analysis tasks that covers a relatively diverse population (17\%) to serve as “an unbiased platform” for future studies \cite{66_Bainbridge}. However, of the 10 face data sets that more specifically described the goal of collecting the subjects’ age, the most common reasons for doing so were directly connected to supporting age-related classification or analysis tasks (60\%). For instance, the Iranian Face Database that contains face images of subjects between ages 2-85 was curated by the data set authors to support the creation of “a reliable age classification algorithm” that takes a face image as input and outputs the age estimate of the subject in the image \cite{73_Bastanfard}.
Meanwhile, data sets such as the Large Age-Gap data set \cite{69_Bianco} or Cross-Age Celebrity data set were created to provide longitudinal face data for particular individuals to help create face recognition algorithms that can recognize the same person at different ages, suggesting that such algorithms can help tag users on photo-sharing websites like Facebook and Flickr where users post images over many years. Of those data sets that explicitly covered the older adults category in their closed intervals for the subjects’ ages, a majority of them (60\%) were created in order to support such age-related classification or analysis tasks. \subsection{Representation of Older Adults} The U.S. census reported that, in 2010, 13.0\% of the adults in the U.S. were at least 65 years old while 1.9\% were at least 85 years old \cite{AARP2010}. Both of these numbers are expected to grow significantly in the years to come, with the older adult population expected to account for 20.2\% and the oldest-old expected to account for 4.3\% of the U.S. population by 2050 \cite{AARP2010}. However, in the face data sets we studied, we found that the representation of older adults is not proportional to the population at large. This is particularly evident in the ranges for the subjects’ ages that 50\% of the data set documentation with age-related information provided. For example, the Long Distance Heterogeneous Face Database described the subjects’ age range in the data set as: “The 100 subjects who participated in our study (70 males and 30 females) were students at Korea University with an age range of 20-30 years old.” Among the data sets that provided such ranges in their documentation, we found that the average age of the oldest individuals included in the face data sets was 56.3 years old (std=19.3), which is lower than 65 years old, the lower bound for older adults that is commonly used in the literature on age and technology \cite{Loi2021}.
In addition, of those data sets whose documentation provided the age ranges, less than half of them included at least one subject who was older than 65 years old (42\%), with only the Iranian Face Database (with images of 616 people) including at least one subject who was older than 85 years old. However, even the Iranian Face Database was heavily skewed towards the younger generation; only 7.5\% of its subjects were included in the age categories greater than 60 years old and only 3.7\% in the age categories greater than 70, while its median age category was 21-30 years old. Other data set documentation summarized the subjects’ ages either with an open bracket (e.g. older than 50) (13\%) or by calculating the average and standard deviation of the subjects’ age (21\%). Observing such ranges and aggregate statistics also presents a similar concern for under-representation of older adults. The highest age categories in those that summarized subjects’ age with an open bracket started well below the starting age of older adults; for example, the oldest category used in the GUC Light Field Face Artifact Database was “31 and above” \cite{70_Raghavendra}. Meanwhile, the average subject age reported by the 21\% of the 24 documentation items with age-related information that provided such statistics averages to 25.0 years old (std=3.4). Overall, the 92 face data sets we studied under-represent older adults, while only one explicitly includes any representation at all of the oldest-old adults. But it is just as noteworthy that some data sets are also particularly skewed towards the younger population in their 20s. One possible explanation for this distribution, suggested by some of the documentation, is that the prevalence of younger adults may be an artifact of convenience sampling.
A quarter (25\%) of the data set documentation that included age-related information specifically mentioned that the subjects were drawn from a university undergraduate population, for example as mentioned in the documentation of the NIMSTIM Set of Facial Expressions data set: “[the subjects] included… undergraduate students from a liberal arts college located in the Midwestern United States” \cite{71_Tottenham}.
\section{Introduction}\label{int} A complete Riemannian metric $g$ on a smooth manifold $M^n$ is called a {\it gradient steady Ricci soliton} if there exists a smooth potential function $f$ on $M^n$ such that the Ricci tensor $Ric$ of the metric $g$ satisfies the equation \begin{equation} \label{Eq1} Ric=Hess\,f. \end{equation} Here, $Hess\,f$ denotes the Hessian of $f.$ Such a function $f$ is called a potential function of the gradient steady soliton. Clearly, when $f$ is constant the gradient steady Ricci soliton is simply a Ricci flat manifold. Thus Ricci solitons are natural extensions of Einstein metrics. Gradient steady solitons play an important role in Hamilton's Ricci flow \cite{Hamilton2}: they correspond to translating solutions and often arise as Type $II$ singularity models, and are thus crucial in the singularity analysis of the Ricci flow \cite{Perelman2}. It is well-known that compact gradient steady solitons must be Ricci flat. For dimension $n=2,$ Hamilton {\cite{Hamilton3}} obtained the {\it cigar soliton}, i.e., the first example of a complete noncompact gradient steady soliton on $\mathbb{R}^2,$ where the explicit metric is given by $g=\frac{dx^2 + dy^2}{1+x^2 + y^2}$ and the potential function is $f= - \ln (1+x^2 + y^2).$ It has positive curvature and is asymptotic to a cylinder of finite circumference at infinity. The scalar curvature decays exponentially. It is important to highlight that $$R|\nabla f|=|\nabla R|$$ on the cigar soliton. This identity is also trivially satisfied, in higher dimensions, for Ricci flat solitons with a constant potential function, for a quotient of the product steady soliton $N^{n-1}\times\mathbb{R}$, where $N^{n-1}$ is Ricci flat, and for the product of the cigar soliton with any complete Ricci flat manifold.
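This identity can be checked directly on the cigar soliton; we include the short computation for the reader's convenience, using the standard fact that the scalar curvature of the cigar metric is $R=\frac{4}{1+x^2+y^2}.$ Writing $r^{2}=x^{2}+y^{2}$ and noting that, for the conformal metric $g=\frac{\delta}{1+r^{2}}$ with $\delta$ the Euclidean metric, one has $|\nabla u|_{g}^{2}=(1+r^{2})|\nabla u|_{\delta}^{2}$ for any smooth function $u$, we compute \begin{eqnarray*} |\nabla f|^{2}=(1+r^{2})\frac{4r^{2}}{(1+r^{2})^{2}}=\frac{4r^{2}}{1+r^{2}}\quad\mbox{and}\quad |\nabla R|^{2}=(1+r^{2})\frac{64r^{2}}{(1+r^{2})^{4}}=\frac{64r^{2}}{(1+r^{2})^{3}}. \end{eqnarray*} Hence \begin{eqnarray*} R|\nabla f|=\frac{4}{1+r^{2}}\cdot\frac{2r}{\sqrt{1+r^{2}}}=\frac{8r}{(1+r^{2})^{3/2}}=|\nabla R|. \end{eqnarray*} In addition, $R+|\nabla f|^{2}=4$ for this normalization of the cigar metric.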
For dimension $n\ge 3,$ Robert Bryant {\cite{Bryant1}} proved that there exists, up to scaling, a unique complete rotationally symmetric gradient steady Ricci soliton on $\mathbb{R}^n.$ Its sectional curvature is positive, the scalar curvature $R$ decays like $r^{-1}$ at infinity, and the volume of the geodesic balls $B_r(0)$ grows on the order of $r^{(n+1)/2}$. Here, $r$ denotes the geodesic distance from the origin. It was conjectured by Perelman {\cite{Perelman2}} that in dimension $n=3$ the Bryant soliton is the only complete noncompact ($\kappa$-noncollapsed) gradient steady soliton with positive sectional curvature. This conjecture was proved by Brendle {\cite{brendle2013}} in 2013, and Brendle himself extended it in 2014 {\cite{brendle2014}} to arbitrary dimension $n\geq 4$ under the additional condition that the steady Ricci soliton is asymptotically cylindrical. Moreover, Deng and Zhu \cite{deng2020} proved that any noncompact $\kappa$-noncollapsed steady Ricci soliton with nonnegative curvature operator must be rotationally symmetric if it has a linear curvature decay. Hence, for $n\geq 4,$ it is desirable to find geometrically interesting conditions under which the uniqueness would hold. In this context, Cao and Chen {\cite{cao2011}} proved that a complete noncompact $n$-dimensional $(n \ge 3)$ locally conformally flat gradient steady Ricci soliton with positive sectional curvature is isometric to the Bryant soliton. Moreover, they showed that a complete noncompact $n$-dimensional locally conformally flat gradient steady Ricci soliton is either flat or isometric to the Bryant soliton (see also Catino-Mantegazza {\cite{catino}}). For $n = 4,$ Chen and Wang {\cite{chenwang}} showed that any $4$-dimensional complete half-conformally flat gradient steady Ricci soliton is either Ricci flat, or locally conformally flat (hence isometric to the Bryant soliton).
In {\cite{cao2014}}, Cao, Catino, Chen, Mantegazza and Mazzieri proved that any $n$-dimensional $(n\ge 4)$ complete Bach-flat gradient steady Ricci soliton with positive Ricci curvature is isometric to the Bryant soliton. Very recently, Cao and Yu proved that any $n$-dimensional complete noncompact gradient steady Ricci soliton with vanishing $D$-tensor is either Ricci flat, or a quotient of the product steady soliton $N^{n-1}\times \mathbb{R},$ where $N^{n-1}$ is Ricci flat, or isometric to the Bryant soliton (up to scalings). Here, the $D$-tensor is the $3$-tensor defined by \eqref{tensorD}; see \cite{cao2020}. In recent years, a lot of progress has been made in understanding curvature estimates of gradient steady Ricci solitons; see, e.g., \cite{chan2019,chow2011,deng2020,lopez2013}. It follows from a result by Chen \cite{chen2009} that every complete gradient steady Ricci soliton has nonnegative scalar curvature, i.e., $R\geq 0.$ On the other hand, it is well known that a complete gradient steady Ricci soliton satisfies \begin{equation} R+|\nabla f|^2 ={C_{0}}\nonumber, \end{equation} where $C_{0}$ is a positive constant. In other words, the scalar curvature of a complete gradient steady Ricci soliton is uniformly bounded. In particular, up to normalization, we may consider \begin{equation} \label{eqR1} R+|\nabla f|^2 =1. \end{equation} This therefore implies that a (normalized) gradient steady Ricci soliton satisfies $0\leq R\leq 1.$ Before proceeding, we recall that Brendle \cite{brendle2011} (see also \cite[Proposition 5.2]{cao2014}) proved the following result. \begin{theorem}[Brendle, \cite{brendle2011}] \label{thm1} Let $(M^n, g, f)$ ($n\ge 3$) be a complete $n$-dimensional gradient steady Ricci soliton. Suppose that the scalar curvature $R$ of $(M^n, g)$ is positive and approaches zero at infinity.
Denote by $\psi: (0,1) \to \mathbb R$ the smooth function such that the vector field $$X:=\nabla R +\psi (R)\nabla f$$ vanishes on the Bryant soliton, and define $u: (0,1) \to \mathbb R$ by $$ u(s)=\log \psi (s) +\frac{1}{n-1}\int_{1/2}^{s} \left(\frac{n}{1-t} - \frac{n-1-(n-3)t}{(1-t)\psi(t)}\right) dt. $$ Moreover, assume that there exists an exhaustion of $M^n$ by bounded domains $\Omega_l$ such that \begin{equation}\label{asym} \lim_{l \to \infty} \int_{\partial\Omega_l} e^{u(R)} \langle \nabla R +\psi (R) \nabla f, \nu\rangle = 0. \end{equation} Then $X=0$ and $D_{ijk}=0$. In particular, for $n=3$, $(M^3, g, f)$ is isometric to the Bryant soliton. \end{theorem} Combined with Proposition 5.1 of Cao et al. \cite{cao2014}, this implies that a complete gradient steady Ricci soliton $(M^{n},g_{ij}, f),$ ($n\ge 4$), with positive Ricci curvature, whose scalar curvature $R$ approaches zero at infinity and for which condition~\eqref{asym} is satisfied for some exhaustion of $M^n$ by bounded domains $\Omega_l$, is isometric to the Bryant soliton. We highlight that the proof of Theorem \ref{thm1} is partially based on the ideas outlined by Robinson \cite{robinson} to study the uniqueness of static black holes, which depend essentially on a suitable divergence formula. In this paper, motivated by the result obtained by Brendle \cite{brendle2011} and the ideas of Robinson \cite{robinson}, we establish a divergence formula (Lemma \ref{lema22}) for steady gradient Ricci solitons and use it to obtain a rigidity result. More precisely, we have established the following result.
\begin{theorem} \label{thmA} Let $\big(M^n,\,g,\,f)$ be a complete noncompact steady gradient Ricci soliton satisfying \begin{eqnarray} \label{riccibound} \sigma R|\nabla f|\leq|\nabla R|, \end{eqnarray} where $\sigma= \frac{(n+1)+\sqrt{(n-1)(7n-13)}}{3n-2}.$ Suppose that there exists an exhaustion of $M^n$ by bounded domains $\Omega_\ell$ such that \begin{eqnarray}\label{asympBene} \displaystyle\lim_{\ell\rightarrow+\infty} \int_{\partial \Omega_\ell}|\nabla R + R\nabla f| = 0. \end{eqnarray} Then $(M^n,\,g,\,f)$ is either Ricci flat with a constant potential function, or a quotient of the product steady soliton $N^{n-1}\times\mathbb{R}$, where $N^{n-1}$ is Ricci flat, or isometric to the Bryant soliton (up to scalings). \end{theorem} \begin{remark} \label{remarkajuda} We point out that no sectional curvature bound is assumed. This is important because, in dimension $4$, the sectional curvature of shrinking and steady Ricci solitons may change sign. Moreover, equality in \eqref{riccibound} holds for the cigar soliton with $\sigma=1$, and \eqref{asympBene} is trivially satisfied. In that sense, our theorem is a comparison theorem with the geometry of the cigar soliton. \end{remark} \begin{remark}\label{curvature scalar bound} In \cite[Lemma 3]{lopez2013}, the authors proved that a nonnegatively curved steady soliton satisfies the following inequality: $$|Ric|^{2}\leq \frac{R^{2}}{2}.$$ Therefore, since $\nabla R=-2Ric(\nabla f)$, the Cauchy--Schwarz inequality gives $$|\nabla R|^{2}\leq 4|Ric|^{2}|\nabla f|^{2}\leq 2R^2|\nabla f|^2.$$ This shows that \eqref{riccibound} can be interpreted as a lower bound for $|\nabla R|$. In fact, $\sigma \leq \dfrac{1+\sqrt{7}}{3}$; see also \cite[Lemma 2.3]{deng2020}. \end{remark} One possible conjecture concerning the curvature decay of a steady gradient Ricci soliton is that the decay rate is either linear or exponential \cite{chan2022}. In \cite[Corollary 1]{chan2019} the author proved that the scalar curvature of a steady Ricci soliton with nonnegative Ricci curvature decays exponentially if an asymptotic condition holds.
In \cite[Theorem 9.56]{CLN1}, Chow, Lu and Ni proved that the scalar curvature of a complete noncompact steady gradient Ricci soliton with positively pinched Ricci curvature has exponential decay. In that sense, condition \eqref{asympBene} can be replaced by an exponential decay for the scalar curvature, i.e., \begin{eqnarray*} R = o(e^{-r}),\quad r\rightarrow\infty, \end{eqnarray*} where $r$ stands for the geodesic distance from a fixed point. In \cite{munteanu}, the authors proved that the curvature tensor $Rm$ of a non-flat steady gradient Ricci soliton whose potential function $f$ is bounded from above by a constant and which satisfies $|Rm|r=o(1)$ at infinity decays like \begin{eqnarray}\label{munteanu22} |Rm|\leq c(1+r)^{3(n+1)}e^{-r}, \end{eqnarray} where $c$ is a positive constant and $r$ stands for the geodesic distance. It is known that the assumption on $f$ holds true when $Ric > 0$. The exponential decay rate in \eqref{munteanu22} is sharp, as seen from $M = N\times\Sigma$, where $\Sigma$ is the cigar soliton and $N$ a compact Ricci flat manifold. Moreover, Chan and Zhu \cite{chan2022} proved that if a steady gradient Ricci soliton $(M,\,g,\,f)$ satisfies $|Rm|\to 0$ at infinity, then one of the following estimates holds outside a compact set of $M$: \begin{eqnarray*} &&C^{-1}r^{-1} \leq |Rm| \leq Cr^{-1};\\ &&C^{-1}e^{-r} \leq |Rm| \leq Ce^{-r}, \end{eqnarray*} where $C$ is a positive constant and $r$ is the distance function. The control of the curvature is an important issue in the analysis of a Ricci soliton, and there are several results proving that the scalar curvature controls the sectional curvature; see \cite{cui2020,chan2019}. It is important to emphasize that four-dimensional Ricci solitons must satisfy the following condition: \begin{eqnarray*} |Rm|\leq A\left(\frac{|\nabla Ric|}{|\nabla f|} + |Ric|\right), \end{eqnarray*} where $A$ is a universal positive constant (see also \cite{cui2020,munteanu2015}). See also \cite[Proposition 2]{chan2019}.
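To illustrate the exponential alternative, the rate can be computed explicitly for the cigar soliton, rescaled so that the normalization $R+|\nabla f|^{2}=1$ holds (a short sketch, using the scaling behavior $R_{\lambda g}=\lambda^{-1}R_{g}$ with $\lambda=4$). For $g=\frac{4(dx^{2}+dy^{2})}{1+x^{2}+y^{2}}$ one has $R=\frac{1}{1+r^{2}},$ where $r^{2}=x^{2}+y^{2},$ and the geodesic distance from the origin is \begin{eqnarray*} t(r)=\int_{0}^{r}\frac{2\,ds}{\sqrt{1+s^{2}}}=2\log\left(r+\sqrt{1+r^{2}}\right)\sim 2\log (2r),\quad r\rightarrow\infty. \end{eqnarray*} Hence $r\sim\frac{1}{2}e^{t/2}$ and $R\sim r^{-2}\sim 4e^{-t}$, so the scalar curvature of the normalized cigar soliton decays exactly like $e^{-r}$, consistent with the exponential case above.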
Therefore, inspired by the above curvature properties of four-dimensional steady Ricci solitons, we prove that if the Ricci curvature is controlled by the scalar curvature, then the soliton must be the Bryant soliton. Moreover, we do not assume that the steady Ricci soliton is $\kappa$-noncollapsed. \begin{theorem} \label{thmMunteanu} Let $\big(M^n,\,g,\,f)$ be a complete noncompact steady gradient Ricci soliton with positive Ricci curvature and \begin{eqnarray*} |Ric| \leq \frac{1}{4}\left[\frac{3}{2}\frac{|\nabla R|^2}{|\nabla f|^2}-2R^2\right]. \end{eqnarray*} Suppose that \begin{eqnarray*} \lim_{r\to\infty}R=0\quad\mbox{and}\quad\lim_{r\to\infty}|Rm|\,r=0. \end{eqnarray*} Then $(M^n,\,g,\,f)$ is isometric to the Bryant soliton (up to scalings). \end{theorem} According to Hamilton \cite{Hamilton,Hamilton2}, a Riemannian manifold $(M^n,\,g)$ has {\it positively pinched Ricci curvature} if there is a uniform constant $\delta > 0$ such that $$\delta Rg\leq Ric(g).$$ Deng and Zhu \cite{deng2015mathz} proved that any $(n\geq 2)$-dimensional complete noncompact steady K\"ahler-Ricci soliton $(M^n,\,g,\,f)$ with positively pinched Ricci curvature must be Ricci flat, provided that there is a point $p$ such that $\nabla f(p)=0$. The existence of such a point $p$ is called the equilibrium point condition. Under our approach, we shall show that this condition can be removed. To be precise, we have established the following results. \iffalse \begin{corollary} Let $\big(M^n,\,g,\,f)$ be a complete noncompact steady gradient Ricci soliton. If $(M^n,\,g)$ has positively pinched Ricci curvature for $\delta =\sigma/2,$ then $(M^n,\,g,\,f)$ is either Ricci flat with a constant potential function, or a quotient of the product steady soliton $N^{n-1}\times\mathbb{R}$, where $N^{n-1}$ is Ricci flat. \end{corollary} The above result answers a question proposed by Ni in case of steady Ricci solitons (cf. \cite{ni2005}).
\begin{remark} We recommend to the reader to see equation \eqref{pinchedricci} and the proof of Corollary \ref{maintheoremkahler} for the conclusion of the above corollary. \end{remark} For the steady gradient K\"ahler-Ricci soliton the above theorem becomes more rigid. \fi \begin{corollary}\label{maintheoremkahler} Let $\big(M^n,\,g,\,f)$, $n\geq3$, be a complete noncompact steady gradient K\"ahler-Ricci soliton. If $(M^n,\,g)$ has positively pinched Ricci curvature, then $(M^n,\,g,\,f)$ is Ricci flat. \end{corollary} Thus, Corollary \ref{maintheoremkahler} answers a question proposed by Chow, Lu and Ni in the case of steady K\"ahler-Ricci solitons (cf. \cite{CLN1,ni2005}), and proves Corollary 1.5 in \cite{deng2015mathz}. In fact, the above corollary still holds for any steady gradient Ricci soliton (not necessarily K\"ahler). Steady Ricci solitons with pinched Ricci curvature have been studied in the past years (cf. \cite{deng2021} and the references therein). In \cite{ni2005}, Ni proved that any steady Ricci soliton with pinched Ricci curvature and nonnegative sectional curvature must be flat. Then, Deng and Zhu \cite{deng2015mathz} proved that Ni's result holds without the assumption on the sectional curvature for steady K\"ahler-Ricci solitons. \begin{remark} We highlight that in the K\"ahler case, Cao found two examples of complete rotationally symmetric noncompact gradient steady K\"ahler-Ricci solitons (see \cite{cao2011} and the references therein). These examples are $U(n)$ invariant and have positive sectional curvature. \end{remark} \section{Background} Throughout this section we review some basic facts and present key results that will be useful in the proofs of our main results.
We start by recalling that for a Riemannian manifold $(M^{n},\,g),$ $n\geq 3,$ the Weyl tensor $W$ is defined by the following decomposition formula \begin{eqnarray*} \label{weyl} R_{ijkl}&=&W_{ijkl}+\frac{1}{n-2}\big(R_{ik}g_{jl}+R_{jl}g_{ik}-R_{il}g_{jk}-R_{jk}g_{il}\big)\nonumber\\&&-\frac{R}{(n-1)(n-2)}\big(g_{jl}g_{ik}-g_{il}g_{jk}\big), \end{eqnarray*} where $R_{ijkl}$ stands for the Riemannian curvature tensor. Moreover, the Cotton tensor $C$ is given by \begin{equation*} \label{cotton} \displaystyle{C_{ijk}=\nabla_{i}R_{jk}-\nabla_{j}R_{ik}-\frac{1}{2(n-1)}\big(\nabla_{i}R g_{jk}-\nabla_{j}R g_{ik}).} \end{equation*} The Cotton and Weyl tensors are related by the following equation on steady gradient Ricci solitons (cf. \cite[Lemma 2.4]{cao2014}). \begin{lemma}\label{lem20} Let $\big(M^n,\,g,\,f)$ be a steady gradient Ricci soliton. Then: \begin{eqnarray*} C_{ijk}&=&W_{ijks}\nabla^{s}f+D_{ijk}, \end{eqnarray*} where the $D$-tensor is given by \begin{eqnarray}\label{tensorD} D_{ijk} &=& \frac{1}{n-2}(R_{ik}\nabla_{j}f - R_{jk}\nabla_{i}f) \nonumber\\ &+& \frac{1}{2(n-1)(n-2)}[g_{jk}(\nabla_{i}R+2R\nabla_{i}f)-g_{ik}(\nabla_{j}R+2R\nabla_{j}f)]. \end{eqnarray} \end{lemma} It is well-known that a gradient steady Ricci soliton satisfies the equation (cf. \cite[Proposition 3]{brendle2011}) \begin{eqnarray}\label{RF1} \nabla R=-2Ric(\nabla f). \end{eqnarray} Moreover, it satisfies \begin{eqnarray}\label{Rf} R + |\nabla f|^{2}=C_{0} \end{eqnarray} at every point of $M,$ where $C_{0}$ is a positive constant. Up to a normalization, we may assume that $C_{0}=1.$ Therefore, a steady (normalized) gradient Ricci soliton satisfies \begin{eqnarray} R + |\nabla f|^{2}=1\nonumber.
\end{eqnarray} Recall that a complete steady gradient Ricci soliton has nonnegative scalar curvature (see \cite[Lemma 2.2]{cao2014}), i.e., $R\geq0.$ Therefore, we conclude that a steady (normalized) gradient Ricci soliton must satisfy $$0\leq R\leq 1.$$ As a consequence of \eqref{tensorD}, we have the following key lemma (see \cite[Proposition 4]{brendle2011}). \begin{lemma}\label{lem200} Let $\big(M^n,\,g,\,f)$ be a steady gradient Ricci soliton. Then we have: \begin{eqnarray*} |D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)(n-2)^{2}} &=& -\frac{(1-R)}{(n-2)^{2}}\Delta R-\frac{(1-R)}{(n-2)^{2}}\langle\nabla R,\,\nabla f\rangle -\frac{1}{2(n-2)^{2}}|\nabla R|^{2}. \end{eqnarray*} \iffalse \begin{eqnarray*} |D|^{2} &=& \frac{-(1-R)\Delta R}{(n-2)^{2}}-\frac{\langle\nabla R,\,\nabla f\rangle}{(n-2)^{2}} -\frac{n|\nabla R|^{2}}{2(n-1)(n-2)^{2}}\nonumber\\ &+&\frac{(n-3)R\langle\nabla R,\,\nabla f\rangle}{(n-1)(n-2)^{2}}-\frac{2R^{2}(1-R)}{(n-1)(n-2)^{2}}. \end{eqnarray*} \fi \end{lemma} \begin{proof} Let us rewrite the norm of $D$ in terms of the potential function $f$ and the scalar curvature $R$ alone. We begin the computation using \eqref{tensorD}, starting with the following equality: \begin{eqnarray*} |D|^{2} &=& \frac{2}{(n-2)^{2}}|Ric|^{2}|\nabla f|^{2} -\frac{2}{(n-2)^{2}}R_{ik}\nabla_jfR_{jk}\nabla_if \nonumber\\ &+&\frac{1}{2(n-1)(n-2)^{2}}|\nabla R + 2R\nabla f|^{2}\nonumber\\ &+&\frac{2}{(n-1)(n-2)^{2}}(R_{ik}\nabla_{j}f - R_{jk}\nabla_{i}f)(\nabla_{i}R+2R\nabla_{i}f)g_{jk}.
\end{eqnarray*} Then, by using \eqref{Eq1}, \eqref{RF1} and \eqref{Rf} we get \begin{eqnarray*} |D|^{2} &=& \frac{2}{(n-2)^{2}}|Ric|^{2}|\nabla f|^{2} -\frac{2}{(n-2)^{2}}\nabla_{i}\nabla_{k}f\nabla_jf\nabla_{j}\nabla_{k}f\nabla_if \nonumber\\ &+&\frac{1}{2(n-1)(n-2)^{2}}|\nabla R + 2R\nabla f|^{2}\nonumber\\ &-&\frac{1}{(n-1)(n-2)^{2}}(\nabla_{i}R + 2R\nabla_{i}f)(\nabla_{i}R+2R\nabla_{i}f)\nonumber\\ &=& \frac{2}{(n-2)^{2}}|Ric|^{2}|\nabla f|^{2} -\frac{1}{2(n-2)^{2}}|\nabla R|^{2}\nonumber\\ &-&\frac{1}{2(n-1)(n-2)^{2}}(|\nabla R|^{2}+4R\langle\nabla R,\,\nabla f\rangle+4R^{2}|\nabla f|^{2}).\nonumber\\ \end{eqnarray*} Now, we need the following identity, which comes from \eqref{Eq1}: \begin{eqnarray*}\label{eq1} 2Ric(\nabla f)=\nabla |\nabla f|^{2}. \end{eqnarray*} Then, taking the divergence of the above equation and again using \eqref{Eq1} and the contracted second Bianchi identity, we get \begin{eqnarray*} 2|Ric|^{2}+ \langle\nabla R,\,\nabla f\rangle=\Delta|\nabla f|^{2}. \end{eqnarray*} Thus, from \eqref{Rf} we have \begin{eqnarray*}\label{topd} 2|Ric|^{2}=-\langle\nabla R,\,\nabla f\rangle-\Delta R. \end{eqnarray*} Therefore, combining these identities we obtain \begin{eqnarray*} |D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)(n-2)^{2}} &=& \frac{-1}{(n-2)^{2}}|\nabla f|^{2}\Delta R-\frac{|\nabla f|^{2}}{(n-2)^{2}}\langle\nabla R,\,\nabla f\rangle -\frac{1}{2(n-2)^{2}}|\nabla R|^{2}.\nonumber\\ \end{eqnarray*} Using \eqref{Rf} once more in the above equation, the result follows. \end{proof} \begin{remark} We point out that Proposition 4 in \cite{brendle2011} follows from the above lemma by taking $n=3.$ In fact, we can rewrite Lemma \ref{lem200} in the following form \begin{eqnarray*} |D|^{2} &=& \frac{-(1-R)\Delta R}{(n-2)^{2}}-\frac{\langle\nabla R,\,\nabla f\rangle}{(n-2)^{2}} -\frac{n|\nabla R|^{2}}{2(n-1)(n-2)^{2}}\nonumber\\ &+&\frac{(n-3)R\langle\nabla R,\,\nabla f\rangle}{(n-1)(n-2)^{2}}-\frac{2R^{2}(1-R)}{(n-1)(n-2)^{2}}.
\end{eqnarray*} The above equation was the one used in \cite{brendle2011} for $n=3$. \end{remark} In what follows, we provide a new divergence formula for steady Ricci solitons. \begin{lemma}\label{lema22} Let $(M^n,\,g,\,f)$ be a steady gradient Ricci soliton. Then, \begin{eqnarray*} (1-R)^{3/2}div\left(\frac{\nabla R}{\sqrt{1-R}}-2\sqrt{1-R}\nabla f\right)=-(n-2)^{2}|D|^{2} - \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}-2R(1-R)^{2}. \end{eqnarray*} \end{lemma} \begin{proof} Consider $Y=\phi(R)\nabla R+ \psi(R)\nabla f$. Since from \eqref{Eq1} we have $\Delta f= R$, we get \begin{eqnarray*} div(Y) = \phi\Delta R + \phi'|\nabla R|^{2}+\psi'\langle\nabla R,\,\nabla f\rangle + \psi\Delta f = \phi\Delta R + \phi'|\nabla R|^{2}+ \psi'\langle\nabla R,\,\nabla f\rangle + R\psi. \end{eqnarray*} Thus, \begin{eqnarray*} 2(1-R)div(Y) = 2(1-R)\phi\Delta R+2(1-R)\phi'|\nabla R|^{2} + 2(1-R)\psi'\langle\nabla R,\,\nabla f\rangle + 2(1-R)R\psi. \end{eqnarray*} Combining this with the previous lemma we get \begin{eqnarray*} &&2\phi(n-2)^{2}|D|^{2} + \frac{\phi|\nabla R + 2R\nabla f|^{2}}{(n-1)} = -2\phi(1-R)\Delta R-2\phi(1-R)\langle\nabla R,\,\nabla f\rangle -\phi|\nabla R|^{2}\nonumber\\ &=&-2(1-R)div(Y)+2(1-R)\phi'|\nabla R|^{2} + 2(1-R)\psi'\langle\nabla R,\,\nabla f\rangle \nonumber\\ &+& 2(1-R)R\psi-2(1-R)\phi\langle\nabla R,\,\nabla f\rangle -\phi|\nabla R|^{2}. \end{eqnarray*} Hence, \begin{eqnarray*} &&2\phi(n-2)^{2}|D|^{2} + \frac{\phi|\nabla R + 2R\nabla f|^{2}}{(n-1)}=-2(1-R)div(Y)+ 2(1-R)R\psi\nonumber\\ &+&[2(1-R)\phi'-\phi]|\nabla R|^{2} + 2(1-R)[\psi'-\phi]\langle\nabla R,\,\nabla f\rangle. \end{eqnarray*} Consider \begin{eqnarray*} \phi= \frac{k}{\sqrt{1-R}}\quad\mbox{and}\quad \psi= -2k\sqrt{1-R}, \end{eqnarray*} where $k$ is a nonzero constant. Therefore, \begin{eqnarray*} 2\phi(n-2)^{2}|D|^{2} + \frac{\phi|\nabla R + 2R\nabla f|^{2}}{(n-1)}+4k(1-R)^{3/2}R=-2(1-R)div(Y).
\end{eqnarray*} Multiplying both sides by $\sqrt{1-R}/(2k)$ and noting that $Y/k$ is the vector field in the statement, the claimed identity follows. \end{proof} \section{Proof of the main result} This section is reserved for the proofs of the main results of this paper. Let us start with the proof of Theorem~\ref{thmA}. \begin{proof}[{\bf Proof of Theorem {\ref{thmA}}}] From Lemma \ref{lema22} we can infer that \begin{eqnarray*} (1-R)^{3/2}div\left(Y\right)=-(n-2)^{2}|D|^{2}- \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}-2R(1-R)^{2}, \end{eqnarray*} where $Y=\frac{\nabla R}{\sqrt{1-R}}-2\sqrt{1-R}\nabla f.$ Let $\Omega$ be a bounded domain on $M$ with smooth boundary. Using the divergence theorem, we get \begin{eqnarray}\label{imptinf} &&\int_{\partial \Omega}\langle(1-R)\nabla R-2(1-R)^{2}\nabla f,\,\nu\rangle = \int_{\Omega\cap\{R<1\}}div\left((1-R)^{3/2}Y\right) \nonumber\\ &=&\int_{\Omega\cap\{R<1\}}(1-R)^{3/2}div\left(Y\right) - \frac{3}{2}\int_{\Omega\cap\{R<1\}}\sqrt{1-R}\langle Y,\,\nabla R\rangle\nonumber\\ &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}+2R(1-R)^{2}\right]\nonumber\\ &-& \frac{3}{2}\int_{\Omega\cap\{R<1\}}\sqrt{1-R}\langle Y,\,\nabla R\rangle\nonumber\\ &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}+2R(1-R)^{2}\right] \nonumber\\ &-& \frac{3}{2}\int_{\Omega\cap\{R<1\}}\langle \nabla R- 2(1-R)\nabla f,\,\nabla R\rangle\nonumber\\ &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}+2R(1-R)^{2}\right] \nonumber\\ &-& \frac{3}{2}\int_{\Omega\cap\{R<1\}} [|\nabla R|^{2}- 2(1-R)\langle\nabla f,\,\nabla R\rangle]\nonumber\\ &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &-& \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2}+ 6(1-R)Ric(\nabla f,\,\nabla f)+2R(1-R)^{2}], \end{eqnarray} where $\nu$ is the outward normal vector field on $\partial\Omega$. In the above equation, we used the identity \begin{eqnarray}\label{relacaof e R} -2Ric(\nabla f)=\nabla R.
\end{eqnarray} Furthermore, using $R=\Delta f$ we get \begin{eqnarray*}\label{divparalelo} div((1-R)^{2}\nabla f)-4(1-R)Ric(\nabla f,\,\nabla f)= R(1-R)^{2}. \end{eqnarray*} Thus, from the above equations we have \begin{eqnarray}\label{intgrande} \int_{\partial \Omega}\langle(1-R)\nabla R-2(1-R)^{2}\nabla f,\,\nu\rangle &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &&- \int_{\Omega\cap\{R<1\}} \Bigg[\frac{3}{2}|\nabla R|^{2} - 2(1-R)Ric(\nabla f,\,\nabla f)\nonumber\\ &&+2div((1-R)^{2}\nabla f)\Bigg],\nonumber \end{eqnarray} and so \begin{eqnarray}\label{ric>0} \int_{\partial \Omega}\langle(1-R)\nabla R,\,\nu\rangle &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &&- \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2} - 2(1-R)Ric(\nabla f,\,\nabla f)]. \end{eqnarray} Moreover, a straightforward computation yields \begin{eqnarray*} \int_{\partial \Omega}\langle(1-R)\nabla R,\,\nu\rangle &\leq&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ && - \frac{3}{2}\int_{\Omega\cap\{R<1\}}\left[|\nabla R|^{2}+ \frac{1}{(n-1)}R^2|\nabla f|^{2}\right] \nonumber\\ &&- \frac{1}{(n-1)}\int_{\Omega\cap\{R<1\}}R\langle\nabla R,\,\nabla f\rangle + \int_{\Omega\cap\{R<1\}} 2(1-R)Ric(\nabla f,\,\nabla f)\nonumber\\ &\leq&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{n|\nabla R + R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ && - \int_{\Omega\cap\{R<1\}}|\nabla R|^{2} + \frac{(n-4)}{2(n-1)}\int_{\Omega\cap\{R<1\}}R^2|\nabla f|^{2} \nonumber\\ &&+ \frac{n-2}{(n-1)}\int_{\Omega\cap\{R<1\}}R\langle\nabla R,\,\nabla f\rangle \nonumber\\ &&+ \int_{\Omega\cap\{R<1\}} 2(1-R)Ric(\nabla f,\,\nabla f).\nonumber\\ \end{eqnarray*} On the other hand, \begin{eqnarray*} \int_{\partial \Omega}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle &\leq&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} +
\frac{n|\nabla R + R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ && - \int_{\Omega\cap\{R<1\}}|\nabla R|^{2} + \frac{(n-4)}{2(n-1)}\int_{\Omega\cap\{R<1\}}R^2|\nabla f|^{2} \nonumber\\ &&+ \frac{n-2}{(n-1)}\int_{\Omega\cap\{R<1\}}R\langle\nabla R,\,\nabla f\rangle \nonumber\\ &&- \int_{\Omega\cap\{R<1\}} (1-R)\langle\nabla R,\,\nabla f\rangle + \int_{\partial \Omega}(1-R)R\langle \nabla f,\,\nu\rangle. \end{eqnarray*} We also have the following identity: \begin{eqnarray*} \int_{\partial \Omega}(1-R)R\langle \nabla f,\,\nu\rangle &=& \int_{\Omega\cap\{R<1\}}div((1-R)R\nabla f)\nonumber\\ &=& \int_{\Omega\cap\{R<1\}}[(1-R)R\Delta f + (1-R)\langle\nabla R,\,\nabla f\rangle- R\langle\nabla R,\,\nabla f\rangle]\nonumber\\ &=& \int_{\Omega\cap\{R<1\}}[R^{2}|\nabla f|^{2} + (1-R)\langle\nabla R,\,\nabla f\rangle- R\langle\nabla R,\,\nabla f\rangle]. \end{eqnarray*} Therefore, \begin{eqnarray}\label{kahler} \int_{\partial \Omega}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle &\leq&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{n|\nabla R + R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ && - \int_{\Omega\cap\{R<1\}}|\nabla R|^{2} + \frac{3(n-2)}{2(n-1)}\int_{\Omega\cap\{R<1\}}R^2|\nabla f|^{2} \nonumber\\ &&+ \frac{1}{(n-1)}\int_{\Omega\cap\{R<1\}}2RRic(\nabla f,\,\nabla f)\\ &=&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2}\right] \nonumber\\ && + \int_{\Omega\cap\{R<1\}}\left[-\frac{(3n-2)}{2(n-1)}|\nabla R|^{2} + \frac{(n-3)}{(n-1)}R^2|\nabla f|^{2} \right] \nonumber\\ &&+ \frac{(n+1)}{(n-1)}\int_{\Omega\cap\{R<1\}}2RRic(\nabla f,\,\nabla f).\nonumber \end{eqnarray} Considering \eqref{relacaof e R} we can infer that \begin{eqnarray*} 2Ric(\nabla f,\,\nabla f)\leq |\nabla R||\nabla f|. 
\end{eqnarray*} Thus, \begin{eqnarray*} \int_{\partial \Omega}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle &\leq&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2}\right] \nonumber\\ && + \int_{\Omega\cap\{R<1\}}\Bigg[-\frac{(3n-2)}{2(n-1)}|\nabla R|^{2} +\frac{(n+1)R|\nabla f|}{(n-1)}|\nabla R|\nonumber\\ &&+ \frac{(n-3)}{(n-1)}R^2|\nabla f|^{2}\Bigg]. \end{eqnarray*} Now, by hypothesis, $$\dfrac{(n+1)+\sqrt{(n-1)(7n-13)}}{3n-2}R|\nabla f|\leq|\nabla R|.$$ Therefore, $$-\frac{(3n-2)}{2(n-1)}|\nabla R|^{2} +\frac{(n+1)R|\nabla f|}{(n-1)}|\nabla R|+ \frac{(n-3)}{(n-1)}R^2|\nabla f|^{2}\leq0.$$ Indeed, the constant in the hypothesis is precisely the largest root of the quadratic above, viewed as a polynomial in $|\nabla R|$ with negative leading coefficient, since its discriminant satisfies $(n+1)^{2}+2(3n-2)(n-3)=(n-1)(7n-13)$. Consequently, considering an exhaustion of $M$ by bounded domains $\Omega_\ell$ such that $$\displaystyle\lim_{\ell\rightarrow\infty} \int_{\partial \Omega_\ell}|\nabla R+R\nabla f|=0,$$ we have \begin{eqnarray*} \int_{\Omega_\ell\cap\{R<1\}}\left[(n-2)^{2}|D|^{2}\right]\leq - \int_{\partial \Omega_\ell}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle. \end{eqnarray*} Letting $\ell\rightarrow\infty$, we have \begin{eqnarray*} &&\int_{\{R<1\}}\left[(n-2)^{2}|D|^{2}\right] \leq - \displaystyle\lim_{\ell\rightarrow\infty} \int_{\partial \Omega_\ell}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle \nonumber\\ &&\leq \left| \displaystyle\lim_{\ell\rightarrow\infty} \int_{\partial \Omega_\ell}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle\right| \\ &&\leq \displaystyle\lim_{\ell\rightarrow\infty} \int_{\partial \Omega_\ell}(1-R)|\langle\nabla R+R\nabla f,\,\nu\rangle|\leq \displaystyle\lim_{\ell\rightarrow\infty} \int_{\partial \Omega_\ell}|\nabla R + R\nabla f| =0. \end{eqnarray*} Since $M$ is complete, from \eqref{Rf} we can infer that $\{R<1\}$ is dense; otherwise, $f$ would be a constant function. Therefore, the $D$-tensor is identically zero. To finish the proof, we need to invoke \cite[Theorem 1.1]{cao2020}, \cite[Theorem 1.4]{cao2013} and \cite[Theorem 1.2]{cao2011}.
\end{proof} \begin{proof}[{\bf Proof of Theorem \ref{thmMunteanu}}] From \eqref{imptinf} we get \begin{eqnarray*} &&\int_{\partial \Omega}(1-R)\langle\nabla R-2(1-R)\nabla f,\,\nu\rangle =-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &-& \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2}+ 6(1-R)Ric(\nabla f,\,\nabla f)+2R(1-R)^{2}]. \end{eqnarray*} Then, a straightforward computation yields \begin{eqnarray*} &&\int_{\partial \Omega}(1-R)\langle\nabla R+2R\nabla f,\,\nu\rangle =-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &-& \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2}+ 6(1-R)Ric(\nabla f,\,\nabla f)+2R(1-R)^{2}] + 2\int_{\partial \Omega}(1-R)\langle\nabla f,\,\nu\rangle. \end{eqnarray*} Then, we use that $div[2(1-R)\nabla f]=-2\langle\nabla R,\,\nabla f\rangle + 2(1-R)R$ to obtain \begin{eqnarray*} &&\int_{\partial \Omega}(1-R)\langle\nabla R+2R\nabla f,\,\nu\rangle =-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &-& \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2}+ 2(1-3R)Ric(\nabla f,\,\nabla f)-2R^2(1-R)]. \end{eqnarray*} Now, since $Ric>0$ and $-R\geq-1$ we can infer that \begin{eqnarray*} &&-\int_{\partial \Omega}(1-R)\langle\nabla R+2R\nabla f,\,\nu\rangle \geq \int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &+& \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2}-4Ric(\nabla f,\,\nabla f)-2R^2(1-R)]. \end{eqnarray*} From Kato's inequality, for steady solitons we have $$|Ric|^2=|\nabla^2f|^2\geq|\nabla|\nabla f||^2=|\nabla\sqrt{1-R}|^2=\frac{|\nabla R|^2}{4|\nabla f|^2}.$$ Hence, \begin{eqnarray}\label{kato} 2Ric(\nabla f,\,\nabla f) = -\langle\nabla R,\,\nabla f\rangle\leq |\nabla R||\nabla f|\leq 2|Ric||\nabla f|^2.
\end{eqnarray} So, \begin{eqnarray*} &&-\int_{\partial \Omega}(1-R)\langle\nabla R+2R\nabla f,\,\nu\rangle \geq \int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{|\nabla R + 2R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ &+& \int_{\Omega\cap\{R<1\}} [\frac{3}{2}|\nabla R|^{2}-4|Ric||\nabla f|^2-2R^2|\nabla f|^2]. \end{eqnarray*} Now, consider the bounded domains as geodesic balls, i.e., $|\partial\Omega_\ell|=w_{n-1}r(\ell)^{n-1}$, where $w_{n-1}$ stands for the volume of the $(n-1)$-sphere. Here, $\ell\to\infty$ implies that $r\to\infty.$ For a nontrivial gradient steady soliton with $Ric \geq 0$ and $\lim_{r\to\infty} R = 0$, we know that $|Ric|^2\leq R^2$ (see \cite{carrillo2009} and \cite[page 12]{chan2019}). Thus, from \eqref{munteanu22} and \eqref{kato} we have \begin{eqnarray*} \int_{\{R<1\}}\left[(n-2)^{2}|D|^{2}\right]&\leq&\lim_{\ell\to\infty}\int_{\partial \Omega_\ell}(|\nabla R|+2R|\nabla f|)\leq\lim_{\ell\to\infty}\int_{\partial \Omega_{\ell}}(\sqrt{2}R+2R)|\nabla f|\nonumber\\ &\leq&A(n)\displaystyle\lim_{r\rightarrow\infty}(2+\sqrt{2})w_{n-1}r^{n-1}(1+r)^{3(n+1)}e^{-r}\nonumber\\ &\leq&c\displaystyle\lim_{r\rightarrow\infty}(1+r)^{2(2n+1)}e^{-r}=0, \end{eqnarray*} where we use that $|\nabla R|\leq\sqrt{2}R.$ Here, $c=A(n)(2+\sqrt{2})w_{n-1}$ and $A(n)$ is a constant depending on $n.$ Then, $D=0$ and the proof follows as in Theorem \ref{thmA}. \end{proof} \begin{proof}[{\bf Proof of Corollary \ref{maintheoremkahler}}] It is well-known that for K\"ahler manifolds with nonnegative Ricci curvature the following inequality holds: $$2Ric(\nabla f,\,\nabla f)\leq R|\nabla f|^{2},$$ see Theorem 3.2 in \cite{Deruelle2012}. So, from \eqref{kahler} we get \begin{eqnarray*} \int_{\partial \Omega}(1-R)\langle\nabla R+R\nabla f,\,\nu\rangle &\leq&-\int_{\Omega\cap\{R<1\}}\left[(n-2)^{2}|D|^{2} + \frac{n|\nabla R + R\nabla f|^{2}}{2(n-1)}\right] \nonumber\\ && - \int_{\Omega\cap\{R<1\}}|\nabla R|^{2} + \frac{3n-4}{2(n-1)}\int_{\Omega\cap\{R<1\}}R^2|\nabla f|^{2}.
\end{eqnarray*} Assuming $\frac{1}{2}\sqrt{\frac{3n-4}{2(n-1)}}Rg\leq Ric$, from \eqref{relacaof e R} we have \begin{eqnarray}\label{pinchedricci} \sqrt{\frac{3n-4}{2(n-1)}}R \leq 2Ric(\frac{\nabla f}{|\nabla f|},\,\frac{\nabla f}{|\nabla f|})= \frac{-1}{|\nabla f|^{2}}\langle\nabla R,\,\nabla f\rangle\leq \frac{|\nabla R|}{|\nabla f|}. \end{eqnarray} It is worth mentioning that for any steady gradient Ricci soliton (not necessarily K\"ahler), the above inequality holds considering $\frac{1}{2}\sigma Rg\leq Ric$, where $\sigma$ is given by Theorem \ref{thmA}. Thus, $\sigma R|\nabla f|\leq|\nabla R|.$ Now, the result follows by the same steps as in the proof of Theorem {\ref{thmA}}. In fact, considering the bounded domains as geodesic balls, i.e., $|\partial\Omega_\ell|=w_{n-1}r(\ell)^{n-1}$, where $w_{n-1}$ stands for the volume of the $(n-1)$-sphere, from \eqref{eqR1} we can conclude that \eqref{asympBene} satisfies \begin{equation}\label{expdecay} \displaystyle\lim_{\ell\rightarrow+\infty} \int_{\partial \Omega_\ell}(|\nabla R| + R|\nabla f|) = \displaystyle\lim_{r\rightarrow+\infty}w_{n-1}r^{n-1}\left[o(e^{-r})+o(e^{-r})\sqrt{1-o(e^{-r})}\right] = 0,\nonumber \end{equation} where we assume that $r\to\infty$ when $\ell\to\infty$. Here, we consider that the scalar curvature has an exponential decay at infinity, i.e., \begin{eqnarray*} R = o(e^{-r}),\quad r\rightarrow\infty, \end{eqnarray*} where $r$ stands for the geodesic distance from a fixed point, see \cite[Theorem 9.56]{CLN1}. Therefore, we obtain $\nabla R+R\nabla f =0$ and $D=0$. Moreover, $$\frac{3n-4}{2(n-1)}R^{2}|\nabla f|^{2} = |\nabla R|^{2}.$$ So, either $n=2$ or $(M,\,g)$ is Ricci flat and $f$ is a constant function. \end{proof} \begin{acknowledgement} The authors want to thank Professor E. Ribeiro Jr. for the helpful remarks and discussions. Jeferson Poveda was supported by PROPG-CAPES [Finance Code 001].
Benedito Leandro was partially supported by Brazilian National Council for Scientific and Technological Development (CNPq Grant 403349/2021-4). \end{acknowledgement}
\subsubsection{\ref{app:subsec-inftermtime}.2 General Case} \ \\[3mm] \begin{lemma} \label{lem:hitting-time-finite-chain} Consider a finite Markov chain on a set~$Q$ of states with $|Q| = n$. Let $x$ denote the smallest nonzero transition probability in the chain. Let $p \in Q$ be any state and $S \subseteq Q$ any subset of~$Q$. Define the random variable $T$ on runs starting in~$p$ by \[ T := \begin{cases} k & \text{if the run hits a state in~$S$ for the first time after exactly $k$ steps} \\ \mathit{undefined} & \text{if the run never hits a state in~$S$ .} \end{cases} \] We have $\mathcal{P}(T \ge k) \le 2 c^k$ for all $k \ge n$, where $c := \exp(-x^n/n)$. \end{lemma} \begin{proof} If $x=1$ then all states that are visited are visited after at most $n-1$ steps and hence $\mathcal{P}(T \ge n) = 0$. Assume $x < 1$ in the following. Since for each state the sum of the probabilities of the outgoing edges is~$1$, we must have $x \le 1/2$. Call \emph{crash} the event of, within the first $n-1$ steps, either hitting~$S$ or some state~$r \in Q$ from which $S$ is not reachable. The probability of a crash is at least $x^{n-1} \ge x^n$, regardless of the starting state. Let $k \ge n$. For the event where $T \ge k$, a crash has to be avoided at least $\lfloor \frac{k-1}{n-1} \rfloor$ times; i.e., \[ \mathcal{P}(T \ge k) \le (1-x^n)^{\lfloor \frac{k-1}{n-1} \rfloor} \,. \] As $\lfloor \frac{k-1}{n-1} \rfloor \ge \frac{k-1}{n-1} - 1 \ge \frac{k}{n} - 1$, we have \begin{align*} \mathcal{P}(T \ge k) & \le \frac{1}{1-x^n} \cdot \left((1-x^n)^{1/n}\right)^k \le 2 \cdot \left((1-x^n)^{1/n}\right)^k \\ & = 2 \cdot \exp\left(\frac1n \log(1-x^n)\right)^k \le 2 \cdot \exp\left(\frac1n \cdot (-x^n)\right)^k = 2 \cdot c^k \,. \end{align*} \qed \end{proof} \newcommand{\Rs}[1]{R^{(#1)}}% \begin{lemma} \label{lem:etime-case-C} Let $p, q \in Q$ such that $[p{\downarrow}q] > 0$ and $q$ is not in a BSCC of~$\mathcal{X}$. Then \[ E(p{\downarrow}q) \le \frac{5 |Q|}{x_{\min}^{|Q| + |Q|^3}} \,. 
\] \end{lemma} \begin{proof} Consider the finite Markov chain~$\mathcal{X}$. Define, for runs in~$\mathcal{X}$ starting in~$p$, the random variable $\widehat{R}$ as the time to hit~$q$, and set $\widehat{R} := \mathit{undefined}$ for runs that do not hit~$q$. There is a straightforward probability-preserving mapping that maps runs in~$\mathcal{M}_\mathscr{A}$ with $R_{p{\downarrow}q} = k$ to runs in~$\mathcal{X}$ with $\widehat{R} = k$. Hence, $\mathcal{P}(R_{p{\downarrow}q} = k) \le \mathcal{P}(\widehat{R} = k)$ for all $k \in \mathbb{N}_0$ and so \begin{align*} E(p{\downarrow}q) \cdot [p{\downarrow}q] & = \sum_{k \in \mathbb{N}_0} \mathcal{P}(R_{p{\downarrow}q} = k) \cdot k \le \sum_{k \in \mathbb{N}_0} \mathcal{P}(\widehat{R} = k) \cdot k \\ & = \sum_{k \in \mathbb{N}} \mathcal{P}(\widehat{R} \ge k) \le \sum_{k=1}^{|Q|} 1 + \sum_{k=0}^\infty 2 c^k = |Q| + \frac{2}{1 - c} && \text{(Lemma~\ref{lem:hitting-time-finite-chain})\,.} \\ \intertext{We have $1-c = 1 - \exp(-x_{\min}^{|Q|}/|Q|) \ge x_{\min}^{|Q|}/(2 |Q|)$, using that $1 - e^{-y} \ge y/2$ for all $y \in [0,1]$; hence} E(p{\downarrow}q) \cdot [p{\downarrow}q] & \le |Q| + \frac{4 |Q|}{x_{\min}^{|Q|}} \le \frac{5 |Q|}{x_{\min}^{|Q|}} \,. \end{align*} As $[p{\downarrow}q] \ge x_{\min}^{|Q|^3}$ by Proposition~\ref{prop:termprobs}, it follows that \[ E(p{\downarrow}q) \le \frac{5 |Q|}{x_{\min}^{|Q| + |Q|^3}} \,. \] \qed \end{proof} \begin{lemma} \label{lem:etime-case-D} Let $p, q \in Q$ such that $[p{\downarrow}q] > 0$ and $q$ is in a BSCC with trend $t \ne 0$. Then \[ E(p{\downarrow}q) \le 85000 \cdot \frac{|Q|^6}{x_{\min}^{5 |Q| + |Q|^3} \cdot t^4} \,. \] \end{lemma} \begin{proof} Let $B$ denote the BSCC of~$q$. For a run $w \in \mathit{Run}(p{\downarrow}q)$, define $\Rs1(w)$ as the time to hit~$B$, and $\Rs2(w)$ as the time to reach $q(0)$ after hitting~$B$. For other runs~$w$ let $\Rs1(w) := \mathit{undefined}$ and $\Rs2(w) := \mathit{undefined}$. Note that $R_{p{\downarrow}q}(w) = \Rs1(w) + \Rs2(w)$ whenever $\Rs1(w)$ and $\Rs2(w)$ are defined.
We have: \begin{align*} E(p{\downarrow}q) \cdot [p{\downarrow}q] & = \sum_{k \in \mathbb{N}_0} \mathcal{P}(R_{p{\downarrow}q} = k) \cdot k \\ & = \sum_{k \in \mathbb{N}_0} \mathcal{P}(\Rs1 + \Rs2 = k) \cdot k \\ & = \sum_{k_1, k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs1 = k_1 \ \land \ \Rs2 = k_2) \cdot (k_1 + k_2) \\ & = \sum_{k_1, k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs1 = k_1) \cdot \mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1) \cdot (k_1 + k_2) \\ & = E_1 + E_2 \,, \end{align*} where \begin{align*} E_1 & := \sum_{k_1, k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs1 = k_1) \cdot \mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1) \cdot k_1 \qquad \text{and} \\ E_2 & := \sum_{k_1, k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs1 = k_1) \cdot \mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1) \cdot k_2 \,. \end{align*} For a bound on~$E_1$ we have \begin{align*} E_1 & = \sum_{k_1 \in \mathbb{N}_0} \mathcal{P}(\Rs1 = k_1) \cdot k_1 \cdot \sum_{k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1) \\ & \le \sum_{k_1 \in \mathbb{N}_0} \mathcal{P}(\Rs1 = k_1) \cdot k_1 \end{align*} Consider the finite Markov chain~$\mathcal{X}$. Define, for runs in~$\mathcal{X}$ starting in~$p$, the random variable $\widehat{\Rs1}$ as the time to hit~$B$, and set $\widehat{\Rs1} := \mathit{undefined}$ for runs that do not hit~$B$. There is a straightforward probability-preserving mapping that maps runs in~$\mathcal{M}_\mathscr{A}$ with $\Rs1 = k_1$ to runs in~$\mathcal{X}$ with $\widehat{\Rs1} = k_1$. Hence, $\mathcal{P}(\Rs1 = k_1) \le \mathcal{P}(\widehat{\Rs1} = k_1)$ for all $k_1 \in \mathbb{N}_0$ and so \begin{equation} \label{eq:bound-E1} E_1 \le \sum_{k_1 \in \mathbb{N}_0} \mathcal{P}(\widehat{\Rs1} = k_1) \cdot k_1 \quad = \quad \sum_{k_1 \in \mathbb{N}} \mathcal{P}(\widehat{\Rs1} \ge k_1) \quad \le \quad \frac{2}{1-c} \end{equation} with $c$ from Lemma~\ref{lem:hitting-time-finite-chain}. For a bound on $E_2$, fix any $k_1 \in \mathbb{N}_0$. 
We have: \begin{align*} & \quad \sum_{k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1) \cdot k_2 \\ & = \sum_{j=0}^{k_1+1} \sum_{k_2 \in \mathbb{N}_0} \underbrace{\mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1, \ \cs{0} = j)}_{= \mathcal{P}(\Rs2 = k_2 \mid \cs{0} = j)} \cdot k_2 \cdot \mathcal{P}(\cs{0} = j \mid \Rs1 = k_1) \,, \\ \intertext{% where we denote by~$\cs{0}$ the counter value when hitting~$B$. In the last equality we used the fact that in each step the counter value can increase by at most~$1$, thus $\Rs1 = k_1$ implies $\cs{0} \le k_1+1$. Denote by $m(k_1) \in \{0, \ldots, k_1+1\}$ the value of~$j$ that maximizes $\sum_{k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs2 = k_2 \mid \cs{0} = j) \cdot k_2$. Then we can continue: } & \le \sum_{k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs2 = k_2 \mid \cs{0} = m(k_1)) \cdot k_2 \cdot \underbrace{\sum_{j=0}^{k_1+1} \mathcal{P}(\cs{0} = j \mid \Rs1 = k_1)}_{=1} \\ \intertext{% Denote by $h(\cs{0})$ the $h$ from Proposition~\ref{lem:expected-term-bound-prob}. We have $h(m(k_1)) \le 2 \frac{|\vv| + m(k_1)}{|t|} \le 2 \frac{|\vv| + k_1 + 1}{|t|} =: \hat h(k_1)$. So we can continue: } & \le \sum_{k_2 = 0}^{\left\lfloor \hat h(k_1)\right\rfloor} k_2 + \sum_{k_2 = \left\lceil\hat h(k_1)\right\rceil}^\infty a^{k_2} \cdot k_2 \qquad \text{(with $a$ from Proposition~\ref{lem:expected-term-bound-prob})} \\ & \le \hat h(k_1)^2 + \frac{a}{(1-a)^2} = \frac{4 (|\vv| + k_1 + 1)^2}{t^2} + \frac{a}{(1-a)^2} \,.
\end{align*} With this inequality and the random variable $\widehat{\Rs1}$ from above at hand we get a bound on~$E_2$: \begin{align*} E_2 & = \sum_{k_1\in \mathbb{N}_0} \underbrace{\underbrace{\mathcal{P}(\Rs1 = k_1)}_{\Large \le \mathcal{P}(\widehat{\Rs1}=k_1)}}_{\le \mathcal{P}(\widehat{\Rs1}\ge k_1)} \cdot {\underbrace{\sum_{k_2 \in \mathbb{N}_0} \mathcal{P}(\Rs2 = k_2 \mid \Rs1 = k_1) \cdot k_2}_{{\Large \le \frac{4 (|\vv| + k_1 + 1)^2}{t^2} + \frac{a}{(1-a)^2}}}} \\ & \le \sum_{k_1=0}^{|Q|-1}\left( \frac{4 (|\vv| + k_1 + 1)^2}{t^2} + \frac{a}{(1-a)^2}\right) + \sum_{k_1=0}^\infty 2 c^{k_1} \frac{a}{(1-a)^2} + \sum_{k_1=0}^\infty 2 c^{k_1} \frac{4 (|\vv| + k_1 + 1)^2}{t^2} \\ & \le \frac{4 |Q| (|\vv| + |Q|)^2}{t^2} + \frac{2 |Q|}{(1-c)(1-a)^2} + \frac{8}{t^2} \sum_{k_1=0}^\infty c^{k_1} (|\vv| + k_1 + 1)^2 \end{align*} The last series can be bounded as follows: \begin{align*} \sum_{k_1=0}^\infty c^{k_1} (|\vv| + k_1 + 1)^2 & \le \sum_{k_1=0}^{\left\lfloor |\vv| + 1 \right\rfloor} \left(2 (|\vv| + 1)\right)^2 + \sum_{k_1=\left\lfloor |\vv| + 1 \right\rfloor + 1}^\infty c^{k_1} \cdot (2 k_1)^2 \\ & \le 4 (|\vv| + 2)^3 + 4 \sum_{k_1=0}^\infty c^{k_1} \cdot k_1^2 = 4 (|\vv| + 2)^3 + 4 \frac{c (c+1)}{(1-c)^3} \\ & \le 4 (|\vv| + 2)^3 + \frac{8}{(1-c)^3} \end{align*} It follows: \begin{equation} \label{eq:bound-E2} E_2 \le \frac{4 |Q| (|\vv| + |Q|)^2}{t^2} + \frac{2 |Q|}{(1-c)(1-a)^2} + \frac{32}{t^2} \left( (|\vv| + 2)^3 + \frac{2}{(1-c)^3} \right) \end{equation} Recall the following bounds: \begin{align*} |\vv| & \le 2 |Q| / x_{\min}^{|Q|} && \text{(Lemma~\ref{lem:v})} \\ 1-c & = 1 - \exp(-x_{\min}^{|Q|}/|Q|) \ge x_{\min}^{|Q|} / (2 |Q|) && \text{(Lemma~\ref{lem:hitting-time-finite-chain})} \\ 1-a & = 1 - \exp\left(- t^2 / \left(8 (|\vv| + 2)^2\right)\right) \ge t^2 / \left(16 (|\vv| + 2)^2\right) && \text{(Proposition~\ref{lem:expected-term-bound-prob})} \\ {[p{\downarrow}q]} & \ge x_{\min}^{|Q|^3} && \text{(Proposition~\ref{prop:termprobs})} \end{align*} After
plugging those bounds into \eqref{eq:bound-E1} and~\eqref{eq:bound-E2} we obtain, using straightforward calculations: \begin{gather*} E_1 \le 4 \frac{|Q|}{x_{\min}^{|Q|}} \qquad \text{and} \qquad E_2 \le 84356 \frac{|Q|^6}{x_{\min}^{5 |Q|} \cdot t^4} \;, \qquad \text{hence} \\ E(p{\downarrow}q) = \frac{E_1 + E_2}{[p{\downarrow}q]} \le 85000 \cdot \frac{|Q|^6}{x_{\min}^{5 |Q| + |Q|^3} \cdot t^4} \,. \end{gather*} \qed \end{proof} \begin{lemma} \label{lem:pumping-pre-post} Let $p,q\in Q$. If $\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))$ is finite, then \[ |\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))|\quad \leq\quad |Q|^2\cdot (|Q|+2) \] \end{lemma} \begin{proof} In this proof we use some notions and results of~\cite{EHRS:MC-PDA} (in particular, we use the notion of $\mathcal{P}$-automata as defined in Section~2.1 of~\cite{EHRS:MC-PDA}). Consider the pOC as a (non-probabilistic) pushdown system with a one-letter stack alphabet, say $\Gamma=\{X\}$ (the counter of height $n$ then corresponds to the stack content $X^n$). A $\mathcal{P}$-automaton $\mathscr{A}_{q(0)}$ accepting the set of configurations $\{q(0)\}$ can be defined to have the set of states $Q$, no transitions, and $q$ as the only accepting state. Let $\mathscr{A}_{\mathit{pre}^*}$ be the $\mathcal{P}$-automaton accepting $\mathit{Pre}^*(q(0))$ constructed using the procedure from Section~4 of \cite{EHRS:MC-PDA}. The automaton $\mathscr{A}_{\mathit{pre}^*}$ has the same set of states, $Q$, as $\mathscr{A}_{q(0)}$. A $\mathcal{P}$-automaton $\mathscr{A}_{p(1)}$ accepting the set of configurations $\{p(1)\}$ can be defined to have the set of states $Q\cup \{p_{acc}\}$, one transition $(p,X,p_{acc})$, and $p_{acc}$ as the only accepting state. Let $\mathscr{A}_{\mathit{post}^*}$ be the automaton accepting $\mathit{Post}^*(p(1))$ constructed using the procedure from Section~6 of \cite{EHRS:MC-PDA}. The automaton $\mathscr{A}_{\mathit{post}^*}$ has at most $|Q|+2$ states.
Using the standard product construction we obtain a $\mathcal{P}$-automaton $\mathscr{A}$ accepting $\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))$, which has $|Q|\cdot (|Q|+2)$ states. Now note that if $\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))$ is finite, then a standard pumping argument for finite automata implies that the length of every word accepted by $\mathscr{A}$ is bounded by $|Q|\cdot (|Q|+2)$. It follows that there are only $|Q|^2\cdot (|Q|+2)$ configurations in $\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))$. \qed \end{proof} \begin{lemma} \label{lem:etime-case-E} Let $p,q\in Q$ such that $\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))$ is finite. Then \[ E(p{\downarrow}q) \le \frac{15 |Q|^3}{x_{\min}^{4 |Q|^3}} \,. \] \end{lemma} \begin{proof} We construct a finite Markov chain~$\mathcal{Y}$ as follows. The states of~$\mathcal{Y}$ are the states in $\left(\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))\right) \cup \{o\}$, where $o$ is a fresh symbol. In general, the transitions in~$\mathcal{Y}$ are as in the infinite Markov chain~$\mathcal{M}_\mathscr{A}$, with the following exceptions: \begin{itemize} \item all transitions leaving the set~$\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))$ are redirected to~$o$; \item all transitions leading to a configuration $r(0)$ with $r \ne q$ are redirected to~$o$; \item $o$ gets a probability~$1$ self-loop. \end{itemize} Let $T$ denote the time at which a run in~$\mathcal{Y}$ starting from~$p(1)$ first hits~$q(0)$. This construction of~$\mathcal{Y}$ ensures that $\mathcal{P}(T=k) = \mathcal{P}(R_{p{\downarrow}q} = k)$ for all $k \in \mathbb{N}_0$. Note that by Lemma~\ref{lem:pumping-pre-post} the chain~$\mathcal{Y}$ has at most $\ell := 3|Q|^3$ states.
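The constant $\ell$ can be checked directly against Lemma~\ref{lem:pumping-pre-post}; the extra $+1$ below accounts for the fresh state~$o$, and we assume $|Q| \ge 2$:
\begin{align*}
|\mathit{Pre}^*(q(0))\cap \mathit{Post}^*(p(1))| + 1 & \le |Q|^2\cdot (|Q|+2) + 1 = |Q|^3 + 2|Q|^2 + 1 \\
& \le |Q|^3 + |Q|^3 + |Q|^3 = 3|Q|^3 = \ell \,,
\end{align*}
using $2|Q|^2 \le |Q|^3$ and $1 \le |Q|^3$ for $|Q| \ge 2$.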
So we have: \begin{align*} [p{\downarrow}q] \cdot E(p{\downarrow}q) & \le \sum_{k \in \mathbb{N}} \mathcal{P}(R_{p{\downarrow}q} \ge k) = \sum_{k \in \mathbb{N}} \mathcal{P}(T \ge k) \\ & = \sum_{k=1}^{\ell-1} \mathcal{P}(T \ge k) + \sum_{k=\ell}^\infty \mathcal{P}(T \ge k) \\ & \le \ell + \sum_{k=0}^\infty 2 c^k = \ell + \frac{2}{1-c} && \text{(Lemma~\ref{lem:hitting-time-finite-chain})} \\ \intertext{We have $1-c = 1 - \exp(-x_{\min}^{\ell}/\ell) \ge x_{\min}^{\ell}/(2 \ell)$, hence} [p{\downarrow}q] \cdot E(p{\downarrow}q) & \le 3 |Q|^3 + \frac{12 |Q|^3}{x_{\min}^{3 |Q|^3}} \le \frac{15 |Q|^3}{x_{\min}^{3 |Q|^3}} \,, \end{align*} and so, by Proposition~\ref{prop:termprobs}, \[ E(p{\downarrow}q) \le \frac{15 |Q|^3}{x_{\min}^{4 |Q|^3}} \,. \] \qed \end{proof} By combining Lemmata \ref{lem:etime-case-C}, \ref{lem:etime-case-D} and~\ref{lem:etime-case-E} we obtain the following proposition, which directly implies Theorem~\ref{thm-exp-infinite}: \begin{proposition} \label{prop:grand-bound} Let $(p,q) \in T^{>0}$. Let $\mathscr{B}$ be the SCC of~$q$ in~$\mathcal{X}$. Let $x_{\min}$ denote the smallest nonzero probability in~$A$. Then we have: \begin{itemize} \item If $\mathit{Pre}^*(q(0)) \cap \mathit{Post}^*(p(1))$ is a finite set, then $\displaystyle E(p{\downarrow}q) \le 15 |Q|^3 / x_{\min}^{4 |Q|^3} $; \item otherwise, if $\mathscr{B}$ is not a BSCC of~$\mathcal{X}$, then $\displaystyle E(p{\downarrow}q) \le 5 |Q| / \left(x_{\min}^{|Q| + |Q|^3}\right) $; \item otherwise, if $\mathscr{B}$ has trend $t \ne 0$, then $\displaystyle E(p{\downarrow}q) \le 85000 |Q|^6 / \left(x_{\min}^{5 |Q| + |Q|^3} \cdot t^4\right) $; \item otherwise, $E(p{\downarrow}q)$ is infinite.
\end{itemize} \end{proposition} \subsection{Finiteness of the expected termination time (Section~\ref{subsec:inftermtime})} \label{app:subsec-inftermtime} Recall that $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$ is a fixed pOC, $\mathcal{X}$ is the underlying Markov chain of $\mathscr{A}$, and $A$ is the transition matrix of~$\mathcal{X}$. This section has two parts. In the first part (Section~\ref{app:subsec-inftermtime}.1) we provide the proofs that apply specifically to the case where $\mathcal{X}$ is strongly connected. In the second part (Section~\ref{app:subsec-inftermtime}.2) we deal with the general case, showing Theorem~\ref{thm-exp-infinite}. \subsubsection{\ref{app:subsec-inftermtime}.1 Strongly connected $\mathcal{X}$} \ \\[3mm] \noindent Recall that \begin{itemize} \item $\boldsymbol{\alpha} \in (0,1]^Q$ is the invariant distribution of~$\mathcal{X}$, \item $\boldsymbol{s} \in \mathbb{R}^Q$ is the vector of expected counter changes defined by \[ \boldsymbol{s}_p = \sum_{(p,c,q) \in \delta^{>0}} P^{>0}(p,c,q) \cdot c \] \item $t$ is the trend of $\mathcal{X}$ given by $t = \boldsymbol{\alpha} \boldsymbol{s}$. \end{itemize} \noindent A \emph{potential} is any vector $\boldsymbol{v}$ that satisfies $\boldsymbol{s} + A \boldsymbol{v} = \boldsymbol{v} + \boldsymbol{1} t$. The intuitive meaning of a potential~$\boldsymbol{v}$ is that, starting in any state $p \in Q$, the expected counter increase after $i$ steps is, for large~$i$, approximately $i t + \boldsymbol{v}_p$. Given a potential~$\boldsymbol{v}$, we define $|\vv| := \vv_{\max} - \vv_{\min}$, where $\vv_{\max}$ and $\vv_{\min}$ are the largest and the smallest component of~$\boldsymbol{v}$, respectively. Now we prove two lemmata that together imply Proposition~\ref{prop-martingale}. \begin{lemma} \label{lem:v} We have the following: \begin{itemize} \item[(a)] Let $W := \boldsymbol{1} \boldsymbol{\alpha}$, i.e., each row of~$W$ equals~$\boldsymbol{\alpha}$. Let $Z := (I - A + W)^{-1}$.
The matrix $Z$ exists and the vector $Z \boldsymbol{s}$ is a potential. \item[(b)] Denote by~$x_{\min}$ the smallest nonzero coefficient of~$A$. There exists a potential~$\boldsymbol{v}$ with $|\vv| \le 2 |Q| / x_{\min}^{|Q|}$. \end{itemize} \end{lemma} \begin{proof}~ \begin{itemize} \item[(a)] The matrix $Z := (I - A + W)^{-1}$ exists by \cite[Theorem 5.1.3]{KS60}. (The matrix~$Z$ is sometimes called the \emph{fundamental matrix} of the finite Markov chain induced by~$A$.) Furthermore, by \cite[Theorem 5.1.3(d)]{KS60} the fundamental matrix~$Z$ satisfies $I + A Z = Z + W$. Multiplying with~$\boldsymbol{s}$ and setting $\vu := Z \boldsymbol{s}$, we obtain $\boldsymbol{s} + A \vu = \vu + \boldsymbol{1} \boldsymbol{\alpha} \boldsymbol{s}$; i.e., $Z \boldsymbol{s}$ is a potential. \item[(b)] Let $\vu$ be the potential from~(a); i.e., we have \begin{equation} (I - A) \vu = \boldsymbol{s} - \boldsymbol{1} t \,. \label{eq:lem-bound-v-lin-eq} \end{equation} By the Perron-Frobenius theorem for strongly connected matrices, there exists a positive vector $\vd \in (0,1]^Q$ with $A \vd = \vd$; i.e., $(I - A) \vd = \boldsymbol{0}$. Observe that $\vu + r \vd$ is a potential for all $r \in \mathbb{R}$. Choose $r$ such that $\boldsymbol{v} := \vu + r \vd$ satisfies $\vv_{\max} = 2 |Q| / x_{\min}^{|Q|}$. It suffices to prove $\vv_{\min} \ge 0$. Let $q \in Q$ such that $\boldsymbol{v}_q = \vv_{\max}$. Define the \emph{distance} of a state $p \in Q$ as the distance of~$p$ from~$q$ in the graph induced by~$A$. Note that $q$ has distance~$0$ and all states have distance at most $n-1$, where $n := |Q|$, as $A$ is strongly connected. We prove by induction that a state~$p$ with distance~$i$ satisfies $\boldsymbol{v}_p \ge 2 (n-i) / x_{\min}^{n-i}$. The claim is obvious for the induction base ($i=0$). For the induction step, let $p$ be a state with distance~$i+1$ and $i \ge 0$. Let $r$ be a state with distance $i$ and $A_{p r} > 0$.
We have: \begin{align*} \boldsymbol{v}_p & = (A \boldsymbol{v})_p + \boldsymbol{s}_p - t && \text{(as $\boldsymbol{v}$ is a potential)} \\ & \ge (A \boldsymbol{v})_p - 2 && \text{(as $\boldsymbol{s}_p, t \in [-1,1]$)} \\ & \ge x_{\min} \boldsymbol{v}_r - 2 && \text{(as $A_{p r} > 0$ implies $A_{p r} \ge x_{\min}$)} \\ & \ge x_{\min} \cdot 2 (n-i) / x_{\min}^{n-i} - 2 && \text{(by induction hypothesis)} \\ & = 2 (n-i) / x_{\min}^{n-(i+1)} - 2 \\ & \ge 2 (n-(i+1)) / x_{\min}^{n-(i+1)} && \text{(as $x_{\min} \le 1$)} \,. \end{align*} This completes the induction step. Hence we have $\vv_{\min} \ge 0$ as desired. \end{itemize} \qed \end{proof} \noindent In the following, the vector~$\boldsymbol{v}$ is always a potential. Recall that $\ps{i}$ and $\cs{i}$ are random variables which to every run $w \in \mathit{Run}(r(c))$ assign the control state and the counter value of the configuration $w(i)$, respectively, and $\ms{i}$ is a random variable defined by \[ \ms{i} = \begin{cases} \cs{i} + \boldsymbol{v}_{\ps{i}} - i t & \text{if $\cs{j} \ge 1$ for all $0 \le j < i$} \\ \ms{i-1} & \text{otherwise} \end{cases} \] \begin{lemma} \label{lem:martingale} The sequence $\ms{0}, \ms{1}, \ldots$ is a martingale. \end{lemma} \begin{proof} Fix a path $u \in \mathit{FPath}(\ps{0}(\cs{0}))$ of length $i \ge 1$. First assume that $\cs{j} \ge 1$ fails for some $j \in \{0,\ldots,i-1\}$. Then for every run $w \in \mathit{Run}(u)$ we have $\ms{i}(w) = \ms{i-1}(w)$. Now assume that $\cs{j} \ge 1$ holds for all $j \in \{0,\ldots,i-1\}$.
Then we have: \begin{align*} \mathbb{E}\left[ \ms{i} \;\middle\vert\; \mathit{Run}(u) \right] & = \mathbb{E}\left[ \cs{i} + \boldsymbol{v}_{\ps{i}} - i t \;\middle\vert\; \mathit{Run}(u) \right] \\ & = \cs{i-1} + \mathop{\sum_{(\ps{i-1},a,q) \in \delta^{>0}}}_{P^{>0}(\ps{i-1},a,q)=x} x \cdot a + \mathop{\sum_{(\ps{i-1},a,q) \in \delta^{>0}}}_{P^{>0}(\ps{i-1},a,q)=x} x \cdot \boldsymbol{v}_q - i t \\ & = \cs{i-1} + \boldsymbol{s}_{\ps{i-1}} + \left( A \boldsymbol{v} \right)_{\ps{i-1}} - i t \\ & = \ms{i-1} + \boldsymbol{s}_{\ps{i-1}} + \left( A \boldsymbol{v} \right)_{\ps{i-1}} - \boldsymbol{v}_{\ps{i-1}} - t \\ & = \ms{i-1} \,, \end{align*} where the last equality holds because $\boldsymbol{v}$ is a potential. \qed \end{proof} \noindent A direct corollary to Lemma~\ref{lem:v} and Lemma~\ref{lem:martingale} is the following: \begin{refproposition}{prop-martingale} There is a vector $\boldsymbol{v} \in \mathbb{R}^{Q}$ such that the stochastic process $m^{(1)},m^{(2)},\dots$ defined by \[ \ms{i} = \begin{cases} \cs{i} \ +\ \boldsymbol{v}_{\ps{i}} \ -\ i \cdot t & \text{if $\cs{j} \ge 1$ for all $0 \leq j < i$;}\\ \ms{i-1} & \text{otherwise} \end{cases} \] is a martingale, where $t$ is the trend of $\mathcal{X}$. Moreover, the vector $\boldsymbol{v}$ satisfies $\vv_{\max}-\vv_{\min}\ \le\ 2 |Q| / x_{\min}^{|Q|}$, where $x_{\min}$ is the smallest positive transition probability in~$\mathcal{X}$, and $\vv_{\max}$ and $\vv_{\min}$ are the maximal and the minimal components of $\boldsymbol{v}$, respectively. \end{refproposition} \noindent Now we prove the propositions needed to justify Claims~(A) and~(B) of Section~\ref{subsec:inftermtime}. \begin{refproposition}{lem:expected-term-bound-prob} Let $p(k)$ be an initial configuration, and let $H_i$ be the set of all runs initiated in $p(k)$ that visit a configuration with zero counter in exactly $i$~transitions. Let \[ a = \exp\left(- \frac{t^2}{8 (|\vv| + t + 1)^2} \right)\,. \] Note that $0 < a < 1$.
Further, let \[ h = \begin{cases} 2 \cdot \frac{-|\vv| - \cs{0}}{t} & \text{if \ $t < 0$} \\ 2 \cdot \frac{ |\vv| - \cs{0}}{t} & \text{if \ $t > 0$ .} \end{cases} \] Then for all $i \in \mathbb{N}$ with $i \ge h$ we have that $\mathcal{P}(H_i) \le a^i$. \end{refproposition} \begin{proof} For all runs in~$H_i$ we have $\ms{i} = \boldsymbol{v}_{\ps{i}} - i t$ and so \begin{equation} \ms{0} - \ms{i} = \cs{0} + \boldsymbol{v}_{\ps{0}} - \boldsymbol{v}_{\ps{i}} + i t \,. \label{eq:lem-exp-term-bound-mart-hit} \end{equation} \begin{description} \item[Case $t < 0$:] By~\eqref{eq:lem-exp-term-bound-mart-hit} we have for $i \ge h$: \begin{align*} \mathcal{P}(H_i) & = \mathcal{P}(H_i \ \land \ \ms{i} - \ms{0} = -\cs{0} - \boldsymbol{v}_{\ps{0}} + \boldsymbol{v}_{\ps{i}} - i t) \\ & \le \mathcal{P}(\ms{i} - \ms{0} = -\cs{0} - \boldsymbol{v}_{\ps{0}} + \boldsymbol{v}_{\ps{i}} - i t) \\ & \le \mathcal{P}(\ms{i} - \ms{0} \ge -\cs{0} - |\vv| - i t) \\ & = \mathcal{P}\left(\ms{i} - \ms{0} \ge (i - h/2) \cdot (-t)\right) \\ & \le \mathcal{P}\left(\ms{i} - \ms{0} \ge (i/2) \cdot (-t)\right) \,. \end{align*} \item[Case $t > 0$:] By~\eqref{eq:lem-exp-term-bound-mart-hit} we have for $i \ge h$: \begin{align*} \mathcal{P}(H_i) & = \mathcal{P}(H_i \ \land \ \ms{0} - \ms{i} = \cs{0} + \boldsymbol{v}_{\ps{0}} - \boldsymbol{v}_{\ps{i}} + i t) \\ & \le \mathcal{P}(\ms{0} - \ms{i} = \cs{0} + \boldsymbol{v}_{\ps{0}} - \boldsymbol{v}_{\ps{i}} + i t) \\ & \le \mathcal{P}(\ms{0} - \ms{i} \ge \cs{0} - |\vv| + i t) \\ & = \mathcal{P}\left(\ms{0} - \ms{i} \ge (i - h/2) \cdot t \right) \\ & \le \mathcal{P}\left(\ms{0} - \ms{i} \ge (i/2) \cdot t \right) \,. \end{align*} \end{description} In each step, the martingale value changes by at most $|\vv| + t + 1$. Hence Azuma's inequality (see~\cite{Williams:book}) asserts for $t \ne 0$ and $i \ge h$: \begin{align*} \mathcal{P}(H_i) & \le \exp \left(- \frac{(i/2)^2 t^2}{2 i (|\vv| + t + 1)^2} \right) && \text{(Azuma's inequality)} \\ & = a^i \,. 
\end{align*} \qed \end{proof} \begin{refproposition}{prop-term-inf} Assume that $\mathit{Pre}^*(q(0))$ is infinite. Then almost all runs initiated in an arbitrary configuration reach $Q(0)$. Moreover, there is $k_1\in \mathbb{N}$ such that, for all $\ell\geq k_1$, the expected length of an honest path from $r(\ell)$ to $Q(0)$ is infinite. \end{refproposition} \begin{proof} As $\mathit{Pre}^*(q(0))$ is infinite and $\mathcal{X}$ is strongly connected, $Q(0)$ is reachable from every configuration with positive probability. Also, recall that $t=0$. Using the strong law of large numbers (see~e.g.~\cite{Williams:book}) and results of~\cite{BBEKW:OC-MDP-arXiv} (in particular Lemma~19), one can show that $Q(0)$ is reached from any configuration with probability one. Consider an initial configuration $r(\ell)$ with $\ell + \boldsymbol{v}_r > \vv_{\max}$. We will show that the expected length of an honest path from~$r(\ell)$ to~$Q(0)$ is infinite; i.e., we can take $k_1 := \lceil |\vv| + 1\rceil$. Consider the martingale $m^{(1)},m^{(2)},\dots$ defined in Proposition~\ref{prop-martingale} over $\mathit{Run}(r(\ell))$. Note that as $t=0$, the definition of the martingale simplifies to \[ \ms{i} = \begin{cases} \cs{i} \ +\ \boldsymbol{v}_{\ps{i}} & \text{if $\cs{j} \ge 1$ for all $0 \leq j < i$;}\\ \ms{i-1} & \text{otherwise} \end{cases} \] Observe that $\ms{0} = \ell + \boldsymbol{v}_{r}$ and that the martingale value changes by at most~$M := \lceil|\vv|\rceil + 1$ in a single step. Let us fix $k\in \mathbb{N}$ such that $\ell + \boldsymbol{v}_r < \vv_{\max} + k$. Define a {\em stopping time}~$\tau$ (see e.g.~\cite{Williams:book}) which returns the first point in time in which either $m^{(\tau)}\geq \vv_{\max} + k$, or $m^{(\tau)}\leq \vv_{\max}$. Observe that $\tau$ is almost surely finite and that $m^{(\tau)}\in [\vv_{\max} - M, \vv_{\max}]\cup [\vv_{\max} + k, \vv_{\max} + k + M]$. Define $x := \mathcal{P}(\ms\tau \ge \vv_{\max} + k)$.
Then \begin{eqnarray}\label{eq:expboundmart} \mathbb{E}[\ms\tau] &\ \leq\ & x \cdot (\vv_{\max}+k+M) + (1-x) \cdot \vv_{\max} = \vv_{\max} + x \cdot (k+M) \end{eqnarray} and by the optional stopping theorem (see e.g.~\cite{Williams:book}), \begin{equation}\label{eq:expstart} \mathbb{E}[\ms\tau]\ =\ \mathbb{E}[\ms0]\ =\ \ell+\boldsymbol{v}_r \,. \end{equation} By putting the equations~(\ref{eq:expboundmart})~and~(\ref{eq:expstart}) together, we obtain that \begin{equation} \mathcal{P}(\ms\tau \geq \vv_{\max} + k)\quad\geq\quad \frac{\ell + \boldsymbol{v}_r - \vv_{\max}}{k+M} \,. \label{eq:mart-opt-stop-upper} \end{equation} Denote by~$T$ the time to hit~$Q(0)$. We need to show $\mathbb{E} T = \infty$. For any run~$w$ with $\ms\tau \ge \vv_{\max} + k$ we have \[ \cs\tau = \ms\tau - \boldsymbol{v}_{\ps\tau} \ge \vv_{\max} + k - \boldsymbol{v}_{\ps\tau} \ge k \;, \] hence we have $T \ge k$ for~$w$, as at least $k$ steps are required to decrease the counter value from~$k$ to~$0$. It follows $\mathcal{P}(\ms\tau \ge \vv_{\max} + k) \le \mathcal{P}(T \ge k)$. Hence: \begin{align*} \mathbb{E} T & = \sum_{k \in \mathbb{N}} \mathcal{P}(T \ge k) \ge \sum_{k=\ell + 1}^\infty \mathcal{P}(T \ge k) \\ & \ge \sum_{k=\ell + 1}^\infty \mathcal{P}(\ms\tau \ge \vv_{\max} + k) \mathop{\ge}^{\eqref{eq:mart-opt-stop-upper}} \sum_{k=\ell + 1}^\infty \frac{\ell + \boldsymbol{v}_r - \vv_{\max}}{k+M} = \infty \,. \end{align*} \qed \end{proof} \begin{refproposition}{prop-pre-closed} There is $k_2 \in \mathbb{N}$ such that for every configuration $r(\ell) \in \mathit{Pre}^*(q(0))$, where $\ell \geq k_2$, we have that if $r(\ell) \tran{} r'(\ell')$, then $r'(\ell') \in \mathit{Pre}^*(q(0))$. \end{refproposition} \begin{proof} We start by observing that $\mathit{Pre}^*(q(0))$ has an ``ultimately periodic'' structure. For every $i \in \Nset_0$, let $\mathit{Pre}(i) = \{ r \in Q \mid r(i) \in \mathit{Pre}^*(q(0))\}$. 
Note that if $\mathit{Pre}(i) = \mathit{Pre}(j)$ for some $i,j \in \Nset_0$, then also $\mathit{Pre}(i{+}1) = \mathit{Pre}(j{+}1)$. Let $m_1$ be the least index such that $\mathit{Pre}(m_1) = \mathit{Pre}(j)$ for some $j>m_1$, and let $m_2$ be the least $j$ with this property. Further, we put $m = m_2 - m_1$. Observe that $m_1,m_2 \leq 2^{|Q|}$, and for every $\ell \geq m_2$ we have that $\mathit{Pre}(\ell) = \mathit{Pre}(\ell{+}m)$. For every configuration $r(\ell)$ of $\mathscr{A}$, let $C(r(\ell))$ be the set of all configurations $r(\ell+i)$ such that $0\leq i < m$ and $r \in \mathit{Pre}(\ell{+}i)$. Note that $C(r(\ell))$ has at most $m$ elements, and we define the \emph{index} of $r(\ell)$ as the cardinality of $C(r(\ell))$. Due to the periodicity of $\mathit{Pre}^*(q(0))$, we immediately obtain that for every $r(\ell)$ and $j \in \Nset_0$, where $\ell \geq m_1$, the index of $r(\ell)$ is the same as the index of $r(\ell{+}j)$. Let $k_2 = m_1 + |Q| + 1$, and assume that there is a transition $r(\ell) \tran{} r'(\ell')$ such that $r \in \mathit{Pre}(\ell)$, $r' \not\in \mathit{Pre}(\ell')$, and $\ell \geq k_2$. Then $r(\ell{+}i) \tran{} r'(\ell'{+}i)$ for all $0 \leq i <m$. Obviously, if $r' \in \mathit{Pre}(\ell'{+}i)$, then also $r \in \mathit{Pre}(\ell{+}i)$, which means that the index of $r'(\ell')$ is \emph{strictly smaller} than the index of $r(\ell)$. Since $\mathcal{X}$ is strongly connected, there is a finite path from $r'(\ell')$ to $r(n)$ of length at most $|Q|$, where $n \geq m_1$. This means that there is a finite path from $r'(\ell'{+}i)$ to $r(n{+}i)$ for every $0 \leq i <m$. Hence, the index of $r'(\ell')$ is at least as large as the index of $r(n)$. Since the indexes of $r(n)$ and $r(\ell)$ are the same, we have a contradiction.
\qed \end{proof} \subsection{Quantitative Model-Checking of $\omega$-regular Properties (Section~\ref{sec-LTL})} \begin{refproposition}{prop-product} Let $\Sigma$ be a finite alphabet, $\mathscr{A}$ a pOC, $\nu$ a valuation, $\mathcal{R}$ a DRA over $\Sigma$, and $p(0)$ a configuration of $\mathscr{A}$. Then there is a pOC $\mathscr{A}'$ with Rabin acceptance condition and a configuration $p'(0)$ of $\mathscr{A}'$ constructible in polynomial time such that the probability of all $w \in \mathit{Run}_{\mathscr{A}}(p(0))$ where $\nu(w)$ is accepted by $\mathcal{R}$ is equal to the probability of all accepting $w \in \mathit{Run}_{\mathscr{A}'}(p'(0))$. \end{refproposition} \begin{proof} Let $(E_1,F_1),\dots,(E_k,F_k)$ be the Rabin acceptance condition of $\mathcal{R}$. The automaton $\mathscr{A}'$ is the synchronized product of $\mathscr{A}$ and $\mathcal{R}$ where \begin{itemize} \item $Q \times R$ is the set of control states, where $R$ is the set of states of $\mathcal{R}$; \item $(p,r) \prule{x,c} (p',r')$ iff $p \prule{x,c} p'$ and $r \xrightarrow{\nu(p(1))} r'$ is a transition in $\mathcal{R}$; \item $(p,r) \zrule{x,c} (p',r')$ iff $p \zrule{x,c} p'$ and $r \xrightarrow{\nu(p(0))} r'$ is a transition in $\mathcal{R}$. \end{itemize} The Rabin acceptance condition of $\mathscr{A}'$ is $(Q \times E_1, Q \times F_1),\dots,(Q \times E_k,Q \times F_k)$. \qed \end{proof} \begin{refproposition}{prop-visiting-approx} Let $c = 2|Q|$. For every $s \in G$, let $R_s$ be the probability of visiting a BSCC of $\mathcal{G}$ from $s$ in at most $c$ transitions, and let $R = \min\{R_s \mid s \in G\}$. Then $R > 0$ and if all transition probabilities in $\mathcal{G}$ are computed with relative error at most $\varepsilon R^3/(8(c+1)^2)$, then the resulting system $(I-A')\vec{V} = \vec{b}'$ has a unique solution $\vec{U}^*$ such that $|\vec{V}^*_s - \vec{U}^*_s|/\vec{V}^*_s \leq \varepsilon$ for every $s \in G$.
\end{refproposition} \begin{proof} The first step towards applying Theorem~\ref{thm:error} is to estimate the condition number $\kappa = \norm{I-A} \cdot \norm{(I-A)^{-1}}$. Obviously, $\norm{I-A} \leq 2$. Further, $\norm{(I-A)^{-1}}$ is bounded by the expected number of steps needed to reach a BSCC of $\mathcal{G}$ from a state of $G$ (here we use a standard result about absorbing finite-state Markov chains). Since $G$ has at most $c$ states, we have that $R_s > 0$, and hence also $R >0$. Obviously, the probability of \emph{not visiting} a BSCC of $\mathcal{G}$ in at most $i$ transitions from a state of $G$ is bounded by $(1-R)^{\lfloor i/c\rfloor}$. Hence, the probability of visiting a BSCC of $\mathcal{G}$ from a state of $G$ after \emph{exactly} $i$ transitions is bounded by $(1-R)^{\lfloor (i-1)/c\rfloor}$. Further, a simple calculation shows that \begin{eqnarray*} \norm{(I-A)^{-1}} & \quad\leq\quad & \sum_{i=1}^\infty i \cdot (1-R)^{\lfloor (i-1)/c\rfloor} \quad=\quad \sum_{i=0}^\infty \left(\frac{c(c+1)}{2} + ic^2\right) \cdot \left(1-R\right)^i\\ & = & \frac{c(c+1)}{2R} + \frac{c^2 (1-R)}{R^2} \quad\leq\quad \left(\frac{c+1}{R}\right)^2 \end{eqnarray*} Hence, $\kappa \leq 2(c+1)^2/R^2$. Let $\vec{V}^*$ be the unique solution of $(I-A)\vec{V} = \vec{b}$. Since $\norm{\vec{V}^*} \leq 1$ and $\vec{V}^*_s \geq R$ for every $s \in G$, it suffices to compute an approximate solution $\vec{U}^*$ such that \[ \frac{\norm{\vec{V}^*-\vec{U}^*}}{\norm{\vec{V}^*}} \quad\leq\quad \varepsilon \cdot R \] By Theorem~\ref{thm:error}, we have that \[ \frac{\norm{\vec{V}^*-\vec{U}^*}}{\norm{\vec{V}^*}} \quad\leq\quad 4\tau\kappa \quad\leq\quad \frac{8\tau(c+1)^2}{R^2} \] where $\tau$ is the relative error of $A$ and $\vec{b}$. Hence, it suffices to choose $\tau$ so that \[ \tau \quad\leq\quad \frac{\varepsilon R^3}{8(c+1)^2} \] and compute all transition probabilities in $\mathcal{G}$ up to the relative error~$\tau$.
Note that the matrix $I - A'$, where $A'$ is the approximation of $A$ obtained in this way, is still regular, because \[ \norm{A - A'} \quad\leq\quad \tau \quad\leq\quad \frac{\varepsilon R^3}{8(c+1)^2} \quad<\quad \frac{R^2}{(c+1)^2} \quad\leq\quad \frac{1}{\norm{(I-A)^{-1}}} \] \qed \end{proof} \noindent Now we prove the divergence gap theorem. Some preliminary lemmata are needed. \begin{lemma} \label{lem:reach-high} Let $A$ be strongly connected and $t \ge 0$. Assume $[p{\downarrow}] > 0$ for all $p \in Q$. Let $\cs{0} \ge 1$ and $\ps{0} \in Q$ such that $\boldsymbol{v}_{\ps{0}} = \vv_{\max}$. Let $b \in \mathbb{N}$. Then \[ \mathcal{P} \left(\exists i: \cs{i} \ge b \land \forall j \le i: \cs{j} \ge 1 \;\middle\vert\; \mathit{Run}(\ps{0}(\cs{0})) \right) \quad \ge \quad \frac{1}{b + 1 + |\vv|} \,. \] \end{lemma} \begin{proof} If $\cs{0} \ge b$, the lemma holds trivially. So we can assume that $\cs{0} < b$. For a run $w \in \mathit{Run}(\ps{0}(\cs{0}))$, we define a so-called \emph{stopping time} $\tau$ as follows: \[ \tau := \inf\{ i \in \mathbb{N}_0 \mid \ms{i} \le \vv_{\max} \ \lor \ \ms{i} \ge b + \vv_{\max} \} \] Note that $1 + \vv_{\max} \le \ms{0} < b + \vv_{\max}$, i.e., $\tau \ge 1$. Let $E$ denote the subset of runs in~$\mathit{Run}(\ps{0}(\cs{0}))$ where $\tau < \infty$ and $\ms{\tau} \ge b + \vv_{\max}$; i.e., $E$ is the event that the martingale~$\ms{i}$ reaches a value of $b+\vv_{\max}$ or higher without previously reaching a value of $\vv_{\max}$ or lower. Similarly, let $D$ denote the subset of runs in~$\mathit{Run}(\ps{0}(\cs{0}))$ such that the counter reaches a value of~$b$ or higher without previously hitting~$0$. To prove the lemma we need to show $\mathcal{P}(D) \ge 1 / (b + 1 + |\vv|)$. We will do that by showing that $D \supseteq E$ and $\mathcal{P}(E) \ge 1 / (b + 1 + |\vv|)$. First we show $D \supseteq E$. Consider any run in~$E$; i.e., $\ms{\tau} \ge b + \vv_{\max}$ and $\ms{i} > \vv_{\max}$ for all $i \le \tau$.
So, for all $i \le \tau$ we have $\ms{i} = \cs{i} + \boldsymbol{v}_{\ps{i}} - i t > \vv_{\max}$, implying $\cs{i} > 0$. Similarly, $\ms{\tau} = \cs{\tau} + \boldsymbol{v}_{\ps{\tau}} - \tau t \ge b + \vv_{\max}$, implying $\cs{\tau} \ge b$. Hence, the run is in~$D$, implying $D \supseteq E$. Hence it remains to show $\mathcal{P}(E) \ge 1 / (b + 1 + |\vv|)$. Next we argue that $\mathbb{E} \tau$ is finite: Since $[p{\downarrow}] > 0$ for all $p \in Q$, there are constants $k \in \mathbb{N}$ and $x \in (0,1]$ such that, given any configuration $p(c)$ with $p \in Q$ and $c \ge 1$, the probability of reaching in at most~$k$ steps a configuration $q(c-1)$ for some $q \in Q$ is at least~$x$. Since $A$ is strongly connected, it follows that there are constants $k' \in \mathbb{N}$ and $x' \in (0,1]$ such that, given any configuration $p(c)$ with $p \in Q$ and $c \ge 1$, the probability of reaching in at most~$k'$ steps either a configuration with zero counter or a configuration $p(c-b)$ is at least~$x'$. It follows that whenever $\ms{i} < b + \vv_{\max}$ the probability that there is $j \le k'$ with $\ms{i+j} \le \vv_{\max}$ is at least~$x'$. Hence we have \[ \mathbb{E} \tau = \sum_{\ell = 0}^\infty \mathcal{P}( \tau > \ell ) \le k' \sum_{\ell = 0}^\infty \mathcal{P}( \tau > k' \ell ) \le k' \sum_{\ell = 0}^\infty (1-x')^\ell = k' / x' \,; \] i.e., $\mathbb{E} \tau$ is finite. Consequently, the \emph{Optional Stopping Theorem}~\cite{Williams:book} is applicable and asserts \begin{equation} \mathbb{E} \ms{\tau} = \mathbb{E} \ms{0} = \ms{0} \ge 1 + \vv_{\max}\,. \label{eq:optional-stopping} \end{equation} For runs in~$E$ we have $\ms{\tau-1} < b + \vv_{\max}$. Since the value of $\ms{i}$ can increase by at most $1 + |\vv|$ in a single step, we have $\ms{\tau} \le b + \vv_{\max} + 1 + |\vv|$ for runs in~$E$. 
It follows that \begin{align*} \mathbb{E} \ms{\tau} & \le \mathcal{P}(E) \cdot (b + \vv_{\max} + 1 + |\vv|) + (1 - \mathcal{P}(E)) \cdot \vv_{\max} \\ & = \vv_{\max} + \mathcal{P}(E) \cdot (b + 1 + |\vv|) \,. \end{align*} Combining this inequality with~\eqref{eq:optional-stopping} yields $\mathcal{P}(E) \ge 1 / (b + 1 + |\vv|)$. This completes the proof. \qed \end{proof} Let $[\ps{0}(\cs{0}){\downarrow}]$ denote the probability that a run initiated in $\ps{0}(\cs{0})$ eventually reaches counter value zero. The following lemma gives an upper bound on~$[\ps{0}(\cs{0}){\downarrow}]$. \begin{lemma} \label{lem:azuma} Let $A$ be strongly connected and $t > 0$. Let \[ a := \exp\left(- \frac{t^2}{2 (|\vv| + t + 1)^2} \right)\,. \] Note that $0 < a < 1$. Let $\cs{0} \ge |\vv|$. Then we have \[ [\ps{0}(\cs{0}){\downarrow}] \quad \le \quad \frac{a^{\cs{0}}}{1-a} \qquad \text{for all $\ps{0} \in Q$.} \] Moreover, if $\cs{0} \ge 6 (|\vv| + t + 1)^3 / t^3$, then $[\ps{0}(\cs{0}){\downarrow}] \le 1/2$ for all $\ps{0} \in Q$. \end{lemma} \begin{proof} Define $H_i$ as the event that the counter reaches zero for the first time after exactly $i$ steps; i.e., $H_i := \{w \in \mathit{Run}(\ps{0}(\cs{0})) \mid \cs{i} = 0 \ \land \ \forall 0 \le j < i: \cs{j} \ge 1\}$. We have $[\ps{0}(\cs{0}){\downarrow}] = \mathcal{P}\left( H_0 \cup H_1 \cup \cdots \right)$. Observe that $H_i = \emptyset$ for $i < \cs{0}$, because in each step the counter value can decrease by at most~$1$. For all runs in~$H_i$ we have $\ms{i} = \boldsymbol{v}_{\ps{i}} - i t$ and so \begin{equation*} \ms{0} - \ms{i} = \cs{0} + \boldsymbol{v}_{\ps{0}} - \boldsymbol{v}_{\ps{i}} + i t \,. 
\end{equation*} It follows that \begin{align*} \mathcal{P}(H_i) & = \mathcal{P}(H_i \ \land \ \ms{0} - \ms{i} = \cs{0} + \boldsymbol{v}_{\ps{0}} - \boldsymbol{v}_{\ps{i}} + i t) \\ & \le \mathcal{P}(\ms{0} - \ms{i} = \cs{0} + \boldsymbol{v}_{\ps{0}} - \boldsymbol{v}_{\ps{i}} + i t) \\ & \le \mathcal{P}(\ms{0} - \ms{i} \ge \cs{0} - |\vv| + i t) \\ & \le \mathcal{P}(\ms{0} - \ms{i} \ge i t) && \text{(as $\cs{0} \ge |\vv|$)\,.} \end{align*} In each step, the martingale value changes by at most $|\vv| + t + 1$. Hence Azuma's inequality (see~\cite{Williams:book}) asserts \begin{align*} \mathcal{P}(H_i) & \le \exp \left(- \frac{i t^2}{2 (|\vv| + t + 1)^2} \right) && \text{(Azuma's inequality)} \\ & = a^i \,. \end{align*} It follows that \begin{align*} [\ps{0}(\cs{0}){\downarrow}] = \sum_{i=0}^\infty \mathcal{P}(H_i) & = \sum_{i=\cs{0}}^\infty \mathcal{P}(H_i) && \text{(as $H_i = \emptyset$ for $i < \cs{0}$)} \\ & \le \sum_{i=\cs{0}}^\infty a^i && \text{(by the computation above)} \\ & = a^{\cs{0}} / (1-a) \,. \end{align*} This proves the first statement. For the second statement, we need to find a condition on~$\cs{0}$ such that $[\ps{0}(\cs{0}){\downarrow}] \le 1/2$. The condition provided by the first statement is equivalent to \begin{equation*} \cs{0} \ge \frac{\ln(1-a) - \ln 2}{\ln a}\,. \label{eq:cond-threshold} \end{equation*} Define $d := \frac{t^2}{2 (|\vv| + t + 1)^2}$. Note that $a = \exp(-d)$ and $0 < d < 1$. It is straightforward to verify that \[ \frac{\ln(1-\exp(-d)) - \ln 2}{-d} \le \frac{2}{d^{3/2}} \quad \text{for all $0 < d < 1$.} \] Since \[ \frac{2}{d^{3/2}} = \frac{2 \cdot 2^{3/2} \cdot (|\vv| + t + 1)^3}{t^3} \le \frac{6 (|\vv| + t + 1)^3}{t^3}\,, \] the second statement follows. \qed \end{proof} \begin{proposition} \label{prop:gap} Let $A$ be strongly connected and $t > 0$ and $[p{\downarrow}] > 0$ for all $p \in Q$. Let $p \in Q$ with $\boldsymbol{v}_p = \vv_{\max}$. Then \[ [p{\uparrow}] \ge \frac{t^3}{12 (2 |\vv| + 4)^3}\,. 
\] \end{proposition} \begin{proof} Define $b$ as the smallest integer $b \ge 6 (|\vv| + t + 1)^3 / t^3$. By Lemma~\ref{lem:reach-high} we have \[ \mathcal{P} \left(\exists i: \cs{i} \ge b \land \forall j \le i: \cs{j} \ge 1 \;\middle\vert\; \mathit{Run}(p(1)) \right) \quad \ge \quad \frac{1}{b + 1 + |\vv|} \,. \] Since $0 < t \le 1$, we have \[ b + 1 + |\vv| \quad\le\quad 6 (|\vv| + t + 2)^3 / t^3 + 1 + |\vv| \quad\le\quad 6 (2 |\vv| + 4)^3 / t^3 \] and so \[ \mathcal{P} \left(\exists i: \cs{i} \ge b \land \forall j \le i: \cs{j} \ge 1 \;\middle\vert\; \mathit{Run}(p(1)) \right) \quad \ge \quad \frac{t^3}{6 (2 |\vv| + 4)^3}\,. \] Using the Markov property and Lemma~\ref{lem:azuma} we obtain \[ [p{\uparrow}] \ge \frac{t^3}{12 (2 |\vv| + 4)^3} \,. \] \qed \end{proof} Now let us drop the assumption that $A$ is strongly connected. Each BSCC $\mathscr{B}$ of $A$ induces a strongly connected pOC in which we have a trend $t$ and a potential $\vec{v}$. \begin{reftheorem}{thm-gap} Let $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$ be a pOC and $\mathcal{X}$ the underlying finite-state Markov chain of $\mathscr{A}$. Let $p \in Q$ such that $[p{\uparrow}]>0$. Then there are two possibilities: \begin{enumerate} \item There is $q\in Q$ such that $[p,q]>0$ and $[q{\uparrow}]=1$. Hence, $[p{\uparrow}] \geq [p,q]$. \item There is a BSCC $\mathscr{B}$ of $\mathcal{X}$ and a state $q$ of $\mathscr{B}$ such that $[p,q]>0$, $t > 0$, and $\vec{v}_{q}=\vec{v}_{\max}$ (here $t$ is the trend, $\vec{v}$ is the vector of Proposition~\ref{prop-martingale}, and $\vec{v}_{\max}$ is the maximal component of~$\vec{v}$; all of these are considered in $\mathscr{B}$). Further, \[ [p{\uparrow}]\quad \ge\quad [p,q]\cdot \frac{t^3}{12 (2 |\vec{v}| + 4)^3}\,. \] \end{enumerate} \end{reftheorem} \begin{proof} Assume that $[q{\uparrow}]<1$ for all $q\in Q$. Given a BSCC $\mathscr{B}$, denote by $R_{\mathscr{B}}$ the set of runs of $\mathit{Run}(p{\uparrow})$ that reach $\mathscr{B}$. 
Almost all runs of $\mathit{Run}(p{\uparrow})$ belong to $\bigcup_{\mathscr{B}} R_{\mathscr{B}}$. Moreover, using the strong law of large numbers (see~e.g.~\cite{Williams:book}) and results of~\cite{BBEKW:OC-MDP-arXiv} (in particular Lemma~19), one can show that almost every run of $\mathit{Run}(p{\uparrow})$ belongs to some $R_{\mathscr{B}}$ satisfying $t>0$. It follows that there is a BSCC $\mathscr{B}$ such that $t>0$ and $\mathcal{P}(R_\mathscr{B})>0$. Now almost all runs of $R_\mathscr{B}$ either terminate, or visit all states of $\mathscr{B}$ infinitely many times. In particular, almost all runs of $R_\mathscr{B}$ reach a state $q$ satisfying $\vec{v}_{q}=\vec{v}_{\max}$, and thus $[p,q]>0$. \qed \end{proof} \subsection{Efficient approximation of finite expected termination time (Section~\ref{subsec:fintermtime})} \noindent We will use the following theorem from numerical analysis (see, e.g., \cite{EWY:one-counter}): \begin{theorem} \label{thm:error} Consider a system of linear equations, $B\cdot \vec{V}=\vec{b}$, where $B\in \mathbb{R}^{n\times n}$ and $\vec{b}\in \mathbb{R}^n$. Suppose that $B$ is regular and $\vec{b}\not = \vec{0}$. Let $\vec{V}^*=B^{-1}\cdot\vec{b}$ be the unique solution of this system and suppose that $\vec{V}^*\not = \vec{0}$. Denote by $\kappa(B)=\norm{B}\cdot \norm{B^{-1}}$ the condition number of $B$. Consider a system of equations $(B+{\Delta})\cdot \vec{V}=\vec{b}+\vec{\zeta}$ where ${\Delta}\in \mathbb{R}^{n\times n}$ and $\vec{\zeta}\in \mathbb{R}^n$. If $\norm{{\Delta}}<\frac{1}{\norm{B^{-1}}}$, then the system $(B+{\Delta})\cdot \vec{V}=\vec{b}+\vec{\zeta}$ has a unique solution $\vec{V}^*_{p}$.
Moreover, for every $\delta>0$ satisfying $\frac{\norm{\Delta}}{\norm{B}}\leq \delta$ and $\frac{\norm{\zeta}}{\norm{b}}\leq \delta$ and $4\cdot \delta\cdot \kappa(B)<1$ the solution $\vec{V}^*_{p}$ satisfies \[ \frac{\norm{\vec{V}^* - \vec{V}^*_{p}}}{\norm{\vec{V}^*}}\quad \leq\quad 4\cdot \delta\cdot \kappa(B) \] \end{theorem} \begin{proposition} \label{prop:error} Consider a system of linear equations, $C\cdot \vec{W}=\vec{c}$, where $C\in \mathbb{R}^{n\times n}$ and $\vec{c}\in \mathbb{R}^n$. Suppose that $C$ is nonsingular and $\vec{c}\not = \vec{0}$. Let $\vec{W}^*=C^{-1}\cdot\vec{c}$ be the unique solution of this system. Let $\norm{\cdot}$ be the $l_\infty$ norm. Consider a system $(C+{\mathcal{E}}) \cdot \vec{W}=\vec{c}$ where ${\mathcal{E}}\in \mathbb{R}^{n\times n}$. Let $u \ge 1$ and $v \ge 1$ be such that $\norm{C} \le u$ and $\norm{C^{-1}} \le v$. If $\norm{{\mathcal{E}}}<1/v$, then the system $(C+{\mathcal{E}})\cdot \vec{W}=\vec{c}$ has a unique solution $\vec{W}^*_{p}$. Moreover, if $\norm{\mathcal{E}} \le \delta < 1 / (4 u v)$, then $\vec{W}^*_{p}$ satisfies \[ \frac{\norm{\vec{W}^* - \vec{W}^*_{p}}}{\norm{\vec{W}^*}}\quad \leq\quad \delta\cdot 4 u v \] \end{proposition} \begin{proof} We apply Theorem~\ref{thm:error} with \[ B := \left(\begin{matrix} C \ & 0 \\ 0 \ & 1 \end{matrix}\right) \qquad \text{and} \qquad b := \left(\begin{matrix} c \\ 1 \end{matrix} \right) \qquad \text{and} \qquad \Delta := \left(\begin{matrix} \mathcal{E} \ & 0 \\ 0 \ & 0 \end{matrix}\right) \,; \] i.e., a single equation $x=1$ for a new variable~$x$ is added to the system, without new errors. Notice that \[ B^{-1} = \left(\begin{matrix} C^{-1} \ & 0 \\ 0 \ & 1 \end{matrix}\right) \qquad \text{and} \qquad \vec{V}^* := \left(\begin{matrix} \vec{W}^* \\ 1 \end{matrix} \right) \,. \] Further, $\norm{B^{-1}} = \max\{1, \norm{C^{-1}}\}$. So we have $\norm{\Delta} = \norm{\mathcal{E}} < 1/v \le 1/\max\{1,\norm{C^{-1}}\} = 1 / \norm{B^{-1}}$.
Thus, by Theorem~\ref{thm:error} there is a unique solution of $(B+{\Delta})\cdot \vec{V}=\vec{b}$, hence $\vec{W}^*_{p}$ is unique too. Moreover, we have \begin{align*} \frac{\norm{\Delta}}{\norm{B}} & = \frac{\norm{\Delta}}{\max\{1,\norm{C}\}} \le \norm{\Delta} = \norm{\mathcal{E}} \le \delta \qquad \text{and} \\ 4 \cdot \delta \cdot \kappa(B) & = 4 \cdot \delta \cdot \max\{1,\norm{C}\} \cdot \max\{1,\norm{C^{-1}}\} \le 4 \cdot \delta \cdot u \cdot v < 1\,, \end{align*} so Theorem~\ref{thm:error} implies \[ \frac{\norm{\vec{W}^* - \vec{W}^*_{p}}}{\norm{\vec{W}^*}}\quad \leq\quad 4\cdot \delta\cdot \kappa(B) \quad \le \quad \delta \cdot 4 u v\,. \] \qed \end{proof} \noindent With this at hand we can prove Proposition~\ref{prop-exp-approx}: \begin{refproposition}{prop-exp-approx} Let $b\in \mathbb{R}^+$ satisfy $E(p{\downarrow}q)\leq b$ for all $(p,q) \in T^{>0}_{<\infty}$. For each $\varepsilon$, where $0 < \varepsilon < 1$, let $\delta = \varepsilon\, / (12\cdot b^2)$. If $\norm{G-H} \le \delta$, then the perturbed system $\vec{V} = G \cdot \vec{V} + \boldsymbol{1}$ has a unique solution $\vec{F}$. Moreover, we have that \[ |E(p{\downarrow} q) - \vec{F}_{pq}| \quad \leq\quad \varepsilon \qquad \text{for all $(p,q) \in T^{>0}_{<\infty}$.} \] Here $\vec{F}_{pq}$ is the component of $\vec{F}$ corresponding to the variable $V(p{\downarrow}q)$. \end{refproposition} \begin{proof} Denote by $\vec{E}$ the vector of expected termination times, i.e., the unique solution of $\mathcal{L}'$, i.e., $\vec{E} = (I-H)^{-1} \boldsymbol{1}$. Recall that all components of $\vec{E}$ are finite. We will apply Proposition~\ref{prop:error} using the following assignments: $C=I-H, C+{\mathcal{E}} =I-G, \vec{c}=\vec{1}, \vec{W}^*=\vec{E}, \vec{W}^*_p=\vec{F}$. To find a suitable~$u$, we need to find a bound on $\norm{I-H}$. 
By comparing $\mathcal{L}'$ with~\eqref{eq:termination-probabilities} it follows that $\norm{H \boldsymbol{1}} \le 2$ and hence \begin{equation} \label{eq:norm-I-H} \norm{I - H} \quad \le \quad 1 + \norm{H} \quad = \quad 1 + \norm{H \boldsymbol{1}} \quad \le \quad 3 \ =: \ u \,. \end{equation} Further, we set $v := b$, so we need to show $\norm{(I-H)^{-1}} \le b$. By our assumption, $\norm{\vec{E}} \ \leq\ b$. Recall that $\vec{E} = (I-H)^{-1} \boldsymbol{1}$, so if $(I-H)^{-1}$ is nonnegative, then $\norm{(I-H)^{-1}} = \norm{(I-H)^{-1} \boldsymbol{1}} = \norm{\vec{E}} \le b$, hence it remains to show that $(I-H)^{-1}$ is nonnegative. To see this, note that $\vec{E}$ is the (unique) fixed point of a linear function $\mathcal{F}$ which to every $\vec{V}$ assigns $H\cdot \vec{V}+\vec{1}$. This function is continuous and monotone, so by Kleene's theorem we get that $\vec{E}=\sup_{i\in \mathbb{N}} \mathcal{F}^i(\vec{0}) = \sum_{i=0}^\infty H^{i}\boldsymbol{1}$. Recall that $\vec{E}$ is finite, so the matrix series $H^* := \sum_{i=0}^\infty H^{i}$ converges and thus equals $(I-H)^{-1}$. Hence $(I-H)^{-1} = H^*$, which is nonnegative as $H$ is nonnegative. Now we are ready to apply Proposition~\ref{prop:error}. Since $\norm{G - H} \le \varepsilon / (12\cdot b^2) < 1/v$, the perturbed system $\vec{V} = G \cdot \vec{V} + \boldsymbol{1}$ has a unique solution $\vec{F}$ as desired.
By applying the second part of Proposition~\ref{prop:error} we get \begin{equation} \label{eq:prop-error-application} \frac{\norm{\vec{E} - \vec{F}}}{\norm{\vec{E}}} \ \le \ \delta \cdot 12 \cdot b \qquad \text{for $\norm{G - H} \le \delta \le 1 / (12 \cdot b)$.} \end{equation} Hence, \begin{align*} |E(p{\downarrow} q) - \vec{F}_{pq}| & \le \norm{\vec{E} - \vec{F}} && \text{(by the definition of the norm)} \\ & \le b \cdot \frac{\norm{\vec{E} - \vec{F}}}{\norm{\vec{E}}} && \text{(by $\norm{\vec{E}}\leq b$)} \\ & \le b \cdot \delta \cdot 12\cdot b && \text{(by~\eqref{eq:prop-error-application})} \\ & = \varepsilon && \text{(by the definition of~$\delta$).} \end{align*} \qed \end{proof} \begin{refproposition}{prop:exp-time-bound-special} \stmtpropexptimeboundspecial \end{refproposition} \begin{proof} The proof follows directly from Proposition~\ref{prop:grand-bound}. \qed \end{proof} \section{Experimental results, future work} \label{sec-concl} We have implemented a prototype tool in the form of a Maple worksheet\footnote{Available at {\scriptsize \texttt{http://www.comlab.ox.ac.uk/people/stefan.kiefer/pOC.mws}}.}, which allows one to compute the termination probabilities of pOC, as well as the conditional expected termination times. Our tool employs Newton's method to approximate the termination probabilities within a sufficient accuracy so that the expected termination time is computed with absolute error (at most) one by solving the linear equation system from Section~\ref{subsec:fintermtime}. We applied our tool to the pOC from Fig.~\ref{fig-and-or-model} for various values of the parameters. Fig.~\ref{fig:numbers} shows the results. We also show the associated termination probabilities, rounded to three digits. We write $[a{\downarrow}0]$ etc.\ to abbreviate $[(\textit{and,init}){\downarrow}(\textit{or,return,0})]$ etc., and $[a{\downarrow}]$ for $[a{\downarrow}0] + [a{\downarrow}1]$.
\begin{figure}[t] \begin{center} \begin{tabular}{l@{\quad}|@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}r} & $[a{\downarrow}]$ & $[a{\downarrow}0]$ & $[a{\downarrow}1]$ & $E[a{\downarrow}0]$ & $E[a{\downarrow}1]$ \\ \hline $z = 0.5, y = 0.4, x_a = 0.2, x_o = 0.2$ & 0.800 & 0.500 & 0.300 & 11.000 & 7.667 \\ $z = 0.5, y = 0.4, x_a = 0.2, x_o = 0.4$ & 0.967 & 0.667 & 0.300 & 104.750 & 38.917 \\ $z = 0.5, y = 0.4, x_a = 0.2, x_o = 0.6$ & 1.000 & 0.720 & 0.280 & 20.368 & 5.489 \\ $z = 0.5, y = 0.4, x_a = 0.2, x_o = 0.8$ & 1.000 & 0.732 & 0.268 & 10.778 & 2.758 \\ \hline $z = 0.5, y = 0.5, x_a = 0.1, x_o = 0.1$ & 0.861 & 0.556 & 0.306 & 11.400 & 5.509 \\ $z = 0.5, y = 0.5, x_a = 0.2, x_o = 0.1$ & 0.931 & 0.556 & 0.375 & 23.133 & 20.644 \\ $z = 0.5, y = 0.5, x_a = 0.3, x_o = 0.1$ & 1.000 & 0.546 & 0.454 & 83.199 & 111.801 \\ $z = 0.5, y = 0.5, x_a = 0.4, x_o = 0.1$ & 1.000 & 0.507 & 0.493 & 12.959 & 21.555 \\ \hline $z = 0.2, y = 0.4, x_a = 0.2, x_o = 0.2$ & 0.810 & 0.696 & 0.115 & 7.827 & 6.266 \\ $z = 0.3, y = 0.4, x_a = 0.2, x_o = 0.2$ & 0.811 & 0.636 & 0.175 & 8.928 & 6.783 \\ $z = 0.4, y = 0.4, x_a = 0.2, x_o = 0.2$ & 0.808 & 0.571 & 0.236 & 10.005 & 7.258 \\ $z = 0.5, y = 0.4, x_a = 0.2, x_o = 0.2$ & 0.800 & 0.500 & 0.300 & 11.000 & 7.667 \\ \end{tabular} \end{center} \caption{Quantities of the pOC from Fig.~\ref{fig-and-or-model}} \label{fig:numbers} \end{figure} We believe that other interesting quantities and numerical characteristics of pOC, related to both finite paths and infinite runs, can also be efficiently approximated using the methods developed in this paper. An efficient implementation of the associated algorithms would result in a verification tool capable of analyzing an interesting class of infinite-state stochastic programs, which is beyond the scope of currently available tools limited to finite-state systems only. 
\section{Definitions} \label{sec-defs} \noindent We use $\mathbb{Z}$, $\mathbb{N}$, $\mathbb{N}_0$, $\mathbb{Q}$, and $\mathbb{R}$ to denote the set of all integers, positive integers, non-negative integers, rational numbers, and real numbers, respectively. Let $\delta > 0$, $x \in \mathbb{Q}$, and $y \in \mathbb{R}$. We say that $x$ approximates $y$ up to a relative error $\delta$, if either \mbox{$y \neq 0$} and \mbox{$|x-y|/|y| \leq \delta$}, or $x = y = 0$. Further, we say that $x$ approximates $y$ up to an absolute error $\delta$ if $|x-y| \leq \delta$. We use standard notation for intervals, e.g., $(0,1]$ denotes \mbox{$\{x \in \mathbb{R} \mid 0 < x \leq 1 \}$}. Given a finite set~$Q$, we regard elements of~$\mathbb{R}^Q$ as vectors over~$Q$. We use boldface symbols like $\vu, \boldsymbol{v}$ for vectors. In particular we write~$\boldsymbol{1}$ for the vector whose entries are all~$1$. Similarly, matrices are elements of~$\mathbb{R}^{Q \times Q}$. Let $\mathcal{V} = (V,\tran{})$, where $V$ is a non-empty set of vertices and ${\tran{}} \subseteq V \times V$ a \emph{total} relation (i.e., for every $v \in V$ there is some $u \in V$ such that $v \tran{} u$). The reflexive and transitive closure of $\tran{}$ is denoted by $\tran{}^*$. A \emph{finite path} in $\mathcal{V}$ of \emph{length} $k \geq 0$ is a finite sequence of vertices $v_0,\ldots,v_k$, where $v_i \tran{} v_{i+1}$ for all $0 \leq i <k$. The length of a finite path $w$ is denoted by $\mathit{length}(w)$. A \emph{run} in $\mathcal{V}$ is an infinite sequence $w$ of vertices such that every finite prefix of $w$ is a finite path in $\mathcal{V}$. The individual vertices of $w$ are denoted by $w(0),w(1),\ldots$ The sets of all finite paths and all runs in $\mathcal{V}$ are denoted by $\mathit{FPath}_{\mathcal{V}}$ and $\mathit{Run}_{\mathcal{V}}$, respectively. 
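The notions of relative and absolute approximation defined earlier in this section translate into small predicates; a minimal sketch:

```python
# Minimal predicates mirroring the two notions of approximation.
def approx_rel(x, y, delta):
    # x approximates y up to relative error delta
    if y == 0:
        return x == 0
    return abs(x - y) / abs(y) <= delta

def approx_abs(x, y, delta):
    # x approximates y up to absolute error delta
    return abs(x - y) <= delta

assert approx_rel(0.99, 1.0, 0.02)
assert not approx_rel(0.99, 1.0, 0.005)
assert approx_abs(0.99, 1.0, 0.02)
assert approx_rel(0, 0, 0.5) and not approx_rel(0.1, 0, 0.5)
```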
The sets of all finite paths and all runs in $\mathcal{V}$ that start with a given finite path $w$ are denoted by $\mathit{FPath}_{\mathcal{V}}(w)$ and $\mathit{Run}_{\mathcal{V}}(w)$, respectively. A \emph{bottom strongly connected component (BSCC)} of $\mathcal{V}$ is a subset $B \subseteq V$ such that for all $v,u \in B$ we have that $v \tran{}^* u$, and whenever $v \tran{} u'$ for some $u' \in V$, then $u' \in B$. We assume familiarity with basic notions of probability theory, e.g., \emph{probability space}, \emph{random variable}, or the \emph{expected value}. As usual, a \emph{probability distribution} over a finite or countably infinite set $X$ is a function $f : X \rightarrow [0,1]$ such that \mbox{$\sum_{x \in X} f(x) = 1$}. We call $f$ \emph{positive} if $f(x) > 0$ for every $x \in X$, and \emph{rational} if $f(x) \in \mathbb{Q}$ for every $x \in X$. \begin{definition} A \emph{Markov chain} is a triple \mbox{$\mathcal{M} = (S,\tran{},\mathit{Prob})$} where $S$ is a finite or countably infinite set of \emph{states}, ${\tran{}} \subseteq S \times S$ is a total \emph{transition relation}, and $\mathit{Prob}$ is a function that assigns to each state $s \in S$ a positive probability distribution over the outgoing transitions of~$s$. As usual, we write $s \tran{x} t$ when $s \tran{} t$ and $x$ is the probability of $s \tran{} t$. \end{definition} A Markov chain $\mathcal{M}$ can be also represented by its \emph{transition matrix} $M \in [0,1]^{S{\times}S}$, where $M_{s,t} = 0$ if $s \not\rightarrow t$, and $M_{s,t} = x$ if $s \tran{x} t$. 
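A Markov chain and its transition matrix can be represented directly; a minimal sketch (hypothetical three-state chain) checking that $\mathit{Prob}$ induces a stochastic matrix:

```python
# Sketch: a small (hypothetical) Markov chain given by its transition
# matrix M, with M[s][t] = x iff s -x-> t, and M[s][t] = 0 otherwise.
states = ["s", "t", "u"]
M = {
    "s": {"s": 0.0, "t": 0.5, "u": 0.5},
    "t": {"s": 1.0, "t": 0.0, "u": 0.0},
    "u": {"s": 0.0, "t": 0.0, "u": 1.0},   # u is absorbing
}

for s in states:
    # Prob assigns a positive distribution over the outgoing transitions,
    # so each row of M sums to one ...
    assert abs(sum(M[s][t] for t in states) - 1.0) < 1e-12
    # ... and totality gives at least one outgoing transition per state
    assert any(M[s][t] > 0 for t in states)
```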
To every $s \in S$ we associate the probability space $(\mathit{Run}_{\mathcal{M}}(s),\mathcal{F},\mathcal{P})$ of runs starting at $s$, where $\mathcal{F}$ is the \mbox{$\sigma$-field} generated by all \emph{basic cylinders}, $\mathit{Run}_{\mathcal{M}}(w)$, where $w$ is a finite path starting at~$s$, and $\mathcal{P}: \mathcal{F} \rightarrow [0,1]$ is the unique probability measure such that $\mathcal{P}(\mathit{Run}_{\mathcal{M}}(w)) = \prod_{i{=}1}^{\mathit{length}(w)} x_i$ where $w(i{-}1) \tran{x_i} w(i)$ for every $1 \leq i \leq \mathit{length}(w)$. If $\mathit{length}(w) = 0$, we put $\mathcal{P}(\mathit{Run}_{\mathcal{M}}(w)) = 1$. \begin{definition} \label{def-pOC} A \emph{probabilistic one-counter automaton (pOC)} is a tuple, $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$, where \begin{itemize} \item $Q$ is a finite set of \emph{states}, \item $\delta^{>0} \subseteq Q \times \{-1,0,1\} \times Q$ and $\delta^{=0} \subseteq Q \times \{0,1\} \times Q$ are the sets of \emph{positive} and \emph{zero rules} such that each $p \in Q$ has an outgoing positive rule and an outgoing zero rule; \item $P^{>0}$ and $P^{=0}$ are \emph{probability assignments}: both assign to each $p \in Q$, a positive rational probability distribution over the outgoing rules in $\delta^{>0}$ and $\delta^{=0}$, respectively, of~$p$. \end{itemize} \end{definition} \noindent In the following, we often write $p \zrule{x,c} q$ to denote that $(p,c,q) \in \delta^{=0}$ and $P^{=0}(p,c,q) = x$, and similarly $p \prule{x,c} q$ to denote that $(p,c,q) \in \delta^{>0}$ and $P^{>0}(p,c,q) = x$. The size of $\mathscr{A}$, denoted by $|\mathscr{A}|$, is the length of the string which represents~$\mathscr{A}$, where the probabilities of rules are written in binary. A \emph{configuration} of $\mathscr{A}$ is an element of $Q \times \Nset_0$, written as $p(i)$. 
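The well-formedness conditions of the definition are easy to check on a concrete representation; a sketch with hypothetical rules:

```python
# Sketch (hypothetical two-state pOC): rules map each control state to a
# list of (probability, counter change, target state).
pos_rules = {                      # delta^{>0} with P^{>0}
    "p": [(0.5, -1, "p"), (0.5, 1, "q")],
    "q": [(1.0, 0, "p")],
}
zero_rules = {                     # delta^{=0} with P^{=0}
    "p": [(1.0, 1, "p")],
    "q": [(0.7, 0, "q"), (0.3, 1, "p")],
}

# well-formedness: counter changes in {-1,0,1} (no decrement at zero),
# every state has outgoing rules, probabilities form positive distributions
for rules, allowed in ((pos_rules, {-1, 0, 1}), (zero_rules, {0, 1})):
    for state, outgoing in rules.items():
        assert outgoing
        assert all(x > 0 and ch in allowed for x, ch, _ in outgoing)
        assert abs(sum(x for x, _, _ in outgoing) - 1.0) < 1e-12
```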
To $\mathscr{A}$ we associate an infinite-state Markov chain $\mathcal{M}_\mathscr{A}$ whose states are the configurations of $\mathscr{A}$, and for all $p,q \in Q$, $i\in \mathbb{N}$, and $c \in \Nset_0$ we have that $p(0) \tran{x} q(c)$ iff $p \zrule{x,c} q$, and $p(i) \tran{x} q(c)$ iff $p \prule{x,c{-}i} q$. For all $p,q \in Q$, let \begin{itemize} \item $\mathit{Run}_\mathscr{A}(p{\downarrow}q)$ be the set of all runs in $\mathcal{M}_\mathscr{A}$ initiated in $p(1)$ that visit $q(0)$ and the counter stays positive in all configurations preceding this visit; \item $\mathit{Run}_\mathscr{A}(p{\uparrow})$ be the set of all runs in $\mathcal{M}_\mathscr{A}$ initiated in $p(1)$ where the counter never reaches zero. \end{itemize} We omit the ``$\mathscr{A}$'' in $\mathit{Run}_\mathscr{A}(p{\downarrow}q)$ and $\mathit{Run}_\mathscr{A}(p{\uparrow})$ when it is clear from the context, and we use $[p{\downarrow}q]$ and $[p{\uparrow}]$ to denote the probability of $\mathit{Run}(p{\downarrow}q)$ and $\mathit{Run}(p{\uparrow})$, respectively. Observe that $[p{\uparrow}] = 1 - \sum_{q \in Q} [p{\downarrow}q]$ for every $p \in Q$. At various places in this paper we rely on the following proposition proven in~\cite{EWY:one-counter} (recall that we adopt the unit-cost rational arithmetic RAM model of computation): \begin{proposition}\label{prop:termprobs} \label{prop-termination} Let $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$ be a pOC, and $p,q \in Q$. \begin{itemize} \item The problem whether $[p{\downarrow}q] >0$ is decidable in polynomial time. \item If $[p{\downarrow}q] >0$, then $[p{\downarrow}q] \geq x_{\min}^{|Q|^3}$, where $x_{\min}$ is the least (positive) probability used in the rules of~$\mathscr{A}$. \item The probability $[p{\downarrow}q]$ can be approximated up to an arbitrarily small relative error $\varepsilon > 0$ in a time polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$. 
\end{itemize} \end{proposition} Due to Proposition~\ref{prop-termination}, the set $T^{>0}$ of all pairs $(p,q)\in Q\times Q$ satisfying $[p{\downarrow}q]>0$ is computable in polynomial time. \section{Expected Termination Time} \label{sec-etime} In this section we give an efficient algorithm which approximates the expected termination time in pOC up to an arbitrarily small relative (or even absolute) error $\varepsilon > 0$. For the rest of this section, we fix a pOC $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$. For all \mbox{$p,q \in Q$}, let $R_{p{\downarrow}q} : \mathit{Run}(p(1)) \rightarrow \Nset_0$ be a random variable defined as follows: \begin{eqnarray*} R_{p{\downarrow}q}(w) & = & \begin{cases} k & \mbox{if } w \in \mathit{Run}(p{\downarrow}q) \mbox{ and $k$ is the least index such that $w(k) = q(0)$;}\\ 0 & \mbox{otherwise.} \end{cases}\\[.5ex] \end{eqnarray*} If $(p,q)\in T^{>0}$, we use $E(p{\downarrow}q)$ to denote the conditional expectation $\mathbb{E}[R_{p{\downarrow}q} \mid \mathit{Run}(p{\downarrow}q)]$. Note that $E(p{\downarrow}q)$ can be finite even if $[p{\downarrow}q] < 1$. The first problem we have to deal with is that the expectation $E(p{\downarrow}q)$ can be infinite, as illustrated by the following example. \begin{example} Consider a simple pOC with only one control state~$p$ and two positive rules $(p,-1,p)$ and $(p,1,p)$ that are both assigned the probability $1/2$. Then $[p{\downarrow}p] =1$, and due to results of \cite{EKM:prob-PDA-expectations}, $E(p{\downarrow}p)$ is the least solution (in $\mathbb{R}^{+} \cup \{\infty\}$) of the equation $x = 1/2 + 1/2 (1+2x)$, which is $\infty$. \end{example} We proceed as follows. First, we show that the problem whether $E(p{\downarrow}q) = \infty$ is decidable in polynomial time (Section~\ref{subsec:inftermtime}). 
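The divergence in the example above is visible in the Kleene iteration for the least solution: each step of $x = 1/2 + 1/2(1+2x)$ increases the value by exactly one, so no finite fixed point is approached. A minimal sketch:

```python
# Kleene iteration for the least solution of  x = 1/2 + 1/2*(1 + 2x):
# every step adds exactly one, so the iterates diverge and the least
# solution in R+ u {infinity} is infinity.
x = 0.0
for k in range(1, 101):
    x = 0.5 + 0.5 * (1.0 + 2.0 * x)    # simplifies to x + 1
    assert x == float(k)               # after k steps the value is exactly k
assert x == 100.0
```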
Then, we eliminate all infinite expectations, and show how to approximate the finite values of the remaining $E(p{\downarrow}q)$ up to a given absolute (and hence also relative) error $\varepsilon > 0$ efficiently (Section~\ref{subsec:fintermtime}). \subsection{Finiteness of the expected termination time} \label{subsec:inftermtime} Our aim is to prove the following: \begin{theorem} \label{thm-exp-infinite} Let $(p,q) \in T^{>0}$. The problem whether $E(p{\downarrow}q)$ is finite is decidable in polynomial time. \end{theorem} Theorem~\ref{thm-exp-infinite} is proven by analysing the underlying finite-state Markov chain~$\mathcal{X}$ of the considered pOC~$\mathscr{A}$. The transition matrix $A \in [0,1]^{Q \times Q}$ of $\mathcal{X}$ is given by \[ A_{p,q} = \sum_{(p,c,q) \in \delta^{>0}} P^{>0}(p,c,q). \] We start by assuming that $\mathcal{X}$ is strongly connected (i.e. that for all $p,q\in Q$ there is a path from $p$ to $q$ in $\mathcal{X}$). Later we show how to generalize our results to an arbitrary $\mathcal{X}$. \bigskip \noindent{\bf Strongly connected $\mathcal{X}$:} Let $\boldsymbol{\alpha} \in (0,1]^Q$ be the \emph{invariant distribution} of~$\mathcal{X}$, i.e., the unique (row) vector satisfying $\boldsymbol{\alpha} A = \boldsymbol{\alpha}$ and $\boldsymbol{\alpha} \boldsymbol{1} = 1$ (see, e.g., \cite[Theorem 5.1.2]{KS60}). Further, we define the (column) vector $\boldsymbol{s} \in \mathbb{R}^Q$ of \emph{expected counter changes} by \[ \boldsymbol{s}_p = \sum_{(p,c,q) \in \delta^{>0}} P^{>0}(p,c,q) \cdot c \] and the \emph{trend} $t \in \mathbb{R}$ of $\mathcal{X}$ by $t = \boldsymbol{\alpha} \boldsymbol{s}$. Note that~$t$ is easily computable in polynomial time. Now consider some $E(p{\downarrow}q)$, where $(p,q) \in T^{>0}$. We show the following: \begin{itemize} \item[(A)] If $t \neq 0$, then $E(p{\downarrow}q)$ is finite. 
\item[(B)] If $t = 0$, then $E(p{\downarrow}q) = \infty$ iff the set $\mathit{Pre}^*(q(0)) \cap \mathit{Post}^*(p(1))$ is infinite, where \begin{itemize} \item $\mathit{Pre}^*(q(0))$ consists of all $r(k)$ that can reach $q(0)$ along a run $w$ in $\mathcal{M}_\mathscr{A}$ such that the counter stays positive in all configurations preceding the visit to $q(0)$; \item $\mathit{Post}^*(p(1))$ consists of all $r(k)$ that can be reached from $p(1)$ along a run $w$ in $\mathcal{M}_\mathscr{A}$ where the counter stays positive in all configurations preceding the visit to $r(k)$. \end{itemize} \end{itemize} Note that the conditions of Claims~(A) and~(B) are easy to verify in polynomial time. (Due to \cite{EHRS:MC-PDA}, there are finite-state automata constructible in polynomial time recognizing the sets $\mathit{Pre}^*(q(0))$ and $\mathit{Post}^*(p(1))$. Hence, we can efficiently compute a finite-state automaton $\mathcal{F}$ recognizing the set $\mathit{Pre}^*(q(0)) \cap \mathit{Post}^*(p(1))$ and check whether the language accepted by $\mathcal{F}$ is infinite.) Thus, if $\mathcal{X}$ is strongly connected and $(p,q)\in T^{>0}$, we can decide in polynomial time whether $E(p{\downarrow}q)$ is finite. It remains to prove Claims~(A) and~(B). This is achieved by employing a generic observation which connects the study of pOC to martingale theory. Recall that a stochastic process $\ms{0},\ms{1},\dots$ is a martingale if, for all $i \in \mathbb{N}$, $\mathbb{E}(|\ms{i}|) < \infty$, and \mbox{$\mathbb{E}(\ms{i+1} \mid \ms{1},\dots,\ms{i}) = \ms{i}$} almost surely. Let us fix some initial configuration \mbox{$r(c) \in Q \times \mathbb{N}$}. Our aim is to construct a suitable martingale over $\mathit{Run}(r(c))$. Let $\ps{i}$ and $\cs{i}$ be random variables which to every run $w \in \mathit{Run}(r(c))$ assign the control state and the counter value of the configuration $w(i)$, respectively. 
Note that if the vector~$\boldsymbol{s}$ of expected counter changes is constant, i.e., $\boldsymbol{s} = \boldsymbol{1} \cdot t$ where $t$ is the trend of $\mathcal{X}$, then we can define a martingale $\ms{0},\ms{1},\dots$ simply by \[ \ms{i} = \begin{cases} \cs{i} \ -\ i \cdot t & \text{if $\cs{j} \ge 1$ for all $0 \leq j < i$;}\\ \ms{i-1} & \text{otherwise.} \end{cases} \] Since~$\boldsymbol{s}$ is generally not constant, we might try to ``compensate'' the difference among the individual control states by a suitable vector $\boldsymbol{v} \in \mathbb{R}^{Q}$. The next proposition shows that this is indeed possible. \begin{proposition} \label{prop-martingale} There is a vector $\boldsymbol{v} \in \mathbb{R}^{Q}$ such that the stochastic process $\ms{0},\ms{1},\dots$ defined by \[ \ms{i} = \begin{cases} \cs{i} \ +\ \boldsymbol{v}_{\ps{i}} \ -\ i \cdot t & \text{if $\cs{j} \ge 1$ for all $0 \leq j < i$;}\\ \ms{i-1} & \text{otherwise} \end{cases} \] is a martingale, where $t$ is the trend of $\mathcal{X}$. Moreover, the vector $\boldsymbol{v}$ satisfies $\vv_{\max}-\vv_{\min}\ \le\ 2 |Q| / x_{\min}^{|Q|}$, where $x_{\min}$ is the smallest positive transition probability in~$\mathcal{X}$, and $\vv_{\max}$ and $\vv_{\min}$ are the maximal and the minimal components of $\boldsymbol{v}$, respectively. \end{proposition} Due to Proposition~\ref{prop-martingale}, powerful results of martingale theory such as the optional stopping theorem or Azuma's inequality (see, e.g., \cite{Rosenthal:book,Williams:book}) become applicable to pOC. In this paper, we use the constructed martingale to complete the proof of Claims~(A) and~(B), and to establish the crucial \emph{divergence gap theorem} in Section~\ref{sec-LTL} (due to space constraints, we only include brief sketches of Propositions~\ref{lem:expected-term-bound-prob} and~\ref{prop-term-inf} which demonstrate the use of Azuma's inequality and the optional stopping theorem).
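For a concrete strongly connected $\mathcal{X}$, a compensating vector $\boldsymbol{v}$ can be found by solving a Poisson-type equation $(I-A)\boldsymbol{v} = \boldsymbol{s} - t\cdot\boldsymbol{1}$ (our reformulation of the zero-drift condition; $\boldsymbol{v}$ is determined only up to an additive constant). A sketch with illustrative numbers:

```python
# Sketch (illustrative 2-state chain, not from the paper): invariant
# distribution alpha, trend t = alpha.s, and a compensating vector v with
# zero one-step drift:  s_p + sum_q A[p][q]*v[q] - v[p] = t  for every p.
A = [[0.5, 0.5],
     [0.25, 0.75]]                 # transition matrix of X
s = [1.0, -1.0]                    # expected counter changes

# for two states, stationarity reduces to alpha_0*A[0][1] = alpha_1*A[1][0]
a0 = A[1][0] / (A[0][1] + A[1][0])
alpha = [a0, 1.0 - a0]
assert abs(alpha[0] * A[0][0] + alpha[1] * A[1][0] - alpha[0]) < 1e-12

t = alpha[0] * s[0] + alpha[1] * s[1]    # trend of X
assert abs(t - (-1.0 / 3.0)) < 1e-12

# v solves (I - A) v = s - t*1 up to an additive constant; pin v[0] = 0,
# then row 0 gives  -A[0][1] * v[1] = s[0] - t
v = [0.0, -(s[0] - t) / A[0][1]]
for p in range(2):
    drift = s[p] + sum(A[p][q] * v[q] for q in range(2)) - v[p]
    assert abs(drift - t) < 1e-12        # compensated process has zero drift
```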
The range of possible applications of Proposition~\ref{prop-martingale} is of course wider. \input{claimA} \input{claimB} \bigskip \noindent {\bf Non-strongly connected $\mathcal{X}$:} The general case still requires some extra care. First, realize that each BSCC $\mathscr{B}$ of $\mathcal{X}$ can be seen as a strongly connected finite-state Markov chain, and hence all notions and arguments of the previous subsection can be applied to~$\mathscr{B}$ immediately (in particular, we can compute the trend of $\mathscr{B}$ in polynomial time). We prove the following claims: \begin{itemize} \item[(C)] If $q$ does not belong to a BSCC of $\mathcal{X}$, then $E(p{\downarrow}q)$ is finite. \item[(D)] If $q$ belongs to a BSCC $\mathscr{B}$ of $\mathcal{X}$ such that the trend of $\mathscr{B}$ is different from~$0$, then $E(p{\downarrow}q)$ is finite. \item[(E)] If $q$ belongs to a BSCC $\mathscr{B}$ of $\mathcal{X}$ such that the trend of $\mathscr{B}$ is~$0$, then $E(p{\downarrow}q) = \infty$ iff the set $\mathit{Pre}^*(q(0)) \cap \mathit{Post}^*(p(1))$ is infinite. \end{itemize} Note that the conditions of Claims~(C)--(E) are verifiable in polynomial time. Intuitively, Claim~(C) is proven by observing that if $q$ does not belong to a BSCC of $\mathcal{X}$, then for all $s(\ell) \in \mathit{Post}^*(p(1))$, where $\ell \geq |Q|$, we have that $s(\ell)$ can reach a configuration outside $\mathit{Pre}^*(q(0))$ in at most $|Q|$ transitions. It follows that the probability of performing an honest path from $p(1)$ to $q(0)$ of length~$i$ decays exponentially in~$i$, and hence $E(p{\downarrow}q)$ is finite. Claim~(D) is obtained by combining the arguments of Claim~(A) together with the fact that the conditional expected number of transitions needed to reach $\mathscr{B}$ from $p(0)$, under the condition that $\mathscr{B}$ is indeed reached from $p(0)$, is finite (this is a standard result for finite-state Markov chains).
Finally, Claim~(E) follows by re-using the arguments of Claim~(B). \subsection{Efficient approximation of finite expected termination time}\label{subsec:fintermtime} Let us denote by $T^{>0}_{<\infty}$ the set of all pairs $(p,q)\in T^{>0}$ satisfying $E(p{\downarrow}q)<\infty$. Our aim is to prove the following: \begin{theorem} \label{thm-regular} For all $(p,q) \in T^{>0}_{<\infty}$, the value of $E(p{\downarrow}q)$ can be approximated up to an arbitrarily small absolute error $\varepsilon > 0$ in time polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$. \end{theorem} Note that if $y$ approximates $E(p{\downarrow}q)$ up to an absolute error $1 > \varepsilon > 0$, then $y$ approximates $E(p{\downarrow}q)$ also up to the relative error $\varepsilon$ because $E(p{\downarrow}q) \geq 1$. The proof of Theorem~\ref{thm-regular} is based on the fact that the vector of all $E(p{\downarrow}q)$, where $(p,q)\in T^{>0}_{<\infty}$, is the unique solution of a system of linear equations whose coefficients can be efficiently approximated (see below). Hence, it suffices to approximate the coefficients, solve the approximated equations, and then bound the error of the approximation using standard arguments from numerical analysis. Let us start by setting up the system of linear equations for $E(p{\downarrow}q)$. 
For all $(p,q) \in T^{>0}$, we fix a fresh variable $V(p{\downarrow}q)$, and construct the following system of linear equations, $\mathcal{L}$, where the termination probabilities are treated as constants: \begin{eqnarray*} V(p{\downarrow}q) & = & \sum_{(p,-1,q) \in \delta^{>0}} \frac{P^{>0}(p,-1,q)}{[p{\downarrow}q]} \ + \ \sum_{(p,0,t) \in \delta^{>0}} \frac{P^{>0}(p,0,t) \cdot [t{\downarrow}q]}{[p{\downarrow}q]} \cdot \bigg(1+ V(t{\downarrow}q)\bigg)\\[1ex] & + & \sum_{(p,1,t) \in \delta^{>0}} \sum_{r \in Q} \frac{P^{>0}(p,1,t) \cdot [t{\downarrow}r]\cdot[r{\downarrow}q]}{[p{\downarrow}q]} \cdot \bigg(1+ V(t{\downarrow}r)+ V(r{\downarrow}q)\bigg) \end{eqnarray*} It has been shown in \cite{EKM:prob-PDA-expectations} that the tuple of all $E(p{\downarrow}q)$, where $(p,q)\in T^{>0}$, is the least solution of~$\mathcal{L}$ in $\mathbb{R}^{+} \cup \{\infty\}$ with respect to component-wise ordering (where $\infty$ is treated according to the standard conventions). Due to Theorem~\ref{thm-exp-infinite}, we can further simplify the system~$\mathcal{L}$ by erasing the defining equations for all $V(p{\downarrow}q)$ such that $E(p{\downarrow}q) = \infty$ (note that if $E(p{\downarrow}q) < \infty$, then the defining equation for $V(p{\downarrow}q)$ in~$\mathcal{L}$ cannot contain any variable $V(r{\downarrow}t)$ such that $E(r{\downarrow}t) = \infty$). Thus, we obtain the system~$\mathcal{L}'$. It is straightforward to show that the vector of all finite $E(p{\downarrow}q)$ is the \emph{unique} solution of the system $\mathcal{L}'$ (see, e.g., Lemma~6.2.3 and Lemma~6.2.4 in~\cite{Brazdil:PhD}). If we rewrite~$\mathcal{L}'$ into a standard matrix form, we obtain a system $\vec{V} = H \cdot \vec{V} + \vec{b}$, where $H$ is a nonnegative matrix such that $I - H$ is nonsingular, $\vec{V}$ is the vector of variables in~$\mathcal{L}'$, and $\vec{b}$ is a vector. Further, we have that $\vec{b} = \boldsymbol{1}$, i.e., the constant coefficients are all~$1$.
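For a one-state pOC the system degenerates to a single equation and can be checked by hand. The following sketch (our toy example, not from the paper) uses rules $(p,-1,p)$ and $(p,1,p)$ with probabilities $0.6$ and $0.4$; then $[p{\downarrow}p] = 1$, the equation reads $V = 1 + 0.8V$, and so $E(p{\downarrow}p) = 5$, matching $1/|t|$ for the trend $t = -0.2$:

```python
# Toy one-state pOC (our example): rules (p,-1,p) and (p,1,p) with
# probabilities 0.6 and 0.4; the trend is 0.6*(-1) + 0.4*1 = -0.2.
down, up = 0.6, 0.4

# termination probability [p↓p]: least solution of  x = down + up * x**2
x = 0.0
for _ in range(500):
    x = down + up * x * x
assert abs(x - 1.0) < 1e-3            # termination is almost sure

# with [p↓p] = 1, the single equation of the system reads  V = 1 + 0.8*V
V = 0.0
for _ in range(400):
    V = down + up * (1.0 + 2.0 * V)   # Kleene iteration on the equation
assert abs(V - 5.0) < 1e-9            # E(p↓p) = 5 = 1/|trend|
```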
This follows from the following equality (see \cite{EKM:prob-PDA-PCTL,EY:RMC-SG-equations}): \begin{equation} \label{eq:termination-probabilities} \begin{aligned} \ [p{\downarrow}q]=\sum_{(p,-1,q) \in \delta^{>0}} P^{>0}(p,-1,q)\ & + \sum_{(p,0,t) \in \delta^{>0}} P^{>0}(p,0,t) \cdot [t{\downarrow}q] \\ & + \sum_{(p,1,t) \in \delta^{>0}} \sum_{r \in Q} P^{>0}(p,1,t) \cdot [t{\downarrow}r]\cdot[r{\downarrow}q] \end{aligned} \end{equation} Hence, $\mathcal{L}'$ takes the form $\vec{V} = H \cdot \vec{V} + \boldsymbol{1}$. Unfortunately, the entries of~$H$ can take irrational values and cannot be computed precisely in general. However, they can be approximated up to an arbitrarily small relative error using Proposition~\ref{prop:termprobs}. Denote by $G$ an approximated version of~$H$. We aim at bounding the error of the solution of the ``perturbed'' system $\vec{V} = G \cdot \vec{V} + \boldsymbol{1}$ in terms of the error of~$G$. To measure these errors, we use the $l_\infty$ norm of vectors and matrices, defined as follows: For a vector $\vec{V}$ we have that $\norm{\vec{V}}=\max_i |\vec{V}_i|$, and for a matrix~$M$ we have $\norm{M}=\max_i \sum_j |M_{ij}|$. Hence, $\norm{M} = \norm{M \cdot \boldsymbol{1}}$ if $M$ is nonnegative. We show the following: \begin{proposition} \label{prop-exp-approx} Let $b \ge \max\left\{E(p{\downarrow}q)\mid (p,q)\in T^{>0}_{<\infty}\right\}$. Then for each $\varepsilon$, where $0 < \varepsilon < 1$, let $\delta = \varepsilon\, / (12\cdot b^2)$. If $\norm{G-H} \le \delta$, then the perturbed system $\vec{V} = G \cdot \vec{V} + \boldsymbol{1}$ has a unique solution $\vec{F}$, and in addition, we have that \[ |E(p{\downarrow} q) - \vec{F}_{pq}| \quad \leq\quad \varepsilon \qquad \text{for all $(p,q) \in T^{>0}_{<\infty}$.} \] Here $\vec{F}_{pq}$ is the component of $\vec{F}$ corresponding to the variable $V(p{\downarrow}q)$. 
\end{proposition} The proof of Proposition~\ref{prop-exp-approx} is based on estimating the size of the condition number $\kappa = \norm{I-H} \cdot \norm{(I-H)^{-1}}$ and applying standard results of numerical analysis. The $b$ in Proposition~\ref{prop-exp-approx} can be estimated as follows: \newcommand{\stmtpropexptimeboundspecial}{ Let $x_{\min}$ denote the smallest nonzero probability in~$A$. Then we have: \[ E(p{\downarrow}q) \quad\le\quad 85000 \cdot |Q|^6 / \left( x_{\min}^{6 |Q|^3} \cdot t_{\min}^4 \right) \qquad \text{for all $(p,q) \in T^{>0}_{<\infty}$,} \] where $t_{\min} = \min\{|t| \mid \text{$t$ is a nonzero trend of a BSCC of~$\mathcal{X}$}\}$. } \begin{proposition} \label{prop:exp-time-bound-special} \stmtpropexptimeboundspecial \end{proposition} Although $b$ appears large, it is really the value of $\log b$ which matters, and that is still reasonable. Theorem~\ref{thm-regular} now follows by combining Propositions~\ref{prop:exp-time-bound-special}, \ref{prop-exp-approx} and~\ref{prop:termprobs}, because the approximated matrix~$G$ can be computed using a number of arithmetical operations which is polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$. \section{Introduction} \label{sec-intro} In this paper we aim at designing \emph{efficient} algorithms for analyzing basic properties of probabilistic programs operating on unbounded data domains that can be abstracted into a non-negative integer counter. Consider, e.g., the recursive program of Fig.~\ref{fig-and-or} which evaluates a given AND-OR tree, i.e., a tree whose root is an AND node, all descendants of AND nodes are either leaves or OR nodes, and all descendants of OR nodes are either leaves or AND nodes. Note that the program evaluates a subtree only when necessary. In general, the program may not terminate, and then we cannot say anything about its expected termination time.
Now assume that we \emph{do} have some knowledge about the actual input domain of the program, which might have been gathered empirically: \begin{itemize} \item an AND node has about $a$ descendants on average; \item an OR node has about $o$ descendants on average; \item the length of a branch is $b$ on average; \item the probability that a leaf evaluates to $1$ is $z$. \end{itemize} Further, let us assume that the actual number of descendants and the actual length of a branch are \emph{geometrically} distributed (which is a reasonably good approximation in many cases). Hence, the probability that an AND node has \emph{exactly} $n$ descendants is $(1-x_a)^{n-1} x_a$ with $x_a = \frac{1}{a}$. Under these assumptions, the behaviour of the program is well-defined in the probabilistic sense, and we may ask the following questions: \begin{itemize} \item[1)] Does the program terminate with probability one? If not, what is the termination probability? \item[2)] If we restrict ourselves to terminating runs, what is the expected termination time? (Note that this conditional expected value is defined even if our program does not terminate with probability one.) \end{itemize} These questions are not trivial, and at first glance it is not clear how to approach them. Apart from the expected termination time, which is a fundamental characteristic of terminating runs, we are also interested in the properties of \emph{non-terminating} runs, specified by linear-time logics or automata on infinite words. Here, we ask for the probability of all runs satisfying a given linear-time property. Using the results of this paper, answers to such questions can be computed \emph{efficiently} for a large class of programs, including the one of Fig.~\ref{fig-and-or}.
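The geometric assumption is easy to sanity-check: with $x_a = 1/a$, the law $P(n) = (1-x_a)^{n-1}x_a$ indeed has mean $a$. A small sketch (truncating the infinite sums):

```python
# Sanity check: the geometric law P(n) = (1 - x_a)**(n-1) * x_a on n >= 1
# sums to one and has mean a = 1/x_a (sums truncated at n = 2000).
a = 4.0
xa = 1.0 / a

total = mean = 0.0
for n in range(1, 2000):
    pn = (1.0 - xa) ** (n - 1) * xa
    total += pn
    mean += n * pn

assert abs(total - 1.0) < 1e-9     # the probabilities sum to one
assert abs(mean - a) < 1e-6        # expected number of descendants is a
```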
More precisely, the first question about the probability of termination can be answered using the existing results \cite{EWY:one-counter}; the original contributions of this paper are efficient algorithms for computing answers to the remaining questions. \begin{figure}[t] \noindent\hspace{\fill} \parbox{.45\textwidth}{\ttfamily\footnotesize \kw{procedure} AND(node)\\[.5ex] \kw{if} node is a leaf\\ \hspace*{1.5em} \kw{then} \kw{return} node.value\\ \kw{else}\\ \hspace*{1.5em} \kw{for each} successor s of node \kw{do}\\ \hspace*{1.5em}\msp \kw{if} OR(s) = 0 \kw{then} \kw{return} 0\\ \hspace*{1.5em} \kw{end for}\\ \hspace*{1.5em} \kw{return} 1\\ \kw{end if}}\hspace{\fill} \parbox{.45\textwidth}{\ttfamily\footnotesize \kw{procedure} OR(node)\\[.5ex] \kw{if} node is a leaf\\ \hspace*{1.5em} \kw{then} \kw{return} node.value\\ \kw{else}\\ \hspace*{1.5em} \kw{for each} successor s of node \kw{do}\\ \hspace*{1.5em}\msp \kw{if} AND(s) = 1 \kw{then} \kw{return} 1\\ \hspace*{1.5em} \kw{end for}\\ \hspace*{1.5em} \kw{return} 0\\ \kw{end if}}\hspace{\fill} \caption{A recursive program for evaluating AND-OR trees.} \label{fig-and-or} \end{figure} The abstract class of probabilistic programs considered in this paper corresponds to \emph{probabilistic one-counter automata (pOC)}. Informally, a pOC has finitely many control states $p,q,\ldots$ that can store global data, and a single non-negative counter that can be incremented, decremented, and tested for zero. The dynamics of a given pOC is described by finite sets of \emph{positive} and \emph{zero} rules of the form $p \prule{x,c} q$ and $p \zrule{x,c} q$, respectively, where $p,q$ are control states, $x$ is the \emph{probability} of the rule, and $c \in \{-1,0,1\}$ is the \emph{counter change} which must be non-negative in zero rules. A \emph{configuration} $p(i)$ is given by the current control state $p$ and the current counter value~$i$. 
If $i$ is positive/zero, then positive/zero rules can be applied to $p(i)$ in the natural way. Thus, every pOC determines an infinite-state Markov chain where states are the configurations and transitions are determined by the rules. As an example, consider a pOC model of the program of Fig.~\ref{fig-and-or}. We use the counter to abstract the stack of activation records. Since the procedures AND and OR alternate regularly in the stack, we keep just the current stack height in the counter, and maintain the ``type'' of the current procedure in the finite control (when we increase or decrease the counter, the ``type'' is swapped). The return values of the two procedures are also stored in the finite control. Thus, we obtain the pOC model of Fig.~\ref{fig-and-or-model} with $6$ control states and $12$ positive rules (zero rules are irrelevant and hence not shown in Fig.~\ref{fig-and-or-model}). The initial configuration is $(\textit{and,init})(1)$, and the pOC terminates either in $(\textit{or,return,0})(0)$ or $(\textit{or,return,1})(0)$, which corresponds to evaluating the input tree to $0$ and $1$, respectively. We set $x_a := 1/a$, $x_o := 1/o$ and $y := 1/b$ in order to obtain the average numbers $a, o, b$ from the beginning. \begin{figure}[t] \noindent\hspace{\fill} \parbox{.46\textwidth}{\footnotesize \textrm{/* if we have a leaf, return 0 or 1 */}\\ \hspace*{1.5em}$(\textit{and,init}) \xrightarrow{y\, z,\,-1} (\textit{or,return,1})$,\\ \hspace*{1.5em}$(\textit{and,init}) \xrightarrow{y (1-z),\,-1} (\textit{or,return,0})$\\[.5ex] \textrm{/* otherwise, call OR */}\\ \hspace*{1.5em}$(\textit{and,init}) \xrightarrow{(1-y),1} (\textit{or, init})$\\[.5ex] \textrm{/* if OR returns 1, call another OR? 
*/}\\ \hspace*{1.5em}$(\textit{and,return,1}) \xrightarrow{(1-x_a),\,1} (\textit{or,init})$\\ \hspace*{1.5em}$(\textit{and,return,1}) \xrightarrow{x_a,\,-1} (\textit{or,return,1})$\\[.5ex] \textrm{/* if OR returns 0, return 0 immediately */}\\ \hspace*{1.5em}$(\textit{and,return,0}) \xrightarrow{1,\,-1} (\textit{or,return,0})$} \hspace{\fill} \parbox{.46\textwidth}{\footnotesize \textrm{/* if we have a leaf, return 0 or 1 */}\\ \hspace*{1.5em}$(\textit{or,init}) \xrightarrow{y\,z,\,-1} (\textit{and,return,1})$,\\ \hspace*{1.5em}$(\textit{or,init}) \xrightarrow{y(1-z),\,-1} (\textit{and,return,0})$\\[.5ex] \textrm{/* otherwise, call AND */}\\ \hspace*{1.5em}$(\textit{or,init}) \xrightarrow{(1-y),1} (\textit{and,init})$\\[.5ex] \textrm{/* if AND returns 0, call another AND? */}\\ \hspace*{1.5em}$(\textit{or,return,0}) \xrightarrow{(1-x_o),\,1} (\textit{and,init})$\\ \hspace*{1.5em}$(\textit{or,return,0}) \xrightarrow{x_o,\,-1} (\textit{and,return,0})$\\[.5ex] \textrm{/* if AND returns 1, return 1 immediately */}\\ \hspace*{1.5em}$(\textit{or,return,1}) \xrightarrow{1,\,-1} (\textit{and,return,1})$} \hspace{\fill} \caption{A pOC model for the program of Fig.~\ref{fig-and-or}.} \label{fig-and-or-model} \end{figure} As we already indicated, pOC can model recursive programs operating on unbounded data structures such as trees, queues, or lists, assuming that the structure can be faithfully abstracted into a counter. Let us note that modeling general recursive programs requires more powerful formalisms such as \emph{probabilistic pushdown automata (pPDA)} or \emph{recursive Markov chains (RMC)}. However, as mentioned below, pPDA and RMC do not admit \emph{efficient} quantitative analysis for fundamental reasons. Hence, we must inevitably sacrifice a part of the modeling power of pPDA to gain efficiency in algorithmic analysis, and pOC seem to be a convenient compromise for achieving this goal. The relevance of pOC is not limited just to recursive programs.
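To make the model of Fig.~\ref{fig-and-or-model} concrete, the following sketch (ours, not the experimental tool mentioned later in the paper) simulates the pOC as a Markov chain and estimates by Monte Carlo both the probability of terminating in $(\textit{or,return,1})(0)$ and the expected termination time; the concrete values chosen for $x_a, x_o, y, z$ are illustrative, not fixed by the paper.

```python
import random

# Positive rules of the pOC of Fig. 2, written as
# state -> list of (probability, counter change, successor state).
def make_rules(x_a, x_o, y, z):
    return {
        "and_init": [(y * z, -1, "or_ret1"),
                     (y * (1 - z), -1, "or_ret0"),
                     (1 - y, +1, "or_init")],
        "and_ret1": [(1 - x_a, +1, "or_init"), (x_a, -1, "or_ret1")],
        "and_ret0": [(1.0, -1, "or_ret0")],
        "or_init":  [(y * z, -1, "and_ret1"),
                     (y * (1 - z), -1, "and_ret0"),
                     (1 - y, +1, "and_init")],
        "or_ret0":  [(1 - x_o, +1, "and_init"), (x_o, -1, "and_ret0")],
        "or_ret1":  [(1.0, -1, "and_ret1")],
    }

def run_once(rules, rng, max_steps=10**6):
    """One run from (and_init)(1); returns (final state, #steps) when the
    counter hits zero, or (None, max_steps) if the run is cut off."""
    state, counter = "and_init", 1
    for step in range(max_steps):
        if counter == 0:
            return state, step
        r, acc = rng.random(), 0.0
        for prob, change, nxt in rules[state]:
            acc += prob
            if r < acc:
                break
        state, counter = nxt, counter + change  # last rule is the fallback
    return None, max_steps

def estimate(rules, runs=20000, seed=0):
    """Monte Carlo estimates of P(result = 1) and the mean termination time."""
    rng = random.Random(seed)
    ones = steps_total = 0
    for _ in range(runs):
        final, steps = run_once(rules, rng)
        ones += (final == "or_ret1")
        steps_total += steps
    return ones / runs, steps_total / runs

rules = make_rules(x_a=0.5, x_o=0.5, y=0.6, z=0.5)
p_one, exp_time = estimate(rules)
```

Such a simulation only gives statistical estimates; the point of the paper is precisely that these quantities can instead be approximated with guaranteed relative error in polynomial time.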
As observed in \cite{EWY:one-counter}, pOC are equivalent, in a well-defined sense, to discrete-time \emph{Quasi-Birth-Death processes (QBDs)}, a well-established stochastic model that has been studied in depth since the late~60s. Thus, the applicability of pOC extends to queuing theory, performance evaluation, etc., where QBDs are considered a fundamental formalism. Very recently, games over (probabilistic) one-counter automata, also called ``energy games'', were considered in several independent works \cite{CHD:energy-games,CHDHR:energy-mean-payoff,BBEKW:OC-MDP,BBE:OC-games}. The study is motivated by optimizing the use of resources (such as energy) in modern computational devices. \textbf{Previous work.} In \cite{EKM:prob-PDA-PCTL,EY:RMC-SG-equations}, it has been shown that the vector of termination probabilities in pPDA and RMC is the least solution of an effectively constructible system of quadratic equations. The termination probabilities may take irrational values, but can be effectively approximated up to an arbitrarily small absolute error $\varepsilon >0$ in polynomial space by employing the decision procedure for the existential fragment of Tarski algebra (i.e., the first-order theory of the reals) \cite{Canny:Tarski-exist-PSPACE}. Due to the results of \cite{EY:RMC-SG-equations}, it is possible to approximate termination probabilities in pPDA and RMC ``iteratively'' by using the decomposed Newton's method. However, this approach may need exponentially many iterations of the method before it starts to produce one bit of precision per iteration \cite{KLE07:stoc}. Further, any non-trivial approximation of the non-termination probabilities is at least as hard as the \textsc{SquareRootSum} problem~\cite{EY:RMC-SG-equations}, whose exact complexity is a long-standing open question in exact numerical computations (the best known upper bound for \textsc{SquareRootSum} is PSPACE).
Computing termination probabilities in pPDA and RMC up to a given \emph{relative} error $\varepsilon > 0$, which is more relevant from the point of view of this paper, is \emph{provably} infeasible because the termination probabilities can be doubly-exponentially small in the size of a given pPDA or RMC~\cite{EY:RMC-SG-equations}. The expected termination time and the expected reward per transition in pPDA and RMC have been studied in \cite{EKM:prob-PDA-expectations}. In particular, it has been shown that the tuple of expected termination times is the least solution of an effectively constructible system of linear equations, where the (products of) termination probabilities are used as coefficients. Hence, the equational system can be represented only symbolically, and the corresponding approximation algorithm again employs the decision procedure for Tarski algebra. There are also other results for pPDA and RMC, which concern model-checking problems for linear-time \cite{EY:RMC-LTL-complexity,EY:RMC-LTL-QUEST} and branching-time \cite{BKS:pPDA-temporal} logics, long-run average properties \cite{BEK:prob-PDA-predictions}, discounted properties of runs \cite{BBHK:pPDA-discounted}, etc. \textbf{Our contribution.} In this paper, we build on the previously established results for pPDA and RMC, and on the recent results of \cite{EWY:one-counter} where it is shown that the decomposed Newton method of \cite{KLE:Newton-STOC} can be used to compute termination probabilities in pOC up to a given \emph{relative} error $\varepsilon>0$ in time which is \emph{polynomial} in the size of pOC and $\log(1/\varepsilon)$, assuming the unit-cost rational arithmetic RAM (i.e., Blum-Shub-Smale) model of computation. Adopting the same model, we show the following: \begin{itemize} \item[1.] The expected termination time in a pOC $\mathscr{A}$ is computable up to an arbitrarily small relative error $\varepsilon > 0$ in time polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$.
Actually, we can even compute the expected termination time up to an arbitrarily small \emph{absolute} error, which is a better estimate because the expected termination time is always at least~$1$. \item[2.] The probability of all runs in a pOC $\mathscr{A}$ satisfying an $\omega$-regular property encoded by a deterministic Rabin automaton $\mathcal{R}$ is computable up to an arbitrarily small relative error $\varepsilon > 0$ in time polynomial in $|\mathscr{A}|$, $|\mathcal{R}|$, and $\log(1/\varepsilon)$. \end{itemize} The crucial step towards obtaining these results is the construction of a suitable \emph{martingale} for a given pOC, which allows us to apply powerful results of martingale theory (such as the optional stopping theorem or Azuma's inequality, see, e.g., \cite{Rosenthal:book,Williams:book}) to the quantitative analysis of pOC. In particular, we use this martingale to establish the crucial \emph{divergence gap theorem} in Section~\ref{sec-LTL}, which bounds a positive divergence probability in pOC away from~$0$. The divergence gap theorem is indispensable in analysing properties of non-terminating runs, and, together with the constructed martingale, it provides generic tools for designing efficient approximation algorithms for other interesting quantitative properties of~pOC. Although our algorithms have polynomial worst-case complexity, the obtained bounds look complicated and it is not immediately clear whether the algorithms are practically usable. Therefore, we created a simple experimental implementation which computes the expected termination time for pOC, and used this tool to analyse the pOC model of Fig.~\ref{fig-and-or-model}. The details are given in Section~\ref{sec-concl}.
\section{Quantitative Model-Checking of $\omega$-regular Properties} \label{sec-LTL} In this section, we show that for every $\omega$-regular property encoded by a deterministic Rabin automaton, the probability of all runs in a given pOC that satisfy the property can be approximated up to an arbitrarily small relative error $\varepsilon>0$ in polynomial time. This is achieved by designing and analyzing a new quantitative model-checking algorithm for pOC and $\omega$-regular properties, which is \emph{not} based on techniques developed for pPDA and RMC in \cite{EKM:prob-PDA-PCTL,EY:RMC-LTL-complexity,EY:RMC-LTL-QUEST}. Recall that a deterministic Rabin automaton (DRA) over a finite alphabet $\Sigma$ is a deterministic finite-state automaton $\mathcal{R}$ with total transition function and \emph{Rabin acceptance condition} $(E_1,F_1),\ldots,(E_k,F_k)$, where $k \in \mathbb{N}$, and all $E_i$, $F_i$ are subsets of control states of~$\mathcal{R}$. For a given infinite word $w$ over $\Sigma$, let $\inf(w)$ be the set of all control states visited infinitely often along the unique run of $\mathcal{R}$ on $w$. The word $w$ is accepted by $\mathcal{R}$ if there is $i \leq k$ such that $\inf(w) \cap E_i = \emptyset$ and $\inf(w) \cap F_i \neq \emptyset$. Let $\Sigma$ be a finite alphabet, $\mathcal{R}$ a DRA over $\Sigma$, and $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$ a pOC. A \emph{valuation} is a function $\nu$ which to every configuration $p(i)$ of $\mathscr{A}$ assigns a unique letter of~$\Sigma$. For simplicity, we assume that $\nu(p(i))$ depends only on the control state $p$ and the information whether $i \geq 1$. Intuitively, the letters of $\Sigma$ correspond to collections of predicates that are valid in a given configuration of $\mathscr{A}$. Thus, every run $w \in \mathit{Run}_{\mathscr{A}}(p(i))$ determines a unique infinite word $\nu(w)$ over $\Sigma$ which is either accepted by $\mathcal{R}$ or not. 
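Once the set $\inf(w)$ is known, checking the Rabin condition is a one-liner; a minimal sketch (the function name and the example states are ours, for illustration only):

```python
def rabin_accepts(inf_states, pairs):
    """Rabin acceptance: a word is accepted iff for some pair (E_i, F_i),
    the set of states visited infinitely often avoids E_i and meets F_i."""
    return any(not (inf_states & E) and bool(inf_states & F) for E, F in pairs)
```

For instance, with the single pair $(\{e\},\{f\})$, a run whose infinite-visit set is $\{f,q\}$ is accepted, while any run visiting $e$ infinitely often is rejected.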
The main result of this section is the following theorem: \begin{theorem} \label{thm-omega-regular} For every $p \in Q$, the probability of all $w \in \mathit{Run}_\mathscr{A}(p(0))$ such that $\nu(w)$ is accepted by $\mathcal{R}$ can be approximated up to an arbitrarily small relative error $\varepsilon > 0$ in time polynomial in $|\mathscr{A}|$, $|\mathcal{R}|$, and $\log(1/\varepsilon)$. \end{theorem} Our proof of Theorem~\ref{thm-omega-regular} consists of three steps: \begin{itemize} \item[1.] We show that the problem of our interest is equivalent to the problem of computing the probability of all accepting runs in a given pOC $\mathscr{A}$ with Rabin acceptance condition. \item[2.] We introduce a finite-state Markov chain $\mathcal{G}$ (with possibly irrational transition probabilities) such that the probability of all accepting runs in $\mathcal{M}_\mathscr{A}$ is equal to the probability of reaching a ``good'' BSCC in $\mathcal{G}$. \item[3.] We show how to compute the probability of reaching a ``good'' BSCC in $\mathcal{G}$ with relative error at most~$\varepsilon$ in time polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$. \end{itemize} Let us note that Steps~1 and~2 are relatively simple, but Step~3 requires several insights. In particular, we cannot solve Step~3 without bounding a positive non-termination probability in pOC (i.e., a positive probability of the form $[p{\uparrow}]$) away from zero. This is achieved in our ``divergence gap theorem'' (i.e., Theorem~\ref{thm-gap}), which is based on applying Azuma's inequality to the martingale constructed in Section~\ref{sec-etime}. Now we elaborate the three steps in more detail. 
\smallskip \noindent \textbf{Step~1.} For the rest of this section, we fix a pOC $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$, and a \emph{Rabin acceptance condition} $(\mathcal{E}_1,\mathcal{F}_1),\ldots,(\mathcal{E}_k,\mathcal{F}_k)$, where $k \in \mathbb{N}$ and $\mathcal{E}_i,\mathcal{F}_i \subseteq Q$ for all \mbox{$1 \leq i \leq k$}. For every run $w \in \mathit{Run}_{\mathscr{A}}$, let $\inf(w)$ be the set of all $p \in Q$ visited infinitely often along~$w$. We use $\mathit{Run}_\mathscr{A}(p(0),\mathit{acc})$ to denote the set of all \emph{accepting runs} $w \in \mathit{Run}_{\mathscr{A}}(p(0))$ such that $\inf(w) \cap \mathcal{E}_i = \emptyset$ and $\inf(w) \cap \mathcal{F}_i \neq \emptyset$ for some $i \leq k$. Sometimes we also write $\mathit{Run}_\mathscr{A}(p(0),\mathit{rej})$ to denote the set $\mathit{Run}_{\mathscr{A}}(p(0)) \smallsetminus \mathit{Run}_\mathscr{A}(p(0),\mathit{acc})$ of \emph{rejecting} runs. Our next proposition says that the problem of computing/approximating the probability of all runs $w$ in a given pOC that are accepted by a given DRA is efficiently reducible to the problem of computing/approximating the probability of all accepting runs in a given pOC with Rabin acceptance condition. The proof is very simple (we just ``synchronize'' a given pOC with a given DRA, and set up the Rabin acceptance condition accordingly). \begin{proposition} \label{prop-product} Let $\Sigma$ be a finite alphabet, $\mathscr{A}$ a pOC, $\nu$ a valuation, $\mathcal{R}$ a DRA over $\Sigma$, and $p(0)$ a configuration of $\mathscr{A}$. Then there is a pOC $\mathscr{A}'$ with Rabin acceptance condition and a configuration $p'(0)$ of $\mathscr{A}'$ constructible in polynomial time such that the probability of all $w \in \mathit{Run}_{\mathscr{A}}(p(0))$ where $\nu(w)$ is accepted by $\mathcal{R}$ is equal to the probability of all accepting $w \in \mathit{Run}_{\mathscr{A}'}(p'(0))$.
\end{proposition} \smallskip \noindent \textbf{Step~2.} Let $\mathcal{G}$ be a finite-state Markov chain, where $Q \times \{0,1\} \ \cup\ \{\mathit{acc},\mathit{rej}\}$ is the set of states (the elements of $Q \times \{0,1\}$ are written as $p(i)$, where $i \in \{0,1\}$), and the transitions of $\mathcal{G}$ are determined as follows: \begin{itemize} \item $p(0) \tran{x} q(j)$ is a transition of $\mathcal{G}$ iff $p(0) \tran{x} q(j)$ is a transition of $\mathcal{M}_{\mathscr{A}}$; \item $p(1) \tran{x} q(0)$ iff $x = [p{\downarrow}q] > 0$; \item $p(1) \tran{x} \mathit{acc}$ iff $x = \mathcal{P}(\mathit{Run}_{\mathscr{A}}(p(1),\mathit{acc}) \cap \mathit{Run}_\mathscr{A}(p{\uparrow})) > 0$; \item $p(1) \tran{x} \mathit{rej}$ iff $x = \mathcal{P}(\mathit{Run}_{\mathscr{A}}(p(1),\mathit{rej}) \cap \mathit{Run}_\mathscr{A}(p{\uparrow})) > 0$; \item $\mathit{acc} \tran{1} \mathit{acc}$, $\mathit{rej} \tran{1} \mathit{rej}$; \item there are no other transitions. \end{itemize} A BSCC $B$ of $\mathcal{G}$ is \emph{good} if either $B = \{\mathit{acc}\}$, or there is some $i\leq k$ such that $\mathcal{E}_i \cap Q(B) = \emptyset$ and $\mathcal{F}_i \cap Q(B) \neq \emptyset$, where $Q(B) = \{p \in Q \mid p(j) \in B \text{ for some } j \in \{0,1\}\}$. For every $p \in Q$, let $\mathit{Run}_\mathcal{G}(p(0),\mathit{good})$ be the set of all $w \in \mathit{Run}_{\mathcal{G}}(p(0))$ that visit a good BSCC of~$\mathcal{G}$. The next proposition is obtained by a simple case analysis of accepting runs in~$\mathcal{M}_\mathscr{A}$. \begin{proposition} \label{prop-X} For every $p \in Q$ we have $\mathcal{P}(\mathit{Run}_\mathscr{A}(p(0),\mathit{acc})) = \mathcal{P}(\mathit{Run}_\mathcal{G}(p(0),\mathit{good}))$. \end{proposition} \smallskip \noindent \textbf{Step~3.} Due to Proposition~\ref{prop-X}, the problem of our interest reduces to the problem of approximating the probability of visiting a good BSCC in the finite-state Markov chain~$\mathcal{G}$. 
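The graph-theoretic part of this step (finding the BSCCs and marking the good ones) can be sketched as follows; we simplify by treating the states of $\mathcal{G}$ directly as graph nodes and by checking the Rabin pairs against the node sets themselves rather than the projection $Q(B)$:

```python
def sccs(graph):
    """Kosaraju's algorithm: strongly connected components of a digraph
    given as {node: set(successors)}."""
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in graph:
        if u not in seen:
            dfs(u)
    rev = {u: set() for u in graph}          # reversed graph
    for u in graph:
        for v in graph[u]:
            rev[v].add(u)
    comps, assigned = [], set()
    for u in reversed(order):                # second pass on the reversal
        if u in assigned:
            continue
        stack, cur = [u], set()
        assigned.add(u)
        while stack:
            x = stack.pop()
            cur.add(x)
            for v in rev[x]:
                if v not in assigned:
                    assigned.add(v)
                    stack.append(v)
        comps.append(cur)
    return comps

def good_bsccs(graph, pairs, acc="acc"):
    """BSCCs (SCCs with no edge leaving them) that are 'good': either
    the singleton {acc}, or consistent with some Rabin pair (E_i, F_i)."""
    result = []
    for C in sccs(graph):
        if any(v not in C for u in C for v in graph[u]):
            continue  # some edge leaves C, so C is not bottom
        if C == {acc} or any(not (C & E) and (C & F) for E, F in pairs):
            result.append(C)
    return result
```

Kosaraju's algorithm is used here only for brevity; any linear-time SCC algorithm would do.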
Since the termination probabilities in~$\mathscr{A}$ can be approximated efficiently (see Proposition~\ref{prop-termination}), the main problem with $\mathcal{G}$ is approximating the probabilities $x$ and $y$ in transitions of the form $p(1) \tran{x} \mathit{acc}$ and $p(1) \tran{y} \mathit{rej}$. Recall that $x$ and $y$ are the probabilities of all $w \in \mathit{Run}_{\mathscr{A}}(p{\uparrow})$ that are accepting and rejecting, respectively. A crucial observation is that almost all $w \in \mathit{Run}_{\mathscr{A}}(p{\uparrow})$ still behave in accordance with the underlying finite-state Markov chain $\mathcal{X}$ of $\mathscr{A}$ (see Section~\ref{sec-etime}). More precisely, we have the following: \begin{proposition} \label{prop-X-BSCC} Let $p \in Q$. For almost all $w \in \mathit{Run}_\mathscr{A}(p{\uparrow})$ we have that $w$ visits a BSCC $B$ of $\mathcal{X}$ after finitely many transitions, and then it visits all states of $B$ infinitely often. \end{proposition} A BSCC $B$ of $\mathcal{X}$ is \emph{consistent} with the considered Rabin acceptance condition if there is $i \leq k$ such that $B \cap \mathcal{E}_i = \emptyset$ and $B \cap \mathcal{F}_i \neq \emptyset$. If $B$ is not consistent, it is \emph{inconsistent}. An immediate corollary to Proposition~\ref{prop-X-BSCC} is the following: \begin{corollary} \label{cor-nonterm} Let $\mathit{Run}_\mathscr{A}(p(1),\mathit{cons})$ and $\mathit{Run}_\mathscr{A}(p(1),\mathit{inco})$ be the sets of all $w \in \mathit{Run}_{\mathscr{A}}(p(1))$ such that $w$ visits a control state of some consistent and of some inconsistent BSCC of $\mathcal{X}$, respectively.
Then \begin{itemize} \item $\mathcal{P}(\mathit{Run}_\mathscr{A}(p(1),\mathit{acc}) \cap \mathit{Run}_\mathscr{A}(p{\uparrow})) \ = \ \mathcal{P}(\mathit{Run}_\mathscr{A}(p(1),\mathit{cons}) \cap \mathit{Run}_\mathscr{A}(p{\uparrow}))$ \item $\mathcal{P}(\mathit{Run}_\mathscr{A}(p(1),\mathit{rej}) \cap \mathit{Run}(p{\uparrow})) \ = \ \mathcal{P}(\mathit{Run}_\mathscr{A}(p(1),\mathit{inco}) \cap \mathit{Run}_\mathscr{A}(p{\uparrow}))$ \end{itemize} \end{corollary} Due to Corollary~\ref{cor-nonterm}, we can reduce the problem of computing the probabilities of transitions of the form $p(1) \tran{x} \mathit{acc}$ and $p(1) \tran{y} \mathit{rej}$ to the problem of computing the probability of non-termination in pOC. More precisely, we construct pOC's $\mathscr{A}_\mathit{cons}$ and $\mathscr{A}_\mathit{inco}$ which are the same as $\mathscr{A}$, except that for each control state $q$ of an inconsistent (or consistent, resp.) BSCC of $\mathcal{X}$, all positive outgoing rules of $q$ are replaced with $q \prule{1,-1} q$. Then $x = \mathcal{P}(\mathit{Run}_{\mathscr{A}_\mathit{cons}}(p{\uparrow}))$ and $y = \mathcal{P}(\mathit{Run}_{\mathscr{A}_\mathit{inco}}(p{\uparrow}))$. Due to \cite{BBEKW:OC-MDP}, the problem whether a given non-termination probability is positive (in a given pOC) is decidable in polynomial time. This means that the underlying graph of $\mathcal{G}$ is computable in polynomial time, and hence the sets $G_0$ and $G_1$ consisting of all states $s$ of $\mathcal{G}$ such that $\mathcal{P}(\mathit{Run}_\mathcal{G}(s,\mathit{good}))$ is equal to $0$ and $1$, respectively, are constructible in polynomial time. Let $G$ be the set of all states of $\mathcal{G}$ that are not contained in $G_0 \cup G_1$, and let $X_\mathcal{G}$ be the stochastic matrix of~$\mathcal{G}$. 
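The reachability probabilities from the states in $G$ satisfy a linear system of the form $\vec{V} = A\vec{V} + \vec{b}$, i.e., $(I-A)\vec{V} = \vec{b}$; the sketch below solves such a system by Gaussian elimination on a toy two-state instance (the matrix entries are illustrative and are not derived from any pOC in the paper):

```python
def solve_linear(A, b):
    """Solve (I - A) V = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    # augmented matrix M = [I - A | b]
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] + [b[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    V = [0.0] * n
    for i in reversed(range(n)):             # back-substitution
        V[i] = (M[i][n] - sum(M[i][j] * V[j] for j in range(i + 1, n))) / M[i][i]
    return V

# Toy chain: from s0 go to s1 w.p. 1/2 or straight into a good BSCC w.p. 1/2;
# from s1 go back to s0 w.p. 1/3 or into a bad BSCC w.p. 2/3.
A = [[0.0, 0.5],
     [1/3, 0.0]]
b = [0.5, 0.0]   # b[s] = probability of stepping directly into G_1
V = solve_linear(A, b)
```

By substitution, $V_0 = 0.6$ and $V_1 = 0.2$. The delicate point, addressed by the proposition below, is how precisely the entries of $A$ and $\vec{b}$ must be approximated for the solution to retain a small relative error.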
For every $s \in G$ we fix a fresh variable $V_s$ and the equation \begin{equation*} V_s \quad = \quad \sum_{s'\in G} X_\mathcal{G}(s,s') \cdot V_{s'} \quad + \quad \sum_{s'\in G_1} X_\mathcal{G}(s,s') \end{equation*} Thus, we obtain a system of linear equations $\vec{V} = A \vec{V} + \vec{b}$ whose unique real solution $\vec{V}^*$ is the vector of probabilities of reaching a good BSCC from the states of~$G$. This system can also be written as $(I-A)\vec{V} = \vec{b}$. Since the elements of $A$ and $\vec{b}$ correspond to (sums of) transition probabilities in $\mathcal{G}$, it suffices to compute the transition probabilities of $\mathcal{G}$ with a sufficiently small relative error so that the approximate $A$ and $\vec{b}$ produce an approximate solution where the relative error of each component is bounded by $\varepsilon$. By combining standard results for finite-state Markov chains with techniques of numerical analysis, we show the following: \begin{proposition} \label{prop-visiting-approx} Let $c = 2|Q|$. For every $s \in G$, let $R_s$ be the probability of visiting a BSCC of $\mathcal{G}$ from $s$ in at most $c$ transitions, and let $R = \min\{R_s \mid s \in G\}$. Then $R > 0$, and if all transition probabilities in $\mathcal{G}$ are computed with relative error at most $\varepsilon R^3/(8(c+1)^2)$, then the resulting system $(I-A')\vec{V} = \vec{b}'$ has a unique solution $\vec{U}^*$ such that $|\vec{V}^*_s - \vec{U}^*_s|/\vec{V}^*_s \leq \varepsilon$ for every $s \in G$.
\end{proposition} \noindent Note that the constant $R$ of Proposition~\ref{prop-visiting-approx} can be bounded from below by $x_t^{|Q|-1} \cdot x_n$, where \begin{itemize} \item $x_t = \min\{X_\mathcal{G}(s,s') \mid s,s' \in G\}$, i.e., $x_t$ is the minimal probability that is either explicitly used in $\mathscr{A}$, or equal to some positive termination probability in $\mathscr{A}$; \item $x_n = \min\{X_\mathcal{G}(s,s') \mid s \in G, s' \in G_1\}$, i.e., $x_n$ is the minimal probability that is either a positive termination probability in $\mathscr{A}$, or a positive non-termination probability in the pOC's $\mathscr{A}_\mathit{cons}$ and $\mathscr{A}_\mathit{inco}$ constructed above. \end{itemize} \noindent Now we need to employ the promised divergence gap theorem, which bounds a positive non-termination probability in pOC away from zero (for all $p,q \in Q$, we use $[p,q]$ to denote the probability of all runs $w$ initiated in $p(1)$ that visit a configuration $q(k)$, where $k \geq 1$ and the counter stays positive in all configurations preceding this visit). \begin{theorem} \label{thm-gap} Let $\mathscr{A} = (Q,\delta^{=0},\delta^{>0},P^{=0},P^{>0})$ be a pOC and $\mathcal{X}$ the underlying finite-state Markov chain of $\mathscr{A}$. Let $p \in Q$ such that $[p{\uparrow}]>0$. Then there are two possibilities: \begin{enumerate} \item There is $q\in Q$ such that $[p,q]>0$ and $[q{\uparrow}]=1$. Hence, $[p{\uparrow}] \geq [p,q]$. \item There is a BSCC $\mathscr{B}$ of $\mathcal{X}$ and a state $q$ of $\mathscr{B}$ such that $[p,q]>0$, $t > 0$, and $\vec{v}_{q}=\vec{v}_{\max}$ (here $t$ is the trend, $\vec{v}$ is the vector of Proposition~\ref{prop-martingale}, and $\vec{v}_{\max}$ is the maximal component of~$\vec{v}$; all of these are considered in $\mathscr{B}$). Further, \[ [p{\uparrow}]\quad \ge\quad [p,q]\cdot \frac{t^3}{12 (2 (\vv_{\max} - \vv_{\min}) + 4)^3}\,. 
\] \end{enumerate} \end{theorem} Hence, denoting the relative precision $\varepsilon R^3/(8(c+1)^2)$ of Proposition~\ref{prop-visiting-approx} by $\delta$, we obtain that $\log(1/\delta)$ is bounded by a polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$. Further, the transition probabilities of $\mathcal{G}$ can be approximated up to the relative error $\delta$ in time polynomial in $|\mathscr{A}|$ and $\log(1/\varepsilon)$ by approximating the termination probabilities of~$\mathscr{A}$ (see Proposition~\ref{prop-termination}). This proves Theorem~\ref{thm-omega-regular}. \section{Proofs} In this section we give the proofs that were omitted in the main body of the paper. It is structured according to the sections and subsections of the main part. \input{app-finiteness} \input{app-etime} \input{app-termination} \input{app-ltl} \end{document}
\section{Introduction} The problem of continuity of mathematical and conceptual descriptions of classical and quantum systems was discussed and a possible solution was offered by one of the authors in 2002 in a couple of papers \cite{ghose1,ghose2}. It was pointed out that the causal and ontological interpretation of quantum mechanics \cite{bohm,Holland} provided a suitable basis to view smooth transitions between these systems particularly clearly. Since then interest has grown in the area of mesoscopic systems in chemistry and solid-state physics that exhibit both quantum and classical features, and the solution suggested in \cite{ghose1} and \cite{ghose2} acquires relevance. Here we use the word `mesoscopic' to mean something that is neither fully classical nor fully quantum mechanical. Hence, a macroscopic system can be mesoscopic in this sense. The purpose of this paper is two-fold: (i) to summarise the key points of the argument and suggest a simpler mathematical framework that can be used for both relativistic and non-relativistic systems, and (ii) to give three additional examples of smooth transitions of direct empirical relevance. The first is the famous double-slit experiment, in which one can clearly see how, as the coupling of the system to the environment is reduced, classical trajectories turn into Bohmian trajectories that give rise to an interference pattern. The second example is that of a 2D harmonic oscillator, which has numerous possible applications in physics and chemistry. The third example is that of some electron states in the hydrogen atom, their Bohmian trajectories and how they go over smoothly to classical Keplerian orbits. It is hoped that these examples will stimulate further and more detailed investigations of other mesoscopic systems using the same technique.
It is well known that the conceptual boundary between classical and quantum descriptions of a physical system is not only arbitrary; there is also an obvious disconnect between these two descriptions. In the ultimate analysis the classicality of the measuring apparatus is required to extract physically observable predictions from the Schr\"{o}dinger evolution of the wave function, but this classicality cannot be deduced from an underlying quantum mechanical substratum. Attempts have been made to show how the classical world emerges from an underlying quantum mechanical substratum through decoherence \cite{joos,Zurek1,Zurek2}, but these approaches, though practically very useful, work only in the pointer basis---quantum coherence persists in other equivalent bases, and the classicality achieved is limited. This does not really bridge the conceptual gap between truly classical and quantum systems. The classical description is both realistic and causal whereas the standard quantum mechanical description is neither realistic nor causal. The de Broglie-Bohm interpretation of quantum mechanics showed that empirical evidence does not compel one to renounce realism and causality. However, being only an interpretation of quantum mechanics, its logic was not compelling, and by and large the community of physicists dismissed it as a curious hangover from the classical age. Furthermore, there was no clear way even in this interpretation to see the link between classical and quantum descriptions except in so far as it enables a complete isolation of all quantum effects in the quantum potential. There is nothing within the theory to suggest when and how the quantum potential gets switched off as a system approaches the classical limit. In fact, the nonlocality and contextuality of the interpretation (most strongly reflected in the quantum potential \cite{BH}) made a smooth transition look even harder to achieve than in the standard interpretation.
The main point of \cite{ghose1} and \cite{ghose2} was that all quantum characteristics of a system get completely quenched when it is strongly coupled to its environment. As the coupling with the environment is reduced, the quantum characteristics of the system start to emerge, and finally the system becomes fully quantum mechanical when completely isolated from its environment. This simple idea can be implemented by introducing the concept of a wave function for all systems, including those that are classical, which obeys a certain nonlinear equation. \section{Dynamics of Mesoscopic Systems} It was Koopman \cite{koop} and von Neumann \cite{vN} who first introduced the idea of complex wave functions for classical systems that are square integrable and span a Hilbert space. Though the classical wave functions $\phi(q,p)$ are complex, their relative phases are unobservable in classical mechanics. This is achieved by postulating that the wavefunctions obey Liouville equations, and showing that the density function in phase space $\rho(q,p) = \phi^*(q,p)\phi(q,p)$ also obeys the Liouville equation. Thus, the classical phase space probability density and its dynamics can be recovered from the dynamics of the underlying complex wavefunctions $\phi$ and $\phi^*$. The dynamical variables $p$ and $q$ are assumed to be commuting operators. The method can be generalized to the case of the electromagnetic field \cite{ghose3}. While we use the idea of square integrable complex wave functions for all systems, we postulate a different dynamics, namely a modified Schr\"{o}dinger equation of the form \begin{eqnarray} i\,\hbar\,\frac{\partial \psi}{\partial t} &=& \left(-\frac{\hbar^2}{2 m}\, \nabla^2 + V(x) - \lambda(t)Q \,\right)\,\psi \label{1},\\ Q &=& - \frac{\hbar^2}{2 m}\,\frac{\nabla^2 R}{R},\label{2} \end{eqnarray} where $R$ is defined by writing $\psi$ in the polar form $\psi = R{\rm exp}\left(\frac{i}{\hbar}S \right)$.
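The quantum potential (\ref{2}) is straightforward to evaluate numerically. The sketch below (ours, with the units $\hbar = m = 1$ used later in the examples) compares a central-difference evaluation of $Q$ for a Gaussian amplitude $R = e^{-x^2/4\sigma^2}$ with the closed form $Q = \frac{1}{4\sigma^2}\left(1 - \frac{x^2}{2\sigma^2}\right)$:

```python
import math

HBAR = 1.0
M = 1.0

def gaussian_R(x, sigma=1.0):
    """Amplitude R of a real Gaussian wave packet (up to normalisation)."""
    return math.exp(-x**2 / (4.0 * sigma**2))

def quantum_potential(R, x, h=1e-4):
    """Q = -(hbar^2 / 2m) R''(x)/R(x), with R'' by central differences."""
    d2R = (R(x + h) - 2.0 * R(x) + R(x - h)) / h**2
    return -(HBAR**2 / (2.0 * M)) * d2R / R(x)

def quantum_potential_exact(x, sigma=1.0):
    """Analytic Q for the Gaussian amplitude above:
    Q = (hbar^2 / 4 m sigma^2) (1 - x^2 / 2 sigma^2)."""
    return (HBAR**2 / (4.0 * M * sigma**2)) * (1.0 - x**2 / (2.0 * sigma**2))
```

Note that $Q$ depends only on the shape of $R$, not on its overall scale, so the normalisation of the packet is irrelevant here.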
The term $\lambda(t)Q$ represents the nonlinear coupling of the system to its environment. Separating the real and imaginary parts, one gets the modified Hamilton-Jacobi equation \begin{equation} \partial S/\partial t + \frac{(\nabla S)^2}{2m} + V + (1 - \lambda(t))Q = 0,\label{3} \end{equation} for the phase $S$ of the wave function, and the continuity equation \begin{equation} \frac{\partial \rho ( {\bf x}, t )}{\partial t} + {\bf \nabla}\,.\, (\,\rho\, \frac{{\bf \nabla}\,S}{m}) = 0 \label{4} \end{equation} for the density $\rho = \psi^*\psi = R^2$. The coupling function $\lambda(t)$ is chosen such that the system behaves fully classically in the limit $\lambda(t)\rightarrow 1$ and fully quantum mechanically in the limit $\lambda(t)\rightarrow 0$. Putting ${\bf p} = {\bf \nabla}\, S$, one can see that the functions $\rho = R^2$ and $S$ obeying the differential equations (\ref{3}) and (\ref{4}) are completely decoupled in the classical limit, ensuring no interference effects. Note that Planck's constant $h$ drops out from these equations in the classical limit. Since the momentum is defined by $p_i = \partial_i S$, one has \begin{equation} u_i = \frac{d x_i}{dt} = \frac{1}{m}\partial_iS,\label{5} \end{equation} and integration of this first-order equation gives the set of trajectories corresponding to an arbitrary choice of initial positions. In Bohmian mechanics, the initial positions are chosen to fit the quantum mechanical distribution $\psi^*\psi$. The continuity equation then guarantees that the distribution matches the quantum mechanical distribution at all future times. Applying the $\nabla$ operator on eqn. (\ref{3}) and remembering that $p_i = \partial_i S$, one has the second-order equation of motion \begin{equation} \frac{dp_i}{dt} = -\partial_i \left(V + (1 - \lambda(t))Q \right).\label{9} \end{equation} This mathematical framework provides a unified basis to study continuous transitions between classical and quantum systems via mesoscopic domains.
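As a consistency check (not spelled out in the text), the guidance law (\ref{5}) can also be read off from the standard probability current: substituting the polar form $\psi = R\,e^{iS/\hbar}$ into $j_i = \frac{\hbar}{m}\,{\rm Im}\left(\psi^*\partial_i\psi\right)$ gives

```latex
\begin{eqnarray*}
\partial_i\psi &=& \left(\partial_i R + \frac{i}{\hbar}\,R\,\partial_i S\right)e^{iS/\hbar},\\
\psi^*\,\partial_i\psi &=& R\,\partial_i R + \frac{i}{\hbar}\,R^2\,\partial_i S,\\
j_i \;=\; \frac{\hbar}{m}\,{\rm Im}\left(\psi^*\partial_i\psi\right) &=& \rho\,\frac{\partial_i S}{m},
\qquad \rho = R^2,
\end{eqnarray*}
```

so that $u_i = j_i/\rho = \partial_i S/m$, in agreement with eqn.~(\ref{5}).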
\section{Equivalent Formalism Using Currents} Using the definition of the velocity (\ref{5}) in the continuity equation (\ref{4}), it is possible to express the velocity in terms of the current density in the form \begin{eqnarray} u_i &=& \frac{d x_i}{d t} = \frac{j_i}{\rho},\label{6}\\ j_i &=& \frac{\hbar}{2mi}\left(\psi^*\partial_i \psi - (\partial_i \psi^*)\, \psi\right),\\ \rho &=& \psi^* \psi. \end{eqnarray} The equation of motion is then \begin{equation} \frac{d p_i}{d t} = m\frac{d u_i}{d t} = m \frac{d}{d t}\left(\frac{j_i}{\rho} \right) = Q_i.\label{7} \end{equation} Comparing with eqn. (\ref{9}), it follows that \begin{equation} Q_i = - \partial_i\left(V + (1 - \lambda(t))\,Q\right). \end{equation} This provides a simple unified treatment for relativistic and non-relativistic particles without explicitly using the Hamilton-Jacobi method, which is cumbersome in the relativistic spin-1/2 case. Let us consider a relativistic particle described by the Dirac equation \begin{equation} i\hbar \frac{\partial\psi}{\partial t} = (- i\hbar \alpha_i \partial_i + \beta m_0 c^2 + V)\psi. \end{equation} Its 4-velocity can be defined by the guidance condition \begin{eqnarray} u_\mu &=& \frac{dx_\mu}{d\tau} = \frac{j_\mu}{\rho},\label{gc}\\ j_\mu &=& \bar{\psi}\gamma_\mu\psi,\\ \rho &=& \psi^\dagger \psi. \end{eqnarray} The Dirac current is conserved and one has the continuity equation \begin{eqnarray} \frac{\partial \rho}{\partial t} + {\bf \nabla}. {\bf j} &=& 0,\\ j_i &=& \bar{\psi}\gamma_i\psi = \rho u_i. \end{eqnarray} Integration of the differential equation (\ref{gc}) for $\mu = i$ $(i=1,2,3)$ gives a set of Bohmian trajectories of the particle corresponding to arbitrary initial positions. Now, \begin{equation} \frac{dp_\mu}{d\tau} = m_0 \frac{d u_\mu}{d\tau} = m_0\frac{d}{d\tau}\left(\frac{j_\mu}{\rho}\right) \equiv Q_\mu.\label{8} \end{equation} The corresponding classical equation of motion is \begin{equation} \frac{dp^{cl}_\mu}{d\tau} = 0.
\end{equation} Hence, the equation of motion for a mesoscopic Dirac system can be written as \begin{equation} \frac{dp_\mu}{d\tau} = (1 - \lambda(t))Q_\mu. \end{equation} \section{Examples} In this section three examples will be worked out in detail to show how Bohmian trajectories smoothly pass over to classical trajectories. The motion of the particle is inextricably linked with the structure of its environment through the quantum potential $Q$. The quantum potential does not depend on the intensity of the wave, but rather on its form. It need not fall off with increasing distance. Any change in the experimental setup affects the trajectory. Therefore, the trajectories cannot be measured directly. For simplicity we put the masses of the particles as well as $\hbar$ equal to one. Before proceeding further, it is necessary to specify explicitly the nature of the effective coupling function \begin{equation} P(t) = 1 - \lambda(t). \end{equation} In order to do that, let us assume that the environment to which the quantum system is to be coupled has a random variable with a Gaussian distribution. Then the cumulative distribution function (CDF) is given by \begin{eqnarray} \Phi(t) &=& \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^t e^{-(x - \mu)^2/2\sigma^2}dx\\ &=& \frac{1}{2}\left[1 + {\rm erf}\left(\frac{t - \mu}{\sigma\sqrt{2}}\right)\right]. \end{eqnarray} This is the cumulative probability that the random variable has a value less than or equal to $t$. We will assume that $\lambda(t) = \Phi(t)$. Then $P(t)$ is the complementary CDF (CCDF), which is the cumulative probability that the random variable has a value greater than $t$. $P(t) \rightarrow 1$ as $t\rightarrow -\infty$ and $P(t) \rightarrow 0$ as $t\rightarrow\infty$. This will describe the quantum to classical transition as the system is coupled to an environment with a random variable.
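The CDF and CCDF above can be evaluated directly with the error function. The following is a minimal numerical sketch in pure Python; the function names are ours, introduced only for illustration:

```python
import math

def gaussian_cdf(t, mu=0.0, sigma=1.0):
    # Phi(t) = (1/2) [1 + erf((t - mu) / (sigma sqrt(2)))]
    return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

def coupling_P(t, mu=0.0, sigma=1.0):
    # effective coupling P(t) = 1 - lambda(t) with lambda(t) = Phi(t):
    # P -> 1 for t -> -infinity (quantum), P -> 0 for t -> +infinity (classical)
    return 1.0 - gaussian_cdf(t, mu, sigma)
```

Numerically, `coupling_P(-10.0)` is indistinguishable from 1 and `coupling_P(10.0)` from 0, reproducing the two limits quoted above.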
The reverse transition from the classical to the quantum regime is not physically possible because of the introduction of randomness and the consequent increase in entropy. For simplicity of calculation we will assume that $\lambda(t)$ is a logistic function \begin{equation} \lambda(t) = \frac{1}{1 + e^{-b(t - t_0)}} \end{equation} where $b$ is an arbitrary parameter that can be varied and $t_0$ is the $t$-value of the sigmoid's midpoint. Hence \begin{equation} P(t) = \frac{1}{1 + e^{b(t - t_0)}}. \end{equation} A system is quantum-like for $b>0$, $t - t_0 <0$ and classical-like for $b>0$, $t - t_0 >0$, the point of inflection being $t - t_0 = 0$. \subsection{Double-slit} The quantum interference of matter in the analogue of Young's double-slit experiment in optics is one of the most famous examples. It shows how different quantum mechanics is from classical mechanics. Quantum particles are emitted by a source $S$, pass through two slits located at $\pm X$ in a barrier and arrive at a screen. To avoid complexity, two identical Gaussian profiles are assumed with identical initial width $\rho$ and group velocity $u$ in $(x,t)$-space. Owing to the $\rho \left(1+\frac{i t}{\rho ^2}\right)$ factor, the wave packet disperses, so the probability of finding the particle within a fixed spatial interval decreases with time. The superposition of the two Gaussian profiles leads to the interference effect. The total wave function emerging from the slits is \begin{equation} \psi (x,t)=\frac{\exp \left(-\frac{(u~t+x-X)^2}{2 \rho ^2 \left(1+\frac{i t}{\rho ^2}\right)}-i u \left(\frac{ u~t}{2}+x-X\right)\right)+\exp \left(-\frac{(-u~t+x+X)^2}{2 \rho ^2 \left(1+\frac{i t}{\rho ^2}\right)}+i u \left(-\frac{u~t}{2}+x+X\right)\right)}{\sqrt{\rho \left(1+\frac{i t}{\rho ^2}\right)}}.
\end{equation} The square of the total wave function (the probability distribution) is \begin{equation} \varrho (x,t)=\frac{e^{-\frac{2 \left(x^2+(-u t+X)^2\right) \rho ^2}{t^2+\rho ^4}} \left(e^{\frac{(u t+x-X)^2 \rho ^2}{t^2+\rho ^4}}+e^{\frac{(-u t+x+X)^2 \rho ^2}{t^2+\rho ^4}}+2 e^{\frac{\left(x^2+(-u t+X)^2\right) \rho ^2}{t^2+\rho ^4}} \cos \left(\frac{2 x \left(X~t+u \rho ^4\right)}{t^2+\rho ^4}\right)\right)}{\sqrt{\frac{t^2+\rho ^4}{\rho ^2}}} \end{equation} with the quantum amplitude $R$ given by $R=\sqrt{\varrho (x,t)}$. The total phase function, from which the velocity $u_x$ in the $x$-direction is derived, is \begin{equation} S(x,t)=\tan^{-1}\left(\frac{e^{\frac{\rho ^2 (u t+x-X)^2}{2 \left(\rho ^4+t^2\right)}} \sin \left(T_1\right)-e^{\frac{\rho ^2 (-u t+x+X)^2}{2 \left(\rho ^4+t^2\right)}} \sin \left(T_2\right)}{e^{\frac{\rho ^2 (u t+x-X)^2}{2 \left(\rho ^4+t^2\right)}} \cos \left(T_1\right)+e^{\frac{\rho ^2 (-u t+x+X)^2}{2 \left(\rho ^4+t^2\right)}} \cos \left(T_2\right)}\right) \end{equation} with \[T_1=\frac{t (-u t+x+X)^2}{2 \left(\rho ^4+t^2\right)}-\frac{1}{2} \tan^{-1}\left(\frac{t}{\rho ^2}\right)+u \left(-\frac{ u t}{2}+x+X\right)\] and \[T_2=-\frac{t (u t+x-X)^2}{2 \left(\rho ^4+t^2\right)}+\frac{1}{2} \tan^{-1}\left(\frac{t}{\rho ^2}\right)+u \left(\frac{u t}{2}+x-X\right).\] To see how the Bohmian trajectories smoothly pass over to classical trajectories, the general equation of motion (\ref{9}) becomes \begin{equation} \frac{d^2 x}{dt^2} = - \partial_x\left(V + P(t)\, Q\right), \end{equation} where the environment coupling function $P(t)$ is defined by (25) with $b\in \mathbb{R}$, $t_0\in \mathbb{R}$ and with $V=0$. $Q$ is the quantum potential expressed by eqn. (\ref{2}), which is extremely large for the double slit. The general equation of motion (29) can then be numerically integrated for appropriate values of the parameters $b$, $t_0$, $X$, $\rho$, $u$ and with the correct boundary conditions (see figure 1).
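As a sketch of how such an equation of motion can be integrated in practice, the following pure-Python fragment implements the logistic coupling $P(t)$ of the previous section and a velocity-Verlet integrator for $\ddot{x} = -\partial_x[V + P(t)Q]$, with the spatial derivative taken by central differences. The double-slit quantum potential is too lengthy to reproduce here, so this is a generic sketch; the toy potential used below for checking is a hypothetical stand-in, not the double-slit $Q$:

```python
import math

def P(t, b=1.0, t0=0.0):
    # logistic coupling P(t) = 1 - lambda(t) = 1 / (1 + exp(b (t - t0)))
    return 1.0 / (1.0 + math.exp(b * (t - t0)))

def ddx(f, x, t, h=1e-5):
    # central-difference spatial derivative of f(x, t)
    return (f(x + h, t) - f(x - h, t)) / (2.0 * h)

def trajectory(x0, v0, V, Q, t_end, dt=1e-3, b=1.0, t0=0.0):
    """Velocity-Verlet integration of d^2x/dt^2 = -d/dx [V(x,t) + P(t) Q(x,t)]."""
    x, v, t = x0, v0, 0.0
    acc = lambda x, t: -(ddx(V, x, t) + P(t, b, t0) * ddx(Q, x, t))
    a = acc(x, t)
    while t < t_end - 0.5 * dt:
        x += v * dt + 0.5 * a * dt * dt
        a_new = acc(x, t + dt)
        v += 0.5 * (a + a_new) * dt
        a, t = a_new, t + dt
    return x
```

With $V = 0$ and a harmonic toy potential $Q = x^2/2$, choosing $t_0$ far in the future keeps $P \approx 1$ throughout the run (quantum-like, oscillatory motion), while $t_0$ far in the past gives $P \approx 0$ and a free, straight-line trajectory, mirroring the two limits discussed above.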
For $b\rightarrow \infty$ and $t_0=0$ the trajectories become classical because $P$ vanishes, and for $b\gg 1$ and $t_0\rightarrow \infty$ the trajectories become Bohmian because $P$ tends to $1$, the quantum limit. For increasing time the amplitude of the quantum potential $Q$ decreases. In the classical case the trajectories become straight lines because of the uniform unaccelerated motion. They cross the axis of symmetry. In the quantum mechanical limit the trajectories run to the local maxima of the squared wave function, which correlate with the ``plateaux'' of the quantum potential, and therefore correspond to the bright fringes of the diffraction pattern. The fate of a particle depends sensitively on its position. The quantum particle passes through slit one or slit two and never crosses the axis of symmetry. In the initial process the quantum potential is an inverted parabola associated with the Gaussian profile (see figure $4$), corresponding to the single-slit Gaussian profiles. But at later times it becomes more complex and affects the motion in a considerably more complicated way. The ``kinks'' in the trajectories are due to the interference coming from the part of the wave which does not carry the particle, which is generally called an `empty wave'. In the presence of ``troughs'' the particle accelerates or decelerates. At some time steps and for special values of the parameters $b$ and $t_0$ the quantum potential blows up and accelerates the particle, which leaves the bulk of the wave packet (see for example figure $1$ with $b=0.5$ and $t_0=2$, and figure $4$). The structure of the quantum potential finally decays into a set of ``troughs'' and ``plateaux''. For further details, see the simulation in \cite{bloh1}. In figure 1 the number of trajectories per unit length is proportional to the probability density. The total number of trajectories is $50$.
For the Bohmian trajectories the initial particle velocities are restricted to the values $\overset{\rightharpoonup }{u}=\overset{\rightharpoonup }{\nabla } S(x_0,t=0)$. For the calculation the initial width $\rho$, the wavenumber $u$ and the slit positions $\pm X$ are chosen as $\rho=\frac{5}{8}$, $u=-2$ and $X=2.5$. In figure 2 it is seen that the initial velocity becomes $u_x(x,0)=u$ for $x<0$ and $u_x(x,0)=-u$ for $x>0$ asymptotically. \subsection{2D Harmonic Oscillator} The time-independent wave function for a harmonic oscillator potential is a much simpler example than the double-slit experiment. A normalized, degenerate superposition of the ground state and a first excited state with a constant relative phase shift of an uncoupled isotropic harmonic oscillator in the two-dimensional configuration space $(x,y)$ gives an entangled, stationary wave function of the form \begin{equation} \psi (x,y,t)=e^{-i E_{n,m} t} \sum _{n,m} c_{n,m} \Theta_{n,m} \end{equation} with the abbreviations $c_{n,m} \in \mathbb{C}$, $\Theta_{n,m}=\phi_n(x) \phi_m(y)$ and $E_{n,m}$, where $ \phi _n (x)$, $ \phi _m (y)$ are eigenfunctions and $E_{n,m}=E_n+E_m$ are the permuted eigenenergies of the corresponding one-dimensional Schr\"odinger equation. The two-dimensional Schr\"odinger equation \begin{equation} -\frac{1}{2}\left(\frac{\partial ^2}{\partial x^2}+\frac{\partial ^2}{\partial y^2}\right)\psi +V(x,y)\psi =i\frac{\partial }{\partial t}\psi \end{equation} leads to the degenerate, entangled wave function $\Psi$ for the harmonic potential $V(x,y)=\frac{1}{2} k_0 \left(x^2+y^2\right)$. In general a stationary wave function yields a stationary velocity field. For a degenerate superposition of two eigenstates the total phase function becomes independent of the spatial variables and therefore the velocity vanishes. To get an autonomous velocity field with a non-trivial motion a constant phase shift is introduced.
Phase shift techniques have been widely used in modern phase measurement instruments; this approach is called \textit{Phase Shift Interferometry} (PSI). The basic idea of the PSI technique is to vary the phase between two beams in some manner to obtain a phase difference. For instance, the Mach-Zehnder interferometer is a device used to determine the relative phase shift between two beams derived by splitting the beam from a single source. The interferometer has been applied to determine phase shifts between the two beams caused by a change in length of one of the arms. In our special case the phase shift is introduced by the factor $\alpha$ with $\alpha = t_1$, which leads to the stationary wave function for the harmonic oscillator: \begin{equation} \psi (x,y,t)=e^{i \alpha}\Theta _{0,1} + \Theta _{1,0} \end{equation} with the angular frequency $\omega =\sqrt{k_0}$ and with the eigenfunctions: \begin{equation} \Theta _{m,n}=\frac{\omega ^2 H_m\left(\omega ^2 y\right) H_n\left(\omega ^2 x\right) \exp \left(-\frac{1}{2} \omega \left(x^2+y^2\right)- i~ \omega t~ ( m+ n+1)\right)}{\sqrt{2 \pi } \sqrt{2^m m!} \sqrt{2^n n!}}. \end{equation} For the trajectories in figure 3, $\alpha=\frac{\pi}{2}$, $k_0=1$ and $e^{\frac{i \pi}{2}}=i$. Therefore the wave function becomes \begin{equation} \psi (x,y,t)=\frac{1}{\sqrt{\pi }} (x+i y) e^{-\frac{1}{2} \left(4 i t+x^2+y^2\right)}. \end{equation} For the general case (28) the square of the total wave function (the probability distribution) becomes time-independent: \begin{equation} \varrho (x,y)=\frac{1}{\pi }e^{-\omega \left(x^2+y^2\right)}\omega ^2 \left(x^2+y^2+2~ x~ y~ \cos(\alpha)\right). \end{equation} The phase function $S$ obtained from this total wave function is \begin{equation} S(x,y,t)=-\tan^{-1} \left( \frac{x~ (\sin (2 ~ \omega t))+y~ (\sin (2 ~\omega t -\alpha ))}{x~ (\cos (2 ~ \omega t))+y~ (\cos (2 ~ \omega t-\alpha ))} \right).
\end{equation} From the gradient of the phase function $S$ we get the corresponding autonomous differential equation system for the velocity field $\overset{\rightharpoonup }{u}$ in the $x$- and $y$-directions: \begin{eqnarray} u_x(x,y)=-\frac{y \sin (\alpha)}{x^2+2~ x~ y~ \cos ( \alpha )+y^2} \nonumber\\ u_y(x,y)=~\frac{x \sin (\alpha)}{x^2+2~ x~ y~ \cos ( \alpha )+y^2}. \end{eqnarray} The velocity vector is independent of the parameter $\omega$ and can be integrated numerically with respect to time to yield the motion in the $(x,y)$ configuration space. The origin of the motion lies in the relative phase of the total wave function, which has no classical analogue in particle mechanics. With the quantum amplitude (the absolute value) $R=\sqrt{\varrho (x,y)}$, the quantum potential is: \begin{equation} Q(x,y)=\frac{-T_1 ~ \left(x^2+y^2\right)-4~ T_2~ x~ \omega ~ y \cos (\alpha ) \left(x^2+y^2\right)+T_3 \cos ^2(\alpha )}{2 \left(x^2+2 x y \cos (\alpha )+y^2\right)^2} \end{equation} with $T_1 =\omega ^2~ \left(x^2+y^2\right)^2-4~ \omega \left(x^2+y^2\right)+1$, $T_2=\omega ~ \left(x^2+y^2\right)-4$ and \\ $T_3 =x^2 \left(-4~ \omega ^2 y^2 \left(x^2+y^2\right)+16 \omega y^2+1\right)+y^2$. For the trajectories in figure 3 the quantum potential with $\alpha=\frac{\pi}{2}$ and $\omega=1$ becomes (see figure $5$): \begin{equation} Q(x,y)=-\frac{ \left(x^2+y^2\right)^2-4 ~ \left(x^2+y^2\right)+1}{2 \left(x^2+y^2\right)}. \end{equation} For $b\approx 1$ (as well as $b \gg 1$) and $t_0\rightarrow \infty$ the Bohmian trajectory forms a closed circle when projected onto the $xy$-plane (see figure $3$), and for $b=46$ and $t_0=0$ a closed ellipse is obtained (the classical limit). For the classical case the equations of motion can easily be integrated to yield $x(t)=x_m \cos( \omega~ t-\phi_x)$ and $y(t)=y_m \cos( \omega~ t-\phi_y)$, where $x_m$, $y_m$, $\phi_x$ and $\phi_y$ are real-valued constants.
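That the projected Bohmian orbit is a closed circle can be checked directly from the velocity field above: for $\alpha = \pi/2$ one has $x u_x + y u_y = 0$, so $x^2 + y^2$ is a constant of the motion. A small Runge-Kutta sketch in pure Python (the helper names are ours):

```python
import math

def velocity(x, y, alpha):
    # Bohmian velocity field derived from the phase S of the superposed states
    d = x * x + 2.0 * x * y * math.cos(alpha) + y * y
    return (-y * math.sin(alpha) / d, x * math.sin(alpha) / d)

def rk4(x, y, dt, alpha):
    # one classical fourth-order Runge-Kutta step for the guidance equation
    k1 = velocity(x, y, alpha)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], alpha)
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], alpha)
    k4 = velocity(x + dt * k3[0], y + dt * k3[1], alpha)
    x += dt * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]) / 6.0
    y += dt * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]) / 6.0
    return x, y

# integrate one trajectory starting at (1, 0) with alpha = pi/2
x, y = 1.0, 0.0
for _ in range(10000):
    x, y = rk4(x, y, 1e-3, math.pi / 2.0)
radius = math.hypot(x, y)  # stays at 1: the projected orbit is a circle
```

The radius is conserved to within the integration error, confirming the closed circular orbit in the quantum limit.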
The direction and form (circle or ellipse) of the orbits depend on the phase difference $\phi_x-\phi_y$. Increasing the time for the mesoscopic case (for example $b=16$ and $t_0=10$), the trajectories evolve after some time steps into the classical motion (an ellipse). The orbit is substantially influenced by the initial starting point (small black points in figure $3$) of the particles. In some mesoscopic cases the orbit is not closed for certain time intervals (see figure $3$: $b=0.5$ and $t_0=4$), but no exponential separation of neighboring orbits (chaotic motion) appears. Chaotic motion can arise because of a nodal point (singularity) of the wave function. At the nodal point the quantum potential becomes very negative or approaches negative infinity, which keeps the particles from entering the nodal region (for further details of the Bohmian trajectories, see for example \cite{bloh2}). \subsection{The Hydrogen-like Atom} The hydrogen atom contains a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. The solution of the Schr\"odinger equation in spherical polar coordinates for the hydrogen atom is obtained by separating the variables. The wave function is represented by the product of functions $\psi(r,\theta ,\phi ,t)=R_{n,l}(r)\, Y_l^m(\theta,\phi)\, e^{-i E_n t}$. The solutions can exist only when the constants which arise in the solution are restricted to integer values: the principal quantum number $n$, the angular momentum (azimuthal) quantum number $l$ and the magnetic quantum number $m$. The potential $V$ between two charges is described by Coulomb's law: \begin{equation} V=-\frac{e^2 Z}{4 \pi r \epsilon _0} \end{equation} where $Ze$ is the charge of the nucleus ($Z=1$ being the hydrogen case, $Z=2$ helium, etc.), $e$ is the elementary charge of the single electron and $\epsilon_0$ is the permittivity of vacuum.
With the system consisting of two masses, the reduced mass is defined by: \begin{equation} \mu =\frac{m M}{m+M} \end{equation} where $M$ is the mass of the nucleus and $m$ the mass of the electron. If the nucleus is much more massive than the electron, the reduced mass becomes $\mu \approx m$. The Hamiltonian for the hydrogen atom is: \begin{equation} H=-\frac{\hbar ^2}{2 \mu }\nabla ^2-\frac{e^2 Z}{4 \pi r \epsilon _0}. \end{equation} For simplicity we put the reduced mass $\mu$ of the particles, $\hbar$ as well as $\frac{e^2 Z}{4 \pi \epsilon _0}$ equal to one. Since the potential is spherically symmetric, the Schr\"odinger equation can be solved analytically in spherical polar coordinates. This leads to the time-dependent wavefunction with the associated Laguerre polynomials $L_{-l+n-1}^{2 l+1}(\rho)$ and the spherical harmonics $Y_l^m(\theta ,\phi)$: \begin{equation} \psi (\rho ,\theta ,\phi ,t)= 2\rho ^l e^{-\frac{\rho }{2}-i E_n t}\sqrt{\frac{(-l+n-1)!}{n^4 (l+n)!}} L_{-l+n-1}^{2 l+1}(\rho)Y_l^m(\theta ,\phi ) \end{equation} with the energy eigenstates $E_n$ and the scaled radial variable $\rho =2 r/n$, where the radius $r$ is the distance between the nucleus and the electron. The energy levels $E_n$ of the hydrogen atom depend on the principal quantum number $n$: \begin{equation} E_n=-\frac{Z^2 e^4 \mu}{2 (4 \pi)^2 \epsilon_0^2 \hbar^2 n^2} \end{equation} which in our case simplifies to $E_n=-\frac{1}{2 n^2}$. A stationary state $\psi (\rho ,\theta ,\phi ,t)$ corresponding to the energy $E_n$ leads to the phase function $S$ \begin{equation} S(\rho ,\theta ,\phi ,t)=-E_n t+m \hbar \phi. \end{equation} For each $t$ and $m \neq 0$ the electron rotates about the $z$-axis \cite{Holland}. To give an example of the smooth transition, a stationary state with the quantum numbers $n=2$, $l=1$ and $m=1$ in Cartesian coordinates is used.
In such a case the spherical variables are transformed via $r=\sqrt{x^2+y^2+z^2}$, $\theta=\tan^{-1}\left( \frac{\sqrt{x^2+y^2}}{z } \right)$ and $\phi=\tan^{-1}\left(\frac{y}{x}\right)$, which leads to the simple wavefunction $\psi$ \begin{equation} \psi=-\frac{(x+i y) e^{-\frac{1}{2} \sqrt{x^2+y^2+z^2}+\frac{i t}{8}}}{8 \sqrt{\pi }}. \end{equation} From the wavefunction $\psi$ it follows for the phase function $S$ \begin{equation} S(x,y,z,t)= \tan ^{-1}\left(\frac{x \sin \left(\frac{t}{8}\right)+y \cos \left(\frac{t}{8}\right)}{x \cos \left(\frac{t}{8}\right)-y \sin \left(\frac{t}{8}\right)}\right) \end{equation} and therefore for the $(x,y,z)$-components of the velocity \begin{eqnarray} v_x&=&-\frac{y}{x^2+y^2}\\ \nonumber v_y&=&\frac{x}{x^2+y^2}\\ \nonumber v_z&=&0. \end{eqnarray} The particle rotates with constant speed; the squared speed $v_x^2+v_y^2$ equals $\frac{1}{x^2+y^2}$, so the closer the particle is to the $z$-axis, the faster it moves. From the quantum amplitude $R$ with $R=\frac{\sqrt{x^2+y^2} e^{-\frac{1}{2} \sqrt{x^2+y^2+z^2}}}{8 \sqrt{\pi }}$ the quantum potential $Q$ becomes \begin{equation} Q(x,y,z)=-\frac{x^2 \left(1-\frac{8}{\sqrt{x^2+y^2+z^2}}\right)+y^2 \left(1-\frac{8}{\sqrt{x^2+y^2+z^2}}\right)+4}{8 \left(x^2+y^2\right)}. \end{equation} With the classical potential $V=-\frac{1}{\sqrt{x^2+y^2+z^2}}$ and the logistic function $\lambda (t)$ the smooth transition can be calculated. Figure $6$ shows the classical motion for three orbits with different starting positions, with the initial velocities taken from the wave density and the gradient of the phase function as in the quantum case. In this case the orbits are closed ellipses in $(x,y,z)$-space.
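The $1/(x^2+y^2)$ fall-off of the squared speed can be verified numerically from the velocity components above (a trivial sketch; the function names are ours):

```python
def bohm_velocity(x, y, z):
    # velocity field of the n=2, l=1, m=1 state (hbar = mu = 1); v_z = 0
    d = x * x + y * y
    return (-y / d, x / d, 0.0)

def speed_squared(x, y, z):
    vx, vy, vz = bohm_velocity(x, y, z)
    return vx * vx + vy * vy + vz * vz
```

Here `speed_squared(x, y, z)` equals `1 / (x**2 + y**2)` for any $z$, so trajectories closer to the $z$-axis circulate faster, and the motion is independent of $z$.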
Ignoring the Pauli exclusion principle, which states that two or more identical electrons cannot simultaneously occupy the same quantum state within a quantum system, figure $7$ shows the ``quantum motion'', represented by three Bohmian trajectories together with the wave density contours and the quantum potential contours. The smooth transition from the quantum to the classical case is shown in figure 8, where it is clearly seen that the quantum motion, which consists of circles in the $(x,y)$-plane perpendicular to the $z$-axis, passes over to the ellipses in $(x,y,z)$-space. \subsection{Remark on the numerical methods} The trajectories are calculated with \textit{Mathematica}'s built-in function \textbf{NDSolve} and compared with different numerical methods offered by the method option of NDSolve, e.g.\ Runge--Kutta methods or the midpoint method. The accuracy goal for the calculations is 7 digits, which specifies how many effective digits of accuracy should be sought in the final result, with a maximum number of 50000 steps to generate a result. \section{Acknowledgement} PG acknowledges financial support from the National Academy of Sciences, India.
\section{Introduction% \label{introduction}% } In recent years Python has gained more and more traction in the scientific community. Projects like Numpy \cite{Numpy}, SciPy \cite{SciPy}, and Matplotlib \cite{Matplotlib} have created a strong foundation for scientific computing in Python, and machine learning packages like Scikit-learn \cite{Scikit-learn} or packages for data analysis like Pandas \cite{Pandas} are building on top of it. Although Python toolboxes like SCoT for EEG source connectivity \cite{Billinger} or MNE-Python for MEG and EEG data analysis \cite{Gramfort} have been published recently, Matlab still seems to be the dominant programming language in the brain-computer interface (BCI) community. A BCI is a system that measures central nervous system activity and translates the recorded data into an output suitable for a computer to use as an input signal. Or slightly less abstract: a BCI is a communication channel that allows for direct control of a computer by the power of thoughts. A BCI system consists of three parts (Figure \DUrole{ref}{bcisystem}): a \emph{signal acquisition} part that is connected to the measuring hardware (e.g. EEG) and provides the raw data to the rest of the BCI system, a \emph{signal processing} part that receives the data from the signal acquisition and translates the data into the intent, and a \emph{feedback/stimulus presentation} part that translates the intent into an action.\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{bci_system.pdf}} \caption{Overview of an online BCI system. \DUrole{label}{bcisystem}} \end{figure} In this paper we present Wyrm, an open source BCI toolbox in Python. Wyrm corresponds to the signal processing part of the aforementioned BCI system. Wyrm is applicable to a wide range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data (e.g.
EEG, ECoG, fMRI, or NIRS) and it is suitable for real-time online experiments. In Wyrm we implemented dozens of methods, covering a broad range of aspects for off-line analysis and online experiments. In the following sections we will explain Wyrm's fundamental data structure and its design principles, give an overview of the available methods of the toolbox, and show where to find the documentation as well as some extensive examples. \section{Design% \label{design}% } All methods in the toolbox revolve around a data structure that is used throughout the toolbox to store various kinds of data. The data structure, dubbed \DUroletitlereference{Data}, is an object containing an n-dimensional numpy array that represents the actual data to be stored, plus some meta data. The meta data describes, for each dimension of the numpy array, the name of the dimension, the names of the single columns and the unit of the data. Let's assume we have previously recorded EEG data. The data was recorded with 20 channels and consists of 30 samples. The data itself can be represented as a 2-dimensional array with the shape (30, 20). The names for the dimensions are 'time' and 'channels', the units are 'ms' and '\#' (we use '\#' as a pseudo unit for the enumeration of things), and the description of the columns would be two arrays: one array of length 30 containing the time stamps for each sample and another array of length 20 containing the channel names. This data structure can hold all kinds of BCI related data: continuous data, epoched data, spectrograms, feature vectors, etc. We purposely kept the meta data at a minimum, as each operation that modifies the data also has to check if the meta data is still consistent. While it might be desirable to have more meta data, this would also lead to more housekeeping code, which makes the code less readable and more error prone. The data structure, however, can be extended as needed by adding new attributes dynamically at runtime.
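To make the description concrete, here is a stripped-down sketch of such a container together with a toolbox-style method that returns a modified copy rather than mutating its input. This is an illustrative pseudo-API of our own, not Wyrm's actual class:

```python
import numpy as np

class Data:
    """Simplified sketch of a Wyrm-like data structure: an n-d array
    plus per-axis metadata (axis values, axis names, and units)."""
    def __init__(self, data, axes, names, units):
        self.data = np.asarray(data)
        self.axes = axes      # per-axis coordinate values (time stamps, channel names, ...)
        self.names = names    # per-axis description, e.g. ['time', 'channel']
        self.units = units    # per-axis unit, e.g. ['ms', '#']

    def copy(self):
        return Data(self.data.copy(), [list(a) for a in self.axes],
                    list(self.names), list(self.units))

def select_channels(dat, channels):
    """Toolbox-style method: returns a new Data object, never modifies its input,
    and keeps the metadata consistent with the reduced array."""
    out = dat.copy()
    idx = [out.axes[-1].index(c) for c in channels]
    out.data = out.data[..., idx]
    out.axes[-1] = list(channels)
    return out
```

A call like `select_channels(dat, ['Cz', 'Pz'])` would leave `dat` untouched and return a copy whose last axis (and its channel-name metadata) contains only the requested channels.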
All toolbox methods are written in a way that they ignore unknown attributes and never throw them away. We tried very hard to keep the syntax and semantics of the toolbox methods consistent. Each method obeys a small set of rules: (1) Methods never modify their input arguments. This allows for a functional style of programming, which is in our opinion well suited when diving into the data. (2) A method never modifies attributes of \DUroletitlereference{Data} objects which are unrelated to the functioning of that method. In particular, it never removes additional or unknown attributes. (3) If a method operates on a specific axis of a \DUroletitlereference{Data} object, it by default obeys a convention about the ordering of the axes, but allows for changing the index of the axis by means of a default argument. \section{Toolbox Methods% \label{toolbox-methods}% } The toolbox contains a few data structures (\DUroletitlereference{Data}, \DUroletitlereference{RingBuffer} and \DUroletitlereference{BlockBuffer}), I/O-methods for loading and storing data in foreign formats and, of course, the actual toolbox algorithms. The list of algorithms includes: channel selection, IIR filters, sub-sampling, spectrograms, spectra, and baseline removal for signal processing; Common Spatial Patterns (CSP) {[}Ramoser{]}, Source Power Co-modulation (SPoC) {[}Dähne{]}, classwise average, jumping means, and signed $r^2$-values for feature extraction; Linear Discriminant Analysis (LDA) with and without shrinkage for machine learning \cite{Blankertz}; and many more. It is worth mentioning that with scikit-learn you have a wide range of machine learning algorithms readily at your disposal. Our data format is very compatible with scikit-learn and one can usually apply the algorithms without any data conversion step at all. The toolbox also includes plotting facilities that make it easy to quickly generate useful plots of neurophysiological data.
Those methods include scalp plots (Figure \DUrole{ref}{scalp}), time courses (Figure \DUrole{ref}{average}), signed $r^2$ plots (Figure \DUrole{ref}{r2}), and more.\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{scalp_subj_b.pdf}} \caption{Example scalp plots. The plots show the spatial topography of the average voltage distribution for different time intervals and channels.} \begin{DUlegend} \DUrole{label}{scalp}\end{DUlegend} \end{figure}\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{eeg_classwise_average.pdf}} \caption{Example time course plots for three selected channels.} \begin{DUlegend} \DUrole{label}{average}\end{DUlegend} \end{figure}\begin{figure}[]\noindent\makebox[\columnwidth][c]{\includegraphics[width=\columnwidth]{eeg_signed_r2.pdf}} \caption{Example signed $r^2$-plots. The channels are sorted from frontal to occipital and within each row from left to right.} \begin{DUlegend} \DUrole{label}{r2}\end{DUlegend} \end{figure} \section{Unit Testing% \label{unit-testing}% } Since the correctness of its methods is crucial for a toolbox, we used unit testing to ensure all methods work as intended. In our toolbox \emph{each} method is covered by at least a handful of test cases to ensure that the methods calculate the correct results, throw the expected errors if necessary, etc. The total amount of code for all tests is roughly 2-3 times bigger than the amount of code for the toolbox methods. \section{Documentation% \label{documentation}% } As a software toolbox would be hard to use without proper documentation, we provide documentation that consists of readable prose and extensive API documentation (\url{http://venthur.github.io/wyrm/}).
Each method of the toolbox is thoroughly documented and usually has a short summary, a detailed description of the algorithm, a list of expected inputs, return values and exceptions, as well as cross references to related methods in- or outside the toolbox and example code to demonstrate how to use the method. \section{Examples% \label{examples}% } To show how to use the toolbox in realistic scenarios, we provide two off-line analysis scripts, in which we demonstrate how to use the toolbox to complete two tasks from the BCI Competition III \cite{BCIComp3}. The first example uses Electrocorticography (ECoG) recordings provided by the Eberhard-Karls-Universität Tübingen. The time series were picked up by an 8x8 ECoG platinum grid which was placed on the contralateral, right motor cortex. During the experiment the subject had to perform imagined movements of either the left small finger or the tongue. Each trial consisted of either an imagined finger or tongue movement and was recorded for a duration of 3 seconds. The recordings in the data set start at 0.5 seconds after the visual cue had ended to avoid visual evoked potentials being reflected in the data. It is worth noting that the training and test data were recorded from the same subject but with roughly one week between both recordings. The data set consists of 278 trials of training data and 100 trials of test data. During the BCI Competition only the labels (finger or tongue movement) for the training data were available. The task for the competition was to use the training data and its labels to predict the 100 labels of the test data. Since the competition is over, we also had the true labels for the test data, so we could calculate and compare the accuracy of our results. For this experiment our classification accuracy was 92\%, which is comparable with the winners of the competition, whose accuracies were 91\%, 87\%, and 86\%.
The second data set uses Electroencephalography (EEG) recordings, provided by the Wadsworth Center, NYS Department of Health, USA. The data were acquired using BCI2000's Matrix Speller paradigm, originally described in \cite{Donchin}. The subject had to focus on one out of 36 different characters, arranged in a 6x6 matrix. The rows and columns were successively and randomly intensified. Two out of 12 intensifications contained the desired character (i.e., one row and one column). The event-related potential (ERP) components evoked by these target stimuli are different from those ERPs evoked by stimuli that did not contain the desired character. The ERPs are composed of a combination of visual and cognitive components. The subject's task was to focus her/his attention on characters (one at a time) in a word that was prescribed by the investigator. For each character of the word, the 12 intensifications were repeated 15 times before moving on to the next character. Any specific row or column was thus intensified 15 times per character, and there were in total 180 intensifications per character. The data was recorded using 64-channel EEG. The 64 channels covered the whole scalp of the subject and were aligned according to the 10-20 system. The collected signals were bandpass filtered from 0.1-60Hz and digitized at 240Hz. The data set consists of a training set of 85 characters and a test set of 100 characters for each of the two subjects. For the training sets the labels of the characters were available. The task for this data set was to predict the labels of the test sets using the training sets and their labels. In this experiment we reached a classification accuracy for single letters of 93.5\%; the winners of the competition reached 96.5\%, 90.5\%, and 90\%.
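The row/column code can be made concrete with a toy decoder. The 6x6 matrix layout and the scoring scheme below are illustrative assumptions of ours, not the competition data format: each character is identified by the intersection of the row and the column whose intensifications evoked the strongest target-like ERP response.

```python
# hypothetical 6x6 speller matrix (the exact layout is an illustrative assumption)
MATRIX = ["ABCDEF",
          "GHIJKL",
          "MNOPQR",
          "STUVWX",
          "YZ1234",
          "56789_"]

def decode_character(scores):
    """scores: mapping from intensification index to the classifier output
    summed over the 15 repetitions; indices 0-5 are rows, 6-11 are columns."""
    best_row = max(range(0, 6), key=lambda i: scores[i])
    best_col = max(range(6, 12), key=lambda i: scores[i])
    return MATRIX[best_row][best_col - 6]
```

Summing the per-repetition classifier outputs before taking the argmax is what makes the 15 repetitions per row/column pay off: the ERP signal accumulates while the noise averages out.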
We also provide an example online experiment where we use the ERP data set with a pseudo amplifier that feeds the data in real-time to the toolbox, to show how to perform the classification task in an online setting. The data sets from the competition are freely available and one can reproduce our results using the scripts and the data. \section{Python 2 vs Python 3% \label{python-2-vs-python-3}% } The ongoing transition from Python 2 to Python 3 was also considered and we decided to support \emph{both} Python versions. Wyrm is mainly developed under Python 2.7, but written in a \emph{forward compatible} way to support Python 3 as well. Our unit tests ensure that the methods provide the expected results in Python 2 and Python 3. \section{Summary and Conclusion% \label{summary-and-conclusion}% } In this paper we presented Wyrm, a free and open source BCI toolbox in Python. We introduced Wyrm's main data structure and explained the design ideas behind the current implementation. We gave a short overview of the existing methods in the toolbox, showed how we utilized unit testing to make sure the toolbox works as specified, and pointed out where to find the extensive documentation and some detailed examples. Together with \hyperref[mushu]{Mushu} \cite{Mushu}, our signal acquisition library, and Pyff \cite{Pyff}, our framework for feedback and stimulus presentation, Wyrm adds the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python. Wyrm is available under the terms of the MIT license; its repository can be found at \url{http://github.com/venthur/wyrm}. \section{Acknowledgements% \label{acknowledgements}% } This work was supported in part by grants of the BMBF: 01GQ0850 and 16SV5839. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements 611570 and 609593.
\section{Introduction} \label{intro} A wireless sensor network (WSN) typically consists of a large number of autonomous devices with limited battery power and memory. Since cheap commodity devices can be used as sensor nodes, it is possible to do large-scale deployment of sensor networks~\cite{WSN, WSN2}. They are ideal for monitoring to detect intruders physically entering a secured or otherwise important region~\cite{he04energy}. Thus, sensor networks have a wide variety of applications in security monitoring, such as protecting water supplies, chemical plants, and nuclear power plants, and in border security and battlefield surveillance. Typically, WSNs are left unattended for efficient, low-cost monitoring. Thus, they are deployed without any physical protection. Also, the nodes in a WSN communicate with each other through a shared wireless medium. These two features make WSNs particularly vulnerable to a variety of attacks~\cite{ATT0, ATT1, ATT2}. Jamming~\cite{FEASJAM, ATTJAM} is a particularly effective attack against WSNs. An intruder can easily place jamming devices in different parts of the network to cause radio interference and thus disrupt communications among the sensor nodes that are in close proximity to the jamming devices. Fig.~\ref{fig:border} shows a border security deployment scenario in which the red sensors have been jammed by the jamming sensors present in the network. \begin{figure}[hpb] \begin{center} \includegraphics[width = 2.75in]{border2} \end{center} \caption{Wireless sensor network for border monitoring} \label{fig:border} \end{figure} A jamming attack effectively creates a denial of service condition in the network. This is a major problem in security monitoring applications, in which the lack of sensor communication means that an intruder can physically enter jammed regions without the threat of being detected.
For example, in a border security setting, a path may be constructed with jamming devices that allows the intruder to cross back and forth across the border, completely bypassing the security perimeter. In this case, denial of service in the network leads to a major breach of physical security. It is critical in these monitoring applications for the base station to learn about and map out the jammed regions quickly and accurately, so as to know where physical security may be threatened and where it may be necessary to increase other security measures like guard patrols or surveillance flights. Wood et al. propose JAM~\cite{JAM}, a jammed area mapping technique that relies on the ability of the nodes to perform a detailed mapping of the jammed region locally. JAM is very effective at mapping out the jammed region. However, it is also a very complex protocol with high message and storage overheads at the nodes, due to its fully decentralized nature. It requires a lot of interaction among the nodes surrounding the jammed regions to estimate the region and correctly put the jammed nodes into groups. In settings such as the border security scenario shown in Fig.~\ref{fig:border}, it is vital to protect the sensor network and to detect the intruder as early as possible while keeping communication overhead low to preserve the sensors' battery lifetime. \noindent {\bf Contributions.} In this paper, we present a model for studying the jammed sensor mapping problem (\S\ref{model}). We are the first to point out that mapping jammed nodes should be done {\em quickly} and {\em efficiently}, not just accurately. This observation suggests a different set of design choices than those made in JAM (briefly described in \S\ref{rel}). In particular, instead of the mapping being performed locally by the sensor nodes, we leverage the powerful base station to gather information from the network and calculate where the jammed regions are.
In this protocol (\S\ref{mapprot}), rather than an exact mapping, we only aim to get an approximation of the jammed area, computed by the central base station. We apply k-means clustering to accurately separate out multiple jammed regions. We then invoke a method based on convex-hull finding algorithms to find the centers for these regions and then apply iterative adjustment to accurately locate and determine the size of each region. This approach relieves the sensors surrounding a jammed area from the communication overhead and power consumption of calculating the jammed regions. We developed a simulator (\S\ref{sim}) to evaluate our system and compare it with JAM in terms of effective mapping and communication overhead. Our results demonstrate that the proposed protocol performs faster mapping, thereby saving substantial message overhead compared with JAM, and it provides reasonable mapping accuracy. We also experiment with the trade-off between the communication overhead of the sensor nodes and the mapping performance of the system. Finally, we run experiments on real sensor motes to see the performance of our jammed region mapping technique (\S\ref{exp}). \section{Related Work} \label{rel} Communication in the WSN in the presence of jamming has been investigated previously. Wormhole-based anti-jamming techniques and timing channels are discussed in~\cite{PERF, cagalj07, Xu2008Antijamming}. Various spread-spectrum communication techniques are used to defend against jamming in wireless networks~\cite{ATTJAM, ATTJAM2, Xu04channelsurfing}. Since large-scale deployments of WSNs mostly use cheap commodity devices as sensor nodes, it is unlikely that the nodes will possess the design complexity to perform spread-spectrum techniques, and they are more likely to use a single frequency. Some works involve detection of radio interference~\cite{RID_RADIO} in WSNs. The packet delivery ratio (PDR) and the measurement of signal strength can be used to detect jamming in WSNs~\cite{FEASJAM}.
Jamming detection by monitoring channel utilization is discussed in JAM~\cite{JAM}, where the sensors decide they are jammed when their channel utility is below a certain threshold. JAM~\cite{JAM} is a complete protocol for mapping the jammed area after detection. In this protocol, a node that discovers that it has been jammed broadcasts {\em jammed} messages to its neighbors informing them that it has been jammed. The neighboring nodes that are outside the jamming range but located near the boundary of the jammed region will be able to receive these messages and initiate the JAM protocol. By this protocol, the boundary nodes map out the jammed area by exchanging messages among themselves regarding the jammed nodes. In our protocol, the central BS performs the jammed region mapping. \section{Model} \label{model} In this section we present the basic characteristics of the network and intruder models that we use for our protocol. \subsection{Network model} We study our system in a homogeneous network model in which all the nodes are stationary, location aware, and have roughly the same sensing capabilities. These sensor nodes possess limited power and utilize wireless channels to communicate with other nodes within their signal range. The sensor nodes close to a jamming device are unable to receive any messages from their neighbors, since the channel is jammed, and are therefore unable to communicate with any of their neighbors. However, the nodes that are on the edge of a jammed region are able to send messages to their un-jammed neighbors outside the jamming range and send notification messages to them once they detect jamming among some of their neighboring nodes. These nodes are called the boundary nodes. The base station (BS) is a distinguished component with much more computational power and communication resources than the sensor nodes in the network.
The BS is also aware of the network topology and the location of the sensor nodes in the network~\cite{location1, location2}. During the deployment phase, flooding messages are sent by the BS to construct a spanning tree rooted at the BS, using breadth-first search. All the nodes within the range of the BS set the BS as their parent and rebroadcast this message to their neighboring nodes. In this way, each node has information about its predecessor nodes and the minimum distance to the BS, which is used for routing by applying the standard routing algorithms and common practices~\cite{route1, Schurgers01energyefficient}. An example network is shown in Fig.~\ref{fig:node}. \subsection{Intruder model} In a traditional wireless communication system, a jammer launches jamming attacks on the physical and data link layers of the WSN with the goal of preventing reception of communications at the receiver end using as little power as possible. In this paper, we consider an intruder who can deploy jamming devices that act as constant jammers. The intruder can place these malicious nodes or jamming devices at arbitrary locations in the network for the purpose of creating a DoS condition or an unmonitored path through the network. We assume that the jamming devices have similar hardware capabilities as the sensor nodes in the network. Acting as constant jammers, they disrupt any communication in the surrounding area by emitting continuous radio signals. They can be implemented using regular wireless devices that continuously send random bits to the channel without following any MAC-layer protocol~\cite{FEASJAM}, thus effectively blocking legitimate traffic sources from using the channel. The range of these jamming devices may vary, and their range is not known to the sensor nodes or the BS. For example, in Fig.~\ref{fig:events}, we present a scenario in which there are five jamming sensors in the network.
The red nodes are the sensor nodes that are jammed and are unable to communicate with the unjammed nodes marked in green. \section{Mapping protocol} \label{mapprot} In this section, we present a detailed description of our protocol for mapping the jammed regions in a WSN. We use the sequence in Fig.~\ref{fig:seq} as an example to guide our discussion. The mapping is performed in a few steps, as described in the following sections. \begin{enumerate} \item In the first step, the boundary nodes on the edge of a jammed region detect jamming in the network (\S~\ref{detect}); then some of them send notifications to the nodes outside the jamming range. \item Some of the nodes that receive notifications report to the BS about the presence of jamming in the network. \item The BS then locates the jammed regions and finally maps those regions based on the information it has received. \end{enumerate} \begin{figure} \begin{center} \subfloat[WSN]{\label{fig:node}\fbox{\includegraphics[width = 1.5in]{nodes}}} \hspace{.1in} \subfloat[Jamming in WSN]{\label{fig:events}\fbox{\includegraphics[width = 1.5in]{events}}} \end{center} \begin{center} \subfloat[Reporter nodes]{\label{fig:rep}\fbox{\includegraphics[width = 1.5in]{selectedNodes}}} \hspace{.1in} \subfloat[Finding jammed regions]{\label{fig:clust}\fbox{\includegraphics[width = 1.5in]{cluster5}}} \end{center} \begin{center} \subfloat[Convex hull of the regions]{\label{fig:hull}\fbox{\includegraphics[width = 1.5in]{selectedAndHull}}} \hspace{.1in} \subfloat[Mapped area]{\label{fig:area}\fbox{\includegraphics[width = 1.5in]{events_regions}}} \end{center} \caption{Mapping sequence.} \label{fig:seq} \end{figure} \subsection{Jamming detection and notification to the BS}\label{detect} A jamming device present in the network jams all the benign nodes within its signal range. We assume that the nodes are able to detect jamming of neighboring nodes. After detecting jamming, some of these nodes send reports to the BS.
We call the nodes that report jamming to the BS {\em reporter} nodes. To keep the overall message overhead to a minimum, not all the nodes that detect jamming become reporter nodes. Whether a node will send a report to the BS depends on the following two decisions. \begin{list}{\labelitemi}{\leftmargin=1em} \item {\bf Decision to become reporter:} When a node detects jamming among its neighbors, it decides with some probability $P_{rep}$ whether or not to become a reporter node. $P_{rep}$ should be determined according to the density of the network and the required accuracy of mapping. Once jamming starts and an unjammed node finds jamming among its neighbor nodes, it starts making a list of these jammed nodes. In Fig.~\ref{fig:rep}, the green nodes are the reporter nodes of the network with $P_{rep} = 0.5$, and the red ones are the jammed nodes that are reported to the BS. \item {\bf Decision to notify neighbors:} The farther a reporter node is located from the BS, the greater the number of messages required for sending jamming notifications to the BS. To reduce this overhead while ensuring good reporting coverage, we use a simple scheme where the reporter node multiplies its distance to the BS by the number of its neighbors and $P_{rep}$. If this value is greater than a threshold value $T_{rep}$, the node will send an alert to its neighbors. If a node is the first one to notify its neighbors, it will send a report to the BS; otherwise it does not. If this value is less than or equal to $T_{rep}$, the node will send the report to the BS without sending any alerts to its neighbors. That is, a reporter node $i$ will alert its neighbors if Eq.~\ref{th_eq} is fulfilled. \begin{equation} Dist_{i} \times Neighbor_{i} \times P_{rep} > T_{rep} \label{th_eq} \end{equation} Here, $Neighbor_{i}$ is the number of neighbors of the reporter node and $Dist_{i}$ is its distance from the BS.
A node that gets an alert from a neighboring reporter node will not send any notification messages to the BS, even if it has selected itself as a reporter node. \indent The equation has been designed so that if a node is far from the base station and has many neighbors, the probability of it sending a message directly to the base station is reduced. The value $T_{rep}$ is determined based on the average distance to the BS and the average number of neighbors for the sensor nodes in the network. There is a trade-off between the cost of sending a notification from a reporter node to the BS and that of sending notifications to the reporter's neighbors. $T_{rep}$ should be chosen such that it reduces the message overhead. \end{list} \subsection{Locating the jammed regions of the network} \label{reg_find} Once the BS starts receiving notifications of jamming from the un-jammed nodes located in various parts of the network, it starts building its own list of known jammed nodes in the network. Since the BS may get notifications of jamming from different parts of the network, it applies a clustering algorithm to decide on the number and location of the jammed regions in the network. For this purpose, we use k-means \cite{KMEANS, KMEANS67}, which is a partition-based clustering algorithm. The geographical positions of the nodes are used as data points, and the Euclidean distance between nodes is used as the distance measure. The output of the algorithm is a set of clusters, which represent the jammed regions in the network. $k$ is one of the inputs to the k-means clustering, and in this case it stands for the number of jammed regions in the network. This value is not known a priori to the BS. If there is no manual assistance available to determine the probable value of $k$, the BS needs to decide on the best value of $k$ based on the information it receives from the nodes in the network.
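The reporter-selection rule described in \S\ref{detect} (probabilistic election followed by the threshold test of Eq.~\ref{th_eq}) can be sketched as follows. This is a minimal sketch; the function and parameter names are illustrative rather than part of the protocol specification.

```python
import random

def reporter_decision(dist_to_bs, num_neighbors, p_rep, t_rep, rng=random.random):
    """Sketch of the two reporter decisions made by a node that has
    detected jamming among its neighbors.

    Returns 'silent' if the node does not elect itself as a reporter,
    'alert_neighbors' if it is a reporter far from the BS / well
    connected (product exceeds T_rep), and 'report_bs' otherwise.
    """
    # Decision 1: become a reporter with probability P_rep.
    if rng() >= p_rep:
        return 'silent'
    # Decision 2: compare Dist_i * Neighbor_i * P_rep against T_rep.
    if dist_to_bs * num_neighbors * p_rep > t_rep:
        return 'alert_neighbors'
    return 'report_bs'
```

A reporter whose product falls below the threshold is cheap to reach from the BS, so it reports directly; an expensive reporter first alerts its neighbors so that only the first alerter reports.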
Since the quality of the clustering by k-means greatly depends on the selection of initial centroids, the BS runs the algorithm for the maximum value of $k$ a certain number of times, each time using $k$ random points as initial centroids. That is, the BS starts from $k = k_{max}$ and picks $k$ random nodes from its list of jammed nodes to serve as the $k$ initial centroids. The BS then runs the k-means algorithm a fixed number of times for this value of $k$ and picks the best result, i.e., the one with the lowest sum of squared errors ($SSE$). For the next round, it merges the two closest clusters from the previous best result and generates a new centroid from the mean of the centroids of these two clusters. The BS then uses this new centroid and the other $k-2$ centroids as initial centroids for the next round of k-means with $k-1$ clusters. Except for the first round with the maximum value of $k$, the BS runs the k-means algorithm only once for each decrement of $k$, until $k=1$. After generating the $k_{max}$ clustering results by the process described above, the BS decides on the optimal value of $k$ from the results. To do this, the BS compares the improvements in the $SSE$ value for each of the clustering results from $k = 2$ to $k_{max}$ as the improvement $Imp_{k}$ -- \begin{equation} Imp_{k} = ({SSE_{k-1} - SSE_{k}}) / SSE_{k-1} \end{equation} \noindent If $Imp_{k}$ is negative for a given value of $k$, the clustering results for that $k$ are discarded. This follows the intuition that $SSE$ should always decrease as $k$ increases. From the remaining clustering results, starting from the lowest value of $k$, the BS checks the nodes belonging to the clusters for that value of $k$ and accepts the clustering result if all the nodes are within two standard deviations from the mean of their cluster. If any of the nodes is more than two standard deviations away from the mean, the clustering is discarded.
Otherwise, this $k$ is decided to be the number of jammed regions in the network. When the value of $k$ decided by the BS is less than or greater than the original number of jammed regions in the network, one of the following two situations occurs: \begin{itemize} \item $k <$ number of jammed regions: If $k$ is less than the number of jammed regions in the network, multiple regions can be grouped into a single region. This causes more un-jammed nodes to be inside a mapped region, thereby generating a larger false positive (FP) rate. \item $k >$ number of jammed regions: If $k$ is greater than the number of jammed regions in the network, a single region can be divided into two or more regions. This will create multiple small regions in place of a single larger one, thus leaving some jammed nodes outside the mapped areas and causing a larger false negative (FN) rate. \end{itemize} \subsection{Jammed region mapping} After the BS first decides on the number and location of the jammed regions present in the network, it maps the estimated jammed area for each region. By mapping the regions, the BS works as a classifier for a two-class prediction problem, where the outcomes are labeled as positive (p) or negative (n). The jammed nodes are labeled as positive and the un-jammed nodes are labeled as negative. If a node inside a mapped region is actually jammed, then it is a true positive (TP); however, if the node inside the mapped region is un-jammed, then the outcome is a false positive (FP). Conversely, a true negative (TN) occurs when an un-jammed node falls outside the mapped regions, and a false negative (FN) occurs when a jammed node falls outside the mapped regions. Also, the positive nodes \emph{known} to the BS are the jammed nodes that were reported to the BS by the reporter nodes, and the negative nodes \emph{known} to the BS are the reporter nodes themselves.
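The $k$-selection criteria of \S\ref{reg_find} (the $SSE$ improvement $Imp_{k}$ and the two-standard-deviation acceptance test) can be sketched as follows. The sketch assumes that "within two standard deviations from the mean of the cluster" refers to each member's distance to the cluster centroid, which is one plausible reading of the text.

```python
from statistics import mean, pstdev

def sse_improvements(sse_by_k):
    """Imp_k = (SSE_{k-1} - SSE_k) / SSE_{k-1}, for each k whose
    predecessor k-1 is also present. `sse_by_k` maps k to the best
    SSE found for that k."""
    return {k: (sse_by_k[k - 1] - sse_by_k[k]) / sse_by_k[k - 1]
            for k in sorted(sse_by_k) if k - 1 in sse_by_k}

def cluster_acceptable(points, centroid):
    """Accept a cluster if every member's distance to the centroid
    lies within two standard deviations of the mean distance
    (an interpretive assumption, see the lead-in)."""
    dists = [((x - centroid[0]) ** 2 + (y - centroid[1]) ** 2) ** 0.5
             for x, y in points]
    mu, sd = mean(dists), pstdev(dists)
    return all(abs(d - mu) <= 2 * sd for d in dists)
```

Starting from the smallest $k$ with a non-negative improvement, the BS would accept the first clustering whose clusters all pass the acceptance test.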
The mapping is performed based on the fact that jamming devices (using an omni-directional antenna) usually create circular jammed regions in the network. The output of the mapping protocol is shown in Fig.~\ref{fig:area}, which shows the original network with the mapping by the BS. To map the region for each of the jammed areas found by the BS, it first determines the center for that particular region, then fits the jammed nodes in a circular region, and finally moves and resizes this region to increase the number of known TP while keeping the number of known FP to a minimum. The steps are as follows: \begin{list}{\labelitemi}{\leftmargin=1em} \item {\bf Determining the center for the region:} The BS first determines the center ($C_{j}$) for a region. If the number of nodes in a cluster is less than $3$, the center is the mean of the points in the cluster. Otherwise, the BS finds the convex hull of the cluster, and the mean of the vertices of this hull is used as the center for that region. The reason for finding the convex hull is to minimize the effect of the center ($C_{j}$) being biased toward the side of the region with higher density. We use the Graham scan algorithm to find the convex hull. \item {\bf Mapping the region:} After finding the centroid ($C_{j}$) of the jammed nodes belonging to a region, the BS takes the largest distance between any two jammed nodes of this region as the diameter of a circle centered at $C_{j}$ and finds the number of known jammed nodes (TP) and reporter nodes (known FP) that are inside this circle. It also calculates the center ($C_{i}$) of the known FP nodes (Fig.~\ref{fig:areamap2}). \item {\bf Improvement in mapping:} To improve the mapping, the BS moves $C_{j}$ one step (away from $C_{i}$) at a time, alternating between moves along the vertical and horizontal axes. It continues to move the circle until either a known jammed node goes out of the circular region or the number of known FP increases.
If new TP nodes are added to the region, the BS increases the diameter of the circle by a factor to absorb more jammed nodes. Fig.~\ref{fig:areamap2} shows the final area mapped by the BS, with the new centroid for the jammed nodes located at $C_{j}'$, based on the information collected on the jammed nodes from the reporter nodes. Here, the orange nodes are true positives (TP), the green nodes are false positives (FP), and the red nodes are false negatives (FN) among the nodes found as jammed by the BS. \begin{figure} \begin{center} \subfloat[Initial mapping]{\label{fig:initial}\fbox{\includegraphics[width = 1.5in]{mapping_sequence1}}} \hspace{.1in} \subfloat[Final mapping]{\label{fig:final}\fbox{\includegraphics[width = 1.5in]{mapping_sequence2}}} \end{center} \caption{Jammed region mapping by the BS.} \label{fig:areamap2} \end{figure} \end{list} \noindent In the case of jammed regions that are asymmetric or non-circular, the BS can use the result of the previously calculated convex hull to map the region. \section{Simulation} \label{sim} We built a simple simulator to evaluate our proposed system. We use the results to evaluate the performance of our system and to compare it with JAM~\cite{JAM}. Our comparison is based on two metrics: 1) the time to perform the mapping of all the jammed regions present in the network, and 2) the number of messages exchanged among the sensor nodes to do the mapping. Also, to evaluate the performance of our system, we measure the mapping done by the BS in terms of true positives, false positives, and false negatives (increasing the node selection probability reduces the number of false negatives and false positives). In the following sections we describe the experimental setup, the methods for comparison with JAM, and then the performance of the system.
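The center-finding and initial circle-fitting steps of the mapping protocol above can be sketched as follows. The sketch uses Andrew's monotone chain for the convex hull, whereas the protocol uses Graham scan; both produce the same hull vertices.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in
    counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return lower[:-1] + upper[:-1]

def initial_region(jammed):
    """Center C_j as the mean of the hull vertices (or of all points
    for clusters smaller than 3), and radius as half the largest
    pairwise distance between jammed nodes, as in the text."""
    base = jammed if len(jammed) < 3 else convex_hull(jammed)
    cx = sum(x for x, _ in base) / len(base)
    cy = sum(y for _, y in base) / len(base)
    diam = max(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for ax, ay in jammed for bx, by in jammed)
    return (cx, cy), diam / 2
```

The iterative adjustment then shifts this circle away from the centroid of the known FP nodes, one axis-aligned step at a time, re-counting known TP and FP after each move.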
\subsection{Experimental setup} We built the simulator in C++ to simulate a border region with dimensions of $200 \times 200$ units. In this simulator, we study a WSN composed of between $500$ and $1000$ nodes. These nodes are randomly deployed in this area, with one node near the upper horizontal border of the network serving as the BS (Fig.~\ref{fig:initial}), to which all the other nodes report. During the deployment phase of the network, the neighbor discovery and path setup to the BS is performed for each node. The nodes have a fixed signal radius ($10$ -- $20$ units) and have $7$ -- $17$ neighbors on average. During our simulation of events, an intruder places jamming devices randomly in the network. These devices have larger signal radii ($17$ -- $27$ units) than the sensor nodes, and this range can differ among the individual jamming devices in the network. When any of the nodes in the network is jammed, the neighboring nodes are notified by the detection process. When a working node discovers jamming among its neighbors, it decides to become a reporter node with probability $P_{rep}$. \subsubsection{Comparison with JAM} We compare the performance of JAM~\cite{JAM} and our proposed system. To do this, we simulated both JAM and our system under identical network conditions. \newline \noindent {\bf Time to map:} We measured the time to map all the jammed regions in the network by the BS. For JAM, the time to get the mapping is measured by the number of coalitions of jammed groups that occur at a mapping node. This number determines the number of rounds of build messages that are sent by the mapping nodes before the final mapping result is sent to the BS. By this protocol, after the mapping is done, there should ideally be one dominant group of mapped (jammed) members, and only one mapping node sends this information to the BS.
For our system, the total time to map is calculated by the number of alert messages a reporter node sends to its neighbors, which determines the time it takes for the BS to receive all the notifications of jammed nodes in the network. \newline \noindent {\bf Message overhead:} The final goal is to have the BS learn about all the jammed regions in the network. To achieve this, during simulation, after the mapping is done according to JAM, exactly one node (ideally the creator of the dominant group) from each jammed region sends a message to the BS containing information on the related mapped nodes. In our protocol, the overhead is the sum of the number of messages required to be sent by all the reporter nodes to notify the BS. For the first set of experiments, we consider networks of three different densities -- $600$, $800$, and $1000$ nodes. For each of these settings, the simulation results are calculated by running JAM and our system with $P_{rep} = 0.2$, $0.4$, and $0.6$. For each of these cases, the simulation has been run $100$ times, and the results are calculated as the arithmetic mean of these simulations. The box-and-whisker graph in Fig.~\ref{fig:comp_time} shows the time to get the final mapping result for JAM and our system for three different probabilities and three different densities of the network. In the bar graph in Fig.~\ref{fig:comp_msg}, we compare the overhead of the two schemes according to the number of messages. \begin{figure} \begin{center} \subfloat[]{\fbox{\includegraphics[width = 3in]{time_overhead}}} \end{center} \caption{Comparison with JAM on time to map jammed regions.} \label{fig:comp_time} \end{figure} \begin{figure} \begin{center} \subfloat[]{\fbox{\includegraphics[width = 2.25in]{msg_overhead}}} \end{center} \caption{Comparison with JAM on message overhead.} \label{fig:comp_msg} \end{figure} \subsection{Performance of the system} In the second set of experiments, we evaluate the performance of our system.
We measure performance while varying the probability $P_{rep}$ of a node becoming a reporter from $0.2$ to $0.6$. We present performance in terms of \emph{precision} and \emph{recall}. These two values are computed from the numbers of true positive, false positive, and false negative nodes the BS identifies while performing the mapping of the jammed nodes. Here, precision (Eq.~\ref{pre}) gives the fraction of real jammed nodes among all the nodes classified as jammed by the BS. On the other hand, recall (Eq.~\ref{rec}) gives the fraction of the jammed nodes present in the network that are mapped by the BS. Here, TP, FP, and FN stand for the true positive, false positive, and false negative counts for the jammed nodes identified by the BS, respectively. \begin{equation} Precision =\frac{TP}{TP+FP} \label{pre} \end{equation} \begin{equation} Recall =\frac{TP}{TP+FN} \label{rec} \end{equation} We ran the experiment with three different network densities -- $600$, $800$, and $1000$ nodes. The results from the second set of experiments are shown in Table~\ref{precall}. Recall improves with increasing probability because the number of reporter nodes is higher: the BS has more information about the jammed nodes, which helps it include more jammed nodes in each region. The improvement in precision occurs at a slower rate because, even with more information on the jammed nodes, the BS can include some unjammed nodes in the clusters that are unknown to it.
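Given the set of nodes the BS maps as jammed and the set of truly jammed nodes, the counts behind Eqs.~\ref{pre} and \ref{rec} can be computed as in this small sketch (the node identifiers are illustrative):

```python
def confusion_counts(mapped, jammed, all_nodes):
    """Count TP/FP/FN/TN from the mapped set and the ground truth."""
    tp = len(mapped & jammed)           # jammed nodes inside mapped regions
    fp = len(mapped - jammed)           # un-jammed nodes inside mapped regions
    fn = len(jammed - mapped)           # jammed nodes left outside
    tn = len(all_nodes - mapped - jammed)
    return tp, fp, fn, tn

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)
```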
\begin{table}[tp] \begin{center} \caption{Precision and recall} \begin{tabular}{|c||c|c|c|} \hline & Probability & Precision & Recall \\ \hline\hline 600 nodes & \begin{tabular}{c} 0.2 \\ 0.4 \\ 0.6 \\ \end{tabular} & \begin{tabular}{c} 0.77 \\ 0.81 \\ 0.82 \\ \end{tabular} & \begin{tabular}{c} 0.51 \\ 0.73 \\ 0.75 \\ \end{tabular} \\ \hline\hline 800 nodes & \begin{tabular}{c} 0.2 \\ 0.4 \\ 0.6 \\ \end{tabular} & \begin{tabular}{c} 0.79 \\ 0.83 \\ 0.84 \\ \end{tabular} & \begin{tabular}{c} 0.56 \\ 0.73 \\ 0.80 \\ \end{tabular} \\ \hline\hline 1000 nodes & \begin{tabular}{c} 0.2 \\ 0.4 \\ 0.6 \\ \end{tabular} & \begin{tabular}{c} 0.83 \\ 0.84 \\ 0.87 \\ \end{tabular} & \begin{tabular}{c} 0.59 \\ 0.75 \\ 0.82 \\ \end{tabular} \\ \hline \end{tabular} \label{precall} \end{center} \end{table} \section{Experiment} \label{exp} In this section we describe a set of experiments that we conducted using real sensor motes to test the performance of the jammed region mapping technique. We use Crossbow's TelosB motes (TPR2400), which are a common platform for low-power wireless sensor network research, and TinyOS 2.0 for programming. \subsection{Setup} For our experiments we use a total of $50$ motes, of which $49$ are used for the network and one is programmed as a jamming device. We place the motes in a $7 \times 7$ grid, with adjacent motes $8''$ apart. Table~\ref{exp_env} summarizes the network settings. The network area is a square of size $48'' \times 48''$. Since the default radio range of the sensor motes is quite high, we set the \texttt{CC2420\_DEF\_RFPOWER} parameter of the motes to $1$, $2$, and $3$ and found $9''$, $20''$, and $213''$ as the corresponding ranges. For our indoor experiments, we used the lowest range (\texttt{CC2420\_DEF\_RFPOWER} $= 1$).
\begin{table} \begin{center} \caption{Basic setup of the network} \begin{tabular}{ | l | p{1.5cm}|} \hline Size of Network & 49\\ \hline Average number of neighbors & 5.22\\ \hline Total number of jammed nodes & 10\\ \hline Sensor node signal range & $9''$\\ \hline Jammer signal range & $9''$\\ \hline Average number of jammed regions & 1\\ \hline \end{tabular} \label{exp_env} \end{center} \end{table} \subsubsection{Jammer node} To make a regular sensor node work as the jammer, we bypass the MAC protocol for that mote by disabling channel sensing and the radio back-off operation. This allows the jammer mote to send continuous signals and jam reception at all the motes that are within its transmission range. \subsubsection{Neighbor setup} Each sensor node in the network sends a beacon packet to its neighbors at regular time intervals ($0.4$\,s). We determine the neighbors of a node by setting a threshold on the number of messages it receives from another node per unit time. \subsubsection{Detection} The beacon packets sent by the motes to their neighbors are used to detect jamming in the network. When the jammer node is on, the jammed sensors fail to receive the regular beacon packets from their neighbors and are thus able to detect that they are being jammed. The boundary nodes of the jammed region notify their un-jammed neighbors that they are jammed, and the un-jammed nodes start the mapping protocol. \subsection{Simulation} We give this network setup, with each node and its corresponding neighbors and the list of jammed nodes from the experiment, as input to our simulator (\S\ref{sim}) and observe how our region mapping protocol performs for actual jammed regions. Since the jammed regions that are generated are of irregular sizes, we used a convex-hull finding algorithm for the final mapping, instead of the circular shapes.
For each experiment, we run our simulation of region mapping 1000 times to measure the performance, selecting random reporter nodes for each round. The performance in terms of precision and recall is presented in Table~\ref{exp_result}.
\begin{table} \begin{center} \caption{Results from simulation of the network} \begin{tabular}{ | l | p{1.5cm} | p{1.5cm} |} \hline Prob. of selection & Precision & Recall\\ \hline\hline 0.4 & 0.50 & 0.25\\ \hline 0.6 & 0.81 & 0.54\\ \hline 0.8 & 0.94 & 0.78\\ \hline \end{tabular} \label{exp_result} \end{center} \end{table}
\section{Conclusion} \label{conc}
Jamming is a critical attack against WSNs that can lead to a denial-of-service (DoS) condition, preventing the network from monitoring for intruders; similar attacks threaten other application scenarios as well. An attacker can also jam random parts of a network to create a path for moving back and forth through it, a critical security breach. It is important to map out the jammed regions so that this information can be used in the network for routing and power management, and for taking reactive measures to deal with unmonitored regions. Thus, mapping the jammed regions helps the network deal with jamming and take effective countermeasures. \par In this paper, we proposed an efficient mapping protocol that maps the jammed regions in a network by having the base station compute an approximate mapping. This relieves the sensor nodes from sending many mapping messages and draining their battery power. The mapping results can be improved by having more nodes send jamming notification messages to the base station, which creates a trade-off between mapping accuracy and network overhead. Our simulation results demonstrate that this system requires fewer interactions among the sensor nodes than previous work, and thus has lower overhead and faster mapping.
\par We developed our intruder model assuming jamming devices placed randomly in the network, creating circular interference patterns. In future work, we will study the mapping of jammed regions produced by jamming devices with more asymmetric and irregular signal ranges. We will investigate the application of improved k-means algorithms~\cite{KMEANSIMP1, KMEANSIMP2}, to obtain a better selection of the initial centroids and improved clustering results in the presence of clusters of irregular size, density, and shape. \balance \bibliographystyle{plain}
\section{Introduction}\label{sec:introduction} The aim of this paper is to assess the generation and promotion of turbulence in oscillatory magnetohydrodynamic (MHD) duct flows. Motivation stems from proposed designs of dual purpose tritium breeder/coolant ducts in magnetic confinement fusion reactors \cite{Abdou2015blanket}. These coolant ducts are plasma facing, hence subjected to both high temperatures and a strong pervading transverse magnetic field \citep{Smolentsev2008characterization}. At the same time, obtaining turbulent heat transfer rates is crucial to the long term operation of self-cooled duct designs \citep{Moreau2010flow}. This can be achieved by keeping the flow turbulent. Various strategies to promote turbulence in MHD flows include: the placement of physical obstacles of various cross section \citep{Cassels2016heat, Hussam2011dynamics, Hussam2012enhancing}, inhomogeneity in electrical boundary conditions \citep{Buhler1996instabilities}, electrode stimulation \citep{Hamid2016combining, Hamid2016heat} and localized magnetic obstacles \citep{Cuevas2006flow}. The approach to promote turbulence taken in this work is to superimpose a time periodic flow, of specified frequency and amplitude, onto an underlying steady flow. The benchmark used, particularly in the linear analysis, is the critical Reynolds number for the steady flow. The goal is to obtain the greatest reduction in the critical Reynolds number (considered as the degree of destabilization) with the addition of a time periodic flow component, of optimized frequency and amplitude. Ultimately, this approach seeks an estimate of the lowest Reynolds number at which turbulence may be incited and sustained by the addition of a pulsatile component to the base flow. In MHD flows, the predominant action of the Lorentz force on the electrically conducting fluid is to diffuse momentum along magnetic field lines \citep{Davidson2001introduction, Sommeria1982why}. 
When the Lorentz force dominates both diffusive and inertial forces, the flow becomes quasi-two-dimensional (Q2D) \citep{Potherat2010direct, Thess2007transition, Zikanov1998direct}. In the limit of quasi-static Q2D MHD, the magnetic field is imposed, and the Lorentz force dominates all other forces far from walls normal to the field. Three dimensionality only remains when asymptotically small in amplitude, or in regions of asymptotically small thickness. The boundary layers remain intrinsically three dimensional. Hartmann boundary layers form on walls perpendicular to magnetic field lines, with a thickness scaling as $\Har^{-1}$ \citep{Potherat2015decay, Sommeria1982why}, while the thickness of parallel wall Shercliff boundary layers scales as $\Har^{-1/2}$ \citep{Potherat2007quasi}. The Hartmann number $\Har = aB(\sigma/\rho\nu)^{1/2}$ represents the square root of the ratio of electromagnetic to viscous forces, where $a$ is the distance between Hartmann walls, $B$ the imposed magnetic field strength, and $\sigma$, $\rho$ and $\nu$ the incompressible Newtonian fluid's electrical conductivity, density and kinematic viscosity, respectively. Nevertheless, although not asymptotically small, three dimensionality in Shercliff layers remains small enough for Q2D models to represent them with high accuracy \cite{Potherat2000effective}. The remaining core flow is uniform and well two-dimensionalized, in fusion relevant regimes \citep{Smolentsev2008characterization}. A Q2D model proposed by Ref.~\citep{Sommeria1982why} (hereafter the SM82 model) is applied, which governs flow quantities averaged along the magnetic field direction. In the Q2D setup, the Hartmann walls are accounted for with the addition of linear friction acting on the bulk flow, valid for laminar Hartmann layers \citep{Sommeria1982why}. 
Shercliff layers still remain in the averaged velocity field, even in the quasi-static limit of a dominant Lorentz force, of thickness scaling as $H^{-1/2}$ \citep{Potherat2007quasi}, where $H = 2(L/a)^2\Har$ is the friction parameter, and $L$ the characteristic wall-normal length. The accuracy of the SM82 model is well established for the duct problem \citep{Cassels2019from3D, Kanaris2013numerical,Muck2000flows}, with less than $10$\% error between the quasi-two-dimensional and the three-dimensional laminar boundary layer profiles \citep{Potherat2000effective}. The linear stability of steady Q2D duct flow was first analysed by Ref.~\citep{Potherat2007quasi}. As the magnetic field is strongly stabilizing, the critical Reynolds number for a steady base flow, beyond which modal instabilities grow, scales as $\Rey_\mathrm{crit,s} = 4.835\times10^4 H^{1/2}$ for $H \gtrsim 1000$ \citep{Camobreco2020transition, Potherat2007quasi, Vo2017linear}. The Reynolds number $\Rey = U_0 L/\nu$ represents the ratio of inertial to viscous forces. In this work, both transient and steady inertial forces will be encapsulated in $U_0$, a characteristic velocity based on both the steady and oscillating flow components. Instability occurs via Tollmien--Schlichting (\TS) waves originating in the Shercliff layers. The instabilities become isolated at the duct walls with increasing magnetic field strength \citep{Camobreco2020transition, Potherat2007quasi}, eventually behaving as per an instability in an isolated exponential boundary layer \citep{Camobreco2020role, Camobreco2020transition, Potherat2007quasi}. To the authors' knowledge, oscillatory or pulsatile Q2D flows are yet to be analysed under a transverse magnetic field. Weak in-plane fields have been analysed for oscillatory flows, although pulsatility was not considered \citep{Thomas2010linear, Thomas2013global}. 
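For orientation, the steady-flow benchmarks quoted above are easy to tabulate. The sketch below simply evaluates the quoted scaling $\Rey_\mathrm{crit,s} = 4.835\times10^4 H^{1/2}$ (valid for $H \gtrsim 1000$) alongside the Shercliff-layer thickness scaling $H^{-1/2}$ (in units of $L$); the particular $H$ values are illustrative only.

```python
# Illustrative evaluation of the quoted steady-flow scalings:
#   Re_crit,s ~ 4.835e4 * H^(1/2)   (modal instability threshold, H >~ 1000)
#   delta_Shercliff ~ H^(-1/2)      (nondimensional boundary-layer thickness)

def re_crit_steady(H):
    """Critical Reynolds number of the steady Q2D duct flow (quoted scaling)."""
    return 4.835e4 * H ** 0.5

for H in (1.0e3, 1.0e4, 1.0e5):
    print(H, re_crit_steady(H), H ** -0.5)
```

The tenfold rise in threshold per hundredfold rise in $H$ makes concrete how strongly the friction term stabilizes the steady flow, which is the benchmark the pulsatile destabilization is measured against.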
The destabilization of hydrodynamic plane channel flows with the imposition of an oscillating flow component was first convincingly assessed by Ref.~\citep{Kerczek1982instability}. Using series expansions to evaluate Floquet exponents, the range of frequencies that induce destabilization was determined. Womersley numbers $1\leq \Wo \lesssim 13$ were destabilizing and $\Wo \geq 14$ stabilizing, for low Reynolds numbers and pulsation amplitudes, where the Womersley number $\Wo=(\omega L^2/\nu)^{1/2}$ characterizes the square root of transient inertial to viscous forces, and where $\omega$ is the pulsation frequency. The problem was revisited with advanced computational power and techniques \citep{Thomas2011linear, Pier2017linear}. However, even large-scale Floquet matrix problems struggled to adequately resolve larger-amplitude pulsations \citep{Thomas2011linear, Pier2017linear}, as the required number of Fourier modes rapidly increases with increasing pulsation amplitude. Instead, direct forward evolution of the linearized Navier--Stokes equations is required. Improved bounds for destabilizing frequencies of $5\leq \Wo < 13$ were determined \citep{Pier2017linear}, with the optimum frequency for destabilization at $\Wo=7$. The optimized amplitude ratio for the pulsation was also found to be near unity (steady and oscillatory velocity maxima of equal amplitude) at lower frequencies \citep{Thomas2011linear}. In addition, a small destabilization was observed at very high frequencies, for small pulsation amplitudes. Although Ref.~\citep{Thomas2011linear} did not focus on obtaining the maximum destabilization, an approximately $33$\% reduction in the critical Reynolds number (relative to the steady result) was observed at the lowest frequency tested, near an amplitude ratio of unity.
Further improvement, with an approximately $57$\% reduction in the critical Reynolds number \cite{Thomas2015linear}, was attained by the imposition of an oscillation with two modes of different frequencies. Given the size of the parameter space, there remains significant potential to further destabilize both hydrodynamic and MHD flows, with single-frequency optimized pulsations. At lower frequencies the perturbation energy varies over several orders of magnitude within a single period of evolution \citep{Pier2017linear, Singer1989numerical}. This intracyclic growth and decay predominantly occurs during the deceleration and acceleration phases of the base flow, respectively. The intracyclic growth increases exponentially with increasing pulsation amplitude \citep{Pier2017linear}. At smaller pulsation amplitudes, a `cruising' regime \citep{Pier2017linear} has been identified, where the perturbation energy remains of similar nonlinear magnitude throughout the entire cycle. At larger pulsation amplitudes, and at smaller frequencies, a `ballistic' regime \citep{Pier2017linear} was identified, where the perturbation energy varies by many orders of magnitude over the cycle, and is propelled from a linear to nonlinear regime through this growth. However, in full nonlinear simulations of Stokes boundary layers, an incomplete decay of the perturbation over one cycle is observed \citep{Ozdemir2014direct}. This has little effect on growth in the next cycle, thereby leading to either an intermittent or sustained turbulent state \citep{Ozdemir2014direct}. Thus, ballistic regimes form an enticing means to sustain turbulence under fusion relevant conditions. To assess the effectiveness of this strategy, we must understand the conditions of transition to turbulence in a duct flow pervaded by a strong enough magnetic field to assume quasi-two-dimensionality.
Specifically, this paper seeks to answer the following questions: \begin{itemize} \item{Will superimposing an oscillatory flow onto an underlying steady base flow still be effective at reducing the critical Reynolds number in high $H$, fusion relevant, regimes?} \item{What pulsation frequencies and amplitudes are most effective at destabilizing the flow, both hydrodynamically, and toward fusion relevant regimes?} \item{Are the parameters at which reductions in $\ReyCrit$ are observed viable for both SM82 modelling, and fusion relevant applications?} \item{Are reductions in $\ReyCrit$ sufficient to observe turbulence at correspondingly lower $\Rey$?} \end{itemize} This paper proceeds as follows: In Sec.~\ref{sec:prob_set}, the problem is nondimensionalized, and the base flow for the duct problem derived in the SM82 framework. Particular focus is placed on the dependence of the base flow on all four nondimensional parameters. Pressure- and wall-driven flows are compared, before determining the bounds for validity of the SM82 approximation for pulsatile flows. In Sec.~\ref{sec:lin_for}, the linear problem is formulated, and both the Floquet and timestepper methods are introduced. The long-term stability behavior is considered in Sec.~\ref{sec:lin_res1}, with particular focus on the optimal conditions for destabilization. Intracyclic growth and the linear mode structure are analysed in more detail in Sec.~\ref{sec:lin_res2}. Sec.~\ref{sec:nlin_all} focuses on targeted direct numerical simulations (DNS) of the optimized pulsations. Emphasis is placed on comparing linear and nonlinear evolutions, and symmetry breaking induced by nonlinearity. \section{Problem setup}\label{sec:prob_set} \subsection{Geometry and base flows} \label{sec:geom} \begin{figure}[h!] 
\centering \begin{tikzpicture} \node [inner sep=0pt] (img) {\includegraphics[width=0.435\columnwidth]{Fig1a-eps-converted-to.pdf}}; \draw[dashed,line width = 0.2mm, dash pattern=on 3pt off 5pt] (-3.6,-2.2) -- (-3.6,2.2); \draw[line width = 0.4mm] (-3.6,-2.18) -- (3.6,-2.18); \draw[dashed,line width = 0.2mm, dash pattern=on 3pt off 5pt] (3.6,-2.2) -- (3.6,2.2); \draw[line width = 0.4mm] (-3.6,2.18) -- (3.6,2.18); \node at (-2.5,1.2) [circle,draw=black,fill=black,inner sep=0.4mm] {}; \node at (-2.5,1.2) [circle,draw=black,inner sep=2mm] {}; \node at (-2,1.2) {$\check{\vect{B}}$}; \node at (0,-2.55) {$\check{u}= U_2\cos(\omega \,\check{t})$, $\check{v}=0$, $\hat{\vect{u}}=0$}; \node at (0,2.55) {$\check{u}= U_2\cos(\omega \,\check{t})$, $\check{v}=0$, $\hat{\vect{u}}=0$}; \fill[pattern=north east lines, pattern color=black] (-3.62,2.3) rectangle (3.62,2.2); \fill[pattern=north east lines, pattern color=black] (-3.62,-2.3) rectangle (3.62,-2.2); \draw[line width = 0.2mm,<->] (-3.6,-2.8) -- (3.6,-2.8); \draw[line width = 0.2mm,<->] (3.8,-2.2) -- (3.8,2.2); \node at (4.2,0) {$2L$}; \node at (0,-3.1) {$2\pi/\alpha$}; \draw[line width = 0.4mm,->] (-3.6,0) -- (-2.8,0) node[anchor=north west] {}; \draw[line width = 0.4mm,->] (-3.6,0) -- (-3.6,0.8) node[anchor= west] {$y$}; \node at (-3.6,0) [circle,draw=black,inner sep=0.6mm,line width = 0.4mm] {}; \node at (-2.8,-0.2) {$x$}; \node at (-3.75,-0.2) {$z$}; \end{tikzpicture} \caption{A schematic representation of the system under investigation. Solid lines denote the oscillating, impermeable, no-slip walls. Short dashed lines indicate the streamwise extent of the periodic domain, defined by streamwise wave number $\alpha$. 
Examples of the steady base flow component ($\UoneB(y)$; dashed line) and the normalized total pulsatile base flow ($(1+1/\Gamma)U(y,t)$; 11 colored lines over the full period, $2\pi$) are overlaid, at $H=10$, $\Gamma=10$, $\Sr = 5\times10^{-3}$ and $\Rey = 1.5\times10^4$.} \label{fig:prob} \end{figure} This study considers a duct with rectangular cross-section of wall-normal height $2L$ ($y$ direction) and transverse width $a$ ($z$ direction), subjected to a uniform magnetic field $B\vect{e_z}$. The duct is uniform and of infinite streamwise extent ($x$ direction). A steady base flow component is driven by a constant pressure gradient, producing a maximum undisturbed dimensional velocity $U_1$. An oscillatory base flow component is driven by synchronous oscillation of both lateral walls at velocity $U_2\cos(\omega \check{t})$, with maximum dimensional velocity $U_2$. The pulsatile flow, the sum of the steady and oscillatory components, has a maximum velocity over the cycle of $U_0$. In the limits $\Har = aB(\sigma/\rho\nu)^{1/2} \gg 1$ and $N = aB^2\sigma/\rho U_0 \gg 1$, the flow is Q2D and can be modelled by the SM82 model \citep{Sommeria1982why,Potherat2000effective}. A more detailed assessment of the validity of the SM82 model follows in Sec.~\ref{sec:vali}. Normalizing lengths by $L$, velocity by $U_0$, time by $1/\omega$ and pressure by $\rho U_0^2$, the governing momentum and mass conservation equations become \begin{equation} \label{eq:non_dim_m} \Sr\pde{\vect{u}}{t} = -(\vect{u}\vect{\cdot}\vect{\nabla}_\perp)\vect{u} - \vect{\nabla}_\perp p + \frac{1}{\Rey}\nabla_\perp^2\vect{u} - \frac{H}{\Rey}\vect{u}, \end{equation} \begin{equation} \label{eq:non_dim_c} \vect{\nabla_\perp} \vect{\cdot} \vect{u} = 0, \end{equation} where $\vect{u}=(u,v)$ is the \qtwod\ velocity vector, representing the $z$-averaged field, and $\vect{\nabla}_\perp=(\partial_x,\partial_y)$ is the two-dimensional gradient operator.
Four nondimensional parameters govern this problem: the Reynolds number $\Rey = U_0 L/\nu$, the Strouhal number $\Sr = \omega L /U_0$, the Hartmann friction parameter $H=2B(L^2/a)(\sigma/\rho\nu)^{1/2}$ and the amplitude ratio $\Gamma = U_1/U_2$. $\Gamma=0$ represents a flow purely driven by oscillating walls (no pressure gradient) and $\Gamma \rightarrow \infty$ a pressure driven flow (no wall motion). The Womersley number $\Wo^2 = \Sr\Rey$ is sometimes used instead of $\Sr$ as a dimensionless frequency. The nondimensional pulsatile base flow is $U(y,t) = \gamma_1\UoneB(y) + \gamma_2\UtwoB(y,t)$, where $\gamma_1=\Gamma/(\Gamma+1)$ and $\gamma_2 = 1/(\Gamma+1)$, following Ref. \citep{Thomas2011linear}, with steady component $\UoneB(y)$ and oscillating component $\UtwoB(y,t)$. This work considers $1 \leq \Gamma<\infty$. Thus, the magnitude of the steady component of the base flow is never smaller than that of the oscillating component, ensuring net transfer of tritium/heat is dominant. The nondimensional wall oscillation is $\cos(t)/\Gamma$, and the maximum velocity over the cycle $U_0 = \max_{\{y,t\}}(U)=1/(1+1/\Gamma)$ for $\Gamma \geq 1$ (henceforth, $\Gamma \geq 1$). The normalized time $t_\mathrm{P} = t/2\pi$ is also defined. To assess the degree of destabilization, the Reynolds number ratio $\rrs = [\Rey/(1+1/\Gamma)]/\mathit{Re}_\mathrm{crit,s}$ is defined, comparing the Reynolds number in this problem to the critical Reynolds number for a purely steady base flow \citep{Camobreco2020transition,Potherat2007quasi, Vo2017linear}. The wave number is similarly rescaled, as $\als = \alpha/\alpha_\mathrm{crit,s}$. Instantaneous variables $(\vect{u},p)$ are decomposed into base $(\vect{U},P)$ and perturbation $(\hat{\vect{u}},\hat{p})$ components via small parameter $\epsilon$, as $\vect{u} = \vect{U} + \epsilon \hat{\vect{u}}$; $p = P + \epsilon \hat{p}$.
The fully developed, steady, parallel flow $\UoneBv=\UoneB(y)\vect{e_x}$, with boundary conditions $\UoneB(y = \pm 1)=0$, and a constant driving pressure gradient scaled to achieve a unit maximum velocity, is \citep{Potherat2007quasi}
\begin{equation}\label{eq:U1B} \UoneB = \frac{\cosh(H^{1/2})}{\cosh(H^{1/2})-1}\left(1-\frac{\cosh(H^{1/2}y)}{\cosh(H^{1/2})}\right). \end{equation}
The fully developed, time periodic, parallel flow $\UtwoBv=\UtwoB(y,t)\vect{e_x} = \UtwoB(y,t+2\pi)\vect{e_x}$, with boundary conditions $\UtwoB(y = \pm 1) = \cos(t)$, $\partial \UtwoB/ \partial t |_{y = \pm 1}= -\sin(t)$, is expressed as
\begin{equation}\label{eq:U2B} \UtwoB = \Rez\left(\frac{\cosh[(r+si)y]}{\cosh(r+si)}e^{it}\right) = b(y)e^{it} + b^*(y)e^{-it}, \end{equation}
where the inverse boundary layer thickness and the wave number of the wall-normal oscillations are represented by
\begin{eqnarray}\label{eq:rands} r&=&[(\Sr\Rey)^2+H^2]^{1/4} \cos([\tan^{-1}(\Sr\Rey/H)]/2), \\ \nonumber s&=&[(\Sr\Rey)^2+H^2]^{1/4} \sin([\tan^{-1}(\Sr\Rey/H)]/2), \end{eqnarray}
respectively, where $i=(-1)^{1/2}$ and $*$ represents the complex conjugate. In the hydrodynamic limit of $H \rightarrow 0$, $r=s=(\Sr\Rey/2)^{1/2}$. In the limit of $H \rightarrow \infty$, at constant $\Rey$ and $\Sr$, $r \sim H^{1/2}$ and $s \rightarrow 0$. If $\Rey$ is also varied, it must vary at a rate $H^p$, with $p \geq 1$, for the limiting cases to differ. Note that the oscillating component of the base flow depends only on two parameters ($\Sr\Rey=\Wo^2$ and $H$). Although these choices mean the base flow is $\Rey$ dependent, they allow $\ReyCrit$ to be found at a constant frequency (constant $\Sr$), as a constant $\Wo$ instead represents a constant oscillating boundary layer thickness. Examples of the base flow at $\Gamma=1.2$ are illustrated in \fig\ \ref{fig:base_flows}, with the total pulsatile profile plotted as $(1+1/\Gamma)\,U(y,t)$ to show oscillation about the steady component $\UoneB$.
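As a quick numerical consistency check of the expressions above, a minimal Python sketch (parameter values purely illustrative) evaluates both components and verifies the properties built into the derivation: no-slip and unit maximum for the steady profile, the wall oscillation $\cos(t)$ for the oscillatory profile, and the identity $(r+is)^2 = H + i\,\Sr\Rey$ implied by the definitions of $r$ and $s$, which is precisely what makes $\UtwoB$ a solution of the linear momentum balance.

```python
import cmath
import math

# Illustrative parameters only (roughly those of one base-flow figure panel)
Sr, Re, H = 5.0e-3, 1.5e4, 10.0

W2 = Sr * Re                                 # Sr*Re = Wo^2
mod = (W2 ** 2 + H ** 2) ** 0.25
phi = math.atan2(W2, H) / 2.0
r, s = mod * math.cos(phi), mod * math.sin(phi)
kappa = complex(r, s)

def U1B(y):
    """Steady component: unit maximum at y = 0, no-slip at y = +-1."""
    ch = math.cosh(math.sqrt(H))
    return ch / (ch - 1.0) * (1.0 - math.cosh(math.sqrt(H) * y) / ch)

def U2B(y, t):
    """Oscillating component: matches the wall motion cos(t) at y = +-1."""
    return (cmath.cosh(kappa * y) / cmath.cosh(kappa) * cmath.exp(1j * t)).real

assert abs(U1B(1.0)) < 1e-12 and abs(U1B(-1.0)) < 1e-12   # no-slip
assert abs(U1B(0.0) - 1.0) < 1e-12                        # unit maximum
assert abs(U2B(1.0, 0.7) - math.cos(0.7)) < 1e-12         # wall oscillation
# (r + i s)^2 = H + i Sr Re, so U2B satisfies Sr dU/dt = (1/Re)U'' - (H/Re)U
assert abs(kappa ** 2 - complex(H, W2)) < 1e-8
```

The last assertion is the compact reason the awkward-looking quarter-power and half-angle expressions for $r$ and $s$ arise: they are just the real and imaginary parts of the principal square root of $H + i\,\Sr\Rey$.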
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $\Sr=5\times10^{-2}$, $\Rey=5\times10^3$, $H=1$} & & \\ \makecell{\vspace{26mm} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig2a-eps-converted-to.pdf}} & \makecell{\vspace{26mm} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig2b-eps-converted-to.pdf}} \\ \footnotesize{(b)} & \footnotesize{\hspace{4mm} $\Sr=5\times10^{-3}$, $\Rey=1.5\times10^4$, $H=10$} & & \\ \makecell{\vspace{26mm} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig2c-eps-converted-to.pdf}} & \makecell{\vspace{26mm} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig2d-eps-converted-to.pdf}} \\ \footnotesize{(c)} & \footnotesize{\hspace{4mm} $\Sr=5\times10^{-4}$, $\Rey=4.5\times10^4$, $H=100$} & & \\ \makecell{\vspace{26mm} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig2e-eps-converted-to.pdf}} & \makecell{\vspace{26mm} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig2f-eps-converted-to.pdf}} \\ & \hspace{35mm} \footnotesize{$\UtwoB$} & & \hspace{26mm} \footnotesize{$(1+1/\Gamma)\,U(y,t)$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Base flow profiles at $\Gamma=1.2$. Equispaced over one period: oscillating component (left), $(1+1/\Gamma)$ rescaled pulsatile base flow (right). 
A black dashed line denotes the steady component, $\UoneB$.} \label{fig:base_flows} \end{figure} Either dominant transient inertial forces (large $\Sr$) or dominant frictional forces (large $H$) can flatten the central region of the oscillating flow component. In \fig\ \ref{fig:base_flows}(a), the oscillating component is flattened by large transient inertial forces, while the steady flow still exhibits a curved Poiseuille-like profile as $H$ is small. In \fig\ \ref{fig:base_flows}(c), by contrast, it is the large $H$ value that flattens both the steady and oscillating flow components. However, inflection points, which are important for intracyclic growth, are no longer present in \fig\ \ref{fig:base_flows}(c), as $H$ is large, but can be observed in the boundary layers of \figs\ \ref{fig:base_flows}(a) and \ref{fig:base_flows}(b), as $\Sr$ is large. It is instructive to consider the velocity profile for the simpler problem of the SM82 equivalent of an isolated Stokes layer, $U(y,t) = e^{-ry}\cos(sy-t)$, where $r$ and $s$ remain as defined in Eq.~(\ref{eq:rands}), except scaled by $H^{-1/2}$ to account for the isolated boundary layer nondimensionalization. This highlights the effects of $r$ and $s$ on the boundary layer, as the base flow becomes akin to a damped harmonic oscillator. Increasing either $H$ or $\Sr\Rey$ increases $r$ in turn, and reduces the boundary layer thickness. However, increasing $H$ reduces $s$. Thus, inflection points are eliminated with increasing $H$, and the boundary layer appears simply as a shifted exponential profile, as observed in \fig\ \ref{fig:base_flows}(c). Decreasing $\Sr\Rey$ reduces $s$, and also eliminates inflection points, whereas increasing $\Sr\Rey$ increases $s$, promoting inflection points, but containing them within a thinner oscillating boundary layer. It is also worth considering the pulsatile base flow in a broader context, as past literature is divided on the method of oscillation.
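The suppression of inflection points with increasing $H$ described above can be checked directly on the model profile $U(y,t) = e^{-ry}\cos(sy-t)$: setting $\partial^2 U/\partial y^2 = 0$ gives $\tan(sy-t) = (s^2-r^2)/(2rs)$, so inflection points are spaced $\pi/s$ apart, and as $H$ grows ($s \rightarrow 0$, $r \sim H^{1/2}$) the first one retreats to a depth of many boundary-layer thicknesses. A short sketch (parameter values illustrative):

```python
import math

def rs(H, W2):
    """r and s from the half-angle expressions, with W2 = Sr*Re fixed."""
    mod = (W2 ** 2 + H ** 2) ** 0.25
    phi = math.atan2(W2, H) / 2.0
    return mod * math.cos(phi), mod * math.sin(phi)

def first_inflection_depth(H, W2, t=0.0):
    """Depth r*y of the first inflection point of U = exp(-r y) cos(s y - t),
    in boundary-layer units 1/r.  U'' = 0 where tan(s y - t) = (s^2-r^2)/(2rs)."""
    r, s = rs(H, W2)
    y = (math.atan2(s * s - r * r, 2.0 * r * s) + t) / s
    while y <= 0.0:               # take the first root with y > 0
        y += math.pi / s
    return r * y

W2 = 50.0                         # illustrative fixed Wo^2
shallow = first_inflection_depth(1.0, W2)    # weak friction: inside the layer
deep = first_inflection_depth(1.0e4, W2)     # strong friction: far outside

assert deep > 100.0 * shallow
```

At weak friction the first inflection point sits a few $1/r$ inside the layer, where it can feed intracyclic growth; at strong friction it lies hundreds of layer thicknesses away, i.e. in a region where the profile has already decayed to nothing, consistent with the monotonic profiles of the large-$H$ panel.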
Among many others, Refs.~\citep{Pier2017linear,Straatman2002hydrodynamic} impose an oscillatory pressure gradient, while Refs.~\citep{Blenn2006linear, Thomas2011linear} impose oscillating walls. For the unbounded, oscillatory Stokes flow, the eigenvalues of the linear operator, with either imposed oscillation, have been proven identical \citep{Blenn2002linear}. Furthermore, it has been shown that (transient) energy growth is also identical between the two methods of oscillation \citep{Biau2016transient}. In fact, the full linear and nonlinear problems can be shown to be identical. Defining a motionless frame $G$, and a frame $\bar{G}$ in motion with arbitrary, time varying velocity $\vect{V}(t)$, the two frames are related through \begin{equation} \label{eq:gal} \bar{\vect{x}} = \vect{x} - \frac{1}{\Sr}\int \vect{V} \,\mathrm{d}t,\,\,\, \bar{t}=t,\,\,\, \bar{\vect{u}} = \vect{u} - \vect{V}, \end{equation} where the $1/\Sr$ factor accounts for time being normalized by $1/\omega$ while velocity is normalized by $U_0$. Under extended Galilean invariance, $\partial \bar{\vect{u}}/\partial \bar{\vect{x}} = \partial \vect{u}/\partial \vect{x}$ and $\Sr\partial\bar{\vect{u}}/\partial \bar{t} + (\bar{\vect{u}} \boldsymbol{\cdot} \bar{\boldsymbol{\nabla}}_\perp)\bar{\vect{u}} = \Sr(\partial \vect{u}/\partial t - \partial \vect{V}/\partial t) + (\vect{u} \boldsymbol{\cdot} \boldsymbol{\nabla}_\perp)\vect{u}$ \citep{Pope2000turbulent}. In the frame $G$, a constant driving pressure gradient, and oscillatory wall motion $U(y=\pm1,t)=\UtwoB(y=\pm1,t)/\Gamma$, are imposed. $\vect{V}(t) = (\UtwoB(y=\pm1,t)/\Gamma,0)$ is selected so that the walls appear stationary, $\bar{U}(y=\pm1,t)=0$, in the moving frame $\bar{G}$.
Substituting the relations in Eq.~(\ref{eq:gal}) into Eqs.~(\ref{eq:non_dim_m}) and (\ref{eq:non_dim_c}), the governing equations in the moving frame become
\begin{equation} \label{eq:non_dim_m_move} \Sr\bigg(\pde{\bar{\vect{u}}}{\bar{t}} + \pde{\vect{V}}{t}\bigg) = -(\bar{\vect{u}}\vect{\cdot}\bar{\vect{\nabla}}_\perp)\bar{\vect{u}} - \bar{\vect{\nabla}}_\perp p + \frac{1}{\Rey}\bar{\nabla}_\perp^2\bar{\vect{u}} - \frac{H}{\Rey}(\bar{\vect{u}}+\vect{V}), \end{equation}
\begin{equation} \label{eq:non_dim_c_move} \bar{\vect{\nabla}}_\perp \vect{\cdot} \bar{\vect{u}} = 0. \end{equation}
As the pressure does not have a conversion relation, the driving pressure in the moving frame can be freely chosen as
\begin{equation}\label{eq:pres} \bar{p}(t) = p+\frac{x}{\Gamma}\bigg(\Sr\pde{\UtwoB(y=\pm1,t)}{t} + \frac{H}{\Rey}\UtwoB(y=\pm1,t)\bigg). \end{equation}
Substituting Eq.~(\ref{eq:pres}) into Eq.~(\ref{eq:non_dim_m_move}) and cancelling yields
\begin{equation} \label{eq:non_dim_m_can} \Sr\pde{\bar{\vect{u}}}{\bar{t}} = -(\bar{\vect{u}}\vect{\cdot}\bar{\vect{\nabla}}_\perp)\bar{\vect{u}} - \bar{\vect{\nabla}}_\perp \bar{p} + \frac{1}{\Rey}\bar{\nabla}_\perp^2\bar{\vect{u}} - \frac{H}{\Rey}\bar{\vect{u}}, \end{equation}
\begin{equation} \label{eq:non_dim_c_can} \bar{\vect{\nabla}}_\perp \vect{\cdot} \bar{\vect{u}} = 0. \end{equation}
Thus, in the frame $\bar{G}$, the governing equations, Eqs.~(\ref{eq:non_dim_m_can}) and (\ref{eq:non_dim_c_can}), are identical to the governing equations in $G$, Eqs.~(\ref{eq:non_dim_m}) and (\ref{eq:non_dim_c}). However, in $\bar{G}$ the walls are stationary, and the pressure forcing $\bar{p}$ is the sum of a steady and oscillatory component. Thus, the linear and nonlinear dynamics when the flow is driven by oscillatory wall motion ($G$), or an oscillatory pressure gradient ($\bar{G}$), are identical in all respects, as they are both the same problem viewed in different frames of reference.
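The frame equivalence can also be illustrated numerically. The crude finite-difference sketch below (explicit Euler, one space dimension, $\Gamma=1$, all parameter values illustrative) integrates the oscillatory flow component both ways: in $G$ with oscillating walls and no forcing, and in $\bar{G}$ with stationary walls and the oscillatory pressure-gradient forcing obtained from the moving-frame pressure; the two solutions should differ by the frame velocity $\cos(t)$ to within discretization error.

```python
import math

# (a) frame G:    oscillating walls u(y = +-1, t) = cos(t), no forcing
# (b) frame Gbar: stationary walls, forcing F(t) = Sr sin(t) - (H/Re) cos(t),
#     i.e. -dp/dx from the moving-frame pressure with Gamma = 1

Sr, Re, H = 0.5, 100.0, 1.0        # illustrative parameters
ny = 41
dy = 2.0 / (ny - 1)
dt = 1.0e-3
nsteps = int(round(2 * 2.0 * math.pi / dt))   # two forcing periods

def lap(f):
    """Discrete second derivative on the interior, zero-padded at the walls."""
    inner = [(f[j - 1] - 2.0 * f[j] + f[j + 1]) / dy ** 2 for j in range(1, ny - 1)]
    return [0.0] + inner + [0.0]

u = [1.0] * ny     # frame G: fluid initially moving with the walls, cos(0) = 1
ub = [0.0] * ny    # frame Gbar: fluid initially at rest
t = 0.0
for _ in range(nsteps):
    Lu, Lub = lap(u), lap(ub)
    F = Sr * math.sin(t) - (H / Re) * math.cos(t)
    for j in range(1, ny - 1):
        u[j] += dt / Sr * (Lu[j] / Re - (H / Re) * u[j])
        ub[j] += dt / Sr * (Lub[j] / Re - (H / Re) * ub[j] + F)
    t += dt
    u[0] = u[-1] = math.cos(t)     # walls of G oscillate; walls of Gbar stay 0

# the two fields differ by the frame velocity, up to discretization error
err = max(abs(u[j] - ub[j] - math.cos(t)) for j in range(ny))
print(err)
```

The residual `err` shrinks with `dt` and `dy`, as expected for two discretizations of the same problem viewed from different frames.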
These arguments do not hold if $H=0$ in the steady limit ($\Gamma \rightarrow \infty$, $\UtwoB=0$), or if the oscillation of both walls is not synchronous. Note that the constant pressure gradient in the fixed frame could also be considered as a constant wall motion, for non-zero $H$. If so, the oscillations would be about a finite wall velocity, rather than about zero. \subsection{Validity of SM82 for pulsatile flows} \label{sec:vali} With the pulsatile base flow established, the realm of validity of the SM82 model is assessed. The dimensional equation governing the induced magnetic field $\cvb$ is \cite{Muller2001magnetofluiddynamics}, \begin{equation} \label{eq:mag_ind} \pde{\cvb}{\check{t}} = B_0(\vect{e}_z \vect{\cdot} \cvn )\cvu + (\cvb \vect{\cdot} \cvn )\cvu - (\cvu \vect{\cdot} \cvn) \cvb + \frac{1}{\mu_0\sigma} \check{\nabla}^2 \cvb, \end{equation} where a background uniform steady field $B_0\vect{e}_z$ is imposed. The aim is to show that the induced magnetic field diffuses $R_\mathrm{m}$ times faster than it locally varies, where the magnetic Reynolds number $R_\mathrm{m}=\mu_0 \sigma U_1 L$ and where $\mu_0$ is the permeability of free space. The low-$R_\mathrm{m}$ approximation assumes that one of the bilinear terms is much smaller than the diffusive term, $|(\cvu \vect{\cdot} \cvn) \cvb| \ll |(\mu_0\sigma)^{-1} \check{\nabla}^2 \cvb|$. Once non-dimensionalized by $U_1$ and $L$ this imposes an $R_\mathrm{m} \ll 1$ constraint. This is well satisfied for liquid metal duct flows, with $R_\mathrm{m}$ of the order of $10^{-2}$ \citep{Moreau1990magnetohydrodynamics,Knaepen2004magnetohydrodynamic}. Note that $|B_0(\vect{e}_z \vect{\cdot} \cvn )\cvu|$ remains of the same order as $|(\mu_0\sigma)^{-1} \check{\nabla}^2 \cvb|$ when the background magnetic field is imposed. The quasi-static approximation assumes $|\partial \cvb/\partial t| \ll |(\mu_0\sigma)^{-1} \check{\nabla}^2 \cvb|$. 
Note that a low $R_\mathrm{m}$ does not necessarily imply that $|\partial \cvb/\partial \check{t}|$ is small. Based on a typical out-of-plane steady advection timescale of $a/U_1$, $|\partial \cvb/\partial \check{t}|$ may be reasonably assumed to scale as $|(\cvu \vect{\cdot} \cvn) \cvb|$, and thereby be small if $R_\mathrm{m}$ were small. However, a pulsatile flow introduces an additional timescale, based on the forcing frequency, to also compare against. Hence, non-dimensionalizing $|\partial \cvb/\partial \check{t}| \ll |(\mu_0\sigma)^{-1} \check{\nabla}^2 \cvb|$ based on a timescale of $1/\omega$ yields a constraint on the shielding parameter $R_\omega = \mu_0 \sigma \omega L^2 \ll 1$ \cite{Moreau1990magnetohydrodynamics}. This translates to $R_\mathrm{m} \Sr \ll 1$, or $\Sr \ll R_\mathrm{m}^{-1}$, to ensure that diffusion of the induced field is not confined to small boundary regions of the domain. Given $R_\mathrm{m}$ of $10^{-2}$ is typical of liquid metal duct flows at moderate Reynolds numbers \citep{Moreau1990magnetohydrodynamics,Knaepen2004magnetohydrodynamic}, since $R_\mathrm{m}=\Rey\Pra_m$ and the magnetic Prandtl number $\Pra_m=\nu\mu_0\sigma$ is of the order of $10^{-6}$ for liquid metals \cite{Potherat2015decay}, the shielding condition of $\Sr \ll R_\mathrm{m}^{-1}$ requires $\Sr \ll 100$. Furthermore, for the induced magnetic field to be treated as steady, the induced magnetic field must vary rapidly relative to a slowly varying velocity field. This requires that the Alfv\'{e}n timescale (time taken for the Alfv\'{e}n velocity to cross the duct width) be much smaller than the pulsation (transient inertial) timescale. The Alfv\'{e}n velocity $v_\mathrm{A} = B/(\mu_0 \rho)^{1/2} = (N_L/R_\mathrm{m})^{1/2}(U_1 L/a)$ is expressed in terms of the interaction parameter $N_L=a^2B^2\sigma/\rho U_1 L$.
Thus the Alfv\'{e}n timescale is $\tau_\mathrm{A} = a/v_\mathrm{A} = (R_\mathrm{m}/N_L)^{1/2}(a^2/U_1 L)$, while the steady inertial timescale is $\tau_\mathrm{I,L}=L/U_1$ and the pulsation timescale is $\tau_\mathrm{P}=1/\omega$. Thus, $\tau_\mathrm{A}/\tau_\mathrm{I,L} = (R_\mathrm{m}/N_L)^{1/2}(a^2/L^2)$ and $\tau_\mathrm{A}/\tau_\mathrm{P} = (R_\mathrm{m}/N_L)^{1/2}\Sr(U_0/U_1)(a^2/L^2)$. If $\Sr(U_0/U_1)<1$, or equivalently $\Sr(1+1/\Gamma)<1$, no SM82 assumptions are called into question. This requires $\Sr < 1/2$ at $\Gamma=1$ (and $\Sr < 1$ for $\Gamma \rightarrow \infty$) under the same $N \gg 1$ and $R_\mathrm{m} \ll 1$ conditions as a steady case. Recall that $\Sr \ll 100$ was required by the shielding constraint. Finally, the quasi-static approximation is only valid if Alfv\'{e}n waves dissipate much faster than they propagate. This is ensured if $|\partial \cvb/\partial t| \ll |(\mu_0\sigma)^{-1} \check{\nabla}^2 \cvb|$ is satisfied when considering the last remaining characteristic timescale, the Alfv\'{e}n timescale $\tau_\mathrm{A} = a/v_\mathrm{A}$. This places a condition on the Lundquist number, $S=(N_L R_\mathrm{m})^{1/2} = \Har \Pra_m^{1/2} \ll 1$. Given $\Pra_m$ of the order of $10^{-6}$ \cite{Potherat2015decay}, and $R_\mathrm{m}$ of the order of $10^{-2}$ \citep{Moreau1990magnetohydrodynamics,Knaepen2004magnetohydrodynamic}, this translates to conditions on the interaction parameter and Hartmann number of $N_L \lesssim 100$ and $\Har \lesssim 1000$, respectively. An additional component of the SM82 model is the quasi-two-dimensional approximation, which requires that the timescale for two-dimensionalization to occur via diffusion of momentum along magnetic field lines, $\tau_\mathrm{2D}=(\rho/\sigma B^2)(a^2/L^2)=(1/N_L)(a^4/U_1L^3)$ \citep{Potherat2007quasi}, be much smaller than the inertial and pulsation timescales. These ratios are $\tau_\mathrm{2D}/\tau_\mathrm{I,L}=(1/N_L)(a^4/L^4)$ and $\tau_\mathrm{2D}/\tau_\mathrm{P}=(\Sr/N_L)(U_0/U_1)(a^4/L^4)$.
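These constraints can be collected into a quick numerical check; a minimal sketch (in Python) for representative liquid-metal parameters, in which every numerical value is an illustrative assumption rather than data from a specific device:

```python
# Quick numerical check of the validity constraints for representative
# liquid-metal duct parameters. Every value is an illustrative assumption.
Pr_m  = 1e-6            # magnetic Prandtl number, O(1e-6) for liquid metals
Re    = 1e4             # Reynolds number U_1 L / nu
R_m   = Re * Pr_m       # magnetic Reynolds number R_m = Re Pr_m
Sr    = 0.1             # Strouhal number of the pulsation
Gamma = 10.0            # steady-to-oscillating amplitude ratio
N_L   = 50.0            # interaction parameter a^2 B^2 sigma / (rho U_1 L)

assert R_m < 1                   # low-R_m approximation: R_m << 1
assert R_m * Sr < 1              # shielding: R_omega = R_m Sr << 1
assert (N_L * R_m) ** 0.5 < 1    # Lundquist number S = (N_L R_m)^{1/2} << 1
assert Sr * (1 + 1 / Gamma) < 1  # Alfven timescale << pulsation timescale
```

The strict inequalities stand in for the `much less than' requirements; in practice each quantity should be small compared to unity, not merely below it.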
Thus, if $\Sr < 1/2$ under otherwise equivalent conditions to the steady case, momentum is diffused across the duct more rapidly by the magnetic field than by steady or transient inertial forces. The SM82 approximation also assumes $\Har \gg 1$ and $N \gg 1$, subject to the bounds $\Har \lesssim 1000$ and $N_L \lesssim 100$ obtained above. These constraints can be met with any $H$ if $a$ and $L$ are chosen appropriately, as discussed in Ref.~\citep{Vo2017linear}. The SM82 model is more generally applicable to flows which exhibit a linear friction and a strong tendency to two-dimensionalize. Axisymmetric quasigeostrophic flows, with frictional forces imparted by Ekman layers, and Hele-Shaw (shallow water) flows, with Rayleigh friction, both tend to two-dimensionality if the ratio $a/L$ is small. In these flows a formally equivalent Q2D model can be derived \cite{Buhler1996instabilities,Vo2015effect} (with the addition of a term representing the Coriolis force in the quasigeostrophic case), although the physical meaning of the friction term differs, as do the bounds of validity \cite{Vo2017linear}. \section{Linear stability analysis}\label{sec:lin_all} \subsection{Formulation and validation}\label{sec:lin_for} Linear stability is assessed via the exponential growth rate of disturbances, with unstable perturbations exhibiting net growth each period. The linearized evolution equations \begin{equation} \label{eq:non_dim_lin_m} \Sr\pde{\hat{\vect{u}}}{t} = -(\hat{\vect{u}}\vect{\cdot}\vect{\nabla}_\perp)\vect{U} -(\vect{U} \vect{\cdot}\vect{\nabla}_\perp)\hat{\vect{u}} - \vect{\nabla}_\perp \hat{p} + \frac{1}{\Rey}\nabla_\perp^2\hat{\vect{u}} - \frac{H}{\Rey}\hat{\vect{u}}, \end{equation} \begin{equation} \label{eq:non_dim_lin_c} \vect{\nabla}_\perp \vect{\cdot} \hat{\vect{u}} = 0, \end{equation} are obtained by neglecting terms of $O(\epsilon^2)$ in the decomposed Navier--Stokes equations.
A single fourth-order equation governing the linearized evolution of the perturbation is obtained by taking twice the curl of Eq.~(\ref{eq:non_dim_lin_m}), and substituting Eq.~(\ref{eq:non_dim_lin_c}). Additionally decomposing perturbations into plane wave solutions of the form $\hat{v}(y,t)=e^{i\alpha x}\tilde{v}(y,t)$, by virtue of the streamwise invariant base flow $U(y,t)$, yields \begin{equation} \label{eq:linearised_v} \pde{\tilde{v}}{t} = \mathscr{L}^{-1}\left[\frac{i\alpha}{\Sr}\pdesqr{U}{y} - \frac{Ui\alpha}{\Sr} \mathscr{L} + \frac{1}{\Sr\Rey}\mathscr{L}^2 - \frac{H}{\Sr\Rey}\mathscr{L} \right]\tilde{v}, \end{equation} where $\mathscr{L}=(\partial^2/\partial y^2 - \alpha^2)$, and where the perturbation eigenvector $\tilde{v}(y,t)$ still contains both exponential and intracyclic time dependence. Integrating Eq.~(\ref{eq:linearised_v}) forward in time, with a third-order forward Adams--Bashforth scheme \cite{Hairer1993solving}, and with the renormalization $\lVert \tilde{v} \rVert_2=1$ at the start of each period, forms the timestepper method. After sufficient forward evolution all but the fastest-growing mode are washed away, providing the net growth of the leading eigenmode over one period. A Krylov subspace scheme \citep{Barkley2008direct} is also implemented to aid convergence and provide the leading few eigenvalues $\lambda_j$ with largest growth rate (real component). The domain $y \in [-1,1]$ is discretized with $N_\mathrm{c}+1$ Chebyshev nodes. The derivative operators, incorporating boundary conditions, are approximated with spectral derivative matrices \citep{Trefethen2000spectral}. The spatial resolution requirements are halved by incorporating a symmetry (resp.~antisymmetry) condition along the duct centreline, and resolving even (resp.~odd) perturbations separately. Even perturbations were consistently found to be less stable than odd perturbations.
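The timestepper loop can be sketched as follows (in Python), with a small symmetric stand-in matrix replacing the Chebyshev-discretized operator of Eq.~(\ref{eq:linearised_v}); the operator, period, resolution and iteration counts here are illustrative assumptions only:

```python
import numpy as np

# Sketch of the timestepper: repeatedly advance dv/dt = A v over one forcing
# period with third-order Adams--Bashforth, renormalizing ||v||_2 = 1 at the
# start of each period. A stable symmetric stand-in matrix replaces the
# Chebyshev-discretized stability operator for illustration.
rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = -np.eye(n) + 0.05 * (M + M.T)       # placeholder operator

def advance_one_period(v, A, period=2 * np.pi, steps=2000):
    """Third-order Adams--Bashforth integration of dv/dt = A v over one period."""
    dt = period / steps
    f = [A @ v, A @ v, A @ v]           # crude constant-history bootstrap
    for _ in range(steps):
        v = v + dt * (23 * f[2] - 16 * f[1] + 5 * f[0]) / 12
        f = [f[1], f[2], A @ v]
    return v

v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(200):                    # wash out all but the leading mode
    w = advance_one_period(v, A)
    net_growth = np.linalg.norm(w)      # net growth over one period
    v = w / net_growth                  # renormalize
growth_rate = np.log(net_growth) / (2 * np.pi)  # leading exponential growth rate
```

For the true (time-periodic, non-symmetric) operator the same loop provides $\Rez(\lambda_1)$; the Krylov variant additionally retains a sequence of period-maps to extract several leading eigenvalues.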
The eigenvalues of the discretized forward evolution operator are also determined with a Floquet matrix approach \citep{Blenn2006linear,Thomas2011linear}. The exponential and time-periodic growth components of the eigenvector are separated by defining \begin{equation} \label{eq:floq_dec} \tilde{v}(y,t) = e^{\mu_\mathrm{F} t} \sum_{n=-\infty}^{n=\infty} \tilde{v}_n(y) e^{int}, \end{equation} with Floquet multiplier $\mu_\mathrm{F}$ and harmonic $n$. This sum is numerically truncated to $n \in [-T,T]$, to obtain a finite set of coupled equations \begin{eqnarray} \label{eq:floq_odes} \mu_\mathrm{F} \tilde{v}_n = -\frac{i\alpha}{\Sr} \bigg( M\tilde{v}_{n+1} &+&M^*\tilde{v}_{n-1}\bigg) \\ + \bigg\{\frac{1}{\Sr\Rey}\mathscr{L}^{-1}\mathscr{L}^2 &-& \frac{H}{\Sr\Rey} -in -\frac{i\alpha\gamma_1}{\Sr}\bigg[\mathscr{L}^{-1}\bigg(\UoneB\mathscr{L} - \frac{\partial^2 \UoneB}{\partial y^2}\bigg)\bigg] \bigg\}\tilde{v}_n, \nonumber \end{eqnarray} after substituting Eq.~(\ref{eq:floq_dec}) into Eq.~(\ref{eq:linearised_v}), where $M=\gamma_2[\mathscr{L}^{-1}(b\mathscr{L}-\partial^2 b/\partial y^2)]$. This system of Chebyshev-discretized equations is set up as a block tridiagonal system, with the coefficients of $\tilde{v}_{n+1}$, $\tilde{v}_n$ and $\tilde{v}_{n-1}$ placed on the super-, main and sub-diagonals, respectively. Spectral derivative matrices are built as before. The MATLAB function \texttt{eigs} is used to find a subset of eigenvalues of the block tridiagonal system located nearest zero real component (neutral stability), with convergence tolerance $10^{-14}$. $\Rey$ and $\alpha$ are varied until only a single wave number, $\alphaCrit$, attains zero growth rate, at $\ReyCrit$ (for specified $\Sr$, $\Gamma$ and $H$). The numerical requirements for the Floquet and timestepper approaches are highly parameter dependent. Validation against the hydrodynamic oscillatory problem \citep{Blenn2006linear} is provided in \tbl\ \ref{tab:tab_1}.
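The block-tridiagonal assembly can be sketched as follows (in Python), with small random placeholder blocks standing in for the Chebyshev-discretized operators of Eq.~(\ref{eq:floq_odes}); block sizes and contents are illustrative assumptions only:

```python
import numpy as np

# Assemble the truncated Floquet system into block-tridiagonal form.
# Up and Lo couple v_n to v_{n+1} and v_{n-1} (standing in for the
# -i alpha/Sr M and -i alpha/Sr M^* operators), while each diagonal block
# carries the n-independent operator D0 plus the -i n harmonic term.
rng = np.random.default_rng(1)
Ny, T = 8, 5                              # nodes per block, harmonic truncation
harmonics = list(range(-T, T + 1))
Nb = len(harmonics)                       # number of blocks, 2T + 1

Up = 0.1 * rng.standard_normal((Ny, Ny))  # placeholder super-diagonal block
Lo = 0.1 * rng.standard_normal((Ny, Ny))  # placeholder sub-diagonal block
D0 = -np.eye(Ny)                          # placeholder n-independent block

F = np.zeros((Nb * Ny, Nb * Ny), dtype=complex)
for k, n in enumerate(harmonics):
    rows = slice(k * Ny, (k + 1) * Ny)
    F[rows, rows] = D0 - 1j * n * np.eye(Ny)   # main-diagonal block
    if k + 1 < Nb:
        nxt = slice((k + 1) * Ny, (k + 2) * Ny)
        F[rows, nxt] = Up                      # couples v_n to v_{n+1}
        F[nxt, rows] = Lo                      # couples v_{n+1} to v_n

mu = np.linalg.eigvals(F)
nearest_neutral = mu[np.argmin(np.abs(mu.real))]
```

In practice a sparse eigensolver targeting eigenvalues nearest zero real part (as with MATLAB's \texttt{eigs}) replaces the dense eigenvalue call above.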
Further assurance of the validity of the numerical method is provided by the excellent agreement between pulsatile and steady $\ReyCrit$ values (i.e.~$\rrs \rightarrow 1$) at very small and large $\Sr$ in Sec.~\ref{sec:lin_res1}, and by the agreement between the timestepper and Floquet growth rates shown in Sec.~\ref{sec:lin_res1}. Occasional resolution testing, after determination of $\ReyCrit$, was also performed, with an example shown in \tbl\ \ref{tab:tab_2}. \begin{table} \begin{center} \begin{tabular}{ cccccc } \hline $N_\mathrm{c}$ ($T=300$) & $\Rez({\lambda_1})$ & $|$\% Error$|$ & $T$ ($N_\mathrm{c}=150$) & $\Rez({\lambda_1})$ & $|$\% Error$|$ \\ \hline 50 & 0.4719273115651 & $3.02\times10^1$ & 200 & 0.9493815978240 & $4.04\times10^1$ \\ 100 & 0.6762032203289 & $6.39\times10^{-3}$ & 250 & 0.6761968753200 & $5.45\times10^{-3}$ \\ 150 & 0.6761968755932 & $5.45\times10^{-3}$ & 300 & 0.6761968755932 & $5.45\times10^{-3}$ \\ Ref.~\citep{Blenn2006linear}, even & 0.67616 & 0 & & 0.67616 & 0 \\ \hline 50 & 0.4689789806609 & $3.06\times10^1$ & 200 & 0.8329627125585 & $2.33\times10^1$ \\ 100 & 0.6756830883343 & $6.38\times10^{-3}$ & 250 & 0.6756767389579 & $5.44\times10^{-3}$ \\ 150 & 0.6756767389579 & $5.44\times10^{-3}$ & 300 & 0.6756767389579 & $5.44\times10^{-3}$ \\ Ref.~\citep{Blenn2006linear}, odd & 0.67564 & 0 & & 0.67564 & 0 \\ \hline \end{tabular} \caption{$\Gamma=0$, $H=0$ cases validating and testing the resolution of the Floquet matrix method, considering the real part of even and odd modes separately. From Ref.~\citep{Blenn2006linear}, parameters convert as $\Sr=h_\mathrm{BB06}/\Rey_\mathrm{BB06}$ and $\Rey = 2h_\mathrm{BB06}\Rey_\mathrm{BB06}$, where $h_\mathrm{BB06}=16$ and $\Rey_\mathrm{BB06}=847.5$. $N_\mathrm{c}$ accounts for the entire domain.} \label{tab:tab_1} \end{center} \end{table} As a rough guide, for the Floquet method, $N_\mathrm{c}$ varies between $100$ and $400$ and $T$ between $100$ and $600$, with an eigenvalue subset size of around $200$.
For the timestepper, $N_\mathrm{c}$ varies between $40$ and $240$, with $10^5$ to $4\times10^7$ time steps per period, and $6$ to $4000$ iterations. As discussed in Refs.~\citep{Thomas2011linear, Pier2017linear}, with increasing pulsation amplitude (decreasing $\Gamma$), decreasing $\Sr$ and increasing $\Rey$, the intracyclic growth can become extremely large. The matrix method becomes problematic when the intracyclic growth exceeds four to six orders of magnitude, while the timestepper withstands approximately ten to fifteen orders of magnitude of intracyclic growth (the perturbation norm $\lVert \tilde{v} \rVert_2$ does not cleanly converge thereafter). Very roughly, for $\Sr \lesssim 10^{-3}$ and/or $\Gamma \lesssim 2$ and/or $\Rey \gtrsim 10^5$ when $H \geq 10$, the intracyclic growth was greater than even the timestepper could handle. However, given the specific aims of this work, this does not obstruct exploration of too large a fraction of the parameter space of interest. \begin{table} \begin{center} \begin{tabular}{ cccccc } \hline $N_\mathrm{c}$ & \makecell{Time steps \\ (per period)} & Iterations & \makecell{ $\lVert \tilde{v} \rVert_2$ \\ (final iteration)} & $\Rez(\lambda_1)$ & $\Imz(\lambda_1)$ \\ \hline 100 & $4\times10^5$ & 40 & 0.991293824970121 & -0.001391699032636 & 0.962888347220989 \\ 140 & $4\times10^5$ & 20 & 1.000006449491397 & 0.000001028054446 & 0.955814791449918 \\ 180 & $4\times10^5$ & 20 & 0.999993672187703 & -0.000001007773833 & 0.955795855565797 \\ 220 & $7\times10^5$ & 20 & 0.999993546425103 & -0.000001027780526 & 0.955795848436100 \\ 240 & $10^6$ & 10 & 0.999993662207549 & -0.000001011050606 & 0.955795855979765 \\ \hline \end{tabular} \caption{Resolution test at $H=10$, $\Gamma=10$ (at large $\Rey$, and small $\Sr=1.12\times10^{-2}$). The Floquet method was used to determine $\ReyCrit=8.1243\times10^5$ and $\alphaCrit=0.91137$, at $N_\mathrm{c}=200$ and $T=400$.
These $\ReyCrit$ and $\alphaCrit$ values were input into the timestepper, validating both the timestepper and the Floquet $\ReyCrit$ value (note the neutrally stable growth rate $\Rez(\lambda_1)\approx 0$). $N_\mathrm{c}$ accounts for the entire domain, with an even mode enforced.} \label{tab:tab_2} \end{center} \end{table} \subsection{Long-term behavior}\label{sec:lin_res1} A neutrally stable perturbation exhibits no net growth or decay over each cycle. Neutral stability is first achieved at $\ReyCrit$ and $\alphaCrit$, as $\Rey$ is increased. However, such a definition conceals the intracyclic dynamics, which strongly influence $\ReyCrit$, as is further discussed in Sec.~\ref{sec:lin_res2}. \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\vspace{15mm} \footnotesize{(a)} \\ \vspace{26mm} \rotatebox{90}{\footnotesize{$\ReyCrit/(1+1/\Gamma)$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig3a-eps-converted-to.pdf}} & \makecell{\vspace{24mm} \footnotesize{(b)} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$\alphaCrit$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig3b-eps-converted-to.pdf}} \\ & \hspace{36mm} \footnotesize{$H$} & & \hspace{36mm} \footnotesize{$H$} \\ \makecell{\vspace{25mm} \footnotesize{(c)} \\ \vspace{35mm} \rotatebox{90}{\footnotesize{$\rrs$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig3c-eps-converted-to.pdf}} & \makecell{\vspace{25mm} \footnotesize{(d)} \\ \vspace{35mm} \rotatebox{90}{\footnotesize{$\als$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig3d-eps-converted-to.pdf}} \\ & \hspace{36mm} \footnotesize{$H$} & & \hspace{36mm} \footnotesize{$H$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Rescaled $\ReyCrit$ and $\alphaCrit$ as a function of $H$ for $10^{-3} \leq \Sr \leq 1$ and $\Gamma \geq 10$.
The steady ($\Gamma \rightarrow \infty$) results from Ref.~\citep{Camobreco2020transition} have been included for direct comparison in the top row (black dashed line), and are divided out to compute $\rrs$ and $\als$ in the bottom row.} \label{fig:vary_H} \end{figure} Two key results are shown in \fig\ \ref{fig:vary_H}, considering the effect of varying $H$ on $\ReyCrit$. First, at large $H$, $\ReyCrit$ for a purely steady base flow scales as $H^{1/2}$, while all pulsatile cases scale as $H^p$, with $1/2 \leq p < 1$. For large $H$, $r$ is dominated by $[(\Sr\Rey)^2+H^2]^{1/4}$, which is always greater than $H^{1/2}$. Since the isolated boundary layer thickness is set by $e^{-ry}$ (Sec.~\ref{sec:prob_set}), increasing $H$ stabilizes pulsatile base flows more rapidly than steady base flows. Thus, the thinner pulsatile boundary layers are always more stable than their thicker counterparts exhibited by steady base flows. Note that in the high $H$ regime, when the boundary layers are isolated for any frequency pulsation, the stability results are defined solely by the dynamics of an isolated boundary layer, as observed in steady MHD or Q2D studies \citep{Camobreco2020transition, Potherat2007quasi, Takashima1998stability, Vo2017linear}, and for high frequency oscillatory hydrodynamic flows \citep{Blenn2006linear}. Second, variations in the pulsation frequency and amplitude roughly act to translate the stability curves, without significantly changing the overall trends (one slight change, the local minima in \fig\ \ref{fig:vary_H}(c), is explained shortly, when considering $\Sr$ variations at fixed $H$). At $\Gamma=100$, differences between pulsatile and steady results are not easily observed, confirming the accuracy of the Floquet solver. The $\Gamma=10$ curves overlay the steady trend at respective high and low frequencies of $\Sr = 1$ and $\Sr=10^{-3}$. At $\Sr=10^{-2}$, the flow is more unstable as $H\rightarrow 0$, with $\rrs \rightarrow 0.8651$.
However, for $H \gtrsim 2400$ the additional stability conferred by thinner pulsatile boundary layers pushes $\rrs$ above unity. The pulsatile flow is then more stable than its steady counterpart. Note that so long as $\ReyCrit$ varies as $H^p$ with $p<1$ (as observed for all $H$ simulated), $\Rey$ does not increase quickly enough to offset the eventual $s \rightarrow 0$ and $r \sim H^{1/2}$ trends as $H \rightarrow \infty$. Eventually, the exponent $p$ should settle to $1/2$, after which $\ReyCrit$ should vary as $H^{1/2}$ for very large $H>10^4$. At $\Sr=10^{-1}$, the flow is hydrodynamically more stable ($\rrs \rightarrow 2.4258$ as $H\rightarrow 0$), and is even more strongly stabilized at higher $H$. The $\Sr=10^{-1}$ curve in \Fig\ \ref{fig:vary_H}(c) is not smooth as different least-stable modes become dominant, as shown by the jumps in critical wave number, clearest in \fig\ \ref{fig:vary_H}(d). In steady Q2D flows \citep{Camobreco2020transition, Potherat2007quasi, Vo2017linear}, $\alphaCrit$ also scales with $H^{1/2}$ for high $H$, like $\ReyCrit$. However, perplexingly for the pulsatile cases, the $\alphaCrit$ trends scale as $H^q$, with an exponent $q \leq 1/2$ lower than in the steady case.
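Since, at large $H$, $r$ is dominated by $[(\Sr\Rey)^2+H^2]^{1/4}$, the comparison with the steady boundary layer thickness reduces to the elementary inequality \[ \left[(\Sr\Rey)^2 + H^2\right]^{1/4} \geq \left(H^2\right)^{1/4} = H^{1/2}, \] with equality only as $\Sr\Rey \rightarrow 0$; the envelope $e^{-ry}$ of the pulsatile layer therefore decays at least as rapidly as its steady counterpart.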
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\vspace{25mm} \footnotesize{(a)} \\ \vspace{35mm} \rotatebox{90}{\footnotesize{$\rrs$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig4a-eps-converted-to.pdf}} & \makecell{\vspace{25mm} \footnotesize{(b)} \\ \vspace{35mm} \rotatebox{90}{\footnotesize{$\als$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig4b-eps-converted-to.pdf}} \\ & \hspace{36mm} \footnotesize{$\Sr$} & & \hspace{36mm} \footnotesize{$\Sr$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Variation in $\rrs$ and $\als$ as a function of $\Sr$ at $\Gamma=100$, curves of constant $H$ (arrows indicate increasing $H$). As $\Sr \rightarrow 0$ and $\Sr \rightarrow \infty$, the agreement with the steady result provides further validation.} \label{fig:vary_Sr_G100} \end{figure} Variations in $\rrs$ as a function of $\Sr$ are depicted for various $H$ under a weak pulsatility of $\Gamma=100$ in \fig\ \ref{fig:vary_Sr_G100}(a) and at $\Gamma=10$ in \fig\ \ref{fig:vary_Sr_G10}(a). The deviations from the steady $\ReyCrit$ are modest at $\Gamma=100$ (between $-1$\% and $+4$\%). However, this case helps provide a clearer picture of the underlying dynamics. Considering the hydrodynamic case (approximated by $H=10^{-7}$) as an example, the steady $\ReyCrit$ is approached ($\rrs \rightarrow 1$) as $\Sr \rightarrow 0$. In this limit, transient inertial forces act so slowly that viscosity can smooth out all wall-normal oscillations in the velocity profile over the entire duct within a single oscillation period ($2\pi$). Although large intracyclic growth occurs during the deceleration phase of the base flow (effectively due to an adverse pressure gradient), this is not augmented by additional growth as inflection points are absent.
Therefore, the growth is entirely cancelled out by decay (due to an equivalent-magnitude favorable pressure gradient) in the acceleration phase. With increasing $\Sr$, inflection points are present over a greater fraction of the deceleration phase, in spite of the action of viscosity, and become more prominent, providing a reduction in $\rrs$. However, increasing $\Sr$ reduces the effective duration of the deceleration phase of the base flow, leaving less time for intracyclic growth. Thus, the local minimum in $\rrs$ occurs where the benefit of promoting and maintaining inflection points for longer (favoring larger $\Sr$) is counteracted by the shortening of the growth phase (favoring smaller $\Sr$). However, although increasing $\Sr$ promotes inflection points, these points also become increasingly isolated as the oscillating boundary layers become thinner. The thinner boundary layers reduce constructive interference between modes at each wall, stabilizing the flow \citep{Camobreco2020transition}. Eventually, the oscillating boundary layers become so thin that they are immaterial, and $\rrs$ drops to recover the steady value ($\Sr \rightarrow \infty$). The effect of the friction parameter $H$ is now considered. As $H$ is increased, the curves in \fig\ \ref{fig:vary_Sr_G100}(a) shift to larger $\Sr$. Increasing $H$ smooths inflection points within the pulsatile boundary layer. Recall that a pulsatile isolated SM82 boundary layer has the form $e^{-ry}\cos(sy-t)$, and increasing $H$ decreases $s$, thereby increasing the wavelength of wall-normal oscillations in the base flow. Larger $\Sr$ values are then required to offset the larger $H$ values, ensuring that inflection points remain within the boundary layer and provide enough intracyclic growth to reduce $\rrs$. Thus, the local minimum of $\rrs$ does not strongly depend on $H$, although the corresponding $\Sr$ value varies greatly.
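The competing roles of $r$ and $s$ in setting inflection points can be illustrated directly from this isolated-layer profile; a minimal sketch (in Python), with purely illustrative values of $r$ and $s$:

```python
import numpy as np

# Wall-normal locations of inflection points of the model isolated-layer
# profile U(y) = exp(-r y) cos(s y - t), for which
# U''(y) = exp(-r y) [ (r^2 - s^2) cos(s y - t) + 2 r s sin(s y - t) ].
# Sign changes of U'' mark inflection points; (r, s) values are illustrative.
def inflection_points(r, s, t=0.0, y_max=10.0, n=100001):
    y = np.linspace(0.0, y_max, n)
    u2 = np.exp(-r * y) * ((r**2 - s**2) * np.cos(s * y - t)
                           + 2.0 * r * s * np.sin(s * y - t))
    crossings = np.nonzero(np.diff(np.sign(u2)) != 0)[0]
    return y[crossings]

few  = inflection_points(r=1.0, s=1.0)   # smaller s: fewer inflection points
many = inflection_points(r=1.0, s=5.0)   # larger s: more inflection points
```

Larger $s$ (smaller $H$, all else equal) packs more inflection points into the same wall-normal distance, consistent with the smoothing action of increasing $H$ described above.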
Importantly for fusion relevant regimes, the percentage reduction in $\ReyCrit$ appears to steadily improve with increasing $H$, although the shift to higher $\Sr$ may eventually invalidate the SM82 assumption requiring $\Sr < 1/2$ for $\Gamma \geq 1$. The pulsatile boundary layers also become increasingly isolated with increasing $H$, as $r$ increases with $H$, resulting in the steady increase in the maximum of $\rrs$. At $\Gamma=100$, the variations in $\ReyCrit$ are small, with the Reynolds number dependence of the base flow having little effect, relative to the $\Sr$ and $H$ variations (this is not the case at $\Gamma=10$). As a last note, for $\Gamma=100$, the smooth $\als$ curves in \fig\ \ref{fig:vary_Sr_G100}(b) also show that the variations in $\rrs$ represent the same instability mode for all $\Sr$ (henceforth the \TSL\ mode). \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\vspace{25mm} \footnotesize{(a)} \\ \vspace{35mm} \rotatebox{90}{\footnotesize{$\rrs$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig5a-eps-converted-to.pdf}} & \makecell{\vspace{25mm} \footnotesize{(b)} \\ \vspace{35mm} \rotatebox{90}{\footnotesize{$\als$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig5b-eps-converted-to.pdf}} \\ & \hspace{36mm} \footnotesize{$\Sr$} & & \hspace{36mm} \footnotesize{$\Sr$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Variation in $\rrs$ and $\als$ as a function of $\Sr$ at $\Gamma=10$, curves of constant $H$ (arrows indicate increasing $H$). Dashed curve indicates restabilization and a second destabilization with increasing $\Rey > \ReyCrit$ at $H=10$. The stable region is below the continuous solid-dashed-solid curve.} \label{fig:vary_Sr_G10} \end{figure} At the lower $\Gamma=10$, \fig\ \ref{fig:vary_Sr_G10}, the oscillating component plays a much greater role. 
The underlying behaviors discussed for $\Gamma=100$ still hold for smaller $\Sr$, including the region of minimum $\rrs$, and for much larger $\Sr$. Furthermore, the local minimum in $\rrs$ still becomes more pronounced with increasing $H$, with an approximately $33.0\%$ reduction in $\ReyCrit$, compared to the steady value, at $H=10$. $H=1000$ could not be computed over a wide range of $\Sr$ at $\Gamma=10$, but the partial data collected (not shown) demonstrated a further reduction in $\rrs$, of up to $42.4$\%. The degree of stabilization at $\Gamma=10$ is also far more striking. The sudden jumps in $\als$, shown in the inset of \fig\ \ref{fig:vary_Sr_G10}(b), indicate different instability modes. These modes are increasingly stable, with much larger accompanying $\rrs$ values (the $H=10$ case peaks with an approximately $804\%$ increase over the steady $\ReyCrit$). Because these Reynolds numbers are far from the steady $\ReyCrit$ values, the change in Reynolds number has a noticeable effect on the base flow profiles. At larger Reynolds numbers the oscillating boundary layers become much thinner, so inflection points are not positioned where they could underpin sizeable intracyclic growth.
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \footnotesize{(a)} & \footnotesize{\hspace{5mm} $\Sr=1.7\times10^{-2}$} & \footnotesize{(b)} & \footnotesize{\hspace{5mm} $\Sr=1.8\times10^{-2}$} \\ \makecell{\\ \vspace{10mm} \rotatebox{90}{\footnotesize{$\Rez(\lambda_1)$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig6a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$\Rez(\lambda_1)$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig6b-eps-converted-to.pdf}} \\ & \hspace{37.5mm} \footnotesize{$\alpha$} & & \hspace{37.5mm} \footnotesize{$\alpha$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Exponential growth rate as a function of $\alpha$ with increasing $\Rey$ ($8\times10^4$ through $8\times10^5$) at $H=10$, $\Gamma=10$, comparing $\Sr$. At $\Sr=1.8\times10^{-2}$ the \TSL\ mode does not become unstable, thus $\ReyCrit=6.40840\times10^5$ is much larger than $\ReyCrit=8.50617\times10^4$ at $\Sr=1.7\times10^{-2}$. As additional validation, symbols (timestepper) show excellent agreement with curves (Floquet).} \label{fig:Sr1718} \end{figure} This explains the discontinuous change in $\rrs$ with a slight shift in $\Sr$. At fixed $\Sr$, at Reynolds numbers near the steady $\ReyCrit$ value, a \TSL\ mode is excited, but not necessarily unstable. The \TSL\ mode is based on the instability of the steady flow, i.e.~the \TS\ wave. For $\Rey>\ReyCrit$ the exponential growth rate increases with increasing Reynolds number. However, the same increase in $\Rey$ increasingly isolates and thins the boundary layers, thus reducing the exponential growth rate. The isolation of the boundary layers (the effect of $\Rey$ on the base flow) eventually overcomes any increases in exponential growth rate (the effect of $\Rey$ on the perturbation). 
At higher $\Sr$, when the oscillating boundary layers are naturally further apart, the increased isolation prevents the instability of the \TSL\ mode. This is shown at $\Sr=1.8\times10^{-2}$ in \fig\ \ref{fig:Sr1718}(b), or to the right of the discontinuity in $\rrs$ in \fig\ \ref{fig:vary_Sr_G10}(a). The sudden increase in $\rrs$ in \fig\ \ref{fig:vary_Sr_G10}(a) reflects the stabilization of the \TSL\ mode (another mode is destabilized at a much higher $\Rey$). At smaller $\Sr$, the effect of $\Rey$ on increasing the growth rate allows the \TSL\ mode to become unstable, if only briefly at $\Sr=1.7\times10^{-2}$ in \fig\ \ref{fig:Sr1718}(a). With further increasing $\Rey$ the isolation and thinning of the boundary layers leads to the \TSL\ mode becoming stable again; the stable region is bounded by the dashed curve in \fig\ \ref{fig:vary_Sr_G10}(a). At $\Sr=1.7\times10^{-2}$, a different mode becomes unstable at much higher $\Rey$, as is also shown in \fig\ \ref{fig:Sr1718}(a). This mode is very similar to that at $\Sr=1.8\times10^{-2}$, so the dashed curve in \fig\ \ref{fig:vary_Sr_G10}(a) follows the trend of increasing $\rrs$ from the right of the discontinuity. Eventually, for all $\Sr<1.12\times10^{-2}$ ($H=10$, $\Gamma=10$), with oscillating boundary layers that `start out' closer together, at least one mode is unstable for all $\Rey$.
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\vspace{24mm} \footnotesize{(a)} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$\alpha$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig7a-eps-converted-to.pdf}} & \makecell{\vspace{24mm} \footnotesize{(b)} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$\alpha$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig7b-eps-converted-to.pdf}} \\ & \hspace{30mm} \footnotesize{$\Rey/(1+1/\Gamma)$} & & \hspace{30mm} \footnotesize{$\Rey/(1+1/\Gamma)$} \\ \end{tabular} \begin{tabular}{ ll } \makecell{\vspace{24mm} \footnotesize{(c)} \\ \vspace{34mm} \rotatebox{90}{\footnotesize{$\alpha$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig7c-eps-converted-to.pdf}} \\ & \hspace{30mm} \footnotesize{$\Rey/(1+1/\Gamma)$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Neutral curves for various $\Sr$, at $\Gamma=10$, $H=10$, with instability to the right of open curves. (a) $\Sr$ from the steady result, to the first destabilization of the \TSL\ mode (unstable pocket) at $\Sr \leq 1.748\times10^{-2}$. (b) Dominance of the \TSL\ mode, and eventual vanishing of the restabilization region for $\Sr \leq 1.1\times10^{-2}$. (c) Instability for all $\Rey>\ReyCrit$, including the local $\ReyCrit$ minimum (near $\Sr=9\times10^{-3}$). However, stable pockets form at higher $\Rey$. The dashed black curve corresponds to the steady base flow at $H=10$ \citep{Camobreco2020transition}.} \label{fig:neut_curves} \end{figure} Further considering $\Gamma=10$ and $H=10$, neutral (zero net growth) curves at several $\Sr$ are presented in \fig\ \ref{fig:neut_curves}. The $\Sr=1$ neutral curve is indistinguishable from that of the steady base flow \citep{Camobreco2020transition}. 
With decreasing $\Sr$, the critical Reynolds number rapidly increases and the neutral curve broadens, see \fig\ \ref{fig:neut_curves}(a). At $\Sr = 1.8\times10^{-2}$, just to the right of the discontinuity, waviness in the neutral curve reflects the excitation of multiple modes, as shown in \fig\ \ref{fig:Sr1718}(b). At $\Sr=1.748\times10^{-2}$, just to the left of the discontinuity, the \TSL\ mode is first destabilized. The increasing isolation of the oscillating boundary layers quickly restabilizes the flow, resulting in a very small instability pocket. Moving to \fig\ \ref{fig:neut_curves}(b), with slight decreases in $\Sr$, the (\TSL\ mode's) instability pocket rapidly occupies more of the wave number space, and the pocket terminates before it reaches the broader, pulsatile part of the neutral curve at $\Sr=1.12\times10^{-2}$ (the left-most point of the dashed curve in \fig\ \ref{fig:vary_Sr_G10}). With a slight drop to $\Sr=1.1\times10^{-2}$, the two curves meet, with a small throat allowing a path through wave number space, with increasing $\Rey$, that always attains positive growth. At $\Sr= 1.12\times10^{-2}$, also shown in \fig\ \ref{fig:Sr112008}(a), the \TSL\ mode (the first local maximum) initially peaks, and then falls away with increasing $\Rey$. A small band of Reynolds numbers fails to produce net growth (along the line of six depressions in the wave number space). With increasing $\Rey$, multiple pulsatile modes are excited from the baseline spectrum and become unstable. At $\Sr=1.1\times10^{-2}$, the rising pulsatile modes outpace the falling \TSL\ mode, so that at least one mode always maintains positive growth, see \fig\ \ref{fig:Sr112008}(b).
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $\Sr=1.12\times10^{-2}$} & \footnotesize{(b)} & \footnotesize{\hspace{4mm} $\Sr=8\times10^{-3}$} \\ \makecell{\\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig8a-eps-converted-to.pdf}} & \makecell{\\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig8b-eps-converted-to.pdf}} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Exponential growth rate as a function of $\alpha$ and $\Rey$ ($\ReyCrit$ through $5\times10^6$). At low $\Rey$, only the \TSL\ mode is unstable. After this mode restabilizes, multiple modes are excited, separated by sharp valleys. These modes have negative growth for some $\Rey$ at $\Sr=1.12\times10^{-2}$, but have positive growth at $\Sr=8\times10^{-3}$. Solid lines denote positive growth; dotted lines negative. Zero growth is emphasized with a thick black line on a gray intersecting plane.} \label{fig:Sr112008} \end{figure} At $\Sr=10^{-2}$, three stable pockets are observed, see \fig\ \ref{fig:neut_curves}(c). At lower $\Sr$ the growth rates of the \TSL\ mode decrease more rapidly, leaving only pulsatile modes in control of the neutral stability behavior. Because these modes are excited in narrow resonant peaks in wave number space, stable regions can be present between the peaks. Thus, at lower $\Sr$, multiple stable pockets surrounded by unstable conditions form. Further reduction in $\Sr$ produces more resonant peaks, and more interleaved stable pockets, as shown at $\Sr = 8\times10^{-3}$ in \fig\ \ref{fig:Sr112008}(b). Reducing $\Sr$ further, at large $H$ and $\Rey$, reaches the limit of the timestepper's capability to cleanly resolve entire neutral curves.
By $\Sr=10^{-3}$, the computable part of the neutral curve approaches that of the steady base flow \citep{Camobreco2020transition}. \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $\Sr=1$} & \footnotesize{(b)} & \footnotesize{\hspace{4mm} $\Sr=10^{-2}$} \\ \makecell{\vspace{10mm} \\ \vspace{20mm} \rotatebox{90}{\footnotesize{$\rrs$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig9a-eps-converted-to.pdf}} & \makecell{\vspace{10mm} \\ \vspace{20mm} \rotatebox{90}{\footnotesize{$\rrs$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig9b-eps-converted-to.pdf}} \\ & \hspace{36mm} \footnotesize{$\Gamma$} & & \hspace{36mm} \footnotesize{$\Gamma$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Variation in $\rrs$ as a function of $\Gamma\geq 1$ at $\Sr=1$ and $\Sr=10^{-2}$, curves of constant $H$ (arrows indicate increasing $H$). Small $\Sr$ and $\Gamma$ present significant potential for destabilization.} \label{fig:vary_G} \end{figure} The influence of $\Gamma$ is now considered. Over $1\leq\Gamma\leq 100$, different effects on $\rrs$ are observed at $\Sr = 1$, \fig\ \ref{fig:vary_G}(a), and at $\Sr = 10^{-2}$, \fig\ \ref{fig:vary_G}(b). At $\Sr=1$, close to the steady limit, $\rrs$ remains near unity. At small $H$, only stabilization is observed for all $\Gamma \geq 1$. With increasing $H$, a slight destabilization can be observed, up to $H \approx 10$. Further increasing $H$ induces restabilization. This echoes the $\Sr$ variation, where the local minimum shifts to smaller $\Sr$ for $H \lesssim 10$, and shifts back to larger $\Sr$ for $H\gtrsim10$. At higher $H$, the effect of $H$ offsets that of $\Sr$, so the results for the steady base flow are only recovered at increasingly large $\Sr$. 
On the other hand, at $\Sr = 10^{-2}$ in \fig\ \ref{fig:vary_G}(b), $\rrs$ is far from unity, and the effect of varying the Reynolds number on the base flow must again be considered. At smaller $\Rey$, the oscillating boundary layers are much thicker, with prominent inflection points well-placed to promote intracyclic growth. This part of the base flow becomes increasingly dominant with decreasing $\Gamma$, favoring the destabilization of the \TSL\ mode. Given that $(\Sr\Rey)^2 \gg H^2$, $\ReyCrit$ depends far more on the pulsatile process and only weakly on $H$, until $\Sr\Rey$ becomes small. However, the $\ReyCrit$ for the steady base flow strongly depends on $H$, so $\rrs$ reduces with increasing $H$. $\rrs$ continues to decrease as $\Gamma$ is reduced toward unity for $H\leq 10$, matching well with the conclusion of Ref. \citep{Thomas2011linear} that the maximum reduction in $\ReyCrit$ occurs near unity amplitude ratio. At higher $H$, the magnitude of intracyclic growth eventually limited computations (to $\Gamma > 1$). At $H=100$, $\Sr = 10^{-2}$, no local minimum is observed for $\Gamma \geq 1$. However, these results still indicate that for $H \leq 100$ and $\Gamma \geq 1$, a $70$ to $90$\% reduction in the critical Reynolds number is possible with the addition of pulsatility. They further support that the percentage reduction in $\ReyCrit$ improves with increasing $H$. The mode defining this local minimum, even at small $\Gamma$, still appears to be directly related to the \TSL\ mode (as there were no sharp changes in the dominant $\alpha$ through the entire $\Sr$--$\Gamma$--$\Rey$ space). 
\begin{table} \begin{center} \begin{tabular}{ ccc cc cc ccc } \hline $H$ & $\Rey_\mathrm{crit,s}$ & $\alpha_\mathrm{crit,s}$ & $\Gamma$ & $\Sr$ & $\ReyCrit/(1+1/\Gamma)$ & $\alpha$ & $\rrs$ & $\als$ & \textbf{\% Reduction} \\ \hline $10^{-7}$ & 5772.22 & 1.02055 & 1.29 & $7.8\times10^{-3}$ & 1773.29 & 1.3812 & 0.3072 & 1.3534 & $\mathbf{69.28}$\\ 0.01 & 5808.04 & 1.01991 & 1.29 & $7.8\times10^{-3}$ & 1777.58 & 1.3804 & 0.3061 & 1.3535 & $\mathbf{69.39}$\\ 0.1 & 6136.85 & 1.01435 & 1.29 & $7.8\times10^{-3}$ & 1816.18 & 1.3823 & 0.2959 & 1.3628 & $\mathbf{70.41}$\\ 0.3 & 6908.55 & 1.00291 & 1.27 & $7.6\times10^{-3}$ & 1902.79 & 1.3857 & 0.2754 & 1.3816 & $\mathbf{72.46}$\\ 1 & 10033.2 & 0.97163 & 1.24 & $7.2\times10^{-3}$ & 2215.87 & 1.3980 & 0.2209 & 1.4388 & $\mathbf{77.91}$\\ 3 & 21792.6 & 0.93194 & 1.19 & $6.3\times10^{-3}$ & 3185.90 & 1.4343 & 0.1462 & 1.5391 & $\mathbf{85.38}$\\ 10 & 72436.8 & 0.96833 & 1.19 & $5.6\times10^{-3}$ & 7050 & 1.59 & 0.0973 & 1.6420 & $\mathbf{90.27}$\\ \hline \end{tabular} \caption{Optimization of the pulsation (optimizing $\Gamma$, $\Sr$ and $\alpha$) for the greatest reduction in the rescaled critical Reynolds number relative to the steady result. This is achieved at $\Gamma$ just above unity, and pulsation frequencies similar to those of the local minimum for the \TSL\ mode, \figs\ \ref{fig:vary_Sr_G100}(a) and \ref{fig:vary_Sr_G10}(a). Importantly, the percentage reduction improves with increasing $H$, with over an order of magnitude reduction in critical Reynolds number for $H \geq 10$.} \label{tab:tab_3} \end{center} \end{table} Given the results of \fig\ \ref{fig:vary_G}(b), it is worth considering the maximum reduction in $\rrs$ that can be obtained via optimization of the pulsation over $10^{-4}<\Sr<1$ and $1<\Gamma<\infty$. These have been tabulated for increasing $H$ in \tbl\ \ref{tab:tab_3}. 
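The tabulated quantities are related by a simple rescaling. As an illustrative check (not part of the stability solver), the ratio $\rrs$ and the percentage reduction follow directly from the $H=10^{-7}$ row of \tbl\ \ref{tab:tab_3}:

```python
# Illustrative check of the relations underlying Table 3
# (values taken from the H = 1e-7 row of the table).
Re_crit_steady = 5772.22      # steady critical Reynolds number, Re_crit,s
Re_crit_rescaled = 1773.29    # pulsatile Re_crit / (1 + 1/Gamma)

# r_s: rescaled pulsatile critical Re relative to the steady value
r_s = Re_crit_rescaled / Re_crit_steady
reduction_pct = 100.0 * (1.0 - r_s)

print(round(r_s, 4), round(reduction_pct, 2))  # 0.3072 69.28
```

The remaining rows of the table reproduce in the same way, confirming that the boldface column is simply $100(1-\rrs)$.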
These optimized pulsations truly highlight how effective pulsatility can be in destabilizing a Q2D channel flow, from hydrodynamic conditions, with a $69.3$\% reduction at $H=10^{-7}$, up to a $90.3$\% reduction at $H=10$. Still larger percentage reductions are predicted at higher $H$, as $\rrs$ consistently decreases with increasing $H$. \subsection{Intracyclic behavior}\label{sec:lin_res2} This section focuses on processes taking place within each cycle that are obscured in the net growth quantifications. All results in this section are at $\ReyCrit$. \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llllll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $\Sr=10^{-3}$} & & \footnotesize{(b)} & \footnotesize{\hspace{4mm} $\Sr=10^{-2}$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig10a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^3E_\mathrm{U}}$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig10b-eps-converted-to.pdf}} & \makecell{\\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^4E_\mathrm{U}}$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \footnotesize{(c)} & \footnotesize{\hspace{4mm} $\Sr=10^{-1}$} & & \footnotesize{(d)} & \footnotesize{\hspace{4mm} $\Sr=1$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig10c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^5E_\mathrm{U}}$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & 
\makecell{\includegraphics[width=0.44\textwidth]{Fig10d-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^6E_\mathrm{U}}$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{The perturbation norm (solid; black) and the base flow energy relative to the time mean (dashed; red) over one period at critical conditions at $H=100$, $\Gamma=100$, for various $\Sr$. The phase differences $\phased$ between the local minima of each pair of curves are also annotated.} \label{fig:pert_base_norm100} \end{figure} The \TSL\ mode at $\Gamma=100$ and $H=100$ is considered first, in \fig\ \ref{fig:pert_base_norm100}, over a range of $\Sr$. The perturbation norm $\lVert \tilde{v} \rVert_2$ is compared to $E_\mathrm{U}(t) = \int U^2 \,\mathrm{d}y - \langle \int U^2 \,\mathrm{d}y \rangle_t$ (the base flow energy is shown relative to its time mean solely to aid figure legibility). There are only simple, sinusoidal energy variations at these conditions, and perturbation energies remain order unity over the entire cycle (akin to the `cruising' regime). The key result is that the phase difference between the perturbation and base flow energy curves changes as $\Sr$ is varied. Measuring the phase difference $\phased$ between the local minima of the perturbation and base flow energies appears most meaningful, and these values are annotated on \fig\ \ref{fig:pert_base_norm100}. The perturbation energy variation lags the base flow energy variation at $\Sr = 10^{-3}$, with $\phased = -0.2446$, and is closer to in phase by $\Sr = 10^{-2}$, $\phased = -0.1466$ (the optimal $\Sr$ is $1.5\times10^{-2}$ for minimizing $\rrs$ at $\Gamma=100$). By $\Sr = 10^{-1}$, the perturbation energy leads the base flow energy (positive $\phased$), and intracyclic growth is noticeably smaller. 
$\Sr = 1$ is close enough to the $\Sr \rightarrow \infty$ limit to produce negligible intracyclic growth. The minimum in $\rrs$ tends to occur when the perturbation and base flow energy growths are close to being in phase. Thus, selecting the optimal $\Sr$ to minimize $\rrs$ at a given $\Gamma$ (and $H$) amounts to tuning the frequency of the oscillating flow component, to ensure growth in the base flow and perturbation energies coincide. \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llllll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $\Sr=10^{-3}$} & & \footnotesize{(b)} & \footnotesize{\hspace{4mm} $\Sr=10^{-2}$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig11a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^2E_\mathrm{U}}$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig11b-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^3E_\mathrm{U}}$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \footnotesize{(c)} & \footnotesize{\hspace{4mm} $\Sr=10^{-1}$} & & \footnotesize{(d)} & \footnotesize{\hspace{4mm} $\Sr=1$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig11c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^4E_\mathrm{U}}$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig11d-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{10^5E_\mathrm{U}}$}}} \\ & \hspace{34mm} 
\footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{The perturbation norm (solid; black) and the base flow energy relative to the time mean (dashed; red) over one period at critical conditions at $H=10$, $\Gamma=10$, for various $\Sr$. The phase differences $\phased$ between the local minima of each pair of curves are also annotated.} \label{fig:pert_base_norm10} \end{figure} The energy norms at $\Gamma = 10$, $H=10$ are displayed in \fig\ \ref{fig:pert_base_norm10}. At $\Sr = 10^{-3}$, \fig\ \ref{fig:pert_base_norm10}(a), toward the steady base flow limit, the variation of the perturbation is again a simple sinusoid, slightly lagging behind the base flow energy variation, as for $\Gamma=100$, \fig\ \ref{fig:pert_base_norm100}(a). However, at $\Gamma=10$, the increase in intracyclic growth with reducing $\Gamma$ can be clearly observed, eclipsing six orders of magnitude. Thus, at lower $\Gamma$ and $\Sr$, a behavior akin to the `ballistic' regime is reached. At $\Sr = 10^{-2}$, intracyclic growth remains large (the local minimum in $\rrs$ occurs at $\Sr = 9\times 10^{-3}$). An additional complexity, in the form of a brief growth in perturbation energy (at $t_\mathrm{P} \approx 0.25$), occurs during the acceleration phase of the base flow, and is not detected at $\Sr < 9\times10^{-3}$. The additional growth incurred by the presence of inflection points is somewhat obscured by the lower $\ReyCrit$ at $\Sr=10^{-2}$. Increasing $\Sr$ to $10^{-1}$, the \TSL\ mode is no longer the least stable mode. At this $\Sr$ the intracyclic growth is relatively small, likely falling in the `cruising' regime, while by $\Sr=1$ the intracyclic growth again becomes negligible. 
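The phase difference $\phased$ can be extracted directly from sampled period histories by locating the minima of the two curves. A minimal sketch follows, using synthetic single-harmonic signals in place of the computed $\lVert \tilde{v} \rVert_2$ and $E_\mathrm{U}$ (the function name and sign convention are illustrative: a perturbation minimum lagging the base flow minimum gives negative $\phased$, as in the figures):

```python
import numpy as np

def phase_difference(t, pert_norm, base_energy):
    """Offset (in fractions of a period) between the local minima of the
    perturbation norm and the base flow energy, wrapped to (-0.5, 0.5].
    Negative values: the perturbation minimum lags the base flow minimum."""
    d = t[np.argmin(base_energy)] - t[np.argmin(pert_norm)]
    return (d + 0.5) % 1.0 - 0.5

# Synthetic single-harmonic stand-ins for ||v||_2 and E_U over one period:
t_P = np.linspace(0.0, 1.0, 1000, endpoint=False)
E_U = np.cos(2*np.pi*t_P)                # base flow energy variation
pert = np.cos(2*np.pi*(t_P - 0.1466))    # perturbation norm, lagging
print(round(phase_difference(t_P, pert, E_U), 3))  # -0.147
```

With the computed histories substituted for the synthetic signals, this reproduces the annotated $\phased$ values up to the sampling resolution of the period.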
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llllll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $H=100$, $\Gamma=100$, $\Sr=10^{-2}$} & & \footnotesize{(b)} & \footnotesize{\hspace{4mm} $H=10$, $\Gamma=10$, $\Sr=10^{-2}$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig12a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig12b-eps-converted-to.pdf}} & \makecell{\\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \footnotesize{(c)} & \footnotesize{\hspace{4mm} $H=100$, $\Gamma=100$, $\Sr=10^{-1}$} & & \footnotesize{(d)} & \footnotesize{\hspace{4mm} $H=10$, $\Gamma=10$, $\Sr=10^{-1}$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig12c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig12d-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \footnotesize{(e)} & \footnotesize{\hspace{4mm} $H=100$, $\Gamma=100$, $\Sr=1$} & & \footnotesize{(f)} & \footnotesize{\hspace{4mm} $H=10$, $\Gamma=10$, $\Sr=1$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig12e-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} 
\rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig12f-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{The linear evolution of the leading eigenvector $\tilde{v}(y,t)$ over one period. Linearly spaced contours between $\pm \max|\tilde{v}|$ are plotted, solid lines (red flooding) denote positive values, dotted lines (blue flooding) negative values, except for $H=10$, $\Sr=10^{-2}$, with logarithmically spaced contours between $-15$ and $15$. Perturbation norms $\lVert \tilde{v} \rVert_2$ from \figs\ \ref{fig:pert_base_norm100} and \ref{fig:pert_base_norm10} are overlaid.} \label{fig:pert_xt_norm10} \end{figure} The linearized evolutions of the leading eigenvector are depicted over the period of the base flow in \fig\ \ref{fig:pert_xt_norm10}. At $\Gamma = 100$ the dominant mode is the \TSL\ mode for all $\Sr$, with a structure that does not observably change with time, as shown in the accompanying animation \cite{Supvideos2020}. The amplitude variations are also small; many repetitions of the wave are visible at lower $\Sr$ as the advection timescale is much smaller than the transient inertial timescale. Although the mode has a very similar appearance to that of a steady \TS\ wave, the additional isolation of the boundary layers means that the $H=100$ pulsatile mode resembles an $H=400$ steady mode \citep{Camobreco2020transition}. Once $H$ is reduced, separate \TS\ waves are no longer observed at each wall; instead, a single conjoined structure appears. At larger $\Sr$, the $H=10$, $\Gamma = 10$ mode structure still displays minimal time variation. 
Only at $\Sr=10^{-2}$ is significant unsteadiness observed, slightly towards the walls, and prominently during the disruption of the decay phase (at $t_\mathrm{P} \approx 0.25$). However, the general appearance of the structure as a conjoined \TS\ wave persists (this case is also animated \cite{Supvideos2020}). \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llllll } \footnotesize{(a)} & \footnotesize{\hspace{4mm} $\Sr=10^{-3}$} & & \footnotesize{(b)} & \footnotesize{\hspace{4mm} $\Sr=4\times10^{-3}$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig13a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{E_\mathrm{U}}$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig13b-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{E_\mathrm{U}}$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \footnotesize{(c)} & \footnotesize{\hspace{4mm} $\Sr=7.2\times10^{-3}$} & & \footnotesize{(d)} & \footnotesize{\hspace{4mm} $\Sr=10^{-2}$} & \\ \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig13c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{E_\mathrm{U}}$}}} & \makecell{ \\ \vspace{12mm} \rotatebox{90}{\footnotesize{$\lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.44\textwidth]{Fig13d-eps-converted-to.pdf}} & \makecell{ \\ \vspace{11mm} \rotatebox{90}{\footnotesize{$\tmr{E_\mathrm{U}}$}}} \\ & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & & & \hspace{34mm} \footnotesize{$t_\mathrm{P}$} & \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} 
\addtolength{\extrarowheight}{+10pt} \end{center} \caption{The perturbation norm (solid; black) and the base flow energy relative to the time mean (dashed; red) over one period at critical conditions at $H=1$, $\Gamma=1.24$, for various $\Sr$. The $\Sr=7.2\times10^{-3}$ case represents the optimized pulsation for this $H$, recalling \tbl\ \ref{tab:tab_3}. The phase differences $\phased$ between the local minima of each pair of curves are also annotated.} \label{fig:pert_base_norm1} \end{figure} Finally, at $H=1$, the optimized conditions ($\Gamma = 1.24$, $\Sr = 7.2 \times 10^{-3}$) and nearby $\Sr$ are considered, with the energy norms displayed in \fig\ \ref{fig:pert_base_norm1}. This smaller $\Gamma$ features staggering intracyclic growth, with almost 24 orders of magnitude of growth at $\Sr = 10^{-3}$. Similar to previous cases, at lower $\Sr$ the local minimum in perturbation energy significantly lags behind the minimum in the base flow energy, $\phased=-0.2380$. However, an additional feature at smaller $\Sr$ and $\Gamma$ is that the perturbation decay is more rapid, and almost plateaus at low energies (with neither a smooth transition from decay to growth nor a sharp bounce back up). At the slightly larger $\Sr = 4\times 10^{-3}$, the decay is not so rapid (decaying over $0.112<t_\mathrm{P}<0.653$ compared to $0.008<t_\mathrm{P}<0.491$), with a sharp bounce back to growth and a smaller lag in the locations of the local minima, $\phased = -0.0996$. At the optimized $\Sr = 7.2 \times 10^{-3}$, the decay rate of the perturbation is matched to the period of the base flow, the local minima in energy are close to coinciding ($\phased = -0.0282$), and so inflection points are maintained throughout the deceleration phase ($\rrs$ is then minimized). At larger $\Sr$, the perturbation energy leads the base flow energy ($\phased = 0.0195$), and the deceleration phase is not used to its full extent. 
\begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ ll ll } \footnotesize{(a)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0$, $\max(|\hat{v}|)=2.991\times10^{-1}$} & \footnotesize{(b)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.1$, $\max(|\hat{v}|)=8.215\times10^{0}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig14a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig14b-eps-converted-to.pdf}} \\ \footnotesize{(c)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.2$, $\max(|\hat{v}|)=3.273\times10^{1}$} & \footnotesize{(d)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.3$, $\max(|\hat{v}|)=2.474\times10^{0}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig14c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig14d-eps-converted-to.pdf}} \\ \footnotesize{(e)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.4$, $\max(|\hat{v}|)=2.879\times10^{-2}$} & \footnotesize{(f)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.7$, $\max(|\hat{v}|)=3.051\times10^{-7}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig14e-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig14f-eps-converted-to.pdf}} \\ \footnotesize{(g)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.75$, $\max(|\hat{v}|)=2.954\times10^{-8}$} & \footnotesize{(h)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.85$, $\max(|\hat{v}|)=6.752\times10^{-5}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig14g-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & 
\makecell{\includegraphics[width=0.458\textwidth]{Fig14h-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$x$} & & \hspace{38mm} \footnotesize{$x$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Snapshots of the eigenvector expanded in the streamwise direction $\hat{v}=\tilde{v}(y,t)\exp(i\alpha x)$ through one cycle $t_\mathrm{P} \in [0,1]$ at $H=1$, $\Gamma=1.24$, $\Sr=7.2\times10^{-3}$. The base flow is overlaid (the black dashed line indicates zero base flow velocity). Red flooding positive; blue flooding negative.} \label{fig:snapshots_H1_helper} \end{figure} The evolution of the optimized perturbation at $H=1$ is shown in \fig\ \ref{fig:snapshots_H1_helper}, and in a supplementary animation \cite{Supvideos2020}. From $t_\mathrm{P}=0$, the perturbation is slowly growing, aided by the single large inflection point present in each half of the domain. As these become less pronounced, the `wings' of the perturbation are pulled in ($t_\mathrm{P}=0.2$). By this point, inflection points in the base flow have vanished, as the wall oscillation follows through to negative velocities, although a small amount of residual growth is maintained. The pull of the walls on the central structure sweeps the `wings' forward ($t_\mathrm{P} = 0.3$) as the base flow velocity in the central region is smaller than the velocities near the walls. The downstream pull of the walls acts to increasingly shear the structure, with perturbation decay until $t_\mathrm{P} = 0.738$. The structure rapidly reorients to the wider forward `winged' structure just as inflection points reappear in the base flow, near $t_\mathrm{P} = 0.75$. As these inflection points become more pronounced, rapid growth occurs, while the `wings' are swept further forward. 
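The streamwise expansion used for these snapshots, $\hat{v}(x,y)=\mathrm{Re}[\tilde{v}(y,t)\exp(i\alpha x)]$, is straightforward to reproduce. A minimal sketch follows; the complex profile below is a placeholder, not the computed eigenvector, and the wave number is taken from the $H=1$ row of \tbl\ \ref{tab:tab_3}:

```python
import numpy as np

# Expand a 1D complex eigenvector profile tilde_v(y) into a 2D snapshot
# hat_v(y, x) = Re[ tilde_v(y) * exp(i*alpha*x) ] over one wavelength.
alpha = 1.3980                              # critical wave number, H = 1
y = np.linspace(-1.0, 1.0, 201)
x = np.linspace(0.0, 2*np.pi/alpha, 128)    # one streamwise wavelength
tilde_v = (1 - y**2) * np.exp(1j*np.pi*y)   # placeholder complex profile

v_hat = np.real(np.outer(tilde_v, np.exp(1j*alpha*x)))  # shape (y, x)
print(v_hat.shape)  # (201, 128)
```

At $x=0$ the snapshot reduces to the real part of the profile, and translating the pattern in $x$ by a quarter wavelength recovers the imaginary part, so the full complex eigenvector is visible in a single frame.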
\section{Nonlinear analysis}\label{sec:nlin_all} \subsection{Formulation and validation}\label{sec:nlin_for} We now seek to investigate the nonlinear behavior of the optimized pulsations at various $H$. As a first step in investigating transitions to turbulence, the modal instabilities predicted in the preceding sections are targeted by the DNS. Although linear or nonlinear transiently growing disturbances may initiate bypass transition scenarios \cite{Kerswell2014optimization, Reshotko2001transient, Schmid2001stability, Trefethen1993hydrodynamic, Waleffe1995transition}, the modal instability seemed the natural starting point. Furthermore, if the modal instability has a large decay rate, linear transient growth mechanisms can be strongly compromised \cite{Lozano2020cause}, as observed for cylinder wakes in particular \cite{Abdessemed2009transient}. Finally, previous work on steady Q2D transitions observed that only turbulence generated by a modal instability \cite{Camobreco2020role, Camobreco2020transition} was sustainable in wall-driven channel flows. The direct numerical simulation of Eqs.~(\ref{eq:non_dim_m}) and (\ref{eq:non_dim_c}) is performed as follows. The initial field is solely the analytic solution from Sec.~\ref{sec:prob_set}, $\vect{u}=U(y,t=0)$. The initial phase of the cycle did not prove important, whether an initial seed of white noise or no initial perturbation was applied. The flow is driven by a constant pressure gradient, $\partial P/\partial x = \gamma_1 [\cosh(H^{1/2})/(\cosh(H^{1/2})-1)]H/\Rey$, with the pressure decomposed into a linearly varying and fluctuating periodic component, as $p = P + p'$, respectively. Periodic boundary conditions, $\vect{u}(x=0)=\vect{u}(x=W)$ and $p'(x=0)=p'(x=W)$, are applied at the upstream and downstream boundaries. The domain length $W=2\pi/\alpha_\mathrm{max}$ is set to match the wave number that achieved maximal linear growth $\alpha_\mathrm{max}$. 
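The driving pressure gradient quoted above has a simple hydrodynamic limit that serves as a useful check: as $H \rightarrow 0$, $\cosh(H^{1/2})-1 \sim H/2$, recovering the plane Poiseuille value $2\gamma_1/\Rey$. A brief sketch (the function name is illustrative, not from the solver):

```python
import math

def pressure_gradient(gamma1, H, Re):
    """Constant driving pressure gradient for the SM82 base flow,
    dP/dx = gamma1 * cosh(sqrt(H)) / (cosh(sqrt(H)) - 1) * H / Re."""
    c = math.cosh(math.sqrt(H))
    return gamma1 * c / (c - 1.0) * H / Re

# Hydrodynamic limit H -> 0: cosh(sqrt(H)) - 1 ~ H/2, so
# dP/dx -> 2 * gamma1 / Re, the plane Poiseuille value.
g_small = pressure_gradient(1.0, 1e-8, 5772.22)
print(abs(g_small - 2.0/5772.22) < 1e-9)  # True
```

At large $H$ the prefactor $\cosh(\sqrt{H})/(\cosh(\sqrt{H})-1)$ tends to unity, and the gradient grows linearly with $H$ at fixed $\Rey$, consistent with the increasing Hartmann friction.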
Synchronous lateral wall movement generates the oscillating flow component, with boundary conditions $U(y = \pm 1,t) = \gamma_2 \cos(t)$. Simulations are performed with an in-house spectral element solver, employing a third-order backward differencing scheme, with operator splitting, for time integration. High-order Neumann pressure boundary conditions are imposed on the oscillating walls to maintain third-order time accuracy \citep{Karniadakis1991high}. The Cartesian domain is discretized with quadrilateral elements over which Gauss--Legendre--Lobatto nodes are placed. The mesh design is identical to that of Ref.~\citep{Camobreco2020transition}. The wall-normal resolution was unchanged, although the streamwise resolution was doubled. Elements are otherwise uniformly distributed in both streamwise and transverse directions, ensuring perturbations remain well resolved during all phases of their growth. The solver, incorporating the SM82 friction term, has been previously introduced and validated \citep{Cassels2016heat, Cassels2019from3D, Hussam2012optimal, Sheard2009cylinders}. Further validation, depicted in \fig\ \ref{fig:val_DNS}(a), is a comparison between the nonlinear time evolution in primitive variables (the in-house solver, hereafter referred to as DNS) and the linearized evolution with the timestepper, introduced earlier. These are both computed using the $\ReyCrit$ and $\alphaCrit$ from the Floquet method, at $H=10$, $\Gamma=10$ and $H=100$, $\Gamma=100$, both at $\Sr=10^{-2}$ (cases discussed in Sec.~\ref{sec:lin_res2}). Initial seeds of white noise have specified initial energy $E_{0}(t=0)=\int(\hat{u}^2+\hat{v}^2)\,\dUP\Omega/\int U^2(t=0)\,\dUP\Omega$, where $\Omega$ represents the computational domain. Linearity is ensured with $E_0 = 10^{-6}$. The DNS settles after a short period of decay, and then attains excellent agreement with the intracyclic growth curves from the linearized timestepper, both in magnitude and in dynamics over the cycle. 
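The white-noise seeding normalization can be sketched as follows; the function name and the parabolic stand-in base flow are illustrative, not taken from the solver:

```python
import numpy as np

rng = np.random.default_rng(0)

def seed_white_noise(U, dOmega, E0):
    """Scale a white-noise velocity perturbation (u', v') so that
    E0 = int(u'^2 + v'^2) dOmega / int(U^2) dOmega,
    mirroring the initialization described in the text (sketch)."""
    u = rng.standard_normal(U.shape)
    v = rng.standard_normal(U.shape)
    base = np.sum(U**2) * dOmega
    pert = np.sum(u**2 + v**2) * dOmega
    scale = np.sqrt(E0 * base / pert)
    return scale * u, scale * v

U = 1.0 - np.linspace(-1, 1, 256)**2          # stand-in base flow profile
u0, v0 = seed_white_noise(U, dOmega=2/256, E0=1e-6)
ratio = np.sum(u0**2 + v0**2) / np.sum(U**2)
print(f"{ratio:.2e}")  # 1.00e-06
```

Because the scaling is applied after the noise is generated, the prescribed $E_0$ is met exactly regardless of the random seed or grid resolution.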
The only difference is that for the $\Gamma=10$ case, at small perturbation amplitudes (near $10^{-10}$) the nonlinear evolution cuts out and remains at roughly constant energy until the deceleration phase of the base flow, when growth begins again, while the linearized evolution continues on a smooth decay-growth trajectory. The resolution requirements are assessed by varying the polynomial order $\Np$ of the spectral elements. \Fig\ \ref{fig:val_DNS}(b) depicts simulations, with no initiating perturbation, driven by the optimized pulsation at $H=1$, at critical conditions. Excluding the initial growth, which is always resolution dependent, the agreement in the intracyclic growth stages is excellent (see the box-out). The slight differences predominantly originate from the initial growth stage, translating the curves with respect to one another. $\Np=19$ was deemed sufficient for the pulsatile problem, as for the steady base flow problem \citep{Camobreco2020transition}. \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\footnotesize{(a)} \vspace{19.5mm} \\ \vspace{24mm} \rotatebox{90}{\footnotesize{$\int |\hat{v}| \mathrm{d}\Omega,\, \lVert \tilde{v} \rVert_2$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig15a-eps-converted-to.pdf}} & \makecell{\footnotesize{(b)} \vspace{26.5mm} \\ \vspace{33mm} \rotatebox{90}{\footnotesize{$\Euv$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig15b-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} & & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Resolution testing at critical conditions. (a) Comparison of nonlinear DNS (in-house solver; solid lines) and linearized timestepper (dashed lines) at $\Sr = 10^{-2}$. An initial perturbation of white noise with $E_0=10^{-6}$ was applied to the DNS. 
(b) Nonlinear DNS, with no initial perturbation, of the $H=1$ optimized pulsation ($\Gamma=1.24$, $\Sr = 7.2\times10^{-3}$), varying polynomial order. } \label{fig:val_DNS} \end{figure} Fourier analysis is also performed in the nonlinear simulations, exploiting the streamwise periodicity of the domain. The absolute values of the Fourier coefficients $c_\kappa = \lvert (1/\Nf)\sum_{n=0}^{n=\Nf-1}\hat{f}(x_n) e^{-2\pi i \kappa n/\Nf}\rvert$ are obtained using the discrete Fourier transform in MATLAB, where $x_n$ represents the $n$'th $x$-location linearly spaced between $x_0=0$ and $x_{\Nf}=W$. $\hat{f}$ may be $\hat{u}$, $\hat{v}$, $\hat{\omega}_z = \partial \hat{v}/ \partial x - \partial \hat{u} /\partial {y}$ or $\hat{u}^2+\hat{v}^2$, depending on the property of interest. In the $y$-direction, one of two treatments is applied. In the first, a mean Fourier coefficient $\meanfoco$ is obtained by averaging the coefficients obtained at 21 $y$-values, taking $\Nf=10000$. In the second, considering 912 $y$-values and taking $\Nf=380$, all except the $j$'th (and $\Nf-j$'th) Fourier coefficients are set to zero, $c_{\kappa,\neg j}=0$, and the inverse discrete Fourier transform $\hat{f}_j(x_n)= \sum_{\kappa=0}^{\kappa=\Nf-1}c_{\kappa,\neg j} e^{2\pi i \kappa n/\Nf}$ is computed. After isolating the $j$'th mode in the physical domain $\hat{f}_j$, the degree of symmetry within that mode is assessed by computing $\hat{f}_{\mathrm{s},j} = (\sum_{m=0}^{m=\Ny}[\hat{f}_j(y_m)-\hat{f}_j(-y_m)]^2)^{1/2}$, where $y_m$ represents the $m$'th $y$-location linearly spaced between $y_0=-1$ and $y_{\Ny}=-1/(\Ny-1)$, and taking $\Ny = 912/2$. Thus, a purely symmetric mode has $\hat{f}_{\mathrm{s},j}=0$ as $\hat{f}_j(y_m)=\hat{f}_j(-y_m)$ for all $y_m$. 
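The Fourier post-processing described above (performed in MATLAB) can be sketched equivalently in NumPy. The single-harmonic test signal and the symmetric $y$-grid used for the symmetry measure are illustrative assumptions:

```python
import numpy as np

def fourier_coeffs(f_x, Nf):
    """|c_kappa| = |(1/Nf) * sum_n f(x_n) exp(-2*pi*i*kappa*n/Nf)|."""
    return np.abs(np.fft.fft(f_x) / Nf)

def isolate_mode(f_x, j):
    """Zero all Fourier coefficients except the j'th (and Nf-j'th),
    then invert, retaining only streamwise mode j in physical space."""
    Nf = len(f_x)
    c = np.fft.fft(f_x)
    keep = np.zeros(Nf, dtype=complex)
    keep[j] = c[j]
    if j != 0:
        keep[Nf - j] = c[Nf - j]
    return np.real(np.fft.ifft(keep))

def symmetry_defect(fj):
    """sqrt(sum [f(y_m) - f(-y_m)]^2) on a y-grid symmetric about 0;
    zero for a purely symmetric mode."""
    return np.sqrt(np.sum((fj - fj[::-1])**2))

# Sanity check on a single-harmonic signal: cos(3*theta) is pure mode 3.
Nf = 380
x = np.arange(Nf) / Nf
f = np.cos(2*np.pi*3*x)
c = fourier_coeffs(f, Nf)
print(np.argmax(c[1:Nf//2]) + 1)            # 3   (|c_3| = 0.5)
print(np.allclose(isolate_mode(f, 3), f))   # True

y = np.linspace(-1.0, 1.0, 41)
print(symmetry_defect(1 - y**2) < 1e-12)    # True: even profile
```

Note that with the $1/\Nf$ normalization in the forward transform, a unit-amplitude harmonic splits its weight equally between bins $\kappa=j$ and $\kappa=\Nf-j$, giving $|c_j|=0.5$.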
\subsection{Critical conditions}\label{sec:nlin_cres} \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\footnotesize{(a)} \vspace{23mm} \\ \vspace{27mm} \rotatebox{90}{\footnotesize{$\Ev$, $\lVert \tilde{v} \rVert_2^2$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig16a-eps-converted-to.pdf}} & \makecell{\footnotesize{(b)} \vspace{26.5mm} \\ \vspace{33mm} \rotatebox{90}{\footnotesize{$\Euv$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig16b-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} & & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Effect of varying $E_0$, between $10^0$ and $10^{-10}$, on nonlinear evolution, compared to a case without an initial perturbation (black dashed line in (a) and solid line in (b)) and a case linearly evolved (pink dot-dashed line), for the optimized pulsation at $H=1$, $\Gamma = 1.24$, $\Sr = 7.2\times10^{-3}$. (a) $\Ev = \int \hat{v}^2 \mathrm{d} \Omega$. (b) $\Euv = \int \hat{u}^2 + \hat{v}^2 \mathrm{d} \Omega$.}
\label{fig:varyEz} \end{figure} \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\footnotesize{(a)} \vspace{26.5mm} \\ \vspace{33.5mm} \rotatebox{90}{\footnotesize{$\Ev$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig17a-eps-converted-to.pdf}} & \makecell{\footnotesize{(b)} \vspace{27mm} \\ \vspace{33.5mm} \rotatebox{90}{\footnotesize{$\Euv$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig17b-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} & & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Nonlinear evolutions of the optimized pulsations, at various $H$, from \tbl\ \ref{tab:tab_3}. (a) $\Ev = \int \hat{v}^2 \mathrm{d} \Omega$. (b) $\Euv = \int \hat{u}^2 + \hat{v}^2 \mathrm{d} \Omega$. The ultimate result of the nonlinear evolutions is no net growth at $\ReyCrit$.} \label{fig:nogro_varyH} \end{figure} This section focuses solely on the minimum $\rrs$ conditions of \tbl\ \ref{tab:tab_3}, at $\ReyCrit$. The first factor considered is the role of the initial perturbation. Comparing a simulation without an initiating perturbation (that is, initiated only by `numerical noise'), and simulations initiated with white noise of specified magnitude, \fig\ \ref{fig:varyEz}, yields two key results. The first is that all the initial energy trajectories collapse to the numerical noise result within the first period of evolution, except $\Ezero=1$ (slightly offset). For $\Ezero<1$ the perturbation energy decays no further than for the case initiated from numerical noise, and plateaus until the next deceleration phase of the base flow. Once this occurs, all energies grow in unison. As the $\Gamma$, $\Sr$ optima are within the `ballistic' regime, they decay to linearly small energies every period \cite{Pier2017linear}.
Hence, unless a transition to turbulence occurs in the first period of the base flow, the initial energy has no influence on subsequent cycles. The second key result is that the linear and nonlinear evolutions are similar when compared via $\Ev = \int \hat{v}^2 \mathrm{d}\Omega$, see \fig\ \ref{fig:varyEz}(a), but not when compared via $\Euv = \int \hat{u}^2+\hat{v}^2 \mathrm{d}\Omega$, \fig\ \ref{fig:varyEz}(b). In the second period of the base flow, the nonlinear intracyclic decay is largely truncated. After another period, the nonlinear case saturates to relatively constant energy maxima and minima (\fig\ \ref{fig:varyEz}(b) inset). Previous works \cite{Camobreco2020role, Camobreco2020transition} have shown that growth in $\hat{v}$ is stored in streamwise-independent structures, $\hat{u}$, in nonlinear modal and nonmodal growth scenarios of steady quasi-two-dimensional base flows. A similar process occurs here, as is discussed further shortly. The lack of nonlinear net growth at the critical conditions for the remaining cases in \tbl\ \ref{tab:tab_3} is depicted in \fig\ \ref{fig:nogro_varyH}, again without specifying an initial perturbation. At higher $H$, nonlinear intracyclic growth was smaller than expected (linearly, intracyclic growth increased with increasing $H$ at $\Rey = \ReyCrit$). However, the final result of no net growth is still maintained, as expected at $\ReyCrit$. The only slight difference is that at higher $H$, and thereby larger $\Rey$, the maximum and minimum energies reached become increasingly inconsistent (see box-out). In the linear solver, such inconsistencies would eventually limit the accurate computation of $\ReyCrit$.
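The distinction between the two energy measures can be sketched with a minimal quadrature example (illustrative only; a uniform grid, a composite trapezoidal rule and placeholder analytic fields are assumed, not DNS data). It shows how energy held in a streamwise-independent $\hat{u}$ sheet keeps $\Euv$ large while $\Ev$ is small:

```python
import numpy as np

def trap(g, h, axis):
    """Composite trapezoidal rule with uniform spacing h along `axis`."""
    g = np.moveaxis(g, axis, -1)
    return h * (0.5 * g[..., 0] + g[..., 1:-1].sum(axis=-1) + 0.5 * g[..., -1])

def energy(field, hx, hy):
    """E = integral of field^2 over the domain (rows: y, columns: x)."""
    return trap(trap(field ** 2, hx, axis=1), hy, axis=0)

x = np.linspace(0.0, 2.0, 201)      # placeholder streamwise domain
y = np.linspace(-1.0, 1.0, 101)     # cross-stream coordinate
Y, X = np.meshgrid(y, x, indexing="ij")

u_hat = np.sin(np.pi * Y)                             # streamwise-independent sheet
v_hat = 1e-6 * np.sin(np.pi * Y) * np.cos(np.pi * X)  # heavily decayed v-mode

Ev = energy(v_hat, x[1] - x[0], y[1] - y[0])
Euv = energy(u_hat, x[1] - x[0], y[1] - y[0]) + Ev
```

Here $\Ev \approx 10^{-12}$ while $\Euv \approx 2$, mirroring the orders-of-magnitude gap between the two measures seen in the figures.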
\subsection{Supercritical conditions}\label{sec:nlin_sres} \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\footnotesize{(a)} \vspace{26.5mm} \\ \vspace{33.5mm} \rotatebox{90}{\footnotesize{$\Ev$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig18a-eps-converted-to.pdf}} & \makecell{\footnotesize{(b)} \vspace{27mm} \\ \vspace{33.5mm} \rotatebox{90}{\footnotesize{$\Euv$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig18b-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} & & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Nonlinear evolutions of the optimized $\Sr$ and $\Gamma$ for minimum $\rrs$, for various $H$, at $\Rey/\ReyCrit=1.1$. (a) $\Ev = \int \hat{v}^2 \mathrm{d} \Omega$. (b) $\Euv = \int \hat{u}^2 + \hat{v}^2 \mathrm{d} \Omega$. These results are very similar to those at $\Rey/\ReyCrit=1$ (\fig\ \ref{fig:nogro_varyH}) in spite of the fact that linearly, exponential growth is predicted.} \label{fig:nogro11_varyH} \end{figure} Supercritical Reynolds numbers are briefly considered, again without specifying an initial perturbation. As the base flow is Reynolds number dependent, only a 10\% and a 20\% increase (not shown) in the Reynolds number were attempted, for the values of $\Gamma$ and $\Sr$ that minimize $\rrs$ for $H \leq 10$. The overall behaviors at $\Rey/\ReyCrit=1$ (\fig\ \ref{fig:nogro_varyH}) and $\Rey/\ReyCrit=1.1$ (\fig\ \ref{fig:nogro11_varyH}) are virtually identical, even though exponential growth is predicted linearly at $\Rey/\ReyCrit=1.1$. Nonlinearly, the intracyclic growth in the first period is large enough to reach nonlinear amplitudes, which quickly modulates the base flow, resulting in the no net growth behavior. 
However, turbulence is not observed at these supercritical conditions, with only some chaotic behavior following the symmetry breaking of the linear mode. The severity of the decay in the acceleration phase may be the main factor preventing the transition to turbulence, although the magnitudes of $H$ and $\Rey$ could also be factors, since cases with $H<3$ were unable to trigger turbulence for a steady base flow at the equivalent $\Rey/\ReyCrit$ ratio \cite{Camobreco2020transition}. Although higher $H$ were able to trigger turbulence in the classical duct flow, the Reynolds numbers were larger for the steady base flow, as optimizing for minimum $\rrs$ results in an order of magnitude reduction in $\ReyCrit$. \subsection{Role of streamwise and wall-normal velocity components}\label{sec:str_wal} \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ ll ll } \footnotesize{(a)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.61$, $\max(|\hat{v}|)=3.319\times10^{-8}$} & \footnotesize{(b)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.7$, $\max(|\hat{v}|)=1.348\times10^{-9}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig19a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig19b-eps-converted-to.pdf}} \\ \footnotesize{(c)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.75$, $\max(|\hat{v}|)=1.646\times10^{-8}$} & \footnotesize{(d)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.925$, $\max(|\hat{v}|)=3.491\times10^{-3}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig19c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig19d-eps-converted-to.pdf}} \\ \footnotesize{(e)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.965$,
$\max(|\hat{v}|)=2.310\times10^{-2}$} & \footnotesize{(f)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=2$, $\max(|\hat{v}|)=6.188\times10^{-2}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig19e-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig19f-eps-converted-to.pdf}} \\ \footnotesize{(g)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=2.155$, $\max(|\hat{v}|)=3.278\times10^{-2}$} & \footnotesize{(h)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=2.22$, $\max(|\hat{v}|)=7.450\times10^{-3}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig19g-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig19h-eps-converted-to.pdf}} \\ \footnotesize{(i)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=2.3$, $\max(|\hat{v}|)=8.864\times10^{-4}$} & \footnotesize{(j)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=2.4$, $\max(|\hat{v}|)=4.572\times10^{-5}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig19i-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig19j-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$x$} & & \hspace{38mm} \footnotesize{$x$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Nonlinear evolution of $\hat{v}$-velocity perturbation contours at $H=1$, $\Gamma=1.24$, $\Sr=7.2\times10^{-3}$ through one cycle $t_\mathrm{P} \in [1.5,2.5]$. The base flow is overlaid (the black dashed line indicates zero base flow velocity). Red flooding positive; blue flooding negative.} \label{fig:snaps_nonlinear_v} \end{figure} Two aspects of the nonlinear evolution are considered in more detail. 
The first is the slight difference between the linear and nonlinear growth in $\hat{v}$, observed in \fig\ \ref{fig:varyEz}(a). Snapshots of the $\hat{v}$-velocity from the DNS are depicted in \fig\ \ref{fig:snaps_nonlinear_v} over $t_\mathrm{P} \in [1.5,2.5]$; the linear case at the same conditions was shown in \fig\ \ref{fig:snapshots_H1_helper}, over $t_\mathrm{P} \in [0,1]$. An animation comparing these cases is also provided \cite{Supvideos2020}. At small energies ($t_\mathrm{P}=1.61$), the highly sheared structure along the centreline of the nonlinear case appears very similar to its linear counterpart (around $t_\mathrm{P} = 1.7$). However, some higher wave number effects are still visible near the walls in the nonlinear case even at these small energies. The reformation of the nonlinear structure, as it spreads over the duct ($t_\mathrm{P} = 1.7$-$1.75$) and as the `wings' pull forward ($t_\mathrm{P}=1.925$), when inflection points form in the base flow, is also very similar to the linear case. However, past $t_\mathrm{P} \approx 1.925$, the linear growth rate slightly diminishes, while the nonlinear growth rate remains higher, again recalling \fig\ \ref{fig:varyEz}(a). This is related to nonlinearity inducing a symmetry breaking of the linear mode, from around $t_\mathrm{P}=1.965$, with the region of positive $\hat{v}$-velocity tilting downward, and the region of negative velocity tilting upward. Secondary structures then separate from each core, before the structures eventually break apart around $t_\mathrm{P}=2.155$. From $t_\mathrm{P}=2.22$ through to $t_\mathrm{P}=2.5$, the decay induced by the downstream pull of the walls creates a single highly sheared structure along the centreline, as for the linear case.
The second aspect of the nonlinear evolution is the limited decay of $\Euv = \int \hat{u}^2+\hat{v}^2 \mathrm{d}\Omega$, of only 3 orders of magnitude, compared to the 18 or so orders of magnitude of decay in $\Ev = \int \hat{v}^2 \mathrm{d}\Omega$ (\fig\ \ref{fig:varyEz} or \ref{fig:nogro_varyH}). Snapshots of the $\hat{u}$-velocity from the DNS are shown in \fig\ \ref{fig:snaps_nonlinear_u} over the first two periods. An animation comparing the linear and nonlinear $\hat{u}$-velocity is also provided as supplementary material \cite{Supvideos2020}. The $\hat{u}$ perturbation is initially close to symmetric (see animation), with a central positive streamwise sheet of velocity, bounded by two negative sheets at each wall. The negative sheet of velocity near the bottom wall intensifies, and expands to fill the lower half of the duct, while pushing the positive sheet of velocity into the upper half of the duct, at $t_\mathrm{P}=0.22$ (the sheet of negative velocity near the top wall almost vanishing). By $t_\mathrm{P}=0.6$, the $\hat{u}$ perturbation is close to purely antisymmetric. However, opposite-signed velocity near the walls begins encroaching on the streamwise sheets around the time when inflection points form in the base flow. This generates the linear mode observable at $t_\mathrm{P}=0.925$. At $t_\mathrm{P}=0.965$ the symmetry breaking observed in $\hat{v}$ is also observed in $\hat{u}$, disrupting the linear mode. This disruption eventually eliminates the positive velocity structures, leaving a wavy sheet of negative velocity, at $t_\mathrm{P}=1.3$. Throughout the acceleration phase of the base flow, the sheet smooths out until it is streamwise invariant. This now symmetric sheet of negative velocity stores a large amount of perturbation energy, which produces a relatively large minimum $\hat{u}$-velocity. This sheet acts as a modulation to the base flow, and is highly persistent.
Similar behaviors are observed in steady duct flows \cite{Camobreco2020transition}. Throughout the linear growth stage, the linear perturbation is able to form over the negative sheet, between $t_\mathrm{P}=1.9$ and $t_\mathrm{P} = 1.965$, before nonlinearity again breaks symmetry in the linear mode past $t_\mathrm{P}=1.965$. \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ ll ll } \footnotesize{(a)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.22$, $\max(|\hat{u}|)=2.722\times10^{-7}$} & \footnotesize{(b)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.76$, $\max(|\hat{u}|)=3.679\times10^{-7}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig20a-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig20b-eps-converted-to.pdf}} \\ \footnotesize{(c)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.925$, $\max(|\hat{u}|)=1.010\times10^{-2}$} & \footnotesize{(d)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=0.965$, $\max(|\hat{u}|)=7.503\times10^{-2}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig20c-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig20d-eps-converted-to.pdf}} \\ \footnotesize{(e)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.155$, $\max(|\hat{u}|)=5.001\times10^{-2}$} & \footnotesize{(f)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.3$, $\max(|\hat{u}|)=1.943\times10^{-2}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig20e-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig20f-eps-converted-to.pdf}} \\ \footnotesize{(g)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.4$,
$\max(|\hat{u}|)=1.599\times10^{-2}$} & \footnotesize{(h)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.9$, $\max(|\hat{u}|)=9.752\times10^{-3}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig20g-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig20h-eps-converted-to.pdf}} \\ \footnotesize{(i)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.925$, $\max(|\hat{u}|)=1.428\times10^{-2}$} & \footnotesize{(j)} & \footnotesize{\hspace{3mm} $t_\mathrm{P}=1.965$, $\max(|\hat{u}|)=5.821\times10^{-2}$} \\ \makecell{ \\ \vspace{10mm} \rotatebox{90}{\footnotesize{$y$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig20i-eps-converted-to.pdf}} & \makecell{ \\ \vspace{10mm} } & \makecell{\includegraphics[width=0.458\textwidth]{Fig20j-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$x$} & & \hspace{38mm} \footnotesize{$x$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{Nonlinear evolution of $\hat{u}$-velocity perturbation contours at $H=1$, $\Gamma=1.24$, $\Sr=7.2\times10^{-3}$ through two cycles $t_\mathrm{P} \in [0,2]$. The base flow is overlaid (the black dashed line indicates zero base flow velocity). 
Red flooding positive; blue flooding negative.} \label{fig:snaps_nonlinear_u} \end{figure} \subsection{Symmetry breaking}\label{sec:sym_break} \begin{figure} \begin{center} \addtolength{\extrarowheight}{-10pt} \addtolength{\tabcolsep}{-2pt} \begin{tabular}{ llll } \makecell{\footnotesize{(a)} \vspace{26.5mm} \\ \vspace{33mm} \rotatebox{90}{\footnotesize{$\vsk$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig21a-eps-converted-to.pdf}} & \makecell{\footnotesize{(b)} \vspace{26.5mm} \\ \vspace{33mm} \rotatebox{90}{\footnotesize{$\usk$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig21b-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} & & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} \\ \makecell{\footnotesize{(c)} \vspace{26.5mm} \\ \vspace{33mm} \rotatebox{90}{\footnotesize{$\csk$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig21c-eps-converted-to.pdf}} & \makecell{\footnotesize{(d)} \vspace{26.5mm} \\ \vspace{33mm} \rotatebox{90}{\footnotesize{$\meanfoco$, $\Euv$}}} & \makecell{\includegraphics[width=0.458\textwidth]{Fig21d-eps-converted-to.pdf}} \\ & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} & & \hspace{38mm} \footnotesize{$t_\mathrm{P}$} \\ \end{tabular} \addtolength{\tabcolsep}{+2pt} \addtolength{\extrarowheight}{+10pt} \end{center} \caption{A measure of the symmetry in the zeroth through one-hundredth isolated streamwise Fourier modes. (a) Wall-normal velocity perturbation. (b) Streamwise velocity perturbation. (c) In-plane vorticity perturbation. Small values of the symmetry measure indicate the mode is almost symmetric (light blue), while large values indicate the mode is almost antisymmetric (orange/yellow). (d) The $y$-averaged Fourier coefficient for each mode, based on $\hat{f} = \hat{u}^2+\hat{v}^2$, compared to the DNS measure $\Euv = \int \hat{u}^2 + \hat{v}^2 \mathrm{d} \Omega$.
Note that for modes $100 < \kappa \leq 5000$ only every fifth $\kappa$ is plotted.} \label{fig:sym_measure} \end{figure} The symmetry breaking process was further analysed by measuring the degree of symmetry, separately for each mode $j$, via $\hat{f}_{\mathrm{s},j} = (\sum_{m=0}^{m=\Ny}[\hat{f}_j(y_m)-\hat{f}_j(-y_m)]^2)^{1/2}$. This is depicted for $\hat{v}$, $\hat{u}$ and $\hat{\omega}_z$ in \figs\ \ref{fig:sym_measure}(a) through (c), while a measure of the $y$-averaged energy in each mode is provided in \fig\ \ref{fig:sym_measure}(d). The key result is that when the nonlinear DNS had a similar appearance and growth rate to the linear simulation (e.g.~from $t_\mathrm{P} \approx 0.75+q$ to $t_\mathrm{P} \approx 0.95+q$, for $q=0$, $1$, $2$), every resolved $\hat{v}$ mode ($\kappa = 0$ through $100$) is close to purely symmetric, \fig\ \ref{fig:sym_measure}(a). Once symmetry breaking occurs, at $t_\mathrm{P}\approx0.965$, every odd $\hat{v}$ mode (first, third, etc.) becomes antisymmetric. See also the vorticity measure, \fig\ \ref{fig:sym_measure}(c), for the first 50 or 60 modes. Thus, the symmetry breaking does not appear to be connected to any asymmetry introduced by numerical noise in the initial perturbation, as every mode becomes symmetric during the preceding linear phase. The measure of symmetry in $\hat{u}$ is effectively the photo negative of $\hat{v}$ (if $\hat{v}$ is almost symmetric, $\hat{u}$ is almost antisymmetric). The exception is the zeroth mode, which remains symmetric after the first period. The zeroth mode stores a large amount of perturbation energy, \fig\ \ref{fig:sym_measure}(d), and decays very slowly compared to the higher modes. Hence, the DNS measure of the perturbation energy $\Euv$ closely resembles the energy in the zeroth mode.
As a final note, although a large number of modes become appreciably energized, the floor of the energy in the highest modes (after the base flow modulation occurs) is not clearly raised, and no distinct inertial subrange forms (not shown). Hence, as turbulence is not observed, it cannot be what initiates the symmetry breaking. However, exactly how nonlinearity induces the symmetry breaking remains unknown. \section{Conclusions} \label{sec:conc} This work numerically investigates the stability of pulsatile quasi-two-dimensional duct flows, motivated by their relevance to the cooling conduits of magnetic confinement fusion reactors. The linear stability over a large $\Rey$, $H$, $\Sr$, $\Gamma$ parameter space was analysed, both to determine the pulsation optimized for the greatest reduction in $\ReyCrit$, and more generally to understand the role of transient inertial forces in unsteady MHD duct flows. At large amplitude ratios ($\Gamma=100$, near the conditions of a steady base flow) the effect of varying $\Sr$ was clearest. Increasing $\Sr$ led to both more prominent inflection points, acting to reduce $\ReyCrit$, and thinner oscillating boundary layers, acting to increase $\ReyCrit$. Although more prominent inflection points generated additional growth during the deceleration of the base flow, the effective length of the deceleration phase increases with decreasing $\Sr$. Thus, by tuning $\Sr$ (for a given $H$, $\Gamma$) the minimum $\ReyCrit$ is reached as the perturbation and base flow energy variations fall in phase, so long as inflection points remain prominent. Furthermore, the percentage reduction in $\ReyCrit$ always improved with increasing $H$, when free to adjust $\Sr$. This observation, that pulsatility was still effective at destabilizing the flow in (or toward) fusion-relevant regimes, answers the first question the paper put forward.
At intermediate amplitude ratios ($\Gamma=10$), the addition of the oscillating flow component led to large changes in $\ReyCrit$ compared to the steady base flow. At these amplitude ratios the effect of $\Rey$ on the base flow becomes important. Increasing $\Rey$ reduces the oscillating boundary layer thickness and restabilizes the flow for a small range of frequencies. Although the base flow became more stable with increasing $\Rey$, a large enough $\Rey$ was eventually reached to destabilize other instability modes (different to the \TSL\ mode). At smaller, near-unity amplitude ratios (equal steady and oscillating base flow maxima) the largest improvements in $\ReyCrit$ over the steady value were observed. At $H=10^{-7}$, an almost $70$\% reduction in $\ReyCrit$ was attained, while by $H=10$, there was over an order of magnitude reduction ($90.3$\%). These improvements were attained at $\Sr$ of order $10^{-3}$, a region of the parameter space more than amenable to both SM82 modelling and fusion-relevant applications. Particularly in the latter case, a low-frequency driving force would be far simpler to engineer than a high-frequency oscillation. These results answer the second and third questions put forth in the paper. At these conditions, the onset of turbulence was not observed in nonlinear DNS. Within the first oscillation period the intracyclic growth was able to propel an initial perturbation of numerical noise to nonlinear amplitudes. This modulated the base flow, by generating a sheet of negative velocity along the duct centreline. Although this modulated base flow had no effect on the growth of the wall-normal velocity perturbation, it was able to saturate the exponential growth at supercritical Reynolds numbers. Although turbulence was not triggered, the nonlinear growth was still a promising result.
However, without a wider nonlinear investigation of the parameter space, the capability for $\ReyCrit$ reductions to translate to reductions in the $\Rey$ at which turbulence is observed (the fourth question put forward) remains partially unresolved. At nonlinear amplitudes, a symmetry breaking process was observed within each cycle. The ensuing chaotic flow may naturally improve mixing, improving cooling conduit performance, without the severe increases in frictional losses accompanying a turbulent flow \cite{Kuhnen2018destabilizing}. This is an avenue for future work. Finally, the capability for the optimized pulsations to nonlinearly modulate the base flow within one cycle favors linear transient growth as a strong contender for enabling bypass transitions to turbulence. This is a key area of future research, since, if the flow is transiently driven over a partial oscillation cycle (and steadily driven thereafter), turbulence may be rapidly triggered. A caveat to such a method is that it is the continually driven time-periodic base flow which yields eigenvalues with positive growth rates at greatly reduced Reynolds numbers. Without such an underlying base flow, the leading eigenvalues may be strongly negative, and severely limit any transient growth, as for cylinder wake flows \cite{Abdessemed2009transient}. This may be particularly problematic if large amounts of regenerative transient growth are the key to sustaining turbulent states \cite{Budanur2020upper,Lozano2020cause}, a point that also requires further investigation. Overall, the large reductions in $\ReyCrit$, occurring in a viable region of the parameter space, form too promising a direction to cease investigating. The first steps are to assess the heat transfer characteristics of the pulsatile base flow, which may naturally be more efficient than the steady equivalent, and to investigate linear transient optimals.
Other than linear transient growth, the use of pulsatility in concert with one of the various Q2D vortex promoters \citep{Cassels2016heat, Hussam2011dynamics, Hussam2012enhancing, Buhler1996instabilities, Hamid2016combining, Hamid2016heat, Cuevas2006flow} could aid in sustaining turbulence. Beyond the Q2D setup, the full 3D duct flow could be tackled. In particular, the interaction between the Stokes and Hartmann layers could result in new avenues to reach turbulence. The reduced constriction of the full 3D domain may also help sustain turbulence. Note that for fusion applications, oscillatory wall motion is not viable. Therefore, in the context of a 3D domain, oscillatory pressure gradients are more relevant (note that the fully nonlinear wall- and pressure-driven flows are only equivalent in the 2D averaged equations). Lastly, with a broader scope, even electrically conducting walls could be investigated. Although less prevalent in self-cooled designs \cite{Smolentsev2008characterization}, conducting walls produce larger shear in the boundary layers forming on them, providing conditions more susceptible to transitions to turbulence, and larger turbulent fluctuations \cite{Burr2000turbulent}. The interactions between flow pulsatility and electrically conducting walls could yield many new insights. \begin{acknowledgments} C.J.C.\ receives support from the Australian Government Research Training Program (RTP). This research was supported by the Australian Government via the Australian Research Council (Discovery Grant DP180102647), the National Computational Infrastructure (NCI) and Pawsey Supercomputing Centre (PSC), by Monash University via the MonARCH HPC cluster and by the Royal Society under the International Exchange Scheme between the UK and Australia (Grant E170034). \end{acknowledgments}
\section{Introduction} Our graph-theoretic notation is standard (e.g., see \cite{Bol98}); in particular, we write $G\left( n\right) $ for a graph of order $n$. Given a graph $G,$ a $k$\emph{-walk} is a sequence of vertices $v_{1},\ldots,v_{k}$ of $G$ such that $v_{i-1}$ is adjacent to $v_{i}$ for all $i=2,\ldots,k.$ We write $w_{k}\left( G\right) $ for the number of $k$-walks in $G$ and $k_{r}\left( G\right) $ for the number of its $r$-cliques. We order the eigenvalues of the adjacency matrix of a graph $G=G\left( n\right) $ as $\mu\left( G\right) =\mu_{1}\left( G\right) \geq\ldots\geq\mu_{n}\left( G\right) $. Let $\omega=\omega\left( G\right) $ be the clique number of $G$. Wilf \cite{Wil86} proved that \[ \mu\left( G\right) \leq\frac{\omega-1}{\omega}v\left( G\right) =\frac{\omega-1}{\omega}w_{1}\left( G\right) , \] and Nikiforov~\cite{Nik06} extended this, showing that the inequality% \begin{equation} \mu^{s}\left( G\right) \leq\frac{\omega-1}{\omega}w_{s}\left( G\right) \label{maxmu}% \end{equation} holds for every $s\geq1.$ Note that for $s=2$ inequality \eqref{maxmu} implies a concise form of Tur\'{a}n's theorem. Indeed, if $G$ has $n$ vertices and $m$ edges, then $\mu(G)\geq2m/n,$ and so, \[ \left( \frac{2m}{n}\right) ^{2}\leq\mu^{2}(G)\leq\frac{\omega-1}{\omega }w_{2}(G)=\frac{\omega-1}{\omega}2m. \] This shows that \begin{equation} m\leq\frac{\omega-1}{2\omega}n^{2}, \label{maxmu1}% \end{equation} which is best possible whenever $\omega$ divides $n.$ If we combine \eqref{maxmu} with other lower bounds on $\mu(G)$, e.g., with \[ \mu^{2}(G)\geq\frac{1}{n}\sum_{u\in V(G)}d^{2}\left( u\right) , \] we obtain generalizations of (\ref{maxmu1}). Moreover, inequality \eqref{maxmu} follows from a result of Motzkin and Straus \cite{MoSt65}, which in turn follows from (\ref{maxmu1}) (see \cite{Nik06a}).
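Inequality \eqref{maxmu} is easy to spot-check numerically (an illustrative check, not part of any proof): for the $5$-cycle $C_5$ we have $\omega=2$ and $\mu=2$, and the number of $s$-walks is $w_{s}(G)=\mathbf{1}^{T}A^{s-1}\mathbf{1}$ for the adjacency matrix $A$.

```python
import numpy as np

# Numerical spot-check of mu^s(G) <= ((omega-1)/omega) w_s(G) on the
# 5-cycle C5, where omega = 2 and mu = 2; w_s(G) = 1^T A^{s-1} 1.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0  # adjacency matrix of C5

mu = np.linalg.eigvalsh(A)[-1]                   # spectral radius, equals 2
omega = 2                                        # C5 is triangle-free
ones = np.ones(n)
for s in range(1, 9):
    w_s = ones @ np.linalg.matrix_power(A, s - 1) @ ones
    assert mu ** s <= (omega - 1) / omega * w_s + 1e-8
```

For $C_5$ every vertex has degree $2$, so $w_{s}=5\cdot2^{s-1}$ and the bound holds with room to spare at every $s$.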
The implications \[ \eqref{maxmu}\Longrightarrow\left( \ref{maxmu1}\right) \Longrightarrow \text{MS}\Longrightarrow\eqref{maxmu} \] justify regarding inequality \eqref{maxmu} as a spectral form of Tur\'{a}n's theorem, well suited for nontrivial generalizations. For example, the following conjecture seems to be quite subtle. \begin{conjecture} Let $G$ be a $K_{r+1}$-free graph with $m$ edges. Then% \[ \mu_{1}^{2}\left( G\right) +\mu_{2}^{2}\left( G\right) \leq\frac{r-1}% {r}\text{ }2m. \] \end{conjecture} If true, this conjecture is best possible whenever $r$ divides $n$. Indeed, for $r|n$, $n=qr$, the Tur\'{a}n graph $T_{r}(n)$ (i.e., the complete $r$-partite graph $K_{r}(q)$ with $q$ vertices in each class) has $r(r-1)q^{2}/2$ edges, and there are three eigenvalues: $(r-1)q$, with multiplicity $1$, $-q$, with multiplicity $r-1$, and $0$, with multiplicity $r(q-1)$, so that $\mu_{1}(G)=(r-1)q$ and $\mu_{2}(G)=0$.\bigskip The aim of this note is to prove further relations between $\mu\left( G\right) $ and the number of cliques in $G$. In \cite{Nik02} it is proved that \begin{equation} \mu^{\omega}\left( G\right) \leq\sum_{s=2}^{\omega}\left( s-1\right) k_{s}\left( G\right) \mu^{\omega-s}\left( G\right) \label{polyn}% \end{equation} with equality holding if and only if $G$ is a complete $\omega$-partite graph with possibly some isolated vertices. It turns out that this inequality is one of a whole sequence of similar inequalities. \begin{theorem} \label{le3mu}For every graph $G$ and $r\geq2,$% \[ \mu^{r+1}\left( G\right) \leq\left( r+1\right) k_{r+1}\left( G\right) +\sum_{s=2}^{r}\left( s-1\right) k_{s}\left( G\right) \mu^{r+1-s}\left( G\right) . \] \end{theorem} Observe that, with $r=\omega+1$, Theorem \ref{le3mu} implies (\ref{polyn}). Theorem \ref{le3mu} also implies a lower bound on the number of cliques of any given order, as stated below. 
\begin{theorem} \label{tmomo}For every graph $G=G\left( n\right) $ and $r\geq2,$ \[ k_{r+1}\left( G\right) \geq\left( \frac{\mu\left( G\right) }{n}% -1+\frac{1}{r}\right) \frac{r\left( r-1\right) }{r+1}\left( \frac{n}% {r}\right) ^{r+1}. \] \end{theorem} We also prove the following extension of an earlier result of ours~\cite{BoNi04}. \begin{theorem} \label{leNSMM}Let $1\leq s\leq r<\omega\left( G\right) $ and $\alpha\geq0$. If $G=G\left( n\right) $ and% \begin{equation} \left( s+1\right) k_{s+1}\left( G\right) \geq n^{s+1}\prod_{t=1}% ^{s}\left( \frac{r-t}{rt}+\alpha\right) , \label{cond1}% \end{equation} then% \begin{equation} k_{r+1}\left( G\right) \geq\alpha\frac{r^{2}}{r+1}\left( \frac{n}% {r}\right) ^{r+1}. \label{lok}% \end{equation} \end{theorem} Note that Theorems \ref{leNSMM} and \ref{tmomo} hold for all values of the parameters satisfying the conditions there; in particular, $\alpha$ may depend on $n$. Our final theorem is the following stability result. \begin{theorem} \label{tstab}For all $r\geq2$ and $0\leq\alpha\leq2^{-10}r^{-6},$ if $G=G\left( n\right) $ is a $K_{r+1}$-free graph with% \begin{equation} \mu\left( G\right) \geq\left( 1-\frac{1}{r}-\alpha\right) n, \label{reqmu}% \end{equation} then $G$ contains an induced $r$-partite graph $G_{0}$ of order $v\left( G_{0}\right) >\left( 1-3\alpha^{1/3}\right) n$ and minimum degree% \[ \delta\left( G_{0}\right) >\left( 1-\frac{1}{r}-6\alpha^{1/3}\right) n. \] \end{theorem} \section{Proofs} \subsection{Proof of Theorem \ref{le3mu}} For a vertex $u\in V\left( G\right) $, write $w_{l}\left( u\right) $ for the number of $l$-walks starting with $u$ and $k_{r}\left( u\right) $ for the number of $r$-cliques containing $u.$ Clearly, it is enough to prove the assertion for $2\leq r<\omega\left( G\right) $, since the case $r\geq \omega\left( G\right) $ follows easily from (\ref{polyn}). 
It is shown in \cite{Nik02} that for all $2\leq s\leq\omega\left( G\right) $ and $l\geq2,$% \begin{equation} \sum_{u\in V\left( G\right) }\big(k_{s}\left( u\right) w_{l+1}\left( u\right) -k_{s+1}\left( u\right) w_{l}\left( u\right) \big)\leq\left( s-1\right) k_{s}\left( G\right) w_{l}\left( G\right) . \label{oldin}% \end{equation} Summing these inequalities for $s=2,\ldots,r,$ we obtain% \[ \sum_{u\in V\left( G\right) }\big(k_{2}\left( u\right) w_{l+r-1}\left( u\right) -k_{r+1}\left( u\right) w_{l}\left( u\right) \big)\leq\sum _{s=2}^{r}\left( s-1\right) k_{s}\left( G\right) w_{l+r-s}\left( G\right) , \] and so, after rearranging,% \[ w_{l+r}\left( G\right) -\sum_{s=2}^{r}\left( s-1\right) k_{s}\left( G\right) w_{l+r-s}\left( G\right) \leq\sum_{u\in V\left( G\right) }k_{r+1}\left( u\right) w_{l}\left( u\right) . \] Noting that $w_{l}\left( u\right) \leq w_{l-1}\left( G\right) ,$ this implies that \[ \sum_{u\in V\left( G\right) }k_{r+1}\left( u\right) w_{l}\left( u\right) \leq w_{l-1}\left( G\right) \sum_{u\in V\left( G\right) }k_{r+1}\left( u\right) =\left( r+1\right) k_{r+1}\left( G\right) w_{l-1}\left( G\right) , \] and so, \[ \frac{w_{l+r}\left( G\right) }{w_{l-1}\left( G\right) }-\sum_{s=2}% ^{r}\left( s-1\right) k_{s}\left( G\right) \frac{w_{l+r-s}\left( G\right) }{w_{l-1}\left( G\right) }\leq\left( r+1\right) k_{r+1}\left( G\right) . \] For every graph $G=G\left( n\right) $ there are non-negative constants $c_{1},\dots,c_{n}$ such that \[ w_{l}\left( G\right) =c_{1}\mu_{1}^{l-1}\left( G\right) +\cdots+c_{n}% \mu_{n}^{l-1}\left( G\right) \] (see, e.g., \cite{CDS80}, p.~44). Since $\omega>2$, our graph $G$ is not bipartite and so $\left\vert \mu_{n}(G)\right\vert <\mu_{1}(G)$. Therefore, for every fixed $q$, we have \[ \lim_{l\rightarrow\infty}\frac{w_{l+q}\left( G\right) }{w_{l-1}\left( G\right) }=\mu^{q+1}\left( G\right) , \] and the assertion follows.
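The limit used in the final step can be observed numerically. The sketch below is an illustration only: it compares $w_{l+q}\left( G\right) /w_{l-1}\left( G\right) $ with $\mu^{q+1}\left( G\right) $ on a small connected graph that is non-regular and non-bipartite (a triangle with a pendant vertex, an illustrative choice).

```python
# Numerical illustration of w_{l+q}(G)/w_{l-1}(G) -> mu^{q+1}(G) as l -> infinity,
# on a triangle with a pendant vertex (non-bipartite, non-regular; illustrative).

adj = [[1, 2, 3], [0, 2], [0, 1], [0]]

def walk_vector(adj, steps):
    # apply the adjacency matrix `steps` times to the all-ones vector
    vec = [1.0] * len(adj)
    for _ in range(steps):
        vec = [sum(vec[u] for u in adj[v]) for v in range(len(adj))]
    return vec

def w(adj, l):
    # number of l-walks = sum of A^{l-1} * ones
    return sum(walk_vector(adj, l - 1))

def mu(adj, iters=2000):
    # power iteration for the spectral radius
    vec = [1.0] * len(adj)
    lam = 1.0
    for _ in range(iters):
        vec = [sum(vec[u] for u in adj[v]) for v in range(len(adj))]
        lam = max(vec)
        vec = [x / lam for x in vec]
    return lam

q, l = 3, 40
ratio = w(adj, l + q) / w(adj, l - 1)
# the ratio is already very close to mu^{q+1} at this l
assert abs(ratio / mu(adj) ** (q + 1) - 1) < 1e-3
```

The convergence rate is governed by $\left\vert \mu_{n}/\mu_{1}\right\vert ^{l}$, so the agreement improves geometrically in $l$.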
\hfill{$\Box$} \subsection{Proof of Theorem \ref{leNSMM}} Moon and Moser \cite{MoMo62} stated the following result (for a proof see \cite{KhNi78} or \cite{Lov79}, Problem 11.8): if $G=G\left( n\right) $ and $k_{s}\left( G\right) >0$, then \[ \frac{\left( s+1\right) k_{s+1}\left( G\right) }{sk_{s}\left( G\right) }-\frac{n}{s}\geq\frac{sk_{s}\left( G\right) }{\left( s-1\right) k_{s-1}\left( G\right) }-\frac{n}{s-1}. \] Equivalently, for $1\leq s<t<\omega\left( G\right) $, we have% \begin{equation} \frac{\left( t+1\right) k_{t+1}\left( G\right) }{tk_{t}\left( G\right) }-\frac{n}{t}\geq\frac{\left( s+1\right) k_{s+1}\left( G\right) }% {sk_{s}\left( G\right) }-\frac{n}{s}. \label{MoMo}% \end{equation} Let $s\in\left[ 1,r\right] $ be the smallest integer for which (\ref{cond1}) holds. This implies either $s=1$ or% \begin{equation} sk_{s}\left( G\right) <n^{s}\prod_{t=1}^{s-1}\left( \frac{r-t}{rt}% +\alpha\right) \label{secin}% \end{equation} for some $s\in\left[ 2,r\right] $. Suppose first that $s=1$. (This case is considered in \cite{BoNi04}, but for the sake of completeness we present it here.) We have% \[ \frac{2k_{2}\left( G\right) }{k_{1}\left( G\right) }-n\geq\left( \frac{r-1}{r}+\alpha\right) n-n=\alpha n-\frac{n}{r}, \] and so, for all $t=1,\ldots,r$, inequality (\ref{MoMo}) implies that \[ \frac{\left( t+1\right) k_{t+1}\left( G\right) }{tk_{t}\left( G\right) }\geq\alpha n+\frac{n}{t}-\frac{n}{r}. \] Multiplying these inequalities for $t=1,\ldots,r$, we obtain that% \[ \left( r+1\right) k_{r+1}\left( G\right) \geq n^{r+1}\prod_{t=1}% ^{r}\left( \frac{r-t}{rt}+\alpha\right) \geq\alpha r^{2}\left( \frac{n}% {r}\right) ^{r+1}\prod_{t=1}^{r-1}\frac{r-t}{t}=\alpha r^{2}\left( \frac {n}{r}\right) ^{r+1}, \] proving the result in this case. Assume now that (\ref{secin}) holds for some $s\in\left[ 2,r\right] $. Then we have% \[ \frac{\left( s+1\right) k_{s+1}\left( G\right) }{sk_{s}\left( G\right) }>\left( \frac{r-s}{rs}+\alpha\right) n. 
\] and so, for every $t=s,\ldots,r,$% \[ \frac{\left( t+1\right) k_{t+1}\left( G\right) }{tk_{t}\left( G\right) }>\frac{n}{t}-\frac{n}{s}+\frac{r-s}{rs}n+\alpha n=\left( \frac{r-t}% {rt}+\alpha\right) n. \] Multiplying these inequalities for $t=s+1,\ldots,r,$ we obtain% \[ \frac{\left( r+1\right) k_{r+1}\left( G\right) }{\left( s+1\right) k_{s+1}\left( G\right) }>n^{r-s}\prod_{t=s+1}^{r}\left( \frac{r-t}% {rt}+\alpha\right) . \] Appealing to (\ref{cond1}), this implies that \[ \left( r+1\right) k_{r+1}\left( G\right) >n^{r+1}\prod_{t=1}^{r}\left( \frac{r-t}{rt}+\alpha\right) =\alpha n^{r+1}\prod_{t=1}^{r-1}\left( \frac{r-t}{rt}+\alpha\right) \geq\alpha r^{2}\left( \frac{n}{r}\right) ^{r+1}, \] as required. \hfill{$\Box$} \subsection{Proof of Theorem \ref{tmomo}} Set \[ \alpha=\frac{\mu\left( G\right) }{n}-1+\frac{1}{r}. \] Clearly we may assume that $\alpha>0$, since otherwise the assertion is trivial. Suppose that \begin{equation} sk_{s}\left( G\right) >n^{s}\prod_{t=1}^{s-1}\left( \frac{r-t}{rt}% +\alpha\right) \label{in2}% \end{equation} for some $s\in\left[ 2,r\right] $. Then, by Theorem \ref{leNSMM},% \[ k_{r+1}\left( G\right) \geq\alpha\frac{r^{2}}{r+1}\left( \frac{n}% {r}\right) ^{r+1}\geq\alpha\frac{r\left( r-1\right) }{r+1}\left( \frac{n}{r}\right) ^{r+1}, \] completing the proof. Thus we may and shall assume that (\ref{in2}) fails for every $s\in\left[ 2,r\right] $. From Theorem \ref{le3mu} we have \begin{equation} \left( r+1\right) k_{r+1}\left( G\right) \geq\mu^{r+1}\left( G\right) -\sum_{s=2}^{r}\left( s-1\right) k_{s}\left( G\right) \mu^{r+1-s}\left( G\right) .
\label{minK}% \end{equation} Substituting the bounds on $k_{s}\left( G\right) $ into (\ref{minK}), and setting $\mu=\mu\left( G\right) /n$, we obtain \begin{align*} \frac{\left( r+1\right) k_{r+1}\left( G\right) }{n^{r+1}} & \geq \mu^{r+1}-\sum_{s=2}^{r}\mu^{r+1-s}\frac{s-1}{s}\prod_{t=1}^{s-1}\left( \frac{r-t}{rt}+\alpha\right) \\ & =\mu^{r+1}-\mu^{r+1-2}\frac{1}{2}\left( \frac{r-1}{r}+\alpha\right) -\sum_{s=3}^{r}\frac{s-1}{s}\mu^{r+1-s}\prod_{t=1}^{s-1}\left( \frac{r-t}% {rt}+\alpha\right) \\ & =\mu^{r+1-2}\left( \mu^{2}-\frac{1}{2}\left( \frac{r-1}{r}% +\alpha\right) \right) -\sum_{s=3}^{r}\frac{s-1}{s}\mu^{r+1-s}\prod _{t=1}^{s-1}\left( \frac{r-t}{rt}+\alpha\right) \\ & \geq\mu^{r+1-2}\left( \frac{r-1}{r}+\alpha\right) \left( \frac{r-2}% {2r}+\alpha\right) -\sum_{s=3}^{r}\frac{s-1}{s}\mu^{r+1-s}\prod_{t=1}% ^{s-1}\left( \frac{r-t}{rt}+\alpha\right) . \end{align*} By induction on $k$ we prove that, for all $k=2,\ldots,r,$% \[ \frac{\left( r+1\right) k_{r+1}\left( G\right) }{n^{r+1}}\geq\mu ^{r+1-k}\prod_{t=1}^{k}\left( \frac{r-t}{rt}+\alpha\right) -\sum_{s=k+1}% ^{r}\frac{s-1}{s}\mu^{r+1-s}\prod_{t=1}^{s-1}\left( \frac{r-t}{rt}% +\alpha\right) \] and hence,% \[ \frac{\left( r+1\right) k_{r+1}\left( G\right) }{n^{r+1}}\geq\mu \prod_{t=1}^{r}\left( \frac{r-t}{rt}+\alpha\right) \geq\alpha\frac{r-1}% {r}\prod_{t=1}^{r-1}\frac{r-t}{rt}=\alpha\frac{r-1}{r^{r}}. \] It follows that% \[ k_{r+1}\left( G\right) \geq\alpha\frac{r\left( r-1\right) }{r+1}\left( \frac{n}{r}\right) ^{r+1}, \] as required. \hfill{$\Box$} \subsection{Proof of Theorem \ref{tstab}} Inequality (\ref{maxmu}) for $s=2$ together with (\ref{reqmu}) implies that% \[ 2\frac{r-1}{r}e\left( G\right) \geq\mu^{2}\left( G\right) \geq\left( \frac{r-1}{r}-\alpha\right) ^{2}n^{2}\geq\left( \left( \frac{r-1}{r}\right) ^{2}-2\alpha\frac{r-1}{r}\right) n^{2}, \] and so,% \[ e\left( G\right) \geq\left( \frac{r-1}{2r}-2\alpha\right) n^{2}.
\] To complete our proof, let us recall the following stability theorem proved by Nikiforov and Rousseau in \cite{NiRo05}. Let $r\geq2$ and $0<\beta\leq 2^{-9}r^{-6}$, and let $G=G(n)$ be a $K_{r+1}$-free graph satisfying \[ e\left( G\right) \geq\left( \frac{r-1}{2r}-\beta\right) n^{2}. \] Then $G$ contains an induced $r$-partite graph $G_{0}$ of order $v\left( G_{0}\right) >\left( 1-2\beta^{1/3}\right) n$ and with minimum degree \[ \delta\left( G_{0}\right) \geq\left( 1-\frac{1}{r}-4\beta^{1/3}\right) n. \] Setting $\beta=2\alpha$, in view of $2\cdot2^{1/3}<3$ and $4\cdot2^{1/3}<6,$ the required inequalities follow. \hfill{$\Box$} \textbf{Acknowledgement}. Part of this research was completed while the authors were enjoying the hospitality of the Institute for Mathematical Sciences, National University of Singapore in 2006.
\section{Introduction} The r-process is a nucleosynthesis process that produces elements heavier than iron (\cite{bfh}). These elements occupy nearly half of the heavy nuclear species, and show typical abundance peaks around nuclear masses A=80, 130 and 195, whose neutron numbers are slightly smaller than the magic numbers N=50, 82 and 126, respectively. This fact suggests that the r-process elements have a completely different origin from the s-process elements, whose abundance peaks are located just at the neutron magic numbers. The r-process elements are presumed to be produced in an explosive environment with a short time scale and high entropy, where an intense flux of free neutrons is absorbed successively by seed elements, forming a nuclear reaction flow through extremely unstable nuclei on the neutron-rich side. Recent progress in the nuclear physics of unstable nuclei has made it possible to simulate r-process nucleosynthesis by using the accumulated knowledge of nuclear masses and the beta half-lives of several critical radioactive elements. The study of r-process elements also bears on the cosmic age problem, namely that the age of the Universe inferred from cosmological parameters and the age of the oldest globular clusters conflict with each other. A typical r-process element, thorium, has recently been detected in very metal-deficient stars, providing an independent method to estimate the age of the Milky Way Galaxy (\cite{sn}). Since thorium has a half-life of 14 Gyr, its observed abundance relative to the other stable elements can be used as a chronometer dating the age of the Galaxy. The origin of the r-process elements is thus important and even critical in cosmology and in the astronomy of Galactic chemical evolution, as well as in the nuclear physics of unstable nuclei. Unfortunately, however, the astrophysical site of r-process nucleosynthesis is poorly known, although several candidate sites have been proposed and are being investigated theoretically.
The neutrino-driven wind, which is our object of study in this article, is thought to be one of the most promising candidate sites for the r-process nucleosynthesis. It is generally believed that a neutron star is formed as the remnant of the gravitational core collapse of a Type II, Ib or Ic supernova. The hot neutron star just born releases most of its energy as neutrinos during the Kelvin-Helmholtz cooling phase, and these neutrinos drive a matter outflow from the surface. This outflow is called the neutrino-driven wind. Many theoretical studies of the neutrino-driven wind followed the successful detection of energetic neutrinos from SN1987A, which raised the possibility of finding the r-process nucleosynthesis in this wind. Although there are several numerical simulations of the neutrino-driven wind, the results are very different from one another, depending on the models and methods adopted in the literature (\cite{wil,jan1,jan2}). A benchmark numerical simulation by Wilson and his collaborators (\cite{wil}) can successfully explain the solar system r-process abundances, but the others (\cite{jan1,jan2}) cannot reproduce their result. Qian and Woosley (1996) tried to resolve this discrepancy using approximate methods to solve the spherically symmetric, steady-state flow in the Newtonian framework. They could not find a suitable condition for the r-process nucleosynthesis, and they suggested from a post-Newtonian calculation that general relativistic effects may improve the thermodynamic conditions for the r-process nucleosynthesis. Cardall and Fuller (1997) adopted similar approximate methods in a general relativistic framework and obtained a short dynamic time scale of the expansion and a large entropy, in reasonable agreement with the result in the post-Newtonian approximation adopted by Qian and Woosley (1996). They did not remark quantitatively, however, on which specific general relativistic effect among the several is responsible for this change.
Since the wind blows near the surface of the neutron star, it is necessary to study the expansion dynamics of the neutrino-driven wind in general relativity. The first purpose of this paper is to clarify quantitatively the effects of general relativity by adopting a fully general relativistic framework. Although we assume only spherical, steady-state flow of the neutrino-driven wind, we do not adopt the approximate methods used in several previous studies. We try to extract properties of the wind that are as general as possible, in a manner independent of supernova models, so that they can be compared with the expansion of different objects, such as the accretion disk of a binary neutron star merger (\cite{nsm}) or a sub-critical small-mass neutron star (\cite{sumi2}), which is induced by an intense neutrino burst. The second purpose is to look for conditions suitable for the r-process. Several key quantities determine whether the solar system r-process abundances can be explained: the mass outflow rate, $\dot{M}$, the dynamic time scale of the expansion, $\tau_{\rm dyn}$, the entropy, $S$, and the electron fraction, $Y_{\rm e}$. The third purpose of this paper is to clarify how these thermodynamic and hydrodynamic quantities affect the r-process nucleosynthesis by carrying out the nucleosynthesis calculation numerically. In the next section we explain our theoretical model of the neutrino-driven wind. We introduce the basic equations describing the dynamics of the wind in the Schwarzschild geometry. Boundary conditions and adopted parameters for solving these equations are presented in this section. Numerical results are shown in section 3, where the effects of general relativity are studied in detail. We also investigate the dependence of key physical quantities such as $\tau_{\rm dyn}$ and $S$ on the neutron star mass, radius, and neutrino luminosity in order to identify the conditions of the neutrino-driven wind which are suitable for the r-process nucleosynthesis.
Applying the results obtained in section 3, we carry out the nucleosynthesis calculation in section 4. The purpose of this section is to confirm quantitatively that the r-process elements are produced successfully in a wind having a very short dynamic time scale with relatively low entropy. We finally summarize the results of this paper and present further discussions and an outlook in section 5. \section{Models of neutrino-driven winds} \subsection{Basic equations} A Type II or Ib supernova explosion is a complex hydrodynamic process which requires careful theoretical study of the convection associated with shock propagation. The time of our interest, however, is the later phase after the core bounce, at which the shock has already propagated out to a radius of about 10,000 km and a continuous mass outflow is established from the surface of the neutron star. A recent three-dimensional numerical simulation (Hillebrandt 1998) has indicated that the convection near the shock front does not grow as deep as in two-dimensional numerical simulations, and that the hydrodynamic conditions behind the shock are more likely similar to those obtained in one-dimensional numerical simulations. Since Wilson's numerical simulation of SN1987A in Woosley et al.~(1994) has shown that the neutrino-driven wind is adequately described by a steady-state flow, we here adopt a spherically symmetric and steady-state wind, following the previous studies (\cite{dun,qw,cf}). According to this simulation, the neutrino luminosity $L_{\nu}$ decreases slowly from about $10^{52}$ ergs/s to below $10^{51}$ ergs/s during $\sim 10$ s of the Kelvin-Helmholtz cooling phase of the neutron star. The properties of the protoneutron star, {\it i.e.} the mass $M$ and radius $R$, also evolve slowly. We therefore take the quantities $L_{\nu}$, $M$, and $R$ as input parameters in order to describe the more rapid evolution of the neutrino-driven wind.
The basic equations describing the spherically symmetric and steady-state winds in the Schwarzschild geometry are given by (\cite{st}) \begin{equation} \dot{M} = 4\pi r^2\rho_{b}u, \label{eqn:gk1} \end{equation} \begin{equation} u\frac{du}{dr}= -\frac{1}{\rho_{tot}+P}\frac{dP}{dr} \left(1+u^2-\frac{2M}{r}\right)-\frac{M}{r^2}, \label{eqn:gk2} \end{equation} \begin{equation} \dot{q}= u\left( \frac{d\varepsilon}{dr}-\frac{P}{\rho_{b}^{2}} \frac{d\rho_{b}}{dr}\right), \label{eqn:gk3} \end{equation} where $\dot{M}$ is the mass outflow rate, $r$ is the distance from the center of the neutron star, $\rho_{b}$ is the baryon mass density, $u$ is the radial component of the four velocity, $\rho_{tot}=\rho_{b}+\rho_{b}\varepsilon$ is the total energy density, $\varepsilon$ is the specific internal energy, $P$ is the pressure, $M$ is the mass of the neutron star, and $\dot{q}$ is the net heating rate due to neutrino interactions with matter. We use natural units in which the Planck constant $\hbar$, the speed of light $c$, the Boltzmann constant $k$, and the gravitational constant $G$ are set to unity. Since the neutrino-driven wind blows from the surface of the hot protoneutron star at a high temperature $T \sim 5$ MeV, and the physics of the wind is mostly determined at $T \gtrsim 0.5$ MeV (\cite{qw}), the equation of state is approximately written as \begin{eqnarray} P&=&{11\pi^2\over 180}T^4+{\rho_b\over m_N}T, \label{eqn:eos1} \\ \varepsilon&=&{11\pi^2\over 60}{T^4\over \rho_b}+ {3\over 2}{T\over m_N}, \label{eqn:eos2} \end{eqnarray} where $T$ is the temperature of the system, and $m_N$ is the nucleon rest mass. We have assumed that the material in the wind consists of photons, relativistic electrons and positrons, and non-relativistic free nucleons. The heating rate $\dot{q}$ in Eq.~(\ref{eqn:gk3}), arising from the interactions between neutrinos and material, holds the key to understanding the dynamics of the neutrino-driven wind.
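To get a feel for the equation of state (\ref{eqn:eos1}), the sketch below evaluates the radiation (photons plus electron-positron pairs) and nucleon-gas contributions to the pressure in cgs-friendly units. The conversion constants and the sample values of $T$ and $\rho_b$ are assumptions inserted for illustration, since the text works in natural units; the point is that near $T\sim1$ MeV the two terms become comparable around $\rho_b\sim10^8$ g cm$^{-3}$.

```python
import math

# Illustration of Eq. (eos1): radiation vs nucleon-gas pressure contributions.
# Units: T in MeV, rho_b in g/cm^3, pressures in MeV/cm^3.  The constants below
# are standard values, assumed here since the text sets hbar = c = k = 1.
HBARC3 = (197.327e-13) ** 3   # (hbar*c)^3 in MeV^3 cm^3
N_A = 6.022e23                # Avogadro's number (free nucleons, A ~ 1)

def p_radiation(T):
    # (11 pi^2 / 180) T^4, converted from natural units to MeV/cm^3
    return (11 * math.pi ** 2 / 180) * T ** 4 / HBARC3

def p_gas(T, rho_b):
    # (rho_b / m_N) T = n T, with nucleon number density n = rho_b * N_A per cm^3
    return rho_b * N_A * T

T = 1.0  # MeV
assert p_radiation(T) > p_gas(T, 1.0e7)  # radiation-dominated at low density
assert p_gas(T, 1.0e9) > p_radiation(T)  # gas-dominated at high density
```

This is why the pressure scale height used later in the paper is evaluated in the radiation dominated domain once the wind density has dropped well below the inner boundary value.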
Following Bethe (1993) and Qian and Woosley (1996), we take account of the following five neutrino processes: neutrino and antineutrino absorption by free nucleons, neutrino and antineutrino scattering by electrons and positrons, and neutrino-antineutrino annihilation into electron-positron pairs as the heating processes; and electron and positron capture by free nucleons, and electron-positron annihilation into neutrino-antineutrino pairs as the cooling processes. We assume that neutrinos are emitted isotropically from the surface of the neutron star at the radius $R$, which proves to be a good approximation in recent numerical studies of neutrino transfer (\cite{yamada}). In this paper, therefore, we make the assumption that the neutrinosphere radius is equal to the protoneutron star radius, $R_{\nu} = R$. Since the neutrino trajectory is bent in the Schwarzschild geometry, the material in the wind sees neutrinos within the solid angle subtended by the neutrinosphere, which is greater than the solid angle in the Newtonian geometry at the same coordinate radius. This bending of the neutrino trajectory increases the heating rate compared to the Newtonian case. We also have to take account of the redshift effect on the neutrino energy, which tends to decrease the heating rate.
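These two competing corrections can be made concrete with the bending factor $g_1(r)$ and the redshift factor $\Phi(r)$ defined in the equations that follow. The sketch below evaluates both for a sample configuration; the values $M=1.4M_{\odot}$, $R=10$ km, $r=20$ km and the geometric-unit mass conversion are assumptions for illustration.

```python
import math

# Bending factor g1(r) and redshift factor Phi(r) in the Schwarzschild geometry.
# Lengths in km; the neutron star mass is converted to geometric units via
# G M_sun / c^2 ~ 1.4766 km (standard value, assumed here).
GEO_KM_PER_MSUN = 1.4766

def g1(r, R_nu, M_msun, relativistic=True):
    M = GEO_KM_PER_MSUN * M_msun
    # the Schwarzschild factor reduces to unity in the Newtonian geometry
    factor = (1 - 2 * M / r) / (1 - 2 * M / R_nu) if relativistic else 1.0
    return math.sqrt(1 - (R_nu / r) ** 2 * factor)

def redshift(r, R_nu, M_msun):
    M = GEO_KM_PER_MSUN * M_msun
    return math.sqrt((1 - 2 * M / R_nu) / (1 - 2 * M / r))

M, R, r = 1.4, 10.0, 20.0   # solar masses, km, km (sample values)
# bending: the neutrinosphere subtends a larger solid angle than in Newtonian
# gravity, so the factor 1 - g1 multiplying the heating rates is larger
assert 1 - g1(r, R, M) > 1 - g1(r, R, M, relativistic=False)
# redshift: Phi < 1 above the neutrinosphere, reducing the neutrino energies
assert redshift(r, R, M) < 1.0
assert abs(redshift(R, R, M) - 1.0) < 1e-12   # no redshift at the surface
```

For this sample configuration the solid-angle factor $1-g_1$ is enhanced by roughly forty percent over the Newtonian value, while $\Phi\simeq0.86$ suppresses the absorption heating by $\Phi^6$.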
The important heating rate is due to the neutrino and antineutrino absorption by free nucleons \begin{equation} \nu_e + n \rightarrow p + e^- , \label{eqn:mib} \end{equation} \begin{equation} \bar{\nu}_e + p \rightarrow n + e^+ , \label{eqn:plb} \end{equation} and it is given by \begin{eqnarray} \dot{q}_1&\approx& 9.65N_A[(1-Y_e)L_{\nu_e,51}\varepsilon^2_{\nu_e}+Y_eL_{\bar{\nu}_e,51} \varepsilon^2_{\bar{\nu}_e}] \nonumber \\ &\times&\frac{1-g_1(r)}{R^2_{\nu 6}}\Phi(r)^6\rm{MeV~ s^{-1}g^{-1}} , \label{eqn:q1} \end{eqnarray} where the first and second terms in the parenthesis are for the processes (\ref{eqn:mib}) and (\ref{eqn:plb}), respectively, $\varepsilon_i$ is the energy in MeV defined by $\varepsilon_i=\sqrt{\langle E_i^3\rangle/\langle E_i\rangle}$, and $\langle E_i^n\rangle$ denotes the $n$th energy moment of the neutrino $(i=\nu_e)$ and antineutrino $(i=\bar{\nu}_e)$ energy distribution, $N_A$ is the Avogadro number, $Y_e$ is the electron fraction, $L_{i,51}$ is the individual neutrino or antineutrino luminosity in units of $10^{51}$ ergs/s, and $R_{\nu6}$ is the neutrinosphere radius in units of $10^6$ cm. In this equation, $1-g_1(r)$ is the geometrical factor which represents the effect of the bending of the neutrino trajectory, and $g_1(r)$ is given by \begin{equation} g_1(r)=\left(1-\left(\frac{R_{\nu}}{r}\right)^2 \frac{1-2M/r}{1-2M/R_{\nu}}\right)^{1/2}, \end{equation} where the factor $(1-2M/r)/(1-2M/R_{\nu})$ arises from the Schwarzschild geometry, and unity should be substituted for this factor in the Newtonian geometry. We also define the redshift factor \begin{equation} \Phi(r)=\sqrt{\frac{1-2M/R_\nu}{1-2M/r}}, \label{eqn:red} \end{equation} in the Schwarzschild geometry, which is unity in the Newtonian geometry. We will discuss the effects of these general relativistic correction factors in the next section. The second heating rate, due to neutrino and antineutrino scattering by electrons and positrons, plays an equally important role.
Neutrinos of all flavors can contribute to the scattering, and the heating rate is given by \begin{eqnarray} \dot{q}_3 &\approx& 2.17N_A\frac{T^4_{MeV}}{\rho_8} \nonumber \\ &\times&\left(L_{\nu_e,51}\epsilon_{\nu_e}+L_{\bar{\nu}_e,51}\epsilon_{\bar{\nu}_e}+\frac{6}{7}L_{\nu_{\mu},51} \epsilon_{\nu_{\mu}} \right) \nonumber \\ &\times& \frac{1-g_1(r)}{R^2_{\nu 6}}\Phi(r)^5\rm{ MeV~ s^{-1}g^{-1}}, \label{eqn:q3} \end{eqnarray} where $\epsilon_i=\langle E_i^2\rangle/\langle E_i\rangle$ in MeV $(i=\nu_e,\ \bar{\nu}_e,\ \nu_{\mu})$, and we have assumed the same contribution from the $\nu_{\mu}$, $\bar{\nu}_{\mu}$, $\nu_{\tau}$, and $\bar{\nu}_{\tau}$ fluxes. We take $\varepsilon_i^2\simeq 1.14\epsilon_i^2$ from the numerical studies by Qian and Woosley (1996). The third heating rate, due to neutrino-antineutrino pair annihilation into electron-positron pairs, is given by \begin{eqnarray} \dot{q}_5 &\approx& 12.0N_A \nonumber \\ &\times& \left(L_{\nu_e,51}L_{\bar{\nu}_e,51}(\epsilon_{\nu_e}+\epsilon_{\bar{\nu}_e})+\frac{6}{7}L^2_{\nu_{\mu},51}\epsilon_{\nu_{\mu}}\right) \nonumber \\ &\times& \frac{g_2(r)} {\rho_8 R^4_{\nu 6}}\Phi(r)^9\rm{MeV~ s^{-1}g^{-1}}, \label{eqn:q5} \end{eqnarray} where $g_2(r)$ is given by \begin{equation} g_2(r)=(1-g_1(r))^4(g_1(r)^2 + 4 g_1(r) + 5). \label{eqn:geon} \end{equation} The cooling rates which we include in the present calculations are the inverse reactions of the two heating processes considered in Eqs.~(\ref{eqn:q1}) and (\ref{eqn:q5}). The first cooling rate, due to electron and positron captures by free nucleons, which are the inverse reactions of (\ref{eqn:mib}) and (\ref{eqn:plb}), is given by \begin{equation} \dot{q}_2 \approx 2.27N_AT^6_{MeV}\rm{MeV~ s^{-1}g^{-1}}.
\label{eqn:q2} \end{equation} The second cooling rate, due to electron-positron pair annihilation into neutrino-antineutrino pairs of all flavors, which is the inverse reaction of Eq.~(\ref{eqn:q5}), is given by \begin{equation} \dot{q}_4 \approx 0.144N_A\frac{T^9_{MeV}}{\rho_8}\rm{ MeV~ s^{-1}g^{-1}}. \label{eqn:q4} \end{equation} Combining the above five heating and cooling rates, we obtain the total net heating rate $\dot{q}$ \begin{equation} \dot{q}=\dot{q}_1 - \dot{q}_2 + \dot{q}_3 - \dot{q}_4 + \dot{q}_5. \label{eqn:qtot} \end{equation} As we will discuss in the next section, the first three heating and cooling rates $\dot{q}_1$, $\dot{q}_2$, and $\dot{q}_3$ dominate over the other two contributions from $\dot{q}_4$ and $\dot{q}_5$. \subsection{Boundary conditions and input parameters} We assume that the wind starts from the surface of the protoneutron star at the radius $r_i=R$ and the temperature $T_i$. Near the neutrinosphere and the neutron star surface, the heating (mostly $\dot{q}_1$) and cooling (mostly $\dot{q}_2$) processes almost balance each other due to the very efficient neutrino interactions with material. The system is thus in kinetic equilibrium (Burrows and Mazurek 1982) at high temperature and high density. The inner boundary temperature $T_i$ is determined so that the net heating rate $\dot{q}$ becomes zero at this radius. We have confirmed quantitatively that a small change in $T_i$ does not influence the calculated thermodynamic and hydrodynamic quantities of the neutrino-driven wind very much. We set the density $\rho(r_i)=10^{10}$ g/cm$^3$ at the inner boundary, which is taken from the result of Wilson's numerical simulation in Woosley et al.~(1994). The luminosities of the individual neutrino types $L_i~(i=\nu_e,~\bar{\nu}_e,~\nu_{\mu},~\bar{\nu}_{\mu},~\nu_{\tau},~\bar{\nu}_{\tau})$ are similar to one another and change very slowly from about $10^{52}$ to $10^{50}$ ergs/s during $\sim 10$ s (\cite{wil}).
We therefore take a common neutrino luminosity $L_{\nu}$ as a constant input parameter. In the heating and cooling rates, however, we use the values of the neutrino energies $\epsilon_{\nu_e}=12~\rm{MeV}$, $\epsilon_{\bar{\nu}_e}=22~\rm{MeV}$, and $\epsilon_{\nu}=\epsilon_{\bar{\nu}}=34~\rm{MeV}$ for the other flavors at $r_i=R$, as in Qian \& Woosley (1996). We also take the neutron star mass as a constant input parameter in the range $1.2 M_{\odot} \leq M \leq 2.0 M_{\odot}$. The mass outflow rate $\dot{M}$ determines how much material is ejected by the neutrino-driven wind. In Eqs.~(\ref{eqn:gk1})-(\ref{eqn:gk3}), $\dot{M}$ is taken to be a constant value determined by the following outer boundary condition. In the delayed explosion models of Type II supernovae~(\cite{wil,jan1,jan2}), the shock wave moves away at a radius around 10,000 km above the neutron star surface at times $t \gtrsim 1~{\rm s}$ after the core bounce. As we stated in the previous subsection, the neutrino-driven wind is described fairly well by a steady-state flow between the neutron star surface and the shock. From this observation, a typical temperature at the location of the shock wave can be used as an outer boundary condition. We impose the boundary condition only for subsonic solutions by choosing the value of $\dot{M} < \dot{M}_{\rm crit}$ so that $T=0.1$ MeV at $r\simeq$ 10,000 km, where $\dot{M}_{\rm crit}$ is the critical value for a supersonic solution. Given $\rho(r_i)$, Eq.~(\ref{eqn:gk1}) also determines the initial velocity at $r=r_i$ for each $\dot{M}$. We here explore the effects of the assumed boundary condition and the mass outflow rate $\dot{M}$ on the calculated quantities of the neutrino-driven winds.
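As a rough consistency check of the inner boundary condition, one can balance the dominant heating and cooling rates at the surface, where $g_1(R)=0$ and $\Phi(R)=1$. The sketch below solves $\dot{q}_1=\dot{q}_2$ for $T_i$ using the fiducial spectra quoted above; the electron fraction $Y_e=0.5$ and the luminosity are assumed sample values, and the subdominant rates $\dot{q}_3$, $\dot{q}_4$, $\dot{q}_5$ are neglected.

```python
# Kinetic-equilibrium estimate of the inner boundary temperature T_i:
# balance q1 (nu/nubar absorption, Eq. q1) against q2 (e-/e+ capture, Eq. q2)
# at r = R, where 1 - g1 = 1 and Phi = 1.  Rates q3..q5 are neglected here.
Ye = 0.5                          # assumed electron fraction at the surface
L51 = 10.0                        # sample L_nu = 10^52 ergs/s per species
eps_nu, eps_nubar = 12.0, 22.0    # <E^2>/<E> in MeV, as quoted in the text
R_nu6 = 1.0                       # neutrinosphere radius R = 10 km

# varepsilon^2 ~ 1.14 epsilon^2 (Qian & Woosley 1996), as adopted in the text
q1_coeff = 9.65 * ((1 - Ye) * L51 * 1.14 * eps_nu ** 2
                   + Ye * L51 * 1.14 * eps_nubar ** 2) / R_nu6 ** 2
# q2 = 2.27 T^6 in the same units (the common factor N_A cancels), so:
T_i = (q1_coeff / 2.27) ** (1.0 / 6.0)
# consistent with the surface temperature T ~ 5 MeV quoted earlier in the text
assert 4.5 < T_i < 5.5
```

The very steep $T^6$ dependence of the cooling rate is also why a small change in $T_i$ barely affects the wind solution, as noted above.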
We show in Figs.~\ref{fig1}(a) and \ref{fig1}(b) the fluid velocity and the temperature as functions of the radius from the center of the neutron star for various $\dot{M}$, where the neutron star mass $M = 1.4~M_{\odot}$ and the neutrino luminosity $L_{\nu_e} = 10^{51}$ ergs/s are used. Figures~\ref{fig2}(a) and \ref{fig2}(b) are the same as Figs.~\ref{fig1}(a) and \ref{fig1}(b) for $M = 2.0M_{\odot}$ and $L_{\nu_e} = 10^{52}$ ergs/s. The adopted values of $\dot{M}$ are tabulated in Table 1 together with the calculated entropies and dynamic time scales. These figures indicate that both the velocity and temperature profiles are very sensitive to the adopted $\dot{M}$, corresponding to different boundary conditions at r = 10,000 km. However, the entropies are more or less similar to one another, while the dynamic time scales are very different. Although finding an appropriate boundary condition is not easy, one preferable approach is to match the conditions obtained in numerical simulations of the supernova explosion. We studied one of the successful simulations of a 20$M_{\odot}$ supernova explosion assuming $M = 1.4 M_{\odot}$ (\cite{wil2}). Extensive studies of the r-process (\cite{wil}) are based on this supernova model. Careful inspection tells us that, although the neutrino luminosity for each flavor changes from $5\times 10^{52}$ ergs/s to $10^{50}$ ergs/s, the temperature decreases progressively to 0.1 MeV around r = 10,000 km, where the shock front almost stays during the $\sim 10$ s after the core bounce in which we are most interested. It is to be noted that for a successful r-process (\cite{wil}) the temperature has to decrease gradually down to around 0.1 MeV in the external region. This will be discussed in later sections. As displayed in Figs.~\ref{fig1}(a) and \ref{fig1}(b), our calculation denoted by ``3'' satisfies this imposed boundary condition.
Although it may not be strictly justified, we adopt the same boundary condition for the different neutron star masses studied in this article, expecting that the physics changes continuously, and also aiming to compare results that arise from the same boundary condition. Even in the case of a massive neutron star with M = 2.0$M_{\odot}$, as displayed in Figs.~\ref{fig2}(a) and \ref{fig2}(b), we can still find a solution, denoted by ``1'', which satisfies the same outer boundary condition. Although we fortunately found a solution with a reasonable value of $\dot{M}$, careful studies of numerical simulations in the case of massive neutron stars are highly desirable in order to find a better boundary condition. Let us now argue that our adopted outer boundary condition is not unreasonable. We are interested in the times $t \gtrsim 1~{\rm s}$ when the neutrino-driven wind becomes a quasi-steady-state flow between the neutrinosphere and the shock front. The intense flux of neutrinos from the hot proto-neutron star has already interacted efficiently with the radiation and relativistic electron-positron pairs at high temperature. Thus we have $T \sim T_{\nu}$, where $T$ and $T_{\nu}$ are respectively the photon and neutrino temperatures. In this stage, the gain radius $R_g$ (\cite{bw}), at which the neutrino heating and cooling balance each other, is very close to the neutrinosphere. Since we make the approximation that the neutrinosphere and the neutron star surface are close enough, we here assume that the gain radius is the same as well, i.e. $R_g = R_{\nu}=R$. Under these conditions we can estimate the mass outflow rate $\dot{M}$ by considering the energy deposition to the gas from the main processes of neutrino capture on nucleons, (\ref{eqn:mib}) and (\ref{eqn:plb}). Following the discussion by Woosley et al.
(1994), the rate of energy deposition in the gas above the neutrinosphere is given by \begin{equation} \dot{E} = (L_{\nu_e} + L_{\bar{\nu}_e}) \times \tau_{\nu}, \label{eqn:edot} \end{equation} where $\tau_{\nu}$ is the optical depth for the processes (\ref{eqn:mib}) and (\ref{eqn:plb}) and is given in terms of the opacity $\kappa_{\nu}$ and the pressure scale height $L_p$ by \begin{eqnarray} \tau_{\nu}& =& \int_{R_g}^{\infty} \kappa_{\nu}\rho_{\rm b}dr \nonumber \\ &\approx& \kappa_{\nu}(R_g)\rho_{\rm b}(R_g)L_p(R_g) \nonumber \\ &\approx& 0.076 R_7^2 \left(\frac{T_{\nu}}{3.5{\rm MeV}}\right)^6 \left(\frac{1.4M_{\odot}}{M}\right). \label{eqn:optd} \end{eqnarray} Note that $R_g = R$ and $T_{\nu} = T_i$. In order to obtain this expression, we have used an approximate opacity (\cite{brev,ww}) $\kappa_{\nu} \approx 6.9\times10^{-18} (T_{\nu}/3.5{\rm MeV})^2 {\rm cm}^2 {\rm g}^{-1}$ and the pressure scale height in the radiation-dominated domain, which is written as \begin{eqnarray} L_p &\approx& (aT^4)/(GM\rho_{\rm b}/R^2) \nonumber \\ &= & 74 {\rm km} \left(\left(\frac{T}{{\rm MeV}}\right)^4R_7^2/\rho_{\rm b,7}\right)\left(\frac{1.4M_{\odot}}{M} \right), \nonumber \\ && \label{eqn:scht} \end{eqnarray} where the subscripts on $R_7$ and $\rho_{\rm b,7}$ indicate cgs quantities in units of $10^7$. The deposited energy, Eq.~(\ref{eqn:edot}), is mostly used for lifting the matter out of the gravitational well of the neutron star. Thus, inserting Eq.~(\ref{eqn:optd}) into Eq.~(\ref{eqn:edot}) and using the relation $L_{\nu_e} = L_{\bar{\nu}_e}$ = $(7/4)\pi R^2 \sigma T_{\nu}^4$, the mass outflow rate $\dot{M}$ is approximately given by \begin{eqnarray} \dot{M} &\approx& \dot{E}/(GM/R) \nonumber \\ &\approx& 0.092\left(\frac{L_{\nu_e} + L_{\bar{\nu}_e}}{10^{53} {\rm ergs}~ {\rm s}^{-1}}\right)^{5/2} \left(\frac{1.4M_{\odot}}{M}\right)^2 M_{\odot}{\rm s}^{-1}.
\nonumber \\ \label{eqn:M} \end{eqnarray} Our mass outflow rate $\dot{M}$, obtained from the imposed boundary condition of a temperature 0.1 MeV at 10,000 km, is in reasonable agreement with the estimate of Eq.~(\ref{eqn:M}) within a factor of five for $10^{50} {\rm ergs}/{\rm s} \leq (L_{\nu_e} + L_{\bar{\nu}_e}) \leq 10^{52} {\rm ergs}/{\rm s}$. \subsection{Characteristics of the neutrino-driven wind} When the material of the wind is at the surface of the neutron star and the neutrinosphere, thermodynamic quantities still reflect the effects of neutronization and the electron fraction $Y_e$ remains as low as $\sim 0.1$. Once the wind leaves the surface after the core bounce, the electron number density decreases abruptly and the chemical equilibrium among leptons is determined by the balance between the two processes (\ref{eqn:mib}) and (\ref{eqn:plb}) due to the intense neutrino fluxes, shifting $Y_e$ to $\sim 0.5$. For our purpose of studying the physical condition of the neutrino-driven wind that is suitable for r-process nucleosynthesis, an interesting phase starts when the temperature falls to $\sim 10^{10}$ K. At this temperature the material is still in the NSE, and the baryon number is carried only by free protons and neutrons. The neutron-to-proton number abundance ratio is determined by $Y_e$ through charge neutrality. The electron antineutrino has a harder spectrum than the electron neutrino, as is evident from their energy moments $\epsilon_{\nu _e}=12$ MeV $< \epsilon _{\bar{\nu}_e}=22$ MeV. Thus, the material is slightly shifted to the neutron-rich side.
Assuming weak equilibrium, this situation is approximately described by \begin{eqnarray} Y_e &\approx& \frac{\lambda _{\nu_e n}}{\lambda _{\nu_e n} + \lambda _{\bar{\nu}_e p}} \nonumber \\ &\approx& \left ( 1+\frac{L_{\bar{\nu}_e}}{L_{\nu_e}}\frac{\epsilon_{\bar{\nu}_e}-2 \delta + 1.2 \delta^2 / \epsilon_{\bar{\nu}_e}}{\epsilon_{\nu_e}+2 \delta + 1.2 \delta^2 / \epsilon_{\nu_e}} \right)^{-1}, \nonumber \\ && \label{eqn:ye} \end{eqnarray} where $\lambda_{\nu _e n}$ and $\lambda_{\bar{\nu}_e p}$ are the reaction rates for the processes (\ref{eqn:mib}) and (\ref{eqn:plb}), respectively, and $\delta$ is the neutron-proton mass difference (\cite{qw}). In our parameter set of the neutron star mass $M=1.4M_{\odot}$ and radius $R=10$ km, for example, $Y_e$ varies very slowly from $Y_e(r=R)=0.43$ to $Y_e(r=10000~{\rm km})=0.46$ due to the redshift factor (\ref{eqn:red}), since $\epsilon \propto \Phi$. As this change is small and the calculated hydrodynamic quantities are insensitive to $Y_e$, we set $Y_e=0.5$ for numerical simplicity. One of the most important hydrodynamic quantities, which characterizes the expansion dynamics of the neutrino-driven wind, is the dynamic time scale $\tau_{\rm dyn}$, the duration of the $\alpha$-process. When the temperature falls below $10^{10}$ K, the NSE favors a composition of alpha-particles and neutrons. As the temperature drops further below about $5 \times 10^{9}$ K ($T \approx 0.5$ MeV), the system falls out of the NSE and the $\alpha$-process starts to accumulate seed elements until the charged-particle reactions freeze out at $T \approx 0.5/e$ MeV $\approx 0.2$ MeV.
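As a quick numerical check of Eq.~(\ref{eqn:ye}), the sketch below evaluates the equilibrium $Y_e$ in Python, assuming equal luminosities $L_{\nu_e} = L_{\bar{\nu}_e}$ and inserting $\delta = 1.293$ MeV for the neutron-proton mass difference (the numerical value of $\delta$ is an assumption here, as it is not quoted in this section):

```python
# Electron fraction set by the nu_e / anti-nu_e capture equilibrium,
# Eq. (ye):
#   Y_e ~ [1 + (L_anu/L_nu) * (e_anu - 2d + 1.2 d^2/e_anu)
#                           / (e_nu  + 2d + 1.2 d^2/e_nu )]^-1
def ye_equilibrium(e_nu, e_anu, delta=1.293, lum_ratio=1.0):
    """Equilibrium Y_e for given mean neutrino energies in MeV."""
    num = e_anu - 2.0 * delta + 1.2 * delta**2 / e_anu
    den = e_nu + 2.0 * delta + 1.2 * delta**2 / e_nu
    return 1.0 / (1.0 + lum_ratio * num / den)

# energy moments from the text: eps_nu_e = 12 MeV, eps_anti-nu_e = 22 MeV
ye = ye_equilibrium(12.0, 22.0)
print(f"Y_e = {ye:.2f}")   # ~0.43, matching Y_e(r=R) quoted above
```

The harder antineutrino spectrum pushes the result below 0.5, i.e. slightly neutron-rich, consistent with the surface value quoted in the text.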
Introducing the time for the wind to move from the distance $r_i$ to an outer distance $r_f$, \begin{equation} \tau= \int^{r_f}_{r_i}\frac{dr}{u}, \label{eqn:time} \end{equation} and setting $r_i=r(T=0.5~{\rm MeV})$ and $r_f=r(T=0.5/e~{\rm MeV})$, we can define the dynamic time scale $\tau_{\rm dyn}$ by \begin{equation} \tau_{\rm dyn}\equiv \int^{r(T=0.5/e~{\rm MeV})}_{r(T=0.5~{\rm MeV})}\frac{dr}{u}. \label{eqn:tau} \end{equation} The second important hydrodynamic quantity, which strongly affects the r-process nucleosynthesis occurring at later times when the temperature cools below $0.2$ MeV, is the entropy per baryon, defined by \begin{equation} S= \int ^r _R \frac{m_N \dot{q}}{u T}dr, \label{eqn:S} \end{equation} where $\dot{q}$ is the total net heating rate (\ref{eqn:qtot}). As $S \propto T^3/\rho_b$ assuming radiation dominance, high entropy and high temperature characterize a system with many photons and a low baryon number density. Since high entropy also favors a large fraction of free nucleons in the NSE limit, it is expected to be an ideal condition for achieving a high neutron-to-seed abundance ratio. Therefore, a high entropy at the beginning of the $\alpha$-process is presumed to be desirable for a successful r-process. \section{Numerical results} \subsection{Effects of relativistic gravity on entropy} The purpose of this section is to discuss both similarities and differences of the neutrino-driven wind between the relativistic and the Newtonian treatments. In Fig.~\ref{fig3}, we show typical numerical results for the radial velocity $u$, temperature $T$, and baryon mass density $\rho_{b}$ of the wind for the neutron star mass $M=1.4 M_{\odot}$, radius $R=10$ km, and neutrino luminosity $L_{\nu }=10^{51}$ ergs/s. The radial dependence of these quantities is displayed by solid and dashed curves for the Schwarzschild and Newtonian cases, respectively. Using these results and Eq.~(\ref{eqn:S}), we can calculate $S$ for each ejecta.
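The dynamic time scale of Eq.~(\ref{eqn:tau}) is a simple radial integral over the wind profile. A schematic Python evaluation by the trapezoidal rule is shown below; the constant-velocity, $T \propto 1/r$ profile is purely illustrative and is not a solution of the wind equations:

```python
# Schematic evaluation of tau_dyn (Eq. tau) from tabulated wind
# profiles r, u(r), T(r).  The entropy integral (Eq. S) can be
# evaluated the same way with the integrand m_N * qdot / (u * T).
def trapezoid(y, x):
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def dynamic_timescale(r, u, T, T_hi=0.5, T_lo=0.5 / 2.718281828459045):
    """tau_dyn = integral of dr/u between r(T=T_hi) and r(T=T_lo)."""
    sel = [i for i, t in enumerate(T) if T_lo <= t <= T_hi]
    rs = [r[i] for i in sel]
    inv_u = [1.0 / u[i] for i in sel]
    return trapezoid(inv_u, rs)

# toy profiles: constant velocity, temperature falling as 1/r
r = [10.0 + 0.1 * i for i in range(2000)]   # km
u = [1.0e3 for _ in r]                      # km/s, constant for the toy
T = [0.5 * 43.0 / ri for ri in r]           # MeV; T = 0.5 at r = 43 km

tau = dynamic_timescale(r, u, T)
# with constant u, tau_dyn = (r_f - r_i)/u = 43*(e-1)/1e3 s exactly
```

For a real wind solution the tabulated $u(r)$ from the hydrodynamic integration replaces the constant toy velocity.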
Figure \ref{fig4} shows the calculated profile of the entropy $S$ for the two cases. Although both entropies show a rapid increase just above the surface of the neutron star, $10$ km $\leq r \leq 15$ km, the asymptotic value in the general relativistic wind is nearly $40$ \% larger than that in the Newtonian wind. The similar rapid increase in both winds is due to efficient neutrino heating near the surface of the neutron star. We show the radial dependence of the heating and cooling rates by neutrinos in Figs.~\ref{fig5}(a)-(c). Figure~\ref{fig5}(a) shows the total net heating rate defined by Eq.~(\ref{eqn:qtot}), and Figs.~\ref{fig5}(b) and (c) display the decompositions into the contribution from each heating (solid) or cooling (dashed) rate in the Schwarzschild and Newtonian cases, respectively. The common characteristic in both cases is that the net heating rate $\dot{q}$ has a peak around $r \approx 12$ km, which produces the rapid increase in $S$ near the surface of the neutron star for the following reason. The integrand of the entropy $S$ in Eq.~(\ref{eqn:S}) consists of the heating rate and the inverse of the fluid velocity times the temperature. After the wind lifts off the surface of the neutron star, the fluid velocity increases more rapidly than the temperature decreases, as shown in Fig.~\ref{fig3}. Let us carefully discuss the reason why the general relativistic wind results in a $40$ \% larger entropy than the Newtonian wind in the asymptotic region. This fact has been suggested in the previous papers of Qian \& Woosley~(1996) and Cardall \& Fuller (1997). Unfortunately, however, the reason for this difference was not quantitatively attributed to a specific effect among the several possible sources. We first consider the redshift effect and the bending effect of the neutrino trajectory.
The redshift effect decreases the mean neutrino energy $\epsilon_{\nu}$ ejected from the neutrinosphere; in practice $\epsilon_{\nu}$ is proportional to the redshift factor $\Phi (r)$, which is defined by Eq.~(\ref{eqn:red}). Since the neutrino luminosity is proportional to $\Phi^4$ and the heating rates $\dot{q}_1, \dot{q}_3$, and $\dot{q}_5$ depend on these quantities in different manners, each heating rate has a different $\Phi$-dependence: $\dot{q}_1 \propto L_{\nu}\epsilon_{\nu}^2 \propto \Phi^6$, $\dot{q}_3 \propto L_{\nu}\epsilon_{\nu} \propto \Phi^5$, and $\dot{q}_5 \propto L_{\nu}^2\epsilon_{\nu} \propto \Phi^9$, as shown in Eqs.~(\ref{eqn:q1}), (\ref{eqn:q3}), and (\ref{eqn:q5}). The cooling rates $\dot{q}_2$ and $\dot{q}_4$ do not depend on $\Phi(r)$. The bending effect of the neutrino trajectory is included in the geometrical factors $g_1(r)$ and $g_2(r)$ in these equations. Although the numerical calculations were carried out by including all five heating and cooling processes, since $\dot{q}_1$, $\dot{q}_2$, and $\dot{q}_3$ dominate the total net heating rate $\dot{q}$, we discuss only these three processes in the following for simplicity. In the Newtonian analysis, the redshift factor $\Phi(r)$ is unity and the geometrical factor is given by \[g_{1N}(r)=\sqrt{1-\left(\frac{R_{\nu}}{r}\right)^2}.\] The geometrical factor $g_1(r)$ and the redshift factor appear in the form $(1-g_1(r)) \Phi(r)^m$ in the heating rates $\dot{q}_1 ~~(m=6)$ and $\dot{q}_3~~(m=5)$. As for the first factor $(1-g_1(r))$, the following inequality holds between the Schwarzschild and Newtonian cases for $R_{\nu} \leq r$: \[\left( 1-g_1(r)\right) > \left( 1-g_{1N}(r)\right). \] However, since $\Phi (r)$ is a monotonically decreasing function of $r$, the combined factor $(1-g_1(r)) \Phi(r)^m/(1-g_{1N}(r))$ increases from unity and has a local maximum around $r \sim R_{\nu}+0.2$ km. Its departure from unity is at most 3 \%.
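The behavior of the combined factor can be checked numerically. The sketch below assumes the forms $\Phi(r) = \sqrt{(1-2M/R_{\nu})/(1-2M/r)}$ and $\sin\theta_{\rm max} = (R_{\nu}/r)\sqrt{(1-2M/r)/(1-2M/R_{\nu})}$ with $g_1 = \cos\theta_{\rm max}$, which are the standard Schwarzschild relations but are not written out explicitly in this excerpt:

```python
import math

# Combined relativistic factor (1 - g1(r)) * Phi(r)^m / (1 - g1N(r))
# entering the neutrino heating rates (m = 5 or 6 depending on the rate).
# Assumed forms: Phi(r) = sqrt[(1-2M/R)/(1-2M/r)] (redshift) and
# sin(theta_max) = (R/r) sqrt[(1-2M/r)/(1-2M/R)], g1 = cos(theta_max)
# (ray bending), in geometrized units.
RS = 4.13    # 2GM/c^2 in km, assumed for M = 1.4 Msun
RNU = 10.0   # neutrinosphere radius in km

def combined_factor(r, m=5):
    phi = math.sqrt((1.0 - RS / RNU) / (1.0 - RS / r))
    sin2 = (RNU / r) ** 2 * (1.0 - RS / r) / (1.0 - RS / RNU)
    g1 = math.sqrt(1.0 - sin2)
    g1n = math.sqrt(1.0 - (RNU / r) ** 2)
    return (1.0 - g1) * phi ** m / (1.0 - g1n)

rs = [10.01 + 0.01 * i for i in range(2000)]   # from the surface to 30 km
vals = [combined_factor(r) for r in rs]
# a local maximum a few percent above unity just outside the surface,
# then a rapid redshift-driven decline toward ~0.6 near r = 30 km
```

With these assumed forms the qualitative behavior quoted in the text, a small enhancement near the surface followed by a steep decline, is reproduced.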
Beyond this radius the function starts decreasing rapidly because of the redshift effect $\Phi(r)^m$, and it becomes as low as $\sim 0.6$ at $r \sim 30$ km. In this region, the net heating rate in the relativistic wind is smaller than that in the Newtonian wind if the temperature and density are the same. However, the difference in this region does not influence the dynamics of the wind very much. The dynamics is almost determined in the inner region $R_{\nu} \leq r \lesssim 15$ km, where one finds efficient neutrino heating and a small difference between $(1-g_1(r)) \Phi^m(r)$ and $(1-g_{1N}(r))$. By performing the general relativistic calculation with these two relativistic effects, {\it i.e.} the redshift effect and the bending effect of the neutrino trajectory, switched off, we find that they produce only a small change in entropy, $\Delta S \sim 3$. Thus they do not seem to be the major source of the increase in the entropy. Let us consider another source of general relativistic effects, which is included in the solution of the set of basic equations (\ref{eqn:gk1})-(\ref{eqn:gk3}). Since the entropy depends on the three hydrodynamic quantities $\dot{q}(r)$, $u(r)$, and $T(r)$ (see Eq.~(\ref{eqn:S})), we should discuss each quantity. The neutrino-heating rate $\dot{q}(r)$ depends on the temperature $T(r)$ and density $\rho_{\rm b}(r)$, in addition to the redshift factor and the geometrical factor of the bending neutrino trajectory. Therefore, we first study the detailed behavior of $T(r)$, $u(r)$, and $\rho_{\rm b}(r)$, and then look for the reason why the general relativistic effects increase the entropy. To make the following discussion clear, we assume that the pressure and internal energy per baryon are approximately given by radiation and relativistic electron-positron pairs. This is a good approximation for the neutrino-driven wind.
The equations of state are given by \begin{equation} P \approx \frac{11 \pi^2}{180} T^4, \label{eqn:apn1} \end{equation} \begin{equation} \epsilon \approx \frac{11 \pi^2}{60}\frac{T^4}{\rho_b}. \label{eqn:apn2} \end{equation} By using the additional approximation \begin{equation} u^2 \ll \frac{4P}{3\rho_b}, \label{eqn:apn3} \end{equation} which is satisfied in the region of interest, we find \begin{eqnarray} \frac{1}{T} \frac{dT}{dr} &\approx &\frac{1}{1+u^2-\frac{2M}{r}} \frac{\rho_{\rm b}+P}{4P} \nonumber \\ &\times& \left(-\frac{M}{r^2} + \frac{2u^2}{r} - \frac{45}{11 \pi^2}\frac{u \rho_b}{T^4}\dot{q}\right), \nonumber \\ && \label{eqn:temg} \end{eqnarray} in the Schwarzschild case. The basic equations of the spherically symmetric, steady-state wind in the Newtonian case are given by \begin{equation} \dot{M}=4 \pi r^2 \rho_b v, \label{eqn:nk1} \end{equation} \begin{equation} v \frac{dv}{d r} =- \frac{1}{\rho_b} \frac{dP}{dr}- \frac{M}{r^2}, \label{eqn:nk3} \end{equation} \begin{equation} \dot{q}=v\left(\frac{d \epsilon}{d r} - \frac{P}{\rho_b^2}\frac{d \rho_b}{d r}\right), \label{eqn:nk2} \end{equation} where $v$ is the fluid velocity. The equations of state are the same as in the Schwarzschild case, Eqs.~(\ref{eqn:eos1}) and (\ref{eqn:eos2}). Repeating the same mathematical technique for Eqs.~(\ref{eqn:nk1})-(\ref{eqn:nk2}) instead of Eqs.~(\ref{eqn:gk1})-(\ref{eqn:gk3}) and taking the same approximations (\ref{eqn:apn1})-(\ref{eqn:apn3}), we find the Newtonian counterpart of Eq.~(\ref{eqn:temg}), \begin{equation} \frac{1}{T} \frac{dT}{dr} \approx \frac{\rho_b}{4P} \left(-\frac{M}{r^2} + \frac{2v^2}{r} - \frac{45}{11 \pi^2}\frac{v \rho_b}{T^4}\dot{q}\right). \label{eqn:temn} \end{equation} Note that the logarithmic derivative of the temperature, $d \ln T/dr=T^{-1}dT/dr$, always has a negative value, so the temperature is a monotonically decreasing function of $r$. There are two differences between Eqs.~(\ref{eqn:temg}) and (\ref{eqn:temn}).
The first prefactor $1/(1+u^2-2M/r)$ on the r.h.s. of Eq.~(\ref{eqn:temg}) is larger than unity. This causes a more rapid decrease of $T(r)$ in the relativistic case than in the Newtonian case at small radii, within $r \sim 20$ km, as shown in Fig.~\ref{fig3}, where our approximations are satisfied. The second prefactor $(\rho_{\rm b}+P)/4P$ on the r.h.s. of Eq.~(\ref{eqn:temg}) is larger than the prefactor $\rho_{\rm b}/4P$ on the r.h.s. of Eq.~(\ref{eqn:temn}), {\it i.e.} $(\rho_{\rm b}+P)/4P > \rho_{\rm b}/4P$, which makes the difference caused by the first prefactor even larger. Applying similar mathematical transformations to the velocity, we obtain the approximations \begin{equation} \frac{1}{u}\frac{du}{dr} \approx \frac{3}{1+u^2-\frac{2M}{r}} \frac{(\rho_b+4P)}{4P} \frac{M}{r^2}-\frac{2}{3r}+\frac{\rho_b}{4 u P}\dot{q} \label{eqn:gv} \end{equation} in the Schwarzschild case, and \begin{equation} \frac{1}{v}\frac{dv}{dr}\approx \frac{3 \rho_b}{4P}\frac{M}{r^2}- \frac{2}{3r}+\frac{ \rho_b}{4 v P}\dot{q} \label{eqn:nv} \end{equation} in the Newtonian case. In these two equations, the first term on the r.h.s. makes the major contribution. Since exactly the same prefactors $1/(1+u^2-2M/r)$ and $(\rho_{\rm b}+4P)/4P$ appear in the Schwarzschild case, the same logic as for the logarithmic derivative of the temperature applies to the velocity. Note, however, that slightly different initial velocities at the surface of the neutron star obscure this difference in Fig.~\ref{fig3}. The relativistic Schwarzschild wind starts from $u(10~{\rm km}) \approx 8.1 \times 10^4$ cm/s, while the Newtonian wind starts from $v(10~{\rm km}) \approx 2.0 \times 10^5$ cm/s. Both winds reach almost the same velocity around $r \sim 20$ km and beyond.
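The size of the first relativistic prefactor can be illustrated with a few numbers; $u^2$ is negligible at these radii, and $2GM/c^2 \approx 4.13$ km is assumed for $M = 1.4 M_{\odot}$:

```python
# First prefactor of Eq. (temg), 1/(1 + u^2 - 2M/r), with u^2 neglected:
# it exceeds unity near the surface and steepens d(ln T)/dr relative to
# the Newtonian Eq. (temn).  (The second prefactor contributes a further
# factor (rho_b + P)/rho_b on top of this.)
RS = 4.13  # Schwarzschild radius 2GM/c^2 in km, assumed for M = 1.4 Msun

boosts = {r: 1.0 / (1.0 - RS / r) for r in (12.0, 15.0, 20.0, 30.0)}
for r, b in boosts.items():
    print(f"r = {r:4.0f} km : prefactor = {b:.2f}")
# ~1.52 at 12 km, falling toward unity at larger radii
```

The enhancement is largest exactly where the neutrino heating peaks, which is why the relativistic temperature profile steepens inside $r \sim 20$ km.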
The baryon number conservation leads to the logarithmic derivative of the baryon density \begin{equation} \frac{1}{\rho_b}\frac{d\rho_b}{dr}=-\frac{1}{u}\frac{du}{dr} -\frac{2}{r}, \label{eqn:rho} \end{equation} where $u$ is the radial component of the four-velocity in the Schwarzschild case; in the Newtonian case $u$ should be read as the fluid velocity $v$. Inserting Eq.~(\ref{eqn:gv}) or Eq.~(\ref{eqn:nv}) into the first term on the r.h.s. of this equation, we can predict the behavior of $\rho_b$ as a function of $r$ in both the Schwarzschild and Newtonian cases, as shown in Fig.~\ref{fig3}. Incorporating these findings concerning $T(r)$ and $u(r)$ into the definition of the entropy, Eq.~(\ref{eqn:S}), we can now discuss why the relativistic Schwarzschild wind produces a larger entropy than the Newtonian wind. We have already noted in the second paragraph of this section that the fluid velocity increases more rapidly in the Schwarzschild case. Since the integrand of the entropy $S$ is inversely proportional to the fluid velocity times the temperature, this fact enlarges the difference due to $\dot{q}$ at smaller radii (see Fig.~\ref{fig4}(a)). In addition, as we found, the temperature in the Schwarzschild geometry is lower than that in the Newtonian geometry. For these reasons, the entropy of the relativistic Schwarzschild wind becomes larger than that of the Newtonian wind. Let us confirm the present results quantitatively in a different manner. The entropy per baryon for relativistic particles with zero chemical potential is given by \begin{equation} S={11\pi^2\over 45}{T^3\over \rho_b/m_N}. \label{eqn:sapp} \end{equation} Here we take a common temperature, $T=0.5$ MeV, in the Schwarzschild and Newtonian cases. This is the typical temperature at the beginning of the $\alpha$-process, and both electrons and positrons are still relativistic at this temperature. We read off the radii at which the temperature becomes 0.5 MeV in Fig.~\ref{fig3}.
They are 43 km and 55 km in the Schwarzschild and Newtonian cases, respectively. We can again read off the baryon mass densities at these radii in this figure, which are $\rho_{\rm b}=5.5 \times 10^5$ g/cm$^3$ at $r=43$ km in the relativistic Schwarzschild wind and $\rho_{\rm b}=7.8 \times 10^5$ g/cm$^3$ at $r=55$ km in the Newtonian wind. Taking the inverse ratio of these $\rho_{\rm b}$ values with the approximate relation $(\ref{eqn:sapp})$, we find that the entropy in the Schwarzschild case is 40 \% larger than that in the Newtonian case. This is quantitatively in good agreement with the result of the numerical calculation shown in Fig.~\ref{fig4}. Let us briefly remark on the dynamic time scale $\tau_{\rm dyn}$. Although a higher entropy is favorable for making enough neutrons in the neutrino-driven wind, a shorter dynamic time scale is also in favor of the r-process. This is because the neutron-to-seed abundance ratio, which is one of the critical parameters for a successful r-process, becomes larger in a wind with shorter $\tau_{\rm dyn}$, as is to be discussed in the next section. It is therefore worthwhile discussing the general relativistic effect on $\tau_{\rm dyn}$ here. The argument is made transparent by using Eqs.~(\ref{eqn:temg}) and (\ref{eqn:temn}) and Fig.~\ref{fig3}. Since the dynamic time scale $\tau_{\rm dyn}$ is defined as the duration of the $\alpha$-process, in which the temperature of the wind cools from $T=0.5$ MeV to $T=0.5/e \approx 0.2$ MeV, faster cooling is likely to result in a shorter $\tau_{\rm dyn}$. Let us demonstrate this numerically. For the reasons discussed below Eqs.~(\ref{eqn:temg}) and (\ref{eqn:temn}), the temperature of the relativistic fluid decreases more rapidly with distance $r$ than that of the Newtonian fluid. In fact, the distances corresponding to $T=0.5-0.2$ MeV are $r=43-192$ km in the Schwarzschild case and $r=55-250$ km in the Newtonian case.
Figure \ref{fig3} tells us that both fluids have almost the same velocities at these distances, which gives a shorter $\tau_{\rm dyn}$ for the Schwarzschild case than for the Newtonian case. The calculated dynamic time scales are $\tau_{\rm dyn}=0.164$ s for the former and $\tau_{\rm dyn}=0.213$ s for the latter. Before closing this subsection, let us briefly discuss how the system responds in a complicated way to the changes in $T(r)$, $u(r)$ and $\rho_{\rm b}(r)$. When the temperature decreases rapidly at $10~{\rm km}\leq r \lesssim 20$ km, the major cooling process of $e^+e^-$ capture on free nucleons, $\dot{q}_2$, is suppressed because this cooling rate has a rather strong temperature dependence, $\dot{q}_2 \propto T^6$. In the Schwarzschild geometry this suppression partially offsets the decrease in $\dot{q}_1$ due to the neutrino redshift effect, although $\dot{q}_1$ itself is independent of the temperature of the wind. Another heating source, $\dot{q}_3$, due to neutrino-electron scattering also plays a role in the change of entropy. Since $\dot{q}_3$ depends on the baryon density as well as the temperature, $\dot{q}_3 \propto T^4/\rho_{\rm b}$, a correlated response in which $\rho_{\rm b}$ decreases strongly with decreasing temperature might eventually contribute to a partial increase in entropy. However, in reality the actual response arises from a more complicated mechanism, because the $\dot{q}_i$'s depend self-consistently on the solution of the dynamic equations (\ref{eqn:gk1})-(\ref{eqn:gk3}) under the adopted boundary conditions and input parameters, through the relations $\dot{q}_1 \propto L_{\nu}\epsilon_{\nu}^2$, $\dot{q}_2 \propto T^6$, $\dot{q}_3 \propto (T^4/\rho_{\rm b}) L_{\nu}\epsilon_{\nu}$, $\dot{q}_4 \propto T^9/\rho_{\rm b}$, and $\dot{q}_5 \propto \rho_{\rm b}^{-1} L_{\nu}^2\epsilon_{\nu}$. The neutrino-driven wind is a highly non-linear system.
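The 40\% entropy difference obtained in the preceding subsection follows at a glance from Eq.~(\ref{eqn:sapp}): at a common temperature, $S \propto 1/\rho_{\rm b}$, so the ratio of the two densities read off Fig.~\ref{fig3} fixes the ratio of the asymptotic entropies. A one-line check:

```python
# Entropy ratio from Eq. (sapp): S = (11 pi^2 / 45) T^3 / (rho_b / m_N).
# At a common T = 0.5 MeV, S_Schwarzschild / S_Newtonian = rho_N / rho_S.
rho_schwarzschild = 5.5e5   # g/cm^3 at r = 43 km (read off Fig. 3)
rho_newtonian     = 7.8e5   # g/cm^3 at r = 55 km (read off Fig. 3)

ratio = rho_newtonian / rho_schwarzschild
print(f"S_GR / S_Newton = {ratio:.2f}")   # ~1.4, i.e. ~40% larger
```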
\subsection{Parameter dependence} Most of the previous studies of the neutrino-driven wind have concentrated on SN1987A, and the parameter set in the theoretical calculations was almost exclusively tied to that case. We here expand the parameter region of the neutron star mass $M$, radius $R$, and neutrino luminosity $L_{\nu}$, and investigate the dependence of the key quantities $\tau_{\rm dyn}$ and $S$ on these three parameters over a wide range. Since the neutron star mass $M$ and radius $R$ mostly enter the basic equations of the system through the combination $M/R$, we only look at the dependence on $M$ and $L_{\nu}$. Figures~\ref{fig6}(a) and \ref{fig6}(b) show the calculated $\tau_{\rm dyn}$ and $S$ at the beginning of the $\alpha$-process at $T=0.5$ MeV for various neutron star masses $1.2 M_{\odot} \leq M \leq 2.0 M_{\odot}$. Closed circles, connected by a thick solid line, and open triangles, connected by a thin solid line, denote the Schwarzschild and Newtonian cases, respectively. In Fig.~\ref{fig6}(a), we also plot two broken lines for the Newtonian case from Qian \& Woosley~(1996), who adopted \begin{equation} \tau_{\rm dyn} ({\rm QW})= \left.\frac{r}{v} \right| _{0.5{\rm MeV}}, \label{eqn:qwt} \end{equation} in the two limits of radiation dominance (upper) and dominance of non-relativistic nucleons (lower). In either limit, this $\tau_{\rm dyn}$(QW) is an increasing function of the neutron star mass, and this feature is in reasonable agreement with our exact solution, Eq.~(\ref{eqn:tau}). However, the absolute value of (\ref{eqn:qwt}) is about half that of the exact solution in the Newtonian case. A remarkable difference between the Schwarzschild and Newtonian cases is the opposite response of $\tau_{\rm dyn}$ to the neutron star mass (Fig.~\ref{fig6}(a)). General relativistic effects make the dynamic time scale even smaller with increasing neutron star mass.
We have already discussed the reason why $\tau_{\rm dyn}$ in the Schwarzschild case is smaller than that in the Newtonian case by comparing Eqs.~(\ref{eqn:temg}) and (\ref{eqn:temn}) with each other. We understand the decrease of $\tau_{\rm dyn}$ as a consequence of the fact that the general relativistic effects, which arise from the two prefactors on the r.h.s. of Eq.~(\ref{eqn:temg}), are enlarged by the stronger gravitational force $M/r^2$ for larger $M$. A similar analysis of the role of the gravitational force applies to the discussion of the entropy and Eqs.~(\ref{eqn:temg}), (\ref{eqn:gv}), and (\ref{eqn:rho}). Figure~\ref{fig6}(b) shows that the entropy per baryon in the Schwarzschild case has a stronger mass dependence than in the Newtonian case. It is to be noted again that the above features of the mass dependence are equivalent to those obtained by changing the neutron star radius. Since the radius of the protoneutron star shrinks with time during the cooling process, this may act to increase the entropy and decrease the dynamic time scale. Figures~\ref{fig7}(a) and \ref{fig7}(b) show the dependence of the calculated $\tau_{\rm dyn}$ and $S$ on the neutrino luminosity in the range $10^{50} {\rm ergs/s} \leq L_{\nu} \leq 10^{52} {\rm ergs/s}$. Differing from the mass dependence, both quantities are decreasing functions of $L_{\nu}$ as long as $L_{\nu} \leq 10^{52}$ ergs/s. This tendency, except for the absolute values, is in reasonable agreement with the approximate estimates~(Qian \& Woosley 1996) shown by the broken lines. This is because a larger luminosity makes the mass outflow rate $\dot{M}$ higher through more efficient neutrino heating, which causes a larger increase in the fluid velocity in addition to a moderate increase in the baryon density. Combining these changes in the hydrodynamic quantities with the definition of $\tau_{\rm dyn}$, Eq.~(\ref{eqn:tau}), and the definition of $S$, Eq.~(\ref{eqn:S}), we understand that both quantities decrease with increasing neutrino luminosity.
However, if the luminosity becomes larger than $10^{52}$ ergs/s, the temperature does not decrease to 0.1 MeV before the distance reaches 10000 km because of too strong neutrino heating, and the dynamic time scale $\tau_{\rm dyn}$ is of order $\sim 10$ s. In such a very slow expansion of the neutrino-driven wind, the $\alpha$-process goes on too long and leads to conditions unfavorable for r-process nucleosynthesis. To summarize this section, we find it difficult to obtain a very large entropy $\sim 400$ with a reasonably short dynamic time scale $\tau_{\rm dyn} \lesssim 0.1$ s, as reported by Woosley et al. (1994), by changing the neutron star mass $M$ and neutrino luminosity $L_{\nu}$. However, there are still significant differences between our calculated results for $\tau_{\rm dyn}$ and $S$, shown by the thick solid lines in Figs.~\ref{fig6}(a)-\ref{fig7}(b), and those of Qian \& Woosley (1996), shown by the broken lines: in the mass dependence of the entropy and in the opposite behavior of $\tau_{\rm dyn}$. We will see in the subsequent sections that these differences are important in the search for a successful r-process condition. \subsection{Implication for nucleosynthesis} Having determined the detailed behavior of the dynamic time scale $\tau_{\rm dyn}$ and entropy per baryon $S$ as functions of the neutron star mass $M$, radius $R$, and neutrino luminosity $L_{\nu}$, we now discuss their implications for r-process nucleosynthesis. We have already shown the calculated results for $\tau_{\rm dyn}$ and $S$ for limited sets of the two independent parameters $M$ and $L_{\nu}$ in Figs.~\ref{fig6}(a)-\ref{fig7}(b). We here expand the parameter space in order to include a number of $(M, L_{\nu})$-grids in their reasonable ranges $1.2 M_{\odot} \leq M \leq 2.0 M_{\odot}$ and $10^{50} {\rm ergs/s} \leq L_{\nu} \leq 10^{52}$ ergs/s. Figure~\ref{fig8} displays the calculated results in the $\tau_{\rm dyn}-S$ plane.
Also shown are two zones in which r-process nucleosynthesis might occur such that the second abundance peak around $A=130$ and the third abundance peak around $A=195$ emerge from a theoretical calculation, as suggested by Hoffman et al. (1997). Their condition for the element with mass number $A$ to be produced in explosive r-process nucleosynthesis, valid for $Y_e > \langle Z \rangle / \langle A \rangle$, is given by \begin{equation} S\approx Y_{e,i} \left\{ \frac{8 \times 10^7 (\langle A \rangle -2 \langle Z \rangle)}{\ln[(1-2 \langle Z \rangle /A)/(1 - \langle A \rangle /A)]} \left(\frac{\tau_{\rm dyn}}{\rm s}\right)\right\}^{1/3}, \label{eqn:hofcon} \end{equation} where $\langle A \rangle$ is the mean mass number and $\langle Z \rangle$ is the mean proton number of the seed nuclei at the end of the $\alpha$-process. Following the numerical survey of seed abundances by Hoffman et al. (1997), we choose $\langle A \rangle=90$ and $\langle Z \rangle=34$ in Fig.~\ref{fig8}. From this figure, we find that a dynamic time scale as short as $\tau_{\rm dyn} \approx 6$ ms with $M=$2.0$M_{\odot}$ and $L_{\nu}=10^{52}$ ergs/s is the best case among those studied in the present paper for producing the r-process elements, although the entropy is rather small, $S \approx 140$. Let us remark briefly on this useful equation. Equation~(\ref{eqn:hofcon}) tells us that the r-process element with mass number $A$ is efficiently produced from seed elements with $\langle A \rangle$ and $\langle Z \rangle$ under a given physical condition $\tau_{\rm dyn}$, $S$, and $Y_e$ at the onset of r-process nucleosynthesis at $T_9 \approx 2.5$. In order to derive Eq.~(\ref{eqn:hofcon}), Hoffman et al. (1997) assumed that the $\alpha + \alpha + n \rightarrow {}^9{\rm Be} + \gamma$ reaction is in equilibrium during the $\alpha$-process at $T \approx 0.5-0.2$ MeV, because of its low Q-value, and that the $^9{\rm Be} + \alpha \rightarrow {}^{12}{\rm C} + \gamma$ reaction triggers the burning of alpha-particles to accumulate seed elements.
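As a numerical reading of criterion (\ref{eqn:hofcon}), the sketch below evaluates the minimum entropy needed to reach a given mass number $A$ for the quoted seed composition; the factor $8\times10^7$ carries the units fixed in the derivation of Hoffman et al. (1997):

```python
import math

def entropy_required(A, tau_dyn, ye=0.4, a_seed=90.0, z_seed=34.0):
    """Entropy needed to reach mass number A, Eq. (hofcon).

    tau_dyn in seconds; a_seed, z_seed are the mean mass and proton
    numbers of the seed nuclei at the end of the alpha-process.
    """
    num = 8.0e7 * (a_seed - 2.0 * z_seed)
    den = math.log((1.0 - 2.0 * z_seed / A) / (1.0 - a_seed / A))
    return ye * (num / den * tau_dyn) ** (1.0 / 3.0)

# third r-process peak, A ~ 195, with the short time scale found above
s_195 = entropy_required(195.0, tau_dyn=0.006)
print(f"S required for A=195 at tau_dyn=6 ms: {s_195:.0f}")
```

The required entropy grows with $\tau_{\rm dyn}^{1/3}$, which is why a short dynamic time scale compensates for the rather modest entropy of the $(M, L_{\nu})=(2.0 M_{\odot}, 10^{52}~{\rm ergs/s})$ wind.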
The NSE holds if the nuclear interaction time scale for $\alpha + \alpha + n \rightarrow {}^9{\rm Be} + \gamma$ is much shorter than the expansion time scale. We found in the present calculation that this is not always the case in neutrino-driven winds with short dynamic time scales, for $L_{\nu} \approx 5 \times 10^{51}-10^{52}$ ergs/s, which is to be discussed more quantitatively in the next section. Keeping this in mind, we regard Eq.~(\ref{eqn:hofcon}) as still a useful formula for searching for a suitable physical condition for the r-process without performing a numerical nucleosynthesis calculation. One might wonder if the dynamic time scale $\tau_{\rm dyn} \sim 6$ ms is too short for the wind to be heated by neutrinos. A careful comparison between the proper expansion time and the specific collision time for the neutrino heating is needed in order to answer this question. Note that $\tau_{\rm dyn}$ was defined as the duration of the $\alpha$-process, in which the temperature of the expanding wind decreases from $T=0.5$ MeV to $0.5/e \approx 0.2$ MeV; this corresponds to the outer atmosphere of the neutron star. These radii are $r(T=0.5 ~{\rm MeV})=52$ km and $r(T=0.5/e ~{\rm MeV})=101$ km for the wind with $(L_{\nu}, M)=(10^{52} {\rm ergs/s}, 2.0 M_{\odot})$, and $r(T=0.5 ~{\rm MeV})= 43$ km and $r(T=0.5/e ~{\rm MeV})=192$ km for the wind with $(L_{\nu}, M)=(10^{51} {\rm ergs/s}, 1.4 M_{\odot})$. We found in Figs.~\ref{fig5}(a)-(c) that the neutrinos transfer their energy to the wind most effectively just above the neutron star surface at $10 {\rm km} \leq r < 20 {\rm km}$. Therefore, as for the heating problem, one should refer to the time for the wind to reach the radius where the temperature is $T \approx 0.5$ MeV, rather than to $\tau_{\rm dyn}$.
We can estimate this expansion time $\tau_{\rm heat}$ by setting $r_i = R =10$ km and $r_f = r(T=0.5~{\rm MeV})$ in Eq.~(\ref{eqn:time}): $\tau_{\rm heat} = 0.017$ s and $0.28$ s for the winds with $(L_{\nu},M)=(10^{52} {\rm ergs/s}, 2.0 M_{\odot})$ and $(10^{51} {\rm ergs/s}, 1.4 M_{\odot})$, respectively. We note for completeness that $r(T=0.5~{\rm MeV})=52$ km and 43 km for the respective cases. These proper expansion time scales, $\tau_{\rm heat}$, are to be compared with the specific collision time $\tau_{\nu}$ for the neutrino-nucleus interactions in order to discuss the efficiency of the neutrino heating. The collision time $\tau_{\nu}$ is expressed (\cite{qhl}) as \begin{eqnarray} \tau_{\nu} &\approx& 0.201 \times L_{\nu ,51}^{-1} \nonumber \\ &\times& \left ( \frac{\epsilon _{\nu}}{\rm MeV}\right) \left( \frac{r}{100 {\rm km}}\right)^2 \left( \frac{\langle \sigma _{\nu} \rangle}{10^{-41} {\rm cm}^2} \right)^{-1} {\rm s}, \label{eqn:tnu} \end{eqnarray} where $L_{\nu, 51}$ and $\epsilon _{\nu}$ have already been defined in Sec.~2-1, and $\langle \sigma _{\nu} \rangle$ is the cross section averaged over the neutrino energy spectrum. As discussed above, neutrino heating occurs most effectively at $r \approx 12$ km (see also Fig.~\ref{fig5}(a)), and we set this value in Eq.~(\ref{eqn:tnu}). Since the two neutrino processes (\ref{eqn:mib}) and (\ref{eqn:plb}) make the biggest contribution to heating the wind, with $\epsilon _{\nu_e}=12$ MeV and $\epsilon _{\bar{\nu}_e}=22$ MeV, we set $\epsilon _{\nu}$ to a representative value between them, $\epsilon _{\nu} \approx 15$ MeV. We take $\langle \sigma _{\nu} \rangle = 10^{-41}$ cm$^2$. Incorporating these values into Eq.~(\ref{eqn:tnu}), we obtain the $\tau_{\nu}$ values.
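Evaluating Eq.~(\ref{eqn:tnu}) with these inputs is a one-line exercise; a sketch:

```python
# Specific neutrino collision time, Eq. (tnu), in seconds, with the
# inputs quoted in the text: eps_nu = 15 MeV, r = 12 km, <sigma> = 1e-41 cm^2.
def collision_time(lum_51, eps_mev=15.0, r_km=12.0, sigma=1.0e-41):
    return (0.201 / lum_51) * eps_mev * (r_km / 100.0) ** 2 / (sigma / 1.0e-41)

t_hi = collision_time(10.0)   # (L, M) = (1e52 ergs/s, 2.0 Msun)
t_lo = collision_time(1.0)    # (L, M) = (1e51 ergs/s, 1.4 Msun)
print(f"tau_nu = {t_hi:.4f} s and {t_lo:.3f} s")
# 0.0043 s and 0.043 s, the values compared with tau_heat in the text
```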
Let us compare the specific collision time, $\tau_{\nu}$, and the proper expansion time, $\tau_{\rm heat}$, with each other: \begin{mathletters} \begin{eqnarray} \tau_{\nu}&=&0.0043 {\rm s} ~~< ~~ \tau_{\rm heat}= 0.017~~{\rm s}, \nonumber \\ &&{\rm for}~~(L_{\nu},M)=(10^{52} {\rm ergs/s}, 2.0 M_{\odot}), \nonumber \\ && \label{eqn:taua} \\ \tau_{\nu}&=& 0.043 {\rm s} ~~< ~~ \tau_{\rm heat}= 0.28 {\rm s}, \nonumber \\ &&{\rm for}~~(L_{\nu},M)=(10^{51} {\rm ergs/s}, 1.4 M_{\odot}). \nonumber \\ && \end{eqnarray} \end{mathletters} We can conclude that there is enough time for the expanding wind to be heated by neutrinos even with the short dynamic time scale for the $\alpha$-process, $\tau_{\rm dyn} \sim 6$ ms, which corresponds to the case (\ref{eqn:taua}). Before closing this section, let us briefly discuss the effect of the electron fraction $Y_e$ on the hydrodynamic condition of the neutrino-driven wind. Although we took $Y_e=0.5$ for simplicity in our numerical calculations, we should examine quantitatively the sensitivity of the calculated result to $Y_e$. Since we are interested in short dynamic time scales, let us investigate the case with $(L_{\nu},M)=(10^{52} {\rm ergs/s}, 2.0 M_{\odot})$, which results in $S=138.5$ and $\tau_{\rm dyn}=0.00618$ s for $Y_e = 0.5$. When we adopt $Y_e = 0.4$, these quantities change slightly to $S=141.5$ and $\tau_{\rm dyn}=0.00652$ s. These are small changes of at most $\sim 5 \%$, and the situation is similar for the other sets of $(L_{\nu},M)$. To summarize this section, we found that there is a parameter region in Fig.~\ref{fig8} which leads to desirable physical conditions for the r-process nucleosynthesis. Sophisticated supernova simulations (\cite{wil}) indicate that the neutrino luminosity from the protoneutron star decreases slowly from about $5 \times 10^{52}$ to $10^{51}$ ergs/s as time passes after the core bounce. 
Therefore, our favorable neutrino luminosity $L_{\nu}=10^{52}$ ergs/s can be realized in a relatively early epoch of the supernova explosion, around 0.5 s to a few seconds after the core bounce. \section{R-process nucleosynthesis calculation} Our discussion of the r-process nucleosynthesis in the last section was based on Hoffman's criterion, Eq.~(\ref{eqn:hofcon}), which should be used with caution because of the several assumptions and approximations adopted in its derivation. The purpose of this section is to confirm quantitatively that the r-process occurs in the neutrino-driven wind with short dynamic time scale, which we found in the present study. Given the flow trajectory characterized by $u(t)$, $\rho_b(t)$, and $T(t)$ as discussed in the last section, our nucleosynthesis calculation starts from the time when the temperature is $T_9=9$. Since this temperature is high enough for the system to be in the NSE, the initial nuclear composition consists of free neutrons and protons. We set $Y_e=0.4$ in order to compare with Hoffman's criterion shown in Fig.~\ref{fig8}. In our nucleosynthesis calculation we used a fully implicit single network code for the $\alpha$-process and r-process including over 3000 isotopes. We take the thermonuclear reaction rates for all relevant nuclear processes and their inverse reactions, as well as the weak interactions, from Thielemann~(1995) for the isotopes $Z \leq 46$ and from Cowan et al.~(1991) for the isotopes $Z > 46$. Previous r-process calculations suffered from the complication that the seed abundance distribution at $T_9=2.5$ was not fully shown in the literature (\cite{wil,wh,hof}), which makes the interpretation of the whole nucleosynthesis process less transparent. This inconvenience arose because it was numerically too demanding to run both the $\alpha$-process and the r-process in a single network code with the huge number of reaction couplings among $\sim 3000$ isotopes. 
For this reason, one had to calculate the $\alpha$-process first, using a smaller network for light-to-intermediate mass elements, in order to provide the seed abundance distribution at $T_9 = 2.5~(T \approx 0.2 {\rm MeV})$. Adopting such a seed abundance distribution and following the evolution of the material in the wind after $T \approx 0.2$ MeV, the onset temperature of the r-process, the r-process nucleosynthesis calculation was then carried out extensively by using another network code independent of the $\alpha$-process. Our nucleosynthesis calculation is completely free from this complication because we employ a single network code that follows the whole sequence NSE - $\alpha$-process - r-process. The calculated mass abundance distribution is shown in Figs.~\ref{fig10} and \ref{fig9} for the neutrino-driven wind with $(L_{\nu},M)=(10^{52}~{\rm ergs/s}, 2.0 M_{\odot})$, which provides the most favorable condition for the r-process nucleosynthesis, with the shortest $\tau_{\rm dyn}=0.0062$ s among those studied in the present paper~(see Fig.~\ref{fig8}). Figure~\ref{fig9} displays the snapshot at the time when the temperature has cooled to $T_9 = 2.5~(\approx 0.2 {\rm MeV})$ at the end of the $\alpha$-process; this is also the seed abundance distribution at the onset of the r-process. Our calculated quantities at this temperature are the baryon mass density $\rho _{\rm b} =3.73 \times 10^4 $ g/cm$^3$, the neutron mass fraction $X_n = 0.159$, the mass fraction of alpha-particles $X_{\alpha}=0.693$, the average mass number of seed nuclei $\langle A \rangle =94$, and the neutron-to-seed abundance ratio $n/s = 99.8$, for the set of hydrodynamic quantities $\tau_{\rm dyn}= 0.0062$ s, $S \approx 139$, and $Y_e=0.4$. 
These values should be compared with those adopted in Woosley's calculation of trajectory 40, {\it i.e.,} $\rho_b = 1.107 \times 10^4$ g/cm$^3$, $X_n = 0.176$, $X_{\alpha}=0.606$, $\langle A \rangle = 95$, $n/s=77$, $\tau_{\rm dyn} \approx 0.305$ s, $S \approx 400$, and $Y_e=0.3835$, as given in Table 3 of Woosley et al.~(1994). It is interesting to point out that our seed abundance distribution in Fig.~\ref{fig9} is very similar to theirs (\cite{wil,wh}), as clearly shown by the almost identical $\langle A \rangle \approx 95$, although the other evolutionary parameters and thermodynamic quantities are different from each other. The calculated final r-process abundance is displayed in Fig.~\ref{fig10}. Our wind model can produce the second $(A \approx 135)$ and third $(A \approx 195)$ r-process abundance peaks, as well as the rare earth elements between them. It is generally accepted that the r-process elements will be produced if there are plenty of free neutrons and if the neutron-to-seed abundance ratio is high enough to approximately satisfy $A \approx \langle A \rangle +n/s$ (\cite{hof}) at the beginning of the r-process, where $A$ is the typical mass number of the r-process element. Therefore, the $\alpha$-process holds the key to understanding why our wind model results in an r-process nucleosynthesis similar to the result of Woosley's trajectory 40. The $\alpha$ burning starts when the temperature cools below $T = 0.5$ MeV. Since the triple-alpha reaction $^4$He$(\alpha \alpha,\gamma)^{12}$C is too slow at this temperature, an alternative nuclear reaction path to reach $^{12}$C, $^4$He$(\alpha n,\gamma)^9$Be$(\alpha,n)^{12}$C, triggers the explosive $\alpha$-process that produces the seed elements. In the rapidly expanding flow of a neutrino-driven wind with short $\tau_{\rm dyn}$, it is not a good approximation to assume that the first reaction $^4$He$(\alpha n, \gamma)^9$Be is in the NSE. 
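The estimate $A \approx \langle A \rangle + n/s$ quoted above can be checked against our model values at $T_9 = 2.5$; the short Python sketch below (our own illustration, with our own variable names) applies it to the seed quantities of the preceding paragraph:

```python
# The rule A ~ <A> + n/s (Hoffman et al.) locates the heaviest
# r-process products from the seed mass number and the
# neutron-to-seed abundance ratio.

def expected_r_process_mass(A_seed, n_to_seed):
    """Typical mass number reached by neutron captures on the seeds."""
    return A_seed + n_to_seed

# Our wind model at T9 = 2.5: <A> = 94, n/s = 99.8
A_final = expected_r_process_mass(94, 99.8)
print(round(A_final))  # 194, close to the third peak at A ~ 195
```

This simple estimate already anticipates the production of the third-peak elements found in the full network calculation.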
The rate equation is thus written as \begin{eqnarray} \frac{d Y_9}{dt} &\approx& \rho_b^2 Y_{\alpha}^2 Y_n \lambda(\alpha \alpha n \rightarrow ^9{\rm Be}) \nonumber \\ &-& \rho_b Y_{\alpha}Y_9 \lambda(^9{\rm Be}~ \alpha \rightarrow ^{12}{\rm C}) \nonumber\\ &+& \mbox{(their inverse and} \nonumber \\ && \mbox{other reaction rates)}, \label{eqn:rate} \end{eqnarray} where $Y_9$, $Y_{\alpha}$, and $Y_n$ are the number fractions of $^9$Be, $\alpha$, and neutrons, and $ \lambda(\alpha \alpha n \rightarrow ^9{\rm Be})$ and $ \lambda(^9{\rm Be} \alpha \rightarrow ^{12}{\rm C})$ are the thermonuclear reaction rates for the indicated reaction processes. Details on the $\lambda$'s are reported in Woosley and Hoffman (1992) and Wrean, Brune, and Kavanagh (1994). Let us take the first term on the {\it r.h.s.} of Eq.~(\ref{eqn:rate}), which is the largest of all terms in Eq.~(\ref{eqn:rate}). This is allowed in the following discussion of the time scale because the $^4$He$(\alpha n,\gamma)^9$Be reaction is the slowest among all charged-particle reaction paths in the $\alpha$-process. We now define the typical nuclear reaction time scale $\tau_{\alpha}$ of the $\alpha$-process, regulated by the $^4$He$(\alpha n,\gamma)^9$Be reaction time scale $\tau_N$, as \begin{equation} \tau_{\alpha} \gtrsim \left(\rho_b^2 Y_{\alpha}^2 Y_n \lambda(\alpha \alpha n \rightarrow ^9{\rm Be}) \right)^{-1} \equiv \tau_{N}. \label{eqn:35} \end{equation} We show the ratio $\tau_{\rm dyn}/\tau_N$ as a function of the baryon mass density $\rho_b$ at the beginning of the $\alpha$-process, when $T=0.5$ MeV, for various wind models with different $(L_{\nu}, M)$ in Fig.~\ref{fig11}. Note that the critical line $\tau_{\rm dyn}/\tau_{\alpha} =1$ is slightly shifted upwards because $\tau_{N} \lesssim \tau_{\alpha}$. This figure, with the help of Fig.~\ref{fig8}, clearly indicates that the favorable conditions for the r-process nucleosynthesis inevitably have shorter $\tau_{\rm dyn} \ll \tau_{N}$ and $\tau_{\alpha}$. 
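A hedged sketch of Eq.~(\ref{eqn:35}) follows. The rate $\lambda(\alpha\alpha n \rightarrow {}^9{\rm Be})$ is temperature dependent and tabulated in Wrean, Brune, and Kavanagh (1994); the numerical value used below is a placeholder, not a real rate, so only the $\rho_b^{-2}$ scaling, which is what drives the early freeze-out of seed production in a rapidly expanding wind, is meaningful:

```python
# Nuclear time scale of the alpha-process, Eq. (35): the bottleneck
# is He4(alpha n, gamma)Be9, so
#   tau_N = ( rho_b^2 * Y_alpha^2 * Y_n * lambda_aan )^{-1}

def tau_N(rho_b, Y_alpha, Y_n, lam_aan):
    """Inverse of the alpha-alpha-n -> Be9 flow.  rho_b in g/cm^3,
    Y's are number fractions; lam_aan is the (placeholder)
    temperature-dependent three-body rate."""
    return 1.0 / (rho_b**2 * Y_alpha**2 * Y_n * lam_aan)

# Scaling check: tau_N grows as rho_b^-2, so a low-density
# (rapidly expanding) wind freezes the seed production out early.
t_low = tau_N(1.0e6, 0.8, 0.2, 1.0e-10)   # placeholder rate value
t_high = tau_N(2.0e6, 0.8, 0.2, 1.0e-10)
print(t_low / t_high)  # 4.0: halving the density quadruples tau_N
```

With this scaling, the short-$\tau_{\rm dyn}$ winds of Fig.~\ref{fig11} naturally satisfy $\tau_{\rm dyn} \ll \tau_N$.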
The typical ratio is of order $\tau_{\rm dyn}/\tau_N \sim 0.1$. The interpretation of this result is that there is not enough time for the $\alpha$-process to accumulate a large number of seed elements, so that plenty of free neutrons are left even at the beginning of the r-process. Consequently, the $n/s$ ratio becomes very high, $\sim 100$. As for the neutron mass fraction, on the other hand, our value $X_n = 0.159$ is smaller than Woosley's model value $X_n=0.176$ in trajectory 40, because low entropy favors a low neutron fraction. This may be a defect of our low entropy model. However, the short dynamic time scale saves the situation by regulating the excess of the seed elements, as discussed above. These two effects compensate each other, resulting in an average mass number of seed nuclei $\langle A \rangle \approx 95$ and a neutron-to-seed abundance ratio $n/s \approx 100$, which is ideal for the production of the third $(A \approx 195)$ abundance peak of the r-process elements in our model, as displayed in Fig.~\ref{fig10}. R-process elements have recently been detected in several metal-deficient halo stars~(\cite{sn}), and the relative abundance pattern for the elements between the second and the third peaks proves to be very similar to that of the solar system r-process abundances. One possible and straightforward interpretation of this fact is that they were produced in a narrow window of limited physical conditions in massive supernova explosions, as studied in the present paper. Such massive stars have short lives, $\sim 10^7$ yr, and eject their nucleosynthesis products into the interstellar medium continuously from the early epoch of Galactic evolution. It is meaningful, therefore, to discuss several features of our calculated result in comparison with the solar system r-process abundance distribution (\cite{kap}) in Fig.~\ref{fig10}. Although K\"appeler et al. 
obtained these abundances as s-process subtractions from the observed meteoritic abundances~(\cite{and}) for the mass region $63 \leq A \leq 209$, the inferred yields and error bars for $A=206, 207, 208$, and 209 are subject to still-uncertain s-process contributions. We therefore did not show these heavy elements $A=206-209$ in Fig.~\ref{fig10}. Our single wind model reproduces the observed abundance peaks around $A \approx 130$ and $A \approx 195$ and the rare earth element region between these two peaks. However, there are several requirements on the wind model in order to better fit the details of the solar system r-process abundances in the mass region $120 \lesssim A$. The first unsatisfactory feature in our model calculation is that the two peaks are shifted upward by $2 \sim 4$ mass units, although the overall positions and shapes of the peaks are in good agreement with the solar system data. This is a common problem in all theoretical calculations of the r-process nucleosynthesis (\cite{mey3,wil}). The shift of the peak around $A\approx195$ is slightly larger than that around $A \approx 130$, which may be attributed to the strong neutron exposure represented by $n/s \approx 100$ in our model calculation. The second feature is that the rare earth element region shows a broad abundance hill, but its peak position $A \approx 165$ in the data is not explained by our calculation. It was pointed out by Surman et al.~(1997) that the abundance structure in this mass region is sensitive to a subtle interplay of nuclear deformation and beta decay just prior to the freeze-out of the r-process. More careful studies of these nuclear effects and of the dynamics of the r-process nucleosynthesis are desirable. The third failure of the model calculation is the depletion around $A\approx120$, another serious problem encountered commonly by all previous theoretical calculations. This deficiency is thought to be caused by too fast a runaway of the neutron-capture reaction flow in this mass region. 
This is due to too-strong shell effects of the $N=82$ neutron shell closure, suggesting incomplete nuclear mass extrapolations to the nuclei with $Z \approx 40$ and $N \approx 70-80$, which correspond to the depleted abundance mass region $A \approx 120$. It is an interesting suggestion, among many others (\cite{wil}), that an artificial smoothing of the extrapolated zigzag structure of nuclear masses could fill the abundance dip around $A \approx 120$. This suggestion sheds light on the improvement of the mass formula. Let us repeat that the overall success of the present r-process nucleosynthesis calculation, apart from the several unsatisfactory fine features mentioned above, applies only to the heavier mass elements $130 \lesssim A$, including the second $(A \approx 130)$ and the third $(A \approx 195)$ peaks. When one looks at the disagreement of the abundance yields around the first $(A \approx 80)$ peak, relative to those at the third peak, between our calculated result and the solar system r-process abundances, it is clear that a single wind model is unable to reproduce all three r-process abundance peaks. The first peak elements should be produced under different conditions, with a lower neutron-to-seed ratio and a higher neutrino flux. It has already been pointed out by several authors (\cite{see,kod,hil}) that even the r-process nucleosynthesis needs different neutron exposures, just as the s-process nucleosynthesis does, in order to explain the solar system r-process abundance distribution. In a single supernova explosion event, there are several different hydrodynamic conditions in different mass shells of the neutrino-driven wind (\cite{wil,jan1}), which may produce the first peak elements. Supernovae with different progenitor masses, or events like an exploding accretion disk in a neutron-star merger, might also contribute to the production of the r-process elements. Consideration of these possibilities is beyond the scope of the present paper. 
We did not include the effects of neutrino absorption and scattering during the nucleosynthesis process in the present calculation. This is because these effects do not drastically change the final r-process yields as long as the dynamic expansion time scale $\tau _{\rm dyn}$ is very short. Using Eq.~(\ref{eqn:tnu}), we can estimate the specific collision time for neutrino-nucleus interactions: \begin{equation} \tau_{\nu} \approx 0.082 ~{\rm s} - 0.31 ~{\rm s}, \label{eqn:34} \end{equation} where the input parameters are set equal to $L_{\nu ,51}=10$, $\epsilon_{\nu}=15$ MeV, and $\langle \sigma _{\nu} \rangle = 10^{-41}$ cm$^2$. Note that $\tau_{\nu} \approx 0.082$ s is the specific neutrino collision time at $r=52$ km, where the temperature of the wind becomes $T=0.5$ MeV at the beginning of the $\alpha$-process, and $\tau_{\nu} \approx 0.31$ s corresponds to $r=101$ km and $T=0.5/e \approx 0.2$ MeV at the beginning of the r-process. These $\tau_{\nu}$ values are much larger than $\tau_{\rm dyn}=0.0062$ s, which by definition stands for the duration of the $\alpha$-process. Therefore, the neutrino process does not disturb the hydrodynamic condition of the rapid expansion during the $\alpha$-process. It is to be noted, however, that the neutrino process has a strong effect on the r-process for winds of slow expansion. We have numerically examined Woosley's model~(1994) of trajectory 40 and find $\tau_{\rm dyn} \approx 0.3$ s. Meyer et al.~(1998) also used $\tau_{\rm dyn}=0.3$ s in their simplified fluid trajectory to study the neutrino-capture effects. This dynamic time scale $\tau_{\rm dyn} \approx 0.3$ s is larger than or comparable to the specific neutrino collision time $\tau_{\nu}$ in Eq.~(\ref{eqn:34}). In such a slow expansion, the neutrino absorption by neutrons~(\ref{eqn:mib}) produces new protons during the $\alpha$-process. 
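The bracketing values in Eq.~(\ref{eqn:34}) follow directly from Eq.~(\ref{eqn:tnu}) with the stated inputs; a minimal Python check (our own script, not from the paper) evaluates the collision time at the onset radii of the $\alpha$-process and the r-process:

```python
# Eq. (34): collision time between the start of the alpha-process
# (r = 52 km, T = 0.5 MeV) and the start of the r-process
# (r = 101 km, T ~ 0.2 MeV), using Eq. (tnu) with
# L_{nu,51} = 10, eps_nu = 15 MeV, <sigma_nu> = 1e-41 cm^2.

def tau_nu(L_nu_51, eps_nu_MeV, r_km, sigma_41=1.0):
    return 0.201 / L_nu_51 * eps_nu_MeV * (r_km / 100.0) ** 2 / sigma_41

print(round(tau_nu(10.0, 15.0, 52.0), 3))   # 0.082 s, alpha-process onset
print(round(tau_nu(10.0, 15.0, 101.0), 2))  # 0.31 s, r-process onset
# Both exceed tau_dyn = 0.0062 s, so the neutrino processes cannot
# disturb the rapid expansion during the alpha-process in this wind.
```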
These protons are quickly converted into alpha-particles through the reaction chain $p(n,\gamma)d(n,\gamma)$t, followed by t$(p,n)^3$He$(n,\gamma)^4$He and t$(t,2n)^4$He, and thus contribute to the production of the seed elements. These radiative capture reactions and nuclear reactions are much faster than the weak process (\ref{eqn:plb}) on the protons newly produced by the process (\ref{eqn:mib}). The net effect of these neutrino processes, therefore, is to decrease the neutron number density and increase the seed abundance, which leads to an extremely low $n/s$ ratio. As a result, even the second abundance $(A \approx 130)$ peak of the r-process elements disappears, as reported in the literature (\cite{mey,mey2}). Details on the neutrino process will be reported elsewhere. We have assumed that electrons and positrons are fully relativistic throughout the nucleosynthesis process. However, the total entropy of the system may change at temperatures $T \lesssim m_e/3$, where electrons and positrons tend to behave as non-relativistic particles. This might affect the nucleosynthesis, although it does not significantly affect the dynamics near the protoneutron star. We should correct this assumption in future papers. Finally, let us refer to massive neutron stars. A large dispersion in the heavy-element abundances of halo stars has recently been observed. Ishimaru and Wanajo (1999) have shown in their galactic chemical evolution model that if the r-process nucleosynthesis occurs in either massive supernovae $\geq 30 M_{\odot}$ or small-mass supernovae $8-10 M_{\odot}$, where these masses refer to the progenitors, the observed large dispersion can be well explained theoretically. In addition, SN1994W and SN1997D are presumed to be due to $25 M_{\odot}-40 M_{\odot}$ massive progenitors because of the very low $^{56}$Ni abundance in the ejecta (\cite{SN1,SN2}). These massive supernovae are known to have massive iron cores $\geq 1.8 M_{\odot}$ and to leave massive remnants (\cite{SN2}). 
It is critical for the r-process nucleosynthesis whether the remnant is a neutron star or a black hole. Recent theoretical studies of the EOS of neutron star matter, based on relativistic mean field theory, set an upper limit on the neutron star mass at $2.2 M_{\odot}$ (\cite{shen}). \section{Conclusion and Discussions} We have studied the general relativistic effects on the neutrino-driven wind, which is presumed to be the most promising site for the r-process nucleosynthesis. We assumed a spherically symmetric and steady-state flow of the wind. In solving the basic equations for a relativistic fluid in Schwarzschild geometry, we did not resort to the approximate methods adopted in several previous studies. We tried to extract generic properties of the wind in a manner independent of supernova models or neutron-star cooling models. The general relativistic effects introduce several corrections to the equations of motion of the fluid and also to the formulae for the neutrino heating rate, due to the redshift and the bending of neutrino trajectories. We found that these corrections increase the entropy and decrease the dynamic time scale of the expanding neutrino-driven wind relative to the Newtonian case. The most important correction among them proves to be that to the hydrodynamic equations. Both the temperature and the density of the relativistic wind decrease more rapidly with increasing distance than those of the Newtonian wind, without a remarkable change of the velocity at $r<30$ km, where the neutrino heating takes place efficiently. The lower the temperature and density are, the larger the net heating rate is. This is the main reason why the entropy in the relativistic case is larger than in the Newtonian case. We also looked for suitable environmental conditions for the r-process nucleosynthesis in the general relativistic framework. 
We first studied the differences and similarities between relativistic and Newtonian winds in numerical calculations, and then tried to interpret their behavior by expressing the gradients of the temperature, velocity, and density of the system analytically under reasonable approximations. We extensively studied the key quantities for the nucleosynthesis, {\it i.e.} the entropy $S$ and the dynamic time scale $\tau_{\rm dyn}$ of the expanding neutrino-driven wind, and their dependence on the protoneutron star mass, radius, and neutrino luminosity. We found that a more massive, or equivalently more compact, neutron star tends to produce an explosive neutrino-driven wind of shorter dynamic time scale, which is completely different from the result of previous studies in the Newtonian case that adopted approximation methods. We also found that the entropy becomes larger as the neutron star mass becomes larger. Since a larger luminosity makes the dynamic time scale shorter, a large neutrino luminosity is desirable as long as it is less than $10^{52}$ ergs/s. If it exceeds $10^{52}$ ergs/s, only the mass outflow rate becomes large, and the flow cannot cool down to $\sim 0.2~\rm{MeV}$ before reaching the shock front at $r\sim 10,000$ km. As a result, the time scale becomes too long, which is not favorable for the r-process nucleosynthesis. Although we could not find a model which produces a very large entropy $S\sim 400$ as suggested by Woosley et al.~(1994), this does not mean that the r-process does not occur in the neutrino-driven wind. We compared our results with Hoffman's condition and found that the short dynamic time scale $\tau_{\rm dyn}\sim 6$ ms, with $M=2.0 M_{\odot}$ and $L_{\nu}=10^{52}$ ergs/s, provides one of the most preferable conditions for producing the r-process elements around the third peak ($A \sim 195$). 
In order to confirm this, we carried out numerical calculations of the r-process nucleosynthesis under this condition by using a fully implicit single network code which takes account of more than 3000 isotopes and their associated nuclear reactions in a large network. We found that the r-process elements around $A \sim 195$, and even heavier elements like thorium, can be produced in this wind, although it has a low entropy $S \sim 130$. The short dynamic time scale $\tau_{\rm dyn}\sim 6$ ms was found to play the role of ensuring that only a few seed nuclei are produced, with plenty of free neutrons left over at the beginning of the r-process. For this reason the resultant neutron-to-seed ratio, $n/s \sim 100$, is high enough even with low entropy and leads to an appreciable production of r-process elements around the second $(A \approx 130)$ and third $(A \approx 195)$ abundance peaks, and even the hill of rare earth elements between the peaks. Note that the energy release by the interconversion of nucleons into $\alpha$-particles at $T\sim 0.5$ MeV produces an additional entropy of about $\Delta S\sim 14$. This was not included in our present calculation. We note that, taking account of this increase, the r-process could occur in the neutrino-driven wind from a hot neutron star whose mass is smaller than $2.0 M_\odot$. One might think that a short $\tau_{\rm dyn}$ leads to a deficiency of neutrino heating and that the wind may not blow. This is not true, because the mass elements in the wind are heated by energetic neutrinos most efficiently at $r \lesssim 30$ km, while the expansion time scale $\tau_{\rm dyn}$ is the time for the temperature to decrease from $T\sim 0.5$ MeV to 0.2 MeV at larger radii. The duration of time for the mass elements to reach 30 km after leaving the neutron star surface is longer than $\tau_{\rm dyn}$. There is enough time for the system to be heated by neutrinos even for $\tau_{\rm dyn}$ as short as $\sim 6$ ms. 
We did not include neutrino-capture reactions that may change $Y_e$ during the nucleosynthesis process. Since the initial electron fraction was taken to be relatively high, $Y_e =0.4 \sim 0.5$, there is a possibility that the final nucleosynthesis yields in the neutrino-driven wind may be modified by the change in $Y_e$ during the $\alpha$- and r-processes. However, this is expected to be a small modification in our present expansion model with short dynamic time scale, because the typical time scale of neutrino interactions is longer than $\tau_{\rm dyn}$. We will report the details of the nucleosynthesis calculation including neutrino-capture reactions in forthcoming papers. It was found that the entropy decreases with increasing neutrino luminosity. This fact suggests that one cannot obtain a large entropy by merely making the heating rate large. The cooling rate, on the other hand, does not depend on the neutrino luminosity. In the present study we included two cooling mechanisms: $e^{\pm}$ capture by free nucleons and $e^+ e^-$ pair annihilation. As for the cooling rate due to the $e^+ e^-$ pair annihilation, only the contribution from the pair-neutrino process is usually taken into consideration, as in the present calculation. However, there are many other processes which can contribute to the total cooling rate: the photo-neutrino process, the plasma-neutrino process, the bremsstrahlung-neutrino process, and the recombination-neutrino process (\cite{ito}). Indeed, if we artificially double our adopted cooling rate, we obtain a larger entropy. Details of the numerical studies of the cooling rate are reported elsewhere. The radial dependence of the heating rate is also important (\cite{qw}). Since both heating and cooling processes are critical in determining the entropy, more investigation of the neutrino processes is desirable. There are other effects which have not been included in the present study. 
These include, for example, the mass accretion onto the neutron star, the time variation of the neutrino luminosity, convection and mixing of materials, and rotation or other dynamic processes which break the spherical symmetry of the system. These potentially important effects may modify the present results. However, we believe that our main conclusion, that there is a possibility of realizing the r-process nucleosynthesis in an environment of relatively small entropy and short dynamic time scale, is still valid. We conclude that the neutrino-driven wind is a promising astrophysical site for successful r-process nucleosynthesis. \acknowledgments We are grateful to Prof.~G.J.~Mathews and Prof.~J.~Wilson for many useful discussions and kind advice. We also would like to thank Profs.~R.N.~Boyd, S.E.~Woosley, H.~Toki, and Drs.~K.~Sumiyoshi, S.~Yamada, and H.~Suzuki for their stimulating discussions. This work has been supported in part by the Grant-in-Aid for Scientific Research (1064236, 10044103) of the Ministry of Education, Science, Sports and Culture of Japan and the Japan Society for the Promotion of Science.
\section{Introduction}\label{Introduction} This paper is concerned with the injective dimension of two kinds of modules: $D$-modules $M$ over a formal power series ring $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, and $F$-modules $M$ over a noetherian regular ring $R$ of characteristic $p>0$. In both cases, the upper bound \begin{equation}\label{basicbound} \injdim_R M \leq \dim \Supp_R M \end{equation} holds by the foundational work of Lyubeznik (\cite[Theorem 2.4(b)]{LyubeznikFinitenessLocalCohomology} in the $D$-module case and \cite[Theorem 1.4]{LyubeznikFModulesApplicationsToLocalCohomology} in the $F$-module case). Here $\injdim_R M$ denotes the injective dimension of $M$ as an $R$-module. Lyubeznik's proof also shows that \pref{basicbound} is true if $R$ is a polynomial ring instead of a formal power series ring. In the special case of local cohomology in positive characteristic, \pref{basicbound} is due originally to Huneke and Sharp \cite[Corollary 3.9]{HunekeSharpBassnumbersoflocalcohomologymodules}; in equicharacteristic zero, Lyubeznik shows further \cite[Theorem 3.4(b)]{LyubeznikFinitenessLocalCohomology} that \pref{basicbound} holds for the local cohomology of any noetherian regular ring. In either setting, if $\mathfrak{p} \subseteq R$ is a prime ideal of dimension $d$ and $E(R/\mathfrak{p})$ is the $R$-module injective hull of $R/\mathfrak{p}$, then $E(R/\mathfrak{p})$ is a $D$-module (resp. $F$-module) with injective dimension zero whose support has dimension $d$. It is clear from this example that without imposing further hypotheses, there does not exist a nontrivial \emph{lower} bound for $\injdim_R M$ in terms of $\dim \Supp_R M$. Our main result is the following theorem. 
\begin{mainthm}[Theorems \ref{holonomic over power series} and \ref{F-finite over regular char p}]\label{maintheorem} Let $M$ be either a holonomic $D$-module over $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, or an $F$-finite $F$-module over a noetherian regular ring $R$ of characteristic $p > 0$. Then $\injdim_R M \geq \dim \Supp_R M - 1$. \end{mainthm} Our Main Theorem combined with the upper bound \pref{basicbound} shows that $\injdim_R M$ (with $M$ as in the theorem) enjoys a dichotomy property: it has only two possible values, either $\dim \Supp_R M - 1$ or $\dim \Supp_R M$. In the case of a polynomial ring $R = k[x_1, \ldots, x_n]$ over a characteristic-zero field, Puthenpurakal \cite[Corollary 1.2]{PuthenpurakaInjectiveResolutionofLC} has shown that $\injdim_R M = \dim \Supp_R M$ whenever $M$ is a local cohomology module of $R$. This result was strengthened by Zhang \cite[Theorem 4.5]{zhanginjdim} who established this equality for all holonomic $D$-modules over polynomial rings, as well as for all $F$-finite $F$-modules over certain regular rings (finitely generated algebras over an infinite, positive-characteristic field). In the case of a formal power series ring, or in the general case of a positive-characteristic regular local ring, this equality need not hold: indeed, in both cases, the injective hull of $R/\mathfrak{p}$ where $\mathfrak{p}$ is a prime ideal of dimension one provides a counterexample (see Remark \ref{bound is sharp} below). The proof of our Main Theorem appears in sections \ref{injdim over power series} and \ref{injdim of F-modules}, following the most technical part of the paper: section \ref{minimal resolutions}, an in-depth study of the last terms of minimal injective resolutions over the rings considered in our Main Theorem as well as their localizations. 
One key observation is that the assumption that $R$ is Jacobson in \cite[Theorem 3.3]{zhanginjdim} can be weakened; to this end, we introduce a notion of {\it pseudo-Jacobson} rings in section \ref{Pseudo-Jacobson rings}. During the preparation of this paper, we were made aware of the article \texttt{arXiv:1603.06639v1}, which investigates the injective dimension of local cohomology modules $H^j_I(R)$ when $R$ is a formal power series ring in characteristic zero and the dimension of the support of $H^j_I(R)$ is at most 4. In November 2017 an updated version, \texttt{arXiv:1603.06639v2}, appeared; it investigates the injective dimension of $F$-finite $F$-modules over a regular local ring in characteristic $p > 0$ and modules of the form $(H^j_I(R))_g$ over a regular local ring $R$ in characteristic zero (here $g \in R$). The approach in our paper is different: in order to investigate the injective dimension of holonomic $D$-modules, we introduce and study the notion of pseudo-Jacobson rings. Such an approach works well for both holonomic $D$-modules and $F$-finite $F$-modules, further illustrating the nice parallel between these two classes of modules. \subsection*{Acknowledgments} The authors thank Mel Hochster and Gennady Lyubeznik for helpful discussions. \section{$D$-module and $F$-module preliminaries}\label{FandDmodules} In this section, we review some basic notions concerning $D$-modules, $F$-modules, and local cohomology that will be needed throughout the paper. We begin with general notation and conventions. Throughout the paper, a \emph{ring} is commutative with $1$ unless otherwise specified, and a \emph{local ring} is always noetherian. If $R$ is a noetherian ring and $M$ is an $R$-module, we will denote the minimal injective resolution of $M$ as an $R$-module by $E^{\bullet}_R(M)$ (or $E^{\bullet}(M)$ if $R$ is understood). The $R$-module $E^0_R(M) = E^0(M)$ (the \emph{injective hull} of $M$) will simply be denoted $E_R(M)$ (or $E(M)$). 
We denote the set of associated primes of $M$ by $\Ass M$ or $\Ass_R M$. If $R$ is a ring and $S \subseteq R$ is a multiplicative subset, we will use without further comment the one-to-one correspondence between prime ideals of $S^{-1}R$ and prime ideals of $R$ that do not meet $S$. In particular, if we write ``let $S^{-1}\mathfrak{p}$ be a prime ideal of $S^{-1}R$'', it is to be understood that $\mathfrak{p} \subseteq R$ is a prime ideal and $\mathfrak{p} \cap S = \emptyset$. \subsection{$D$-modules} Our basic references for $D$-modules are EGA \cite{EGAIV4} and the book \cite{BjorkBookRIngDiffOperators} of Bj\"{o}rk. Let $R$ be a ring and $k \subseteq R$ a subring. We denote by $D(R,k)$ (or simply $D$, if $R$ and $k$ are understood) the (usually non-commutative) ring of $k$-linear differential operators on $R$, which is a subring of $\End_k(R)$. This ring is recursively defined as follows \cite[\S 16]{EGAIV4}. A differential operator $R \rightarrow R$ of order zero is multiplication by an element of $R$. Supposing that differential operators of order $\leq j-1$ have been defined, $d \in \End_k(R)$ is said to be a differential operator of order $\leq j$ if, for all $r \in R$, the commutator $[d,r] \in \End_k(R)$ is a differential operator of order $\leq j-1$, where $[d,r] = dr - rd$ (the products being taken in $\End_k(R)$). We write $D^j(R)$ for the set of differential operators on $R$ of order $\leq j$ and set $D(R,k) = \cup_j D^j(R)$. If $d \in D^j(R)$ and $d' \in D^{\ell}(R)$, it is easy to prove by induction on $j + \ell$ that $d' \circ d \in D^{j+\ell}(R)$, so $D(R,k)$ is a ring. The most important case for us will be where $k$ is a field of characteristic zero and $R = k[[x_1, \ldots, x_n]]$ is a formal power series ring over $k$.
In this case, the ring $D$, viewed as a left $R$-module, is freely generated by monomials in the partial differentiation operators $\partial_1 = \frac{\partial}{\partial x_1}, \ldots, \partial_n = \frac{\partial}{\partial x_n}$ (\cite[Theorem 16.11.2]{EGAIV4}: here the characteristic-zero assumption is necessary). This ring has an increasing filtration $\{D(\nu)\}$, called the \emph{order filtration}, where $D(\nu)$ consists of those differential operators of order $\leq \nu$. The associated graded object $\gr(D) = \oplus D(\nu)/D(\nu-1)$ with respect to this filtration is isomorphic to $R[\zeta_1, \ldots, \zeta_n]$ (a commutative ring), where $\zeta_i$ is the image of $\partial_i$ in $D(1)/D(0) \subseteq \gr(D)$. By a \emph{$D$-module} we always mean a \emph{left} module over the ring $D$ unless otherwise specified. If $M$ is a finitely generated $D$-module, there exists a \emph{good filtration} $\{M(\nu)\}$ on $M$, meaning that $M$ becomes a filtered left $D$-module with respect to the order filtration on $D$ \emph{and} $\gr(M) = \oplus M(\nu)/M(\nu-1)$ is a finitely generated $\gr(D)$-module. We let $J$ be the radical of $\Ann_{\gr(D)} \gr(M) \subseteq \gr(D)$ and set $d(M) = \dim \gr(D)/J$. The ideal $J$, and hence the number $d(M)$, is independent of the choice of good filtration on $M$. By \emph{Bernstein's theorem}, if $M \neq 0$ is a finitely generated $D$-module, we have $n \leq d(M) \leq 2n$. \begin{definition}\label{holonomic} Let $M$ be a finitely generated $D$-module. We say that $M$ is \emph{holonomic} if $M = 0$ or $d(M) = n$. \end{definition} The ring $R$ itself is a holonomic $D$-module. More generally, local cohomology modules of $R$ are holonomic $D$-modules (Proposition \ref{local coh omnibus}(c)). The ring $D$ is a holonomic $D$-module if and only if $n=0$ (so $D = k$), since $d(D) = 2n$. We collect in the following proposition the basic results on $D$-modules that we will use below. 
\begin{prop}\label{D-modules omnibus} Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, and let $M$ be a $D(R,k)$-module. \begin{enumerate}[(a)] \item $\injdim_R M \leq \dim \Supp_R M$ \cite[Theorem 2.4(b)]{LyubeznikFinitenessLocalCohomology}; \item If $S \subseteq R$ is a multiplicative subset, $S^{-1}M$ is both a $D$-module and a $D(S^{-1}R, k)$-module; and if $M$ is of finite length as a $D$-module, $S^{-1}M$ is of finite length as a $D(S^{-1}R, k)$-module (\cite[Proposition 2.5]{zhanginjdim}; this is true for any domain $R$ and subring $k$); \item If $M$ is finitely generated as a $D$-module (in particular, if $M$ is holonomic), then $M$ has finitely many associated primes as an $R$-module \cite[Theorem 2.4(c)]{LyubeznikFinitenessLocalCohomology}; \item If $M$ is holonomic, then $M$ is of finite length as a $D$-module \cite[Theorem 2.7.13]{BjorkBookRIngDiffOperators}; \item If $0 \rightarrow M' \rightarrow M \rightarrow M'' \rightarrow 0$ is a short exact sequence of $D$-modules and $D$-linear maps, then $M$ is holonomic (resp. finite length) if and only if $M'$ and $M''$ are holonomic (resp. finite length). \end{enumerate} \end{prop} As remarked in \cite[2.2(c)]{LyubeznikFinitenessLocalCohomology}, a proof of the ``holonomic'' part of Proposition \ref{D-modules omnibus}(e) is analogous to the proof of \cite[Proposition 1.5.2]{BjorkBookRIngDiffOperators} (the ``finite length'' part is a well-known fact about modules over any ring). We note that part (b) of the proposition does \emph{not} assert that if $M$ is of finite length as a $D$-module, then $S^{-1}M$ is also of finite length as a $D$-module, that is, as a $D(R,k)$-module. This is not known even in the case where $S^{-1}R = R_f$ for a single element $f \in R$. Part (b) only makes the weaker claim that $S^{-1}M$ is of finite length as a $D(S^{-1}R, k)$-module. \subsection{$F$-modules} Our basic reference for $F$-modules is the paper \cite{LyubeznikFModulesApplicationsToLocalCohomology} of Lyubeznik in which they were introduced.
Let $R$ be a noetherian regular ring of characteristic $p>0$. Let $F_R$ denote the Peskine-Szpiro functor: \[F_R(M)=R' \otimes_R M\] for each $R$-module $M$, where $R'$ denotes the $R$-module that is the same as $R$ as a left $R$-module and whose right $R$-module structure is given by $r'\cdot r=r^pr'$ for all $r'\in R'$ and $r\in R$. \begin{definition}[Definitions 1.1, 1.9 and 2.1 in \cite{LyubeznikFModulesApplicationsToLocalCohomology}] \label{definition: F-modules} An $F_R$-{\it module} (or \emph{$F$-module}, if $R$ is understood) is an $R$-module $M$ equipped with an $R$-linear isomorphism $\theta_{M}:M\to F_R(M)$. A \emph{homomorphism} between $F_R$-modules $(M,\theta_{M})$ and $(N,\theta_N)$ is an $R$-linear map $\varphi:M\to N$ such that the following diagram commutes: \[\xymatrix{ M \ar[r]^{\theta_{M}} \ar[d]^{\varphi} & F_R(M) \ar[d]^{F_R(\varphi)}\\ N \ar[r]^{\theta_{N}} & F_R(N). }\] A {\it generating morphism} of an $F_R$-module $(M,\theta_{M})$ is an $R$-linear map $\beta:M'\to F_R(M')$, where $M'$ is an $R$-module, such that the direct limit of the diagram \[ \xymatrix{ M' \ar[r]^{\beta} \ar[d]^{\beta} & F_R(M') \ar[r]^{F_R(\beta)} \ar[d]^{F_R(\beta)} & F^2_R(M')\ar[r]^{F^2_R(\beta)} \ar[d]^{F^2_R(\beta)} &\cdots \\ F_R(M') \ar[r]^{F_R(\beta)} & F^2_R(M') \ar[r]^{F^2_R(\beta)} & F^3_R(M')\ar[r]^{F^3_R(\beta)} &\cdots }\] is the map $\theta_{M}:M\to F_R(M)$. An $F_R$-module $M$ is called $F_R$-{\it finite} (or \emph{$F$-finite}) if it admits a generating morphism $\beta:M'\to F_R(M')$ such that $M'$ is a finitely generated $R$-module. \end{definition} The counterpart to Proposition \ref{D-modules omnibus} for $F$-modules is the following. \begin{prop}\label{F-modules omnibus} Let $R$ be a noetherian regular ring of characteristic $p > 0$, and let $M$ be an $F$-module.
\begin{enumerate}[(a)] \item $\injdim_R M \leq \dim \Supp_R M$ \cite[Theorem 1.4]{LyubeznikFModulesApplicationsToLocalCohomology}; \item The minimal injective resolution $E^{\bullet}(M)$ is a complex of $F$-modules and $F$-module morphisms \cite[Example 1.2(b'')]{LyubeznikFModulesApplicationsToLocalCohomology}; \item If $S \subseteq R$ is a multiplicative subset, then $S^{-1}M$ is an $F_{S^{-1}R}$-module; and if $M$ is $F$-finite, then $S^{-1}M$ is $F_{S^{-1}R}$-finite (both statements follow from \cite[Remark 1.0(i)]{LyubeznikFModulesApplicationsToLocalCohomology}); \item If $M$ is $F$-finite, $M$ has finitely many associated primes as an $R$-module \cite[Theorem 2.12(a)]{LyubeznikFModulesApplicationsToLocalCohomology}; \item If $0 \rightarrow M' \rightarrow M \rightarrow M'' \rightarrow 0$ is a short exact sequence of $F$-modules and $F$-module morphisms, then $M$ is $F$-finite if and only if $M'$ and $M''$ are $F$-finite \cite[Theorem 2.8]{LyubeznikFModulesApplicationsToLocalCohomology}; \item If $E$ is an injective $R$-module, $E$ is an $F$-module \cite[Proposition 1.5]{HunekeSharpBassnumbersoflocalcohomologymodules}. \end{enumerate} \end{prop} \subsection{Local cohomology} We will also make use of local cohomology modules, for whose definition and basic properties we refer to \cite{LCBookBrodmannSharp}. The most important facts about local cohomology we will use are the following. \begin{prop}\label{local coh omnibus} \begin{enumerate}[(a)] \item Local cohomology commutes with flat base change: if $R \rightarrow S$ is a flat homomorphism of noetherian rings, $I \subseteq R$ is an ideal, and $M$ is an $R$-module, then $H^i_{IS}(S \otimes_R M) \cong S \otimes H^i_I(M)$ as $S$-modules for all $i \geq 0$ \cite[Theorem 4.3.2]{LCBookBrodmannSharp}. In particular, local cohomology commutes with localization and completion. 
\item If $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, $M$ is a holonomic $D$-module, and $I \subseteq R$ is an ideal, then for all $i \geq 0$, the local cohomology module $H^i_I(M)$ is a holonomic $D$-module; in particular, $H^i_I(R)$ is a holonomic $D$-module \cite[2.2(d)]{LyubeznikFinitenessLocalCohomology}. \item If $R$ is a noetherian regular ring of characteristic $p > 0$, $M$ is an $F$-finite $F$-module, and $I \subseteq R$ is an ideal, then for all $i \geq 0$, the local cohomology module $H^i_I(M)$ is an $F$-finite $F$-module; in particular, $H^i_I(R)$ is an $F$-finite $F$-module \cite[Proposition 2.10]{LyubeznikFModulesApplicationsToLocalCohomology}. \item If $R$ is a Gorenstein ring and $\mathfrak{m} \subseteq R$ is a maximal ideal of height $n$, then $H^n_{\mathfrak{m}}(R) \cong E(R/\mathfrak{m})$ as $R$-modules \cite[Lemma 11.2.3]{LCBookBrodmannSharp}. \end{enumerate} \end{prop} Part (d) of Proposition \ref{local coh omnibus} is stated in \cite{LCBookBrodmannSharp} only in the case where $R$ is a Gorenstein \emph{local} ring, but the same result is true for maximal ideals in arbitrary Gorenstein rings, with the same proof: if $E^{\bullet}$ is the minimal injective resolution of $R$ as a module over itself, then $\Gamma_{\mathfrak{m}}(E^{\bullet})$, which computes the local cohomology of $R$ supported at $\mathfrak{m}$, is simply $E(R/\mathfrak{m})$ concentrated in degree $n$. Recall that if $M$ is a module over a noetherian ring $R$, $E^{\bullet}(M)$ is its minimal injective resolution, and $\mathfrak{p} \subseteq R$ is a prime ideal, then the \emph{Bass number} $\mu^i(\mathfrak{p}, M)$ is the (possibly infinite) number of copies of the indecomposable injective hull $E(R/\mathfrak{p})$ occurring as direct summands of $E^i(M)$ (see \cite[\S 18]{MatsumuraCRT} for properties of Bass numbers, including their well-definedness). 
In particular, to say that $\mu^i(\mathfrak{p}, M) > 0$ is to say that $E(R/\mathfrak{p})$ is a summand of $E^i(M)$, which implies that $\mathfrak{p} \in \Supp_R E^i(M)$. If $\mathfrak{p} \subseteq R$ is a prime ideal, the $R$-module $E(R/\mathfrak{p})$ is naturally an $R_{\mathfrak{p}}$-module isomorphic to $E_{R_{\mathfrak{p}}}(R_{\mathfrak{p}}/\mathfrak{p} R_{\mathfrak{p}})$. We will use this fact repeatedly. For now, we remark that in conjunction with Proposition \ref{local coh omnibus}(d), this fact implies that $E(R/\mathfrak{p})$ is a $D(R,k)$-module whenever $R$ is a Gorenstein ring and $k \subseteq R$ is a subring; indeed, since $R_{\mathfrak{p}}$ is a Gorenstein local ring, we have \[ E(R/\mathfrak{p}) \cong E_{R_{\mathfrak{p}}}(R_{\mathfrak{p}}/\mathfrak{p} R_{\mathfrak{p}}) \cong H^{\hgt \mathfrak{p}}_{\mathfrak{p} R_{\mathfrak{p}}}(R_{\mathfrak{p}}) \cong (H^{\hgt \mathfrak{p}}_{\mathfrak{p}}(R))_{\mathfrak{p}} \] as $R_{\mathfrak{p}}$-modules, so since $H^{\hgt \mathfrak{p}}_{\mathfrak{p}}(R)$ is a $D$-module by Proposition \ref{local coh omnibus}(b), its localization $E(R/\mathfrak{p})$ is as well. (In the $F$-module case, if $R$ is a regular local ring of characteristic $p>0$ and $\mathfrak{p} \subseteq R$ is a prime ideal, then $E(R/\mathfrak{p})$ is an $F$-module by Proposition \ref{F-modules omnibus}(f).) Finally, we will need to make use of a lemma of Lyubeznik on Bass numbers and local cohomology. \begin{lem}\cite[Lemma 1.4]{LyubeznikFinitenessLocalCohomology}\label{Bass numbers of local coh} Let $R$ be a noetherian ring, let $\mathfrak{p} \subseteq R$ be a prime ideal, and let $M$ be an $R$-module such that the $R_{\mathfrak{p}}$-module $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}}$ is injective for all $i \geq 0$. \begin{enumerate}[(a)] \item All differentials in the complex $(\Gamma_{\mathfrak{p}}(E^{\bullet}(M)))_{\mathfrak{p}}$ of $R_{\mathfrak{p}}$-modules are zero. 
\item For all $i \geq 0$, the Bass numbers $\mu^i(\mathfrak{p}, M)$ and $\mu^0(\mathfrak{p}, H^i_{\mathfrak{p}}(M))$ are equal. \end{enumerate} \end{lem} \section{Localizations of $D$-modules}\label{inj dim of localization} Throughout this section, let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, and let $D = D(R,k)$. The goal of this section is to prove the following generalization of Proposition \ref{D-modules omnibus}(a). \begin{thm}\label{inj dim bound for arbitrary localization} Let $M$ be a $D$-module, and let $S \subseteq R$ be a multiplicative subset. Then \[ \injdim_{S^{-1}R} S^{-1}M \leq \dim \Supp_{S^{-1}R} S^{-1}M. \] \end{thm} In fact, it suffices to prove the following weaker statement. \begin{prop}\label{inj dim bound for localization} Let $M$ be a $D$-module, and let $\mathfrak{p} \in \Supp_R M$. Then \[ \injdim_{R_{\mathfrak{p}}} M_{\mathfrak{p}} \leq \dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}}. \] \end{prop} \begin{proof}[Proof that Proposition \ref{inj dim bound for localization} implies Theorem \ref{inj dim bound for arbitrary localization}] Let $M$ and $S \subseteq R$ be given, and let $t$ be the injective dimension $\injdim_{S^{-1}R} S^{-1}M$. Since $E^t_{S^{-1}R}(S^{-1}M) \neq 0$, there exists a prime ideal $S^{-1} \mathfrak{p} \subseteq S^{-1}R$ such that $\mu^t(S^{-1}\mathfrak{p}, S^{-1}M) > 0$. Since $S^{-1} \mathfrak{p}$ belongs to the support of $E^t_{S^{-1}R}(S^{-1}M)$, if we localize the complex $E^{\bullet}_{S^{-1}R}(S^{-1}M)$ at $S^{-1}\mathfrak{p}$, its length remains the same. But this new complex is the minimal injective resolution of $(S^{-1}M)_{S^{-1} \mathfrak{p}} = M_{\mathfrak{p}}$ as an $R_{\mathfrak{p}}$-module, so we have \[ t = \injdim_{S^{-1}R} S^{-1}M = \injdim_{R_{\mathfrak{p}}} M_{\mathfrak{p}} \leq \dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}} \leq \dim \Supp_{S^{-1}R} S^{-1}M, \] where the first inequality holds since we have assumed Proposition \ref{inj dim bound for localization}.
This completes the proof. \end{proof} The proof of Proposition \ref{inj dim bound for localization} below proceeds similarly to that of \cite[Theorem 3.4(b)]{LyubeznikFinitenessLocalCohomology}, an analogous statement for local cohomology modules over more general rings. We first need a couple of lemmas. \begin{lem}\label{completion is a D-module} Let $\mathfrak{p}$ be a prime ideal of $R$, and let $M$ be a $D$-module. Then the $\mathfrak{p} R_{\mathfrak{p}}$-adic completion $\widehat{R_{\mathfrak{p}}}$ of $R_{\mathfrak{p}}$ is isomorphic to a formal power series ring $K[[z_1, \ldots, z_c]]$ where $K$ is a field of characteristic zero and $c = \hgt \mathfrak{p}$, and the $\widehat{R_{\mathfrak{p}}}$-module $\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$ is in fact a $D(\widehat{R_{\mathfrak{p}}}, K)$-module. \end{lem} \begin{proof} The statement about the form of the ring $\widehat{R_{\mathfrak{p}}}$ is simply Cohen's structure theorem, since $R_{\mathfrak{p}}$ is a regular local ring. The second statement is essentially included in the proof of \cite[Corollary 8]{LyubeznikCharFreeFiniteness} (see also the proof of \cite[Theorem 2.4]{LyubeznikFinitenessLocalCohomology}), so we omit most details, contenting ourselves with the following outline. There exist derivations $\delta_i: R_{\mathfrak{p}} \rightarrow R_{\mathfrak{p}}$ for $1 \leq i \leq c$ such that, upon passing to the completion, each $\delta_i$ induces the $K$-linear derivation $\partial_i = \frac{\partial}{\partial z_i}$ on $\widehat{R_{\mathfrak{p}}}$. We define the $D(\widehat{R_{\mathfrak{p}}}, K)$-module structure on $\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$ as follows: if $\hat{r}, \hat{s} \in \widehat{R_{\mathfrak{p}}}$ and $\mu \in M_{\mathfrak{p}}$, then $\hat{r} \cdot (\hat{s} \otimes \mu) = \hat{r}\hat{s} \otimes \mu$ and $\partial_i \cdot (\hat{s} \otimes \mu) = \partial_i(\hat{s}) \otimes \mu + \hat{s} \otimes \delta_i(\mu)$.
It is easy to see that, for $1 \leq i \leq c$ and all $\hat{r} \in \widehat{R_{\mathfrak{p}}}$, the actions of $\partial_i \hat{r} - \hat{r} \partial_i$ and $\partial_i(\hat{r})$ on $\hat{s} \otimes \mu$ are the same. \end{proof} \begin{lem}\label{localized local coh is injective} Let $M$ be a $D$-module. For all prime ideals $\mathfrak{p}$ of $R$ and all $i \geq 0$, the $R_{\mathfrak{p}}$-module $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}}$ is injective. \end{lem} \begin{proof} First recall that $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}} \cong H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$ as $R_{\mathfrak{p}}$-modules. Since $H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$ is supported only at $\mathfrak{p} R_{\mathfrak{p}}$, every element of this module is annihilated by some power of $\mathfrak{p} R_{\mathfrak{p}}$, and therefore $H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$ is an $\widehat{R_{\mathfrak{p}}}$-module, where $\widehat{R_{\mathfrak{p}}}$ is the $\mathfrak{p} R_{\mathfrak{p}}$-adic completion of $R_{\mathfrak{p}}$: this $\widehat{R_{\mathfrak{p}}}$-module may be identified with $\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$. The extension $R_{\mathfrak{p}} \rightarrow \widehat{R_{\mathfrak{p}}}$ is flat, so since local cohomology commutes with flat base change, we have \[ \widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}}) \cong H^i_{\mathfrak{p} \widehat{R_{\mathfrak{p}}}}(\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}) \] as $\widehat{R_{\mathfrak{p}}}$-modules. As in Lemma \ref{completion is a D-module}, $\widehat{R_{\mathfrak{p}}} \cong K[[z_1, \ldots, z_c]]$ where $K$ is a field of characteristic zero.
By that lemma, $\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$, and hence $H^i_{\mathfrak{p} \widehat{R_{\mathfrak{p}}}}(\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}})$, is a $D(\widehat{R_{\mathfrak{p}}}, K)$-module. Since $\widehat{R_{\mathfrak{p}}}$ is a formal power series ring over $K$, we have \[ \injdim_{\widehat{R_{\mathfrak{p}}}} H^i_{\mathfrak{p} \widehat{R_{\mathfrak{p}}}}(\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}) \leq \dim \Supp_{\widehat{R_{\mathfrak{p}}}} H^i_{\mathfrak{p} \widehat{R_{\mathfrak{p}}}}(\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}) = 0, \] where the inequality is Proposition \ref{D-modules omnibus}(a) and the equality holds because $H^i_{\mathfrak{p} \widehat{R_{\mathfrak{p}}}}(\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}})$ is supported only at the maximal ideal $\mathfrak{p} \widehat{R_{\mathfrak{p}}}$. Therefore $H^i_{\mathfrak{p} \widehat{R_{\mathfrak{p}}}}(\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}})$, which we have identified with $H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$, is injective as an $\widehat{R_{\mathfrak{p}}}$-module; since $R_{\mathfrak{p}} \rightarrow \widehat{R_{\mathfrak{p}}}$ is flat, $H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$ is injective over $R_{\mathfrak{p}}$ as well, completing the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{inj dim bound for localization}] We proceed by induction on $\dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$. If $\mathfrak{p}$ is a minimal prime of $M$, then this dimension is zero and we must show that $M_{\mathfrak{p}}$ is injective as an $R_{\mathfrak{p}}$-module. 
Since $\mathfrak{p}$ is minimal in $\Supp_R M$, every element of $M_{\mathfrak{p}}$ is annihilated by some power of $\mathfrak{p} R_{\mathfrak{p}}$, and so $M_{\mathfrak{p}}$ is a module over the $\mathfrak{p} R_{\mathfrak{p}}$-adic completion $\widehat{R_{\mathfrak{p}}}$ of $R_{\mathfrak{p}}$: this module may be identified with $\widehat{R_{\mathfrak{p}}} \otimes_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$. By the same reasoning used in the proof of Lemma \ref{localized local coh is injective}, $M_{\mathfrak{p}}$ is injective over $\widehat{R_{\mathfrak{p}}}$ and therefore over $R_{\mathfrak{p}}$. Now suppose that $\dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}} > 0$. Let $E^{\bullet}(M_{\mathfrak{p}})$ denote the minimal injective resolution of $M_{\mathfrak{p}}$ as an $R_{\mathfrak{p}}$-module. By the inductive hypothesis, if $\mathfrak{q} \subset \mathfrak{p}$ and $\mathfrak{q} \in \Supp_R M$, we have \[ \injdim_{R_{\mathfrak{q}}} M_{\mathfrak{q}} \leq \dim \Supp_{R_{\mathfrak{q}}} M_{\mathfrak{q}} < \dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}}, \] so if $i \geq \dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$, $E^i(M_{\mathfrak{p}})$ is supported only at $\mathfrak{p} R_{\mathfrak{p}}$. By Lemma \ref{localized local coh is injective}, $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}}$ is an injective $R_{\mathfrak{p}}$-module for all $i \geq 0$. By Lemma \ref{Bass numbers of local coh}(a), the differentials \[ E^i(M_{\mathfrak{p}}) = \Gamma_{\mathfrak{p} R_{\mathfrak{p}}}(E^i(M_{\mathfrak{p}}))\rightarrow \Gamma_{\mathfrak{p} R_{\mathfrak{p}}}(E^{i+1}(M_{\mathfrak{p}})) = E^{i+1}(M_{\mathfrak{p}}) \] are zero for all $i \geq \dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}}$. By the minimality of the resolution, $E^i(M_{\mathfrak{p}})$ itself is zero for all such $i$, completing the proof. 
\end{proof} \section{Pseudo-Jacobson rings}\label{Pseudo-Jacobson rings} Recall that a ring $R$ is said to be \emph{Jacobson} if every prime ideal of $R$ is equal to the intersection of the maximal ideals containing it, and that if $R$ is a Jacobson ring, so also is every quotient $R/I$ of $R$. It is not hard to see from this that every non-maximal prime ideal of $R$ must be contained in \emph{infinitely many} distinct maximal ideals. It is this weaker statement that will be important for us; hence we make the following definition. \begin{definition}\label{pseudo-Jacobson ring} A commutative ring $R$ is called \emph{pseudo-Jacobson} if every non-maximal prime ideal $\mathfrak{p}$ of $R$ is contained in infinitely many distinct maximal ideals. \end{definition} Pseudo-Jacobson rings will arise for us in the following way: if $R$ is a regular local ring and $f$ is a non-unit of $R$, then unless $R$ is of very small dimension, the localization $R_f$ is pseudo-Jacobson. This follows from Proposition \ref{localization is pseudo-Jacobson}(a) below; the next preliminary results are given with this result in mind. \begin{lem}\label{intersection of height 1 primes} Let $(R, \mathfrak{m})$ be a local domain of dimension $d \geq 2$. Then \[ \bigcap_{\substack{\mathfrak{p} \subseteq R \, \mathrm{prime}\\{\hgt \mathfrak{p} = 1}}} \mathfrak{p} = 0. \] \end{lem} \begin{proof} Suppose otherwise, and let $f \neq 0$ belong to the displayed intersection. Then every height $1$ prime ideal $\mathfrak{p}$ is a minimal prime of $f$. Since $R$ is noetherian, there are only finitely many such minimal primes. By Krull's principal ideal theorem, $\mathfrak{m}$ is contained in the union of all height $1$ prime ideals. Since there are only finitely many such, prime avoidance implies that $\mathfrak{m}$ is contained in a height $1$ prime ideal, a contradiction since $\dim R > 1$. 
\end{proof} \begin{prop}\label{intersection of height d-1 primes} Let $(R, \mathfrak{m})$ be a catenary local domain of dimension $d > 0$. Let $t$ be an integer such that $0\leq t\leq d-1$. Then \[ \bigcap_{\substack{\mathfrak{p} \subseteq R \, \mathrm{prime}\\{\hgt \mathfrak{p} = t}}} \mathfrak{p} = 0. \] \end{prop} \begin{proof} We proceed by induction on $d$. If $d=1$, then the only possible value of $t$ is $0$, and the conclusion is clear. If $d=2$, then the case $t=0$ is likewise clear, and the case $t=1$ is Lemma \ref{intersection of height 1 primes}. Now suppose that $d \geq 3$. By Lemma \ref{intersection of height 1 primes}, we may assume that $t \geq 2$. Fix a height $1$ prime ideal $\mathfrak{p} \subseteq R$. Since $R$ is catenary, the height $t-1$ prime ideals of $R/\mathfrak{p}$ are precisely the height $t$ prime ideals of $R$ containing $\mathfrak{p}$. Therefore, the inductive hypothesis applied to the $(d-1)$-dimensional catenary local domain $R/\mathfrak{p}$ shows that $\mathfrak{p}$ is the intersection of all height $t$ prime ideals of $R$ containing $\mathfrak{p}$. Intersecting over all height $1$ prime ideals $\mathfrak{p}$, whose intersection is $0$ by Lemma \ref{intersection of height 1 primes}, we conclude the proof. \end{proof} \begin{prop}\label{localization is pseudo-Jacobson} Let $(R, \mathfrak{m})$ be a catenary local domain of dimension $d \geq 2$, and let $f \in \mathfrak{m}$ be a nonzero element. \begin{enumerate}[(a)] \item The ring $R_f$ is pseudo-Jacobson. \item Every maximal ideal of $R_f$ has height $d-1$. \end{enumerate} \end{prop} \begin{proof} We claim first that there are infinitely many height $d-1$ prime ideals of $R$ that do not contain $f$. Suppose otherwise and let $\mathfrak{p}_1, \ldots, \mathfrak{p}_n$ be all the height $d-1$ prime ideals not containing $f$. Choose a nonzero $g \in \cap_{i=1}^n \mathfrak{p}_i$: since $R$ is a domain, it is enough to choose a nonzero element of each $\mathfrak{p}_i$ and let $g$ be their product.
Then $fg$ belongs to every height $d-1$ prime ideal of $R$, a contradiction to Proposition \ref{intersection of height d-1 primes}. Now let $\mathfrak{p}$ be a prime ideal of $R$ such that $\mathfrak{p} R_f$ is a non-maximal prime ideal of $R_f$. Since $\dim R_f = d-1$, the height of $\mathfrak{p}$ is at most $d-2$, so the quotient $R/\mathfrak{p}$ is a catenary local domain of dimension at least $2$. By the reasoning of the previous paragraph applied to $R/\mathfrak{p}$, there are infinitely many height $\dim(R/\mathfrak{p}) - 1$ prime ideals in $R/\mathfrak{p}$ that do not contain the image of $f$ (which is a nonzero element in the maximal ideal $\mathfrak{m}/\mathfrak{p}$), and these prime ideals correspond to infinitely many prime ideals of $R$ that contain $\mathfrak{p}$ but not $f$. Since $R$ is catenary, these prime ideals all have height $d-1$, and therefore correspond to maximal ideals in $R_f$. This proves part (a). To prove part (b), suppose $\mathfrak{p}$ is a prime ideal of $R$ such that $f \notin \mathfrak{p}$ and $\hgt \mathfrak{p} \leq d-2$. Then as in the proof of part (a), the quotient $R/\mathfrak{p}$ has dimension at least $2$ and satisfies the hypotheses of Proposition \ref{intersection of height d-1 primes}, so $\mathfrak{p}$ is properly contained in infinitely many prime ideals of $R$ that do not contain $f$, and therefore $\mathfrak{p} R_f$ is not a maximal ideal of $R_f$. We conclude that all maximal ideals of $R_f$ must have height $d-1$, as claimed. \end{proof} \section{The last terms of minimal injective resolutions}\label{minimal resolutions} In this section, we study minimal injective resolutions. Proposition \ref{inj hull is finite} below shows that the property of a module $M$ being of finite length as a $D$-module (resp. being $F$-finite) is inherited by the indecomposable summands of the last term of the minimal injective resolution of $M$. The following lemma is the key to proving both cases of this. 
\begin{lem}\label{summands of top inj module} Let $R$ be a noetherian domain and let $M$ be an $R$-module of finite injective dimension $t$. Suppose that for all prime ideals $\mathfrak{p}$ of $R$ and all $i \geq 0$, the local cohomology $R$-modules $H^i_{\mathfrak{p}}(M)$ have finitely many associated primes, and their localizations $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}}$ are injective $R_{\mathfrak{p}}$-modules. Then for all $\mathfrak{p} \in \Spec(R)$ such that $\mu^t(\mathfrak{p}, M) > 0$, there exists an ideal $J \subseteq R$ such that the quotient \[ N = H^t_{\mathfrak{p}}(M)/\Gamma_J(H^t_{\mathfrak{p}}(M)) \] is isomorphic to a direct sum of copies of $E(R/\mathfrak{p})$. \end{lem} \begin{proof} Let $\mathfrak{p} \in \Spec(R)$ be such that $\mu^t(\mathfrak{p}, M) > 0$. Since $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}}$ is injective over $R_{\mathfrak{p}}$ for all $i$, it follows from Lemma \ref{Bass numbers of local coh}(b) that $\mu^0(\mathfrak{p}, H^t_{\mathfrak{p}}(M)) = \mu^t(\mathfrak{p}, M) > 0$, and therefore $\mathfrak{p} \in \Ass H^t_{\mathfrak{p}}(M)$. By hypothesis, $H^t_{\mathfrak{p}}(M)$ has only finitely many associated primes: say $\Ass H^t_{\mathfrak{p}}(M) = \{\mathfrak{p}, \mathfrak{q}_1, \ldots, \mathfrak{q}_r\}$. Let $J = \mathfrak{q}_1 \cdots \mathfrak{q}_r$ (with the convention that if $r = 0$, that is, if $\Ass H^t_{\mathfrak{p}}(M) = \{\mathfrak{p}\}$, then $J = R$), and as in the statement of the lemma, let $N = H^t_{\mathfrak{p}}(M)/\Gamma_J(H^t_{\mathfrak{p}}(M))$. By \cite[Exercise 2.1.14]{LCBookBrodmannSharp}, $\Ass H^t_{\mathfrak{p}}(M)$ is the disjoint union of $\Ass N$ and $\Ass \Gamma_J(H^t_{\mathfrak{p}}(M))$, from which we conclude that $\Ass N = \{\mathfrak{p}\}$. Now let $f \in R \setminus \mathfrak{p}$ be given. By hypothesis, the minimal injective resolution $E^{\bullet}(M)$ is a complex of length $t$, so $H^t_{\mathfrak{p}}(M)$ is a quotient of $\Gamma_{\mathfrak{p}}(E^t(M))$. Since $f \notin \cup_{\mathfrak{q} \in \Ass N} \mathfrak{q} = \mathfrak{p}$, multiplication by $f$ is injective on $N$.
On the other hand, since $R$ is a domain, the injective $R$-module $\Gamma_{\mathfrak{p}}(E^t(M))$ is divisible, so multiplication by any non-zero $f \in R$ is surjective on $\Gamma_{\mathfrak{p}}(E^t(M))$ and therefore on any quotient of $\Gamma_{\mathfrak{p}}(E^t(M))$. Since $H^t_{\mathfrak{p}}(M)$ (and hence $N$) is such a quotient, we see that multiplication by $f$ is an isomorphism on $N$ for all $f \in R \setminus \mathfrak{p}$, and therefore $N = N_{\mathfrak{p}}$. But $(\Gamma_J(H^t_{\mathfrak{p}}(M)))_{\mathfrak{p}} = 0$, so $N_{\mathfrak{p}} = (H^t_{\mathfrak{p}}(M))_{\mathfrak{p}}$, which by hypothesis is an injective $R_{\mathfrak{p}}$-module and is supported only at $\mathfrak{p} R_{\mathfrak{p}}$. We conclude that $N = N_{\mathfrak{p}}$ is isomorphic to a direct sum of copies of $E_{R_{\mathfrak{p}}}(R_{\mathfrak{p}}/\mathfrak{p} R_{\mathfrak{p}})$; but $E_{R_{\mathfrak{p}}}(R_{\mathfrak{p}}/\mathfrak{p} R_{\mathfrak{p}}) \cong E(R/\mathfrak{p})$ as $R$-modules, so the lemma follows. \end{proof} \begin{prop}\label{inj hull is finite} \begin{enumerate}[(a)] \item Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, let $M$ be a holonomic $D(R,k)$-module, and let $t = \injdim_R M$. For any $\mathfrak{p} \in \Spec R$ such that $\mu^t(\mathfrak{p}, M) > 0$, the $D(R,k)$-module $E(R/\mathfrak{p})$ is holonomic. \item Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, let $M$ be a holonomic $D(R,k)$-module, let $S \subseteq R$ be a multiplicative subset, and let $t = \injdim_{S^{-1}R} S^{-1}M$. For any $S^{-1}\mathfrak{p} \in \Spec S^{-1}R$ such that $\mu^t(S^{-1}\mathfrak{p}, S^{-1}M) > 0$, the $D(S^{-1}R, k)$-module $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ is of finite length. \item Let $R$ be a noetherian regular domain of characteristic $p>0$, let $M$ be an $F$-finite $F$-module, and let $t = \injdim_R M$. 
For any $\mathfrak{p} \in \Spec(R)$ such that $\mu^t(\mathfrak{p}, M) > 0$, the $F$-module $E(R/\mathfrak{p})$ is $F$-finite. \end{enumerate} \end{prop} We observe that parts (a) and (b) remain true if $R$ is replaced with a \emph{polynomial} ring (see the proof of \cite[Theorem 4.4]{zhanginjdim}). \begin{proof} We prove part (b) first, and we begin by verifying the hypotheses of Lemma \ref{summands of top inj module} for $S^{-1}M$. The ring $S^{-1}R$ is a domain. Since $M$ is a $D$-module, it has finite injective dimension as an $R$-module by Proposition \ref{D-modules omnibus}(a), so $S^{-1}M$ has finite injective dimension as an $S^{-1}R$-module (used implicitly in the statement). For all $i \geq 0$ and for all $S^{-1}\mathfrak{p} \in \Spec S^{-1}R$, $H^i_{\mathfrak{p}}(M)$ is a holonomic $D$-module and so has finitely many associated primes; it follows that $S^{-1}(H^i_{\mathfrak{p}}(M)) \cong H^i_{S^{-1}\mathfrak{p}}(S^{-1}M)$ has finitely many associated primes as an $S^{-1}R$-module. All that remains to be checked is that, for all $i \geq 0$, $(H^i_{S^{-1}\mathfrak{p}}(S^{-1}M))_{S^{-1}\mathfrak{p}}$ is an injective $(S^{-1}R)_{S^{-1}\mathfrak{p}}$-module. Since the ring $(S^{-1}R)_{S^{-1}\mathfrak{p}}$ is simply $R_{\mathfrak{p}}$, and $(H^i_{S^{-1}\mathfrak{p}}(S^{-1}M))_{S^{-1}\mathfrak{p}} \cong H^i_{\mathfrak{p} R_{\mathfrak{p}}}(M_{\mathfrak{p}})$ as $R_{\mathfrak{p}}$-modules, this follows from Lemma \ref{localized local coh is injective}. Now let $S^{-1}\mathfrak{p} \in \Spec S^{-1}R$ be such that $\mu^t(S^{-1}\mathfrak{p}, S^{-1}M) > 0$. By the proof of Lemma \ref{summands of top inj module}, there is an ideal $J \subseteq S^{-1}R$ such that $N = H^t_{S^{-1}\mathfrak{p}}(S^{-1}M)/\Gamma_J(H^t_{S^{-1}\mathfrak{p}}(S^{-1}M))$ is isomorphic to a direct sum of copies of \[ E_{R_{\mathfrak{p}}}(R_{\mathfrak{p}}/\mathfrak{p} R_{\mathfrak{p}}) \cong E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p}) \] as $S^{-1}R$-modules and, in fact, as $D(S^{-1}R, k)$-modules. 
Since $H^t_{\mathfrak{p}}(M)$ is a holonomic (and hence finite length) $D$-module, its localization $H^t_{S^{-1}\mathfrak{p}}(S^{-1}M)$ (and hence the $D(S^{-1}R, k)$-module quotient $N$) is of finite length as a $D(S^{-1}R, k)$-module. But then $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ must be of finite length as well, completing the proof of part (b). If we do not localize (that is, if $S^{-1}R = R$, $S^{-1}\mathfrak{p} = \mathfrak{p} \subseteq R$, and $S^{-1}M = M$), then the same proof shows that $E(R/\mathfrak{p})$ is holonomic, proving part (a). Finally, we prove part (c). Since $M$ is an $F$-finite $F$-module, so are the local cohomology modules $H^i_I(M)$ for all $i \geq 0$ and all ideals $I \subseteq R$; what is more, $(H^i_I(M))_{\mathfrak{p}}$ is an $F_{R_{\mathfrak{p}}}$-finite $F_{R_{\mathfrak{p}}}$-module for all $\mathfrak{p} \in \Spec(R)$, so since $(H^i_{\mathfrak{p}}(M))_{\mathfrak{p}}$ is supported only at the maximal ideal $\mathfrak{p} R_{\mathfrak{p}}$, it is an injective $R_{\mathfrak{p}}$-module by Proposition \ref{F-modules omnibus}(a). Therefore, the hypotheses of Lemma \ref{summands of top inj module} are satisfied, so if $\mathfrak{p} \in \Spec(R)$ is such that $\mu^t(\mathfrak{p}, M) > 0$, then there is an ideal $J \subseteq R$ such that $N = H^t_{\mathfrak{p}}(M)/\Gamma_J(H^t_{\mathfrak{p}}(M))$ is isomorphic to a direct sum of copies of $E(R/\mathfrak{p})$. The $R$-submodule $\Gamma_J(H^t_{\mathfrak{p}}(M)) \subseteq H^t_{\mathfrak{p}}(M)$ is in fact an $F$-submodule, so the quotient $N$ is an $F$-finite $F$-module. It follows that $E(R/\mathfrak{p})$ must also be $F$-finite, completing the proof. \end{proof} \begin{remark}\label{holonomic is necessary} In the proof of Proposition \ref{inj hull is finite}, we used the fact that if $M$ is a holonomic $D$-module, any local cohomology module $H^i_I(M)$ has finitely many associated primes as an $R$-module, for the reason that it is itself a holonomic $D$-module. 
In fact, any finite length (indeed, finitely generated) $D$-module has finitely many associated primes by \cite[Theorem 2.4(c)]{LyubeznikFinitenessLocalCohomology}. However, we do not know whether $H^i_I(M)$ is of finite length as a $D$-module whenever $M$ is. This is the reason why we require the stronger hypothesis (in Proposition \ref{inj hull is finite} and in our Main Theorem) that $M$ be holonomic. \end{remark} Having shown that the minimal injective resolution of an $F$-finite $F$-module or (localization of a) holonomic $D$-module terminates in an object that is the direct sum of indecomposables with certain finiteness properties, our next task is to determine exactly which indecomposables have these finiteness properties. This we do in the following proposition, of which only the $D$-module parts are new. \begin{prop}\label{which inj hulls are finite} \begin{enumerate}[(a)] \item Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, and let $\mathfrak{p} \subseteq R$ be a prime ideal. The $D(R,k)$-module $E(R/\mathfrak{p})$ is holonomic if and only if $\hgt \mathfrak{p} \geq n-1$. \item Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, let $S \subseteq R$ be a multiplicative subset, and let $S^{-1} \mathfrak{p} \subseteq S^{-1}R$ be a prime ideal. The $D(S^{-1}R, k)$-module $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ is of finite length if and only if $S^{-1}\mathfrak{p}$ is contained in only finitely many distinct prime ideals of $S^{-1}R$. \item Let $(R, \mathfrak{m})$ be a regular local ring of characteristic $p>0$ and dimension $n$, and let $\mathfrak{p} \subseteq R$ be a prime ideal. The $F$-module $E(R/\mathfrak{p})$ is $F$-finite if and only if $\hgt \mathfrak{p} \geq n-1$. \item Let $R$ be a noetherian regular ring of characteristic $p>0$, and let $\mathfrak{p} \subseteq R$ be a prime ideal. 
The $F$-module $E(R/\mathfrak{p})$ is $F$-finite if and only if $\mathfrak{p}$ is contained in only finitely many distinct prime ideals of $R$. \end{enumerate} \end{prop} In part (a), the conclusion is different from the polynomial case. If $R$ is replaced with a polynomial ring, $E(R/\mathfrak{p})$ is holonomic if and only if $\mathfrak{p}$ is \emph{maximal}: see \cite[Propositions 4.2, 4.3]{zhanginjdim}. \begin{proof} We prove part (b) first, and we begin by considering the possible cases. Let $h$ denote the height of $S^{-1}\mathfrak{p}$. If $S^{-1}\mathfrak{p}$ is a maximal ideal of $S^{-1}R$, we must show that $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ is of finite length. If there exists a chain $S^{-1}\mathfrak{p} \subset S^{-1}\mathfrak{s} \subset S^{-1}\mathfrak{q}$ of proper inclusions of prime ideals, then since $S^{-1}R$ is a noetherian ring, it is well-known that there are infinitely many prime ideals lying strictly between $S^{-1}\mathfrak{p}$ and $S^{-1}\mathfrak{q}$. Therefore, in this case we must show that $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ is not of finite length (this is the last case we treat below). Since $S^{-1}R$ is regular and therefore catenary, the only remaining case is that in which $S^{-1}\mathfrak{p}$ is not maximal, but the only prime ideals properly containing it are maximal ideals of height $h+1$. In this case, we must show that $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ is of finite length if and only if there are only finitely many such maximal ideals. Suppose first that $S^{-1}\mathfrak{p}$ is a maximal ideal of $S^{-1}R$. Since $S^{-1}R$ is Gorenstein, by Proposition \ref{local coh omnibus}(d), $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p}) \cong H^h_{S^{-1}\mathfrak{p}}(S^{-1}R)$, which is a localization of the holonomic $D$-module $H^h_{\mathfrak{p}}(R)$ and is therefore of finite length as a $D(S^{-1}R, k)$-module by Proposition \ref{D-modules omnibus}(b,d). 
Now suppose that $S^{-1}\mathfrak{p}$ is not maximal, but that all maximal ideals containing it have height $h+1$. Since the minimal injective resolution $E^{\bullet}$ of $R$ can be identified with the Cousin complex of $R$ \cite[Theorem 5.4]{SharpCousinComplex}, all of whose differentials are direct sums of canonical localization maps, it is a complex of $D$-modules. If we localize $E^{\bullet}$ at $S$, we obtain the minimal injective resolution of $S^{-1}R$ as a module over itself, and this resolution is a complex of $D(S^{-1}R, k)$-modules. After applying $\Gamma_{S^{-1}\mathfrak{p}}$, we obtain a short exact sequence \[ 0 \rightarrow H^h_{S^{-1}\mathfrak{p}}(S^{-1}R) \rightarrow E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p}) \rightarrow \bigoplus_{\substack{S^{-1}\mathfrak{p} \subseteq S^{-1}\mathfrak{q} \\ \hgt S^{-1}\mathfrak{q} = h+1}} E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{q}) \rightarrow 0, \] which is an exact sequence of $D(S^{-1}R, k)$-modules. (The last map is surjective by the Hartshorne-Lichtenbaum vanishing theorem \cite[Theorem 8.2.1]{LCBookBrodmannSharp}.) Since $H^h_{\mathfrak{p}}(R)$ is a holonomic $D$-module, it has finite length as a $D$-module, and therefore its localization $H^h_{S^{-1}\mathfrak{p}}(S^{-1}R)$ has finite length as a $D(S^{-1}R, k)$-module. It follows that $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ is of finite length as a $D(S^{-1}R, k)$-module if and only if the third term in the displayed short exact sequence is. By the previous paragraph, each summand $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{q})$ is of finite length, since the $S^{-1}\mathfrak{q}$ are maximal ideals; therefore the sum is of finite length if and only if there are finitely many summands, as desired. Finally, we suppose that there exists a chain $S^{-1}\mathfrak{p} \subset S^{-1}\mathfrak{s} \subset S^{-1}\mathfrak{q}$ of proper inclusions of prime ideals in $S^{-1}R$. We claim that if such a chain exists, $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ cannot be of finite length. 
We may assume that the chain is saturated, from which it follows that $\hgt S^{-1}\mathfrak{q} = h+2$. If we localize at $S^{-1}\mathfrak{q}$, the ring $(S^{-1}R)_{S^{-1}\mathfrak{q}}$ is isomorphic to $R_{\mathfrak{q}}$, and $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p}) \cong E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})$ as $R_{\mathfrak{q}}$-modules. If $E_{S^{-1}R}(S^{-1}R/S^{-1}\mathfrak{p})$ were of finite length as a $D(S^{-1}R, k)$-module, its localization $E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})$ would be of finite length as a $D(R_{\mathfrak{q}}, k)$-module, so it suffices to prove this last statement false. We have therefore reduced the proof to the case where $S^{-1}R = R_{\mathfrak{q}}$ for some prime ideal $\mathfrak{q}$ and $\mathfrak{p} R_{\mathfrak{q}}$ is a prime ideal in $R_{\mathfrak{q}}$ of height $h = \dim R_{\mathfrak{q}} - 2$. Let $E^{\bullet} = E^{\bullet}_{R_{\mathfrak{q}}}(R_{\mathfrak{q}})$ be the minimal injective resolution of $R_{\mathfrak{q}}$. 
The complex $\Gamma_{\mathfrak{p} R_{\mathfrak{q}}}(E^{\bullet})$ takes the form \[ 0 \rightarrow E(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) \xrightarrow{\delta^h} \bigoplus_{\substack{\mathfrak{p} R_{\mathfrak{q}} \subseteq \mathfrak{s} R_{\mathfrak{q}} \\ \hgt \mathfrak{s} R_{\mathfrak{q}} = h+1}} E(R_{\mathfrak{q}}/\mathfrak{s} R_{\mathfrak{q}}) \xrightarrow{\delta^{h+1}} E(R_{\mathfrak{q}}/\mathfrak{q} R_{\mathfrak{q}}) \rightarrow 0, \] and gives rise to three short exact sequences \begin{align*} 0 \rightarrow H^h_{\mathfrak{p} R_{\mathfrak{q}}}(R_{\mathfrak{q}}) \rightarrow E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}}) \rightarrow \im \delta^h \rightarrow 0, \\ 0 \rightarrow \im \delta^h \rightarrow \ker \delta^{h+1} \rightarrow H^{h+1}_{\mathfrak{p} R_{\mathfrak{q}}}(R_{\mathfrak{q}}) \rightarrow 0,\\ 0 \rightarrow \ker \delta^{h+1} \rightarrow \bigoplus_{\substack{\mathfrak{p} R_{\mathfrak{q}} \subseteq \mathfrak{s} R_{\mathfrak{q}} \\ \hgt \mathfrak{s} R_{\mathfrak{q}} = h+1}} E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{s} R_{\mathfrak{q}}) \rightarrow E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{q} R_{\mathfrak{q}}) \rightarrow 0, \end{align*} where now the $\delta^j$ are the differentials in the complex $\Gamma_{\mathfrak{p} R_{\mathfrak{q}}}(E^{\bullet})$ (and the third sequence is exact by the Hartshorne-Lichtenbaum theorem). What is more, these are exact sequences of $D(R_{\mathfrak{q}}, k)$-modules, since they arise from localizations of the Cousin complex of $R$. 
The modules $H^h_{\mathfrak{p} R_{\mathfrak{q}}}(R_{\mathfrak{q}})$, $H^{h+1}_{\mathfrak{p} R_{\mathfrak{q}}}(R_{\mathfrak{q}})$, and $E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{q} R_{\mathfrak{q}})$ are localizations at $\mathfrak{q}$ of holonomic (hence finite length) $D$-modules ($H^h_{\mathfrak{p}}(R)$, $H^{h+1}_{\mathfrak{p}}(R)$, and $H^{h+2}_{\mathfrak{q}}(R)$ respectively), so all three are of finite length as $D(R_{\mathfrak{q}}, k)$-modules. Assume for the purposes of contradiction that $E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})$ is of finite length as a $D(R_{\mathfrak{q}}, k)$-module. Then we have the following chain of implications: since $H^h_{\mathfrak{p} R_{\mathfrak{q}}}(R_{\mathfrak{q}})$ and $E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{p} R_{\mathfrak{q}})$ are of finite length, so is $\im \delta^h$; since $\im \delta^h$ and $H^{h+1}_{\mathfrak{p} R_{\mathfrak{q}}}(R_{\mathfrak{q}})$ are of finite length, so is $\ker \delta^{h+1}$; since $\ker \delta^{h+1}$ and $E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{q} R_{\mathfrak{q}})$ are of finite length, so is $\oplus_{\mathfrak{p} R_{\mathfrak{q}} \subseteq \mathfrak{s} R_{\mathfrak{q}}, \hgt \mathfrak{s} R_{\mathfrak{q}} = h+1} E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{s} R_{\mathfrak{q}})$. This last statement is absurd, since there are infinitely many distinct summands $E_{R_{\mathfrak{q}}}(R_{\mathfrak{q}}/\mathfrak{s} R_{\mathfrak{q}})$. This contradiction completes the proof of part (b). If we do not localize (that is, if $S^{-1}R = R$ and $S^{-1}\mathfrak{p} = \mathfrak{p} \subseteq R$), then the same proof (using Proposition \ref{D-modules omnibus}(e)) shows that $E(R/\mathfrak{p})$ is holonomic if and only if $\mathfrak{p}$ is contained in only finitely many distinct prime ideals of $R$. Since $R$ is a local ring of dimension $n$, this condition is satisfied if and only if the height of $\mathfrak{p}$ is at least $n-1$, proving part (a). 
The possible cases in part (d) are the same as in part (b): we must show that $E(R/\mathfrak{p})$ is $F$-finite whenever $\mathfrak{p}$ is a maximal ideal (which, since $R$ is Gorenstein, follows at once from Proposition \ref{local coh omnibus}(c,d)); that $E(R/\mathfrak{p})$ is not $F$-finite whenever there exists a chain $\mathfrak{p} \subset \mathfrak{s} \subset \mathfrak{q}$ of proper inclusions of prime ideals (which is \cite[Proposition 3.2]{zhanginjdim}); and that in the only remaining case, where $\mathfrak{p}$ is not maximal but the only prime ideals properly containing it are maximal ideals of height $\hgt \mathfrak{p} + 1$, that $E(R/\mathfrak{p})$ is $F$-finite if and only if there are only finitely many such maximal ideals. This last case is \cite[Proposition 3.1]{zhanginjdim}, which finishes the proof of part (d). As part (c) is merely a special case of part (d), the proof is complete. \end{proof} \begin{remark}\label{non-holonomic localization} In the setting of Proposition \ref{which inj hulls are finite}(a), if $M$ is a holonomic $D$-module and $f \in R$, then $M_f$ is a holonomic $D$-module \cite[Theorem 3.4.1]{BjorkBookRIngDiffOperators}. It is known that $M_{\mathfrak{p}}$ need not be a holonomic $D$-module for all prime ideals $\mathfrak{p} \subseteq R$. We remark that the proposition provides many such examples: let $\mathfrak{p}$ be any prime ideal of height $h \leq n-2$, and consider the holonomic $D$-module $H^h_{\mathfrak{p}}(R)$. Its localization at $\mathfrak{p}$ is isomorphic to $E(R/\mathfrak{p})$, which is not a holonomic $D$-module by the proposition. \end{remark} We record separately the special cases of Proposition \ref{which inj hulls are finite} that we will use in the proof of our Main Theorem. This is where the pseudo-Jacobson property defined in section \ref{Pseudo-Jacobson rings} is used. 
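As a concrete instance of Remark \ref{non-holonomic localization} (a supplementary illustration, assuming $n \geq 3$), take $\mathfrak{p} = (x_1)$, a prime ideal of height $1 \leq n-2$. The \v{C}ech complex of the single element $x_1$ gives \[ H^1_{\mathfrak{p}}(R) \cong R_{x_1}/R, \] a holonomic $D$-module whose localization at $\mathfrak{p}$ is $E(R/\mathfrak{p})$; since $\hgt \mathfrak{p} = 1 < n-1$, Proposition \ref{which inj hulls are finite}(a) shows that this localization is not holonomic.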
\begin{cor}\label{special cases} \begin{enumerate}[(a)] \item Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero and $n \geq 2$. If $\mathfrak{q} \subseteq R$ is a prime ideal of height $h \geq 2$, $f \in \mathfrak{q} R_{\mathfrak{q}}$ is a nonzero element, and $\mathfrak{p} \subseteq \mathfrak{q}$ is a prime ideal of $R$ such that $(\mathfrak{p} R_{\mathfrak{q}})_f$ is a prime ideal of $(R_{\mathfrak{q}})_f$, then the $D((R_{\mathfrak{q}})_f, k)$-module $E_{(R_{\mathfrak{q}})_f}((R_{\mathfrak{q}})_f/(\mathfrak{p} R_{\mathfrak{q}})_f)$ has finite length if and only if $(\mathfrak{p} R_{\mathfrak{q}})_f$ is a maximal ideal in $(R_{\mathfrak{q}})_f$, that is, if and only if $\mathfrak{p}$ is a height $h-1$ prime ideal of $R$ contained in $\mathfrak{q}$ and not containing $f$. \item Let $(R, \mathfrak{m})$ be a regular local ring of characteristic $p>0$ and dimension $n \geq 2$. If $f \in \mathfrak{m}$ is a nonzero element and $\mathfrak{p} R_f \subseteq R_f$ is a prime ideal, the $F_{R_f}$-module $E_{R_f}(R_f/\mathfrak{p} R_f)$ is $F_{R_f}$-finite if and only if $\mathfrak{p} R_f$ is a maximal ideal in $R_f$, that is, if and only if $\mathfrak{p}$ is a height $n-1$ prime ideal of $R$ not containing $f$. \end{enumerate} \end{cor} \begin{proof} A regular local ring is a catenary domain, so by Proposition \ref{localization is pseudo-Jacobson}, the rings $R_f$ of part (b) and $(R_{\mathfrak{q}})_f$ of part (a) are pseudo-Jacobson, and in each of these rings all maximal ideals have the same height ($n-1$ in part (b) and $h-1$ in part (a)). By the pseudo-Jacobson property, every non-maximal prime ideal $\mathfrak{p} R_f$ of $R_f$ in part (b) (resp. every non-maximal prime ideal $(\mathfrak{p} R_{\mathfrak{q}})_f$ of $(R_{\mathfrak{q}})_f$ in part (a)) is contained in infinitely many distinct maximal ideals, so part (b) (resp. (a)) follows from Proposition \ref{which inj hulls are finite}(d) (resp. (b)). 
\end{proof} \section{Injective dimension of holonomic $D$-modules}\label{injdim over power series} In this section, we prove the characteristic-zero part (Theorem \ref{holonomic over power series}) of our main theorem. Most of the work in the proof is contained in the following proposition. \begin{prop}\label{injdim equals dim after localizing D-mod} Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, and let $M$ be a holonomic $D(R,k)$-module. Let $\mathfrak{q}$ be a prime ideal of $R$ belonging to $\Supp_R M$, and let $f \in \mathfrak{q} R_{\mathfrak{q}}$ be an element that does not belong to any minimal prime of $M_{\mathfrak{q}}$. Then \[ \injdim_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f = \dim \Supp_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f. \] \end{prop} \begin{proof} Recall that if $S \subseteq R$ is a multiplicative subset, then \[ \injdim_{S^{-1}R} S^{-1}M \leq \dim \Supp_{S^{-1}R} S^{-1}M \] by Theorem \ref{inj dim bound for arbitrary localization}. We will use this fact repeatedly below. We proceed by induction on $\dim \Supp_{R_{\mathfrak{q}}} M_{\mathfrak{q}}$. Observe first that if $\dim R_{\mathfrak{q}} < 2$, then either $\dim \Supp_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f = 0$ and the statement is immediate by the previous paragraph, or no such $f$ as in the statement exists. Therefore we may assume that $\hgt \mathfrak{q} \geq 2$ for all prime ideals $\mathfrak{q}$ we encounter. Let $\mathfrak{q}$ be a minimal element of $\Supp_R M$. The localization $M_{\mathfrak{q}}$ has zero-dimensional support over $R_{\mathfrak{q}}$, so it is an injective $R_{\mathfrak{q}}$-module; the further localization $(M_{\mathfrak{q}})_f$ for any $f \in \mathfrak{q} R_{\mathfrak{q}}$ is then an injective $(R_{\mathfrak{q}})_f$-module, establishing the base case. Now suppose that $l \geq 0$ and that the displayed equality holds for all $\mathfrak{p} \in \Supp_R M$ such that $\dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}} \leq l$. 
Fix $\mathfrak{q} \in \Supp_R M$ such that $\dim \Supp_{R_{\mathfrak{q}}} M_{\mathfrak{q}} = l + 1$, and let $f \in \mathfrak{q} R_{\mathfrak{q}}$ be an element that does not belong to any minimal prime of $M_{\mathfrak{q}}$. Choose $\mathfrak{p} R_{\mathfrak{q}} \in \Supp_{R_{\mathfrak{q}}} M_{\mathfrak{q}}$ such that $\dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}} = \dim \Supp_{R_{\mathfrak{q}}} M_{\mathfrak{q}} - 1 = l$ and $f \in \mathfrak{p} R_{\mathfrak{q}}$. Then $f$ does not belong to any minimal prime of $M_{\mathfrak{p}}$, so $\dim \Supp_{(R_{\mathfrak{p}})_f} (M_{\mathfrak{p}})_f = \dim \Supp_{R_{\mathfrak{p}}} M_{\mathfrak{p}} - 1 = l - 1$ ($R_{\mathfrak{p}}$ is a local ring) and by the inductive hypothesis, \[ \injdim_{(R_{\mathfrak{p}})_f} (M_{\mathfrak{p}})_f = \dim \Supp_{(R_{\mathfrak{p}})_f} (M_{\mathfrak{p}})_f = l-1. \] Since $(R_{\mathfrak{p}})_f$ is a localization of $(R_{\mathfrak{q}})_f$, we obtain the chain of inequalities \[ l-1 = \injdim_{(R_{\mathfrak{p}})_f} (M_{\mathfrak{p}})_f \leq \injdim_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f \leq \dim \Supp_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f = l. \] It remains only to rule out the case $\injdim_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f = l-1$. Since $\injdim_{(R_{\mathfrak{p}})_f} (M_{\mathfrak{p}})_f = l-1$, there is a prime ideal $(\mathfrak{s} R_{\mathfrak{p}})_f$ of $(R_{\mathfrak{p}})_f$ such that \[ \mu^{l-1}((\mathfrak{s} R_{\mathfrak{p}})_f, (M_{\mathfrak{p}})_f) \, (= \mu^{l-1}((\mathfrak{s} R_{\mathfrak{q}})_f, (M_{\mathfrak{q}})_f)) > 0. \] Since $f \in \mathfrak{p} R_{\mathfrak{q}} \setminus \mathfrak{s} R_{\mathfrak{q}}$ and $\mathfrak{p} R_{\mathfrak{q}} \subset \mathfrak{q} R_{\mathfrak{q}}$, we have $\hgt \mathfrak{s} \leq \hgt \mathfrak{q} - 2$. It follows that $(\mathfrak{s} R_{\mathfrak{q}})_f$ is not a maximal ideal of $(R_{\mathfrak{q}})_f$. 
Since $\mu^{l-1}((\mathfrak{s} R_{\mathfrak{q}})_f, (M_{\mathfrak{q}})_f) > 0$, we may invoke Proposition \ref{inj hull is finite}(b) and Corollary \ref{special cases}(a), which here imply that $\injdim_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f$ cannot equal $l-1$, completing the proof. \end{proof} \begin{thm}\label{holonomic over power series} Let $R = k[[x_1, \ldots, x_n]]$ where $k$ is a field of characteristic zero, and let $M$ be a holonomic $D(R,k)$-module. Then \[ \injdim_R M \geq \dim \Supp_R M - 1. \] \end{thm} \begin{proof} We may assume that $\dim \Supp_R M \geq 2$, as otherwise there is nothing to prove. Since $M$ is holonomic, it has finitely many associated primes as an $R$-module. By prime avoidance, we can choose a nonzero element $f \in \mathfrak{m}$ that does not belong to any minimal prime of $M$. We have $\dim \Supp_{R_f} M_f = \dim \Supp_R M - 1$. By Proposition \ref{injdim equals dim after localizing D-mod} (applied to $\mathfrak{q} = \mathfrak{m}$), we have $\injdim_{R_f} M_f = \dim \Supp_{R_f} M_f$. Since $\injdim_R M \geq \injdim_{R_f} M_f$, the theorem follows. \end{proof} \begin{remark}\label{bound is sharp} The lower bound in Theorem \ref{holonomic over power series} is the best possible. Indeed, let $\mathfrak{p} \subseteq R$ be a prime ideal of height $n-1$, and let $E(R/\mathfrak{p})$ be the injective hull of $R/\mathfrak{p}$. By Proposition \ref{which inj hulls are finite}(a), $E(R/\mathfrak{p})$ is a holonomic $D$-module, yet we have $\injdim_R E(R/\mathfrak{p}) = 0$ and $\dim \Supp_R E(R/\mathfrak{p}) = 1$. As shown by Hellus in \cite[Example 2.9]{hellus}, this example can be realized as a local cohomology module of $R$. Take $n=3$ and let $I = (x_1x_2, x_1x_3)$ and $\mathfrak{p} = (x_2, x_3)$. Then $\hgt \mathfrak{p} = n-1 = 2$ and the holonomic $D$-module $H^2_I(R)$ is isomorphic to $E(R/\mathfrak{p})$, and therefore has injective dimension equal to one less than the dimension of its support. 
\end{remark} \section{Injective dimension of $F$-finite $F$-modules}\label{injdim of F-modules} In this section, we prove the positive-characteristic part (Theorem \ref{F-finite over regular char p}) of our main theorem. We begin with a counterpart to Proposition \ref{injdim equals dim after localizing D-mod}. \begin{prop}\label{injdim equals dim after localizing F-mod} Let $(R, \mathfrak{m})$ be a regular local ring of characteristic $p > 0$, and let $M$ be an $F$-finite $F$-module. Let $\mathfrak{q}$ be a prime ideal of $R$ belonging to $\Supp_R M$, and let $f \in \mathfrak{q} R_{\mathfrak{q}}$ be an element that does not belong to any minimal prime of $M_{\mathfrak{q}}$. Then \[ \injdim_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f = \dim \Supp_{(R_{\mathfrak{q}})_f} (M_{\mathfrak{q}})_f. \] \end{prop} \begin{proof} If $S \subseteq R$ is a multiplicative subset, then \[ \injdim_{S^{-1}R} S^{-1}M \leq \dim \Supp_{S^{-1}R} S^{-1}M \] by Proposition \ref{F-modules omnibus}(a,c). The proof is now word-for-word the same as the proof of Proposition \ref{injdim equals dim after localizing D-mod}, except that we use part (b) of Corollary \ref{special cases} instead of part (a). \end{proof} \begin{thm}\label{F-finite over regular char p} Let $R$ be a noetherian regular ring of characteristic $p > 0$, and let $M$ be an $F$-finite $F$-module. Then \[ \injdim_R M \geq \dim \Supp_R M - 1. \] \end{thm} For the same reasons as in Remark \ref{bound is sharp}, the lower bound in Theorem \ref{F-finite over regular char p} is the best possible. \begin{proof} We may assume that $\dim \Supp_R M \geq 2$, as otherwise there is nothing to prove. 
We may also assume that $(R, \mathfrak{m})$ is local; if the local case is known, we may choose a maximal ideal $\mathfrak{m}$ in $\Supp_R M$ such that $\dim \Supp_R M = \dim \Supp_{R_{\mathfrak{m}}} M_{\mathfrak{m}}$, and we have \[ \dim \Supp_R M - 1 = \dim \Supp_{R_{\mathfrak{m}}} M_{\mathfrak{m}} - 1 \leq \injdim_{R_{\mathfrak{m}}} M_{\mathfrak{m}} \leq \injdim_R M, \] so that the global case follows. Since $M$ is $F$-finite, it has finitely many associated primes as an $R$-module. By prime avoidance, we can choose a nonzero element $f \in \mathfrak{m}$ that does not belong to any minimal prime of $M$. We have $\dim \Supp_{R_f} M_f = \dim \Supp_R M - 1$. By Proposition \ref{injdim equals dim after localizing F-mod} (applied to $\mathfrak{q} = \mathfrak{m}$), we have $\injdim_{R_f} M_f = \dim \Supp_{R_f} M_f$. Since $\injdim_R M \geq \injdim_{R_f} M_f$, the theorem follows. \end{proof} Let $(R,\mathfrak{m})$ be a regular local ring of characteristic $p > 0$, and let $M$ be an $F$-finite $F$-module. Set $n=\dim R$, $d=\dim \Supp_R M$, and $t=\injdim_R M$. We know by Theorem \ref{F-finite over regular char p} that $t \in \{d-1, d\}$. We also know by Propositions \ref{inj hull is finite}(c) and \ref{which inj hulls are finite}(c) that if $\mathfrak{p} \subseteq R$ is a prime ideal such that $\mu^t(\mathfrak{p}, M) > 0$, then $\hgt \mathfrak{p} \in \{n-1, n\}$. It is easy to see that if $t = d$, then $\mu^t(\mathfrak{p}, M) > 0$ if and only if $\mathfrak{p} = \mathfrak{m}$: indeed, if $\mu^t(\mathfrak{p}, M) > 0$ for some non-maximal prime ideal $\mathfrak{p}$ in the support of $M$, we can localize at $\mathfrak{p}$, obtaining an $F_{R_{\mathfrak{p}}}$-module $M_{\mathfrak{p}}$ whose injective dimension is still $d$ but whose support has dimension strictly less than $d$, in contradiction to Proposition \ref{F-modules omnibus}(a). \begin{question}\label{last inj res term only supported at maximal ideal} Does the converse hold? 
That is, if $\mu^t(\mathfrak{p}, M) > 0$ only for $\mathfrak{p} = \mathfrak{m}$, must we have $t = d$? \end{question} One can also ask the analogous question for holonomic $D$-modules over formal power series rings. A positive answer to Question \ref{last inj res term only supported at maximal ideal} would impose strong constraints on the form of the minimal injective resolution $E^{\bullet}(M)$. In particular, it follows from an easy induction argument that we would have $\dim \Supp_R E^i(M) = \dim \Supp_R M - i$ for $0 \leq i \leq t$. \bibliographystyle{plain}
\section{Introduction} As stated in \cite{sen1971choice}, it is well-known how to characterize the deterministic choice functions that can be represented by a unique strict preference relation. However, in economics, agents' choices often display some element of randomness. Instead of observing a mapping from each menu to an element of the menu, the analyst may observe a mapping from each menu to a probability distribution over the menu. Analogously, the analyst may wish to represent this \textit{stochastic choice function} with a probability distribution over strict preference relations, also known as a \textit{random utility} (RU) representation. \cite{block1959random} and \cite{falmagne1978representation} showed that a single axiom characterizes the existence of such a representation.\footnote{See \hyperref[sec:ru]{\textbf{Section 2}} for the formal result.} \par Agents also frequently make choices over time. Given \textit{dynamic} nondeterministic choice data, the analyst may similarly wish to microfound the data with a multiperiod random utility representation. Depending on the primitive, there are multiple variants of this model. One variant treats menus as endogenous: in any given period, the agent chooses a lottery over the set of pairs of immediate consumption and a menu of lotteries for the next period. Given dynamic choice data of this type, \cite{frick2019dynamic} obtained an axiomatization of \textit{dynamic random expected utility}, as well as sharper sub-models in which agents are forward-looking.\par However, there are also settings in which menus may be exogenously selected, such as research studies in which the researchers present menus to the subjects. There are also settings in which the choice set is finite, ruling out lotteries. In particular, the analyst may wonder when this variant of dynamic choice data can be modeled by a stochastic process of random preferences. 
The main result of this paper is a characterization of these representations, which I name \textit{stochastic utility} (SU). The rest of the paper proceeds as follows. \hyperref[sec:ru]{\textbf{Section 2}} provides an overview of RU and its axiomatization. \hyperref[sec:su]{\textbf{Section 3}} formally defines SU and states its axiomatization. \hyperref[sec:cor]{\textbf{Section 4}} provides some corollaries to the main result. \hyperref[sec:app]{\textbf{Section 5}} contains some relevant combinatorics results and all proofs. \section{Random Utility} \label{sec:ru} Before introducing SU, I provide a brief overview of RU and its axiomatization. Let $X$ be a finite choice set, and let $\mathcal{M}:=2^{X}\backslash\{\emptyset\}$ be the set of all nonempty menus. Given an (exogenously chosen) menu $A \in \mathcal{M}$, the agent makes a choice $x \in A$. The agent's choice data for all nonempty menus is encoded in the following primitive: \begin{definition} A \textbf{stochastic choice function} is a mapping $\rho: \mathcal{M} \rightarrow \Delta(X)$ satisfying $\text{supp }\rho(A) \subseteq A$ for all $A \in \mathcal{M}$. \end{definition} Stochastic choice functions must satisfy $\text{supp }\rho(A) \subseteq A$ because the agent can only pick from choices within the menu. As in \cite{SC}, I use $\rho$ to denote a stochastic choice function and $\rho(x,A)$ to denote the probability that $\rho(A)$ assigns to $x$. Let $P$ be the set of strict preference relations over $X$, and let $C(x,A):=\big\{\succ \in P: x \succ A\backslash\{x\}\big\}$.\footnote{$P$ can also be viewed as the set of permutations of $X$. The notation $x \succ A\backslash\{x\}$ denotes $x \succ y \ \forall \ y \in A\backslash\{x\}$.} Observe that $\{C(x,A)\}_{x \in A}$ form a partition of $P$. \begin{definition} $\mu \in \Delta(P)$ is a \textbf{random utility (RU) representation} of $\rho$ if $\rho(x,A)=\mu\big(C(x,A)\big)$ for all $x \in A \in \mathcal{M}$. 
\end{definition} In the deterministic case, it is well-known that a choice function can be represented by a strict preference relation if and only if it satisfies Sen's $\alpha$ condition, as shown in \cite{sen1971choice}. I will now state the analogous axiomatization of RU, first for $|X|\leq 3$ and second for any finite $X$. \begin{axiom} $\rho$ satisfies \textbf{regularity} if $\rho(x,A) \geq \rho(x,B)$ for all $x \in A \subseteq B \in \mathcal{M}$. \end{axiom} As stated in \cite{SC}, regularity serves as the stochastic analog of Sen's $\alpha$. Regularity is necessary for RU because $C(x,A) \supseteq C(x,B)$ for all $x \in A \subseteq B$. In particular, when the choice set is of size three or less, regularity characterizes RU. \begin{lemma}[\textbf{\cite{block1959random}}] Suppose $|X| \leq 3$. $\rho$ has a RU representation if and only if it satisfies regularity. \end{lemma} When $|X| \leq 3$, \cite{SC} shows that the RU representation is also unique. For higher cardinalities, a stronger axiom is needed to characterize RU. \begin{definition}[\textbf{\cite{chambers2016revealed}}] For any $A \subsetneq X$ and $x \in A^C$, define their \textbf{Block-Marschak sum} to be \begin{align*} M_{x,A}:=\sum_{B \supseteq A^C} (-1)^{|B\backslash A^C|} \rho(x,B) \end{align*} \end{definition} \begin{axiom}[\textbf{\cite{chambers2016revealed}}] $\rho$ satisfies \textbf{Block-Marschak nonnegativity} if $M_{x,A} \geq 0$ for all $x \in A^C \neq \emptyset$. \end{axiom} \begin{lemma}[\textbf{\cite{block1959random}; \cite{falmagne1978representation}}] $\rho$ has a RU representation if and only if it satisfies Block-Marschak nonnegativity. \end{lemma} With these two axiomatizations in hand, I turn to the dynamic model. \section{Stochastic Utility} \label{sec:su} \subsection{Preliminaries} I begin by generalizing the agent's static problem to two periods, denoted $t=1,2$, with finite choice sets $X_t$. 
As before, define $\mathcal{M}_t:=2^{X_t}\backslash\{\emptyset\}$. In period $t$, the agent is offered an (exogenously chosen) menu $A_t \in \mathcal{M}_t$ and makes a choice $x_t \in A_t$. Importantly, in the dynamic case, the analyst \textit{sequentially} observes the agent's choices. We can thus encode the agent's choice data as follows. As before, the analyst observes the (first-period) stochastic choice function $\rho_1$. As in \cite{SC}, define $\mathcal{H}:=\{(A_1,x_1) \in \mathcal{M}_1 \times X_1: \rho_1(x_1,A_1)>0\}$ as the set of \textit{observable} choice histories. In addition to $\rho_1$, the analyst also observes a family of period-2 stochastic choice functions $\{\rho_2(\cdot|h)\}_{h \in \mathcal{H}}$, indexed by choice histories. Thus, the primitive is the vector $\boldsymbol{\rho}^2:=\big(\rho_1,\{\rho_2(\cdot|h)\}_{h \in \mathcal{H}}\big)$. Since the analyst does not have access to data describing the agent's period-2 choices after making zero-probability period-1 choices, WLOG let $\rho_2(\cdot,A_2|A_1,x_1) \in \Delta(A_2)$ be an arbitrarily chosen probability distribution when $\rho_1(x_1,A_1)=0$.\footnote{This is to ensure that expressions like ``$\rho_2(x_2,A_2|A_1,x_1)\rho_1(x_1,A_1)=0$" make sense when $\rho_1(x_1,A_1)=0$, so that the forthcoming axioms are well-defined. As long as $\rho_2(\cdot,A_2|A_1,x_1)$ is a genuine probability distribution over $A_2$, its values will not affect the axiomatization.} As before, let $P_t$ be the set of strict preference relations over $X_t$. 
For $x_t \in A_t$, define $C_t(x_t,A_t):=\big\{\succ_t \ \in P_t: x_t \succ_t A_t\backslash\{x_t\}\big\}$ and $C(x_t,A_t):=C_t(x_t,A_t) \times P_{-t}$.\footnote{Let $\{t,-t\}=\{1,2\}$.} \begin{definition}[\cite{SC}] $\mu \in \Delta(P_1 \times P_2)$ is a (two-period) \textbf{stochastic utility (SU) representation} of $\boldsymbol{\rho}^2$ if $\rho_1(x_1,A_1)=\mu\big(C(x_1,A_1)\big)$ for all $x_1 \in A_1 \in \mathcal{M}_1$ and $\rho_2(x_2,A_2|A_1,x_1)=\mu\big(C(x_2,A_2)|C(x_1,A_1)\big)$ for all $x_2 \in A_2 \in \mathcal{M}_2$ and $(A_1,x_1) \in \mathcal{H}$. \end{definition} If $\mu$ is a SU representation of $\boldsymbol{\rho}^2$, note that its marginal over $P_1$ is a RU representation of $\rho_1$ and its conditional $\mu\big(\cdot|C(x_1,A_1)\big)$ is a RU representation of $\rho_2(\cdot|A_1,x_1)$ for $(A_1,x_1) \in \mathcal{H}$. \begin{definition} For each $t=1,2$ and $x_t \in A_t^C \neq \emptyset$, define their \textbf{joint Block-Marschak sum} to be \begin{align*} M_{x_1,A_1;x_2,A_2}:=\sum_{B_2 \supseteq A_2^C} \sum_{B_1 \supseteq A_1^C} (-1)^{|B_1\backslash A_1^C|+|B_2\backslash A_2^C|} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1) \end{align*} and their \textbf{joint upper edge set}\footnote{Joint upper edge sets are a generalization of what \cite{chambers2016revealed} define as \textit{upper contour sets}.} to be \begin{align*} E(x_1,A_1;x_2,A_2):=\{(\succ_1,\succ_2) \in P_1 \times P_2: A_t \succ_t x_t \succ_t A_t^C\backslash \{x_t\}, \ t=1,2\} \end{align*} \end{definition} \subsection{Axiomatization} \begin{axiom} $\boldsymbol{\rho}^2$ satisfies \textbf{stochastic Block-Marschak nonnegativity} if $M_{x_1,A_1;x_2,A_2} \geq 0$ for each $t=1,2$ and $x_t \in A_t^C \neq \emptyset$. \end{axiom} As its name suggests, \textbf{Axiom 3} serves as the two-period analog of \textbf{Axiom 2}. Unlike the static case, \textbf{Axiom 3} is not sufficient, and another axiom that enforces consistency between periods is needed to complete the characterization. 
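As a purely computational illustration (not part of the formal development), the joint Block-Marschak sums can be evaluated directly from choice data by inclusion--exclusion over supersets of the complements $A_t^C$. The sketch below assumes $\rho_1$ and $\rho_2$ are stored as dictionaries keyed by (choice, menu) and (choice, menu, period-1 menu, period-1 choice), respectively; all function names are illustrative.

```python
from itertools import combinations

def supersets(base, universe):
    """Yield all supersets of `base` within `universe`, as frozensets."""
    rest = [x for x in universe if x not in base]
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            yield base | frozenset(extra)

def joint_bm_sum(rho1, rho2, X1, X2, x1, A1, x2, A2):
    """Compute M_{x1,A1;x2,A2}: the sum over B1 >= A1^c and B2 >= A2^c of
    (-1)^{|B1 \\ A1^c| + |B2 \\ A2^c|} * rho2(x2,B2|B1,x1) * rho1(x1,B1)."""
    A1c = frozenset(X1) - frozenset(A1)
    A2c = frozenset(X2) - frozenset(A2)
    total = 0.0
    for B1 in supersets(A1c, X1):
        for B2 in supersets(A2c, X2):
            sign = (-1) ** (len(B1 - A1c) + len(B2 - A2c))
            total += sign * rho2[(x2, B2, B1, x1)] * rho1[(x1, B1)]
    return total
```

For instance, with $X_1=\{a,b\}$, $X_2=\{c,d\}$ and choice frequencies generated by the uniform product measure on $P_1 \times P_2$, every such sum is nonnegative, as the axiom requires.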
\begin{axiom} $\boldsymbol{\rho}^2$ satisfies \textbf{marginal consistency}\footnote{In \cite{SC}, this is stated as the \textit{LTP axiom}.} if \begin{align*} P(x_2,A_2;A_1):=\sum_{x_1 \in A_1} \rho_2(x_2,A_2|A_1,x_1)\rho_1(x_1,A_1) \end{align*} is invariant in $A_1$ for all $x_2 \in A_2 \in \mathcal{M}_2$. \end{axiom} The main result of this paper is that \textbf{Axioms 3 and 4} characterize two-period SU: \begin{theorem} $\boldsymbol{\rho}^2$ has a SU representation if and only if it satisfies stochastic Block-Marschak nonnegativity and marginal consistency. \end{theorem} The full proof of \textbf{Theorem 1} is in \textbf{Section 5}, but I will provide a sketch here. First, I will state several helpful propositions. \textbf{Propositions 1 and 2}\footnote{\textbf{Proposition 1} is the two-period analog of \textbf{Lemma 7.4.I} in \cite{chambers2016revealed}, while \textbf{Proposition 2} is a partial generalization of \textbf{Lemma 7.4.II} in the same book.} serve as useful identities for the joint Block-Marschak sums, and \textbf{Proposition 3}\footnote{This is the two-period analog of \textbf{Proposition 7.3.} in \cite{chambers2016revealed}.} characterizes SU representations as probability measures that assign each joint upper edge set its corresponding joint Block-Marschak sum. \begin{proposition} For each $t=1,2$ and $x_t \in A_t^C \neq \emptyset$ \begin{align*} \rho_2(x_2,A_2^C|A_1^C,x_1)\rho_1(x_1,A_1^C)=\sum_{B_2 \subseteq A_2} \sum_{B_1 \subseteq A_1} M_{x_1,B_1;x_2,B_2} \end{align*} \end{proposition} \begin{proposition} For any $x_1 \in A_1^C \neq \emptyset$ and $\emptyset \subsetneq A_2 \subsetneq X_2$ \begin{align*} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2}=\sum_{y_2 \in A_2} M_{x_1,A_1;y_2,A_2\backslash\{y_2\}} \end{align*} \end{proposition} \begin{proposition} $\mu$ is a SU representation of $\boldsymbol{\rho}^2$ if and only if $\mu\big(E(x_1,A_1;x_2,A_2)\big)=M_{x_1,A_1;x_2,A_2}$ for each $t=1,2$ and $x_t \in A_t^C \neq \emptyset$. 
\end{proposition} To prove the forwards direction of \textbf{Theorem 1}, note that \textbf{Proposition 3} immediately implies that stochastic Block-Marschak nonnegativity is necessary, since probability measures assign nonnegative values to all events.\footnote{Analogous reasoning provides intuition for why Block-Marschak nonnegativity is necessary for static RU.} Marginal consistency is necessary because of the Law of Total Probability. With \textbf{Proposition 3} in hand, it follows that to prove the backwards direction, it suffices to construct a probability measure $\mu \in \Delta(P_1 \times P_2)$ that assigns each joint upper edge set its corresponding joint Block-Marschak sum. I do this in \hyperref[sec:app]{\textbf{Section 5}} via the following steps:\footnote{The proof strategy for this direction is adapted from the proof of the static case in \cite{chambers2016revealed}.} \begin{enumerate} \item Using marginal consistency, prove the period-1 equivalent of \textbf{Proposition 2}. \item Using stochastic Block-Marschak nonnegativity, recursively define a ``partial measure" $\nu$. $\nu$ is ``partial" in the following sense: it is not defined on all subsets of $P_1 \times P_2$, but rather on pairs of subsets called $t$-\textit{cylinders}. \item Verify that $\nu$ satisfies two crucial additivity properties over the pairs of $t$-cylinders. \item Use both additivity properties to define a probability measure $\mu$ that is an extension of $\nu$, and verify that $\mu$ assigns each joint upper edge set its corresponding joint Block-Marschak sum. \end{enumerate} \subsection{Axiomatization for $\boldsymbol{|X_1|=|X_2|=3}$} At lower choice set cardinalities, we can restate stochastic Block-Marschak nonnegativity as a simpler axiom. 
\begin{axiom} $\boldsymbol{\rho}^2$ satisfies \textbf{stochastic regularity} if for each $t=1,2$ and $x_t \in A_t \subseteq B_t$, \begin{align*} \frac{\rho_1(x_1,A_1)}{\rho_1(x_1,B_1)} \geq \frac{\rho_2(x_2,A_2|B_1,x_1)-\rho_2(x_2,B_2|B_1,x_1)}{\rho_2(x_2,A_2|A_1,x_1)-\rho_2(x_2,B_2|A_1,x_1)} \end{align*} \end{axiom} Stochastic regularity is necessary because $x_t \in A_t \subseteq B_t$ for each $t=1,2$ implies \begin{align*} C(x_1,A_1) \cap \bigg(C(x_2,A_2)\backslash C(x_2,B_2)\bigg) \supseteq C(x_1,B_1) \cap \bigg(C(x_2,A_2)\backslash C(x_2,B_2)\bigg) \end{align*} When $|X_1|=|X_2|=3$ and marginal consistency holds, it is also sufficient. \begin{proposition} Suppose $|X_1|=|X_2|=3$. $\boldsymbol{\rho}^2$ has a unique SU representation if and only if it satisfies stochastic regularity and marginal consistency. \end{proposition} \section{Corollaries} \label{sec:cor} \subsection{SU with Full Support} As shown by \cite{fishburn1998stochastic}, for arbitrary finite $X$, RU representations need not be unique. This also implies that, in general, SU representations need not be unique.\footnote{To see this, let $\mu_1,\mu_1' \in \Delta(P_1)$ be distinct RU representations of $\rho_1$, and let $\mu_2 \in \Delta(P_2)$. Let $\mu=\mu_1 \times \mu_2$, $\mu'=\mu_1' \times \mu_2$: it follows that $\mu\big(C(x_2,A_2)|C(x_1,A_1)\big)=\mu_2\big(C_2(x_2,A_2)\big)=\mu'\big(C(x_2,A_2)|C(x_1,A_1)\big)$.} In some applications, it may be desirable to represent $\boldsymbol{\rho}^2$ using a SU representation with full support over $P_1 \times P_2$.\footnote{The existence of such a representation is equivalent to the existence of a distribution over $\mathbb{R}^{X_1} \times \mathbb{R}^{X_2}$ with positive density.} \begin{definition} $\mu \in \Delta(P_1 \times P_2)$ has \textbf{full support} if $\mu(\succ_1,\succ_2)>0$ for all $(\succ_1,\succ_2) \in P_1 \times P_2$. \end{definition} It turns out that characterizing this case requires only a slightly stronger version of \textbf{Axiom 3}. 
\begin{axiom} $\boldsymbol{\rho}^2$ satisfies \textbf{stochastic Block-Marschak positivity} if $M_{x_1,A_1;x_2,A_2}>0$ for each $t=1,2$ and $x_t \in A_t^C \neq \emptyset$. \end{axiom} \begin{corollary} $\boldsymbol{\rho}^2$ has a SU representation with full support if and only if it satisfies stochastic Block-Marschak positivity and marginal consistency.\footnote{Analogously, $\rho$ has a RU representation with full support if and only if it satisfies the strict version of \textbf{Axiom 2}. The proof proceeds analogously to the proof of this corollary.} \end{corollary} Observe that stochastic Block-Marschak positivity is necessary because probability measures with full support assign strictly positive probability to all nonempty events. \subsection{More Periods} \textcolor{red}{I am currently working on extending \textbf{Theorem 1} to more than two periods with multiperiod versions of both axioms.} Fix $n>2$. The primitive is now the vector \begin{align*} \boldsymbol{\rho}^n:=\big(\rho_1,\{\rho_2(\cdot|h_1)\}_{h_1 \in \mathcal{H}_1},\ldots,\{\rho_n(\cdot|h_{n-1})\}_{h_{n-1} \in \mathcal{H}_{n-1}}\big) \end{align*} where $\mathcal{H}_1:=\{(A_1,x_1): \rho_1(x_1,A_1)>0\}$ and $\mathcal{H}_t:=\{(A_t,x_t;h_{t-1}) \in \mathcal{M}_t \times X_t \times \mathcal{H}_{t-1}: \rho_t(x_t,A_t|h_{t-1})>0\}$ for all $t>1$. As before, WLOG let $\rho_t(\cdot,A_t|A_{t-1},x_{t-1};\cdots;A_1,x_1) \in \Delta(A_t)$ be an arbitrary probability distribution if $\rho_{t'}(x_{t'},A_{t'}|A_{t'-1},x_{t'-1};\cdots;A_1,x_1)=0$ for some $t'<t$. Define $P_t$ and $C_t(x_t,A_t)$ as before, and let $C(x_t,A_t):=C_t(x_t,A_t) \times \bigtimes_{s \neq t} P_s$. Given $t>1$ and $h_t=(A_t,x_t;h_{t-1})$, define $C(h_t):=C(x_t,A_t) \cap C(h_{t-1})$. 
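To make the recursive bookkeeping concrete, the sets $\mathcal{H}_t$ of observable histories can be built inductively: a history is retained exactly when every choice along it has positive probability given the choices preceding it. The sketch below is a minimal illustration under the assumption that the conditional choice functions are stored as nested dictionaries; all names are illustrative.

```python
from itertools import combinations

def nonempty_menus(X):
    """All nonempty menus of a finite choice set X, as frozensets."""
    return [frozenset(c) for r in range(1, len(X) + 1)
            for c in combinations(X, r)]

def observable_histories(rho, menus_by_period, n):
    """Build H_1, ..., H_n recursively. `rho[t]` maps (x_t, A_t, h_{t-1})
    to a probability, where h_{t-1} is a tuple of (menu, choice) pairs
    (the empty tuple for t = 1). A history (A_t, x_t; h_{t-1}) is
    observable iff h_{t-1} is observable and rho_t(x_t, A_t | h_{t-1}) > 0."""
    H = {0: [()]}  # the empty history precedes period 1
    for t in range(1, n + 1):
        H[t] = [h + ((A, x),)
                for h in H[t - 1]
                for A in menus_by_period[t]
                for x in A
                if rho[t][(x, A, h)] > 0]
    return H
```

Zero-probability branches are pruned at every period, so the arbitrary distributions assigned after unobservable histories never enter the construction.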
\begin{definition}[\textbf{\cite{SC}}] $\mu \in \Delta\big(\bigtimes_{t=1}^n P_t\big)$ is an ($n$-period) \textbf{stochastic utility (SU) representation} of $\boldsymbol{\rho}^n$ if $\rho_1(x_1,A_1)=\mu\big(C(x_1,A_1)\big)$ for all $x_1 \in A_1 \in \mathcal{M}_1$ and $\rho_t(x_t,A_t|h_{t-1})=\mu\big(C(x_t,A_t)|C(h_{t-1})\big)$ for all $x_t \in A_t \in \mathcal{M}_t$ and $h_{t-1} \in \mathcal{H}_{t-1}$. \end{definition} Now, we generalize the axioms. Let $(\boldsymbol{x},\boldsymbol{A}):=(x_t,A_t)_{t=1}^n$ and $(\boldsymbol{x}_{-t},\boldsymbol{A}_{-t}):=(x_{t'},A_{t'})_{t'=1,t'\neq t}^n$. Let $\boldsymbol{A}^C:=(A_t^C)_{t=1}^n$, and say $\boldsymbol{B}\geq\boldsymbol{A} \iff B_t \supseteq A_t$ for each $t=1,\ldots,n$. Let $j(\boldsymbol{x},\boldsymbol{A}):=\rho_1(x_1,A_1)\prod_{t=2}^n \rho_t(x_t,A_t|A_{t-1},x_{t-1};\ldots;A_1,x_1)$. \begin{axiom} $\boldsymbol{\rho}^n$ satisfies ($n$-period) \textbf{stochastic Block-Marschak nonnegativity} if \begin{align*} M_{(\boldsymbol{x},\boldsymbol{A})}:=\sum_{\boldsymbol{B} \geq \boldsymbol{A}^C} (-1)^{\sum_{t=1}^n |B_t\backslash A_t^C|} j(\boldsymbol{x},\boldsymbol{B}) \geq 0 \end{align*} for all $(\boldsymbol{x},\boldsymbol{A})$ satisfying $x_t \in A_t^C \neq \emptyset$ for each $t=1,\ldots,n$. \end{axiom} \begin{axiom} $\boldsymbol{\rho}^n$ satisfies ($n$-period) \textbf{marginal consistency} if for any $(\boldsymbol{x},\boldsymbol{A})$ and $t=1,\ldots,n-1$, \begin{align*} P(\boldsymbol{x}_{-t},\boldsymbol{A}_{-t};A_t):=\sum_{x_t \in A_t} j(\boldsymbol{x}_{-t},\boldsymbol{A}_{-t};x_t,A_t) \end{align*} is invariant in $A_t$. \end{axiom} \begin{corollary}[\textcolor{red}{\textbf{Conjecture}}] $\boldsymbol{\rho}^n$ has a SU representation if and only if it satisfies stochastic Block-Marschak nonnegativity and marginal consistency. \end{corollary} As before, to prove \textbf{Corollary 2} it will be helpful to have the following generalization of \textbf{Proposition 3} in hand. 
\begin{corollary}[\textcolor{red}{\textbf{Conjecture}}] $\mu$ is a SU representation of $\boldsymbol{\rho}^n$ if and only if $\mu\big(E(\boldsymbol{x},\boldsymbol{A})\big)=M_{(\boldsymbol{x},\boldsymbol{A})}$ for all $(\boldsymbol{x},\boldsymbol{A})$ satisfying $x_t \in A_t^C \neq \emptyset$ for each $t=1,\ldots,n$, where \begin{align*} E(\boldsymbol{x},\boldsymbol{A}):=\big\{(\succ_1,\ldots,\succ_n) \in \bigtimes_{t=1}^n P_t: A_t \succ_t x_t \succ_t A_t^C\backslash\{x_t\}, \ t=1,\ldots,n\big\} \end{align*} \end{corollary} \section{Appendix} \label{sec:app} \subsection{The Möbius Inversion} Let $(L,\leq)$ be a finite, partially ordered set (poset). \begin{definition}[\textbf{\cite{van2001course}, 25.2}] The \textbf{Möbius function} $m_L: L^2 \rightarrow \mathbb{Z}$ is \begin{align*} m_L(a,b)=\begin{cases} 1 & a=b \\ 0 & a \nleq b \\ -\sum_{a \leq c<b} m_L(a,c) & a<b \end{cases} \end{align*} \end{definition} \begin{lemma}[\textbf{\cite{van2001course}, 25.5}] Given a function $f:L \rightarrow \mathbb{R}$, define $F(a):=\sum_{b \geq a} f(b)$. Then \begin{align*} f(a)=\sum_{b \geq a} m_L(a,b)F(b) \end{align*} This is known as the \textbf{Möbius inversion}. \end{lemma} I close this section with two more lemmas that will be used in the proofs that follow. \begin{lemma}[\textbf{\cite{van2001course}, 25.1}] Fix finite $X$ and let $L=2^X$, $\leq=\subseteq$. Then \begin{align*} m_L(A,B)=\begin{cases} (-1)^{|B|-|A|} & A \subseteq B \\ 0 & \text{else} \end{cases} \end{align*} \end{lemma} \begin{lemma}[\textbf{\cite{godsil2018introduction}, 3.1}] Let $L,S$ be posets with respective Möbius functions $m_L,m_S$. Then \begin{align*} m_{L \times S}\big((a_L,a_S),(b_L,b_S)\big)=m_L(a_L,b_L)m_S(a_S,b_S) \end{align*} \end{lemma} \subsection{Proof of Proposition 1} \begin{proof} Let $L=2^{X_1} \times 2^{X_2}$ and $(A_1,A_2) \leq (B_1,B_2) \iff A_1 \subseteq B_1, \ A_2 \subseteq B_2$. Then $(L,\leq)$ is the (finite) product poset of $(2^{X_1},\subseteq)$ and $(2^{X_2},\subseteq)$. 
By \textbf{Lemmas 4 and 5}, its Möbius function is \begin{align*} m_L\big((A_1,A_2),(B_1,B_2)\big)=\begin{cases} (-1)^{|B_1|-|A_1|+|B_2|-|A_2|} & A_1 \subseteq B_1, \ A_2 \subseteq B_2 \\ 0 & \text{else} \end{cases} \end{align*} Now, for each $t=1,2$, fix any $x_t \in A_t^C \neq \emptyset$ and define $f: L \rightarrow \mathbb{R}$ as \begin{align*} f(B_1,B_2):=(-1)^{|B_1|-|A_1^C|+|B_2|-|A_2^C|} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1) \end{align*} and $F: L \rightarrow \mathbb{R}$ as \begin{align*} F(D_1,D_2)=\sum_{B_2 \supseteq D_2} \sum_{B_1 \supseteq D_1} f(B_1,B_2) \end{align*} By \textbf{Lemma 3}, \begin{align*} f(D_1,D_2)=\sum_{B_2 \supseteq D_2} \sum_{B_1 \supseteq D_1} (-1)^{|B_1|-|D_1|+|B_2|-|D_2|} F(B_1,B_2) \\ \implies f(A_1^C,A_2^C)=\rho_2(x_2,A_2^C|A_1^C,x_1)\rho_1(x_1,A_1^C)=\sum_{B_2 \supseteq A_2^C} \sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|+|B_2|-|A_2^C|} F(B_1,B_2) \end{align*} To see that \begin{align*} \sum_{B_2 \supseteq A_2^C} \sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|+|B_2|-|A_2^C|} F(B_1,B_2)=\sum_{D_2 \subseteq A_2} \sum_{D_1 \subseteq A_1} M_{x_1,D_1;x_2,D_2} \end{align*} we can match terms as follows. Fix any $D_2 \subseteq A_2$ and $D_1 \subseteq A_1$. Then \begin{align*} (-1)^{|D_1^C|-|A_1^C|+|D_2^C|-|A_2^C|} F(D_1^C,D_2^C)=\sum_{B_2 \supseteq D_2^C} \sum_{B_1 \supseteq D_1^C} (-1)^{|D_1^C|-|A_1^C|+|D_2^C|-|A_2^C|} f(B_1,B_2) \\ =\sum_{B_2 \supseteq D_2^C} \sum_{B_1 \supseteq D_1^C} (-1)^{|B_1|-|D_1^C|+|B_2|-|D_2^C|} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1)=M_{x_1,D_1;x_2,D_2} \end{align*} where the second equality follows by observing that $(-1)^n=(-1)^{-n}$. \end{proof} \subsection{Proof of Proposition 2} \begin{proof} Fix any $x_1 \in A_1^C \neq \emptyset$ and $\emptyset \subsetneq A_2 \subsetneq X_2$. I will use the notation $\rho_2(D_2,B_2|B_1,x_1):=\sum_{x_2 \in D_2} \rho_2(x_2,B_2|B_1,x_1)$. 
We can write \begin{align*} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2}=\sum_{x_2 \in A_2^C} \bigg(\sum_{B_1 \supseteq A_1^C} \sum_{B_2 \supseteq A_2^C} (-1)^{|B_2|-|A_2^C|+|B_1|-|A_1^C|} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1)\bigg) \\ =\sum_{B_1 \supseteq A_1^C} \rho_1(x_1,B_1) (-1)^{|B_1|-|A_1^C|}\bigg(\sum_{B_2 \supseteq A_2^C} (-1)^{|B_2|-|A_2^C|} \rho_2(A_2^C,B_2|B_1,x_1)\bigg) \end{align*} and \begin{align*} \sum_{y_2 \in A_2} M_{x_1,A_1;y_2,A_2\backslash\{y_2\}}=\sum_{B_1 \supseteq A_1^C} \rho_1(x_1,B_1) (-1)^{|B_1|-|A_1^C|} \bigg(\sum_{y_2 \in A_2}\sum_{B_2 \supseteq A_2^C\cup\{y_2\}} (-1)^{|B_2|-|A_2^C|-1} \rho_2(y_2,B_2|B_1,x_1)\bigg) \end{align*} Thus, to complete the proof it suffices to show \begin{align*} \sum_{B_2 \supseteq A_2^C} (-1)^{|B_2|-|A_2^C|} \rho_2(A_2^C,B_2|B_1,x_1)=\sum_{y_2 \in A_2}\sum_{B_2 \supseteq A_2^C\cup\{y_2\}} (-1)^{|B_2|-|A_2^C|-1} \rho_2(y_2,B_2|B_1,x_1) \end{align*} To see this, observe that \begin{align*} \sum_{B_2 \supseteq A_2^C} (-1)^{|B_2|-|A_2^C|} \rho_2(A_2^C,B_2|B_1,x_1) \\ =\rho_2(A_2^C,A_2^C|B_1,x_1)-\sum_{B_2=A_2^C \cup \{a_2\}} \rho_2(A_2^C,A_2^C\cup\{a_2\}|B_1,x_1)+\ldots+(-1)^{|A_2|} \rho_2(A_2^C,X_2|B_1,x_1) \\ =1-\sum_{B_2=A_2^C \cup \{a_2\}} \big(1-\rho_2(a_2,A_2^C\cup\{a_2\}|B_1,x_1)\big)+\ldots+(-1)^{|A_2|} \big(1-\rho_2(A_2,X_2|B_1,x_1)\big) \end{align*} Since there are $\binom{|A_2|}{k}$ sets of the form $B_2=A_2^C\cup\{a_2^1,\ldots,a_2^k\}$ and for $|A_2| \geq 1$, \begin{align*} \sum_{k=0}^{|A_2|} (-1)^k \binom{|A_2|}{k}=0 \end{align*} we can separate out an alternating sum of binomial coefficients: \begin{align*} =\sum_{B_2=A_2^C \cup \{a_2\}} \rho_2(a_2,A_2^C\cup\{a_2\}|B_1,x_1)-\sum_{B_2=A_2^C \cup \{a_2^1,a_2^2\}} \rho_2(\{a_2^1,a_2^2\},A_2^C\cup\{a_2^1,a_2^2\}|B_1,x_1) \\ +\ldots+(-1)^{|A_2|+1}\rho_2(A_2,X_2|B_1,x_1) \end{align*} Observe that there is a bijection between nonempty $D_2 \subseteq A_2$ and terms in this sum: \begin{align*} D_2 \xleftrightarrow{} (-1)^{|D_2|+1} \rho_2(D_2,A_2^C \cup 
D_2|B_1,x_1) \end{align*} Similarly, there is a bijection between nonempty $D_2 \subseteq A_2$ and terms in the following sum \begin{align*} \sum_{y_2 \in A_2}\sum_{B_2 \supseteq A_2^C\cup\{y_2\}} (-1)^{|B_2|-|A_2^C|-1} \rho_2(y_2,B_2|B_1,x_1) \end{align*} given by \begin{align*} D_2 \xleftrightarrow{} \sum_{y_2 \in D_2} (-1)^{|A_2^C\cup D_2|-|A_2^C|-1} \rho_2(y_2,A_2^C\cup D_2|B_1,x_1) \\ =(-1)^{|D_2|+1} \rho_2(D_2,A_2^C \cup D_2|B_1,x_1) \end{align*} Since both sums consist of precisely the same terms, they are equal. \end{proof} \subsection{Proof of Proposition 3} \begin{proof} Forwards direction: suppose $\mu$ is a SU representation of $\boldsymbol{\rho}^2$. For each $t=1,2$, fix any $x_t \in A_t^C \neq \emptyset$. Since $x_t \succ_t A_t^C\backslash\{x_t\}$ if and only if $B_t^C \succ_t x_t \succ_t B_t\backslash\{x_t\}$ for some $B_t \supseteq A_t^C$, \begin{align*} C(x_1,A_1^C) \cap C(x_2,A_2^C)=\bigcup_{B_2 \supseteq A_2^C} \bigcup_{B_1 \supseteq A_1^C} E(x_1,B_1^C;x_2,B_2^C) \end{align*} Furthermore, this union is disjoint, so \begin{align*} \rho_2(x_2,A_2^C|A_1^C,x_1)\rho_1(x_1,A_1^C)=\sum_{B_2 \supseteq A_2^C} \sum_{B_1 \supseteq A_1^C} \mu\big(E(x_1,B_1^C;x_2,B_2^C)\big) \end{align*} By \textbf{Lemmas 4 and 5}, $m\big((A_1,A_2),(B_1,B_2)\big)=(-1)^{|B_1|-|A_1|+|B_2|-|A_2|}$. By \textbf{Lemma 3} with $f(B_1,B_2)=\mu\big(E(x_1,B_1^C;x_2,B_2^C)\big)$ and $F(A_1,A_2)=\rho_2(x_2,A_2|A_1,x_1)\rho_1(x_1,A_1)$, we get \begin{align*} \mu\big(E(x_1,A_1;x_2,A_2)\big)=\sum_{B_2 \supseteq A_2^C} \sum_{B_1 \supseteq A_1^C} (-1)^{|B_2|-|A_2^C|+|B_1|-|A_1^C|} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1)=M_{x_1,A_1;x_2,A_2} \end{align*} as desired.\footnote{The proof strategy for this direction is adapted from \cite{SC}.} \\ \\ Backwards direction: suppose there exists $\mu \in \Delta(P_1 \times P_2)$ satisfying $\mu\big(E(x_1,A_1;x_2,A_2)\big)=M_{x_1,A_1;x_2,A_2}$ for all $t=1,2$ and $x_t \in A_t^C \neq \emptyset$, and fix any $y_t \in D_t \in \mathcal{M}_t$. 
As before, observe that $y_t \succ_t D_t\backslash\{y_t\}$ if and only if $B_t \succ_t y_t \succ_t B_t^C\backslash\{y_t\}$ for some $B_t \subseteq D_t^C$, so \begin{align*} C(y_1,D_1) \cap C(y_2,D_2)=\bigcup_{B_2 \subseteq D_2^C} \bigcup_{B_1 \subseteq D_1^C} E(y_1,B_1;y_2,B_2) \end{align*} Furthermore, this union is disjoint, so \begin{align*} \mu\big(C(y_1,D_1) \cap C(y_2,D_2)\big)=\sum_{B_2 \subseteq D_2^C} \sum_{B_1 \subseteq D_1^C} M_{y_1,B_1;y_2,B_2}=\rho_2(y_2,D_2|D_1,y_1)\rho_1(y_1,D_1) \end{align*} where the first equality follows because $D_t \neq \emptyset \implies B_t \neq X_t$ (so the hypothesis on $\mu$ applies), and the second equality follows from \textbf{Proposition 1}. Since \begin{align*} \mu\big(C(y_1,D_1)\big)=\sum_{z_2 \in D_2} \mu\big(C(y_1,D_1) \cap C(z_2,D_2)\big)=\rho_1(y_1,D_1) \end{align*} and \begin{align*} \mu\big(C(y_2,D_2)|C(y_1,D_1)\big)=\frac{\mu(C(y_1,D_1) \cap C(y_2,D_2))}{\mu(C(y_1,D_1))}=\rho_2(y_2,D_2|D_1,y_1) \end{align*} we conclude that $\mu$ is a SU representation of $\boldsymbol{\rho}^2$. \end{proof} \subsection{Proof of Theorem 1 (Backwards Direction)} \begin{proof} Suppose $\boldsymbol{\rho}^2$ satisfies stochastic Block-Marschak nonnegativity and marginal consistency. As outlined in \textbf{Section 3}, the proof rests on the following series of \textbf{Claims}, whose proofs are included in the forthcoming subsections. \begin{claim} For any $x_2 \in A_2^C \neq \emptyset$ and $\emptyset \subsetneq A_1 \subsetneq X_1$, \begin{align*} \sum_{x_1 \in A_1^C} M_{x_1,A_1;x_2,A_2}=\sum_{y_1 \in A_1} M_{y_1,A_1\backslash\{y_1\};x_2,A_2} \end{align*} \end{claim} \textbf{Claim 1} is the first-period analog of \textbf{Proposition 2} and follows from a similar argument by using marginal consistency. Now, I define the $t$-cylinders. 
\begin{definition} Given a $\boldsymbol{k}$\textbf{-sequence} $(x_t^1,\ldots,x_t^k)$ of distinct elements of $X_t$, its $\boldsymbol{t}$\textbf{-cylinder}\footnote{This definition is the two-period analog of \cite{chambers2016revealed}'s definition of cylinders.} is \begin{align*} I_{(x_t^1,\ldots,x_t^k)}=\big\{\succ_t \ \in P_t: x_t^1 \succ_t \cdots \succ_t x_t^k \succ_t X_t\backslash \{x_t^1,\ldots,x_t^k\}\big\} \end{align*} \end{definition} Given a menu $A_t$, let $\pi(A_t)$ denote the set of permutations of $A_t$. Let $\mathcal{I}_t(k)=\{I_{(x_t^1,\ldots,x_t^k)}: (x_t^1,\ldots,x_t^k) \in \pi(A_t), A_t \in \mathcal{M}_t, |A_t|=k\}$ be the set of all $t$-cylinders induced by $k$-sequences, and let $\mathcal{I}_t=\bigcup_{k=1}^{|X_t|} \mathcal{I}_t(k)$. Observe that $\mathcal{I}_t$ contains all singletons, since \begin{align*} I_{(x_t^1,\ldots,x_t^{|X_t|})}=\{\succ_t\} \iff x_t^1 \succ_t \cdots \succ_t x_t^{|X_t|} \end{align*} Next, I recursively define a function $\nu: \mathcal{I}_1 \times \mathcal{I}_2 \rightarrow \mathbb{R}_{\geq 0}$.\footnote{Again, my definition of $\nu$ is the two-period analog of \cite{chambers2016revealed}, (7.4).} Define \begin{align*} \nu\big(I_{x_1} \times I_{x_2}\big):=M_{x_1,\emptyset;x_2,\emptyset}=\rho_2(x_2,X_2|X_1,x_1)\rho_1(x_1,X_1) \end{align*} Now, let $i,j \in \{1,2\}$ be distinct. 
For any $1\leq k<|X_j|$ and $(x_j^1,\ldots,x_j^k,x_j^{k+1})$, let $A_j=\{x_j^1,\ldots,x_j^k\}$ and define \begin{align*} \nu\big(I_{x_i} \times I_{(x_j^1,\ldots,x_j^k,x_j^{k+1})}\big):=\begin{cases} 0 & \sum_{\tau_j \in \pi(A_j)} \nu(I_{x_i} \times I_{\tau_j})=0 \\ \frac{\nu(I_{x_i} \times I_{(x_j^1,\ldots,x_j^k)})M_{x_i,\emptyset;x_j^{k+1},A_j}}{\sum_{\tau_j \in \pi(A_j)} \nu(I_{x_i} \times I_{\tau_j})} & \text{else} \end{cases} \end{align*} Similarly, for any $1\leq k<|X_1|$, $1\leq \ell<|X_2|$ and $(x_1^1,\ldots,x_1^k,x_1^{k+1})$, $(x_2^1,\ldots,x_2^\ell,x_2^{\ell+1})$, let $A_1=\{x_1^1,\ldots,x_1^k\}$, $A_2=\{x_2^1,\ldots,x_2^\ell\}$ and define \begin{align*} \nu\big(I_{(x_1^1,\ldots,x_1^k,x_1^{k+1})} \times I_{(x_2^1,\ldots,x_2^\ell,x_2^{\ell+1})}\big):=\begin{cases} 0 & \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu(I_{\tau_1} \times I_{\tau_2})=0 \\ \frac{\nu(I_{(x_1^1,\ldots,x_1^k)} \times I_{(x_2^1,\ldots,x_2^\ell)})M_{x_1^{k+1},A_1;x_2^{\ell+1},A_2}}{\sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu(I_{\tau_1} \times I_{\tau_2})} & \text{else} \end{cases} \end{align*} \begin{definition} For any $0 \leq k<|X_1|$, $0 \leq \ell <|X_2|$, the \textbf{first additive property} $\boldsymbol{p_1(k,\ell)}$ holds if for all $A_1,A_2$ s.t. $|A_1|=k,|A_2|=\ell$ and all $x_t \in A_t^C$ \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=M_{x_1,A_1;x_2,A_2} \end{align*} For any $0<k\leq|X_1|$, $0<\ell \leq|X_2|$, the \textbf{second additive property} $\boldsymbol{p_2(k,\ell)}$ holds if for all $A_1,A_2$ s.t. 
$|A_1|=k,|A_2|=\ell$ and all $\tau_t \in \pi(A_t)$ \begin{align*} \sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=\nu\big(I_{\tau_1} \times I_{\tau_2}\big) \end{align*} \end{definition} Observe that $\bigcup_{\tau_1 \in \pi(A_1)} \bigcup_{\tau_2 \in \pi(A_2)} (I_{\tau_1,x_1} \times I_{\tau_2,x_2})=E(x_1,A_1;x_2,A_2)$ and $\bigcup_{x_1 \in A_1^C} \bigcup_{x_2 \in A_2^C} (I_{\tau_1,x_1} \times I_{\tau_2,x_2})=I_{\tau_1} \times I_{\tau_2}$ and these are disjoint unions, so these additive properties are necessary.\footnote{These additive properties are the two-period analogs of \cite{chambers2016revealed}, (7.2) and (7.3), respectively.} \begin{claim} $p_1(k,\ell)$ holds for all $0 \leq k<|X_1|$, $0 \leq \ell<|X_2|$. \end{claim} \begin{claim} $p_2(k,\ell)$ holds for all $0<k \leq |X_1|$, $0<\ell \leq |X_2|$. \end{claim} With these additive properties in hand, I am ready to define the candidate SU representation. Given $(\succ_1,\succ_2) \in P_1 \times P_2$, write $\succ_t$ as $x_t^1 \succ_t \cdots \succ_t x_t^{|X_t|}$ and define $\mu: 2^{P_1 \times P_2} \rightarrow \mathbb{R}$ as\footnote{This definition is the two-period analog of \cite{chambers2016revealed}'s definition of ``$\nu$."} \begin{align*} \mu\big(\{(\succ_1,\succ_2)\}\big)=\nu\big(I_{(x_1^1,\ldots,x_1^{|X_1|})} \times I_{(x_2^1,\ldots,x_2^{|X_2|})}\big) \\ \mu\big(S\big)=\sum_{(\succ_1,\succ_2) \in S} \mu\big(\{(\succ_1,\succ_2)\}\big) \end{align*} \begin{claim} $\mu$ is a probability measure. \end{claim} \begin{claim} $\mu=\nu$ on $\mathcal{I}_1 \times \mathcal{I}_2$. \end{claim} \textbf{Claim 5} shows that $\mu$ is an extension of $\nu$. Thus, I can leverage \textbf{Claim 2} as follows. Recall from \textbf{Proposition 3} that to show that $\mu$ is a SU representation of $\boldsymbol{\rho}^2$, it suffices to show $\mu\big(E(x_1,A_1;x_2,A_2)\big)=M_{x_1,A_1;x_2,A_2}$ for all $t=1,2$ and $x_t \in A_t^C \neq \emptyset$. 
For each $t=1,2$, fix any $x_t \in A_t^C \neq \emptyset$ (note this implies that $|A_t|<|X_t|$): then \begin{align*} \mu\big(E(x_1,A_1;x_2,A_2)\big)=\sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \mu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big) \\ =\sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=M_{x_1,A_1;x_2,A_2} \end{align*} where the penultimate equality follows because $\mu=\nu$ on $\mathcal{I}_1 \times \mathcal{I}_2$, and the last equality follows from \textbf{Claim 2}. \subsubsection{Proof of Claim 1} \begin{proof} We can write \begin{align*} \sum_{x_1 \in A_1^C} M_{x_1,A_1;x_2,A_2}=\sum_{B_2 \supseteq A_2^C} (-1)^{|B_2|-|A_2^C|}\Bigg(\sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|} \bigg(\sum_{x_1 \in A_1^C}\rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1)\bigg)\Bigg) \end{align*} and \begin{align*} \sum_{y_1 \in A_1} M_{y_1,A_1\backslash\{y_1\};x_2,A_2}=\sum_{B_2 \supseteq A_2^C} (-1)^{|B_2|-|A_2^C|} \Bigg(\sum_{y_1 \in A_1}\sum_{B_1 \supseteq A_1^C\cup\{y_1\}} (-1)^{|B_1|-|A_1^C|-1} \rho_2(x_2,B_2|B_1,y_1)\rho_1(y_1,B_1)\Bigg) \end{align*} Since marginal consistency is satisfied, we can write \begin{align*} \sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|} \bigg(\sum_{x_1 \in A_1^C}\rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1)\bigg) \\ =\sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|} \bigg(P(x_2,B_2)-\sum_{x_1 \in B_1\backslash A_1^C} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1)\bigg) \\ =P(x_2,B_2) \sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|}+ \sum_{B_1 \supsetneq A_1^C} (-1)^{|B_1|-|A_1^C|+1} \sum_{x_1 \in B_1\backslash A_1^C} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1) \end{align*} Again observing that \begin{align*} \sum_{B_1 \supseteq A_1^C} (-1)^{|B_1|-|A_1^C|}=\sum_{k=0}^{|A_1|} (-1)^k \binom{|A_1|}{k}=0 \end{align*} it thus suffices to show that \begin{align*} \sum_{B_1 \supsetneq A_1^C} (-1)^{|B_1|-|A_1^C|+1} \sum_{x_1 \in B_1\backslash A_1^C} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1) \\ =\sum_{y_1 \in 
A_1}\sum_{B_1 \supseteq A_1^C\cup\{y_1\}} (-1)^{|B_1|-|A_1^C|-1} \rho_2(x_2,B_2|B_1,y_1)\rho_1(y_1,B_1) \end{align*} which immediately follows from matching terms. \end{proof} Note that \textbf{Proposition 2} and \textbf{Claim 1} together imply that for any $\emptyset \subsetneq A_t \subsetneq X_t$, \begin{align*} \sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2}=\sum_{x_1 \in A_1^C} \sum_{y_2 \in A_2} M_{x_1,A_1;y_2,A_2\backslash\{y_2\}}=\sum_{y_2 \in A_2} \sum_{x_1 \in A_1^C} M_{x_1,A_1;y_2,A_2\backslash\{y_2\}} \\ =\sum_{y_2 \in A_2} \sum_{y_1 \in A_1} M_{y_1,A_1\backslash\{y_1\};y_2,A_2\backslash\{y_2\}} \end{align*} \subsubsection{Proof of Claim 2} \begin{proof} I will prove this via induction. \\ \\ \textit{Base case}: fix $A_1=A_2=\emptyset$ (the only sets with $|A_1|=|A_2|=0$) and any $x_t \in X_t$. Then, by definition, \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=\nu\big(I_{x_1} \times I_{x_2}\big)=M_{x_1,\emptyset;x_2,\emptyset} \end{align*} \textit{First inductive step}: Fix $A_i=\emptyset$, $x_i \in X_i$, and $A_j=\{x_j^1,\ldots,x_j^k\}$ for any $k>0$ and observe that \begin{align*} \sum_{\tau_j \in \pi(A_j)} \nu\big(I_{x_i} \times I_{\tau_j}\big)=\sum_{y_j \in A_j} \bigg(\sum_{(y_j^1,\ldots,y_j^{k-1}) \in \pi(A_j\backslash\{y_j\})} \nu\big(I_{x_i} \times I_{(y_j^1,\ldots,y_j^{k-1},y_j)}\big)\bigg) \\ =\sum_{y_j \in A_j} M_{x_i,\emptyset;y_j,A_j\backslash\{y_j\}}=\sum_{x_j \in A_j^C} M_{x_i,\emptyset;x_j,A_j} \end{align*} where the first equality follows because permuting $A_j$ is equivalent to picking the last element and permuting the remaining $k-1$ elements, the second equality follows from the inductive hypothesis (which is ``$p_1(0,k-1)$ holds" if $i=1,j=2$ and ``$p_1(k-1,0)$ holds" otherwise), and the third equality follows from \textbf{Proposition 2} and \textbf{Claim 1}. 
There are two cases: \begin{enumerate} \item Suppose \begin{align*} \sum_{\tau_j \in \pi(A_j)} \nu\big(I_{x_i} \times I_{\tau_j}\big)=0 \end{align*} Since stochastic Block-Marschak nonnegativity holds by assumption, $M_{x_i,\emptyset;x_j,A_j}=0$ for each $x_j \in A_j^C$. Fix any $x_j \in A_j^C$: then we have \begin{align*} \sum_{\tau_i \in \pi(A_i)} \sum_{\tau_j \in \pi(A_j)} \nu\big(I_{\tau_i,x_i} \times I_{\tau_j,x_j}\big)=\sum_{\tau_j \in \pi(A_j)} \nu\big(I_{x_i} \times I_{\tau_j,x_j}\big)=0=M_{x_i,\emptyset;x_j,A_j} \end{align*} where the penultimate equality follows by definition of $\nu$. Thus, $p_1(0_i,k_j)$ holds. \item Suppose \begin{align*} \sum_{\tau_j \in \pi(A_j)} \nu\big(I_{x_i} \times I_{\tau_j}\big)>0 \end{align*} Again by definition of $\nu$, we have \begin{align*} \sum_{\tau_i \in \pi(A_i)} \sum_{\tau_j \in \pi(A_j)} \nu\big(I_{\tau_i,x_i} \times I_{\tau_j,x_j}\big)=\sum_{\tau_j \in \pi(A_j)} \nu\big(I_{x_i} \times I_{\tau_j,x_j}\big) \\ =\sum_{\tau_j \in \pi(A_j)} \bigg(\frac{\nu(I_{x_i} \times I_{\tau_j})}{\sum_{\alpha_j \in \pi(A_j)} \nu(I_{x_i} \times I_{\alpha_j})}M_{x_i,\emptyset;x_j,A_j}\bigg)=M_{x_i,\emptyset;x_j,A_j} \end{align*} Thus, $p_1(0_i,k_j)$ holds. \end{enumerate} \textit{Second inductive step}: This step proceeds similarly to the first inductive step. For any $0<k<|X_1|$ and $0<\ell<|X_2|$, fix $A_1=\{x_1^1,\ldots,x_1^k\}$, $A_2=\{x_2^1,\ldots,x_2^\ell\}$, and $x_t \in A_t^C$. Our inductive hypothesis is that $p_1(k-1,\ell-1)$ holds. 
Again, observe that \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1} \times I_{\tau_2}\big) \\ =\sum_{y_1 \in A_1} \sum_{y_2 \in A_2} \bigg(\sum_{(y_1^1,\ldots,y_1^{k-1}) \in \pi(A_1\backslash\{y_1\})} \sum_{(y_2^1,\ldots,y_2^{\ell-1}) \in \pi(A_2\backslash\{y_2\})} \nu\big(I_{(y_1^1,\ldots,y_1^{k-1},y_1)} \times I_{(y_2^1,\ldots,y_2^{\ell-1},y_2)}\big)\bigg) \\ =\sum_{y_1 \in A_1} \sum_{y_2 \in A_2} M_{y_1,A_1\backslash\{y_1\};y_2,A_2\backslash\{y_2\}}=\sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2} \end{align*} where the penultimate equality follows from the inductive hypothesis, and the last equality follows from \textbf{Proposition 2} and \textbf{Claim 1}. Again, there are two cases: \begin{enumerate} \item Suppose \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1} \times I_{\tau_2}\big)=0 \end{align*} Then weak Block-Marschak positivity implies $M_{x_1,A_1;x_2,A_2}=0$ for each $x_t \in A_t^C$. Fix any $x_t \in A_t^C$: by definition of $\nu$, we have \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=0=M_{x_1,A_1;x_2,A_2} \end{align*} and $p_1(k,\ell)$ holds. \item Suppose \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1} \times I_{\tau_2}\big)>0 \end{align*} Again by definition of $\nu$, it follows that \begin{align*} \sum_{\tau_1 \in \pi(A_1)} \sum_{\tau_2 \in \pi(A_2)} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=M_{x_1,A_1;x_2,A_2} \end{align*} and $p_1(k,\ell)$ holds. \end{enumerate} \end{proof} \subsubsection{Proof of Claim 3} \begin{proof} Fix any $0<k \leq |X_1|$ and $0<\ell \leq |X_2|$, $A_1=\{x_1^1,\ldots,x_1^k\}$ and $A_2=\{x_2^1,\ldots,x_2^\ell\}$, and $\tau_t \in \pi(A_t)$. 
As before, there are two cases: \begin{enumerate} \item Suppose \begin{align*} \sum_{\alpha_1 \in \pi(A_1)} \sum_{\alpha_2 \in \pi(A_2)} \nu\big(I_{\alpha_1} \times I_{\alpha_2}\big)=0 \end{align*} Since $\nu \geq 0$ by definition, in particular $\nu\big(I_{\tau_1} \times I_{\tau_2}\big)=0$. Furthermore, by definition, for each $x_t \in A_t^C$ \begin{align*} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=0 \\ \implies \sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=0=\nu\big(I_{\tau_1} \times I_{\tau_2}\big) \end{align*} as desired. \item Suppose \begin{align*} \sum_{\alpha_1 \in \pi(A_1)} \sum_{\alpha_2 \in \pi(A_2)} \nu\big(I_{\alpha_1} \times I_{\alpha_2}\big)>0 \end{align*} Since $0<k \leq |X_1|$ and $0<\ell \leq |X_2|$, $0\leq k-1 <|X_1|$ and $0\leq \ell-1<|X_2|$, so by \textbf{Claim 2} we can apply $p_1(k-1,\ell-1)$ as before to write \begin{align*} \sum_{\alpha_1 \in \pi(A_1)} \sum_{\alpha_2 \in \pi(A_2)} \nu\big(I_{\alpha_1} \times I_{\alpha_2}\big)=\sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2} \end{align*} which implies \begin{align*} \sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} \nu\big(I_{\tau_1,x_1} \times I_{\tau_2,x_2}\big)=\sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} \bigg(\frac{\nu(I_{\tau_1} \times I_{\tau_2})}{\sum_{\alpha_1 \in \pi(A_1)} \sum_{\alpha_2 \in \pi(A_2)} \nu(I_{\alpha_1} \times I_{\alpha_2})} M_{x_1,A_1;x_2,A_2}\bigg) \\ =\frac{\nu(I_{\tau_1} \times I_{\tau_2})}{\sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2}}\bigg(\sum_{x_1 \in A_1^C} \sum_{x_2 \in A_2^C} M_{x_1,A_1;x_2,A_2}\bigg)=\nu(I_{\tau_1} \times I_{\tau_2}) \end{align*} \end{enumerate} \end{proof} \subsubsection{Proof of Claim 4} \begin{proof} I have already shown that $\mu \geq 0$. 
First, fix $x_1 \in B_1$ and observe that \begin{align*} \sum_{x_2 \in X_2} \sum_{B_2 \supseteq \{x_2\}} (-1)^{|B_2|-1} \rho_2(x_2,B_2|B_1,x_1)=\sum_{i=1}^{|X_2|} (-1)^{i-1} \binom{|X_2|}{i}=1 \end{align*} To see this, fix $1 \leq i \leq |X_2|$ and $B_2=\{x_2^1,\ldots,x_2^i\}$. Since the choice probabilities over $B_2$ sum to one, the terms associated with $B_2$ contribute $\sum_{k=1}^i (-1)^{i-1} \rho_2(x_2^k,B_2|B_1,x_1)=(-1)^{i-1}$ to the sum above, and there are $\binom{|X_2|}{i}$ menus of size $i$. Thus, \begin{align*} \sum_{\succ_1 \in P_1} \sum_{\succ_2 \in P_2} \mu(\succ_1,\succ_2)=\sum_{\tau_1 \in \pi(X_1)} \sum_{\tau_2 \in \pi(X_2)} \nu\big(I_{\tau_1} \times I_{\tau_2}\big)=\sum_{x_1 \in X_1} \sum_{x_2 \in X_2} M_{x_1,X_1\backslash\{x_1\};x_2,X_2\backslash\{x_2\}} \\ =\sum_{x_1 \in X_1} \sum_{x_2 \in X_2} \sum_{B_2 \supseteq \{x_2\}} \sum_{B_1 \supseteq \{x_1\}} (-1)^{|B_1|-1+|B_2|-1} \rho_2(x_2,B_2|B_1,x_1)\rho_1(x_1,B_1) \\ =\sum_{x_1 \in X_1} \sum_{B_1 \supseteq \{x_1\}} \bigg(\sum_{x_2 \in X_2} \sum_{B_2 \supseteq \{x_2\}} (-1)^{|B_2|-1} \rho_2(x_2,B_2|B_1,x_1)\bigg) (-1)^{|B_1|-1} \rho_1(x_1,B_1) \\ =\sum_{x_1 \in X_1} \sum_{B_1 \supseteq \{x_1\}} (-1)^{|B_1|-1} \rho_1(x_1,B_1)=1 \end{align*} where the last equality follows for the same reason as above. \end{proof} \subsubsection{Proof of Claim 5} \begin{proof} I will use induction. \\ \\ \textit{Base case}: for any $|X_1|$-sequence $(x_1^1,\ldots,x_1^{|X_1|})$ and $|X_2|$-sequence $(x_2^1,\ldots,x_2^{|X_2|})$, let $\succ_t$ be the (unique) preference satisfying $x_t^1 \succ_t \cdots \succ_t x_t^{|X_t|}$.
Then, by definition, \begin{align*} \mu\big(I_{(x_1^1,\ldots,x_1^{|X_1|})} \times I_{(x_2^1,\ldots,x_2^{|X_2|})}\big)=\mu\big(\{\succ_1,\succ_2\}\big)=\nu\big(I_{(x_1^1,\ldots,x_1^{|X_1|})} \times I_{(x_2^1,\ldots,x_2^{|X_2|})}\big) \end{align*} \textit{First inductive step}: suppose $\mu\big(I_{(x_i^1,\ldots,x_i^{|X_i|})} \times I_{(x_j^1,\ldots,x_j^k)}\big)=\nu\big(I_{(x_i^1,\ldots,x_i^{|X_i|})} \times I_{(x_j^1,\ldots,x_j^k)}\big)$ for all $|X_i|$-sequences $(x_i^1,\ldots,x_i^{|X_i|})$ and $k$-sequences $(x_j^1,\ldots,x_j^k)$, where $k>1$. Then, for any $|X_i|$-sequence $(x_i^1,\ldots,x_i^{|X_i|})$ and $k-1$-sequence $(x_j^1,\ldots,x_j^{k-1})$, \begin{align*} \mu\big(I_{(x_i^1,\ldots,x_i^{|X_i|})} \times I_{(x_j^1,\ldots,x_j^{k-1})}\big)=\sum_{y_j \in \{x_j^1,\ldots,x_j^{k-1}\}^C} \mu\big(I_{(x_i^1,\ldots,x_i^{|X_i|})} \times I_{(x_j^1,\ldots,x_j^{k-1},y_j)}\big) \\ =\sum_{y_j \in \{x_j^1,\ldots,x_j^{k-1}\}^C} \nu\big(I_{(x_i^1,\ldots,x_i^{|X_i|})} \times I_{(x_j^1,\ldots,x_j^{k-1},y_j)}\big)=\nu\big(I_{(x_i^1,\ldots,x_i^{|X_i|})} \times I_{(x_j^1,\ldots,x_j^{k-1})}\big) \end{align*} where the first equality follows from $\mu$ being a probability measure, the second equality follows from the inductive hypothesis, and the third equality follows because $p_2(|X_1|,k-1)$ and $p_2(k-1,|X_2|)$ hold, by \textbf{Claim 3}. \\ \\ \textit{Second inductive step}: suppose $\mu\big(I_{(x_1^1,\ldots,x_1^k)} \times I_{(x_2^1,\ldots,x_2^\ell)}\big)=\nu\big(I_{(x_1^1,\ldots,x_1^k)} \times I_{(x_2^1,\ldots,x_2^\ell)}\big)$ for all $k$-sequences $(x_1^1,\ldots,x_1^k)$ and $\ell$-sequences $(x_2^1,\ldots,x_2^\ell)$, where $k,\ell>1$. 
Then for any $k-1$-sequence $(x_1^1,\ldots,x_1^{k-1})$ and $\ell-1$-sequence $(x_2^1,\ldots,x_2^{\ell-1})$, \begin{align*} \mu\big(I_{(x_1^1,\ldots,x_1^{k-1})} \times I_{(x_2^1,\ldots,x_2^{\ell-1})}\big)=\sum_{y_1 \in \{x_1^1,\ldots,x_1^{k-1}\}^C} \sum_{y_2 \in \{x_2^1,\ldots,x_2^{\ell-1}\}^C} \mu\big(I_{(x_1^1,\ldots,x_1^{k-1},y_1)} \times I_{(x_2^1,\ldots,x_2^{\ell-1},y_2)}\big) \\ =\sum_{y_1 \in \{x_1^1,\ldots,x_1^{k-1}\}^C} \sum_{y_2 \in \{x_2^1,\ldots,x_2^{\ell-1}\}^C} \nu\big(I_{(x_1^1,\ldots,x_1^{k-1},y_1)} \times I_{(x_2^1,\ldots,x_2^{\ell-1},y_2)}\big)=\nu\big(I_{(x_1^1,\ldots,x_1^{k-1})} \times I_{(x_2^1,\ldots,x_2^{\ell-1})}\big) \end{align*} Since every pair of cylinders in $\mathcal{I}_1 \times \mathcal{I}_2$ is induced by a pair of $k,\ell$-sequences with $1 \leq k \leq |X_1|$ and $1 \leq \ell \leq |X_2|$, I have shown that $\mu=\nu$ on $\mathcal{I}_1 \times \mathcal{I}_2$. \end{proof} Now that each \textbf{Claim} has been verified, the proof is complete. \end{proof} \subsection{Proof of Proposition 4 (Backwards Direction)} \begin{proof} Suppose $|X_t|=3$ and denote $X_1=\{a,b,c\}$ and $X_2=\{d,e,f\}$. Suppose $\boldsymbol{\rho}^2$ satisfies stochastic regularity and marginal consistency.
Then in particular, for any $x_t \in X_t$, $A_t=\{x_t\}^C=\{y_t,z_t\}$, and $B_t=X_t$, \begin{align*} \frac{\rho_1(y_1,A_1)}{\rho_1(y_1,B_1)} \geq \frac{\rho_2(y_2,A_2|B_1,y_1)-\rho_2(y_2,B_2|B_1,y_1)}{\rho_2(y_2,A_2|A_1,y_1)-\rho_2(y_2,B_2|A_1,y_1)} \\ \iff \rho_2(y_2,z_2|\{y_1,z_1\},y_1)\rho_1(y_1,z_1)-\rho_2(y_2,X_2|\{y_1,z_1\},y_1)\rho_1(y_1,z_1) \\ -\bigg(\rho_2(y_2,z_2|X_1,y_1)\rho_1(y_1,X_1)-\rho_2(y_2,X_2|X_1,y_1)\rho_1(y_1,X_1)\bigg) \geq 0 \\ \iff M_{y_1,\{x_1\};y_2,\{x_2\}}=\sum_{B_2 \supseteq \{y_2,z_2\}} \sum_{B_1 \supseteq \{y_1,z_1\}} (-1)^{|B_1|+|B_2|-4} \rho_2(y_2,B_2|B_1,y_1)\rho_1(y_1,B_1) \geq 0 \end{align*} Since marginal consistency is satisfied, \textbf{Proposition 2} and \textbf{Claim 1} are satisfied, so for any $i,j \in \{1,2\}$ \begin{align*} M_{x_i,A_i;y_j,\{x_j\}}+M_{x_i,A_i;z_j,\{x_j\}}=M_{x_i,A_i;x_j,\emptyset} \\ M_{x_i,A_i;x_j,\{y_j,z_j\}}=M_{x_i,A_i;y_j,\{z_j\}}+M_{x_i,A_i;z_j,\{y_j\}} \end{align*} which implies that stochastic Block-Marschak nonnegativity is satisfied. Thus, by \textbf{Theorem 1}, $\boldsymbol{\rho}^2$ has a SU representation. Of course, we can also directly verify this by defining \begin{align*} \mu(x_1y_1z_1,x_2y_2z_2):=M_{y_1,\{x_1\};y_2,\{x_2\}} \geq 0 \end{align*} It is then straightforward to verify \begin{align*} \mu(x_1y_1z_1):=\sum_{\tau_2 \in \pi(X_2)} \mu(x_1y_1z_1,\tau_2)=\rho_1(y_1,z_1)-\rho_1(y_1,X_1) \\ \implies \sum_{\tau_1 \in \pi(X_1)} \sum_{\tau_2 \in \pi(X_2)} \mu(\tau_1,\tau_2)=1 \end{align*} so $\mu$ is indeed a probability measure. Furthermore, \begin{align*} E(x_i,A_i;y_j,\{x_j\})\cup E(x_i,A_i;z_j,\{x_j\})=E(x_i,A_i;x_j,\emptyset) \\ E(x_i,A_i;x_j,\{y_j,z_j\})=E(x_i,A_i;y_j,\{z_j\})\cup E(x_i,A_i;z_j,\{y_j\}) \end{align*} and these unions are disjoint, so by \textbf{Proposition 3} it follows that $\mu$ is a SU representation. Finally, let $\mu'$ be a SU representation.
Then by \textbf{Proposition 3}, \begin{align*} \mu'(x_1y_1z_1,x_2y_2z_2)=\mu'\big(E(y_1,\{x_1\};y_2,\{x_2\})\big)=M_{y_1,\{x_1\};y_2,\{x_2\}}=\mu(x_1y_1z_1,x_2y_2z_2) \end{align*} so $\mu$ is unique. \end{proof} \subsection{Proof of Corollary 1} Define $\nu$ as in the proof of \textbf{Theorem 1}. Since this definition is recursive and the base case is a joint Block-Marschak sum, it immediately follows that $\nu$ is strictly positive on $\mathcal{I}_1 \times \mathcal{I}_2$, so $\mu$ is strictly positive on $P_1 \times P_2$. All other parts of the proof of \textbf{Theorem 1} still hold, so we conclude that $\mu$ is a SU representation of $(\rho_1,\{\rho_2(\cdot|h)\}_{h \in \mathcal{H}})$ with full support.
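The counting argument in the proof of \textbf{Claim 4} relies on the alternating binomial identity $\sum_{i=1}^{n}(-1)^{i-1}\binom{n}{i}=1$, which follows from expanding $(1-1)^n=0$ with the binomial theorem. As an informal sanity check (not part of the formal argument), the identity can be verified numerically:

```python
from math import comb

# Check sum_{i=1}^{n} (-1)^(i-1) * C(n, i) == 1 for small n.
# This equals 1 - (1-1)^n = 1 by the binomial theorem.
for n in range(1, 13):
    total = sum((-1) ** (i - 1) * comb(n, i) for i in range(1, n + 1))
    assert total == 1, (n, total)
print("identity holds for n = 1..12")
```

The same cancellation pattern underlies both equalities marked "for the same reason as above" in the proof of \textbf{Claim 4}.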
\section{Introduction} \label{chap:intro} Although much of the extraction of information from astronomy science images is now performed ``blindly'' using computer programs, astronomers still rely on visual examination for a number of tasks. Such tasks include image quality control, assessment of morphological features, and debugging of measurement algorithms. The generalization of standardized file formats in the astronomy community, such as FITS \citep{wells_fits_1981}, has facilitated the development of universal visualization tools. Notable examples include {\sc SAOimage} \citep{1990BAAS...22..935V}, {\sc Aladin} \citep{1994ASPC..61..215B}, {\sc SkyCat} \citep{1997ASPC..125..333A}, {\sc Gaia} \citep{2000ASPC..216..615D} and {\sc ds9} \citep{2003ASPC..295..489J}. These packages are designed to operate on locally stored data and provide efficient access to remote image databases by downloading sections of FITS data which are subsequently read and processed locally for display; all the workload, including image scaling, dynamic range compression, color compositing and gamma correction, is carried out client-side. However, the growing gap between storage capacities and data access bandwidth \citep{budman} makes it increasingly efficient to offload part of the image processing and data manipulations to the server, and to transmit some form of pre-processed data to clients over the network. Thanks to the development of wireless networks and light mobile computing (tablet computers, smartphones), more and more scientific activities are now being carried out on-the-go outside an office environment. These possibilities are exploited by an increasing number of scientists, especially experimentalists involved in large international collaborations, who must interact remotely, often in real-time, with colleagues and data located in different parts of the world and in different time zones.
Mobile devices have increasingly capable displays and interfaces; however, they offer limited I/O performance and storage capacity, as well as poor battery life when under load. Web-based clients, or simply {\em Web Apps}, are the applications of choice for these devices, and their popularity has exploded over the past few years. Thanks to the ubiquity of web browsers on both desktop and mobile platforms, {\em Web Apps} have become an attractive solution for implementing visual interfaces. Modern web browsers feature ever faster and more efficient JavaScript engines, support for advanced standards such as HTML5 \citep{html5} and CSS3 \citep{css}, not to mention interactive 3D-graphics with the recent WebGL API \citep{webgl}. As far as data visualization is concerned, web applications can now be made sufficiently feature-rich to match many of the functions of standalone desktop applications, with the additional benefit of having instant access to the latest data and being embeddable within web sites or data portals. One of the difficulties in having the browser deal with science data is that browser engines are designed to display gamma-encoded images in the GIF, JPEG or PNG format, with 8 bits per Red/Green/Blue component, whereas scientific images typically require linearly quantized 16-bit or floating point values. One possibility is to convert the original science data within the browser using JavaScript, either directly from FITS \citep{jsfits, astrojs}, or from a more ``browser-friendly'' format, such as a special PNG ``representation file'' \citep{js9} or compressed JSON \citep{2011ASPC..442..467F}. In practice this is currently limited to small rasters, as managing millions of such pixels in JavaScript is still too burdensome for less powerful devices. Moreover, lossless compression of scientific images is generally not very efficient, especially for noisy floating-point data \citep[e.g.][]{2009PASP..121..414P}.
Hence, currently, server-side compression and encoding of the original data to a browser-friendly format remains necessary in order to achieve a satisfactory user experience on the web client, especially with high resolution screens. Displaying images larger than a few megapixels on monitors or device screens requires panning and/or pixel rebinning, such as in ``slippy map'' implementations (Google Maps\texttrademark, OpenStreetMap\footnote{\url{http://www.openstreetmap.org}} etc.). On the server, the images are first decomposed into many small tiles (typically $256\times 256$ pixels) and saved as PNG or JPEG files at various levels of rebinning, to form a ``tiled pyramid''. Each of these small files corresponds to a URL and can be loaded on demand by the web client. Notable examples of professional astronomy web apps based on this concept include the Aladin Lite API \citep{2012ASPC..461..443S}, and the Mizar plugin\footnote{\url{https://github.com/TPZF/RTWeb3D}} in SITools2 \citep{2012ASPC..461..821M}. However, having the data stored as static 8-bit compressed images means that interaction with the pixels is essentially limited to passive visualization, with little latitude for image adjustment or interactive analysis. Server-side dynamic processing/conversion of science-data on the server and streaming of the processed data to the web client are necessary to alleviate these limitations. Visualization projects featuring dynamic image conversion/streaming in Astronomy or Planetary Science have mostly relied on browser plugins implementing proprietary technologies \citep{2012ASPC..461...95F} or Java clients/applets \citep{10.1109/MCSE.2009.142, kitaeff_2012}. Notable exceptions include Helioviewer \citep{hughitt_helioviewer:_2008}, which queries compressed PNG tiles directly from the browser with the tiles generated on-the-fly server-side from JPEG2000 encoded data. 
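To make the storage cost of such a tiled pyramid concrete, here is a small back-of-the-envelope sketch in Python. The $256\times256$ tile size and $2\times2$ rebinning follow the description above; the stopping rule (rebin until the level fits in a single tile) and the rounding of odd dimensions are illustrative assumptions rather than the behavior of any particular implementation:

```python
def pyramid_stats(width, height, tile=256):
    """Count pixels, tiles and levels in a multi-resolution pyramid
    built by repeated 2x2 rebinning, down to a single-tile level."""
    pixels = tiles = levels = 0
    w, h = width, height
    while True:
        pixels += w * h
        # ceil-divide so partially filled edge tiles are counted
        tiles += ((w + tile - 1) // tile) * ((h + tile - 1) // tile)
        levels += 1
        if w <= tile and h <= tile:
            break
        w, h = (w + 1) // 2, (h + 1) // 2
    return pixels, tiles, levels

base = 16384 * 16384
pixels, tiles, levels = pyramid_stats(16384, 16384)
overhead = pixels / base - 1.0
# The geometric series 1/4 + 1/16 + ... converges to 1/3, which is
# the usual "about one third more pixels" figure for such pyramids.
print(levels, tiles, round(overhead, 4))
```

For a 16k$\times$16k raster this yields seven resolution levels and a pixel overhead just under one third, consistent with the geometric series.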
In this paper we describe an open source and multi-platform high performance client-server system for the processing, streaming and visualization of full bit depth scientific imagery at the terabyte scale. The system consists of a light-weight C++ server and W3C standards-based JavaScript clients capable of running on stock browsers. In Section \ref{chap:method}, we present our approach, the protocols and the implementation of both the server and the client. Sections \ref{chap:astrapp} and \ref{chap:planetapp} showcase several applications in Astronomy and Planetary Science. In Section \ref{chap:perf}, we assess the performance of the system with various configurations and load patterns. Finally, in Section \ref{chap:conclu}, we discuss future directions in the light of current technological trends. \section{Material and Methods} \label{chap:method} The proposed system consists of one or several central image servers capable of processing 32 bit floating point data on-demand and of transcoding the result into an efficient form usable by both light-weight mobile devices and desktop computers. \subsection{Image Server} \label{chap:server} At the heart of the system is the open source {\sc IIPImage}\footnote{\url{http://iipimage.sourceforge.net}} image server \citep{PPL06}. {\sc IIPImage} is a scalable client-server system for web-based streamed viewing and zooming of ultra high-resolution raster images. It is designed to be fast, scalable and bandwidth-efficient with low processor and memory requirements. {\sc IIPImage} has a long history and finds its roots in the mid 1990s in the cultural heritage field, where it was originally created to enable the visualization of high resolution colorimetric images of paintings \citep{martinez_high_1998}.
The original system was designed to be capable of handling gigapixel size, scientific-grade imaging of up to 16 bits per channel, colorimetric images encoded in the CIEL*a*b* color space and high resolution multispectral images \citep{martinez_ten_2002} (Fig. \ref{fig:multispectral}). It had hitherto been very difficult even to view such image data locally, let alone access it remotely, share it or collaborate between institutions. The client-server solution also enabled integration of full resolution scientific imaging such as infra-red reflectography, X-ray, multispectral and hyperspectral imagery (Fig. \ref{fig:hyperspectral}) into museum research databases, providing for unprecedented levels of interactivity and access to these resources \citep{lahanier_eros}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figures/multispectral.png} \caption{Spectral visualization of Renoir's \textit{Femme Nue dans un Paysage}, Mus\'{e}e de l'Orangerie, showing spectral reflectance curve for any location and controls for comparing different imaging modalities} \label{fig:multispectral} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{figures/hyperspectral.png} \caption{Hyperspectral imaging of paintings} \label{fig:hyperspectral} \end{figure} Beyond cultural heritage, the system has also been adapted for use in the field of biomedical imaging, for example to visualize ultra-large high resolution electron microscopy maps created by ultra-structural mapping or \textit{virtual nanoscopy} \citep{faas_virtual_2012}, or to explore high resolution volumetric 3D cross-sectional atlases \citep{husz_web_2012}. In practice the {\sc IIPImage} platform consists of a light-weight C++ Fast-CGI \citep{fastcgi} server, {\tt iipsrv}, and an AJAX-based web interface.
Image data stored on disk are structured in order to enable efficient and rapid multi-resolution random access to parts of the image, allowing terapixel scale data to be efficiently streamed to the client. As only the region being viewed needs to be decoded and sent, large and complex images can be managed without onerous hardware, memory or network requirements by the client. The {\sc IIPImage} server performs on-the-fly JPEG compression for final visualization, but as the underlying data is stored uncompressed at full bit depth, it can operate directly on scientific images, and perform operations such as rescaling or filtering before sending out the results to the client. {\sc IIPImage}, therefore, possessed many of the attributes necessary for astronomy data visualization, and rather than develop something from scratch, it was decided to leverage and extend this existing system. A further benefit to this approach would be access to a larger scientific community beyond that of astronomy with certain similar data needs. Moreover, as {\sc IIPImage} forms part of the standard Debian, Ubuntu and Fedora Linux distributions, access to the software, installation and maintenance would be greatly simplified and sustainable in the longer term. Hence a number of modifications were made to the core of {\sc IIPImage} in order to handle astronomy data. In particular, the system was extended to handle 32 bit data (both integer and IEEE floating point) and FITS metadata, to provide functionality such as dynamic gamma correction, colormaps and intensity cuts, and to extract both horizontal and vertical data profiles. The resulting code has been integrated into the main {\sc IIPImage} software development repositories and is available from the project website\footnote{\url{http://iipimage.sourceforge.net}}, where it will form part of the $1.0$ release of {\tt iipsrv}.
\subsection{Data Structures and Format} \label{chap:ipp} Extracting random image tiles from a very large image requires an efficient storage mechanism. In addition to tile-based access, the ability to rapidly zoom in and out imposes some sort of multi-resolution structure on the data provided to the server. The solution adopted for ``slippy map'' applications is often simply to store individual tiles rebinned at the various resolution levels as individual image files. For a very large image this can translate into hundreds of thousands of small files being created. This approach is not convenient from a data management point of view, and for {\sc IIPImage} a single file approach has always been preferred. The current version of {\sc IIPImage} supports both TIFF and JPEG2000 formats. Multi-resolution encoding is one of the major features of JPEG2000, but the lack of a robust, high performance open source library has been a serious issue until recently. Nevertheless, the encoding of floating point values spanning a large dynamic range remains a concern with current open-source libraries, as in practice input data is managed with only fixed point precision \citep{6092314,2014arXiv1403.2801K}. Combining tiling and multi-resolution mechanisms is also possible with TIFF. TIFF is able to store not only 8 bit and 16 bit data, but also 32 bit integers, and single or double precision floating point numbers in IEEE format. As a well supported and mature standard with robust and widely used open source development libraries readily available, TIFF was adopted as the main server-side storage format, rather than creating a completely new format or adapting existing science formats in some way.
FITS supports image tiling, whereby the original raster is split into separate rectangular tiles, which can be retrieved quickly and read and decoded independently \citep{pence_w._fits_2000}. Versions of the same image could be stored at multiple resolution levels in different extensions, at the price of an increased file size. However, currently, neither tiling nor multi-resolution is present in archived FITS science images. Hence, regardless of the adopted storage format (TIFF in our case), a considerable amount of pixel shuffling and rebinning must be carried out in order to convert FITS data before they can be handled by the server. Transcoding from basic FITS to multi-resolution tiled TIFF is carried out via the {\sc STIFF} conversion package \citep{2012ASPC..461..263B}. The multi-resolution structure consists of an image ``pyramid'' whereby pixels in each image are successively rebinned $2\times2$ and stored in separate TIFF virtual ``directories'' in tiled format. Tile size remains constant across all resolution levels (Fig. \ref{fig:pyramid}). The total number of pixels stored in the pyramid is increased by approximately one third compared to the original raster, but TIFF's widespread support for various lossless compression algorithms (e.g., LZW, Deflate) mitigates some of this extra structural overhead. Note that using pixel rebinning instead of decimation (as in traditional astronomy image display tools) averages out background noise as one zooms out: this makes faint background features such as low surface brightness objects or sky subtraction residuals much easier to spot. The default orientation for TIFF images (and most image formats) is such that the first pixel resides in the upper left corner of the viewport, whereas FITS images are usually displayed with the first pixel in the lower left corner.
To comply with these conventions, {\sc STIFF} flips the original image content along the y direction by proceeding through the FITS file backwards, line-by-line. {\sc STIFF} takes advantage of the TIFF header ``tag'' mechanism to include metadata that are relevant to the {\sc IIPImage} server and/or web clients. For instance, the {\tt ImageDescription} tag is used to carry a verbatim copy of the original FITS header. Another set of information of particular importance, especially with floating point data, is stored in the {\tt SMinSampleValue} and {\tt SMaxSampleValue} tags: these are the minimum and maximum pixel values ($S_{\rm min}$ and $S_{\rm max}$) that define the display scale. These values do not necessarily represent the full range of pixel values in the image, but rather a range that provides the best visual experience given the type of data. {\sc STIFF} sets $S_{\rm max}$ to the 999$^{\rm th}$ permille of the image histogram (i.e., the 99.9\% quantile) by default. $S_{\rm min}$ is computed so that the sky background $S_{\rm sky}$ appears on screen as a dark grey $\rho_{\rm sky}\approx 0.001$ (expressed as a fraction of the maximum display radiant emittance: $1 \equiv $~full white): \begin{equation} S_{\rm min} = \frac{S_{\rm sky} - \rho_{\rm sky}\,S_{\rm max}} {1 - \rho_{\rm sky}}. \end{equation} {\sc STIFF} currently simply takes the median of all pixel values in the FITS file to compute $S_{\rm sky}$, although better estimates could be computed almost as fast \citep{1996A&AS..117..393B}. Transcoding speed can be a critical issue, for instance in the context of real-time image monitoring of astronomy observations. On modern hardware, the current {\sc STIFF} conversion rate for transcoding a FITS file to an {\sc IIPImage}-ready tiled pyramidal TIFF ranges from about 5Mpixel/s to 25Mpixel/s (20--100MB/s) depending on the chosen TIFF compression scheme and system I/O performance.
This means that FITS frames with dimensions of up to $\sim$16k$\times$16k pixels can be converted in a matter of seconds, and just-in-time conversion is a viable option for such images. Note that although {\sc STIFF} is multithreaded, all calls to {\tt libtiff} for writing tiles are done sequentially in the current implementation and there may, therefore, be some room for significant performance improvements. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/pyramid.png} \caption{Illustration of tiled multi-resolution pyramid with 4 levels of resolution.} \label{fig:pyramid} \end{figure} \subsection{Protocol and Server-Side Features} {\sc IIPImage} is based on the Internet Imaging Protocol (IIP), a simple HTTP protocol for requesting images or regions within an image, which allows the user to define resolution level, contrast, rotation and other parameters. The protocol was originally defined in the mid-1990s by the \textit{International Imaging Industry Association} \citep{iiprotocol}, but has since been extended for {\sc IIPImage}. The use of such a protocol provides a rich RESTful-like interface to the data, enabling flexible and consistent access to imaging data. {\sc IIPImage} is also capable of communicating using the simpler tile request protocols used by Zoomify or Deepzoom and the more recent IIIF image access API \citep[International Image Interoperability Framework,][]{iiif}. Table \ref{tab:commands} lists the main commands already available in the original, cultural heritage-oriented version of {\sc IIPImage}. For a complete description of the protocol, see the full IIP protocol specification \citep{iiprotocol}. \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{1.5} \begin{table}[h] \begin{small} \begin{tabular}{|p{0.15\columnwidth}|p{0.72\columnwidth}|} \hline \textbf{Command} & \textbf{Description}\\ \hline \hline {\tt FIF} & Image path $p$.
[\texttt{FIF=p}]\\ \hline {\tt OBJ} & Property/ies $text$ to be retrieved from image and server metadata. [\texttt{OBJ=text}]\\ \hline {\tt QLT} & JPEG quality factor $q$ between 0 (worst) and 100 (best). [\texttt{QLT=q}]\\ \hline {\tt SDS} & Specify a particular image within a set of sequences or set of multi-band images. [\texttt{SDS=s1,s2}]\\ \hline {\tt CNT} & Contrast factor $c$. [\texttt{CNT=c}]\\ \hline {\tt CVT} & Return the full image or a region, in JPEG format. [\texttt{CVT=jpeg}]\\ \hline {\tt WID} & Width $w$ in pixels of the full sized JPEG image returned by the {\tt CVT} command (interpolated from the nearest resolution). [\texttt{WID=w}]\\ \hline {\tt HEI} & Height $h$ in pixels of the full sized JPEG image returned by the {\tt CVT} command (interpolated from the nearest resolution). [\texttt{HEI=h}]\\ \hline {\tt RGN} & Define a region of interest starting at relative coordinates $x$, $y$ with width $w$ and height $h$. [\texttt{RGN=x,y,w,h}]\\ \hline {\tt ROT} & Rotate the image by $r$ (90, 180 or 270 degrees). [\texttt{ROT=r}]\\ \hline {\tt JTL} & Return a tile with index $n$ at resolution level $r$, in JPEG format. [\texttt{JTL=r,n}] \\ \hline {\tt SHD} & Apply hillshading simulation with azimuth and altitude angles $a$, $b$. [\texttt{SHD=a,b}]\\ \hline {\tt SPECTRA} & Return pixel values in all image channels for a particular point $x$,$y$ on tile $t$ at resolution $r$ in XML format. [\texttt{SPECTRA=r,t,x,y}]\\ \hline \end{tabular} \end{small} \caption{ \label{tab:commands} Main commands available in {\sc IIPImage} } \end{table} For this project the entire {\sc IIPImage} codebase was updated and generalized to handle up to 32 bits per pixel, with support for single precision floating point data. Support for double precision (at the expense of performance) would require a relatively simple code update.
In addition, several extensions were implemented that allow predefined colormaps to be applied to grayscale images, the gamma correction to be adjusted, the minimum and maximum cut-offs of the pixel value range to be changed, and image data profiles to be exported. A list of the new available commands is given in Table \ref{tab:newcomm}. \begin{table}[h] \begin{small} \begin{tabular}{|p{0.15\columnwidth}|p{0.72\columnwidth}|} \hline \textbf{Command} & \textbf{Description}\\ \hline \hline {\tt CMP} & Set the colormap for grayscale images. Valid colormaps include {\tt GREY}, {\tt JET}, {\tt COLD}, {\tt HOT}, {\tt RED}, {\tt GREEN} and {\tt BLUE}. [\texttt{CMP=JET}]\\ \hline {\tt INV} & Invert image or colormap. [\texttt{Does not require an argument}] \\ \hline {\tt GAM} & Set gamma correction to $g$. [\texttt{GAM=g}] \\ \hline {\tt MINMAX} & Set minimum $min$ and maximum $max$ for channel $c$. [\texttt{MINMAX=c:min,max}]\\ \hline {\tt PFL} & Request full bit-depth data profile for resolution $r$ along the line joining pixel $x1$,$y1$ to $x2$,$y2$. [\texttt{PFL=r:x1,y1-x2,y2}] {\it\footnotesize Note: Only horizontal ($y1=y2$) and vertical profiles ($x1=x2$) currently supported}\\ \hline \end{tabular} \end{small} \caption{ \label{tab:newcomm} List of new commands implemented in {\sc IIPImage}. } \end{table} \subsubsection{Examples} \label{subsec:examples} In order to better understand how these commands can be used, here are several examples showing the typical syntax and usage for applying colormaps, setting a gamma correction and for obtaining a full bit-depth profile.
All requests take the general form: \begin{lstlisting} <protocol>://<server address>/<iipsrv>?<IIP Commands> \end{lstlisting} The first IIP command must specify the image path and several IIP command--value pairs can be chained together using the separator $\&$ in the following way: \begin{lstlisting} FIF=<image path>&<command>=<value>&<command>=<value> \end{lstlisting} Thus, a typical request for the tile that fits into the smallest available resolution (tile 0 at resolution 0) of a TIFF image named \texttt{image.tif} is: \begin{lstlisting} http://server/iipsrv.fcgi?FIF=image.tif&JTL=0,0 \end{lstlisting} Let us now look at some more detailed examples using the new functionality created for {\sc IIPImage}. For example, in order to export a profile in JSON format from pixel location $x_{\rm 1}$,$y_{\rm 1}$ horizontally to pixel location $x_{\rm 2}$,$y_{\rm 2}$ at resolution $r$ on image \texttt{image.tif}, the request would take the form: \begin{lstlisting} FIF=image.tif&PFL=r:x1,y1-x2,y2 \end{lstlisting} In order to request tile $t$ at resolution $r$ and apply a standard \textit{jet} colormap to image \texttt{image.tif}, the request would take the form: \begin{lstlisting} FIF=image.tif&CMP=JET&JTL=r,t \end{lstlisting} and the equivalent inverted colormap request: \begin{lstlisting} FIF=image.tif&CMP=JET&INV&JTL=r,t \end{lstlisting} In order to obtain metadata containing the minimum and maximum values per channel: \begin{lstlisting} FIF=image.tif&OBJ=min-max-sample-values \end{lstlisting} In order to request tile $t$ at resolution $r$ and apply a gamma correction of $g$ and specify a minimum and maximum of $m_{\rm 1}$ and $m_{\rm 2}$ respectively for image band $b$: \begin{lstlisting} FIF=image.tif&MINMAX=b:m1,m2&GAM=g&JTL=r,t \end{lstlisting} Commands are not order sensitive, except for {\tt JTL} and {\tt CVT}, which must always be specified last.
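To illustrate what the server does with such command values, the combined effect of the {\tt MINMAX} cuts and the {\tt GAM} correction on a delivered 8 bit tile can be sketched as follows. This is a simplified stand-alone illustration, not the actual {\tt iipsrv} C++ code; the particular gamma convention and the function name \texttt{render\_pixel} are our assumptions:

```python
def render_pixel(value, vmin, vmax, gamma):
    """Map one floating-point pixel to an 8 bit display value:
    apply the [vmin, vmax] intensity cuts (MINMAX), a gamma
    correction (GAM), then quantize to 0-255 for JPEG encoding."""
    # Clip to the requested min/max cut-offs and normalize to [0, 1]
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)
    # Gamma-compress the dynamic range (convention assumed here)
    t = t ** (1.0 / gamma)
    # Quantize to 8 bits
    return int(round(t * 255.0))

# Example: pixels rendered with MINMAX=1:0,1000 and GAM=2
tile = [0.0, 250.0, 500.0, 1500.0]
print([render_pixel(v, 0.0, 1000.0, 2.0) for v in tile])  # → [0, 128, 180, 255]
```

This is the same chain of operations (normalization, gamma correction, 8 bit quantization) whose per-tile timings are discussed in the performance analysis below.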
\subsection{Security} A client-server architecture also has advantages in terms of data control and security, as the raw data at full bit depth does not need to be made fully available to the end user. Indeed, the raw data need never be directly accessible by the public and can be stored on firewalled internal storage, accessible only via the {\sc IIPImage} server. Thus only 8 bit processed data is ever sent out to the client, and restrictions and limits can be applied if fully open access is not desired. The {\sc IIPImage} server also contains several features for added security, such as a path prefix, which limits access to a particular subdirectory on the storage server. Any requests to images higher up or outside of this subdirectory tree are blocked. If an even greater level of security is required on the transmitted data, the {\sc IIPImage} server can also dynamically apply a watermark to each image tile with a configurable level of opacity. Watermarking can be randomized both in terms of which tiles it is applied to and in terms of its position within the tile itself, making removal of watermarks extremely difficult. \subsection{Web Clients} \label{chap:client} Two web clients, developed using different approaches and with different goals in mind, are presented in this paper as examples to illustrate the capabilities of the system. The first one, known as {\sc VisiOmatic}, is built on top of the {\sc leaflet} JavaScript mini-framework, and is designed to display large celestial images through a classic image tile-based view. The second client builds on the existing {\sc IIPMooViewer} client to demonstrate two experimental features more specifically relevant to planetary surface studies: hillshading and advanced compositing / filtering performed at the pixel level within the browser.
\section{Astronomy Applications} \label{chap:astrapp} \subsection{Celestial Images} Two essential features of astronomy image browsers are missing in the {\sc IIPMooViewer} client originally developed for cultural heritage applications: the handling of celestial coordinates and a comprehensive management system for vector layers (overlays). It soon became clear that developing such a system from scratch with limited human resources would raise severe maintenance issues and portability concerns across browsers and platforms. We investigated several JavaScript libraries that would provide such functionality and decided to build a new client, {\sc VisiOmatic}, based on the {\sc Leaflet} library \citep{leafletjs}. {\sc Leaflet} is open-source and provides all the necessary functions to build a web client for browsing interactive maps. It is, in fact, not simply a client, but a small framework, offering features not directly available in standard JavaScript such as class creation and inheritance. It has a well-documented, user-friendly API and a rich collection of plug-ins that significantly boost its potential, while providing many advanced programming examples. Indeed, {\sc VisiOmatic} operates as a {\sc Leaflet} plug-in and as such comes bundled as a NodeJS package. Documentation for the {\sc VisiOmatic} API is available on the {\sc VisiOmatic} {\sc GitHub} page\footnote{\url{https://github.com/astromatic/visiomatic}}. Once the {\tt iipsrv} server has been installed, embedding a zoomable astronomy image in a web page with the {\sc VisiOmatic} and {\sc Leaflet} JavaScript libraries is very simple and can be done with the following code: \begin{lstlisting} <div id="map"></div> <script> var map = L.map('map'), layer = L.tileLayer.iip('/fcgi-bin/iipsrv.fcgi?FIF=/path/to/image.ptif').addTo(map); </script> \end{lstlisting} {\sc Leaflet} was built from the ground up with mobile device support in mind.
{\sc VisiOmatic} capitalizes on this approach by defining the current map coordinates at the center of the viewport instead of the mouse position. This also makes the coordinate widget display area usable for input, copy or paste as coordinates do not change while moving the mouse. Celestial coordinates are handled through a custom JavaScript library that emulates a small subset of the WCS standard \citep{2002A&A...395.1077C}, based on the FITS header content transmitted by the {\sc IIPImage} server. Our simplified WCS library fits into {\sc Leaflet}'s native latitude--longitude coordinate management system, giving access to all layer contents directly in celestial coordinates. This makes it particularly easy to synchronize maps that do not use the same projection, e.g., for orientation maps, ``smart'' magnifiers, or multi-band monitoring. Changing image settings is done by appending the relevant IIP commands (see e.g., Table \ref{tab:newcomm}) to the {\tt http} {\tt GET} tile requests. Metadata and specific data queries, such as profile extractions, are carried out through AJAX requests. {\sc VisiOmatic} also uses AJAX requests for querying catalogs from other domains, with the restriction that the {\it same origin security policy}\footnote{The {\it Cross-Origin Resource Sharing} (CORS) mechanism implemented in modern browsers could in principle lift this restriction, but it is not supported by the main astronomy data providers at this time.} present in current browsers requires that all requests transit through the image server domain, which must, therefore, be configured as a web proxy. The {\sc VisiOmatic} website\footnote{\url{http://visiomatic.org}} showcases several examples of applications built with the {\sc VisiOmatic} client.
They involve large images of the deep sky stored in floating point format, including a one terabyte (500,000 $\times$ 500,000 pixels) combination of 250,000 exposures from the 9$^{\rm th}$ Sloan Digital Sky Survey data release \citep{2012ApJS..203...21A}, representing about 3TB worth of raw image data (Fig. \ref{fig:visiomatic}). Display performance with the {\sc VisiOmatic} client varies from browser to browser. Browsers based on the {\sc WebKit} rendering engine (e.g., {\sc Chrome}, {\sc Safari}) generally offer the smoothest experience on all platforms, especially with complex overlays. User experience may also vary because of the different ways browsers are able to deal with data. For example, examining images at exceedingly high zoom levels and scrutinizing groups of pixels displayed as blocks is common practice among astronomers. {\sc Leaflet} takes advantage of the built-in resampling engines in browsers to allow image tiles to be zoomed in smoothly through CSS3 animations. {\sc VisiOmatic} uses the {\tt image-rendering} CSS property to activate nearest-neighbor interpolation and have the pixels displayed as blocks at higher zoom levels. Although this works in, for example, {\sc Firefox} and {\sc Internet Explorer 11}, other browsers, such as {\sc Chrome}, do not offer the possibility to turn off bilinear interpolation at the present time, and zoomed images will not appear pixelated in those browsers. It is hoped that such residual differences will eventually disappear as browser technology converges on common standards. \begin{figure*}[htb!] \centering \includegraphics[width=\textwidth]{figures/visiomatic.png} \caption{The Visi{\it O}matic web client showing a part of an SDSS release nine image stack \citep{2012ApJS..203...21A} provided by the {\sc IIPImage} server in the main layer, plus two vector layers superimposed. Yellow: local detections from the photometric SDSS catalog provided by the Vizier service \citep{2000A&AS..143...23O}.
Purple: horizontal profile through the image extracted by the {\sc IIPImage} server.} \label{fig:visiomatic} \end{figure*} \section{Planetary Science} \label{chap:planetapp} Planetary Science data are largely heterogeneous with respect to the physical quantities they describe (chemical abundances, atmospheric composition, magnetic and gravitational fields of Earth-like planets and satellites, reflectance, surface composition) and with respect to the formats they are encoded in (raster, vector, time-series, in ASCII or various binary formats). Two scientific communities are essentially involved in Planetary Science research: astronomers and geologists / geophysicists. Geographical Information Systems (GIS) are the basis for planetary surface studies but they often suffer from a lack of systematic and controlled access to pixel values for quantitative physical analyses on raster data \citep{2012ASPC..461..411M}. In Earth Sciences, distributors of commercial GIS software have been ready to exploit the potential of the Web. This is the case, for example, with the online ArcGIS WebMap Viewer\footnote{\url{http://www.arcgis.com/home/webmap/viewer.html}}. However, the difficulty in sending 16 or 32 bit precision scientific data using current web technologies has, hitherto, limited web visualization to public outreach applications such as the Microsoft World Wide Telescope available for images of Mars \citep{2011LPI....42.2337S} or Google Earth for Mars\footnote{\url{http://www.google.com/earth/explore/showcase/mars.html}}. Nevertheless, remote scientific visualization has been achieved with tools such as JMars\footnote{\url{http://jmars.mars.asu.edu}} \citep{2009AGUFMIN22A..06C} and HiView\footnote{\url{http://hirise.lpl.arizona.edu/hiview/}}, which are both Java clients designed to visualize remote as well as local data.
The first is GIS based (layer superposition oriented), while HiView is more of a remote sensing tool (raster manipulation oriented), built specifically for HiRISE\footnote{\url{http://hirise.lpl.arizona.edu}}, which uses the JPIP protocol for remote access to JPEG2000 imagery. The visualization system we propose already supports basic manipulations on raster layers, raster layer superposition and could easily manage vector layer creation and superposition. It, moreover, enables access to full precision pixel values and provides a simple and generic solution for planetary applications, efficiently and elegantly blending both GIS and remote sensing approaches. \subsection{Color Compositing} Color compositing is an essential feature in both GIS and remote sensing applications and is used to highlight differences in surface composition by performing on-demand composition of specific color bands. Interactive color composition on the Web can be achieved using the HTML5 \texttt{canvas} element which allows us to directly access and manipulate image pixel values. This, therefore, enables more complex real-time image processing directly within the client and we have developed a version of our client making extensive use of HTML5 \texttt{canvas} properties\footnote{\url{http://image.iap.fr/iipcanvas/hrsc.html}} in order to implement on-demand color composition with multiple input channels (Fig. \ref{fig:hrsc}). Color compositing performance depends essentially on canvas size (the overall image size is irrelevant as only the displayed part of the image needs to be processed). For the example cited above the processing time is about 1 to 3 ms per tile ($256\times 256$ pixels), depending on browser and client hardware. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/hrsccanvas.jpg} \caption{Example of a planetary application using HRSC Mars images (ESA/DLR/FU Berlin/G. Neukum). The resulting color image is a linear combination of input channels.
The mixing matrix is defined by a user-adjustable combination of Red, Green, Blue and a contrast factor for each input channel.} \label{fig:hrsc} \end{figure} \subsection{Terrain Maps - Hillshading} High resolution 3D data is not easy to stream or to make multi-resolution. Furthermore, as {\sc IIPImage} is essentially image-centric, a 2D rendering approach to visualization was favored. In order to facilitate the use of DEM (digital elevation model) data, two approaches have been developed in our {\sc IIPImage} framework. The first approach is the dynamic application of custom colormaps to grayscale images. A new command \texttt{CMP} has been added to the IIP protocol, which can also be useful for the visualization of other physical map data such as gas density, temperature or chemical abundances in real or simulated data. In the second approach, elevation point data is converted to vector normal and height data at each pixel. In this form they can also be stored within the standard TIFF format. The $XYZ$ normal vectors can be packed into a 3 channel ``color'' TIFF, whereas the height data can be packed into a separate 1 channel monochrome TIFF. Both can therefore be tiled, compressed and structured into a multi-resolution pyramid for streaming with {\sc IIPImage}. A basic rendering technique for DEM data is that of hill-shading \citep{horn_hill_1981}, where a virtual directional illumination is used to create shading on virtual ``hills''. A fast hill-shading algorithm has been implemented server-side in {\sc IIPImage} and extended to 32 bit data allowing the user to interactively set the angle of incidence of the light source and view a dynamically rendered hill-shaded relief map. An example showing a Mars terrain map from Western Arabia Terra can be seen online\footnote{\url{http://image.iap.fr/iipdiv/hirise.html}} and in Fig. \ref{fig:hirise}. \begin{figure*}[htb!]
\centering \includegraphics[width=\textwidth]{figures/planetary.png} \caption{Example of a planetary web application using HiRISE Mars images (NASA/JPL/University of Arizona). The digital elevation model (a floating point raster) is displayed using the JET colormap with cuts set by the user from the control panel. Superimposed is the hill-shading layer computed by the {\sc IIPImage} server from the DEM; the azimuth incidence angle can be adjusted from the control panel.} \label{fig:hirise} \end{figure*} \section{Performance Analysis} \label{chap:perf} Although the use of JPEG compression as a delivery format significantly reduces the bandwidth required, data access, dynamic processing and compression of 32 bit data can impose significant server-side overhead that will ultimately dictate the maximum number of users that a server will be able to handle. Timings and memory usage depend on image type, server settings and commands in the query; we chose to focus on the typical case of browsing a large, single channel, single precision floating-point image stored as tiles of $256\times256$ pixels. In order to fully test this, we created a large $131,072\times 131,072$ pixel FITS image by combining contiguous SDSS i-band images using the {\sc SWarp} package \citep{2002ASPC..281..228B}. This large image was then converted to a 92GB multi-resolution TIFF comprising 9 resolution levels using {\sc STIFF}. Our tests were performed on two Dell PowerEdge servers running GNU/Linux (Fedora distribution with kernel 3.11) and equipped with 2.6GHz processors, 32 and 48 GB of RAM and a Perc5i internal RAID controller. In order to check the influence of the I/O subsystem on server performance, we installed the TIFF file on two different types of RAID: \begin{itemize} \item[a)] a RAID 6 array of $12\times3$ TBytes SAS (6Gb/s) hard drives formatted with the XFS filesystem.
\item[b)] a RAID 5 array of $6\times1$ TBytes SATA3 (6Gb/s) solid-state drives (SSDs) formatted with the Ext4 filesystem. \end{itemize} On both systems we obtain a typical sequential read speed of 1.2 GB/s for large blocks; but obviously access times are much lower on the RAID of SSDs ($<1$ms vs 15ms for the one with regular hard drives). The client consists of a third machine sending requests to either of the two servers through a dedicated 10GbE network. We used a modified version of {\sc ab}, the {\sc ApacheBench} HTTP server benchmarking package, to send sequences of requests to random tiles among the 262,144 that compose the highest image zoom level. Appropriate system settings, as prescribed in \cite{Veal:2007:PSM:1323548.1323562}, were applied server-side and client-side to ensure that both ends could sustain the highest possible concurrency levels with minimum latencies and maximum throughput. We conducted preliminary tests through Apache's httpd\footnote{\url{http://httpd.apache.org}}, Lighty Labs' lighttpd\footnote{\url{http://www.lighttpd.net}}, a combination of Nginx\footnote{\url{http://nginx.org}} and Lighty Labs' {\tt spawn-fcgi} and finally LiteSpeed Technologies' OpenLiteSpeed\footnote{\url{http://open.litespeedtech.com}}. We found the latter to offer the best combination of performance and robustness, especially at high concurrency levels; hence all the requests to {\tt iipsrv} in the tests reported below were served through OpenLiteSpeed (one single {\tt lshttpd} instance). \subsection{Timings} Figure \ref{fig:timings} shows the distribution of timings of the main tasks involved in the server-side processing of a tile, for several system and {\tt iipsrv} cache settings. In order to probe the impact of I/O latencies, we set up an experiment where the server system page cache is flushed and the {\tt iipsrv} internal cache is deactivated prior to running the test (upper row of Fig. \ref{fig:timings}).
With such settings, most accesses to TIFF raw tiles do not benefit from caching. As a consequence, {\tt iipsrv} timings are dominated by random file access times when the data are stored on spinning hard disks, with access latencies reaching up to $\approx 25$ms in unfavorable situations. As expected, switching to SSDs reduces the uncached file access latencies to less than 1ms. \begin{figure*}[htb] \centering \includegraphics[width=0.49\textwidth]{figures/iipsrv_raid_nocache.pdf} \includegraphics[width=0.49\textwidth]{figures/iipsrv_ssd_nocache.pdf} \centering \includegraphics[width=0.49\textwidth]{figures/iipsrv_raid_diskcache.pdf} \includegraphics[width=0.49\textwidth]{figures/iipsrv_raid_allcaches.pdf} \caption{Cumulative distributions for the timings of the main tasks involved in the processing of random $256\times256$ pixel tiles in four different contexts (see text for details). ``Initialization'' is the time taken to initialize various objects and (re-)open the TIFF file that contains the requested raw tile (call to {\tt TIFFOpen()}). ``Tile access'' is the time spent accessing and reading the content of a raw tile. ``Normalization'' and ``Gamma correction'' respectively measure the time it takes to apply intensity cuts and to compress the dynamic range of pixel values for the whole tile. ``8 bit quantization'' is the time spent converting the tile to 8 bit format, while ``JPEG encoding'' is the time taken to encode the tile in JPEG format.} \label{fig:timings} \end{figure*} However, in practice much better timings will be obtained with regular hard drives, as tiles are generally not accessed randomly. Moreover, leaving the system page cache un-flushed between test sessions when using spinning hard drives reduces access latencies to a few milliseconds (lower row in Fig. \ref{fig:timings}). Activating {\tt iipsrv}'s internal cache system will further reduce latencies close to zero for tiles that were recently visited.
Further testing with TIFF images of different sizes was carried out in order to ensure that the system would also scale in terms of file size, and the timings reported above remain roughly identical as file size increases up to at least 1.8TB. \subsection{Concurrency and Data Throughput} Each request to a single-threaded FastCGI process takes about 5-10ms to complete; each process is therefore capable of serving up to 100-200 $256\times 256$ ``new'' tiles per second. Higher tile serving rates are obtained by running several instances of {\tt iipsrv} on servers with multiple CPU cores. But how is the system able to keep up with a large number of concurrent requests? As Fig. \ref{fig:concur} shows, the tile serving rate remains remarkably flat, and latency scales linearly with the concurrency level when the number of concurrent requests exceeds that of CPU cores. Setting a limit for average latency to $\sim$500ms for comfortable image browsing, we see that a single 12-core web server can handle $\sim$700 concurrent $256\times256$ tile requests, which corresponds to about 100 users frantically browsing large, uncompressed, single-channel floating-point images. This estimate is well verified in practice, although it obviously depends on tile size and on the amount of processing carried out by {\tt iipsrv}. Note that the average tile serving rate obtained with a single 12-core web server corresponds to a sustained data rate of 60MB/s for $256\times256$ tiles encoded at a JPEG quality factor of 90; higher JPEG quality factors bring the data rate close to the saturation limit of a 1GbE connection. \begin{figure}[htb!]
\centering \includegraphics[width=\columnwidth]{figures/concur.pdf} \caption{Tile serving rate (in blue) and latency (in orange) as a function of the number of concurrent tile requests using 12 instances of {\tt iipsrv} on a 12-core server equipped with an SSD RAID.} \label{fig:concur} \end{figure} \section{Conclusion and Future Work} \label{chap:conclu} A high performance web-based system for remote visualization of full resolution scientific grade astronomy images and data has been developed. The system is entirely open-source and capable of efficiently handling full resolution 32 bit floating point image and elevation map data. We have studied the performance and scalability of the system and have shown that it is capable of handling terabyte-size scientific-grade images that can be browsed comfortably by at least a hundred simultaneous users, on a single server. By using and extending an existing open source project, we have put together a system for astronomy that is fully mature, that will benefit from the synergies of the wider scientific imaging community and that is ready for use in a busy production environment. In addition, the {\sc IIPImage} server is distributed as part of the default Debian, Ubuntu and Fedora package repositories, making installation and configuration of the system very straightforward. All the code developed within this project for {\tt iipsrv} has been integrated into the main code base and will form an integral part of the 1.0 release. However, there are still many potential directions for improvements, both server-side and client-side. Most importantly: \begin{itemize} \item {The TIFF storage format used on the server currently restricts pixel bit depth, the number of image channels, and I/O performance (through {\tt libtiff}).
A valid alternative to TIFF could be the Hierarchical Data Format Version 5 (HDF5) \citep{the_hdf_group_hierarchical_2000}, which provides a generic, abstract data model that enables POSIX-style access to data objects organized hierarchically within a single file; some radio-astronomers have been trying to promote the use of HDF5 for storing massive and complex astronomy datasets \citep{2012ASPC..461..871M}. A more radical approach would be to adopt JPEG2000 as the archival storage format for astronomy imaging archives \citep{2014arXiv1403.2801K}, which could also remove the need for transcoding images for visualization purposes.} \item {Additional image operations could be implemented within {\tt iipsrv}, including real-time hyperspectral image processing and compositing.} \item {Although the {\sc IIPImage} image tile server already supports simple standard tile query protocols and interfaces easily with most image panning clients, a welcome addition would be to offer support for the more GIS-oriented WMTS (Web Map Tile Service) protocol \citep{wmts}}. \item {The International Virtual Observatory Alliance (IVOA) has agreed on a standard set of specifications for discovering and accessing remote astronomical image datasets: the Simple Image Access Protocol (SIAP) \citep{2011arXiv1110.0499T}. The response to an SIAP query consists of metadata and download URLs for matching image products. Current SIAP specifications\footnote{\url{http://www.ivoa.net/documents/SIA/}} do not provide specific ways to access pyramids of tiled images. Still, support for SIAP could be implemented within or outside of {\tt iipsrv} for generating, for example, JPEG cutouts or lists of tiles that match a given set of coordinates/field of view/pixel scale.} \item {Both {\sc IIPMooViewer} and {\sc Leaflet} clients require all layers displayed on a map at the same moment to share the same ``native'' pixel grid (projection).
Although this limitation does not prevent ``blinking'' images with different pixel grids, it precludes overlapping different observations/exposures on screen. For instance, it makes it impossible to accurately display the entire focal plane of a mosaic camera on a common viewport, without prior resampling. Having different images with different native pixel grids sharing the same map would require the web client to perform real-time reprojection. Client-side reprojection should be possible, e.g., with version 3 of the {\sc OpenLayers} library\footnote{\url{http://ol3js.org/}}}. \end{itemize} \section{Acknowledgments} The authors would like to thank the anonymous referees whose comments helped not only in improving the clarity of this paper, but also the performance of the code. CM wishes to acknowledge Prof. Joe Mohr for hospitality at USM, Munich, and the SkyMapper team, in particular Prof. Brian Schmidt, Dr. Patrick Tisserand and Dr. Richard Scalzo for support during her stay at MSO-ANU, Canberra where part of this work was completed. EB thanks Raphael Gavazzi, Val\'erie de Lapparent, and the Origin and Evolution of Galaxies group at IAP, Paris for financial support with the {\sc VisiOmatic} hardware, and Dr. Herv\'e Bouy at CAB, Madrid for providing content for the {\sc VisiOmatic} demos. The {\sc VisiOmatic} client implements services provided by the Sesame Name Resolver and the VizieR catalog access tool developed at CDS, Strasbourg, France. Some of our demonstration data are based on SDSS-III\footnote{\url{http://www.sdss3.org}} images. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. \bibliographystyle{model2-names}
\section{Introduction} The development of new applications based on innovative thermoset polymers is strongly correlated with the understanding of their chemorheological behavior \cite{fredi2019novel,ageyeva2019review,van2007reactive}. Obtaining a chemorheological master model \cite{chiacchiarelli2012cure,kenny1990model,lelli2009,maffezzoli1994correlation} is a key prerequisite for the successful application of polymers in advanced applications. For example, polymer composite prepregs \cite{mazumdar2001composites} represent a material of choice for advanced applications in aeronautics as well as in aerospace \cite{mazumdar2001composites,mallick2007fiber}. Taking into account that the control and evolution of chemical conversion is a key issue for the development of prepregs, it is crucial to use chemorheological master models to that purpose. This, in turn, helps to avoid costly experiments whose sole purpose is to obtain an empirical solution to the problem. Even though several studies have been conducted to understand the chemorheological behavior of polyurethane thermosets \cite{chiacchiarelli2012cure,chiacchiarelli2013kinetic,diaz2011cure,milanese2011cure,papadopoulos2009thermal,rodrigues2005dsc,saha2011study}, the authors have not found any focusing on thermosets synthesized from soybean oil.\\ \\The search for sustainable and environmentally friendly solutions within the polyurethane industry has fueled scientific and industrial activities which emphasize the replacement of fossil-fuel precursors with renewable ones \cite{chiacchiarelli2019sustainable}. One of the most relevant initiatives in this regard is the development of polyols from vegetable oils \cite{desroches2012vegetable,fridrihsone2020life,miao2014vegetable,petrovic2008polyurethanes,xia2010vegetable,petrovic2013biological}.
Within the Americas region, soybean-oil based polyols are currently being developed both in industry and in scientific laboratories with the general objective of partially or fully replacing petroleum-based polyols. Thermosetting polymers are ubiquitous for the development of polymer composites; however, a highly crosslinked network is a requirement to achieve that goal \cite{ionescu2005chemistry}. In polyurethanes, this means that highly functional polyols as well as isocyanates are mandatory. As we have already pointed out in a previous publication \cite{herran2019highly}, the synthesis of high hydroxyl number (OH\#) biobased polyols combined with a suitable functionality (>3.0) and low apparent viscosity (< 2000 cp) represents a challenge within this field. However, these polyols have a key property which renders them suitable for thermosetting applications, that is, hydrophobicity. Epoxidation of soybean-oil is a frequent intermediate synthesis route to obtain polyols with low OH\# \cite{petrovic2002epoxidation,campanella2006,campanella2007}. A slight change of temperature \cite{petrovic2002epoxidation,campanella2006,campanella2007,campanella2008,santacesaria2020soybean,aguilera2018epoxidation} within the synthesis protocol can lead to reduced epoxidation efficiencies, giving rise to a polyol with a low OH\#, good functionality and relatively low apparent viscosity \cite{herran2019highly}. In addition, the presence of oxirane moieties as well as low polarity fatty ester chains gives the polyol a highly hydrophobic character. It is known that polyurethane formulations are prone to the formation of gases during cure \cite{szycher1999handbook}. Interestingly, a hydrophobic polyol can be extremely useful to circumvent this issue, allowing the development of a thermoset which can potentially be applied to polymer composites.
Introducing this requirement in the analysis, and taking into account that these polyols have an OH\# on the order of 150 mg KOH.g$^{-1}$ and functionalities of around 2.9, it is essential to have a crosslinker to develop a suitable formulation. We propose glycerin as a suitable candidate to achieve this goal. As a byproduct of the biodiesel industry, this biobased crosslinker is a key component to improve the thermomechanical properties of the final polymer. Even though glycerin has been used previously as a crosslinker as well as a starter for polyol synthesis \cite{anbinder2020structural,meiorin2020comparative,corcione2009glass,czachor2021hydrophobic}, the authors have not found any study using the formulation proposed here.\\ \\In this work, a complete chemorheological analysis of a thermosetting polyurethane (TsPU) suitable for application in polymer composites has been developed using differential scanning calorimetry (DSC) as well as rotational rheometry. The effect of catalyst type and concentration, crosslinker concentration, isocyanate index (NCO$_{index}$) as well as polyol crystallization has been systematically incorporated in the model. Thermomechanical studies of uncured and post-cured samples using dynamic mechanical thermal analysis (DMA) and quasi-static flexural bending were performed. Finally, a general chemorheological master model is developed for the proposed formulation.\\ \section{Materials and Methods} \label{sec:headings} \subsection{Materials and sample preparation procedures} The epoxidized soybean oil polyol (ESO) was obtained using an epoxidation protocol based on formic acid (Anedra, A.C.S. grade) as the carrier and hydrogen peroxide (Stanton, 30\% vol.) as the oxidizer, following the procedure of previous works \cite{petrovic2002epoxidation,campanella2006,campanella2007,campanella2008}.
The reactions were performed at isothermal conditions (50\textdegree\ C) using an RBD grade soybean oil provided by Molinos Rio de La Plata, by initially mixing the soybean oil with formic acid and subsequently adding the oxidizer at a rate of 2 ml.min$^{-1}$. The organic phase was extracted using ethyl acetate (Biopack, A.C.S. grade) and neutralized with a saturated NaCl solution. Finally, the resultant polyol, from now on denominated SOYPOL, was degassed using a vacuum mixer for 2 hours. The OH\# of the resultant polyol (137 mg KOH.g$^{-1}$) was obtained following the guidelines of test method A of ASTM D4274.\\ \\A polymeric methylene diphenyl diisocyanate (pMDI), denominated commercially as Suprasec 5005 (Huntsman), with an NCO$_{number}$ of 31.0 and a functionality of 2.7 was used to prepare all the thermosetting samples. Before use, the pMDI was degassed using a Dispermat vacuum mixer at 30 mbar while stirring at 5300 rpm. Glycerin (Anedra, USP) with a functionality of 3.0 and an OH\# of 1800 mg KOH.g$^{-1}$ was used after degassing in a vacuum mixer. Dibutyltin dilaurate (95\%, Sigma Aldrich) and dimethylcyclohexylamine (99\%, Huntsman) were used as catalysts.\\ \\The polyurethane formulations used in this work are reported in Table 1. A total of eight formulations were tested to analyze the effect of catalyst type, isocyanate index (NCO$_{index}$) and crosslinking. The general preparation procedure consisted first of degassing the components using the protocols described in the previous paragraph. Then, samples of approximately 80 g were prepared by mixing all the components listed in Table 1, adding the pMDI last. Finally, the sample was mixed at a speed of 7800 rpm at 30 mbar. It is important to highlight that, only for section 4, the final addition of pMDI was performed using manual mixing.
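The stoichiometric bookkeeping behind the NCO$_{index}$ values of Table 1 can be sketched with the standard equivalent-weight relations (56100/OH\# for hydroxyl components, 4200/\%NCO for isocyanates). In the sketch below the batch masses are hypothetical illustrations; only the OH\# values (137 and 1800 mg KOH.g$^{-1}$) and the NCO$_{number}$ (31.0) are taken from the text.

```python
# Hedged sketch of isocyanate-index bookkeeping; batch masses are hypothetical.
KOH_EQ = 56100.0   # mg KOH per equivalent (56.1 g/mol x 1000)
NCO_EQ = 4200.0    # conversion constant for %NCO (42 g/mol x 100)

def oh_equivalents(mass_g, oh_number):
    """Hydroxyl equivalents of a polyol (OH# in mg KOH/g)."""
    return mass_g * oh_number / KOH_EQ

def nco_equivalents(mass_g, nco_number):
    """Isocyanate equivalents (NCO number = %NCO by weight)."""
    return mass_g * nco_number / NCO_EQ

def nco_index(polyols, isocyanates):
    """NCO_index = total NCO equivalents / total OH equivalents."""
    oh = sum(oh_equivalents(m, n) for m, n in polyols)
    nco = sum(nco_equivalents(m, n) for m, n in isocyanates)
    return nco / oh

# Hypothetical batch: 60 g SOYPOL (OH# 137) + 5 g glycerin (OH# 1800).
oh_total = oh_equivalents(60, 137) + oh_equivalents(5, 1800)
# pMDI mass (NCO number 31.0) needed for a stoichiometric index of 1.0:
pmdi_mass = oh_total * NCO_EQ / 31.0
index_check = nco_index([(60, 137), (5, 1800)], [(pmdi_mass, 31.0)])  # ~ 1.0
```

An NCO$_{index}$ above or below unity then quantifies the isocyanate excess or deficit discussed later for the TsPU formulations.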
As already noticed in previous studies \cite{castro1982studies,macosko1989rim}, the mixing intensity has a fundamental role in the reactivity of polyurethane formulations. DSC analysis performed with hand-mixed samples of the TsPU$_{7}$ revealed the presence of both low and high temperature transitions, indicating that the mixing efficiency led to a probable reaction between the isocyanate radicals.\\ \subsection{Sample characterization techniques} Heat flow analysis was performed using a Shimadzu DSC-60 differential scanning calorimeter (DSC) equipped with aluminum hermetic pans. The dynamic thermal cycles started at 25\textdegree\ C and went up to 200\textdegree\ C at a scan rate of 2\textdegree\ C.min$^{-1}$. Sample mass was typically on the order of 10 mg.\\ \\FTIR spectra were obtained using a Shimadzu IRAffinity-1 using attenuated total reflection (ATR) and absorption methods. The ATR method was used for liquid samples (uncured precursors). The ATR accessory consisted of a Miracle single reflection device equipped with a ZnSe prism. Each spectrum was obtained by recording 50 scans in the range 4000 cm$^{-1}$ to 600 cm$^{-1}$ with a resolution of 4 cm$^{-1}$ at ambient temperature. The phenyl absorption band (centered at 1595 cm$^{-1}$) was used to height-normalize each spectrum.\\ \\Dynamic Mechanical Thermal Analysis (DMA) was carried out using a PerkinElmer DMA 8000 in the single cantilever bending fixture mode. The oscillation frequency was fixed at 1.0 Hz and the amplitude at 0.004 mm, previously selected from a strain scan at ambient temperature. At least three samples were used to corroborate the reproducibility of the results. The sample dimensions were on the order of 10 mm in length, 8.8 mm in width and 1.8 mm in thickness.\\ \\Rheological measurements were performed using a Brookfield-type rotational viscometer (Myr VR 3000) with an LV-3 stainless steel spindle. The measurements were performed at isothermal conditions with samples of approximately 150 g
using a stainless steel container submerged in an oil bath thermally controlled with a heating plate (Velp Scientific).\\ \\Quasi-static flexural mechanical tests were performed with an Instron 5985 following the guidelines of the ASTM D790 standard. For each mechanical test, five samples were tested. The span-to-depth ratio was set to 16:1 and the speed of the flexural deformation was 0.1 mm.min$^{-1}$. Before the flexural tests, these samples were post-cured in a horizontal forced-air drying oven (Milab 101-2AB) from ambient temperature to 70\textdegree\ C and to 110\textdegree\ C at 10\textdegree\ C.min$^{-1}$, with heating steps of 20\textdegree\ C and 60 min dwell time. Both post-cure cycles were concluded with free cooling to ambient temperature.\\ \section{Results and discussion} \subsection{The role of catalyst type and concentration on the cure of the TsPU} A key issue for the development of a TsPU formulation is the proper selection of catalyst type and concentration. If the main objective is to develop a thermosetting polyurethane, then the catalyst should selectively promote the formation of urethane or other covalent bonds (gelling reactions) while simultaneously avoiding the formation of CO$_{2}$(g), which is caused mainly by the reaction of isocyanate with water (blowing reaction). Organotin compounds, such as stannous octoate and dibutyltin dilaurate (DBTDL), are currently widely employed in industry to this effect \cite{Randall2002ThePB}. However, the main drawback of such catalysts is the increased toxicity of the resultant polyurethane formulation, particularly for biomedical applications. Organocatalysts \cite{sardon2015synthesis} have a great potential to replace organotin compounds; however, their availability in industry is still limited to niche applications.
Tertiary amines, such as DMCHA, have also been widely employed to alleviate this effect; however, their selectivity towards blowing reactions is much higher than towards gelling reactions. This represents a serious drawback because TsPUs need to have very low porosity levels to achieve improved thermomechanical properties. When it comes to the concentration of the catalyst, the minimum value is usually associated with the minimum amount capable of changing the order of the chemical reaction taking place \cite{marciano1982curing}. For the case of DBTDL, early studies by Marciano et al. \cite{marciano1982curing} have shown that a minimum concentration of 2.6 mol.m$^{-3}$ was appropriate. On the other hand, the maximum value is usually defined as a function of the processing method. For example, for the case of spray polyurethane foams (SPF), moderate to high concentrations are necessary to achieve a fast surface adhesion of the foam. On the other hand, for the case of thermosets applied in the polymer composite industry, the processing window is usually deduced from chemoviscosity profiles \cite{yuksel2021material}. For example, if infusion \cite{poodts2013fe} is used as the manufacturing process, it might be argued that having an apparent viscosity below 1000 mPa.s is a key prerequisite for the successful application of a thermosetting polymer \cite{mazumdar2001composites}. For this reason, we have reported the chemoviscosity profile of the proposed formulation (see section 4).\\ \\The effect of catalyst type and concentration was analyzed from the exothermic thermal transitions identified by performing dynamic thermal scans using DSC analysis. These events were quantified by indicating the position of the thermal event (exothermic peak temperature) and the total enthalpy associated with it ($\Delta$H$_{T}$). The results of these analyses are reported in Table 2.
For the case of the TsPU$_{1}$, an exothermic event centered at 80\textdegree\ C with an enthalpy of 3.32 J.g$^{-1}$ was identified. The position as well as the enthalpic value clearly indicated that the SOYPOL had a low reactivity, an aspect which can be explained by the fact that the hydroxyl groups present in the polyol are secondary and that the oxirane rings also contribute a strong steric hindrance effect \cite{ionescu2005chemistry}. If the $\Delta$H$_{T}$ is expressed as a function of isocyanate equivalents (see Table 2), we can also notice that the value is well below the heat of reaction of typical isocyanate-hydroxyl reactions reported in the literature \cite{macosko1989rim,zhao2015}. This result was expected and can be explained by the absence of a catalyst in the formulation as well as the low reactivity of the SOYPOL. FTIR analysis (Fig. 1) of the TsPU$_{1}$ revealed the presence of urethane bonds, hydroxyl groups and free isocyanates, supporting the hypothesis that those reactions had occurred, but to a lesser extent.\\ \\The effect of increasing amounts of DMCHA on the cure enthalpy of the formulation is also reported in Table 2. The thermal transition shifted to higher temperatures \cite{lipshitz1977kinetics} and the $\Delta$H$_{T}$ increased by +174\% (for 0.9 wt.\% of DMCHA), indicating that the catalyst was effective in increasing the catalytic activity of the formulation. This fact clearly suggests that the formulation was capable of producing reactions which might not be associated only with urethane linkages. For temperatures above 100\textdegree\ C, a wide variety of reactions between isocyanate groups are feasible \cite{ionescu2005chemistry}. This hypothesis was supported by the FTIR analysis (Fig. 1), which corroborated the presence of isocyanurate, urea and other chemical bonds typically found in such formulations.\\ \\The effect of incorporating DBTDL in the formulation is reported in Table 2 (TsPU$_{5}$).
At only 0.2 wt.\%, the thermal transition shifted to much lower temperatures and the $\Delta$H$_{T}$ also increased to levels similar to those of the TsPU$_{4}$ formulation. This fact clearly indicated that this catalyst was much more effective at lower concentrations, a result which was expected \cite{Randall2002ThePB}.\\ \\The effect of incorporating glycerin in the formulation catalyzed with DBTDL is also reported in Table 2 (TsPU$_{6}$, TsPU$_{7}$ and TsPU$_{8}$). A substantial increase of the $\Delta$H$_{T}$ was measured, reaching a value of 35.7 J.g$^{-1}$. Taking into account that this formulation had the highest amount of hydroxyl equivalents, these results were expected. In addition, the $\Delta$H$_{TNCO}$ (enthalpy normalized with respect to isocyanate equivalents) was also the highest, indicating that the role of urethane linkages was predominant for the formation of a crosslinked network.\\ \\Finally, the effect of the isocyanate index (NCO$_{index}$) is also reported in Table 2. It is important to highlight that the index study was based on the TsPU$_{7}$ formulation. By reducing the index to 0.31, we found a 62\% decrease of the $\Delta$H$_{T}$, clearly indicating that the formulation had a deficit of isocyanate equivalents. On the other hand, an increase of the NCO$_{index}$ to 1.37 also caused a 13.4\% decrease of the $\Delta$H$_{T}$, indicating an excess of isocyanate equivalents. Taking into account these results, it is logical to deduce that the proposed TsPU$_{7}$ formulation had an adequate stoichiometric relation, giving rise to a balance of hydroxyl as well as isocyanate equivalents. This is further confirmed by the $\Delta$H$_{TNCO}$, which was the highest of all the tested formulations.\\ \\Lastly, it is important to comment on the specific values of $\Delta$H$_{TNCO}$ measured for all the formulations.
In the scientific literature \cite{castro1982studies,lipshitz1977kinetics}, and, in particular, when model systems are under analysis, the total reaction enthalpy is normalized with respect to the isocyanate or hydroxyl equivalents. The main purpose of this normalization is to compare the enthalpies for the formation of a urethane bond. For example, several studies have indicated that, for model systems, the enthalpy of urethane bond formation should be 83.6$\cdot$10$^{3}$ J.mol$^{-1}$ \cite{macosko1989rim,zhao2015}. Then, by analyzing the results reported in Table 2, it might be argued that the values reported are far below what has been established by previous works. However, it is important to highlight that not only urethane bonds are being formed in the TsPU$_{7}$ formulation. As already discussed above and supported by FTIR analysis (Fig. 1), other chemical bonds were present, which might be associated with lower formation energies. Another important aspect of this formulation has to do with the role of the rubbery-to-glass transition temperature (Tg). As will be explained in detail in the following section, this formulation had a cure kinetics which was substantially hindered by diffusion effects. This meant that it was logical to have a low $\Delta$H$_{TNCO}$ during cure, because such systems tend to require very long post-cure cycles. This hypothesis was also supported by the results presented in section 3.3.\\ \subsection{The role of polyol crystallization on cure kinetics} Due to the fact that the SOYPOL used in this work was synthesized from an epoxidation reaction, it is possible that the unreacted oxirane groups present in the molecular structure of the SOYPOL might lead to macromolecular crystallization \cite{petrovic2007network}.
This phenomenon has previously been identified in other studies \cite{petrovic2007network,lin2008kinetic}; however, all the melting transitions were found well below -10\textdegree\ C, indicating that this phenomenon was relevant only at very low temperatures. It is important to highlight that the authors have not found kinetic studies which quantify the crystallization or melting of such polyols. As far as our studies are concerned, the crystallization of the SOYPOL was visually identified after samples were stored at temperatures ranging from -18\textdegree\ C to approximately 4\textdegree\ C. The formation of crystals of the SOYPOL as well as of the SOYPOL formulated with glycerin is depicted in Fig. 2.\\ \\At this point, it is important to emphasize that crystallization can have a deleterious impact on the final properties of a TsPU. For example, if a polyol that has been partially crystallized is used in the formulation, the polyol will have, under this condition, a lower effective OH\#. This is because the crystals behave as a second solid phase with a very low reactivity towards the isocyanate precursor. The consequence is that the polyurethane formulation will have a higher NCO$_{index}$, because fewer hydroxyl groups will be available to react with the isocyanate precursor. In addition, due to the fact that the polyurethane cure generates heat, the crystals will melt, giving rise to localized zones with incomplete cure. Such phenomena will certainly adversely affect the thermomechanical properties of the resultant TsPU.\\ \\To further understand this phenomenon, a series of experiments was performed to study the effect of temperature on crystallization. The polyols reported in Table 3 were stored at temperatures within the range of -18\textdegree\ C to 4\textdegree\ C for periods of time ranging from 15 days to 180 days.
Subsequently, samples of those conditioned polyols were analyzed with DSC using a standard dynamic thermal scan so as to identify melting endotherms. For example, for the case of the SOYPOL stored at 4\textdegree\ C for 180 days, an endotherm centered at 44\textdegree\ C and with a total endothermic enthalpy of 4.74 J.g$^{-1}$ was measured. The presence of a melting endotherm centered at temperatures well above ambient temperature clearly suggested that polyol crystallization could have a strong effect on cure kinetics.\\ \subsection{Dynamic Mechanical Analysis (DMA)} The elastic bending modulus (E$'$) as well as the damping factor (tan $\delta$) as a function of temperature for the TsPU$_{7}$ formulation are depicted in Fig. 3. The sample TsPU$_{7}$-1 was obtained from a plate cured at ambient temperature. After this thermal cycle was conducted, the same sample was subjected to a second thermal cycle, denominated TsPU$_{7}$-2. For the case of the TsPU$_{7}$-1, the storage bending modulus (E$'$) had a substantial decrease as a function of temperature, starting at 4.66$\cdot$10$^{9}$ Pa at ambient temperature and going down to 3.58$\cdot$10$^{8}$ Pa at 180\textdegree\ C. In addition, the damping factor (tan $\delta$) presented a thermal transition centered at approximately 100\textdegree\ C. However, this transition was not clearly defined, indicating that the sample was curing while the analysis was being conducted. On the other hand, for the case of the TsPU$_{7}$-2, a clearly defined thermal transition (Tg) was found at approximately 220\textdegree\ C, indicating that the material attained a higher conversion. In addition, the residual storage elastic modulus was 6.7$\cdot$10$^{7}$ Pa, indicating that only 1.3\% of the initial E$'$ was retained at 240\textdegree\ C.\\ \\It is important to highlight that, as already mentioned in the previous section, vitrification substantially inhibited the final conversion of the TsPU.
When a sample is obtained from an unheated mould, it is logical to expect that a post-cure cycle will be necessary to achieve improved thermomechanical properties. We intentionally chose to conduct these experiments so as to highlight the impact of vitrification on the thermomechanical properties of the TsPU$_{7}$.\\ \subsection{Quasi-static flexural mechanical tests} The results of the flexural mechanical tests performed on samples of the TsPU$_{7}$ post-cured at 70\textdegree\ C (TsPU$_{7}$ – 70\textdegree\ C) and 110\textdegree\ C (TsPU$_{7}$ – 110\textdegree\ C) are reported in Table 4. The flexural strengths attained under both conditions were similar, with a maximum value of 99.4 MPa for the case of the sample cured at the higher temperature. The flexural modulus had a strong change as a function of the post-cure cycle, increasing by up to 24\% to 2.14 GPa for the case of the TsPU$_{7}$ – 110\textdegree\ C. In addition, the standard deviation of the flexural modulus decreased substantially as a function of higher post-cure cycle temperatures. This clearly supported the hypothesis that vitrification was playing a key role in the post-cure of this thermosetting formulation. For this reason, higher temperatures were needed in order to attain full cure and, subsequently, to homogenize the mechanical properties of the material under analysis. Finally, the flexural strain to failure was also highly dependent on the post-cure cycle, decreasing by up to 30\% to 5.69\% for the case of the high temperature post-cure cycle. This result also reflected the fact that subsequent crosslinking took place during the post-cure cycle.
Such an effect was expected due to the fact that the initial crosslinking of the low OH\# polyol (ESO) caused vitrification (see section 4.2), substantially decreasing the rate of reaction of the OH groups present in the glycerin.\\ \section{Chemorheological master model of the TsPU$_{7}$ baseline formulation} \subsection{Cure kinetics of the TsPU$_{7}$ formulation} The heat flow as a function of time for isothermal experiments during the cure of the TsPU$_{7}$ is depicted in Fig. 4. The isothermal experiments were performed from 30\textdegree\ C up to 70\textdegree\ C. First, it is important to emphasize that the time was synchronized with the start of the DSC isothermal experiment. The total time should also include the sample preparation time, which is reported in the experimental section (2.1). This preparation time sets the limit of the maximum isothermal temperature, which was 70\textdegree\ C. For higher temperatures, the speed of the reaction would increase the exothermic heat flow to such an extent that its measurement would not be possible.\\ \\The total reaction enthalpy ($\Delta H_{ISO}$) associated with each isothermal experiment is reported in Table 5. Taking into account the $\Delta H_{T}$ reported in Table 2, the maximum conversion ($\alpha_{max}$) of each isothermal experiment was also calculated and reported in Table 5. For example, at 30\textdegree\ C, a maximum conversion of 42\% was achieved during cure. Clearly, this was an indication that vitrification took place and that a proper cure cycle should certainly include a post-cure cycle.
On the other hand, the highest isothermal temperatures showed full conversion, indicating that complete cure was attained within this temperature range.\\ \\The evolution of the maximum conversion as a function of cure temperature was found to follow a Boltzmann behavior according to the following equation:\\ \begin{equation} \label{eq:boltzmann} \alpha_{max}=\frac{A}{1+\exp\left(\beta(T-T_{0.5})\right)}+B \end{equation}\\ where A and B are fitting parameters, $\beta$ is a fitting parameter which affects the slope of the conversion curve and T$_{0.5}$ is the absolute temperature (K) at which half conversion is achieved. The fitted parameters can be consulted in Table 6.\\ \\To better understand which phenomenological model was appropriate to predict the cure kinetics of the TsPU$_{7}$, the experimental data presented in Fig. 5 were expressed as a function of conversion rate using Eq. 2, where $\Delta H_{ISO}$ was the total enthalpic contribution associated with each isothermal experiment and $\Delta H_{T}$ represented the total enthalpy of the TsPU$_{7}$ formulation. The experiments expressed using conversion are reported in Fig. 6. The shape of the curves presented in Fig. 5 was used to infer that an autocatalytic phenomenological model should be implemented for the TsPU$_{7}$. In addition, taking into account that the maximum conversion increased as a function of temperature, it was also necessary to include vitrification in the model. Hence, the following equation was proposed,\\ \begin{equation} \label{eq:autocat} \frac{d\alpha}{dt}=k\alpha^m(\alpha_{max}-\alpha)^n \end{equation}\\ where n and m represent the reaction orders and k is the rate constant, which includes an Arrhenius-type temperature dependency and is determined as follows, \begin{equation} \label{eq:arrhenius} k=k_0\exp\left(-\frac{E_a}{RT}\right) \end{equation}\\ where k$_{0}$ is the pre-exponential factor, E$_{a}$ is the activation energy, R is the gas constant and T is the absolute temperature.
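A minimal numerical sketch of the three expressions above (Boltzmann $\alpha_{max}$, autocatalytic rate law and Arrhenius rate constant) is given below. All parameter values are hypothetical placeholders chosen only to reproduce the qualitative behavior reported here (a maximum conversion near 40\% at 30\textdegree\ C rising towards full conversion at higher temperatures); they are not the fitted values of Tables 6-8.

```python
import math

def alpha_max(T, A=-0.7, B=1.0, beta=0.15, T_half=315.0):
    """Boltzmann sigmoid for the vitrification-limited maximum conversion (T in K)."""
    return A / (1.0 + math.exp(beta * (T - T_half))) + B

def rate_constant(T, k0=1.0e5, Ea=50e3, R=8.314):
    """Arrhenius rate constant; k0 (s^-1) and Ea (J/mol) are placeholders."""
    return k0 * math.exp(-Ea / (R * T))

def cure(T, m=0.5, n=1.5, t_end=3600.0, dt=1.0, alpha0=1e-3):
    """Explicit-Euler integration of d(alpha)/dt = k alpha^m (alpha_max - alpha)^n.
    A small seed alpha0 is needed because the autocatalytic rate is zero at alpha = 0."""
    a, amax, k = alpha0, alpha_max(T), rate_constant(T)
    for _ in range(int(t_end / dt)):
        a = min(a + dt * k * a**m * max(amax - a, 0.0)**n, amax)
    return a

# Higher cure temperature -> faster kinetics and a higher attainable conversion:
low, high = cure(303.15), cure(343.15)   # 30 C vs 70 C, 1 h of cure
```

The `min(..., amax)` cap expresses the vitrification limit: the isothermal conversion cannot exceed the temperature-dependent $\alpha_{max}$.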
The activation energy (E$_{a}$) was calculated from the slope of the natural logarithm of the rate constant k, previously fitted from the rate of conversion (d$\alpha$/dt) at each temperature, versus the inverse temperature (1/T). All values are reported in Table 7 and Table 8.\\ \subsection{Evolution of the rubbery-to-glass transition temperature (Tg) as a function of conversion} The evolution of Tg as a function of conversion is a key aspect to understand the role of vitrification on the cure of a thermosetting polymer \cite{teil2004ttt}. From a processing point of view, the occurrence of vitrification represents a drawback, because it limits the maximum conversion during cure, affecting the final mechanical properties of the polymer under analysis. A glassy state during cure will certainly indicate that a post-cure cycle should be implemented, so as to attain the maximum Tg of the thermosetting polymer. However, vitrification can also be used to induce a cure inhibition effect, due to the fact that the glassy state inhibits the formation of covalent bonds within the reactive mixture. Further details about the role of vitrification on poly(urethane-isocyanurate) thermosets can be consulted in a previous study of our group \cite{chiacchiarelli2013kinetic}.\\ \\To understand how the Tg evolved as a function of conversion, dynamic thermal experiments were performed on samples previously cured at isothermal conditions. The evolution of Tg as a function of conversion is depicted in Fig. 7.
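The Arrhenius analysis described above reduces to a linear regression of ln(k) against 1/T, with E$_a$ recovered from the slope. The sketch below uses synthetic rate constants generated from placeholder values of E$_a$ and k$_0$ (not those of Tables 7 and 8) to verify the recovery:

```python
import math

R = 8.314  # J/(mol K)

def fit_arrhenius(temps_K, ks):
    """Least-squares line through (1/T, ln k); returns (Ea, k0)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return -slope * R, math.exp(intercept)  # Ea = -slope * R

# Synthetic k(T) built with Ea = 55 kJ/mol and k0 = 1e6 s^-1 (placeholders):
temps = [303.15, 313.15, 323.15, 333.15, 343.15]
ks = [1e6 * math.exp(-55e3 / (R * T)) for T in temps]
Ea_fit, k0_fit = fit_arrhenius(temps, ks)
```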
The Di-Benedetto equation was used to model the experimental results using the following expression,\\ \begin{equation} \label{eq:dibenedetto} \frac{Tg-Tg_0}{Tg_0}=\frac{\left(\frac{E_\infty}{E_0}-\frac{C_\infty}{C_0}\right)\alpha}{1-\left(1-\frac{C_\infty}{C_0}\right)\alpha} \end{equation}\\ where Tg$_0$ is the glass transition temperature of the initial monomers in the system, $\alpha$ the extent of reaction (conversion), E$_{\infty}$/E$_{0}$ the ratio of lattice energies of the fully cross-linked and un-cross-linked polymer and C$_{\infty}$/C$_{0}$ the corresponding segmental mobility ratio.\\ \\The evolution of Tg as a function of conversion is a key measurement to understand the role of vitrification on the cure kinetics of the thermosetting polymer under analysis. To better understand this, we have incorporated in Fig. 7 the results of other thermosetting polymers, particularly poly(urethane-isocyanurate) [10] and anhydride-cured epoxy systems \cite{belnoue2016novel}. For a fixed conversion, e.g. 40\%, we can deduce that the Tg of the polyurethane system was much higher with respect to the epoxy. This tendency was also found for the case of low conversions but, for conversions above 80\%, the Tg values tended to converge. These observations clearly reflect how vitrification influences the cure of polyurethane and epoxy systems. Whenever a thermosetting polymer presents a Tg versus conversion curve shifted upwards (Fig. 7), it is expected that vitrification will certainly inhibit cure. This is because the gelation process is much slower in comparison to the formation of molecular chains which can undergo vitrification at curing temperatures. Hence, it is also expected that the final cure of the polymer will take much more time, because the chemical reactions mostly take place under a glassy state.
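The Di-Benedetto relation above can be evaluated directly once the two ratios are fitted. In the sketch below, Tg$_0$ and the ratios are hypothetical placeholders (chosen so that the fully cured Tg lands near the approximately 220\textdegree\ C observed by DMA), not the values fitted in this work:

```python
def tg_dibenedetto(alpha, Tg0, e_ratio, c_ratio):
    """Tg(alpha) from (Tg - Tg0)/Tg0 = (e_ratio - c_ratio)*alpha / (1 - (1 - c_ratio)*alpha),
    with e_ratio = E_inf/E_0 (lattice energy ratio) and c_ratio = C_inf/C_0
    (segmental mobility ratio)."""
    rel = (e_ratio - c_ratio) * alpha / (1.0 - (1.0 - c_ratio) * alpha)
    return Tg0 * (1.0 + rel)

Tg0 = 250.0  # K, hypothetical Tg of the uncured mixture
# Tg rises monotonically with conversion when e_ratio > c_ratio;
# these placeholder ratios give Tg(1.0) = 500 K (about 227 C):
tg_curve = [tg_dibenedetto(a / 10, Tg0, e_ratio=1.2, c_ratio=0.6) for a in range(11)]
```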
However, this effect can also be used to inhibit cure, an aspect which is fundamental for the development of prepregs \cite{centea2015review,tuncay2018fast}.\\ \subsection{Chemorheological behavior of the TsPU$_{7}$} The apparent viscosity as a function of time for isothermal experiments of the TsPU$_{7}$ formulation is depicted in Fig. 8. For an isothermal experiment at 40\textdegree\ C, the initial apparent viscosity was on the order of 5$\cdot$10$^{2}$ mPa.s and started to increase significantly after 25 min, reaching 3$\cdot$10$^{3}$ mPa.s at approximately 45 min. In contrast, the isothermal experiment at 60\textdegree\ C yielded an apparent viscosity of 3$\cdot$10$^{3}$ mPa.s at only 12.5 min. To better understand the evolution of apparent viscosity as a function of conversion, the results depicted in Fig. 9 were modeled using the following equation, proposed originally by Kim and Macosko \cite{macosko1989rim},\\ \begin{equation} \label{eq:macosko} \eta (T,\alpha)=\eta (T)\left(\frac{\alpha_g}{\alpha_g-\alpha}\right)^{a+b\alpha} \end{equation}\\ where $\alpha_g$ is the theoretical gel point ($\alpha_g$ = 0.533), a and b are fitting parameters and the function $\eta$(T) has an Arrhenius dependence as indicated in the following equation,\\ \begin{equation} \label{eq:etaT} \eta(T)=\eta_0\exp\left(\frac{E_{a,\eta}}{RT}\right) \end{equation}\\ where $\eta_{0}$ is the pre-exponential factor, E$_{a,\eta}$ is the activation energy of the viscous flow, R is the gas constant and T is the absolute temperature. The fitting parameters obtained from the experimental results are reported in Table 9.\\ \\A very important deduction from Fig. 9 has to do with the role of vitrification and gelation in the evolution of apparent viscosity. For example, for the case of the isothermal experiment at 40\textdegree\ C, it can be understood from Fig. 9 that at a conversion of approximately 0.25 the increase of apparent viscosity steepened significantly.
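A sketch of the chemoviscosity model above is given below; only the gel point $\alpha_g$ = 0.533 is taken from the text, while $\eta_0$, the flow activation energy and the exponents a and b are hypothetical placeholders chosen to give an uncured viscosity of a plausible order of magnitude:

```python
import math

R = 8.314          # J/(mol K)
ALPHA_GEL = 0.533  # theoretical gel point used in this work

def eta_T(T, eta0=2.0e-5, Ea=25e3):
    """Arrhenius temperature dependence of the uncured viscosity (Pa s)."""
    return eta0 * math.exp(Ea / (R * T))

def eta(T, alpha, a=1.5, b=1.0):
    """Castro-Macosko-type chemoviscosity; diverges as alpha -> alpha_gel."""
    if alpha >= ALPHA_GEL:
        return float("inf")
    return eta_T(T) * (ALPHA_GEL / (ALPHA_GEL - alpha)) ** (a + b * alpha)

# Viscosity rises steeply well before the gel point is reached:
profile_40C = [eta(313.15, x / 100) for x in range(0, 53, 10)]
```

The divergence of the bracketed term as $\alpha \to \alpha_g$ reproduces the steep viscosity rise discussed above, while the Arrhenius prefactor captures the lower initial viscosity at higher temperature.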
Due to the fact that the theoretical gelation point was located at a much higher conversion (0.53), it can be deduced that vitrification played the most important role in the increase of apparent viscosity. These results support what has been found in the previous sections, that is, vitrification was the main cause of the observed steep increase in apparent viscosity. Then, it is logical that subsequent post-cure cycles should be performed so as to achieve higher chemical conversions.\\ \\Another important aspect which needs to be discussed has to do with heat transfer and thermal mass effects. On the one hand, cure reactions are exothermic, so it is expected that the heat generated will have to be dissipated according to the specific geometry of the experiment being conducted. If an experiment with a high surface area per unit volume is performed, the conditions are further from adiabatic and the material maintains its set temperature to a much better extent. On the other hand, if isothermal experiments are performed, heat exchange will inevitably affect the reaction kinetics of the polymer system being analyzed \cite{pascault2002thermosetting}. These effects have been extensively studied in the literature, but it is important to emphasize that they will inevitably affect any rheological experiment being implemented. For example, in this work, we can deduce from the in-situ temperature measurements (Fig. 8) that thermal mass effects contributed to an increase in the time to reach the specific isothermal temperature that was previously established. It might be argued that a different sample mass should be implemented to avoid this, but the experiments were performed according to standard recommendations regarding the measurement of the apparent viscosity of thermosetting polymers. It is extremely important to report the results presented in Fig. 8 because they are relevant for the application of this thermoset in polymer composites.
From a strictly modeling point of view, it is clear that a parallel plate rheometer [4] would provide a much better set of experimental data, but we also think it is very important to conduct and report rotational experiments because these are widely employed in industry.\\ \section{Conclusions} A thermosetting polyurethane was synthesized by combining an aromatic isocyanate with an epoxidized soybean oil polyol (ESO) crosslinked with glycerin. Cure kinetics analysis with DSC revealed that DBTDL was the most effective catalyst for the proposed formulation, with a total reaction enthalpy of 35.7 J.g$^{-1}$. Amine catalysts improved the reaction kinetics, but to a lesser extent. DSC analyses corroborated that a proper NCO$_{index}$ was selected, maximizing the reaction enthalpy for the proposed formulation. A cure kinetics model based on an autocatalytic rate expression, in which vitrification was preponderant in the evolution of conversion, was obtained. FTIR analysis of cured samples corroborated the formation of urethane as well as isocyanurate and urea bonds. The evolution of Tg as a function of conversion corroborated that vitrification slowed down the cure kinetics, particularly for the isothermal experiments performed at lower temperatures. ESO crystallization was induced and quantified with DSC, finding a melting endotherm well above room temperature. This clearly indicated that ESO crystallization might have a strong effect on cure kinetics. DMA analysis of uncured and in-situ post-cured samples revealed that a Tg centered at 220\textdegree\ C was attained. Quasi-static flexural mechanical tests proved that post-cure cycles had a strong effect on the flexural modulus, strain to failure and strength.
A maximum flexural modulus of 2.14 GPa was obtained for the highest post-cure temperature (110\textdegree\ C) and a maximum strain to failure of 8.14 \% was obtained for the lowest post-cure temperature (70\textdegree\ C).\\ \\The previous results emphasize the feasibility of obtaining thermosetting polymers from a soybean oil based polyol. These results will serve to further extend the use of biobased polymers in the polymer composites industry. Ongoing work in this area will focus on the effect of increasing amounts of crosslinking on the thermomechanical properties of the resultant thermosetting polymers, focusing on the role of cure and post-cure cycles.\\ \textbf{Acknowledgements}\\ \\The author would like to thank colleagues who indirectly contributed to this work: Matias Ferreyra (Huntsman), Diego Judas (Alkanos) and Hernan Bertolotto (Evonik). This work was supported by the grant ‘PICT 2015 N0475’ awarded by the ANPCYT (Argentina) as well as by “PIP-2015-N0425”, awarded by CONICET.\\ \bibliographystyle{unsrt}
\section{Introduction: Correlation Estimates and Energy Level Statistics} Correlations between various families of random variables associated with disordered systems are an important aspect governing the transport properties of the system. For example, the conductivity is expressible in terms of the second moment of the one-electron spectral density. Another example is the correlation between the energy levels of noninteracting electrons for finite-volume systems and their behavior in the thermodynamic limit. Some of the first studies of energy level correlations were made by Molchanov \cite{[Molchanov]} and by Minami \cite{Mi96} for systems in the strong localization regime. Molchanov \cite{[Molchanov]} studied a family of random Schr\"odinger operators in one dimension with a random potential given by $q(t, \omega) = F(x_t(\omega))$, where $x_t(\omega)$ is Brownian motion on a compact manifold $K$ and $F$ is a smooth, real-valued, Morse function on $K$. It is known that this model exhibits Anderson localization at all energies (cf.\ \cite{[PF],[CL]}). Minami \cite{Mi96} studied the lattice Anderson model (see (\ref{minami05.eq-andersonmodel00})) in any dimension with a bounded random Anderson-type potential for energies in the strong localization regime. These authors proved that, under certain hypotheses, the normalized distribution of electron energy levels in the thermodynamic limit is Poissonian. This is interpreted to mean that there is no level repulsion (or attraction) between energy levels in the thermodynamic limit provided the energy lies in the strong localization region. This is in contrast to the behavior when the energy lies in the region of transport, where strong correlations between energy levels are expected. In this case, the expected eigenvalue spacing distribution is a Wigner-Dyson distribution (cf.\ \cite{[SSSLS]}). The precise formulation of this result is as follows.
The standard Anderson model studied by Minami is given by the following random Hamiltonian acting on $\ell^2({\mathbb Z}^d)$, \begin{equation} \label{minami05.eq-andersonmodel00} H_\omega \psi(x) \;=\; \sum_{y;|y-x|=1} \; \psi(y) \;+\; V (x)\psi(x), ~~~x \in {\mathbb Z}^d, \hspace{2cm} \psi \in \ell^2({\mathbb Z}^d)\,, \end{equation} \noindent where the potential $\omega = \left( V(x)\right)_{x\in{\mathbb Z}^d}$ is a family of independent, identically distributed random variables with a common distribution having a density $\rho(V (0))$ such that $\|\rho\|_{\infty} = \sup_{V(0)} \rho(V(0)) < \infty$. Let $H_\Lambda$ denote the restriction of $H_\omega$ to a box $\Lambda \subset {\mathbb Z}^d$ with Dirichlet boundary conditions. The spectrum of $H_\Lambda$ is finite and discrete, and the eigenvalues $E_j^\Lambda (\omega)$ are random variables. For any subset $J \subset {\mathbb R}$, we let $E_\Lambda (J)$ be the spectral projection for $H_\Lambda$ and $J$. The integrated density of states (IDS) $N(E)$ is defined by \begin{equation}\label{ids1} N(E) = \lim_{|\Lambda| \rightarrow \infty} \frac{ {\mathbb E} (Tr E_\Lambda ( (-\infty, E])) }{ |\Lambda|} , \end{equation} when this limit exists. It is known (cf.\ \cite{[CL],[PF]}) that, for the lattice models considered here, this function exists and is Lipschitz continuous (at least if the density has compact support). Consequently, it is almost everywhere differentiable, and its derivative $n(E)$ is the density of states (DOS) at energy $E$. In order to describe energy level correlations, we focus on the spectrum near a fixed energy $E$. We define a point process $\xi ( \Lambda;E) (dx) $ by \begin{equation}\label{poisson1} \xi ( \Lambda; E) (dx) = \sum_{j \in {\mathbb N}} ~\delta( | \Lambda| (E_j^\Lambda (\omega) - E) - x ) ~dx \end{equation} The rescaling by the volume $|\Lambda|$ reflects the fact that the average eigenvalue spacing is proportional to $| \Lambda|^{-1}$. Minami proved the following theorem.
\begin{theo}\label{minamitheorem1} Consider the standard Anderson model (\ref{minami05.eq-andersonmodel00}) and suppose the DOS $n(E)$ exists at energy $E$ and is positive. Suppose also that the expectation of some fractional moment of the finite-volume Green's function decays exponentially fast as described in (\ref{expect1}). Then, the point process (\ref{poisson1}) converges weakly as $|\Lambda| \to \infty$ to the Poisson point process with intensity measure $n(E) ~dx$. \end{theo} Minami's result requires two technical hypotheses: 1) the density of states $n (E)$ must be non-vanishing at the energy $E$ considered, and 2) the expectation of some fractional moment of the finite-volume Green's function decays exponentially. Wegner \cite{[Wegner]} presented an argument for the nonvanishing of the DOS $n(E)$, and a strictly positive lower bound was proved by Hislop and M\"uller \cite{[HM]} under the assumption that the probability density satisfies $\rho \geq \rho_{min} > 0$. Suppose the deterministic spectrum of $H_\omega$ is $[ \Sigma_- , \Sigma_+]$. Then, for all $\epsilon > 0$, there is a constant $ C_\epsilon >0$, depending on $\rho_{min}$, such that $n(E) > C_\epsilon$ for all $E \in [ \Sigma_- + \epsilon, \Sigma_+ - \epsilon ]$. Exponential decay of fractional moments of Green's functions for random Schr\"odinger operators was established in certain energy regimes by Aizenman and Molchanov \cite{[AM]}, by Aizenman \cite{[Aizenman]}, and by Aizenman, Schenker, Friedrich and Hundertmark \cite{[ASFH]}. Additionally, Minami's proof rests on a certain correlation estimate for the second moment of the resolvent. It is this estimate that interests us here. We present a new proof of this estimate that generalizes Minami's result in two ways: 1) it holds for general bounded, selfadjoint Hamiltonians $H_0$, including magnetic Schr\"odinger operators and operators with decaying, off-diagonal matrix elements, and 2) it holds for higher-order moments of the Green's function.
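To make the objects above concrete, the finite-volume Hamiltonian and the rescaled eigenvalues entering Theorem~\ref{minamitheorem1} can be generated in a few lines. The following Python sketch (our illustration, not part of the original analysis; the lattice size, reference energy, and uniform single-site density are arbitrary choices) builds a one-dimensional $H_\Lambda$ with Dirichlet boundary conditions and collects the atoms $|\Lambda|(E_j^\Lambda(\omega) - E)$ of the point process (\ref{poisson1}):

```python
import numpy as np

rng = np.random.default_rng(0)

def anderson_1d(n, rng):
    # Finite-volume Anderson Hamiltonian on {0, ..., n-1} with Dirichlet
    # boundary conditions: nearest-neighbor hopping plus an i.i.d. potential
    # with uniform density on [0, 1], so ||rho||_inf = 1 (illustrative choice).
    H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return H + np.diag(rng.uniform(0.0, 1.0, n))

n = 200                          # |Lambda|
E = 0.5                          # reference energy
H = anderson_1d(n, rng)
evals = np.linalg.eigvalsh(H)    # the random eigenvalues E_j^Lambda(omega)
points = n * (evals - E)         # atoms of the rescaled process xi(Lambda; E)
```

In the strong localization regime, histograms of `points` accumulated over many disorder samples approach a Poisson profile with intensity $n(E)$, which is the content of the theorem.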
These generalizations of Minami's estimate were recently also obtained by Graf and Vaghi \cite{[GV]} with a different method, which we outline in section~\ref{sec:GV} below. We also apply this moment bound to prove a new estimate on the probability that there are at least $n$ eigenvalues of a local Hamiltonian in a given energy interval. We interpret this as an $n$-{\it level Wegner-type estimate} that bounds the probability of $n$ eigenvalues being in the same energy interval. As such, it is a measure of the correlation between multiple eigenvalues. Minami's estimate may be stated in several ways. For $z \in {\mathbb C}^+$, we let $R_\Lambda (z) = (H_\Lambda - z)^{-1}$ denote the resolvent of the finite-volume Hamiltonian on $\ell^2 (\Lambda)$. The corresponding Green's function is denoted by $G_\Lambda ( x, y ;z)$, for $x,y \in \Lambda$. Minami stated the estimate this way (Lemma 2, \cite{Mi96}). \begin{lemma}\label{minami1} For any $z \in {\mathbb C}^+$, any cube $\Lambda \subset {\mathbb Z}^d$, and for any $x, y \in \Lambda$ with $x \neq y$, we have \begin{equation}\label{minamiest1} {\mathbb E} \left[ \det \left( \begin{array}{ll} \Im G_\Lambda (x,x ;z) & \Im G _\Lambda ( x,y ;z) \\ \Im G_\Lambda ( y,x; z) & \Im G_\Lambda ( y, y; z) \end{array} \right) \right] \leq \pi^2 \| \rho \|_\infty^2 . \end{equation} \end{lemma} However, for many purposes, it is clearer to note that terms as on the left side of (\ref{minamiest1}) arise when evaluating $\left( Tr \{\Im R_\Lambda\}\right)^2 - Tr \{ (\Im R_\Lambda)^2\}$ in the canonical basis of $\ell^2(\Lambda)$. Thus (\ref{minamiest1}) produces the bound \begin{equation}\label{minamiest2} {\mathbb E} \Big[ \left( Tr \{\Im R_\Lambda (z)\}\right)^2 - Tr \{ (\Im R_\Lambda (z) )^2\} \Big] \leq \pi^2 \| \rho \|_\infty^2 | \Lambda |^2. 
\end{equation} As shown in the appendix of \cite{[KM]}, estimate (\ref{minamiest2}) easily leads to the bound \begin{equation}\label{minamiest3} {\mathbb E} \{ ( Tr E_\Lambda (J) )^2 - Tr E_\Lambda (J) \} \leq \pi^2 \| \rho \|_\infty^2 | J |^2 | \Lambda|^2 , \end{equation} for any interval $J \subset {\mathbb R}$. This estimate (\ref{minamiest3}) was used by Klein and Molchanov \cite{[KM]} to provide a new proof of the simplicity of eigenvalues for random Schr\"odinger operators on the lattice, previously shown by Simon \cite{[Simon]} with other methods. In fact, from Chebyshev's inequality, we can write (\ref{minamiest3}) as \begin{equation}\label{km01} {\mathbb P} \{ Tr E_\Lambda ( J) \geq 2 \} \leq \frac{\pi^2}{2}\; \| \rho \|_\infty^2 | J|^2 | \Lambda|^2. \end{equation} Note, for comparison, that the Wegner estimate states that \begin{equation}\label{wegner1} {\mathbb P} \{ Tr E_\Lambda ( J) \geq 1 \} \leq \pi \| \rho \|_\infty | J | |\Lambda|. \end{equation} It is crucial for the applications that, in the bound on the left side of (\ref{km01}), the exponents of both the volume factor and the length of the interval $|J|$ be greater than one. We mention that a bound of the type (\ref{km01}) is not known for random Schr\"odinger operators on $L^2 ( \Lambda)$, for $\Lambda \subset {\mathbb R}^d$. This is the main remaining obstacle to extending Minami's result on Poisson statistics for energy level spacings, and the Klein-Molchanov proof of the simplicity of eigenvalues, for energies in the strong localization regime, to continuum Anderson-type models. Our original motivation for this work was twofold: first, we wanted to find another proof of Minami's miracle in order to better understand it; second, we wanted to generalize the Minami estimate so that it is applicable to other models.
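The bounds above can be probed numerically. The following Python sketch (our illustration, not from the original text; the box size, interval, spectral parameter, and sample count are arbitrary choices, with a uniform single-site density on $[0,1]$ so that $\|\rho\|_\infty = 1$) estimates the left sides of (\ref{wegner1}) and (\ref{km01}) by Monte Carlo on a one-dimensional box, and also averages the $2\times 2$ determinant appearing in (\ref{minamiest1}):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 2000
a, b = 0.45, 0.455                 # interval J, with |J| = 0.005
x, y = 5, 14                       # two distinct sites for the 2 x 2 estimate
z = 0.45 + 0.5j                    # spectral parameter for the determinant check
counts = np.zeros(trials, dtype=int)
det_sum = 0.0
for t in range(trials):
    # Anderson Hamiltonian, uniform potential density on [0, 1]
    H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) \
        + np.diag(rng.uniform(0.0, 1.0, n))
    ev = np.linalg.eigvalsh(H)
    counts[t] = np.count_nonzero((ev >= a) & (ev <= b))
    G = np.linalg.inv(H - z * np.eye(n))[np.ix_([x, y], [x, y])]
    ImG = (G - G.conj().T) / 2j    # the 2 x 2 matrix Im g_Delta(z)
    det_sum += np.linalg.det(ImG).real

J = b - a
p1 = np.mean(counts >= 1)          # empirical P(Tr E_Lambda(J) >= 1)
p2 = np.mean(counts >= 2)          # empirical P(Tr E_Lambda(J) >= 2)
mean_det = det_sum / trials        # estimate of E det(Im g_Delta(z))
wegner_bound = np.pi * J * n       # pi ||rho|| |J| |Lambda|
minami_bound = 0.5 * np.pi**2 * (J * n) ** 2
```

The empirical frequencies stay well below the corresponding bounds, and two-level events are far rarer than one-level events, reflecting the higher exponents in (\ref{km01}).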
\subsection{Contents of this Article} We state our main results, a generalization of Minami's correlation estimate, Theorem \ref{minami05.th-minami}, and its application to an $n$-level Wegner estimate, Theorem \ref{minami05.th-wegner}, in section 2, where we also derive the $n$-level Wegner estimate from Theorem \ref{minami05.th-minami}. We prove the main correlation estimate in section 3 using a Gaussian integral representation of the determinant. In section 4, we discuss applications to energy level statistics and the simplicity of eigenvalues in the localization regime for general Hamiltonians, and a related proof of the correlation estimate due to Graf and Vaghi \cite{[GV]}. We also discuss the related works by Nakano \cite{[N]} and Killip and Nakano \cite{[KN]} on joint energy-space distributions. For convenience, a proof of the Schur complement formula is presented in the appendix. \section{The Main Results} \label{minami05.sec-results} We consider random perturbations of a fixed, bounded, selfadjoint, background operator $H_0$. The general Anderson model is given by the following random Hamiltonian acting on $\ell^2({\mathbb Z}^d)$, \begin{equation} \label{genand1} H_{\omega}\psi(x) \;=\; H_0 \psi (x) \;+\; V (x)\psi(x), ~~~x \in {\mathbb Z}^d, \hspace{2cm} \psi \in \ell^2({\mathbb Z}^d)\,, \end{equation} \noindent that generalizes (\ref{minami05.eq-andersonmodel00}). The potential $\omega = \left( V (x)\right)_{x\in{\mathbb Z}^d}$ is a family of independent, identically distributed random variables $V(x)$ with distribution given by a density $\rho(V(0))$ such that $\|\rho\|_{\infty} = \sup_{V(0)} \rho(V(0)) < \infty$. Among the general operators $H_0$, we note the following important examples.
The first family of examples includes nonrandom perturbations of the lattice Laplacian $L$, defined by \begin{equation}\label{laplacian1} (L \psi) (x ) = \sum_{y;|y-x|=1} \; \psi(y), \end{equation} by $\Gamma \subset {\mathbb Z}^d$-periodic potentials $V_0$ on ${\mathbb Z}^d$ so that $H_0 = L + V_0$. Here, the group $\Gamma$ is some nondegenerate subgroup of ${\mathbb Z}^d$ like $N {\mathbb Z}^d$, for some $N > 1$. The second family of examples consists of bounded, selfadjoint operators $H_0$ with exponentially-decaying, off-diagonal matrix elements. The third family of examples consists of discrete Schr\"odinger operators with magnetic fields, \begin{equation}\label{magnetic1} (H_0 \psi) (x ) = \sum_{y;|y-x|=1} ( \psi (x) - e^{i A(x,y)} \psi (y) ), \end{equation} where $A(x,y) = - A(y,x)$ is nonvanishing for $|x-y| =1$ and takes values on the torus. The operator $H_0$ need not be a Schr\"odinger operator but simply a bounded selfadjoint operator for Theorems \ref{minami05.th-minami}, \ref{minami05.th-locdet}, and \ref{minami05.th-wegner}. The boundedness of $H_0$ is not essential, but we will require this in order to avoid selfadjointness problems. When we consider localization and eigenvalue level spacing statistics in section 4, we will require that, in addition, the selfadjoint operator $H_0$ is translation invariant with the off-diagonal matrix elements, $| \langle x | H_0 | y \rangle |$, decaying sufficiently fast in $| x - y|$. We will discuss the required properties of $H_0$ further in section \ref{applications}. \vspace{.2cm} \subsection{Generalization of Minami's Correlation Estimate}\label{correlest1} \noindent For any subset $\Lambda \subset {\mathbb Z}^d$, we define $P_\Lambda$ to be the orthogonal projection onto $\ell^2(\Lambda)$, so that $P_\Lambda f(x) = \sum_{y \in \Lambda} f(y) \delta_{x,y}$. By $H_{\Lambda}$ we denote the restriction $P_{\Lambda} H_\omega P_{\Lambda}$ of $H_\omega$ to $\ell^2(\Lambda)$.
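As a concrete instance of the third family, the following sketch (our illustration; the patch size and random phases are arbitrary choices) assembles the operator (\ref{magnetic1}) on a finite patch of ${\mathbb Z}^2$ with Dirichlet boundary conditions, the diagonal term counting the neighbors of each site inside the patch, and verifies that the resulting matrix is selfadjoint and bounded although no longer real symmetric:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 4                                        # a 4 x 4 patch of Z^2
sites = [(i, j) for i in range(L) for j in range(L)]
idx = {s: k for k, s in enumerate(sites)}
N = len(sites)
H0 = np.zeros((N, N), dtype=complex)
for (i, j) in sites:
    for (di, dj) in ((1, 0), (0, 1)):        # each nearest-neighbor bond once
        nb = (i + di, j + dj)
        if nb in idx:
            # A(x, y) = -A(y, x) takes values on the torus (random phases here)
            theta = rng.uniform(0.0, 2 * np.pi)
            H0[idx[(i, j)], idx[nb]] = -np.exp(1j * theta)
            H0[idx[nb], idx[(i, j)]] = -np.exp(-1j * theta)
            # diagonal contribution psi(x) from each neighbor y inside the patch
            H0[idx[(i, j)], idx[(i, j)]] += 1.0
            H0[idx[nb], idx[nb]] += 1.0
```

The matrix is complex Hermitian, so the symmetry $G_\Lambda(x,y;z) = G_\Lambda(y,x;z)$ used by Minami is lost, which is one motivation for the generalization below.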
\noindent Let $\Delta \subset \Lambda$ and note that $V$ commutes with $P_\Delta$: $V_\Delta = P_\Delta V=VP_\Delta$. For $z\in {\mathbb C}$, with $\Im (z) >0$, the matrix-valued function $g_\Delta(z) = P_\Delta (H_{\Lambda}-z)^{-1}P_\Delta$ has the following property (see \cite{Mi96} for the case $n=2$). \begin{theo} \label{minami05.th-minami} For $\Im(z) >0$ and any subset $\Delta \subset \Lambda$, with $|\Delta|=n$, the following inequality holds: \begin{equation}\label{minami01} {\mathbb E} \left( \det\{\Im g_\Delta(z)\} \right) \leq \pi^n\|\rho\|_{\infty}^n\,, \hspace{2cm} |\Delta|=n\,. \end{equation} \end{theo} \noindent A new proof of this result, using the representation of the square root of a determinant by a Gaussian integral, will be given in section~\ref{minami05.sect-main}. It is a generalization of Minami's result Lemma~\ref{minami1} where $H_0$ is the discrete Laplacian $L$, defined in (\ref{laplacian1}), and $n = 2$. In the case $n =2$, we may write $P_\Delta = | x \rangle \langle x| + |y \rangle \langle y |$, for $x \neq y$, so that \begin{equation}\label{minami2} g_\Delta ( z) = \left( \begin{array}{cc} G_\Lambda (x,x; z) & G_\Lambda (x,y; z) \\ G_\Lambda (y,x;z) & G_\Lambda (y,y;z) \end{array} \right) \end{equation} where, as above, $G_\Lambda (x,y;z)$ is the Green's function for $H_\Lambda$. Thus Lemma~\ref{minami1} follows from Theorem~\ref{minami05.th-minami} if we note that in this case one has $G_{\Lambda}(x,y;z)=G_{\Lambda}(y,x;z)$ and thus $\Im g_\Delta = (g_\Delta-g_\Delta^*)/2\imath$ in (\ref{minami01}) is the same as the matrix on the left of (\ref{minamiest1}). \subsection{The $n$-level Wegner Estimate}\label{correlest2} We use Theorem \ref{minami05.th-minami} to prove a new estimate about multiple eigenvalue correlations. We begin with the observation that the left hand side of (\ref{minami01}) can be interpreted in terms of the eigenvalues of an operator acting on a certain antisymmetric subspace of a finite tensor product. 
\begin{theo} \label{minami05.th-locdet} Let $A=A^\ast$ be a selfadjoint operator on $\ell^2(\Lambda)$, $\Lambda \subset {\mathbb Z}^d$ finite, with eigenvalues $a_1 \leq a_2 \leq \cdots \leq a_N$, where $N=|\Lambda|$. Then, the following holds $$\sum_{\Delta\subset \Lambda;|\Delta|=n} \det\{P_\Delta A P_\Delta\} \;=\; \sum_{1\leq i_1<\cdots < i_n \leq N} a_{i_1}\dots a_{i_n}\,. $$ \noindent Moreover, if ${\mathcal H}_n = \ell^2(\Lambda)^{\wedge n}$ is the {\em $n$-fermion} subspace, let $A^{\wedge n}$ be the restriction of $A^{\otimes n}$ to ${\mathcal H}_n$. Then $$\sum_{\Delta\subset \Lambda;|\Delta|=n} \det\{P_\Delta A P_\Delta\} \;=\; \TR_{{\mathcal H}_n}\left(A^{\wedge n}\right)\,. $$ \end{theo} \noindent {\bf Proof: } The first identity is a trivial consequence of the second. Indeed, the eigenvalues of $A^{\wedge n}$ are products of the form $a_{i_1}\dots a_{i_n}$ with $1\leq i_1<\cdots < i_n \leq N$ and the trace is the sum of the eigenvalues. \vspace{.2cm} \noindent To prove the second identity, for each $x\in\Lambda$, let $e_x$ be the unit vector in ${\mathcal H}_1=\ell^2(\Lambda)$ supported by $x$, namely $e_x(y) = \delta_{x,y}$. Then $\{e_x\,;\, x\in \Lambda\}$ is an orthonormal basis of ${\mathcal H}_1$. Let $\Lambda$ be ordered so that we may write $x_1 < x_2 < \cdots < x_N$. Then, $\{ e_{x_{i_1}}\wedge \cdots \wedge e_{x_{i_n}} ~| ~x_{i_j} \in \Lambda, ~1 \leq i_j \leq N \}$ is an orthonormal basis of ${\mathcal H}_n$ if we restrict to indices so that $x_{i_1} < x_{i_2} < \cdots < x_{i_n}$, with $1 \leq i_j \leq N$. Thus, the trace on ${\mathcal H}_n$ can be expanded as $$\TR_{{\mathcal H}_n}\left(A^{\wedge n}\right)\;=\; \sum_{x_1 <\cdots <x_n} \langle e_{x_1}\wedge \cdots \wedge e_{x_n}| A^{\wedge n} e_{x_1}\wedge \cdots \wedge e_{x_n} \rangle\,.
$$ \noindent If $\Delta = \{x_1, \cdots, x_n\}$ (where the labeling is such that $x_1 <x_2<\cdots <x_n$), then the definition of the determinant gives \begin{eqnarray*} \langle e_{x_1}\wedge \cdots \wedge e_{x_n}| A^{\wedge n} e_{x_1}\wedge \cdots \wedge e_{x_n} \rangle &=& \langle e_{x_1}\wedge \cdots \wedge e_{x_n}| (P_\Delta A P_\Delta)^{\wedge n} e_{x_1}\wedge \cdots \wedge e_{x_n} \rangle \\ &=& \det\{P_\Delta A P_\Delta\} \,. \end{eqnarray*} \hfill $\Box$ We can now combine the previous two theorems to generalize (\ref{km01}) and prove the following $n$-level Wegner estimate. We point out that this estimate holds for all energy intervals $J$ in the spectrum of $H_\Lambda$. \begin{theo} \label{minami05.th-wegner} For any positive integer $n$, any interval $J\subset {\mathbb R}$, and any cube $\Lambda \subset {\mathbb Z}^d$, we have \begin{equation}\label{generalwegner} {\mathbb P}(\TR E_{\Lambda}(J) \ge n) \le \frac{\pi^n}{n!} \|\rho\|^n_{\infty} |J|^n |\Lambda|^n. \end{equation} \end{theo} \noindent {\bf Proof:} By taking $A= \Im R_{\Lambda}(z)$, $\Im z>0$, in Theorem~\ref{minami05.th-locdet} and using the result from Theorem~\ref{minami05.th-minami} we get \begin{eqnarray} \label{generalminami1} {\mathbb E} \left( \TR_{{\mathcal H}_n} ((\Im R_{\Lambda}(z))^{\wedge n})\right) & = & \sum_{\Delta\subset \Lambda;\,|\Delta|=n} {\mathbb E}(\det \{\Im g_{\Delta}(z)\}) \nonumber \\ & \le & {|\Lambda| \choose n} \pi^n \|\rho\|_{\infty}^n. \end{eqnarray} For $\zeta=\sigma+i\tau$, $\sigma \in {\mathbb R}$, $\tau>0$, define \[ f_{\zeta}(x) = \frac{\tau}{(x-\sigma)^2+\tau^2}.\] If $(a,b)\subset J \subset [a,b]$ and $z:=(a+b+i|J|)/2$, then $\chi_J(x) \le |J| f_z(x)$ for all $x\in {\mathbb R}$ and thus, by the spectral theorem, \[ E_{\Lambda}(J) \le |J| \Im R_{\Lambda}(z). \] This carries over to ${\mathcal H}_n$ as \begin{equation} \label{fermionbound} E_{\Lambda}(J)^{\wedge n} \le |J|^n (\Im R_{\Lambda}(z))^{\wedge n}.
\end{equation} \noindent If $X$ is the range of $E_{\Lambda}(J)$, then $E_{\Lambda}(J)^{\wedge n}$ is the orthogonal projection onto the subspace $X^{\wedge n}$ of ${\mathcal H}_n$. Thus \[ \TR_{{\mathcal H}_n} (E_{\Lambda}(J)^{\wedge n}) = \left\{ \begin{array}{ll} {\TR E_{\Lambda}(J) \choose n} & \mbox{if $\TR E_{\Lambda}(J) \ge n$}, \\ 0 & \mbox{if $\TR E_{\Lambda}(J)<n$}. \end{array} \right. \] \noindent Chebyshev's inequality and the elementary fact that $k/n \le {k \choose n}$ for all $k\ge n$ imply that \begin{eqnarray*} {\mathbb P}( \TR E_{\Lambda}(J)\ge n) & \le & \frac{1}{n} {\mathbb E} \left( (\TR E_{\Lambda}(J)) \cdot \chi_{\{\TR E_{\Lambda}(J) \ge n\}} \right) \\ & \le & {\mathbb E} \left( {\TR E_{\Lambda}(J) \choose n} \cdot \chi_{\{\TR E_{\Lambda}(J) \ge n\}} \right) \\ & = & {\mathbb E} (\TR_{{\mathcal H}_n} E_{\Lambda}(J)^{\wedge n}). \end{eqnarray*} Using the bounds (\ref{fermionbound}) and (\ref{generalminami1}) finally yields the desired result, \begin{eqnarray*} {\mathbb P} (\TR E_{\Lambda}(J) \ge n) & \le & |J|^n {\mathbb E} \left( \TR_{{\mathcal H}_n} (\Im R_{\Lambda}(z))^{\wedge n} \right) \\ & \le & |J|^n {|\Lambda| \choose n} \pi^n \|\rho\|_{\infty}^n \\ & \le & \frac{\pi^n}{n!} \|\rho\|_{\infty}^n |J|^n |\Lambda|^n. \end{eqnarray*} \hfill $\Box$ \section{The Generalized Minami Correlation Estimate} \label{minami05.sect-main} \noindent Our approach to the Minami correlation estimate Lemma \ref{minami1}, and its generalization, is to work with the resolvent, rather than the Green's function. We use the Schur complement formula to isolate the random variables and the representation of the inverse of the square root of a determinant by a Gaussian integral, see (\ref{gaussrep1}). The proof of Theorem~\ref{minami05.th-minami} requires several steps. As in section \ref{minami05.sec-results}, we let $\Delta \subset \Lambda$ and denote the orthogonal projection onto $\ell^2(\Delta)$ by $P_\Delta$.
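Before carrying out the proof of the correlation estimate, we remark that the minor-sum identity of Theorem~\ref{minami05.th-locdet} is easy to verify numerically. The sketch below (our illustration; the sizes $N = 6$ and $n = 3$ are arbitrary) compares the sum of the principal $n \times n$ minors of a random selfadjoint matrix with the corresponding elementary symmetric function of its eigenvalues:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
N, k = 6, 3                            # |Lambda| = 6, fermion sector with n = 3
B = rng.normal(size=(N, N))
A = (B + B.T) / 2                      # a selfadjoint A on l^2(Lambda)

# left side: sum over all Delta with |Delta| = k of det(P_Delta A P_Delta)
lhs = sum(np.linalg.det(A[np.ix_(S, S)]) for S in combinations(range(N), k))

# right side: sum of a_{i_1} ... a_{i_k} over i_1 < ... < i_k
a = np.linalg.eigvalsh(A)
rhs = sum(np.prod(a[list(S)]) for S in combinations(range(N), k))
```

Both sides are the trace of $A^{\wedge n}$ on the $n$-fermion space, which is how the theorem is used in the proof of the $n$-level Wegner estimate.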
We define $\tilde{H}_\Lambda = H_\Lambda - V_\Delta$, and for $z\in {\mathbb C}$, with $\Im (z) >0$, we define the matrix-valued functions $\tilde{g}_\Delta(z) = P_\Delta (\tilde{H}_{\Lambda}-z)^{-1}P_\Delta$ and $g_\Delta(z) = P_\Delta (H_{\Lambda}-z)^{-1}P_\Delta$. \subsection{Schur's Complement and Kre\u{\i}n's Formula} \label{minami05.ssect-} \begin{lemma} \label{minami05.lem-krein} The following formula holds $$g_\Delta(z) = \frac{1}{V_\Delta + \tilde{g}_\Delta(z)^{-1}} \hspace{2cm}\mbox{\small\bf Kre\u{\i}n's formula} $$ \end{lemma} \noindent {\bf Proof: } The {\em Schur complement formula} \cite{Schur17} (also called Feshbach's projection method \cite{Feshbach58})\footnote{ The Schur complement method \cite{Schur17} is widely used in numerical analysis under this name, while mathematical physicists prefer the reference to Feshbach \cite{Feshbach58}. It is also called Feshbach-Fano \cite{Fano35} or Feshbach-L\"owdin \cite{Lowdin62} in Quantum Chemistry. This method is used in various algorithms in Quantum Chemistry ({\em ab initio} calculations), in Solid State Physics (the muffin tin approximation, LMTO) as well as in Nuclear Physics. The formula used above is found in the original paper of Schur \cite{Schur17} (the formula is on p.~217). The formula was also proposed by the astronomer Tadeusz Banachiewicz in 1937, even though closely related results were obtained in 1923 by Hans Boltz and in 1933 by Ralf Rohan \cite{PS04}. Applied to the Green function of a selfadjoint operator with a finite rank perturbation, it becomes the Kre\u{\i}n formula \cite{Krein46}. } states that if $H=H^\ast$ is a bounded, selfadjoint operator on some Hilbert space and if $P={\mathbf 1} -Q$ is an orthogonal projection, then \begin{equation}\label{schur1} P\frac{1}{H-z}P\;=\; \frac{1}{H_{eff}(z) - z P }\,, \hspace{1cm} H_{eff}(z) = PHP - PHQ\frac{1}{QHQ-z}QHP\,. \end{equation} \noindent For completeness we provide a proof of (\ref{schur1}) in section~\ref{sec:schur}.
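A quick numerical check of the Schur complement formula is possible as well. In the sketch below (our illustration; the block sizes and spectral parameter are arbitrary), $P$ projects onto the first $k$ coordinates, $Q = {\mathbf 1} - P$, and $H_{eff}(z) = PHP - PHQ(QHQ - z)^{-1}QHP$ is formed blockwise:

```python
import numpy as np

rng = np.random.default_rng(5)
N, k, z = 10, 4, 0.3 + 0.2j
B = rng.normal(size=(N, N))
H = (B + B.T) / 2                      # a bounded selfadjoint H

# compression of the resolvent to the range of P (the first k coordinates)
lhs = np.linalg.inv(H - z * np.eye(N))[:k, :k]

# H_eff(z) = PHP - PHQ (QHQ - z)^{-1} QHP, written in blocks
Hq = H[k:, k:]                         # QHQ on the range of Q
Heff = H[:k, :k] - H[:k, k:] @ np.linalg.inv(Hq - z * np.eye(N - k)) @ H[k:, :k]
rhs = np.linalg.inv(Heff - z * np.eye(k))
```

This is the standard block-inversion identity: the relative minus sign in $H_{eff}(z)$ is exactly the Schur complement of the lower-right block.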
Applying this formula to $g_\Delta(z) = P_\Delta (H_{\Lambda}-z)^{-1}P_\Delta$ gives $$g_\Delta(z)^{-1}\;=\; H_\Delta - z - P_\Delta H_\Lambda P_{\Lambda\setminus \Delta} \frac{1}{H_{\Lambda\setminus \Delta}-z} P_{\Lambda\setminus \Delta}H_\Lambda P_\Delta\,. $$ \noindent By definition, $H_\Delta =\tilde{H}_\Delta + V_\Delta$ while $H_{\Lambda\setminus \Delta}=\tilde{H}_{\Lambda\setminus \Delta}$, so that applying the Schur complement formula to $\tilde{g}_\Delta(z)$ instead gives the desired result $$g_\Delta(z)^{-1} \;=\; V_\Delta + \tilde{g}_\Delta(z)^{-1}\,. $$ \hfill $\Box$ \begin{lemma} \label{minami05.lem-pos} If $\Im z>0$, then $\Im g_\Delta(z) >0$ and $-\Im \{\tilde{g}_\Delta(z)^{-1}\}>0$. \end{lemma} \noindent {\bf Proof: } The resolvent equation gives $$\Im g_\Delta(z) = P_\Delta \frac{\Im z}{|H_\Lambda-z|^2} P_\Delta \;>\;0\,, \hspace{2cm} \Im \tilde{g}_\Delta(z) = P_\Delta \frac{\Im z}{|\tilde{H}_\Lambda-z|^2} P_\Delta \;>\;0\,. $$ \noindent Using $A^{-1}-A^{-1\,\ast}= A^{-1}\{A^\ast - A\}A^{-1\,\ast}$ gives the other inequality. \hfill $\Box$ \begin{lemma} \label{minami05.lem-det} The following formula holds: $$\det\{\Im g_\Delta(z)\}\;=\; \frac{\det\{-\Im \tilde{g}_\Delta(z)^{-1}\}} {|\det\{V_\Delta +\tilde{g}_\Delta(z)^{-1}\}|^2} $$ \end{lemma} \noindent {\bf Proof: } By definition of the imaginary part, using Lemma~\ref{minami05.lem-krein} gives \begin{eqnarray*} \Im g_\Delta(z) &=& \frac{g_\Delta(z)-g_\Delta(z)^\ast}{2\imath}\\ &=& \frac{1}{V_\Delta +\tilde{g}_\Delta(z)^{-1}} \left( \frac{\tilde{g}_\Delta(z)^{-1\,\ast}-\tilde{g}_\Delta(z)^{-1}}{2\imath} \right) \frac{1}{V_\Delta +\tilde{g}_\Delta(z)^{-1\,\ast}}\\ &=& -\frac{1}{V_\Delta +\tilde{g}_\Delta(z)^{-1}} \left[\Im \tilde{g}_\Delta(z)^{-1}\right] \frac{1}{V_\Delta +\tilde{g}_\Delta(z)^{-1\,\ast}}. \end{eqnarray*} \noindent Taking the determinant of both sides gives the result.
\hfill $\Box$ \begin{coro} \label{minami05.cor-minaineq1} If ${\mathbb E}_S$ denotes the average over the potentials $V_x$ for $x\in S$, the following estimate holds: $${\mathbb E}_\Lambda\left( \det\{\Im g_\Delta(z)\} \right)\;\leq\; {\mathbb E}_{\Lambda\setminus \Delta}\left( \|\rho\|_{\infty}^n \det\{-\Im \tilde{g}_\Delta(z)^{-1}\} \int_{{\mathbb R}^\Delta} dV_\Delta \frac{1}{|\det\{V_\Delta +\tilde{g}_\Delta(z)^{-1}\}|^2} \right) $$ \end{coro} \noindent {\bf Proof: } By definition and since $|\Delta|=n$, if $f$ is a nonnegative function of $V=(V_x)_{x\in \Lambda}$ then $${\mathbb E}_\Lambda (f) \;=\; \int_{{\mathbb R}^\Lambda} \prod_{x\in\Lambda} dV_x \;\rho(V_x) \;f(V) \; \leq \; \|\rho\|_{\infty}^n {\mathbb E}_{\Lambda\setminus \Delta}\left( \int_{{\mathbb R}^\Delta} \prod_{x\in\Delta} dV_x f(V) \right). $$ \noindent Since $\tilde{g}_\Delta(z)$ does not depend on $V_\Delta$, the result follows from Lemma~\ref{minami05.lem-det}. \hfill $\Box$ \begin{lemma} \label{minami05.lem-gauss} Let $M$ be a complex $n\times n$ matrix such that $M=B-\imath A$ with $B=B^\ast$ and $A$ positive definite. Then, taking the principal branch of the square root, \begin{equation}\label{gaussrep1} \frac{1}{\sqrt{\det{M}}} \;=\; e^{\imath n \pi/4} \int_{{\mathbb R}^n} \frac{d^n u}{(2\pi)^{n/2}}\; e^{-\imath \langle u| Mu\rangle/2} \end{equation} \end{lemma} \noindent {\bf Proof: } Since $A >0$, it follows that $\imath M$ has a positive definite real part, so that the integral converges and is analytic in $M$. The formula follows from standard Gaussian integrals. \hfill $\Box$ \begin{lemma} \label{minami05.lem-sphe} Let $F$ be an integrable function on ${\mathbb R}^2\times {\mathbb R}^2$. 
Then the following formula holds \begin{equation}\label{integral1} \int_{{\mathbb R}^2\times {\mathbb R}^2} d^2\vec{u}\; d^2\vec{v}\;\; F(\vec{u},\vec{v})\; \delta\left( \frac{\vec{u}^2-\vec{v}^2}{2} \right) \;=\; \int_{{\mathbb R}^2} d^2\vec{u} \int_0^{2\pi} d\theta\; F(\vec{u}, R_\theta \vec{u})\,, \end{equation} \noindent where $R_\theta$ denotes the rotation of angle $\theta$ in ${\mathbb R}^2$. \end{lemma} \noindent {\bf Proof: } Let $\vec{u}$ be expressed in polar coordinates $(r,\phi)$. The change of variable $s= \vec{u}^2/2= r^2/2$ gives $\vec{u}=(\sqrt{2s},\phi)$ and $d^2\vec{u}=ds d\phi$. In much the same way, $\vec{v}$ can be expressed as $(\sqrt{2t},\psi)$. Thus the integral becomes \begin{eqnarray*} & &\int_{{\mathbb R}^2\times {\mathbb R}^2} d^2\vec{u}\; d^2\vec{v}\;\; F(\vec{u},\vec{v})\; \delta\left( \frac{\vec{u}^2-\vec{v}^2}{2} \right) \\ & = & \int_0^\infty ds \int_0^\infty dt \int_0^{2\pi} d\phi \int_0^{2\pi} d\psi\; F(\sqrt{2s},\phi;\sqrt{2t},\psi) \delta(s-t) \\ & = & \int_0^\infty ds \int_0^{2\pi} d\phi \int_0^{2\pi} d\psi\; F(\sqrt{2s},\phi;\sqrt{2s},\psi)\,. \end{eqnarray*} \noindent Setting $\psi= \theta + \phi$ gives the result. \hfill $\Box$ \subsection{Proof of Theorem~\ref{minami05.th-minami}} \label{minami05.ssect-proofth} \noindent Thanks to Corollary~\ref{minami05.cor-minaineq1}, the main Theorem~\ref{minami05.th-minami} follows from the following inequality. \begin{lemma} \label{minami05.lem-minaineq2} The following estimate holds $$J\;:=\; \int_{{\mathbb R}^\Delta} dV_\Delta \frac{1}{|\det\{V_\Delta +\tilde{g}_\Delta(z)^{-1}\}|^2} \;\leq\; \frac{\pi^n}{\det\{-\Im\tilde{g}_\Delta(z)^{-1}\}}\,. 
$$ \end{lemma} \noindent {\bf Proof: } Using the Gaussian integral in Lemma~\ref{minami05.lem-gauss}, the integral $J$ can be written as \begin{equation} \label{minami05.eq-J1} J= \int_{{\mathbb R}^\Delta} dV_\Delta \int_{({\mathbb R}^\Delta)^{\times 4}} \frac{d^n u_1 d^n u_2}{(2\pi)^n} \frac{d^n v_1 d^n v_2}{(2\pi)^n} e^{-1/2\sum_{i=1,2} \{ \langle u_i |\imath (V_\Delta +\tilde{g}_\Delta(z)^{-1})u_i\rangle - \langle v_i |\imath (V_\Delta +\tilde{g}_\Delta(\overline{z})^{-1})v_i\rangle \} }\,. \end{equation} \noindent Let $\vec{u}(x) = (u_1(x), u_2(x))\in {\mathbb R}^2$ where $u_i=\left(u_i(x)\right)_{x\in\Delta} \in {\mathbb R}^\Delta$. In much the same way let $\vec{v}(x) = (v_1(x), v_2(x))\in {\mathbb R}^2$ be used in this integral. The term $V_x$ appears in the Gaussian exponent with the factor $(-\imath/2) V_x(\vec{u}(x)^2-\vec{v}(x)^2)$. Hence integration over $V_x$ gives $$\int_{\mathbb R} dV_x \; e^{ -\imath V_x (\vec{u}(x)^2-\vec{v}(x)^2)/2 } \;=\; 2\pi \delta\left( \frac{\vec{u}(x)^2-\vec{v}(x)^2}{2} \right)\,. $$ \noindent Inserting this result in eq.~(\ref{minami05.eq-J1}), using Lemma~\ref{minami05.lem-sphe} leads to $$J= \prod_{x\in \Delta}\int_0^{2\pi} d\theta_x\; \int_{({\mathbb R}^\Delta)^{\times 2}} \frac{d^{2n} \vec{u} }{(2\pi)^n} e^{-1/2 \{ \langle \vec{u} |\imath \tilde{g}_\Delta(z)^{-1}\vec{u}\rangle + \langle R(\theta)\vec{u} |\imath \tilde{g}_\Delta(\overline{z})^{-1} R(\theta)\vec{u}\rangle \} } \,, $$ \noindent where $R(\theta)$ is the orthogonal $2n\times 2n$ matrix acting on $\vec{u}=(\vec{u}(x))_{x\in\Delta}$ by $$\left( R(\theta)\vec{u} \right) (x) \;=\; R_{\theta_x} \vec{u}(x)\,.
$$ \noindent Because of Lemma~\ref{minami05.lem-pos}, we know that $-\Im \tilde{g}_\Delta(z)^{-1} >0$, so that the Gaussian term can be bounded from above by $$J \leq \prod_{x\in \Delta}\int_0^{2\pi} d\theta_x\; \int_{({\mathbb R}^\Delta)^{\times 2}} \frac{d^{2n} \vec{u} }{(2\pi)^n} e^{-1/2 \{ \langle \vec{u} | (-\Im \tilde{g}_\Delta(z)^{-1} ) \vec{u}\rangle + \langle R(\theta)\vec{u} | (-\Im \tilde{g}_\Delta(z)^{-1} ) R(\theta)\vec{u}\rangle \} }\,. $$ \noindent Thus a Schwarz inequality, the rotational invariance of the measure $d^{2n} \vec{u}$ and another use of the Gaussian formula given in Lemma~\ref{minami05.lem-gauss} gives the bound $$J \leq \prod_{x\in \Delta}\int_0^{2\pi} d\theta_x\; \int_{({\mathbb R}^\Delta)^{\times 2}} \frac{d^{2n} \vec{u} }{(2\pi)^n} e^{- \langle \vec{u} | ( -\Im \tilde{g}_\Delta(z)^{-1} ) \vec{u}\rangle}\;=\; \frac{\pi^n}{\det\{-\Im \tilde{g}_\Delta(z)^{-1}\}}\,, $$ proving the lemma, and with it Theorem~\ref{minami05.th-minami}. \hfill $\Box$ \section{Applications of the Correlation Estimate and Related Results} \label{applications} \subsection{Level Statistics and Simplicity of Eigenvalues} The proof of Poisson statistics for the eigenvalue level spacing in the thermodynamic limit for general Anderson Hamiltonians $H_0 + V$ as in (\ref{genand1}) follows as in Minami's article provided several other conditions are satisfied. In addition to the selfadjointness and boundedness of $H_0$, we require that $H_0$ be translation invariant, so that the DOS exists, and that the off-diagonal matrix elements of $H_0$ decay exponentially in $|x-y|$ with a uniform rate. In addition to the positivity of the DOS at energy $E$, discussed in the introduction, Minami requires the exponential decay of the expectation of a fractional moment of the Green's function of $H_\omega$. Let us describe this fractional moment condition.
The Green's function $G_\Lambda (x,y; z)$ for the restriction of $H_\omega$ to a finite cube $\Lambda \subset {\mathbb Z}^d$ with Dirichlet boundary conditions is required to satisfy the following bound. There is some $0 < s < 1$ and constants $C_s > 0$ and $\alpha_E > 0$ so that \begin{equation}\label{expect1} {\mathbb E} \{ | G_\Lambda ( x, y; z ) |^s \} \leq C_s e^{-\alpha_E | x - y|}, \end{equation} provided $x \in \Lambda$, $y \in \partial \Lambda$, and $z \in \{ w \in {\mathbb C} ~| ~\Im w > 0, | w - E | < r \}$, for some $r>0$. \begin{coro}\label{minamitheorem2} Consider the general Anderson model (\ref{genand1}) with a bounded, translation-invariant, selfadjoint $H_0$ having matrix elements satisfying $|\langle x|H_0|y\rangle| \le Ce^{-\eta|x-y|}$ for some $C<\infty$ and $\eta>0$. Suppose that the DOS $n(E)$ for $H_\omega$ exists at energy $E$ and is positive. Suppose also that the expectation of some fractional moment of the Green's function of $H_\omega$ decays exponentially fast as described in (\ref{expect1}). Then, the point process (\ref{poisson1}) converges weakly to the Poisson point process with intensity measure $n(E) ~dx$. \end{coro} \noindent We don't provide a detailed proof of this here, as it is easily checked that under the assumption of exponential decay of $|\langle x|H_0|y\rangle|$ the remaining arguments of Minami's proof of Poisson statistics go through. Translation invariance of $H_0$ comes in as an extra assumption to guarantee ergodicity of $H_{\omega}$, and thus existence of the IDS (\ref{ids1}). Exponential decay of $|\langle x|H_0|y\rangle|$ implies the strong localization condition (\ref{expect1}) at extreme energies or high disorder \cite{[AM]}, or, for low disorder, at band edges \cite{[Aizenman]}. Also, exponential bounds of the form (\ref{expect1}) imply almost sure pure point spectrum for the energies at which they hold and exponential decay of the corresponding eigenfunctions.
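As a toy numerical illustration of a bound of the form (\ref{expect1}) — not part of any proof here — one can Monte-Carlo the fractional moment for the one-dimensional Anderson model at strong disorder. All names and parameter values below are ours and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def green_moment(n, dist, disorder, s=0.5, energy=0.0, eps=1e-3, samples=300):
    """Monte-Carlo estimate of E|G(x, y; E + i*eps)|^s for a 1d chain of n
    sites with nearest-neighbour hopping and i.i.d. uniform disorder."""
    x, y = n // 2, n // 2 + dist
    z = energy + 1j * eps
    acc = 0.0
    for _ in range(samples):
        v = disorder * (rng.random(n) - 0.5)
        h = np.diag(v) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
        g = np.linalg.inv(h - z * np.eye(n))
        acc += abs(g[x, y]) ** s
    return acc / samples

m_near = green_moment(41, 1, disorder=8.0)   # |x - y| = 1
m_far = green_moment(41, 8, disorder=8.0)    # |x - y| = 8
```

At this disorder strength the moment at distance $8$ comes out markedly smaller than at distance $1$, reflecting the exponential decay in $|x-y|$.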
These conditions on $H_0$ and the decay estimate (\ref{expect1}) also ensure that the result of Klein and Molchanov \cite{[KM]} (which uses \cite{[AM]} and thus rapid off-diagonal decay of the matrix elements of $H_0$) on the almost sure simplicity of eigenvalues is applicable in the above situation, thus: \begin{coro}\label{minamitheorem3} The eigenvalues of the general Anderson model considered in Corollary~\ref{minamitheorem2} in the region of localization are simple almost surely. \end{coro} In fact, all of the above can be extended to $H_0$ with sufficiently rapid power decay of the off-diagonal elements. The works \cite{[AM]} and \cite{[Aizenman]} discuss how a result somewhat weaker than (\ref{expect1}) can be obtained in this case. In particular, this only gives power decay of eigenfunctions, which for sufficiently fast power decay still allows one to apply the result of \cite{[KM]}. Moreover, a thorough analysis of Minami's proof shows that it works for suitable power decay. \subsection{A different proof of Theorem 2} \label{sec:GV} After we finished the proof of Theorem \ref{minami05.th-minami}, we received the preprint of Graf and Vaghi \cite{[GV]} in which they proved essentially the same result using a different approach. One of their main motivations was to eliminate Minami's symmetry condition on the Green's function $G_\Lambda ( x,y;z) = G_\Lambda (y,x;z)$, thus allowing magnetic fields as in (\ref{magnetic1}). They base their calculation on the following lemma. By $\mbox{diag} ~( v_1, \ldots, v_n)$, we mean the $n \times n$ matrix whose only nonzero entries are the diagonal entries $v_1, \ldots, v_n$. \begin{lemma}\label{gv1} Let $A = (a_{ij})$ be an $n \times n$ matrix with $\Im A > 0$. Then, \begin{equation}\label{gvbound1} \int ~dv_1 \cdots d v_n ~\det ( \Im [ ~\mbox{diag} ~( v_1, \ldots, v_n) - A ]^{-1} ) \leq \pi^n . \end{equation} \end{lemma} It is not surprising that the proof of Lemma \ref{gv1} involves the Schur complement formula.
One applies Lemma \ref{gv1} by noting that the argument of the determinant on the left side of (\ref{minamiest1}) (for the case $n=2$) may be written as the imaginary part of a matrix of the form $[ ~\mbox{diag} ~( v_1, \ldots, v_n) - A ]^{-1}$ by Krein's formula, where $A$ is obtained from $H_\Lambda$ by setting $V(x) = V(y) = 0$. Without explicitly stating it, Graf and Vaghi also indicate that a bound as in (\ref{minami01}) follows from (\ref{gvbound1}) for general $n$. The key to proving Lemma \ref{gv1} for $n=2$ is the pair of integral formulas \begin{equation}\label{gv2} \int_{{\mathbb R}} ~dx ~\frac{1}{|ax + b|^2} = \frac{\pi}{\Im ( \overline{b}a)}, \end{equation} assuming $a, b \in {\mathbb C}$ and $\Im ( \overline{b} a) > 0$, and \begin{equation}\label{gv3} \int_{{\mathbb R}} ~dx ~\frac{1}{ax^2 + bx+ c} = \frac{ 2\pi}{\sqrt{ 4ac - b^2}}, \end{equation} assuming $a, b, c \in {\mathbb R}$, $a > 0$, and $4ac - b^2 > 0$. The case for general $n$ is obtained by induction. \subsection{Joint Energy-Space Distributions} We mention two related results of interest. Nakano \cite{[N]} recently obtained some quantitative results providing insight into the idea, going back to Mott, that when eigenvalues in the localization regime are close together, the centers of localization are far apart. Roughly, Nakano proves that for any subinterval $J$ of energies in the localization regime with sufficiently small length, there is, with probability one, at most one eigenvalue of $H_\omega$ in $J$ with a localization center in a sufficiently large cube about any point. His proof uses Minami's estimate in the form (\ref{minamiest3}) and the multiscale analysis. In this sense, the centers of localization are repulsive.
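As an aside, both integral formulas (\ref{gv2}) and (\ref{gv3}) above are easy to confirm numerically; the sketch below (helper names ours) substitutes $x = \tan t$ to reduce the improper integrals to proper ones over a finite interval:

```python
import numpy as np

t = np.linspace(-np.pi / 2 + 1e-9, np.pi / 2 - 1e-9, 400_001)
x = np.tan(t)
jac = 1.0 / np.cos(t) ** 2                 # dx = sec^2(t) dt

def integrate(f):
    """Trapezoid rule for int_R f(x) dx after the substitution x = tan(t)."""
    vals = f(x) * jac
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

a, b = 2.0 + 1.0j, 1.0 - 3.0j              # Im(conj(b) * a) = 7 > 0
lhs1 = integrate(lambda u: 1.0 / np.abs(a * u + b) ** 2)
rhs1 = np.pi / np.imag(np.conj(b) * a)

A, B, C = 1.5, 1.0, 2.0                    # 4AC - B^2 = 11 > 0
lhs2 = integrate(lambda u: 1.0 / (A * u ** 2 + B * u + C))
rhs2 = 2 * np.pi / np.sqrt(4 * A * C - B ** 2)
```

Both left-hand sides agree with the closed forms to high accuracy, since the transformed integrands are smooth and bounded on the whole interval.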
On the other hand, if one studies an appropriately scaled space and energy distribution of the eigenfunctions in the localization regime in the thermodynamic limit, Killip and Nakano \cite{[KN]} proved that this distribution is Poissonian, extending Minami's work for the Anderson model (\ref{minami05.eq-andersonmodel00}). They define a measure $d \xi$ on ${\mathbb R}^{d+1}$ by the following functional. For $f \in C_0 ( {\mathbb R} )$ and $g \in C_0 ({\mathbb R}^d)$, consider the map \begin{equation}\label{kn1} (f,g) \mapsto \mbox{tr} ( f(H) g(\cdot) ) = \int_{{\mathbb R} \times {\mathbb R}^d} f(E) g(x) ~d \xi (E, x). \end{equation} This measure is supported on $\Sigma \times {\mathbb Z}^d$, where $\Sigma \subset {\mathbb R}$ is the deterministic spectrum of $H_\omega$. They perform a microscopic rescaling of $d \xi$ in both energy and space to obtain a measure $d \xi_L$ as follows \begin{equation}\label{kn2} \int f(E,x) ~d \xi_L (E,x) = \int f( L^d (E - E_0), x L^{-1}) ~d \xi (E, x), \end{equation} where $E_0$ is a fixed energy for which the density of states $n$ exists and is positive. In the limit $L \rightarrow \infty$, they prove that this rescaled measure converges in distribution to a Poisson point process on ${\mathbb R} \times {\mathbb R}^d$ with intensity given by $n(E_0) dE \times dx$. This work also relies on Minami's estimate (\ref{minamiest3}) but uses the fractional moment estimates rather than multiscale analysis. Both of these papers treat the standard Anderson model (\ref{minami05.eq-andersonmodel00}) so that Theorem 2 extends the results to more general lattice operators of the form $H_0 + V_\omega$. \section{Appendix: The Schur Complement Formula} \label{sec:schur} We prove the Schur complement formula for a selfadjoint operator $H$ and an orthogonal projection $P$ with $Q = 1 - P$ on a Hilbert space $\mathcal{H}$. In the case that $H$ is unbounded, we assume that $P \mathcal{H} \subset D(H)$.
Let $ z \in {\mathbb C}$ and suppose that $Q(H-z)Q$ is boundedly invertible on the range of $Q$ (as is always the case for $z\in{\mathbb C} \setminus {\mathbb R}$). We write $R_Q(z) = (Q(H-z)Q)^{-1}$ for the resolvent of the reduced operator. We write $H-z$ on $\mathcal{H}$ as the matrix \begin{equation}\label{matrixresolvent1} H-z = \left[ \begin{array}{ll} P(H-z)P & P H Q \\ Q H P & Q(H-z)Q \end{array} \right] \end{equation} We introduce the triangular matrix $L$ given by \begin{equation}\label{triangle1} L = \left[ \begin{array}{ll} P & 0 \\ - R_Q(z) QHP & R_Q(z) \end{array} \right] \end{equation} The {\it Schur complement} of $Q(H-z)Q$ is defined as $S(z) \equiv P(H-z)P - PHQ R_Q(z) QHP$. We assume that $S(z)$ is boundedly invertible on the range of $P$ (true for $z\in {\mathbb C}\setminus {\mathbb R}$). Multiplying $(H-z)$ on the right by $L$, we obtain \begin{equation}\label{resolventmatrix2} (H-z) \cdot L = \left[ \begin{array}{ll} S(z) & PHQ R_Q(z) \\ 0 & Q \end{array} \right] \end{equation} This matrix may be inverted, and multiplying the inverse $( (H-z) L)^{-1} = L^{-1} R(z)$ on the left by $L$ gives \begin{equation}\label{resolventmatrix3} R(z) = \left[ \begin{array}{ll} S(z)^{-1} & - S(z)^{-1} PHQ R_Q(z) \\ - R_Q(z) QHP S(z)^{-1} & R_Q(z) + R_Q(z) QHP S(z)^{-1} PHQ R_Q(z) \end{array} \right] \end{equation} The formula for $PR(z)P$ readily follows from (\ref{resolventmatrix3}) since in matrix notation $P$ is block diagonal.
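The resulting identity $PR(z)P = S(z)^{-1}$ is easy to check numerically; the sketch below (variable names ours) uses a random selfadjoint matrix and takes $P$ to be the projection onto the first $k$ coordinates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
z = 0.3 + 0.7j

m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (m + m.conj().T) / 2                       # random selfadjoint H
P = np.diag([1.0] * k + [0.0] * (n - k))       # projection onto first k coords
Q = np.eye(n) - P

R = np.linalg.inv(H - z * np.eye(n))           # full resolvent R(z)

# R_Q(z) = (Q(H - z)Q)^{-1} on the range of Q, embedded back into C^n:
Rq = np.zeros((n, n), dtype=complex)
Rq[k:, k:] = np.linalg.inv((Q @ (H - z * np.eye(n)) @ Q)[k:, k:])

# Schur complement S(z) = P(H - z)P - PHQ R_Q(z) QHP, restricted to ran(P):
S = (P @ (H - z * np.eye(n)) @ P - P @ H @ Q @ Rq @ Q @ H @ P)[:k, :k]

lhs = (P @ R @ P)[:k, :k]                      # P R(z) P
rhs = np.linalg.inv(S)                         # S(z)^{-1}
```

Since $\Im z \neq 0$, both inverses exist, and the two $k \times k$ blocks agree to machine precision.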
\section{Introduction} \label{sec::intro} The majority of stars born in dense stellar clusters are part of binary star systems \citep{Duquennoy1991,Ghez1993,Duchene2013}. The observed orbital eccentricities of binaries vary with orbital separation \citep{Raghavan2010,Tokovinin2016}. For tight binaries, the eccentricities are small, which implies that there has been circularization of the binary orbit caused by stellar tidal dissipation \citep{Zahn1977}. More widely-separated binaries have observed eccentricities ranging from $e_{\rm b} = 0.39$ to $0.59$, with a considerable number of highly eccentric systems with $e_{\rm b} > 0.8$. The interactions of the binary with surrounding gas may be responsible for the present-day observed binary eccentricities \citep{Goldreich1980,Artymowicz1991,Artymowicz1992,Armitage2005,Cuadraetal2009,Roedig2011,Munoz2019,Zrake2021}. Circumbinary discs of gas and dust are sometimes observed to provide accreting material onto the binary \cite[e.g.,][]{Alves2019}. The gas flow dynamics from the circumbinary disc onto the binary components has significant implications for planet formation scenarios in binary systems. Circumbinary discs are commonly observed to be moderately to highly misaligned to the binary orbital plane. For example, the pre-main sequence binary KH 15D has a circumbinary disc inclined by $5-16^\circ$ \citep{Chiang2004,Smallwood2019,Poon2021}. The radial extent of the disc is narrow and presumed to be rigidly precessing to explain the unique periodic light curve. A $\sim 60^\circ$ inclined circumbinary disc is found around the main-sequence binary IRS 43 \citep{Brinch2016}, along with misaligned circumstellar discs around each binary component. There is an observed misalignment of about $70^\circ$ between the circumbinary disc and the circumprimary disc in HD 142527 \citep{Marino2015,Owen2017}.
Another young binary, HD 98800 BaBb, has the only observed polar (inclined by $\sim 90^\circ$) gaseous circumbinary disc \citep{Kennedy2019}. The $6$--$10\, \rm Gyr$ old binary system, 99 Herculis, has a nearly polar (about $87^\circ$) debris ring \citep{Kennedy2012,Smallwood2020a}. Apart from binaries, stars may also form in higher-order systems \citep{Tokovinin2014a,Tokovinin2014b}. The circumtriple disc around the hierarchical triple star system, GW Ori, is tilted by about $38^\circ$ \citep{Bi2020,Kraus2020,Smallwood2021GWOri}. The observations of inclined circumbinary discs have implications for planet formation models. Observations from space and ground-based telescopes reveal that $\sim 50$ per cent of the confirmed exoplanets reside in binary systems \citep{Horchetal2014, Deacon2016,Ziegler2018}. For example, the binary system $\gamma$ Cep AB hosts a giant planet around the primary star, $\gamma$ Cep Ab \citep{Hatzes2003}. It is crucial to study the structure and evolution of protoplanetary discs since these are the sites for planet formation \citep{DAngelo2018}. A forming planet's orbital properties are directly related to the orientation of the protoplanetary disc. For example, the observed young binary system XZ Tau shows both the circumprimary and circumsecondary discs are misaligned to the binary orbital plane \citep{Ichikawa2021}. The binary system HD 142527 shows the presence of a misaligned inner disc around one of the stellar components, presumably fed from the circumbinary disc \citep{Price2018a}. Furthermore, IRAS 04158+2805 is a binary system where the two circumstellar discs and the circumbinary discs have been observed to be misaligned \citep{Ragusa2021}. Therefore, highly-inclined circumstellar discs may give birth to planets on highly-tilted orbits. Due to viscous dissipation, a misaligned circumbinary disc undergoes nodal precession and evolves towards either a coplanar or polar alignment.
For an initially low-inclination circumbinary disc, the disc precesses about the angular momentum vector of the binary and eventually evolves to be coplanar to the binary orbital plane \citep{Facchinietal2013,Foucart2014}. Slightly misaligned discs around an eccentric binary undergo tilt oscillations as they align, due to the nonaxisymmetric potential produced by the eccentric binary \citep{Smallwood2019,Smallwood2020a}. For highly inclined discs around eccentric orbit binaries, the angular momentum vector of the disc precesses about the eccentricity vector of the binary \citep[e.g.][]{Aly2015}, which leads the disc to align perpendicular (i.e., polar) to the binary orbital plane \citep{Martinlubow2017,Lubow2018,Zanazzi2018,Martin2018,Cuello2019}. A massive circumbinary disc that is undergoing polar alignment aligns to a generalized polar state which is less than $90^\circ$ \citep{Zanazzi2018,MartinLubow2019,Chen2019}. Circumbinary gas discs contain a central cavity around the binary where little material is present. The cavity size is determined by where the tidal torque is balanced with the viscous torque \citep{Artymowicz1994,Lubow2015,Miranda2015,Franchini2019b,Hirsh2020,Ragusa2020}. The strength of the binary torque on the disc is dependent on the tilt of the circumbinary disc and binary eccentricity. The tidal torque at a given radius is zero when the circumbinary disc is polar and the binary eccentricity approaches $e_{\rm b} = 1$ \citep{Lubow2018} or if the disc is retrograde \cite[e.g.,][]{Nixon2013}. In the simplest models, the production of an outward forcing torque by the binary can prevent circumbinary material from flowing through the cavity \citep{LP1974, Pringle1991}. However, material from the circumbinary disc flows through the binary cavity in the form of gaseous streams \citep[e.g.][]{Artymowicz1996,Gunther2002,NixonKing2012,Shi2012,DOrazio2013,Farris2014,Munoz2019,Alves2019}. 
These streams are responsible for forming and replenishing circumstellar discs around each binary component. The accretion of material onto the circumstellar discs may aid in the formation of $S$--type planets, those that orbit one component of a binary. Accretion of material onto the central binary may be suppressed for small disc aspect ratios. The structure of a circumstellar disc around one star is strongly affected by the tidal field of the binary companion \citep{Papaloizou1977,Artymowicz1994,Pichardo2005,JangCondell2015}. Circumstellar discs around each binary component undergo tidal truncation. A circumstellar disc in a circular orbit binary is typically truncated to about one-third to one-half of the binary orbital separation. The tidal truncation radius is expected to decrease with increasing binary eccentricity. Kozai-Lidov (KL) oscillations \citep{Kozai1962, Lidov1962} have been studied extensively to analyze several astronomical processes involving bodies that orbit a member of a binary system that begin on highly misaligned orbits. During KL oscillations, the object's inclination is exchanged for eccentricity, and vice versa. These processes include asteroids and irregular satellites \citep{Kozai1962,Nesvorny2003}, artificial satellites \citep{Lidov1962}, tidal disruption events \citep{Chen2011}, formation of Type Ia supernovae \citep{Kushnir2013}, triple star systems \citep{Eggleton2001,Fabrycky2007}, planet formation with inclined stellar companions \citep{Wu2003,Takeda2005}, giant outbursts in Be/X-ray binaries \citep{Martinetal2014,MartinFranchini2019}, inclined planetary companions \citep{Nagasawa2008}, mergers of binaries in galactic nuclei \citep{Blaesetal2002,Antonini2012,Hamers2018,Hoang2018,Fragione2019a,Fragione2019b}, stellar compact objects \citep{Thompson2011}, and blue straggler stars \citep{Perets2009}.
A highly misaligned initially circular disc around one component of a binary undergoes KL cycles in which its inclination is exchanged for eccentricity, and vice versa \citep{Martinetal2014}. Due to disc dissipation by viscosity and shocks, these oscillations are typically significantly damped after a few oscillations. KL oscillations can occur in a fluid disc with a wide variety of disc and binary parameters \citep{Fu2015}. When the disc becomes eccentric, it overflows its Roche lobe and transfers material to the companion star \citep{Franchini2019}. Self-gravity of a disc can suppress disc KL oscillations if the disc is close to being gravitationally unstable \citep{Fu2015b}. KL oscillations in a circumstellar disc may have significant consequences for planet formation since strong shocks in the gas are produced during high eccentricity phases \citep{Fu2017}. A misaligned circumbinary disc may form misaligned circumstellar discs around the individual binary components \cite[e.g.,][]{Nixon2013,Smallwood2021}. As discussed above, such highly misaligned discs around the individual components may then be unstable to the KL mechanism \citep{Martinetal2014}. \cite{Smallwood2021} simulated the flow of gas originating from a circumbinary disc initially misaligned by $60^{\circ}$. The misaligned gas streams that flow into the binary cavity result in the formation of highly tilted circumstellar discs around each binary component. The inclined circumstellar discs in turn undergo KL oscillations. However, the KL oscillations are long-lived, due to the continuous accretion of inclined material from the circumbinary disc. Long-lived KL cycles have important implications for planet formation in binary systems. In this work, we extend the previous study of \cite{Smallwood2021} and consider more highly inclined circumbinary discs. We first revisit the dynamics of highly inclined test particle orbits around one component of a binary in Section~\ref{sec::kozai_testpart}.
In Section~\ref{sec::setup}, we describe the setup for our hydrodynamical simulations. In Section~\ref{sec::results_CPD}, we discuss the results of our circumprimary disc simulations. We simulate a highly inclined circumprimary disc in a binary to explore the dynamics of the KL cycles. Previous studies have only dealt with circumprimary disc inclinations $\lesssim 60^\circ$, while we consider higher tilts, including a polar circumprimary disc. In Section~\ref{sec::results_CBD}, we show the results of our hydrodynamical simulations with an initial circumbinary disc, where we consider the flow of material from discs with various initial misalignments, including a polar circumbinary disc. Finally, a summary is given in Section~\ref{sec::summary}. \begin{figure} \includegraphics[width=\columnwidth]{kozai_particle_i.eps} \centering \caption{Eccentricity (upper panel) and inclination (lower panel) evolution of circumprimary test particles under the influence of a circular binary for initially circular orbit particles. We vary the initial particle orbital tilt, $i_0$, beginning with $30^\circ$ (black), $45^\circ$ (blue), $60^\circ$ (red), $75^\circ$ (green), $80^\circ$ (yellow), $85^\circ$ (purple), and $90^\circ$ (pink). The initial orbital radius of the particle is set at $r_0 = 0.06a$, where $a$ is the separation of the binary. The time is in units of binary orbital period $P_{\rm orb}$.} \label{fig::kozai_particle_i} \end{figure} \section{Kozai-Lidov oscillations of test particles} \label{sec::kozai_testpart} Before considering discs, we consider the properties of test particle orbits that undergo KL oscillations. As a consequence of the conservation of the component of the angular momentum that is perpendicular to the binary orbital plane, the test particle's inclination is recurrently exchanged for eccentricity. 
This conservation is expressed as \begin{equation} \sqrt{1-e^2_{\rm p}}\cos{i_{\rm p}} \approx \rm const, \label{eq::ang_mom} \end{equation} where $i_{\rm p}$ is the particle inclination with respect to the binary orbital plane and $e_{\rm p}$ is the eccentricity of the test particle. An initially circular orbit particle first gains eccentricity while reducing its orbital tilt (i.e., moving towards alignment, which means higher values of $|\cos{i_{\rm p}}|$) and then circularizes while regaining orbital tilt back to its original inclination. For an initially circular orbit particle, KL oscillations only occur if the initial tilt of the test particle $i_{\rm p0}$ satisfies $\cos^2{i_{\rm p0}} < \cos^2{i_{\rm cr}} = 3/5$ \citep{Innanen1997}, which requires that $39^\circ \lesssim i_{\rm p0} \lesssim 141^\circ$. From Eq.~(\ref{eq::ang_mom}), an initially circular particle orbit can achieve a maximum eccentricity given by \begin{equation} e_{\rm max} = \sqrt{1 - \frac{5}{3}\cos^2{i_{\rm p0}}}. \end{equation} The increase in a circular particle's eccentricity can be quite significant. For example, if the particle's initial orbit is tilted by $60^\circ$, the maximum eccentricity reached during a KL cycle is about $0.75$. For eccentric binaries, stronger effects from KL oscillations have been found to exist \citep{Ford2000,Lithwick2011,Naoz2011,Naoz2013a,Naoz013b,Teyssandier2013,Li2014,Liu2015}.
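Both the critical angle and the maximum eccentricity are simple to evaluate; the sketch below (a minimal helper, with names of our choosing) recovers $i_{\rm cr} \approx 39^\circ$ and the value $e_{\rm max} \approx 0.75$ quoted above for $i_{\rm p0} = 60^\circ$:

```python
import numpy as np

# Critical inclination for KL oscillations: cos^2(i_cr) = 3/5.
i_crit = np.degrees(np.arccos(np.sqrt(3.0 / 5.0)))   # ~39.2 degrees

def e_max(i0_deg):
    """Maximum eccentricity of an initially circular orbit with tilt i0_deg,
    from sqrt(1 - e^2) cos(i) = const; returns 0 for KL-stable tilts."""
    c2 = np.cos(np.radians(i0_deg)) ** 2
    return np.sqrt(max(0.0, 1.0 - 5.0 * c2 / 3.0))
```

For $i_{\rm p0} = 30^\circ$ the function returns zero, consistent with the absence of KL oscillations below the critical tilt, while for a polar orbit ($i_{\rm p0} = 90^\circ$) the maximum eccentricity reaches unity.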
The KL oscillation period for a particle in the potential of an eccentric binary is approximately given by \begin{equation} \frac{\tau_{\rm KL}}{P_{\rm b}} \approx \frac{M_{\rm 1} + M_{\rm 2}}{M_{\rm 2}} \frac{P_{\rm b}}{P} (1 - e_{\rm b}^2)^{3/2} \label{eq::KL_period} \end{equation} \citep{Holman1997,Innanen1997,Kiseleva1998}, where $M_1$ and $M_2$ are the masses of the primary and secondary components of the binary, respectively, $P = 2 \pi/ \sqrt{GM_1/a_{\rm p}^3}$ is the orbital period of the particle with semimajor axis $a_{\rm p}$, $P_{\rm b} = 2\pi / \Omega_{\rm b}$ is the orbital period of the binary, $e_{\rm b}$ is the binary eccentricity, and $\Omega_{\rm b} = \sqrt{G(M_1 + M_2)/a_{\rm b}^3}$ is the binary orbital frequency for binary semimajor axis $a_{\rm b}$. To simulate an inclined circumprimary test particle in a binary, we use the $N$--body integrator, {\sc MERCURY} \citep{Chambers1999}. The test particle orbits the primary star with an initial tilt $i_0$ relative to the binary orbital plane. The binary components have equal mass so that $M_1 =M_2 = M/2$, where $M$ is the total mass of the binary. \cite{Fu2015b} ran numerous test particle orbits showing the effects the particle and binary parameters have on the induced KL oscillations. Following their work, we model an eccentric inclined particle around one component of an eccentric binary, which is more applicable to observed binary systems. We first simulate an inclined particle in a circular binary to match previous results. Fig.~\ref{fig::kozai_particle_i} shows the eccentricity and inclination, as functions of time, of a circumprimary particle that begins on a circular orbit. The analytic solution for these test particle orbits in the quadrupole approximation is given in \cite{Lubow2021}. We consider various initial tilts of the test particle orbit. The critical inclination that the test particle orbit must have to induce KL cycles is $\sim 39^\circ$.
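For the particle setup used here ($M_1 = M_2 = M/2$, $r_0 = 0.06a$), equation (\ref{eq::KL_period}) can be evaluated directly; the sketch below (function name ours; masses and lengths in units of $M$ and the binary separation) gives $\tau_{\rm KL}$ of order $10^2\,P_{\rm b}$:

```python
import numpy as np

def kl_period_over_pb(m1, m2, a_p, a_b, e_b):
    """tau_KL / P_b = (M1 + M2)/M2 * (P_b/P) * (1 - e_b^2)^(3/2),
    with P/P_b = sqrt((M/M1) * (a_p/a_b)**3) for a particle orbiting M1."""
    M = m1 + m2
    p_ratio = np.sqrt((M / m1) * (a_p / a_b) ** 3)   # particle period P / P_b
    return (M / m2) / p_ratio * (1.0 - e_b ** 2) ** 1.5

tau_circ = kl_period_over_pb(0.5, 0.5, 0.06, 1.0, 0.0)   # circular binary
tau_ecc = kl_period_over_pb(0.5, 0.5, 0.06, 1.0, 0.1)    # e_b = 0.1
```

The small binary eccentricity $e_{\rm b} = 0.1$ shortens the KL period only by the factor $(1 - e_{\rm b}^2)^{3/2} \approx 0.985$.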
Thus, a particle tilt of $30^\circ$ (black line) does not undergo KL oscillations. As the initial inclination of the particle increases, the KL oscillations become more frequent, and the growth in the eccentricity becomes more prominent (in agreement with Fig.~1 in \cite{Fu2015b}). The trough in the inclination profile of a test particle becomes narrower with initial inclination. A particle with an initial orbit tilt of $90^\circ$ becomes unstable and collides with the primary star during the first KL oscillation because the particle's eccentricity exceeds $1.0$. The eccentricity of the polar particle increases almost up to its maximum eccentricity before the tilt begins to change. Next, we set the initial particle tilt to $60^\circ$ around a slightly eccentric binary with $e_{\rm b} = 0.1$, as we will consider in the disc simulations. We model various initial test particle eccentricities ranging from $0.0$ to $0.5$. Figure~\ref{fig::kozai_particle_e} shows the eccentricity and inclination of eccentric circumprimary particles as a function of time in binary orbital periods. An inclined circular test particle within an eccentric binary has an increased frequency in KL oscillations when compared to a particle orbiting one component of a circular binary, as expected from equation (\ref{eq::KL_period}). From Figure~\ref{fig::kozai_particle_e}, when the particle eccentricity is increased, the maximum eccentricity reached during a KL oscillation also increases. However, the difference between the initial eccentricity and the maximum eccentricity of the particle decreases as the initial particle eccentricity increases. Lastly, we examine the KL mechanism for a nearly polar particle. From Fig.~\ref{fig::kozai_particle_i}, an initially circular orbit particle with an initial orbital tilt of $85^\circ$ undergoes KL oscillations but, unlike the $90^\circ$ case, remains stable.
We consider a nearly polar orbit particle with an initial orbital tilt $i_0 = 85^\circ$ around a binary with eccentricity $e_{\rm b} = 0.1$. In Fig.~\ref{fig::kozai_particle_e2} we show the particle eccentricity and inclination as a function of time in binary orbital periods. The various lines correspond to different initial particle eccentricities ranging from $0.0$ to $0.5$. For all values of the initial particle eccentricity we consider, the particle proceeds through KL cycles in a periodic fashion. Unlike the particle beginning at a tilt of $60^\circ$, a nearly polar particle reaches a maximum eccentricity close to unity during each KL oscillation, regardless of its initial eccentricity. The minimum inclination reached during each KL oscillation is also roughly independent of the particle's initial eccentricity. \begin{figure} \includegraphics[width=\columnwidth]{kozai_particle_e.eps} \centering \caption{Eccentricity (upper panel) and inclination (lower panel) evolution of circumprimary test particles under the influence of a binary with eccentricity $e_{\rm b} = 0.1$. The initial tilt of the particle orbit is set to $60^\circ$. We vary the initial particle eccentricity $e_0$ beginning with $e_0 = 0$ (black), $0.1$ (blue), $0.2$ (red), $0.3$ (green), $0.4$ (yellow), and $0.5$ (purple). The initial orbital radius of the particle is set at $r_0 = 0.06a$, where $a$ is the separation of the binary. The time is in units of binary orbital period $P_{\rm orb}$.} \label{fig::kozai_particle_e} \end{figure} \section{Hydrodynamical-simulation setup} \label{sec::setup} We use the smoothed particle hydrodynamics (SPH) code {\sc phantom} \citep{Price2018b} to model gaseous circumbinary and circumstellar discs.
{\sc phantom} has been tested extensively for modeling misaligned circumbinary discs \citep{Nixon2012,Nixon2013,Nixon2015,Facchini2018,Smallwood2019,Poblete2019,Smallwood2020a,Aly2020,Hirsh2020,Smallwood2021}, as well as misaligned circumstellar discs around individual binary components \citep[e.g.][]{Martin2014,Dougan2015,Franchini2020}. The suite of simulations is summarised in Table~\ref{table::setup}. In this section we describe the setup for the binary star, circumprimary disc, and circumbinary disc in further detail. \begin{figure} \includegraphics[width=\columnwidth]{kozai_particle_e2.eps} \centering \caption{Same as Fig.~\ref{fig::kozai_particle_e} but for nearly polar test particles with an initial orbital tilt $i_0 = 85^\circ$.} \label{fig::kozai_particle_e2} \end{figure} \subsection{Binary star setup} We model the binary star system as a pair of sink particles, with an initial binary separation $a$. The binary is not static but rather evolves freely in time. Each sink particle is given an initial mass with $M_1$ being the primary mass and $M_2$ being the secondary mass. The total binary mass is thereby $M = M_1 + M_2$. All of our simulations assume an equal-mass binary ($M_1 = M_2$). In Cartesian coordinates, the orbit of the binary lies in the $x$-$y$ plane initially. The binary begins at apastron along the $x$-axis. The massive sink particles have a hard accretion boundary, meaning that when gas particles penetrate the sink accretion radius, their mass and angular momentum are deposited onto the star \cite[e.g.,][]{Bate1995}. A large accretion radius is often used to reduce the computation time significantly by not resolving close-in particle orbits. In this work, however, we are interested in resolving the formation and evolution of the circumstellar material.
Therefore, we adopt a relatively small accretion radius of $0.05a$ for simulations that begin with a circumbinary disc and an accretion radius of $0.025a$ for simulations that begin with a circumprimary disc. Using a smaller accretion radius for the circumprimary disc simulations ensures that the disc lifetime is longer, along with higher disc resolution. The more eccentric the binary, the smaller the outer truncation radius for the circumstellar discs \citep{Artymowicz1994}. Having a small binary eccentricity helps with the resolution of the circumstellar discs. On the other hand, to have a stable polar circumbinary disc, the binary eccentricity needs to be non-zero. The initial binary eccentricity is set to $e_{\rm b} = 0.1$, with the binary eccentricity vector along the positive $x$--axis. With this value of binary eccentricity, the critical tilt of the circumbinary disc to remain nearly polar is $\sim 77^\circ$ \cite[see eq. 33 in ][]{MartinLubow2019}. \begin{table*} \centering \caption{The setup of the SPH simulations that include an initial circumprimary disc (CPD) or circumbinary disc (CBD). The table lists the initial parameters beginning with the disc tilt $i_0$, inner disc radius $r_{\rm in}$, outer disc radius $r_{\rm out}$, $\alpha$ viscosity parameter, disc aspect ratio at inner disc radius $H/r_{\rm in}$, disc aspect ratio at outer disc radius $H/r_{\rm out}$, the number of particles, and whether or not the circumstellar discs undergo the Kozai-Lidov (KL) instability.
} \begin{tabular}{lccccccccc} \hline Model & Disc Setup & $i_0/^\circ$ & $r_{\rm in}/a$ & $r_{\rm out}/a$ & $\alpha$ & $H/r_{\rm in}$ & $H/r_{\rm out}$ & $\#$ Particles & KL unstable?\\ \hline \hline run1 & CPD & $60$ & $0.025$ & $0.25$ & $0.01$ & $0.035$ & $0.02$ & $750,000$ & Yes\\ run2 & CPD & $70$ & $0.025$ & $0.25$ & $0.01$ & $0.035$ & $0.02$ & $750,000$ & Yes \\ run3 & CPD & $80$ & $0.025$ & $0.25$ & $0.01$ & $0.035$ & $0.02$ & $750,000$ & Yes \\ run4 & CPD & $90$ & $0.025$ & $0.25$ & $0.01$ & $0.035$ & $0.02$ & $750,000$ & Yes \\ run5 & CPD & $100$ & $0.025$ & $0.25$ & $0.01$ & $0.035$ & $0.02$ & $750,000$ & Yes \\ \hline run6$^*$ & CBD & $60$ & $1.6$ & $2.6$ & $0.1$ & $0.1$ & $0.088$ & $1.5\times 10^6$ & Yes \\ run7 & CBD & $60$ & $1.6$ & $2.6$ & $0.1$ & $0.1$ & $0.088$ & $750,000$ & Yes \\ run8 & CBD & $90$ & $1.6$ & $2.6$ & $0.1$ &$0.1$ & $0.088$ & $1.5\times 10^6$ & Yes \\ \hline \multicolumn{10}{l}{$^{*}$ \text{Simulation from \citet{Smallwood2021}}}\\ \hline \end{tabular} \label{table::setup} \end{table*} \subsection{Circumprimary disc setup} To model a circumprimary disc, we follow the methods of \cite{Martin2014}. Runs 1-5 in Table~\ref{table::setup} begin with a circumprimary disc. The inner and outer disc radii are set at $r_{\rm in} = 0.025a$ and $r_{\rm out} = 0.25a$, respectively, with an initial total disc mass $M_{\rm CPD} = 10^{-3} M$. The circumprimary disc consists of $750,000$ equal-mass Lagrangian particles. We neglect any effects of self-gravity. The disc surface density profile is initially a power law distribution given by \begin{equation} \Sigma(r) = \Sigma_0 \bigg( \frac{r}{r_{\rm in}} \bigg)^{-p}, \label{eq::sigma} \end{equation} where we set $p = 3/2$. We adopt a locally isothermal disc with sound speed $c_{\rm s} \propto R^{-3/4}$, $H/r = 0.035$ at $r = r_{\rm in}$, and $H/r = 0.02$ at $r = r_{\rm out}$.
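As a consistency check on these numbers: for a Keplerian disc, $H/r = c_{\rm s}/v_{\rm K} \propto R^{-3/4} R^{1/2} = R^{-1/4}$, so the aspect ratios at the two boundary radii should differ by a factor of $10^{-1/4}$ (a minimal sketch, variable names ours):

```python
r_in, r_out = 0.025, 0.25     # disc radii in units of the binary separation a
h_in = 0.035                  # H/r at r_in
# c_s ~ R^(-3/4) and v_K ~ R^(-1/2)  =>  H/r = c_s / v_K ~ R^(-1/4)
h_out = h_in * (r_out / r_in) ** (-0.25)
```

This gives $h_{\rm out} \approx 0.0197$, consistent with the quoted $H/r = 0.02$ at $r_{\rm out}$.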
With this prescription, the viscosity parameter $\alpha$ and $\langle h \rangle / H$ are effectively constant over the radial extent of the disc \citep{Lodato2007}. For the circumprimary disc simulations, we take the \cite{Shakura1973} $\alpha$ parameter to be $0.01$. To accomplish this, the SPH artificial viscosity coefficients are set as $\alpha_{\rm AV} = 0.18$ and $\beta_{\rm AV} = 2.0$. The disc is resolved with shell-averaged smoothing length per scale height $\langle h \rangle / H \approx 0.55$. \subsection{Circumbinary disc setup} To model an initially flat but tilted gaseous circumbinary disc, we follow the methods of \cite{Smallwood2021}. Runs 6, 7, and 8 in Table~\ref{table::setup} describe the simulations of a circumbinary disc. The disc initially consists of $1.5\times10^6$ equal-mass Lagrangian SPH particles. We also model a $750,000$ particle simulation for a resolution study. The simulations run for $45\, P_{\rm orb}$, where $P_{\rm orb}$ is the orbital period of the binary. This is sufficient time for the forming circumstellar discs to reach a quasi-steady state. We simulate initially highly misaligned disc inclinations of $i_0 = 60^\circ, 90^\circ$. A disc with $i_0 = 90^\circ$ is in a polar configuration, where the angular momentum vector of the disc is aligned to the eccentricity vector of the binary. At the beginning of our simulations, we select an initial inner disc radius, $r_{\rm in}$, and outer disc radius, $r_{\rm out}$, where the initial total disc mass, $M_{\rm CBD}$, is confined. All of the simulations model a low-mass circumbinary disc such that $M_{\rm CBD} = 10^{-3}M$. We choose the circumbinary disc to be radially very narrow and close to the binary orbit. This is done to maximise the accretion rate onto the binary and hence the resolution of the circumstellar discs \cite[e.g.,][]{Smallwood2021}. For our simulations, we take $r_{\rm in} = 1.6a$ and $r_{\rm out} = 2.6a$.
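The artificial-viscosity calibration quoted above for the circumprimary runs can be checked against the standard SPH mapping $\alpha_{\rm SS} \approx (\alpha_{\rm AV}/10)\,\langle h \rangle/H$ \citep{Lodato2010}. A minimal sketch (the helper function is our own, not part of the SPH code):

```python
def alpha_ss(alpha_av, h_over_H):
    """Shakura & Sunyaev alpha implied by the SPH artificial viscosity,
    using the standard mapping alpha_SS ~ (alpha_AV / 10) * <h>/H
    (Lodato & Price 2010)."""
    return alpha_av * h_over_H / 10.0

# Circumprimary runs: alpha_AV = 0.18 at <h>/H ~ 0.55 recovers the
# target alpha_SS ~ 0.01 quoted in the text.
alpha_cpd = alpha_ss(0.18, 0.55)
```

This also makes explicit why $\alpha$ is resolution dependent: $\langle h \rangle/H$ grows as the particle number drops, raising the effective $\alpha_{\rm SS}$ in under-resolved regions.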
The tidal torque is weaker at a given radius for a more highly misaligned disc, which allows the inner disc radius to lie closer to the binary than a coplanar disc \cite[e.g.,][]{Lubow2015,Miranda2015,Lubow2018}. The inner truncation radius of a polar circumbinary disc is around $1.6\,a$ \citep{Franchini2019b}, much smaller than the $2-3\, a$ expected for coplanar discs \citep{Artymowicz1994}. The disc surface density profile follows from Equation~(\ref{eq::sigma}). The physical disc viscosity is incorporated by using artificial viscosity $\alpha^{\rm av}$, which is detailed in \cite{Lodato2010}. By using our surface density profile and a disc aspect ratio $H/r = 0.1$ at $r_{\rm in}$, the shell-averaged smoothing length per scale height $\langle h \rangle / H$ and the disc viscosity parameter $\alpha$ are constant over the radial extent of the disc \citep{Lodato2007}. The circumbinary disc is initially resolved with $\langle h \rangle / H \approx 0.11$. The parameters for the simulations require a high viscosity in order to maximise the accretion rate on to the circumstellar discs and provide better resolution. We consider a relatively high value for the \cite{Shakura1973} $\alpha_{\rm SS}$ of $0.1$. In a more realistic system, the disc viscosity may be lower. In order to more accurately simulate the formation and development of circumstellar discs, we adopt the locally isothermal equation of state of \cite{Farris2014} and set the sound speed $c_{\rm s}$ to be \begin{equation} c_{\rm s} = {\cal{F}} c_{\rm s0}\bigg( \frac{a}{M_1 + M_2}\bigg)^q \bigg( \frac{M_1}{r_1} + \frac{M_2}{r_2}\bigg)^q, \label{eq::EOS} \end{equation} where $r_1$ and $r_2$ are the radial distances from the primary and secondary stars, respectively, and $c_{\rm s0}$ is a constant with dimensions of velocity. We set $q = 3/4$. ${\cal{F}}$ is a dimensionless function of position that we define below.
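A direct transcription of Equation~(\ref{eq::EOS}), together with the cutoff function ${\cal F}$ defined in the next paragraph, might look as follows (code units with $G = 1$ and an equal-mass binary; the function names and default arguments are our own illustrative choices, not the simulation source):

```python
import numpy as np

q = 0.75  # temperature power-law index from the text

def F(r1, r2, r_c=0.35):
    """Sound-speed reduction factor near each star (r_c in units of a).
    The factor sqrt(0.001) inside the cutoff radius lowers c_s, which
    lengthens the viscous time and boosts circumstellar-disc resolution."""
    return np.where(np.minimum(r1, r2) < r_c, np.sqrt(0.001), 1.0)

def sound_speed(r1, r2, a=1.0, M1=0.5, M2=0.5, cs0=1.0):
    """Locally isothermal sound speed of Farris et al. (2014), Eq. (2),
    with r1, r2 the distances from the primary and secondary."""
    return F(r1, r2) * cs0 * (a / (M1 + M2))**q * (M1 / r1 + M2 / r2)**q

# Far from the binary (r1 ~ r2 ~ R >> a) this reduces to
# c_s ~ cs0 * (a / R)**0.75, i.e. set by distance from the centre of mass;
# near either star it is set by that star alone.
```

The far-field limit makes the role of each factor explicit: the first bracket is a normalisation, the second interpolates between the two stellar potentials.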
This sound speed prescription guarantees that the temperature profiles in the circumprimary and circumsecondary discs are set by the primary and secondary stars, respectively. For $r_1, r_2 \gg a$, $c_{\rm s}$ is set by the distance from the binary centre of mass. To increase the resolution of the circumstellar discs, we include a function $\cal{F}$ in Equation (\ref{eq::EOS}) as detailed in \cite{Smallwood2021}. The purpose of $\cal{F}$ is to modify the sound speed around each binary component so that the viscous timescale is longer. This increases the mass (and hence the resolution) in the steady-state circumstellar discs. We take \begin{equation} \cal{F}=\begin{cases} \sqrt{0.001}, & \text{if $r_1\, {\rm or}\, r_2 < r_{\rm c}$},\\ 1, & \text{otherwise},\\ \end{cases} \end{equation} where $r_{\rm c}$ is the cutoff radius. We set a cutoff radius of $r_{\rm c}=0.35a$ from each binary component \cite[e.g.,][]{Smallwood2021}. Using the prescription mentioned above ensures that the disc aspect ratio of the circumstellar discs at radius $r=0.1a$ is $H/r \sim 0.01$, which is one-tenth of the disc aspect ratio at the initial inner circumbinary disc radius. \subsection{Analysis routine} We analyse the disc and binary parameters as a function of time. The parameters include tilt, eccentricity, the longitude of the ascending node, mass, and mass accretion rate. To probe the circumprimary disc simulations, we average over particles in the radial range from $0.025a$ to a distance of $0.30a$. For the circumbinary disc simulations, we average over particles in the radial range from $1.4a$ to a distance of $10a$. For the forming circumstellar discs, we average over all particles bound to each binary component (i.e., the specific energies, kinetic plus potential, of the particles are negative, neglecting the thermal energy). The tilt, $i$, is defined as the angle between the initial angular momentum vector of the binary (the $z$-axis) and the angular momentum vector of the disc. 
The longitude of the ascending node, $\phi$, is measured relative to the $x$-axis (the initial binary eccentricity vector). \begin{figure} \includegraphics[width=\columnwidth]{kozai.eps} \centering \caption{ Evolution of a KL unstable circumprimary disc as a function of time in units of the binary orbital period $P_{\rm orb}$. We simulate five different initial disc inclinations, which are $60^\circ$ (run1 from Table~\ref{table::setup}, black), $70^\circ$ (run2, blue), $80^\circ$ (run3, red), $90^\circ$ (run4, green), and $100^\circ$ (run5, yellow). The disc parameters are tilt $i$ (panel 1), eccentricity $e$ (panel 2), longitude of the ascending node $\phi$ (panel 3), and disc mass $M_{\rm d}$ (panel 4). The mass accretion rate $\dot{M}$ onto the primary star is shown in panel 5. } \label{fig::kozai_disc} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.49\columnwidth]{k_xz_t0.eps} \includegraphics[width=0.49\columnwidth]{k_yz_t0.eps} \includegraphics[width=0.49\columnwidth]{k_xz_t10.eps} \includegraphics[width=0.49\columnwidth]{k_yz_t10.eps} \includegraphics[width=0.49\columnwidth]{k_xz_t15.eps} \includegraphics[width=0.49\columnwidth]{k_yz_t15.eps} \end{center} \caption{The evolution of the polar circumprimary disc (run4 from Table~\ref{table::setup}). The white circles denote the eccentric orbit binary components with an initial binary separation of $a$. The top row shows the initial disc setup. The middle and bottom rows show the disc evolution at $t = 10\, P_{\rm orb}$ and $t = 15\, P_{\rm orb}$, respectively, where $P_{\rm orb}$ is the binary orbital period. The color denotes the gas surface density, with the orange regions being about three orders of magnitude larger than the purple regions. The left column shows the $x$--$z$ plane, and the right column shows the $y$--$z$ plane. At $t = 10\, P_{\rm orb}$, the circumprimary disc is highly eccentric due to the Kozai-Lidov instability.
Also, at this time, a circumsecondary disc is being formed from material flowing close to the secondary binary component from the eccentric circumprimary disc. At $t = 15\, P_{\rm orb}$, the circumprimary disc has completely dissipated from being accreted onto the primary star and transferring material to the secondary star. At this time, there is more material in the newly formed circumsecondary disc. } \label{fig::splash_90pri} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{resolution.eps} \centering \caption{Resolution study for a circumbinary disc that is initially misaligned by $60^\circ$. The blue curves represent the simulation with initially $1.5\times 10^6$ particles in the circumbinary disc, while the red curves denote the simulation with initially $750,000$ particles. The first four panels show the disc parameters for the newly forming circumprimary disc as a function of time in units of the binary orbital period, $P_{\rm orb}$. The disc parameters are tilt $i$ (panel 1), eccentricity $e$ (panel 2), longitude of the ascending node $\phi$ (panel 3), and disc mass $M_{\rm d}$ (panel 4). The black dotted curve in the third panel denotes the circumbinary disc. The lower panel shows the mass accretion rate onto the primary star $\dot{M}_{\rm pri}$ (panel 5). } \label{fig::i60} \end{figure} \section{Hydrodynamical results with a circumprimary disc} \label{sec::results_CPD} This section considers the evolution of a circumprimary disc in the absence of accretion from a circumbinary disc. This enables us to disentangle the effect of accretion onto the circumstellar discs. We focus on large circumprimary disc misalignments in an eccentric binary star system. We consider five different initial disc tilts, $60^\circ$ (run1 from Table~\ref{table::setup}), $70^\circ$ (run2), $80^\circ$ (run3), $90^\circ$ (run4), and $100^\circ$ (run5).
Figure~\ref{fig::kozai_disc} shows the disc tilt, eccentricity, the longitude of the ascending node, the mass of the circumprimary disc, and the accretion rate onto the primary star as a function of time in binary orbital periods. The disc exhibits KL cycles for each initial tilt, where the disc eccentricity and inclination are exchanged. For a disc with an initial tilt of $60^\circ$, \cite{Martinetal2014} found that the first KL oscillation occurred around $10\, \rm P_{\rm orb}$ for a circular binary. In our case, the disc with the same initial tilt undergoes the first KL oscillation much sooner due to the binary having a slightly eccentric orbit \cite[see Fig.~12 in][]{Fu2015}. Due to viscous dissipation and the lack of circumbinary material, the KL oscillations damp quickly in time. For higher initial inclinations, $70^\circ$, $80^\circ$, $90^\circ$ and $100^\circ$, the discs do not survive after one KL oscillation for our given sink size. The discs become very eccentric, which leads to the majority of the disc material being accreted by the primary star. Increasing the resolution of these simulations does not lengthen the disc lifetime. However, if we were to use a smaller sink size, then the disc could survive through the KL oscillations. A smaller sink size would ensure that a larger portion of the disc could survive. An accretion radius of $\sim 0.01\, \rm au$ is comparable to the size of the star, but we simulate a larger sink size for computational reasons and to compare with the circumbinary disc simulations detailed in the next Section. The initially polar disc's tilt does not change much from polar before the majority of the disc is accreted. This is likely a consequence of the high disc eccentricities that develop, which is consistent with the results for test particle orbits (see Fig.~\ref{fig::kozai_particle_i}).
In the retrograde case, $i_0 = 100^\circ$, as the disc eccentricity increases, the inclination also increases, opposite to the prograde cases. Highly inclined particle orbits experience a large (nearly $180^\circ$) shift in $\phi$ within a small time interval centered about the eccentricity maximum (see the plot for $\Omega(t)$ in Figure 1 of \cite{Lubow2021}). This large shift does not appear in Figure~\ref{fig::kozai_disc} or in any of our other phase results. We are not sure why this is the case. Perhaps the disc is unable to respond to such a large shift within a short time. We further examine the evolution of the polar ($i_0 = 90^\circ$) circumprimary disc. In Fig.~\ref{fig::splash_90pri}, we show the polar circumprimary disc structure at three different times, $t=0\, P_{\rm orb}$, $10\, P_{\rm orb}$, and $15\, P_{\rm orb}$. Initially, the polar disc around the primary star (left white dot) is edge-on in the $x$-$z$ plane and face-on in the $y$-$z$ plane. At $t=10\, P_{\rm orb}$, the disc is at the peak of its eccentricity growth from the KL instability. Also, at this time, streams of material from the circumprimary disc flow around the secondary star (right white dot) and begin forming a circumsecondary disc. At $t=15\, P_{\rm orb}$, the circumprimary disc has dissipated due to accretion onto the primary star and the transfer of material to the circumsecondary disc. The newly formed circumsecondary disc is at a lower tilt, below the threshold to induce KL cycles. \section{Hydrodynamical results with a circumbinary disc} \label{sec::results_CBD} In this section we examine how misaligned and polar circumbinary material flows through the binary cavity and forms circumstellar discs around each binary component. We first conduct a resolution study of our earlier work from \cite{Smallwood2021}, modeling an initially $60^\circ$ misaligned circumbinary disc. We then focus on the polar circumbinary disc case.
\subsection{Resolution Study} We examine a circumbinary disc with an initial misalignment of $i_0 = 60^\circ$ with two different initial numbers of particles, $1.5\times 10^6$ (run6) and $750,000$ (run7). The upper four panels in Figure~\ref{fig::i60} show the circumprimary disc parameters as a function of time. The bottom panel shows the mass accretion rate onto the primary star. The blue curves represent the $1.5\times 10^6$ particle simulation, while the red curves represent the $750,000$ particle simulation. Panels 1 and 2 show the evolution of disc eccentricity and inclination, where the forming circumprimary disc undergoes KL oscillations from the continuous accretion of material from the circumbinary disc. The oscillations damp in time at both resolutions, with the lower resolution simulation damping more quickly. Therefore, the oscillations are likely limited by resolution. If the accretion timescale is long compared to the KL timescale, we expect the KL oscillations to damp over time, similar to the circumprimary disc simulations without accretion shown in the previous Section. If the accretion timescale is short compared to the KL timescale, there should be no KL oscillations present. In this case, the material moves through the disc faster than it becomes unstable to KL oscillations. We expect the optimal oscillations when the timescales are comparable, because the disc is then replenished with mass on the timescale over which the oscillations take place. For the simulation with a $60^\circ$ tilted circumbinary disc, the accretion timescales for the primary and secondary are $\sim 1.5\, \rm P_{orb}$, whereas the KL timescale for this simulation is $\sim 5\, \rm P_{orb}$. The simulation is in the regime where the accretion timescale is shorter than the KL oscillation timescale because when the disc becomes eccentric during the KL oscillations, a large amount of disc material is accreted, reducing the accretion timescale.
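As a rough point of reference for this timescale comparison, a commonly used quadrupole estimate of the KL timescale for material orbiting one component at radius $r$, perturbed by the companion, is $t_{\rm KL} \sim (M_{\rm b}/M_2)\,(P_{\rm b}/P(r))\,(1-e_{\rm b}^2)^{3/2}$ in units of the binary period $P_{\rm b}$. The sketch below (an order-of-magnitude estimate only; the function is our own, not taken from the simulations) evaluates it for an equal-mass, $e_{\rm b} = 0.1$ binary:

```python
import numpy as np

def t_kl(r, a=1.0, M1=0.5, M2=0.5, e_b=0.1):
    """Rough quadrupole KL timescale (in binary orbital periods) for
    material orbiting M1 at radius r, perturbed by the companion M2:
    t_KL ~ (M_b / M2) * (P_b / P(r)) * (1 - e_b**2)**1.5,
    where P(r)/P_b = sqrt((r/a)**3 * (M1 + M2) / M1).
    Order-of-magnitude only; no disc pressure or viscosity included."""
    P_ratio = np.sqrt((r / a)**3 * (M1 + M2) / M1)  # P(r) / P_b
    return ((M1 + M2) / M2) * (1.0 - e_b**2)**1.5 / P_ratio
```

At a characteristic circumstellar-disc radius $r \sim 0.25\,a$ this gives $t_{\rm KL} \sim 11\, \rm P_{orb}$, the same order as the $\sim 5\, \rm P_{orb}$ measured in the simulation; the steep $r^{-3/2}$ dependence means the inner disc cycles much faster than the outer edge.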
However, the accretion timescale is dependent on the disc viscosity. In our hydrodynamical simulations, we use an artificial viscosity to model an expected \cite{Shakura1973} viscosity coefficient. The number of Lagrangian particles determines how close the artificial viscosity is to the actual value. Thus, the $\alpha$ is artificially higher at lower resolutions, leading to a shorter accretion timescale. For our higher-resolution simulation, the $\alpha$ is lower, leading to a longer accretion timescale. Panel 3 in Fig.~\ref{fig::i60} shows the longitude of the ascending node as a function of time. The precession rate of the circumprimary disc is only slightly faster than that of the circumbinary disc on average. In the absence of the effects of KL oscillations, the nodal precession rate of the primary disc, assuming constant surface density $\Sigma$ out to disc radius $r$ from the primary, is given by \begin{equation} \omega_{\rm pr} = - \frac{15M_2 r^3}{32 M_1 a_{\rm b}^3}\cos{(i)} \, \Omega(r), \label{omnonKL} \end{equation} where $i$ is the inclination angle of the primary disc relative to the binary orbital plane and $\Omega = \sqrt{G M_1/r^3}$ is the angular velocity in the disc \citep{Larwoodetal1996}. With $r = 0.35\, a_{\rm b}$, we find $\omega_{\rm pr} = 6^\circ/P_{\rm orb}$ with a revolution period of $\sim 56\, \rm P_{orb}$. Therefore, the circumstellar discs should have nodally precessed $75$ per cent of a revolution in $45\, \rm P_{\rm orb}$. In panel 3 we see that the circumstellar discs have only completed roughly $30$ per cent of a nodal revolution. It is possible that the nodal phase of the circumprimary disc is affected by the phase of accreted gas from the circumbinary disc, which undergoes relatively slow nodal precession. As discussed in Section \ref{sec::results_CPD}, KL oscillations modify the nodal precession rate of a test particle in a way that we do not see in the disc simulations.
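Equation~(\ref{omnonKL}) is straightforward to transcribe; the sketch below (code units $G = a_{\rm b} = 1$; the function name and defaults are ours) encodes the sign convention and the steep $r^{3/2}$ radial scaling of the single-annulus rate:

```python
import numpy as np

def omega_pr(r, i_deg, M1=0.5, M2=0.5, a_b=1.0, G=1.0):
    """Nodal precession rate of a circumprimary disc annulus at radius r,
    Equation (3): omega_pr = -(15 M2 r^3 / 32 M1 a_b^3) cos(i) Omega(r),
    with Omega(r) = sqrt(G M1 / r^3) the Keplerian angular velocity."""
    Omega = np.sqrt(G * M1 / r**3)
    prefac = -(15.0 * M2 * r**3) / (32.0 * M1 * a_b**3)
    return prefac * np.cos(np.radians(i_deg)) * Omega
```

The sign flips at $i = 90^\circ$: tilts below polar precess in a retrograde sense, tilts above polar prograde, and an exactly polar annulus does not precess at all, consistent with the negligible nodal precession of the nearly polar circumstellar discs described later in the text.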
Lastly, the mass in the circumprimary discs oscillates in time, with the troughs corresponding with each high eccentricity period. During each high eccentricity phase, the accretion rate peaks as seen in panel 5. \begin{figure} \begin{center} \includegraphics[width=0.49\columnwidth]{p_xz_t0.eps} \includegraphics[width=0.49\columnwidth]{p_yz_t0.eps} \includegraphics[width=0.49\columnwidth]{p_xz_t25.eps} \includegraphics[width=0.49\columnwidth]{p_yz_t25.eps} \end{center} \caption{The formation of polar circumstellar discs from an initially low-mass polar circumbinary disc (run8). The white circles denote the eccentric orbit binary components with an initial binary separation of $a$. The upper panels denote the initial disc setup, while the bottom panels show the disc evolution at $t = 25\, P_{\rm orb}$, where $P_{\rm orb}$ is the binary orbital period. At this time, nearly polar circumstellar discs are forming around each binary component. The color denotes the gas density using a weighted density interpolation, which gives a mass-weighted line of sight average. The yellow regions are about three orders of magnitude larger than the purple. The left column shows the $x$--$z$ plane, and the right column shows the $y$--$z$ plane. } \label{fig::polar_splash} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{run4_params.eps} \centering \caption{Simulation results for run8 for an initially polar circumbinary disc. The disc parameters are shown for the circumprimary, circumsecondary, and circumbinary discs as a function of time in units of the binary orbital period, $P_{\rm orb}$. The upper four panels show the disc tilt $i$ (panel 1), eccentricity $e$ (panel 2), longitude of the ascending node $\phi$ (panel 3), and disc mass $M_{\rm d}$ (panel 4) for the three discs. 
The lower panel shows the mass accretion rate onto the sinks $\dot{M}$ (panel 5).} \label{fig::i90} \end{figure} \begin{figure} \includegraphics[width=0.49\columnwidth]{vel_left.eps} \includegraphics[width=0.49\columnwidth]{vel_right.eps} \centering \caption{Edge-on view ($x$--$z$ plane) of a polar circumbinary disc (run8) at a time $t = 5\, P_{\rm orb}$. We ignore the main portions of the disc confined within $r < 0.45a_{\rm b}$, where $a_{\rm b}$ is the separation of the binary. The binary components are shown as the green dots. The colours denote the disc surface density, with the orange regions being about three orders of magnitude larger than the purple regions. We overlay the velocity vectors shown by the black arrows. The length of the arrow is proportional to the velocities of the particles. We see two asymmetric lobes of material that are produced by the binary. Several of the velocity vectors are directed away from the plane of the circumbinary disc; however, the material then falls back onto the disc gap.} \label{fig::vel} \end{figure} \subsection{Polar discs} In this section, we present a hydrodynamical simulation of the flow of material from a polar circumbinary disc onto the binary components (run8). The top row of Fig.~\ref{fig::polar_splash} shows the initial configuration of the polar circumbinary disc around an eccentric binary. The bottom row shows the disc structure at $t = 25\, P_{\rm orb}$. The circumbinary disc remains nearly polar ($\sim 90^\circ$) as shown in the $x$-$z$ plane. Material flows from the polar circumbinary disc and forms nearly polar circumstellar discs around each binary component. The cavity size is smaller in the polar disc compared to a coplanar disc simulation as expected \citep{Lubow2015, Miranda2015}. The upper four panels in Fig.~\ref{fig::i90} show the inclination, eccentricity, the longitude of the ascending node, and disc mass for the three discs as a function of time in binary orbital periods. 
The lower panel shows the mass accretion rate onto the sinks. The circumstellar discs form at a time of $\sim 10\, \rm P_{orb}$, later than in the simulation with a lower level of circumbinary disc misalignment. The circumbinary disc tilt evolves in time. Since we model a disc with a non-zero mass, it will align to a generalised polar state with an inclination that is $<90^\circ$ \cite[e.g.,][]{MartinLubow2019, Chen2019}. In this case, the circumstellar discs form slightly retrograde, with a tilt just above $90^\circ$. The primary and secondary discs form with an eccentricity of $\sim 0.25$. However, the polar circumstellar discs undergo the KL instability, which forces the disc eccentricity and tilt to oscillate in time. Looking at panels 1 and 2, we see that as the disc eccentricity increases, the disc tilt also increases, the opposite of the conventional KL case involving prograde orbits. However, this result is consistent with the KL mechanism for retrograde orbits. Panel 3 shows the evolution of the longitude of the ascending node in time. Since the circumprimary and circumsecondary discs are nearly polar, they exhibit very little precession (see equation \ref{omnonKL} and discussion below it). The mass of the polar circumstellar discs oscillates in time (panel 4), likely due to the oscillating disc eccentricity. The polar circumbinary disc has lost $\sim 25$ per cent of its initial mass. The KL oscillations from Fig.~\ref{fig::i90} damp in time. However, from our resolution study, the damping is primarily due to the initial number of particles. The accretion timescale for this simulation is $\sim 15\, \rm P_{orb}$, and the KL timescale in this case is $\sim 10\, \rm P_{orb}$. The accretion timescale is longer in the polar simulation than in the $60^\circ$ simulation because the polar circumstellar discs become less eccentric during each KL cycle, accreting less disc material. 
At higher resolution, we expect the KL oscillations to be long-lived even for polar circumstellar discs. In the bottom-left panel of Fig.~\ref{fig::polar_splash}, we see that some material is flung out of the disc plane on both sides of the polar circumbinary disc. This material forms two lobes, one on each side of the disc. Figure~\ref{fig::vel} shows the edge-on view of the disc surface density, along with the velocity vectors. The material is being flung outwards but remains bound to the binary. Therefore, the material then falls back into the gap region of the circumbinary disc. Throughout the simulation, the material is periodically flung out every $0.5\, P_{\rm orb}$ when the binary components pass through the polar circumbinary disc plane. \begin{figure} \includegraphics[width=\columnwidth]{tilt_radius.eps} \centering \caption{ Circumbinary disc tilt, $i$, as a function of radius, $r$, for the polar circumbinary disc. The color corresponds to the time in binary orbital periods, $\rm P_{orb}$.
} \label{fig::tilt_radius} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.49\columnwidth]{p1_xz.eps} \includegraphics[width=0.49\columnwidth]{p1.eps} \includegraphics[width=0.49\columnwidth]{p6_xz.eps} \includegraphics[width=0.49\columnwidth]{p6.eps} \includegraphics[width=0.49\columnwidth]{p2_xz.eps} \includegraphics[width=0.49\columnwidth]{p2.eps} \includegraphics[width=0.49\columnwidth]{p7_xz.eps} \includegraphics[width=0.49\columnwidth]{p7.eps} \includegraphics[width=0.49\columnwidth]{p3_xz.eps} \includegraphics[width=0.49\columnwidth]{p3.eps} \includegraphics[width=0.49\columnwidth]{p8_xz.eps} \includegraphics[width=0.49\columnwidth]{p8.eps} \includegraphics[width=0.49\columnwidth]{p4_xz.eps} \includegraphics[width=0.49\columnwidth]{p4.eps} \includegraphics[width=0.49\columnwidth]{p9_xz.eps} \includegraphics[width=0.49\columnwidth]{p9.eps} \includegraphics[width=0.49\columnwidth]{p5_xz.eps} \includegraphics[width=0.49\columnwidth]{p5.eps} \includegraphics[width=0.49\columnwidth]{p10_xz.eps} \includegraphics[width=0.49\columnwidth]{p10.eps} \end{center} \caption{Zoomed-in snapshots of the disc surface density showing the flow of material from a polar circumbinary disc onto the nearly polar circumstellar discs. The white circles denote the eccentric orbit binary components with an initial binary separation of $a$. The color denotes the gas density using a weighted density interpolation, which gives a mass-weighted line of sight average. The yellow regions are about three orders of magnitude larger than the purple. We view the orbit of the binary in the $x$--$z$ and $y$--$z$ planes. The snapshots show a period from $20\, \rm P_{orb}$ to $20.9\, \rm P_{orb}$ in increments of $0.1\, \rm P_{\rm orb}$, where $P_{\rm orb}$ is the binary orbital period.
} \label{fig::polar_zoom} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{mass_polar_flow.eps} \centering \caption{The circumprimary disc mass evolution during one binary orbital period, $\rm P_{orb}$, at times $20-21\, \rm P_{orb}$ (blue), $21-22\, \rm P_{orb}$ (red), $22-23\, \rm P_{orb}$ (yellow), $23-24\, \rm P_{orb}$ (purple), and $24-25\, \rm P_{orb}$ (green). The mass of the disc decreases every $0.5\, \rm P_{orb}$. The vertical dashed lines denote the times when the binary is aligned with the circumbinary disc plane during $20-21\, \rm P_{orb}$. An increased flow of material onto the circumstellar discs occurs when the binary is aligned with the circumbinary disc plane.} \label{fig::mass_polar_flow} \end{figure} We further examine the flow of polar circumbinary material onto the forming circumstellar discs. First, we investigate the tilt of the gaseous streams that accrete onto the circumstellar discs as a function of time. Figure~\ref{fig::tilt_radius} shows the circumbinary disc tilt as a function of disc radius. The inner edge of the disc lies roughly at $1.6a$. The curves that are shown at radii $<1.6a$ map the tilt of the streams. We show the disc tilt for a full binary orbital period from $20\, \rm P_{orb}$ to $21\, \rm P_{orb}$ in increments of $0.1\, \rm P_{orb}$. At every $0.5\, \rm P_{orb}$, the tilt of the streams is low, $\sim 80^\circ$. Away from these half-period increments, the tilt of the streams increases beyond $90^\circ$. For example, at times $20.2-20.3\, \rm P_{orb}$ and $20.6-20.7\, \rm P_{orb}$, the streams are highly tilted. Recall that the forming circumstellar discs initially form at a high disc tilt, $> 90^\circ$. Therefore, whenever the gaseous streams are highly tilted, there is an increased accretion of material onto the circumstellar discs from the circumbinary disc.
When the streams are less inclined, every $0.5\, \rm P_{orb}$, there will be less material accreted onto the polar circumstellar discs. This phenomenon is also consistent with Fig.~\ref{fig::vel}, where material is flung out of the plane of the circumbinary disc every $0.5\, \rm P_{orb}$. We test this by further visualizing the inflow of material. Figure~\ref{fig::polar_zoom} shows snapshots of zoomed-in views in the $x$--$z$ and $y$--$z$ planes of the disc surface density, showing the gaseous streams accreting onto the nearly polar circumstellar discs. The snapshots show the flow of material over $20\, \rm P_{orb}$ to $20.9\, \rm P_{orb}$ in increments of $0.1\, \rm P_{orb}$. Higher density streams occur at times $20.3\, \rm P_{orb}$ and $20.7\, \rm P_{orb}$. The flow of material decreases every $0.5\, \rm P_{orb}$ during the orbit. At these times, the streams are less dense, leading to less material accreting onto the circumstellar discs. We relate the flow of material from Fig.~\ref{fig::polar_zoom} to the mass of the circumstellar discs. Figure~\ref{fig::mass_polar_flow} shows the mass of the circumprimary disc from $20\, \rm P_{orb}$ to $25\, \rm P_{orb}$, with each orbital period folded on top of the others. The vertical dashed lines denote the times when the binary is aligned with the circumbinary disc plane, which is assumed when the stars are both aligned with the $x$--$z$ plane. Each time the binary aligns to the plane of the disc, the masses of the circumstellar discs increase. The mass of the disc decreases every $0.5\, \rm P_{orb}$. This behaviour repeats every orbital period. Overall, the disc mass decreases in time due to the KL mechanism. \section{Summary} \label{sec::summary} In this work, we investigated the flow of material from a circumbinary disc that results in the formation of circumstellar discs around each binary component. We simulated an initially highly misaligned and polar circumbinary disc using three-dimensional SPH.
We considered cases of low initial binary eccentricity (typically $e_{\rm b}=0.1$) and binary mass ratio of unity. We also simulated cases of test particles around the primary star and cases of circumprimary discs only (i.e., no circumbinary or circumsecondary discs) for comparison. In order to carry out these simulations in a reasonable amount of time, we made some compromises on our choice of parameters. In particular, we introduced a higher viscosity parameter for the circumbinary disc than is likely to occur and a lower temperature of the gas in the gap region. These choices were made to improve the resolution of the simulations. Even with these parameters, the resolution is still playing a role in our results (see Fig.~\ref{fig::i60}). While we have chosen the disc parameters ($\alpha$ and $H/R$) in our simulations to maximise the accretion rate on to the binary components and therefore the simulation resolution, we expect the general behaviour to persist for more realistic parameters applicable to protoplanetary discs. The mass of the circumstellar disc scales with the infall accretion rate. If the resolution of the circumstellar disc is too poor, then the disc artificially accretes rapidly due to the artificially enhanced effects of viscosity at low density in the SPH code. We first examined the behavior of initially highly inclined circumstellar discs that are not supplied with material from a circumbinary disc. A polar test particle in orbit around a primary star reaches an eccentricity of nearly unity during the first KL cycle, forcing the particle to become unbound or hit the central star. Similarly, initially highly inclined circumstellar discs around individual binary components can experience very strong KL oscillations. 
For an equal mass binary containing only a single circumstellar disc at high inclination between $70^{\circ}$ and $100^{\circ}$, the disc undergoes only a single KL oscillation before losing nearly all its mass for our given sink size. Some of the disc mass is transferred to the companion star to form a low inclination disc that does not undergo KL oscillations. These results suggest that such high inclinations of discs are short-lived due to enhanced dissipation from shocks that leads to tilt evolution on short timescales. In contrast, discs that are highly inclined but are not subject to KL oscillations would undergo much slower evolution. In particular, a polar disc would not precess (see e.g., equation (\ref{omnonKL})) and therefore not warp. The disc would then not be subject to torques that act to change its inclination. In this work, and from \cite{Smallwood2021}, we showed that the continuous accretion of material from the circumbinary disc allows the effects of KL oscillations on circumstellar discs to be much longer-lived. In this process, the circumbinary material is continuously delivered with a high inclination to the lower inclination circumstellar discs. We found that the simulation resolution is important for modeling the longevity of the KL oscillations. We find longer-lived KL oscillations that show signs of mild weakening in time, possibly due to the resolution (e.g., Figure \ref{fig::i60}). The balance between the accretion timescale and the KL timescale determines whether the oscillations are sustained or damp in time. If the circumstellar disc material were to accrete on a much shorter timescale than the KL oscillation period, we would not expect the KL oscillations to operate. We found that with increasing resolution, the accretion timescale becomes comparable to the KL timescale, favoring sustained KL oscillations. Planet formation is thought to still occur in non-zero eccentricity discs \citep{Silsbee2021}.
In the case of {\it S}-type planets (planets orbiting one of the stellar companions in a binary), gravitational perturbations from a stellar companion on an eccentric orbit and from an eccentric disc increase planetesimal eccentricities, leading to collisional fragmentation, rather than growth, of planetesimals. However, \cite{Rafikov2015} analysed the planetesimal motion in eccentric protoplanetary discs when the planetesimals were affected by gas drag and disc gravity. They found that the planetesimals could withstand collisional fragmentation and erosion, thereby providing a pathway to forming planetary cores by coagulation in a binary. It is not clear how those results carry over to the case of highly eccentric discs undergoing KL oscillations. However, the formation of nearly polar circumstellar discs in this work may give rise to the formation of nearly polar planets that become Kozai-unstable. Planet formation in a polar circumstellar disc requires the disc to last for a sufficiently long time. We speculate that this is possible provided that the disc is continuously accreting material in a polar configuration. Observations of misaligned planetary systems show a preference for nearly polar orbits with true obliquities $\psi$ in the range $\psi =80^\circ - 125^\circ$ \citep{Albrecht2021,Dawson2021}. For example, two observed ultra-short-period hot Jupiters in polar orbits around A-type stars are KELT-9b \citep{Ahlers2020b} and MASCARA-4b \citep{Ahlers2020a}. The majority of planets studied by \cite{Albrecht2021} were hot Jupiters, since the measurements for these types of planets are more precise. However, a few warm Neptunes with polar orbits were observed, including HAT-P-11b \citep{Sanchis-Ojedda2011}, GJ 436b \citep{Bourrier2018,Bourrier2022}, HD 3167c \citep{Dalal2019,Bourrier2021}, and WASP-107b \citep{Dai2017,Rubenzahl2021}. A more recent warm Neptune, GJ 3470b, is also observed to be on a polar orbit \citep{Stefansson2022}.
\section*{Acknowledgements} We thank the anonymous reviewer for helpful suggestions that positively impacted the work. We thank Daniel Price for providing the {\sc phantom} code for SPH simulations and acknowledge the use of {\sc splash} \citep{Price2007} for the rendering of the figures. Computer support was provided by UNLV's National Supercomputing Center. We acknowledge support from NASA XRP grants 80NSSC19K0443 and 80NSSC21K0395. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. SHL thanks the Simons Foundation for support during a visit to the Flatiron Institute. \section*{Data Availability} The data supporting the plots within this article are available on reasonable request to the corresponding author. Public versions of the {\sc phantom}, {\sc splash}, and {\sc mercury} codes are available at \url{https://github.com/danieljprice/phantom}, \url{http://users.monash.edu.au/~dprice/splash/download.html}, and \url{https://github.com/4xxi/mercury}, respectively. \bibliographystyle{mnras}
\section{Introduction} Functional Reverse Engineering (RE) aims to analyze gate-level netlists that have been synthesized from a Register-Transfer Level (RTL) description of an Integrated Circuit (IC) design. A gate-level netlist contains information about the individual gates of a design and how they are interconnected to implement certain functionality. A description of which functionality is actually implemented, however, is no longer present. It is the goal of functional RE to retrieve this information and reconstruct a high-level description of the functionality within the netlist~\cite{bsim:2014}. While the retrieval of a high-level description is challenging in itself, it becomes even more challenging when analyzing approximate circuits. Approximate Computing (AxC) achieves significant advantages over traditional computing concerning circuit area and power efficiency in scenarios where some loss of quality in the computed result can be tolerated~\cite{jie:ETS:2013}, for instance, in deep learning~\cite{zervakis:TC:2022, abreu:TCAS-I:2022} or media processing~\cite{paim:TCSVT:2020} such as audio, video, or image compression, where an exact computation is often unnecessary due to the limited perception capability of humans~\cite{Venkatesan:ICCAD:2011}. Despite the widespread employment of AxC, the research community has not yet investigated its security aspects. To the best of our knowledge, we are the first to evaluate the resilience of AxC to functional RE. In this work, we show that existing functional RE methods fail to reverse engineer approximate circuits. In the following, we discuss the limitations of existing methods, which are also summarized in Table~\ref{tab:comparison}.
\vspace{-0.5em} \subsection{Motivation and Research Challenges} \textbf{Golden Design Requirement:} Traditional functional RE methods that are based on template matching~\cite{10.1145/3193121,wordrev:2013,patternmining:2012,circuitUnderstanding:2014} have shown great potential in unveiling the functionality of gate-level netlists. However, these methods work by identifying sub-circuits and matching them against a \textit{``golden''} library of components known a priori. This makes them prone to erroneous classification caused by variations in the implementation of a module (i.e., structural or functional variations). With the rise of AxC, these methods fall short due to the vast design space that is opened up by the different possible methods of approximation. To investigate this, we employed the \textit{bsim} tool~\cite{bsim:2014}, one of the state-of-the-art algorithmic RE methods, on selected exact and approximate adder circuits. In our experiments, we observed a mismatch between the types of sub-circuits recovered from the exact and the approximate implementations, i.e., they are not mapped to the same functionality, as Figure~\ref{fig:bsim} illustrates. \textit{This further highlights the fact that template matching-based approaches are not suitable for functional RE of AxCs.} \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figures/fig1.pdf} \caption{Number of identified sub-circuits by bsim~\cite{bsim:2014} in an exact 16-bit adder and an approximate 16-bit adder circuit.} \label{fig:bsim} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figures/fig2.pdf} \caption{Node-level classification accuracy of GNN-RE~\cite{GNNRE} on the approximate adders from the EvoApprox~\cite{EvoApprox:2017} dataset. Approximation aggressiveness increases along the X-axis (drop in normalized area).
GNN-RE classification accuracy drops with the increase in approximation aggressiveness.} \label{fig:gnn:performance} \end{figure} \textbf{Low Classification Performance:} Machine Learning (ML)-based approaches that employ Convolutional Neural Networks (CNNs)~\cite{circuitrecognition:2019:DATE,circuitrecognition:2017:HOST} as well as Graph Neural Networks (GNNs)~\cite{GNNRE} have been introduced for the task of functional RE~\cite{10.1145/3464959}. They promise to be more resilient against variations due to their generalization capabilities. \textit{However, their performance in the presence of AxC has not yet been evaluated}. Therefore, we evaluated the state-of-the-art GNN-RE~\cite{GNNRE} on the task of classifying approximate adder circuits. While its classification accuracy for exact adder circuits is $98\%$ on average, for approximate adder circuits it reports accuracies as low as $0\%$. Figure~\ref{fig:gnn:performance} further demonstrates this loss of performance and shows the relation between the aggressiveness of the approximation and the corresponding classification accuracy. The higher the aggressiveness of the approximation, the smaller individual circuits become and the more area is saved in the final integrated circuit~\cite{EvoApprox:2017}. At the same time, the loss in GNN classification accuracy grows with increasing approximation aggressiveness. \textbf{Research Challenges:} The discussion and experimental analysis above demonstrate that performing functional RE on approximate circuits is still an open research problem that imposes the following key research challenges. \begin{enumerate} \item \textit{Handling inexact implementations:} Approximate circuits differ from their exact counterparts in terms of structure and functionality. A technique that is capable of generalizing to unseen approximated circuits (i.e., new structures) is required.
\item \textit{Handling different approximation types:} Different methods of approximation are possible for any given circuit. Handcrafted approximation methods can produce AxCs that structurally differ significantly from \textit{``black-box''} circuits that are generated using, for instance, evolutionary algorithms, such as those in the EvoApprox~\cite{EvoApprox:2017} dataset (more details in Section~\ref{sec:background:AC}). Additionally, the aggressiveness of the approximation influences implementation details, further broadening the space that must be explored. \textit{Thus, a method that can generalize to different approximation types and levels is desired.} \end{enumerate} \vspace{-0.7em} \subsection{Our Novel Concept and Contributions} To address the above challenges, we propose the \textit{AppGNN} platform, which extends GNN-based functional RE to allow for the accurate classification of approximated circuits, all while training on exact circuits only. Based on the fact that GNN-based RE relies on the structural properties of a circuit, we implement a novel graph-based sampling approach in AppGNN that can mimic a generic approximation of any design, in terms of structure, requiring zero knowledge of the targeted approximation type and its aggressiveness. Our platform operates on flattened (i.e., without hierarchy) gate-level netlists, automatically identifies the boundaries between sub-circuits, and classifies the sub-circuits based on their functionalities, whether the designs are approximate or not (see Figure~\ref{fig:motivational}). \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figures/fig3.pdf} \caption{AppGNN approximation-aware node-level classification.
Individual nodes in a given gate-level netlist are classified according to their role in a sub-circuit.} \vspace{-0.5em} \label{fig:motivational} \end{figure} The novel contributions of this work are as follows: \\ (1) A \textbf{comprehensive security analysis (Section~\ref{sec:analysis})} is provided. To the best of our knowledge, we are the first to investigate the resilience of AxC to functional RE. \\ (2) A \textbf{GNN-based functional RE platform (Section~\ref{sec:representation})} is developed. Our AppGNN approach does not rely on the exact matching of sub-circuits to a preexisting library, making it much more flexible in classifying AxCs. \\ (3) A novel \textbf{graph-based sampling method (Section~\ref{sec:graph_sampling})} is proposed. Our AppGNN methodology mimics common adder approximation methods on the graph level in order to improve the classification accuracy. \\ (4) {Our AppGNN framework is publicly available to the scientific community at: \url{https://github.com/ML-CAD/AppGNN}} \textbf{Key results:} We perform an extensive experimental evaluation of AppGNN and GNN-RE~\cite{GNNRE} on various types of approximate circuits. We show that GNN-RE fails to reliably classify approximate circuits and that this failure is proportional to the aggressiveness of the approximation. To this end, we evaluate the state-of-the-art GNN-RE and our AppGNN on the EvoApprox approximate adder and multiplier circuits~\cite{EvoApprox:2017}, as well as various bit-width variations of the Almost Correct Adder (ACA)~\cite{verma:DATE:2008}, Error-Tolerant Adder I (ETA-I)~\cite{zhu:TVLSI:2010}, Lower-part OR Adder (LOA)~\cite{mahdiani:TCAS-I:2010}, Lower-part Copy Adder (LCA)~\cite{gupta:TCAD:2013}, Lower-part Truncation Adder (LTA), Leading one Bit based Approximate (LoBA) multiplier~\cite{garg:JET:2020} and Rounding-based Approximate (RoBA) multiplier~\cite{reza:TVLSI:2017}, all of which are described in detail in Section~\ref{sec:background:AC}.
Over all these extensive datasets, our approach achieves an improvement in the classification of approximate adders and multipliers of up to $28$ percentage points compared to GNN-RE, reaching a classification accuracy of up to $100\%$ for some circuits. \section{Background and Related Work} \label{sec:background} In this section, we present a brief introduction to AxC along with the datasets used in this work. We also explain the underlying principles of GNNs and functional RE. \vspace{-0.6em} \subsection{Approximate Computing (AxC)} \label{sec:background:AC} AxC is established as a new paradigm that boosts design efficiency by exploiting the intrinsic error resilience of several applications such as signal, image, and video processing~\cite{paim:TCSVT:2020} and ML~\cite{zervakis:TC:2022, abreu:TCAS-I:2022}. Driven by the high potential for energy savings, designing approximate functional units has attracted significant research interest~\cite{zervakis:ASPDAC:2021}. In the following, we review the approximate adder and multiplier functional units that we employ in this work as a case study. It is noteworthy that our work is not limited to a certain type of approximate circuit. \vspace{-1em} \subsubsection{Approximate Adders (AxAs)} AxAs can generally be classified as (i) Lower-Part Adders (LPAs), which produce frequent low-magnitude errors, (ii) Block-based Speculative Adders (BSAs), which produce infrequent high-magnitude errors, and (iii) Evolutionary Approximate (EvoApprox) circuits generated using evolutionary algorithms. \noindent\textbf{Lower-Part Adders (LPAs).} LPAs split their operation into an exact part with the $w-k$ Most Significant Bits (MSBs) and an approximate part with the $k$ Least Significant Bits (LSBs), where $w$ is the bit-width and $k$ is the approximation level. The higher $k$, the larger the errors in the LPA class.
In this work, we consider the following LPAs: \begin{itemize}[noitemsep, topsep=0.1cm,leftmargin=0cm,itemindent=*] \item LTA truncates the $k$ LSBs of the output to logic 0. \item LCA~\cite{gupta:TCAD:2013} copies the $k$ LSBs of an operand to the $k$ output LSBs. \item LOA~\cite{mahdiani:TCAS-I:2010} applies a bitwise OR to the $k$ LSBs of the operands and assigns the result to the $k$ output LSBs. \item ETA-I~\cite{zhu:TVLSI:2010} performs a carry-less sum with a bitwise XOR of the operands (i.e., a propagate operation). To compensate for the missing carry, if a carry-generate (i.e., a bitwise AND of the operands) is true in a bit of the approximate part, the output LSBs below it are set to 1. An OR-logic chain propagates the set-to-one command from the generate position down to the LSB. \end{itemize} \noindent\textbf{Block-based Speculative Adders (BSAs).} BSAs split their operation into blocks of width $m$, where $m$ is related to the approximation level. The higher $m$, the lower both the errors and the benefits of the BSA class. In this paper, we consider the following BSA. ACA~\cite{verma:DATE:2008} implements $m$-bit overlapping blocks to speculate the carry operation. ACA generates exact output values for the first $m$ LSBs. Each output bit from position $m$ up to the MSB is independently generated by an overlapping adder block that speculates on $m$ bits of the operands. \vspace{-0.8em} \subsubsection{Approximate Multipliers (AxMs)} AxMs are commonly designed by (i) mathematically refactoring the multiplication to eliminate parts of its equation, (ii) detecting the Leading One Bit (LOB) position of the operands to discard the hardware that computes unnecessary leading zeros, and (iii) truncating part of the LSBs. We employ the following multipliers, which cover these strategies, as case studies in this work.
\begin{itemize} [noitemsep, topsep=0.1cm,leftmargin=0cm,itemindent=*] \item The RoBA multiplier~\cite{reza:TVLSI:2017} simplifies the multiplication process by using numbers equal to powers of two ($2^n$). The multiplication is factored as $A\times B = (A_r - A) (B_r -B) + A_r B + A B_r - A_r B_r $. Since $A_r$ and $B_r$ are powers of two, the $A_r B $, $A_rB_r $ and $A B_r $ sub-expressions reduce to simple shift operations, and the $(A_r - A)(B_r - B)$ term is dropped. Thus, the multiplication is simplified to $A \times B\approx A_r B + A B_r - A_r B_r$. A sweep over the operand bits finds the value of $A_r$ nearest to $A$, and likewise for $B_r$. $A_{r}[i]$ is logic 1 in two cases. In the first case, $A[i]$ is logic 1, all the bits to its left are logic 0, and $A[i-1]$ is logic 0. In the second case, $A[i]$ and all the bits to its left are logic 0, while $A[i-1]$ and $A[i-2]$ are both logic 1. The circuit derives $B_r$ by the same process as $A_r$. \item The LoBA multiplier~\cite{garg:JET:2020} splits the multiplication into smaller fixed-width multiplier blocks. It identifies the LOB position and defines two new $k$-bit operands $A_{kh}$ and $B_{kh}$, which are multiplied and shifted left based on the LOB position. It multiplies the higher $k$-bit part of the operands starting from the LOB position, suppressing leading zeros and truncating the lower part of the operands. Therefore, LoBA reduces the length of the multiplication to just $k$ bits. We refer to the LoBA0 configuration in~\cite{garg:JET:2020} as LoBA. \end{itemize} \vspace{0.5em} \subsubsection{Evolutionary-based Approximate (EvoApprox)} EvoApprox adder and multiplier circuits are automatically generated by a genetic algorithm, a method inspired by natural selection that mimics biological evolution~\cite{EvoApprox:2017}. EvoApprox circuits are created through an extensive design-space exploration that evolves strong candidates and discards weak generations of circuits under a given error metric. We employ the EvoApprox library made available online by~\cite{EvoApprox:2017}.
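To make the approximation schemes above concrete, the following Python sketch models the LPA adders and the RoBA rounding step behaviourally. It assumes unsigned integer operands; the function names and example values are ours, LOA's optional carry-in from the lower to the upper part is omitted (as in the description above), and RoBA's sign and zero handling is left out for brevity.

```python
# Behavioural models of the LPA adders and the RoBA rounding step
# described above (unsigned operands; k = number of approximated LSBs).

def lta_add(a, b, k):
    """LTA: exact sum of the upper parts; the k output LSBs are forced to 0."""
    return ((a >> k) + (b >> k)) << k

def lca_add(a, b, k):
    """LCA: the k LSBs of one operand (here a) are copied to the output."""
    return lta_add(a, b, k) | (a & ((1 << k) - 1))

def loa_add(a, b, k):
    """LOA: bitwise OR of the operands' k LSBs forms the output LSBs
    (the carry-in from the lower part is omitted, as in the text above)."""
    return lta_add(a, b, k) | ((a | b) & ((1 << k) - 1))

def eta1_add(a, b, k):
    """ETA-I: XOR (propagate) in the lower part; on the first carry-generate,
    scanning the lower part from its MSB, that bit and all bits below it
    are set to 1."""
    lower = 0
    for i in range(k - 1, -1, -1):
        if (a >> i) & (b >> i) & 1:         # generate: both operand bits are 1
            lower |= (1 << (i + 1)) - 1     # set bits i..0 to logic 1
            break
        lower |= (((a ^ b) >> i) & 1) << i  # propagate: XOR of the operand bits
    return lta_add(a, b, k) | lower

def round_pow2(x):
    """RoBA rounding of x > 0 to the nearest power of two (ties round up)."""
    hi = 1 << (x.bit_length() - 1)          # largest power of two <= x
    return hi if x - hi < 2 * hi - x else 2 * hi

def roba_mul(a, b):
    """RoBA: A*B ~ Ar*B + A*Br - Ar*Br; since Ar and Br are powers of two,
    each partial product is just a shift in hardware."""
    ar, br = round_pow2(a), round_pow2(b)
    return ar * b + a * br - ar * br

# For a = 10, b = 6, k = 2 the exact sum is 16, while LTA -> 12,
# LCA -> 14, LOA -> 14 and ETA-I -> 15: the characteristic frequent but
# low-magnitude LPA errors.
```

For instance, `roba_mul(7, 3)` returns 20 against the exact 21, since both operands round up (7 to 8, 3 to 4) and the dropped $(A_r - A)(B_r - B)$ term absorbs the error.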
\begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{figures/fig4.pdf} \caption{AppGNN workflow. Exact gate-level netlists are transformed into a graph representation. Node sampling is performed on the graph level before the sampled graph is added to the training dataset and the GNN performs training.} \label{fig:flow} \end{figure*} \vspace{-3mm} \subsection{Graph Neural Networks (GNNs)} \label{sec:GNN} GNNs perform graph-representation learning on graph-structured data and generate a vector representation (i.e., embedding) for each node in a given graph to be used for a desired task such as node classification. The embeddings are generated based on the input features of the nodes and the underlying graph structure, so that similar nodes in the graph are close in the embedding space. Let $G(V, E)$ denote a graph, where $V$ represents its set of nodes and $E$ represents its set of edges. Each node in the graph $v \in V$ is initialized with a feature vector $x_v$ that captures its properties. In GNNs, an $Aggregate$ function typically collects the node information from the direct neighborhood of node $v$, denoted as $N(v)$. An $Update$ function updates the current embedding of node $v$ by combining its previous state with the aggregated information. GNNs run such a \textit{neighborhood aggregation} procedure for $L$ rounds (increasing the depth of the aggregated information)~\cite{kipf2016semi}. GNNs mainly differ in the $Aggregate$ and $Update$ functions used. The Graph Attention Network (GAT)~\cite{velivckovic2017graph} measures the importance (weight) of the edges during the aggregation phase.
It employs a multi-head attention mechanism with $K$ heads, in which layer $l-1$'s information propagates to layer $l$ as follows: \begin{equation} {h}_{v}^{l}= \mathop{\Big\Vert}_{k=1}^{K} \sigma\left( \sum_{u\in N(v)} \alpha_{u,v}^k {W}^k {h}_{u}^{l-1} \right) \end{equation} \begin{equation} \alpha_{u,v}^k=\operatorname*{softmax}_{u\in N(v)} \left( \mathrm{LeakyReLU} \left( ({a}^k)^\intercal[{W}^k{h}_u \parallel {W}^k {h}_v] \right) \right) \end{equation} where $h_{v}^{l}$ denotes the generated embedding of node $v$ at the $l^{th}$ round, $h_{v}^{0}={x}_{v}$, $\parallel$ denotes concatenation, and $\sigma(\cdot)$ is a non-linear activation function (e.g., \texttt{ReLU}). $\alpha_{u,v}^k$ specifies the weighting factor (i.e., importance) of node $u$'s features for node $v$, which is computed by the attention mechanism ${a}^k$ and normalized over the neighborhood $N(v)$. The multi-head attention mechanism replicates the aggregation layer $K$ times, each replica having different trainable parameters ${W}^{k}$, and the outputs are feature-wise aggregated using a concatenation operation as described in Equation~(1). The final embedding vector of node $v$, after $L$ layers, is as follows: \begin{equation} {z}_{v}= {h}_{v}^L \end{equation} \subsection{Functional Reverse Engineering (RE)} Without access to the RTL description of a design, functional RE is one option to gain further detailed insight into the functions that are implemented inside a circuit. However, gate-level netlists are inherently difficult to analyze, since structural information about the function and boundaries of sub-circuits is usually omitted during the synthesis process. This leaves only the gate-level circuit description, i.e., the list of gates and their interconnections, to be analyzed. Therefore, automated algorithmic analysis tools, such as~\cite{bsim:2014}, have been introduced. Their functionality is based on partitioning the gate-level netlist by identifying replicated bitslices and, in turn, aggregating them into individual candidate modules.
Afterwards, using formal verification methods, these candidate modules are matched against a \textit{``golden''} component library containing reference circuits in order to infer their high-level functionality, e.g., adders, multipliers, or subtractors~\cite{bsim:2014,wordrev:2013,patternmining:2012,circuitUnderstanding:2014}. A shortcoming of pattern matching approaches like these is that they are extremely dependent on the quality and size of the \textit{``golden''} component library that is used. Due to this, components that differ from the ``standard'' implementation, for instance of an adder circuit, will not be classified correctly and may remain undetected~\cite{GNNRE}. Additionally, formal verification methods can be very demanding in terms of computing power, limiting their applicability. Therefore, other methods have been proposed in the literature. Recently, ML-based methods have shown great potential~\cite{GNNRE}, achieving high classification accuracy with only a one-time training effort. In GNN-RE, the gate-level netlist of a circuit is first transformed into a graph representation. The graph representation preserves the structure of the netlist, captures the features of each gate (for instance, gate type and number of inputs/outputs), and also encodes the neighborhood of each gate. This way, the graph representation encodes both the structural and functional attributes of each node and its surrounding circuitry. GNN-RE then employs a GNN that learns on the graph representation of the circuit to predict the sub-circuit each gate belongs to (i.e., a node classification task). In any GNN implementation, the quality of the model depends on the quality of the training dataset used as ground truth. By including a wide range of different implementations and bit-widths for each high-level component (e.g., adders and multipliers), the ML model can learn to generalize and become robust to variations.
Functional RE can be very helpful to various entities, such as IC designers verifying the correctness of a chip. For instance, using functional RE, Hardware Trojans (HTs) or Intellectual Property (IP) infringements in competitor products can be detected~\cite{circuitrecognition:2017:HOST,circuitrecognition:2019:ASPDAC,circuitrecognition:2019:DATE}. On the flip side, functional RE can potentially also be abused to gain insights into patented designs and steal IP~\cite{REofEmbeddedSystems:2011}. \section{Our Proposed GNN-based Reverse Engineering: AppGNN} \begin{figure}[!t] \centering \includegraphics[width=0.65\linewidth]{figures/graphs/fig5.pdf} \caption{The graph representation of two 12-bit adder circuits. (a) shows the graph of an exact 12-bit adder circuit, (b) displays an LCA-based 12-bit AxC adder circuit. The structure and the number of gates differ between the two versions.} \label{fig:approximation:3bit} \end{figure} In the following, we describe our concept of AppGNN. We first cover the assumptions that we make. Then, we explain the graph representation for our GNN model and the datasets that we use for training, and finally demonstrate our node sampling techniques. The overall workflow of AppGNN is illustrated in Figure~\ref{fig:flow}. We start in step \bettercircle{1} with a gate-level netlist, which is transformed into a graph (Section~\ref{sec:representation}) and abstracted in steps \bettercircle{2} and \bettercircle{3}. In step \bettercircle{4}, we perform node sampling using either random node sampling (Section~\ref{sec:random:sampling}) or leaf node sampling (Section~\ref{sec:leaf:sampling}). In the final step \bettercircle{5}, the sampled graph is added to the training dataset (Section~\ref{sec:dataset}).
\vspace{-0.5em} \subsection{Threat Model and Assumptions} We perform functional RE on AxCs under the following assumptions, which are consistent with prior work~\cite{bsim:2014,wordrev:2013,patternmining:2012,circuitUnderstanding:2014,circuitrecognition:2017:HOST,circuitrecognition:2019:ASPDAC,circuitrecognition:2019:DATE,GNNRE}. We assume that the gate-level netlist of the design that is being analyzed has been correctly retrieved, either by deriving it from the physical chip~\cite{stateOfTheArtRE:2009} or by other means (e.g., access to layout information). In particular, access to the RTL source code of the design is not available. We make no assumptions about the given netlists that are analyzed. In particular, we do not assume any knowledge about the used approximation technique or the aggressiveness of the approximation. In fact, we do not assume it to be an approximate circuit at all. \textit{This allows us to develop a generic approach and to operate on exact circuits as well as approximate ones}. \vspace{-0.5em} \subsection{Why do Approximate Circuits Appear to be Reverse-Engineering Resilient?} \label{sec:analysis} Approximate circuits can differ substantially from their exact counterparts in terms of their general graph structure, gate count and gate type. As an example, Figure~\ref{fig:approximation:3bit} illustrates the graph representation of two 12-bit adder circuits. In the figure, (a) displays the connectivity of the individual gates of an exact adder circuit, (b) displays this for an AxC adder that has been generated using LCA~\cite{gupta:TCAD:2013}. Six of the twelve primary outputs are approximated (copied directly from the input to the output). As the figure illustrates, the graph structure of both circuits and their node count is dissimilar. 
\begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figures/fig6.pdf} \caption{Histogram of the occurrence of individual gate types in a 12-bit exact adder implementation and an LCA-based 12-bit AxC adder circuit.} \label{fig:hist:gatetypes} \vspace{2mm} \end{figure} \begin{table}[!t] \caption{The datasets used in the training of AppGNN} \resizebox{0.45\textwidth}{!}{% \begin{tabular}{cccc} \hline \textbf{Datasets} & \textbf{\#Nodes} & \textbf{\#Circuits} & \textbf{Source} \\ \hline Add-Mul-Mix & 15,582 & 7 & \multirow{7}{*}{GNN-RE~\cite{GNNRE}}\\ Add-Mul-Mux & 21,602 & 6 & \\ Add-Mul-Combine & 14,288 & 6 & \\ Add-Mul-All & 51,472 & 19 & \\ Add-Mul-Comp & 15,898 & 6 & \\ Add-Mul-Sub & 18,206 & 6 & \\ Add-Mul-Comp-Sub & 19,151 & 6 & \\ \hline Adder 8bit & 469 & 8 & \multirow{4}{*}{This work (\textbf{AppGNN})}\\ Adder 9bit & 792 & 9 & \\ Adder 12bit & 981 & 9 & \\ Adder 16bit & 1314 & 9 & \\ \hline \label{tab:training:dataset} \end{tabular} } \vspace{-3mm} \end{table} In addition to the graph structure and gate count being different, the types of gates used in the two circuits vary as well, as Figure~\ref{fig:hist:gatetypes} shows. It can be seen from the figure that some gate types appear exclusively in one of the two circuits, such as \texttt{CLKBUF}, \texttt{OAI22} (Or-And-Invert), and \texttt{XOR2}, which only appear in the exact implementation. Conversely, \texttt{NAND3} is exclusively used in the approximate implementation. The remaining shared gate types are used with different frequencies in the two implementations. For example, \texttt{INV} gates are used $2.8$ times (20 vs. 7) and \texttt{NOR2} gates $2.25$ times (27 vs. 12) more frequently in the exact circuit compared to the approximate one. However, some similarity remains. The gate types \texttt{AND2}, \texttt{AOI2} (And-Or-Invert), \texttt{NOR3}, \texttt{OR2}, \texttt{OAI21}, and \texttt{XNOR2} appear with similar frequency in the two implementations.
Our experiments show that when we extend the original GNN-RE dataset of exact circuits, as displayed in Table~\ref{tab:training:dataset}, by another set of approximate circuits, the classification accuracy on approximate circuits as a whole can be improved. Specifically, we add the dataset of LTA adders (see Table~\ref{tab:approx:dataset}) to the training dataset. The resulting improvement in classification accuracy on the EvoApprox adders can be seen in Figure~\ref{fig:gnn-re-lta}. The average node-level classification accuracy on the EvoApprox adders dataset increases from $53.8$\% (original GNN-RE) to $89.6$\% (GNN-RE extended with the LTA dataset). The above analysis shows that once the GNN is exposed to approximated structures during training, the classification accuracy on AxCs can be improved. However, this setup requires the user to have access to AxC generation scripts to produce the required dataset, which may not be available. This analysis motivates us to extend the GNN training stage itself to handle AxCs while still training on exact circuits. In order to improve the classification accuracy of a GNN regardless of the dataset that is added to the training process, we propose a node sampling approach that mimics a generic approximation technique. Next, we discuss the main steps of our AppGNN framework, starting with the circuit-to-graph conversion.
\begin{table}[!t] \caption{Adders and multipliers in our evaluation dataset} \resizebox{0.4\textwidth}{!}{% \begin{tabular}{cccc} \hline \textbf{Evaluation dataset} & \textbf{Circuit type} & \textbf{\#Nodes} & \textbf{\#Circuits} \\ \hline ACA~\cite{verma:DATE:2008} & adder & 2618 & 28\\ \hline ETA-I~\cite{zhu:TVLSI:2010} & adder & 2283 & 28 \\ \hline LOA~\cite{mahdiani:TCAS-I:2010} & adder & 2210 & 28 \\ \hline LCA~\cite{gupta:TCAD:2013} & adder & 2282 & 28 \\ \hline LTA & adder & 2028 & 28 \\ \hline EvoApprox-Add~\cite{EvoApprox:2017} & adder & 14901 & 214 \\ \hline LoBA~\cite{garg:JET:2020} & multiplier & 43694 & 31 \\ \hline RoBA~\cite{reza:TVLSI:2017} & multiplier & 31077 & 5 \\ \hline EvoApprox-Mul~\cite{EvoApprox:2017} & multiplier & 78155 & 159 \\ \hline Exact circuit & adder & 467 & 5 \\ \hline Exact circuit & multiplier & 13701 & 6\\ \hline \label{tab:evaluation:dataset} \end{tabular} } \label{tab:approx:dataset} \vspace{-6mm} \end{table} \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figures/results/fig7.pdf} \caption{Node-level classification accuracy on the EvoApprox~\cite{EvoApprox:2017} adders dataset of GNN-RE and when the LTA dataset of approximate adders is included in the training.} \label{fig:gnn-re-lta} \vspace{-1mm} \end{figure} \vspace{-0.5em} \subsection{Graph Representation} \label{sec:representation} The flat netlist is first transformed into a directed graph representation, which retains a notion of order to be able to reason about successor and predecessor nodes. In the graph, each gate in the netlist corresponds to a specific node. Edges represent connections (wires) between individual gates/nodes. This transformation from a netlist to a graph will retain all structural information present in the netlist, e.g. the connectivity between individual gates will be represented using an adjacency matrix. In order to retain information about the individual gates, such as the gate type (e.g. 
\texttt{XOR}), whether gates are Primary Inputs (PI) or Primary Outputs (PO) or the input and output degree, each node will keep a reference to a feature vector $x$ with length $k$ describing these details. The value of $k$ is largely determined by the number of available gate types in the used technology library\footnote{In our experiments, a 14nm FinFET library containing 24 individual gates was used.}, in our case $k=24$. Figure~\ref{fig:featurevector} shows an example of a netlist that contains various gate types. In this example, node $g$ is a primary output (since it is a leaf node) which is captured in the first two fields of the feature vector $x_g$ (\textit{Ports} section in Figure~\ref{fig:featurevector}). The feature vector also contains information about the gate type of $g$ (\texttt{XNOR}) as well as the gate types and number of occurrences of other gates present in the two-hop neighborhood\footnote{The two-hop neighborhood of a node $g$ includes all nodes directly adjacent to $g$ and all nodes that in turn are adjacent to these nodes.} of $g$. In this example, the 2-hop neighborhood contains two \texttt{NAND2}, two \texttt{XNOR}, one \texttt{NOR} and one \texttt{INV} gate. This neighborhood information is stored in the following fields of $x_g$ (\textit{Neighborhood} section in Figure~\ref{fig:featurevector}). The last two fields (\textit{Structure} section in Figure~\ref{fig:featurevector}) capture the input (2) and output (1) degree of $g$. A feature matrix $X\in\mathbb{R}^{n\times k}$, with $n$ describing the total number of nodes, will aggregate the feature vectors of all nodes. The feature matrix $X$ is then standardized by removing the mean and scaling to unit variance. Such a representation of node features has been found to be efficient in other works~\cite{GNNRE,gnnunlock}. Next, we describe two node sampling methods that aim to mimic a generic approximation technique.
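To make the feature construction concrete, the following pure-Python sketch assembles a vector like $x_g$ for a toy netlist. The node names, the four-gate stand-in for the 24-gate library, and the exact field layout are illustrative assumptions, not the AppGNN implementation:

```python
from collections import Counter

# Toy netlist: edge (u, v) means gate u drives gate v. Gate types and the
# 4-gate "library" below are illustrative assumptions (the paper uses 24).
edges = [("u3", "u1"), ("u4", "u2"), ("u1", "g"), ("u2", "g")]
gate_type = {"u1": "NAND2", "u2": "XNOR", "u3": "NOR", "u4": "INV", "g": "XNOR"}
GATE_LIB = ["NAND2", "NOR", "XNOR", "INV"]

succ = {n: set() for n in gate_type}
pred = {n: set() for n in gate_type}
for u, v in edges:
    succ[u].add(v)
    pred[v].add(u)

def feature_vector(node):
    is_pi = int(not pred[node])   # no predecessors -> primary input
    is_po = int(not succ[node])   # no successors   -> primary output
    # Two-hop neighborhood: direct neighbors plus their neighbors.
    hop1 = pred[node] | succ[node]
    hop2 = set(hop1)
    for n in hop1:
        hop2 |= pred[n] | succ[n]
    hop2.discard(node)
    counts = Counter(gate_type[n] for n in hop2)
    one_hot = [int(gate_type[node] == t) for t in GATE_LIB]  # own gate type
    neigh = [counts.get(t, 0) for t in GATE_LIB]             # neighborhood
    # Ports, gate type, neighborhood counts, then in/out degree.
    return [is_pi, is_po] + one_hot + neigh + [len(pred[node]), len(succ[node])]

print(feature_vector("g"))  # [0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 2, 0]
```

Stacking such vectors row-wise for all nodes gives the feature matrix $X\in\mathbb{R}^{n\times k}$, which would then be standardized as described above.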
\begin{figure} \centering \includegraphics[width=0.75\linewidth]{figures/fig8.pdf} \caption{Feature vector $x_g$ of node $g$, adapted from~\cite{GNNRE}.} \label{fig:featurevector} \end{figure} \vspace{-3mm} \subsection{Node Sampling} \label{sec:graph_sampling} \label{sec:graph:sampling} As described in Section~\ref{sec:analysis}, the structure of approximate circuits can differ substantially from an exact implementation. However, there are typically fewer nodes and connections in an approximate circuit. Our approach aims to exploit this observation by mimicking a generic approximation constructed from an exact circuit. In the following, we describe two node sampling methods which remove individual nodes and all transitive inputs of each such node (the datapath of the node). The number of initially selected nodes (for removal) relates to the level of the approximation. Selecting a large number of initial nodes for removal thus corresponds to a high level of approximation, while selecting only a few initial nodes relates to a lower level of approximation. In the following examples, we demonstrate two approaches: (i) \textit{random node sampling} and (ii) \textit{leaf node sampling}. We illustrate these methods on a simple 3-bit adder circuit. Note that 3-bit adder circuits are not included in the actual AppGNN dataset and are used here for demonstration purposes only. \subsubsection{Random Node Sampling} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figures/fig9.pdf} \caption{The work flow of our random node sampling.} \label{fig:graphsampling:random} \end{figure} \label{sec:random:sampling} We select a number of nodes for removal based on the desired level of approximation. In this example, we randomly select a single node. We then identify the datapath of this node and remove all found nodes (including the initially selected node) from the graph. We now describe this process in detail, which is also displayed in Figure~\ref{fig:graphsampling:random}.
We start with a directed graph that is constructed from a gate-level netlist according to Section~\ref{sec:representation} (Step \bettercircle{1}). Then, we randomly select a node in the graph for removal. U17 (in red) is selected in this example (Step \bettercircle{2}). In order to mimic approximation, we identify the datapath of the selected node using Algorithm~\ref{algo:datapath}. The identified nodes, including the initially selected one, will be marked for removal (Step \bettercircle{3}). Finally, all found nodes are removed from the graph (Step \bettercircle{4}) using Algorithm~\ref{algo:graphsampling}. Depending on the chosen node, the remaining subgraphs can differ substantially. In the example shown in Figure~\ref{fig:graphsampling:random}, node U17 is selected and four additional nodes are identified as the datapath of U17. If node U20 had been chosen instead of U17, only this node would be removed, since its datapath is empty as it is a root node. \subsubsection{Leaf Node Sampling} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figures/fig10.pdf} \caption{The work flow of our leaf node sampling approach.} \label{fig:graphsampling:PO} \end{figure} \label{sec:leaf:sampling} In contrast to the random node sampling technique previously described, we will now explain our leaf node sampling technique. This method is closer to actual approximation since it takes the role of primary outputs (leaf nodes in the graph representation) into account. Depending on the desired level of approximation, this method will select a number of leaf nodes for removal. Leaf nodes in a graph are identified using Algorithm~\ref{algo:leafnodes}. The entire sampling process is displayed in Figure~\ref{fig:graphsampling:PO} and is described in the following. Similar to the random node sampling technique, we start with a directed graph that is constructed from a gate-level netlist according to Section~\ref{sec:representation} (Step \bettercircle{1}).
Next, all leaf nodes that are present in the graph are identified using Algorithm~\ref{algo:leafnodes} (Step \bettercircle{2}). Depending on the desired level of approximation, a number of identified leaf nodes are selected for removal (Step \bettercircle{3}). In this example, only one node (U19, in red) is selected. Thereafter, the datapath(s) of the previously selected leaf node(s) are identified using Algorithm~\ref{algo:datapath} and all found nodes are marked for removal. Finally, all previously marked nodes are removed from the graph (Step \bettercircle{4}) using Algorithm~\ref{algo:graphsampling}. Since leaf nodes (which correspond to primary outputs) have been removed, their datapath is replaced with a single new node. This ensures that the resulting graph has the same number of leaves as before and retains more of the original graph structure. The feature vector $x$ associated with this node will capture that this node is a primary input and a primary output (it redirects all input directly to the output), that the input and output degree are both one, and that the gate type is \texttt{BUF}. \input{algorithms/datapath} \vspace{-7.5mm} \input{algorithms/graphsampling} \vspace{-7.5mm} \input{algorithms/leafnodes} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/results/fig11.pdf} \caption{Classification accuracy of GNN-RE~\cite{GNNRE} and AppGNN with random node sampling and with leaf node sampling. The X-axis is the normalized circuit area. A small circuit area correlates with a high approximation aggressiveness, thus the level of approximation increases from left-to-right in each plot.} \label{fig:allscores} \end{figure*} \vspace{-1.5em} \subsection{GNN Model} We employ the fundamental GAT architecture described in Section~\ref{sec:GNN} to perform node classification. We further utilize the node sampling approach GraphSAINT~\cite{zeng2019graphsaint} to maintain scalability, as suggested in~\cite{GNNRE}.
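The two sampling primitives described above, datapath identification and leaf replacement by a \texttt{BUF} node, can be sketched in pure Python as follows. The toy graph, node names, and encoding are assumptions for illustration only:

```python
# Toy graph: edge (u, v) means gate u drives gate v; names are assumptions.
nodes = {"u1", "u2", "u3", "u4", "u5", "u6"}
edges = [("u1", "u3"), ("u2", "u3"), ("u3", "u5"), ("u4", "u5")]
pred = {n: set() for n in nodes}
for u, v in edges:
    pred[v].add(u)

def datapath(node):
    """Transitive predecessors (fan-in cone) of `node`, excluding `node`.
    For a root node (no predecessors) the datapath is empty."""
    cone, stack = set(), list(pred[node])
    while stack:
        n = stack.pop()
        if n not in cone:
            cone.add(n)
            stack.extend(pred[n])
    return cone

def leaf_sample(leaf):
    """Leaf node sampling: drop `leaf` and its datapath, then insert a BUF
    node (a PI-and-PO node with in/out degree one) to preserve the leaf count."""
    kept = nodes - (datapath(leaf) | {leaf})
    kept.add("BUF_" + leaf)
    return kept

print(sorted(datapath("u3")))     # ['u1', 'u2']
print(sorted(leaf_sample("u5")))  # ['BUF_u5', 'u6']
```

Random node sampling uses the same `datapath` helper, but on a uniformly chosen node instead of a leaf and without inserting a replacement node.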
The GraphSAINT approach samples sub-graphs from the original input graph, and a full GNN is constructed for each extracted sub-graph. We employ two GAT aggregation layers with a hidden dimension of $256$ each and a ReLU activation function. For the attention mechanism, $K=8$. The final layer is a fully-connected layer of size $5$ with a Softmax activation function for classification. The GNN is trained using the \textit{Adam optimization algorithm}, with an initial learning rate of $0.01$ and a dropout rate of $0.1$. \subsection{Dataset Generation} \label{sec:dataset} In order to improve the classification accuracy of the GNN on approximate circuits, we extend the training dataset. We do so by adding 35 exact adder circuits of varying bit-widths (see Table~\ref{tab:training:dataset}). After these circuits are converted into their graph representation, as described in Section~\ref{sec:representation}, they are passed through the node sampling stage, as described in Section~\ref{sec:graph:sampling}. During this stage, either random sampling or leaf node sampling is performed. For each bit-width of the exact adders (8, 9, 12, and 16), 9 sampled graphs (8 for the 8-bit adder) with increasing approximation aggressiveness are retrieved. This means that 1 to 9 nodes of an original graph are selected for removal (including their datapath). An overview of the training dataset used in AppGNN can be found in Table~\ref{tab:training:dataset}. For the training process, this dataset is split as follows: $65\%$ for training, $25\%$ for validation and $15\%$ for testing. In the following, we show the evaluation of our AppGNN approach. \section{Experimental Overview} We evaluate AppGNN on the AxC dataset shown in Table~\ref{tab:approx:dataset}. We compare the node-level classification accuracy of AppGNN to that of GNN-RE~\cite{GNNRE}, which serves as a baseline.
All datasets consist of flat gate-level netlists that are synthesized from their RTL description using Synopsys Design Compiler~\cite{synopsys:designcompiler} with the \texttt{compile\_ultra} directive while performing area optimization. The designs are synthesized using a 14nm FinFET technology library. The conversion from netlist to the graph representation is implemented in \textit{Perl} and \textit{Python3}. Our node sampling methods are also implemented in \textit{Python3}. Training is carried out on a single computer with 16 cores (Intel Core i7-10700 CPU @ 2.90GHz) and 32GB of DDR4 RAM. We employ the random walk sampler of GraphSAINT with a walk depth of $2$ and $3000$ root nodes. We run training for $100$ epochs. The GNN model is evaluated on the graphs in the validation set after each epoch. The best-performing model on the validation set is restored at the end of training and subsequently used to evaluate the GNN on the testing set. \vspace{-0.3cm} \subsection{Results} Our results are presented in the following. Figure~\ref{fig:allscores} shows the classification accuracy of AppGNN and GNN-RE~\cite{GNNRE}, which serves as a baseline, for all classes of AxC adders in Table~\ref{tab:approx:dataset}. In each figure, the X-axis displays the normalized circuit area and thus is related to the level of approximation (which increases from left-to-right on the X-axis). Figure~\ref{fig:accuracy:aprx:adders} summarizes these results and displays the average classification accuracy of GNN-RE and AppGNN. \textbf{Comparison to GNN-RE.} As Figure~\ref{fig:allscores} demonstrates, AppGNN outperforms GNN-RE in all benchmarks, regardless of the sampling method used or the type of approximation, except for very high levels of approximation in LOA-based circuits (Figure~\ref{fig:allscores} (c)) when using random node sampling, where AppGNN performs circa 8 percentage points worse than GNN-RE.
However, AppGNN with leaf node sampling still performs better than GNN-RE in this benchmark. It is noteworthy that, in the case of the ACA (Figure~\ref{fig:allscores}(a)) and EvoApprox adders (Figure~\ref{fig:allscores}(f)) benchmarks, AppGNN with leaf node sampling performs better with an increase of approximation aggressiveness. On average, AppGNN outperforms GNN-RE on all types of approximate adder circuits, as Figure~\ref{fig:accuracy:aprx:adders} shows. The smallest gains in classification accuracy are achieved on the ACA-based circuits with $8.8$ percentage points compared to GNN-RE, which already performs well on this type of circuit. The largest gains can be seen in the EvoApprox adder dataset. Here, AppGNN outperforms GNN-RE by 28 percentage points. \textbf{Effect of the AxA Type.} With up to $91\%$ classification accuracy, AppGNN performs best on LTA-based circuits. This is not a surprising finding; LTA- and LCA-based circuits are implemented by either truncating $m$ input bits of the operands (LTA) and feeding the output with logical zeroes, or by copying (LCA) $m$ bits of an input operand to the output. In our graph representation, this architecture leads to many isolated nodes (nodes that are both PIs and POs and have no adjacent nodes). Our node sampling methods produce graphs that are structurally similar to these, thus allowing AppGNN to classify them more accurately. Simultaneously, the classification accuracy of AppGNN on exact adder circuits is comparable to that of GNN-RE, as Figure~\ref{fig:accuracy:aprx:adders} shows. Only a negligible difference of up to $1.3$ percentage points is observed. \textbf{Effect of the AxM Type.} Figure~\ref{fig:accuracy:aprx:multipliers} illustrates the classification accuracy of GNN-RE and AppGNN on multiplier circuits. In general, GNN-RE does not suffer the same accuracy loss when classifying approximate multipliers as seen in approximate adders.
Although AppGNN was not specifically designed to improve the classification accuracy on approximate multipliers, it still manages to outperform GNN-RE by 3.7 percentage points on RoBA-based multipliers, as Figure~\ref{fig:accuracy:aprx:multipliers} shows. On the LoBA dataset, GNN-RE and AppGNN are on a par with each other; only a negligible difference in classification accuracy can be observed. However, in the case of EvoApprox multipliers, AppGNN suffers significantly. Here, the approximation-unaware GNN-RE actually outperforms AppGNN by 14.2 percentage points. Lastly, the evaluation shows that AppGNN performs similarly to GNN-RE when classifying accurate multiplier circuits. A difference of up to 1.4 percentage points can be observed. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/results/fig12.pdf} \caption{Average classification accuracy of GNN-RE~\cite{GNNRE} and AppGNN with random node sampling and with leaf node sampling for all evaluated classes of adder circuits.} \label{fig:accuracy:aprx:adders} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/results/fig13.pdf} \caption{Average classification accuracy of GNN-RE~\cite{GNNRE} and AppGNN with random node sampling and with leaf node sampling for all evaluated classes of multiplier circuits.} \label{fig:accuracy:aprx:multipliers} \end{figure} \textbf{Effect of the Aggressiveness of the Approximation.} As Figure~\ref{fig:allscores} demonstrates, the classification accuracy of GNN-RE drops significantly with an increase of approximation aggressiveness, with the exception of ACA- and LOA-based circuits. Although a similar trend is observable in the results of AppGNN, it is typically much weaker or even reversed, e.g., for the EvoApprox adders (Figure~\ref{fig:allscores}(f)). \section{Conclusion} In this work, we investigated the impact of Approximate Computing (AxC) on functional Reverse Engineering (RE).
To the best of our knowledge, this is the first time such an investigation is performed. We demonstrated that traditional means of functional RE are insufficient in the context of AxC. Although Machine Learning (ML)-based methods can handle some variation in the circuit and still provide reasonable results, with an increase of approximation aggressiveness their classification accuracy declines rapidly. We proposed a method for approximation-aware functional RE using Graph Neural Networks (GNNs). Our presented graph sampling based methods aim to mimic the structure of a generic approximation method in order to make a GNN aware of approximation. We evaluated our approach on a wide range of different datasets to show the improved classification accuracy of AppGNN against state-of-the-art GNN-RE. Our extensive evaluation demonstrated how AppGNN outperforms the GNN-RE method in almost all cases. \vspace{-0.1cm} \section*{Acknowledgements} This work was supported in part by the German Research Foundation (DFG) through the Project ‘‘Approximate Computing aCROss the System Stack (ACCROSS)’’ AM 534/3-1, under Grant 428566201. Besides, this work was also supported by the Center for Cyber Security (CCS) at New York University Abu Dhabi (NYUAD). \balance
\section{Introduction} Regression methods are helpful for analysing dependencies between a variable, named the response, and one or several explanatory covariates. This is one of the reasons why they are widely used and studied in statistical analysis \cite{BookHastieTibshFriedman}. Many models have been introduced, including the well-known linear regression for a continuous response variable and logistic regression for a binary response variable. Indeed, many data sets involve this last situation, such as the occurrence of a disease in medicine or voting intentions in econometrics. Another type of data is nominal data (that is, unordered categorical data) like housing types or food choice of predators. The situation is a bit more complicated when the response is ordered categorical (ordinal), e.g. different stages of cancer, pain scales, place ratings on Google or data collected from surveys (0: never, 10: always). Logistic regression can naturally be extended to the case where the response is nominal. For such data, many authors \cite{BookAgrestiOrdinal,McCullagh1980, LiuAgresti2005,Suggala2017} provided models based on odds ratios such as cumulative link models, adjacent-categories logit models or continuation-ratio logit models. The choice of one of these models depends on the kind of problem. In this paper, we concentrate on a restricted but large spectrum of regression models including all regression models mentioned above.\\ Although prediction and interpretation are major challenges in regression, another important issue is to identify the influential explanatory covariates, that is, variable selection. Selection problems often arise in many fields, including biology \cite{WhuChenHastie}. For example, in microarray cancer diagnosis \cite{ZhuHastie2004}, a primary goal is to understand which genes are relevant.
For cost and time reasons, it can also be convenient for biologists to restrict their studies to a smaller subset of explanatory covariates (genes, bacteria populations...). Accordingly, the sparsity assumption (that is, a small number of relevant explanatory covariates) is frequently suitable and adequate, even crucial for interpretation. Indeed, with a large number of covariates, it is also useful for interpretation to determine a smaller subset of covariates that have the strongest effects. Besides, when the number of covariates is larger than the number of observations or when covariates are highly correlated, standard regression methods become inappropriate. Lasso penalisation, or $L_1$-penalisation, introduced by Tibshirani \cite{Tibshirani1997}, offers an attractive solution to these issues. It adds an $L_1$ penalty to the estimation of the regression coefficients in order to perform variable selection by optimising a convex criterion. The regularisation resulting from Lasso penalisation shrinks down to zero the coefficients of the explanatory covariates that have the lowest effects and leads to sparse solutions and more interpretable models, making Lasso one of the most popular penalisations \cite{BookHastie2015StatisticalSparsity,ZhaoYu2006,ParkHastie2007}. \\ Moreover, using the Lasso induces the critical choice of the penalty parameter, which controls the number of selected covariates. This choice is major because two close values of the penalty parameter can often lead to very different scientific conclusions. Many general techniques have been proposed in the literature, but they do not have the same purposes. For instance, $K$-fold cross validation emphasises prediction: the validation step involves computing the prediction error and aims at minimising it. Furthermore, cross validation is often quite greedy and tends to overfit the data \cite{Wasserman2009high}.
Other techniques, like StARS \cite{StARS}, can be adapted to a regression framework and aim at 'overselecting', that is, selecting a larger set of covariates which contains the relevant ones, allowing false positive detections. Some frameworks such as gene regulatory networks require this choice: indeed, false positive detections can then be eliminated by further biological experiments, whereas omitted interactions cannot be recovered after that. On the contrary, one can prefer selecting a set of covariates included in the set of true covariates to avoid false positive detections, that is, 'underselecting'. This constraint comes from the fact that after selection, the relevant covariates have to be studied by scientists through new experiments. But new experiments are generally expensive or time-consuming and it would be a waste to involve noisy covariates. In this paper, we focus on variable selection in this latter case. In line with our goals, we propose an intuitive and general method for automatic variable selection, inspired by the knockoffs idea of Barber and Candes \cite{CandesKnockoff,Candes2016knockoff}. This method uses a matrix of knockoffs of the covariates, built by swapping the rows of the matrix of covariates. This knockoffs matrix is thus random and aims at determining if a covariate belongs to the model using a decision rule based on change detection methods. One of the major advantages is that it can be performed in a wide range of regression frameworks, including when the number of covariates is much larger than the number of observations. We will see that our method does not lead to a choice of the penalty parameter. Nevertheless, it provides an order of importance on the covariates, which allows selecting covariates according to the target.\\ In this paper, we address the problem of variable selection in $L_1$-penalised regressions. Our goal is to determine which covariates are relevant and which are noisy.
We achieve it by proposing a new method of knockoffs type. The rest of the paper is organised as follows. In Section 2, we first introduce the background and describe the knockoffs method for variable selection. We also briefly describe our R package \texttt{kosel}, in which the revisited knockoffs method is implemented. In Section 3, we give many illustrations and results of our method on simulated data. Furthermore, we propose a way to exploit the randomness of our procedure. \section{Revisited knockoffs method} \subsection{Background} Suppose we have $p$ explanatory $\mathbb R$-valued covariates $\overrightarrow{X} := (X_1, X_2, \dots, X_p)$ and a response variable $Y$ linked with $\overrightarrow{X}$ by $m$ equations of the type: \begin{equation}\label{regression} f_k(\mu_k(Y|X)) = \alpha_k + \beta_1 X_1 + \ldots + \beta_p X_p,\ k = 1, \ldots, m, \end{equation} where $f_k$ is a deterministic function, $\mu_k(Y|X)$ a parameter of the distribution of $Y$ given $X$, and $\alpha_k, \beta_1, \ldots, \beta_p$ real coefficients. Note that the vector of regression coefficients $\boldsymbol \beta := (\beta_1, \ldots, \beta_p)$ does not depend on $k$.\\ This framework is quite general and encompasses many regression models such as generalised linear models \cite{BookAgrestiCategorical} (linear regression, logistic regression, Poisson regression, multinomial regression), ordinal logistic regression models \cite{BookAgrestiOrdinal} (cumulative logit models with proportional odds \cite{Simon1974,WilliamsGrizzle1972,AndersonPhilips1981}, adjacent-categories logit models, continuation-ratio logit models) or cumulative link models \cite{BookAgrestiOrdinal}. Indeed, for generalised linear models, $m = 1$, $\mu_1(Y|X) = \mathbb E(Y|X)$ and $f_1$ is the link function (identity, $\log$, $\text{logit}$...) of the corresponding model.
For ordinal logistic regression models, $f_k = \text{logit}$ and $\mu_k(Y|X) = \P(Y \leq k | X)$ (cumulative), $\mu_k(Y|X) = \P(Y = k | Y = k\text{ or } Y = k + 1, X)$ (adjacent), $\mu_k(Y|X) = \P(Y > k | Y \geq k, X)$ (continuation). Notice that these last models only allow identical effects of the covariates, which implies that the regression coefficients $\beta_i$ do not depend on the modality $k$ of the response variable $Y$. In particular, this framework includes models for many types of response variable such as binary, continuous, ordinal or categorical.\\ In this framework, covariates $X_l,\ l = 1, \ldots, p$ have to be linked to the response variable $Y$ through a linear expression so that the conditional dependence between $Y$ and $X_l$ given $X_1, \ldots, X_{l-1}, X_{l+1}, \ldots, X_p$ can be measured through the regression coefficient $\beta_l$. More precisely, $\beta_l = 0$ means that $X_l$ and $Y$ are independent conditionally on the other covariates $X_k,\ k = 1, \ldots, l-1, l+1, \ldots, p.$ We are thus interested in the nullity of the regression coefficients $\beta_l$ to select the relevant covariates. Moreover, we make a sparsity assumption, that is, a relatively small number of covariates play an important role. This implies that only a few covariates are relevant and thus, only a few regression coefficients $\beta_l$ are non-null. This sparsity assumption is convenient for scientists to restrict their studies to a smaller subset of covariates, namely in high-dimensional settings. Instead of checking the nullity of each coefficient $\beta_l$ by performing statistical tests, we add an $L_1$-penalisation on the coefficients $\boldsymbol{\beta}$ in the estimation of the coefficients of the model.
Coefficients are usually estimated by solving the optimisation problem: \begin{equation}\label{EstiClassique} \underset{(\boldsymbol{\alpha},\boldsymbol{\beta})}{\text{argmax}\ } L(\boldsymbol{\alpha}, \boldsymbol{\beta},\textbf{Y},\textbf{X}), \end{equation} where $L(\boldsymbol{\alpha}, \boldsymbol{\beta}, \textbf{Y}, \textbf{X})$ is a function of the coefficients relative to the model (like the log-likelihood), depending on the observations $\textbf{Y}$ and $\textbf{X}$ of the response variable $Y$ and the vector of covariates $\overrightarrow{X}$ respectively. Instead of estimating the coefficients by \eqref{EstiClassique}, we add a Lasso penalisation on the coefficient vector $\boldsymbol{\beta}$, which leads to solving the following optimisation problem: \begin{equation}\label{penaloptim} \underset{(\boldsymbol{\alpha},\boldsymbol{\beta})}{\text{argmax}\ } \big\{L(\boldsymbol{\alpha}, \boldsymbol{\beta}, \textbf{Y}, \textbf{X}) - \lambda ||\boldsymbol{\beta}||_1\big\}, \end{equation} where $\lambda > 0$ is the penalty parameter. \\ Usually, all penalisation methods require the choice of the (positive) penalty parameter, also referred to as the tuning or regularisation parameter. We then need to tune the penalty parameter $\lambda$ (involved in the optimisation problem \eqref{penaloptim}), which controls the number of selected covariates: the larger $\lambda$ is, the fewer the selected covariates are. On the contrary, values of $\lambda$ close to $0$ lead to the full model, that is, the model with all the covariates. We recall that our goal is to select only relevant covariates and thus, to avoid false positive detections (the wrongly detected covariates). \\ With regard to our problems and goals, we propose a new method, inspired by the knockoffs process used by Barber and Candes \cite{CandesKnockoff} in the linear Gaussian regression setting.
Actually, this method does not lead to a choice of the penalty parameter $\lambda$, but it puts the explanatory covariates in order from the most relevant to the least relevant. Furthermore, it suits any regression of the type presented in \eqref{regression}, including when the number $n$ of observations is smaller than the number $p$ of covariates. Obviously, in the linear Gaussian model, it is much more pertinent to use the procedure described in \cite{CandesKnockoff} because of its theoretical guarantees. Although their procedure initially required $n > p$, they subsequently extended it thanks to a preliminary screening step \cite{Candes2016knockoff}. In what follows, we present the principle of our revisited knockoffs method.\\ \subsection{Principle and generalities} Let $\textbf{X}$ denote the $n\times p$ matrix of the $n$ observations of the $p$-vector $\overrightarrow{X} = (X_1,\ldots,X_p)$ of covariates, called the design matrix. The principle, given in \cite{CandesKnockoff}, is to use a matrix $\tilde{\textbf{X}}$ of knockoffs (of the covariates $X_i$) whose covariance structure is similar to $\textbf{X}$ but which is independent from $\textbf{Y}$. The goal is to determine if a covariate $X_i$ is relevant by studying whether it enters the model before its knockoff $\tilde X_i$, that is, if $X_i$ enters the model for a larger value of the penalty parameter $\lambda$. Indeed, as the knockoff matrix is independent from $\textbf{Y}$, if a covariate enters the model after its knockoff, we can rightfully suspect that this covariate does not belong to the model.\\ We mainly differ from the method proposed by \cite{CandesKnockoff} in the construction of the knockoffs. In their paper, they propose a sophisticated construction of the knockoff filter using linear algebra tools.
This construction makes it possible to control the false discovery rate (FDR) --the expected fraction of false discoveries among all discoveries-- in the linear Gaussian model whenever there are at least as many observations as covariates. This difference in the construction of the knockoffs makes our method suitable for the setting $n < p$ and for a larger set of regression models. Nevertheless, theoretical guarantees about the control of the false discovery rate do not hold anymore.\\ We construct our knockoff matrix $\tilde{\textbf{X}}$ by randomly swapping the $n$ rows of the design matrix $\textbf{X}$. This way, the correlations between the knockoffs remain the same as between the original variables, but the knockoffs are not linked to the response $\textbf{Y}$. Note that this construction of the knockoffs matrix also makes the procedure random. Then, in the same way as \cite{CandesKnockoff}, we perform the regression of $\textbf{Y}$ on the $n \times 2p$ augmented matrix $[\textbf{X}, \tilde{\textbf{X}}]$, i.e. the columnwise concatenation of $\textbf{X}$ and $\tilde{\textbf{X}}$. Let us denote by $\boldsymbol{\hat\beta}(\lambda)$ the estimated regression coefficients of the $\lambda$-penalised regression of $\textbf{Y}$ on the augmented matrix $[\textbf{X}, \tilde{\textbf{X}}]$: \begin{equation*} \Big(\boldsymbol{\hat\alpha}(\lambda),\boldsymbol{\hat\beta}(\lambda)\Big) := \underset{(\boldsymbol{\alpha},\boldsymbol{\beta})}{\text{argmax}\ } \big\{L(\boldsymbol{\alpha}, \boldsymbol{\beta}, \textbf{Y}, [\textbf{X}, \tilde{\textbf{X}}]) - \lambda ||\boldsymbol{\beta}||_1\big\}. \end{equation*} For each variable of the augmented design, that is, for each covariate and its corresponding knockoff, we consider $T_i := \sup\ \{\lambda>0,\ \hat\beta_i(\lambda) \neq 0\},\ i \in \{1,\ldots,p,p+1,\ldots,2p\}$.
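As a toy illustration of this construction (a sketch in Python, not the \texttt{kosel} implementation, and with invented data), the knockoff matrix is obtained by permuting the rows of the design matrix, after which the penalised regression would be run on the augmented design:

```python
import random

# Toy design matrix X (n = 4 observations, p = 2 covariates); values are
# illustrative assumptions.
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]

def knockoff(X, rng):
    """Knockoff matrix: the rows of X in a random order, so the correlations
    between columns are preserved while any link with Y is broken."""
    rows = list(X)
    rng.shuffle(rows)
    return rows

rng = random.Random(7)       # seeded only to make the example reproducible
X_tilde = knockoff(X, rng)
# Augmented n x 2p design [X, X~] on which the penalised regression is run.
augmented = [x + xt for x, xt in zip(X, X_tilde)]
print(augmented)
```

Since the knockoff rows are a random permutation, the procedure itself is random, as noted above.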
The statistic $T_i$ corresponds to the largest value of $\lambda$ for which the covariate $X_i$, if $i \in \{1,\ldots,p\}$, or its knockoff $\tilde X_{i-p}$, if $i \in \{p+1,\ldots,2p\}$, first enters the model. At this stage, we hope that $T_i$ is large for the relevant covariates, that is, for $X_i, i \in \{1,\ldots,p\}$ such that $\beta_i \neq 0$, and small for the knockoff variables $X_i := \tilde X_{i-p},\ i \in \{p+1,\ldots,2p\}$ or for the noisy covariates $X_i,\ i \in \{1,\ldots,p\}$ such that $\beta_i = 0$. This yields a $2p$-vector $(T_1,...,T_p,\tilde T_1,...,\tilde T_p)$ where $\tilde T_i$ denotes $T_{i+p}$. Then, we consider, for all $i \in \{1,...,p\}$, $W_i := \max(T_i, \tilde T_i) \times \left\{ \begin{array}{ll} (+1) & \mbox{if } T_i > \tilde T_i \\ (-1) & \mbox{if } T_i \leq \tilde T_i \end{array} \right.$. The statistics $W_i$ allow us to determine if a covariate enters the model before or after its knockoff. A negative value of $W_i$ means that the covariate $X_i$ enters the model after its knockoff and we eliminate it. On the contrary, a positive value of $W_i$ means that the covariate $X_i$ enters the model before its knockoff and is more likely to belong to the model. But covariates $X_i$ whose statistic $W_i$ is positive are not necessarily relevant: we hope that $W_i$ is large for most of the relevant covariates and small for the other ones. Thus, we are interested in the largest positive values of the $p$-vector of statistics $W$, which moreover indicate that the corresponding covariate enters the model early, that is, for a large value of $\lambda$. The statistics $W_i$ in fact allow us to sort the covariates according to their importance: the larger $W_i$ is, the more relevant the associated covariate $X_i$ is.\\ This then implies defining a threshold $s$ for $W_i$ over which we will keep the corresponding covariates in the estimated model. On the whole, we will choose the estimated model $\hat S$ such that: \[ \hat S := \{X_i : W_i \geq s\}.
\] \subsection{Choice of the threshold} \label{Thresholds} The second major difference with Barber and Candes \cite{CandesKnockoff} lies in the choice of the threshold $s$. They provide a data-dependent threshold with attractive properties relative to the false discovery rate in the Gaussian setting. Unfortunately, these results do not hold in general outside the linear Gaussian model. In our method, we make the assumption that there is a breakdown in the distributions between the $W_i$ corresponding to the covariates $X_i$ belonging to the model and the other ones (see Figure \ref{Distributions}). Figure \ref{Distributions} illustrates that the distribution of $W_i$ depends on whether $X_i$ is relevant or not. To generate Figure \ref{Distributions}, we simulated a data set under a linear Gaussian regression model with $p = 20$ independent Gaussian covariates. Only the first five were linked to $Y$: $$Y = \beta_1X_1 + \dots + \beta_p X_p + \epsilon, $$ where $\beta = (1,1,1,1,1,0,\dots,0)$ and $\epsilon \sim \mathcal N(0,1)$. In our knockoffs procedure applied to this data set, only the statistics $W_1,W_2,W_3,W_4, W_5,W_6, W_7,W_{13},W_{14},W_{16}, W_{19}$, associated to the covariates $X_1, \ldots, X_7, X_{13}, X_{14}, X_{16}$ and $X_{19}$, have a positive value. For example, the covariate $X_1$ entered the model for $\lambda = 1.002$ (thus, $T_1 = 1.002$) and entered the model before its knockoff $\tilde X_1$. This means that the knockoff variable $\tilde X_1$ entered the model for a smaller value of $\lambda$, that is, $\tilde T_1 < 1.002$. $W_3$ takes the largest value among all the statistics $W_i,\ i = 1, \ldots, 20$, which means that $X_3$ is the covariate most likely to belong to the model. We can clearly observe a breakdown between the values of the first five covariates and the other ones.\\ \begin{figure} \centering \includegraphics[scale = 0.55]{Distributions.eps} \caption{Example of positive statistics $W_i$ sorted in ascending order.
Linear Gaussian regression model with $n = 500$ observations of $p = 20$ covariates. Only covariates $X_1$, $X_2$, $X_3$, $X_4$ and $X_5$ belong to the model (regression coefficients are set to $\beta = (1,1,1,1,1,0,\ldots,0)$).} \label{Distributions} \end{figure} Consequently, we present two automatic ways to define the threshold $s$ by using two change detection methods: the method proposed by Auger and Lawrence \cite{AugerLawrence,picard2004statistical,Lebarbier} and the CUSUM method for mean change detection. Let $W_{(i)},\ i = 1,\ldots,w,$ denote the $w$ sorted positive statistics, that is $0 < W_{(1)} \leq W_{(2)} \leq \ldots \leq W_{(w)}$, and let $e_j~=~W_{(j+1)}~-~W_{(j)}$, for all $j = 1, \ldots, w-1$, denote the $w-1$ gaps between these sorted statistics. Note that $w$, the number of positive statistics $W_i$, is random ($w = 11$ in Figure \ref{Distributions}). We propose two automatic thresholds defined as: \begin{itemize} \item the minimum of the two thresholds obtained by applying these two change detection methods directly on the statistics $W_{(i)},\ i = 1, \ldots, w$, sorted in ascending order, \item the minimum of the two thresholds obtained by applying these two change detection methods on the gaps $e_j,\ j = 1, \ldots, w-1$. \end{itemize} Let us name the first one '$W$-threshold' and the second one 'gaps-threshold' for the sake of simplicity. \subsection{R package \texttt{kosel}} Our procedures have been implemented in an R package, called \texttt{kosel} (for knockoffs selection), available on the CRAN. Our package includes three functions: \texttt{ko.glm}, \texttt{ko.ordinal} and \texttt{ko.sel}.\\ The first two functions construct the knockoffs matrix and return the $p$-vector of statistics $W$ for $L_1$-penalised regression models, respectively implemented in the R functions \texttt{glmnet} and \texttt{ordinalNet} from the packages of the same name.
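To fix ideas, the statistics $W$ returned by these functions and the change-point thresholding described above can be sketched as follows. This is a simplified illustration: the least-squares split below is a crude stand-in for the Auger-Lawrence and CUSUM methods cited above, and all function names are ours.

```python
import numpy as np

def w_statistics(T):
    """W_i = max(T_i, Ttilde_i), signed + if X_i enters before its knockoff."""
    p = len(T) // 2
    Tx, Tk = T[:p], T[p:]
    return np.where(Tx > Tk, 1.0, -1.0) * np.maximum(Tx, Tk)

def single_change_split(v):
    """Split index minimising the within-segment sum of squares for a
    single mean change (a crude stand-in for the cited methods)."""
    v = np.asarray(v, dtype=float)
    costs = [((v[:j] - v[:j].mean())**2).sum() + ((v[j:] - v[j:].mean())**2).sum()
             for j in range(1, len(v))]
    return 1 + int(np.argmin(costs))

def w_threshold(W):
    """Threshold = first sorted positive W after the detected mean change.
    (The gaps-threshold applies the same idea to the gaps e_j instead.)"""
    Wpos = np.sort(W[W > 0])
    return Wpos[single_change_split(Wpos)]
```

For instance, with $W = (0.1, 0.2, -0.3, 5.0, 6.0, 0.15)$ the split falls between $0.2$ and $5.0$, so the threshold is $5.0$ and the estimated model is $\hat S = \{X_4, X_5\}$.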
\texttt{glmnet} encompasses generalised linear models whereas \texttt{ordinalNet} includes ordinal regression models such as cumulative link, adjacent-category, continuation-ratio or stopping-ratio models. By default, a seed is used so that the knockoffs matrix remains the same (and thus, so does the resulting statistics vector $W$). But this can be changed with the option \texttt{random = TRUE} to exploit the randomness of the procedure (see Subsection \ref{Randomness} for further details).\\ The third function \texttt{ko.sel} deals with the choice of the threshold. It uses the statistics vector $W$ obtained by one of the two other functions and returns the $p$-binary vector of estimation and the threshold $s$. Three choices are proposed: \texttt{method = 'stats'} and \texttt{method = 'gaps'} respectively correspond to the '$W$-threshold' and 'gaps-threshold', while \texttt{method = 'manual'} allows the user to choose their own threshold. The option \texttt{print = TRUE} displays the positive statistics $W_i$ sorted in ascending order as in Figure \ref{Distributions}. For \texttt{method = 'manual'}, they are automatically displayed so that the user can choose their threshold. For the '$W$-threshold' (\texttt{method = 'stats'}) and 'gaps-threshold' (\texttt{method = 'gaps'}), the option \texttt{print = TRUE} also displays a horizontal line corresponding to the threshold. \section{Simulation studies} \subsection{Settings} We now describe experimental results to study the efficiency of our procedure. To this end, we have performed different simulations with various regressions: linear Gaussian regression, logistic regression and cumulative logit regression (with proportional odds). Covariates $\overrightarrow X$ are simulated as Gaussian such that $\mathbb E(X_k) = 0$ and $\text{var}(X_k) = 1$ for all $k = 1, \ldots, p$ and such that $X_i$ and $X_j$ are dependent conditionally on the other covariates $X_k,\ k \in \{1, \ldots, p\}\setminus\{i,j\}$ with probability $0.2$.
The design matrix $\textbf{X}$ of covariates has been simulated with the R function \texttt{huge.generator} from the package \texttt{huge}, for a random graph structure. We have then simulated the observations of the response variable $Y$ as: \begin{align*} Y & = \beta_1 X_1 + \ldots + \beta_p X_p + \epsilon, \tag{linear regression}\\ \text{logit}(\P(Y = 1|X)) & = \alpha_1 + \beta_1 X_1 + \ldots + \beta_p X_p, \tag{logistic regression}\\ \text{or } \text{logit}(\P(Y \leq k|X)) & = \alpha_k + \beta_1 X_1 + \ldots + \beta_p X_p, \quad k = 1, 2, \tag{cumulative logit regression} \end{align*} where $\epsilon \sim \mathcal N(0, 1)$ is Gaussian noise, the vector of regression coefficients $\boldsymbol \beta$ is sparse and given below, and the intercepts $\alpha_1$ and $\alpha_2$ are chosen so that the response variable $\textbf{Y}$ takes enough values in each of its modalities ($\{0,1\}$ for logistic regression and $\{0,1,2\}$ for cumulative logit regression). These regressions have been respectively performed with the R functions \texttt{glmnet} and \texttt{ordinalNet} of the eponymous R packages.\\ We present detection rates of each covariate over $B = 100$ repetitions for different settings. The detection rate of the covariate $X_l$ is the number of times among the $100$ repetitions where the estimated model included $X_l$. First, we have simulated $n = 200$ observations of $p = 50$ covariates for pedagogical reasons and next, $n = 1000$ observations of $p = 2000$ covariates to illustrate results in a higher-dimensional framework. For $p = 2000$ covariates, results are presented as boxplots of detection rates according to the regression coefficient $\beta$ in order to improve readability.\\ In addition, we compare our results with results obtained by cross validation. Cross validation has been performed with the R functions \texttt{cv.glmnet} for linear and logistic regressions and \texttt{ordinalNetTune} with the 'logLik' tuning method for cumulative logit regression.
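The three response models above can be simulated as in the following sketch (our own illustration: independent Gaussian covariates replace the \texttt{huge.generator} design, and the intercept values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
beta = np.concatenate([np.ones(5), np.zeros(p - 5)])   # sparse coefficients
X = rng.normal(size=(n, p))      # independent Gaussian stand-in for the design
eta = X @ beta                   # linear predictor x^T beta

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# linear regression: Y = X beta + eps
y_lin = eta + rng.normal(size=n)

# logistic regression: logit P(Y = 1 | X) = alpha_1 + x^T beta
y_logit = rng.binomial(1, sigmoid(0.0 + eta))

# cumulative logit with modalities {0, 1, 2}:
# logit P(Y <= k | X) = alpha_k + x^T beta for the first two categories
a1, a2 = -1.0, 1.0               # a1 < a2 keeps the cumulative probs ordered
p0 = sigmoid(a1 + eta)           # P(Y <= 0)
p1 = sigmoid(a2 + eta)           # P(Y <= 1)
u = rng.uniform(size=n)
y_cum = np.where(u <= p0, 0, np.where(u <= p1, 1, 2))
```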
For $p = 50$, we also compare our results in the linear Gaussian setting with results obtained by Barber and Candes' procedure. Their procedure is implemented in the function \texttt{knockoff.filter} from the R package \texttt{knockoff}. We do not perform this comparison for $p = 2000$ because their procedure is not applicable when there are fewer observations than covariates. \subsection{Efficiency and comparisons} \subsubsection{p = 50} \label{p50} First, we present results for $n = 200$ observations of $p = 50$ covariates. Regression coefficients are set to $\boldsymbol \beta = (1,1,1,1,1,0,\ldots,0)$ and $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$. The covariates $\textbf{X}$ are the same for each regression model but differ according to the regression coefficients $\boldsymbol \beta$. In other words, for a fixed value of $\boldsymbol \beta$, the design matrix $\textbf{X}$ is the same for each of the three regression models, but the response variable $\textbf{Y}$ is simulated according to the regression model and is therefore different. The knockoffs matrix is also different.\\ \begin{figure}[htbp!] \centering \subfloat[$\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$.]{% \resizebox*{7.11cm}{!}{\includegraphics{B1-Gaus.eps}}}\hspace{0pt} \subfloat[$\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$.]{% \resizebox*{7.11cm}{!}{\includegraphics{B2-Gaus.eps}}} \caption{Detection rates of each covariate for the four methods: revisited knockoffs with $W$- and gaps-thresholds, cross validation and Barber and Candes' knockoffs. Linear Gaussian regression model with $n = 200$ observations of $p = 50$ covariates with regression coefficients $\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$ (a) and $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$ (b). Covariates are dependent Gaussian with a random structure. The number of i.i.d.
repetitions is $B = 100$.} \label{fig:RLG_EC} \end{figure} \begin{figure}[] \centering \subfloat[$\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$.]{% \resizebox*{7.11cm}{!}{\includegraphics{B1-Log.eps}}}\hspace{0pt} \subfloat[$\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$.]{% \resizebox*{7.11cm}{!}{\includegraphics{B2-Log.eps}}} \caption{Detection rates of each covariate for the three methods: revisited knockoffs with $W$- and gaps-thresholds and cross validation. Logistic regression model with $n = 200$ observations of $p = 50$ covariates with regression coefficients $\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$ (a) and $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$ (b). Covariates are dependent Gaussian with a random structure. The number of i.i.d. repetitions is $B = 100$.} \label{fig:RL_EC} \end{figure} \noindent \textbf{Results and comments.} Figures \ref{fig:RLG_EC}, \ref{fig:RL_EC} and \ref{fig:CLR_EC} show the detection rates obtained by cross validation and by the revisited knockoffs method after thresholding with the $W$-threshold and with the gaps-threshold, for respectively linear, logistic and cumulative logit regressions. First, we can note that our procedure is efficient for each of these regressions: the difference between the detection rates of the first five covariates and those of the rest is very clear, regardless of the regression coefficients or the choice of the threshold. For linear regression, the two thresholds give similar results, whereas for logistic and cumulative logit regressions, the gaps-threshold tends to give slightly lower detection rates than the $W$-threshold, for both relevant and noisy covariates. \\ In comparison, cross validation provides considerably higher detection rates: although the first five covariates, especially $X_4$ and $X_5$, are detected more often, noisy covariates are also detected much more often than with our procedure.
For example, for logistic regression in Figure \ref{fig:RL_EC}, noisy covariates are almost all detected less than 20\% of the time with our procedures whereas they are detected between 20\% and 40\% of the time with cross validation. In practice, using cross validation would give more false positive detections than our procedures.\\ Figure \ref{fig:RLG_EC} also shows detection rates obtained by Barber and Candes' knockoffs in the linear Gaussian regression model. To perform their procedure, we have to choose a target false discovery rate. In practice, we want it to be small, but too small a value leads to an infinite threshold and thus an empty estimated model. By default, the FDR is set to $0.1$, but we set it to $0.4$ to avoid too many empty estimated models. For the two different configurations of $\boldsymbol \beta$, we obtained $4$ empty estimated models out of $100$ repetitions. Because of that, detection rates of the noisy covariates tend to be a bit higher than ours. For the same reason, the first five covariates are a bit less detected (close to $96$\%, which corresponds to the number of repetitions for which the threshold was not infinite). For $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots, 0)$, $X_5$ is nevertheless better detected.\\ All three figures also illustrate that detection rates depend on the regression coefficient $\beta$: the higher $\beta$ is, the more often the associated covariate is detected. Indeed, for $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$, we can observe that the covariate $X_5$ tends to be less detected than the first four. Furthermore, we can notice that some noisy covariates are detected more often than others. For example, this is the case for the noisy covariates $X_{18}, X_{19}, X_{29}, X_{36}, X_{39}$ or $X_{47}$ for $\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$, for all kinds of regression (since the design matrix is the same). This is probably due to the dependence structure of $\overrightarrow X$.
In particular, these covariates are dependent on three of the first five covariates conditionally on the other ones. A similar phenomenon occurs for $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots, 0)$ with the noisy covariates $X_{19}$, $X_{41}$ and $X_{47}$.\\ Finally, our procedure seems to be quite efficient regardless of the regression model. Nevertheless, results are a little better for linear regression. In this case, we recall that Barber and Candes' procedures \cite{CandesKnockoff,Candes2016knockoff} also provide theoretical guarantees. \begin{figure}[] \centering \subfloat[$\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$.]{% \resizebox*{7.11cm}{!}{\includegraphics{B1-Cum.eps}}}\hspace{0pt} \subfloat[$\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$.]{% \resizebox*{7.11cm}{!}{\includegraphics{B2-Cum.eps}}} \caption{Detection rates of each covariate for the three methods: revisited knockoffs with $W$- and gaps-thresholds and cross validation. Cumulative logit model with $n = 200$ observations of $p = 50$ covariates with regression coefficients $\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$ (a) and $\boldsymbol \beta = (2.5,2,1.5,1,0.5,0,\ldots,0)$ (b). Covariates are dependent Gaussian with a random structure. The number of i.i.d. repetitions is $B = 100$.} \label{fig:CLR_EC} \end{figure} \subsubsection{p = 2000} We now present results for $n = 1000$ observations of $p = 2000$ covariates to illustrate that our procedure is suitable with thousands of covariates. Regression coefficients are set to $\beta_k = \left\{ \begin{array}{ll} 5 & \text{if } 1 \leq k \leq 20, \\ 4 & \text{if } 21 \leq k \leq 40, \\ 3 & \text{if } 41 \leq k \leq 60, \\ 2 & \text{if } 61 \leq k \leq 80, \\ 1 & \text{if } 81 \leq k \leq 100, \\ 0 & \text{otherwise.} \end{array} \right.$ In the same way as for $p = 50$ (Subsubsection \ref{p50}), the covariates $\textbf{X}$ are the same for each regression model but differ according to the regression coefficients $\boldsymbol \beta$.
In other words, for a fixed value of $\boldsymbol \beta$, the design matrix $\textbf{X}$ is the same for each of the three regression models. But the response variable $\textbf{Y}$ is simulated according to the regression model and is therefore different. The knockoffs matrix is also different.\\ \begin{figure}[htbp!] \centering \subfloat[Revisited knockoffs with $W$-threshold.]{% \resizebox*{7.11cm}{!}{\includegraphics{Gaus-W.eps}}\label{fig:RLG_2000_W}}\hspace{0pt} \subfloat[Revisited knockoffs with gaps-threshold.]{% \resizebox*{7.11cm}{!}{\includegraphics{Gaus-gaps.eps}}\label{fig:RLG_2000_gaps}}\vspace{0pt} \subfloat[Cross validation.]{% \resizebox*{7.11cm}{!}{\includegraphics{Gaus-CV.eps}}}\hspace{0pt} \subfloat[Comparison of detection rates of noisy covariates for the revisited knockoffs method with gaps-threshold and for cross validation.]{% \resizebox*{7.11cm}{!}{\includegraphics{Gaus-NP.eps}}\label{fig:RLG_2000_NP}} \caption{Boxplots of detection rates of each covariate according to their regression coefficient $\beta$ for the three methods: revisited knockoffs $W$-threshold (a), gaps-thresholds (b) and cross validation (c). Linear Gaussian regression model with $n = 1000$ observations of $p = 2000$ covariates. Covariates are dependent Gaussian with a random structure. The number of i.i.d. repetitions is $B = 100$.} \label{fig:RLG_2000} \end{figure} \begin{figure}[htbp!] 
\centering \subfloat[Revisited knockoffs with $W$-threshold.]{% \resizebox*{7.11cm}{!}{\includegraphics{Log-W.eps}}}\hspace{0pt} \subfloat[Revisited knockoffs with gaps-threshold.]{% \resizebox*{7.11cm}{!}{\includegraphics{Log-gaps.eps}}}\vspace{0pt} \subfloat[Cross validation.]{% \resizebox*{7.11cm}{!}{\includegraphics{Log-CV.eps}}}\hspace{0pt} \subfloat[Comparison of detection rates of noisy covariates for the revisited knockoffs method with gaps-threshold and for cross validation.]{% \resizebox*{7.11cm}{!}{\includegraphics{Log-NP.eps}}\label{fig:RL_2000_NP}} \caption{Boxplots of detection rates of each covariate according to their regression coefficient $\beta$ for the three methods: revisited knockoffs $W$-threshold (a), gaps-thresholds (b) and cross validation (c). Logistic regression model with $n = 1000$ observations of $p = 2000$ covariates. Covariates are dependent Gaussian with a random structure. The number of i.i.d. repetitions is $B = 100$.} \label{fig:RL_2000} \end{figure} \begin{figure}[htbp!] \centering \subfloat[Revisited knockoffs with $W$-threshold.]{% \resizebox*{7.11cm}{!}{\includegraphics{Cum-W.eps}}}\hspace{0pt} \subfloat[Revisited knockoffs with gaps-threshold.]{% \resizebox*{7.11cm}{!}{\includegraphics{Cum-gaps.eps}}}\vspace{0pt} \subfloat[Cross validation.]{% \resizebox*{7.11cm}{!}{\includegraphics{Cum-CV.eps}}}\hspace{0pt} \subfloat[Comparison of detection rates of noisy covariates for the revisited knockoffs method with gaps-threshold and for cross validation.]{% \resizebox*{7.11cm}{!}{\includegraphics{Cum-NP.eps}}\label{fig:Cum_2000_NP}} \caption{Boxplots of detection rates of each covariate according to their regression coefficient $\beta$ for the three methods: revisited knockoffs $W$-threshold (a), gaps-thresholds (b) and cross validation (c). Cumulative logit model with $n = 1000$ observations of $p = 2000$ covariates. Covariates are dependent Gaussian with a random structure. The number of i.i.d.
repetitions is $B = 100$.} \label{fig:Cum_2000} \end{figure} \noindent \textbf{Results and comments.} Figures \ref{fig:RLG_2000}, \ref{fig:RL_2000} and \ref{fig:Cum_2000} each contain four graphics: three of them are boxplots of the detection rates of each of the 6 groups of covariates according to their regression coefficient $\beta$. These detection rates are respectively obtained with the revisited knockoffs methods and cross validation. In order to compare our method with cross validation on the noisy covariates, the last graphic presents the detection rates of the noisy covariates (those for which $\beta = 0$) obtained with the knockoffs method (with gaps-threshold) as a function of the detection rates obtained with cross validation.\\ Results in the linear regression framework are presented in Figure \ref{fig:RLG_2000}. Comparing the three boxplots, we note that detection rates with the revisited knockoffs method with the $W$-threshold are lower than with the gaps-threshold. More specifically, relevant covariates with $\beta = 1$ are detected between 10\% and 95\% of the time with the $W$-threshold whereas they are detected more than 80\% of the time with the gaps-threshold. Detection rates with the $W$-threshold are lower and more widespread for covariates with regression coefficient $\beta = 1$ compared to the other relevant covariates. Cross validation leads to better detection rates for the relevant covariates. However, Figure \ref{fig:RLG_2000_NP} illustrates that most of the noisy covariates have higher detection rates with cross validation than with our procedure. Thus, cross validation gives more false positive detections.\\ Results for logistic and cumulative logit regressions are respectively presented in Figures \ref{fig:RL_2000} and \ref{fig:Cum_2000}.
As for $p = 50$, we can note on the boxplots that detection rates depend on the regression coefficient $\beta$: for all three methods, detection rates increase with $\beta$, that is, the higher $\beta$ is, the more often the associated covariates are detected. We also observe this on graphics \ref{fig:RLG_2000_W} and \ref{fig:RLG_2000_gaps} for linear regression, although it is less pronounced. As for linear regression, even if cross validation gives better detection rates for the relevant covariates, it also gives more false positive detections for the noisy covariates, as illustrated in graphics \ref{fig:RL_2000_NP} and \ref{fig:Cum_2000_NP}. This phenomenon is even stronger for these two regression models, where almost all of the noisy covariates are detected more often with cross validation than with our procedure. Contrary to linear regression, detection rates obtained by the knockoffs method with the $W$-threshold are higher than with the gaps-threshold, and they are higher for both relevant and noisy covariates.\\ Although detection rates are better for linear regression, our procedures lead to satisfying results for the three regression models. Even though relevant covariates are not always detected often enough, detection rates of noisy covariates are also often very low, especially in comparison with cross validation. On the whole, the method seems to be appropriate for sparse models regardless of the regression model, and particularly when the goal is to avoid false positive detections. Notice that the threshold ($W$- or gaps-threshold) to be used to avoid false positive detections may vary according to the regression model. \subsection{Randomness of the procedure} \label{Randomness} Note that the revisited knockoffs procedure and cross validation are both random (which is not the case for Barber and Candes' procedure). Indeed, the former is random in the construction of the knockoffs matrix, whereas the randomness of cross validation lies in the choice of the folds.
Hence, applying one of these methods several times leads to different results. To conclude this section, we compare detection rates obtained by these three methods on the same sample of data. This sample includes $n = 200$ observations of $p = 50$ covariates and consists of the first of the $B = 100$ samples used in Subsubsection \ref{p50}. The dependence structure of the vector $\overrightarrow X$ is thus also the same as in Subsubsection \ref{p50}.\\ \begin{figure}[htbp!] \centering \subfloat[Linear regression.]{% \resizebox*{7.11cm}{!}{\includegraphics{Random-Gaus.eps}}}\hspace{0pt} \subfloat[Logistic regression.]{% \resizebox*{7.11cm}{!}{\includegraphics{Random-Log.eps}}} \subfloat[Cumulative logit regression.]{% \resizebox*{7.11cm}{!}{\includegraphics{Random-Cum.eps}}} \caption{Detection rates of each covariate for the three methods: revisited knockoffs method with the $W$- and gaps-thresholds (see Subsection \ref{Thresholds}) and cross validation. Detection rates are obtained on 100 repetitions on the same sample of $n = 200$ observations of $p = 50$ covariates with regression coefficients $\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$. Covariates are dependent Gaussian with a random structure.} \label{fig:Random} \end{figure} \noindent \textbf{Results and comments.} Figure \ref{fig:Random} displays detection rates of each covariate using the randomness of the three procedures: the revisited knockoffs method with the $W$- and gaps-thresholds and cross validation. Detection rates are obtained over 100 repetitions on the same sample for the three methods and for the three regression models: linear, logistic and cumulative logit regressions. Regression coefficients are set to $\boldsymbol \beta = (1,1,1,1,1,0,\ldots, 0)$. Thus, only the first five covariates belong to the model. We can notice that these first five covariates are almost always 100\% detected, except with the gaps-threshold for cumulative logit regression (for which they are still detected more than 95\% of the time).
Noisy covariates, that is covariates $X_6, \ldots, X_{50}$, are always less detected by our procedures than by cross validation. However, some of them are incorrectly detected at a high rate: $X_{18}$ for linear regression, $X_{18}, X_{25}, X_{30}$ and $X_{39}$ for logistic regression, and $X_{19}, X_{21}, X_{36}$ and $X_{40}$ for cumulative logit regression. This is probably due to the sample (in which the dependence structure is more or less pronounced). It should be recalled that this dependence structure is the same as in Subsubsection \ref{p50} for $p = 50$. In comparison with Figures \ref{fig:RLG_EC}, \ref{fig:RL_EC} and \ref{fig:CLR_EC} of Subsubsection \ref{p50}, we can see that these covariates also have higher detection rates there. For all these covariates, cross validation gives higher detection rates than our procedures. Cross validation also gives much higher detection rates for some other noisy covariates. For instance, $X_6, X_{11}, X_{13}, X_{24}, X_{40}$ and $X_{48}$ are always detected for linear regression whereas the revisited knockoffs method with the $W$- and gaps-thresholds never detects them. For the cumulative logit model, $X_8, X_{41}, X_{46}$ and $X_{50}$ are always detected whereas our procedures detect them less than 15\% of the time.\\ In practice, with real data, this randomness opens up further ways to perform variable selection. \section{Discussion} In this paper, we proposed a method for variable selection in regression models based on the construction of a matrix of knockoffs of the covariates. This method is quite intuitive and suitable for many types of regressions, including when the number of observations is much smaller than the number of covariates. Two different thresholds can be chosen, leading to two procedures, which have been implemented in the R package \texttt{kosel}. These procedures both turn out to be very pertinent and efficient, as the many and diverse simulations illustrate.
Our two procedures are particularly appropriate when the goal is to avoid false positive detections. Indeed, even if there are false negative detections, there is also a very low rate of false positive detections. Simulations also show that the efficiency of our procedures depends on the regression model. In general, one can try the two thresholds and choose the results according to one's goal. Furthermore, the randomness of our procedures provides other techniques to perform variable selection. \\ Nonetheless, in the case of linear Gaussian regression, Barber and Candes' procedures also offer theoretical guarantees. \\ In addition, our procedures give better results than cross validation with regard to false positive detections, even when we make use of randomness. However, if the goal is to overselect, it is more appropriate to use other techniques such as cross validation. \\ \bibliographystyle{plain}
\section{Introduction} It is well known that the solution of the Cauchy problem for the Hopf equation \begin{equation} \label{hopf} u_t+6uu_x=0,\quad u(x,t=0)=u_0(x), \;\;x\in\mathbb{R},\;t\in\mathbb{R}^+ \end{equation} reaches a point of gradient catastrophe in a finite time. The solution of the viscosity or conservative regularization of the above hyperbolic equation displays a considerably different behavior. Equation (\ref{hopf}) admits a Hamiltonian structure \[ u_t +\{ u(x), H_0\}\equiv u_t +\partial_x\frac{\delta H_0}{\delta u(x)} =0, \] with Hamiltonian and Poisson bracket \[ H_0 =\int u^3\, dx, \quad \{ u(x) , u(y)\}=\delta'(x-y), \] respectively. All the Hamiltonian perturbations up to the order $\epsilon^4$ of the hyperbolic equation (\ref{hopf}) have been classified in \cite{dubcr}. They are parametrized by two arbitrary functions $c(u)$, $p(u)$ \begin{equation} \begin{split} \label{riem2} & u_t +6u\, u_x + \frac{\epsilon^2}{24} \left[ 2 c\, u_{xxx} + 4 c' u_x u_{xx} + c'' u_x^3\right]+\epsilon^4 \left[ 2 p\, u_{xxxxx} \right.\\ & \\ &\left. +2 p'( 5 u_{xx} u_{xxx} + 3 u_x u_{xxxx}) + p''( 7 u_x u_{xx}^2 + 6 u_x^2 u_{xxx} ) +2 p''' u_x^3 u_{xx}\right]=0, \end{split} \end{equation} where the prime denotes the derivative with respect to $u$. The corresponding Hamiltonian takes the form \[ H=\int \left[ u^3 - \epsilon^2 \frac{c(u)}{24} u_x^2 +\epsilon^4 p(u) u_{xx}^2 \right]\, dx. \] For $c(u)=12$, $p(u)=0$ one obtains the Korteweg--de Vries (KdV) equation $u_t+6uu_x+\epsilon^{2}u_{xxx}=0$, and for $c(u)=48u $ and $p(u)=2u$ the Camassa-Holm equation up to order $\epsilon^{4}$; for generic choices of the functions $c(u)$, $p(u)$ equation (\ref{riem2}) is apparently not an integrable PDE.
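As a quick consistency check, for constant $c(u)=12$ one has $c'=c''=0$, and with $p(u)=0$ all the $\epsilon^4$ terms in (\ref{riem2}) vanish, so that

```latex
u_t + 6u\, u_x + \frac{\epsilon^2}{24}\, 2\cdot 12\, u_{xxx}
 = u_t + 6u\, u_x + \epsilon^2 u_{xxx} = 0,
```

which is precisely the KdV equation quoted above.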
However it admits an infinite family of commuting Hamiltonians up to order $O(\epsilon^6).$ The case of small viscosity perturbations of one-component hyperbolic equations has been well studied and understood (see \cite{bressan} and references therein), while the behavior of solutions to the conservative perturbation (\ref{riem2}) has, to the best of our knowledge, not been investigated after the point of gradient catastrophe of the unperturbed equation, except in the KdV case \cite{LL,V2,DVZ}. In a previous paper \cite{GK} (henceforth referred to as I) we have presented a quantitative numerical comparison of the solution of the Cauchy problem for KdV \begin{equation} \label{KdV} u_t+6uu_x+\epsilon^{2}u_{xxx}=0,\quad u(x,0)=u_0(x), \end{equation} in the small dispersion limit $\epsilon\rightarrow 0$, and the asymptotic formula obtained in the works of Lax and Levermore \cite{LL}, Venakides \cite{V2} and Deift, Venakides and Zhou \cite{DVZ}, which describes the solution of the above Cauchy problem at leading order as $\epsilon\rightarrow 0$. The asymptotic description of \cite{LL},\cite{DVZ} gives in general a good approximation of the KdV solution, but is less satisfactory near the point of gradient catastrophe of the hyperbolic equation. This problem has been addressed by Dubrovin in \cite{dubcr}, where, following the universality results obtained in the context of matrix models by Deift et al. \cite{DKMVZ}, he formulated the universality conjecture about the behavior of a generic solution to the Hamiltonian perturbation (\ref{riem2}) of the hyperbolic equation (\ref{hopf}) near the point $(x_c,t_c,u_c)$ of gradient catastrophe for the solution of (\ref{hopf}). He argued that, up to shifts, Galilean transformations and rescalings, this behavior essentially depends neither on the choice of solution nor on the choice of the equation.
Moreover, the solution near the point $(x_c, t_c, u_c)$ is given by \begin{equation} \label{univer} u(x,t,\epsilon)\simeq u_c +a\,\epsilon^{2/7} U \left( b\, \epsilon^{-6/7} (x- x_c-6u_c (t-t_c)); c\, \epsilon^{-4/7} (t-t_c)\right) +O\left( \epsilon^{4/7}\right) \end{equation} where $a$, $b$, $c$ are some constants that depend on the choice of the equation and the solution and $U=U(X; T)$ is the unique real smooth solution to the fourth order ODE \begin{equation}\label{PI2} X=6T\, U -\left[ U^3 + (\frac12 U_{X}^2 + U\, U_{XX} ) +\frac1{10} U_{XXXX}\right], \end{equation} which is the second member of the Painlev\'e I hierarchy. We will call this equation PI2. The relevant solution is characterized by the asymptotic behavior \begin{equation} \label{PI2asym} U(X,T)=\mp(X)^{\frac{1}{3}}\mp \dfrac{2T}{X^{\frac13}}+O(X^{-1}),\quad X\rightarrow \pm \infty, \end{equation} for each fixed $T\in\mathbb{R}$. The existence of a smooth solution of (\ref{PI2}) for all $X,T\in\mathbb{R}$ satisfying (\ref{PI2asym}) has been recently proved by Claeys and Vanlessen \cite{CV}. Furthermore, they study in \cite{CV1} the double scaling limit for the matrix model with the multicritical index and show that the limiting eigenvalue correlation kernel is obtained from the particular solution of (\ref{PI2}) satisfying (\ref{PI2asym}). This result was conjectured in the work of Br\'ezin, Marinari and Parisi \cite{BMP}. In this paper we address numerically the validity of (\ref{univer}) for the KdV equation, and we identify the region where this solution provides a better description than the Lax-Levermore and Deift-Venakides-Zhou theory. As an outlook on the validity of (\ref{univer}) for other equations in the family (\ref{riem2}), we present a numerical analysis of the Camassa-Holm equation near the breakup point.
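The boundary condition (\ref{PI2asym}) can be checked directly: substituting the two-term ansatz $U = -X^{1/3} - 2T X^{-1/3}$ (the branch for $X\to+\infty$) into the right-hand side of (\ref{PI2}) reproduces $X$ up to terms of order $X^{-1}$. A small numerical sketch of this check (our own illustration), with the derivatives of the ansatz coded by hand:

```python
import numpy as np

def pi2_residual(X, T):
    """Residual of X = 6 T U - [U^3 + (U_X^2/2 + U U_XX) + U_XXXX/10]
    for the ansatz U = -X**(1/3) - 2*T*X**(-1/3), valid for X > 0."""
    U     = -X**(1/3) - 2*T*X**(-1/3)
    UX    = -(1/3)*X**(-2/3) + (2*T/3)*X**(-4/3)
    UXX   = (2/9)*X**(-5/3) - (8*T/9)*X**(-7/3)
    UXXXX = (80/81)*X**(-11/3) - (560*T/81)*X**(-13/3)
    rhs = 6*T*U - (U**3 + (0.5*UX**2 + U*UXX) + UXXXX/10)
    return X - rhs

# A short expansion shows the residual equals -8*T**3/X + O(X**(-4/3)),
# so X * residual tends to -8*T**3 as X grows.
```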
While the validity of (\ref{univer}) can be theoretically proved using a Riemann-Hilbert approach to the small dispersion limit of the KdV equation \cite{DVZ} and recent results in \cite{DKMVZ},\cite{CV},\cite{CV1}, for the Camassa-Holm equation and also for the general Hamiltonian perturbation to the hyperbolic equation (\ref{hopf}), the problem is completely open. Furthermore, for the general equation (\ref{riem2}), the existence of a smooth solution for a short time has not been established yet. An equivalent analysis should also be performed for Hamiltonian perturbations of elliptic systems, in particular for the semiclassical limit of the focusing nonlinear Schr\"odinger equation \cite{KMM},\cite{TVZ}. The paper is organized as follows. In section 2 we give a brief summary of the Lax-Levermore and Deift-Venakides-Zhou theory and the multiscale expansion (\ref{univer}). In section 3 we present the numerical comparison of the KdV solution with the asymptotic description based on the Hopf and Whitham solutions and with the multiscale solution. In section 4 we study the same situation for the Camassa-Holm equation. In the appendix we briefly outline the numerical approaches used. \section{Asymptotic and multiscale solutions} Following the work of \cite{LL}, \cite{V2} and \cite{DVZ}, the rigorous theoretical description of the small dispersion limit of the KdV equation is the following: Let $\bar{u}(x,t)$ be the zero dispersion limit of $u(x,t,\epsilon)$, namely \begin{equation} \label{baru} \bar{u}(x,t)=\lim_{\epsilon\rightarrow 0}u(x,t,\epsilon). \end{equation} \noindent 1) For $0\leq t< t_c$, where $t_c$ is a critical time, the solution $u(x,t,\epsilon)$ of the KdV Cauchy problem is approximated, for small $\epsilon$, by the limit $\bar{u}(x,t)$ which solves the Hopf equation \begin{equation} \label{Hopf} \bar{u}_t+6\bar{u}\bar{u}_x=0.
\end{equation} Here $t_c$ is the time when the first point of gradient catastrophe appears in the solution \begin{equation} \label{Hopfsol} \bar{u}(x,t)=u_0(\xi),\quad x=6tu_0(\xi)+\xi, \end{equation} of the Hopf equation. From the above, the time $t_c$ of gradient catastrophe can be evaluated from the relation \[t_{c}=\dfrac{1}{\max_{\xi\in\mathbb{R}}[-6u_0'(\xi)]}. \] 2) After the time of gradient catastrophe, the solution of the KdV equation is characterized by the appearance of an interval of rapid modulated oscillations. According to the Lax-Levermore theory, the interval $[x^-(t), x^+(t)]$ of the oscillatory zone is independent of $\epsilon$. Here $x^-(t)$ and $x^+(t)$ are determined from the initial data and satisfy the condition $x^-(t_c)=x^+(t_c)=x_c$ where $x_c$ is the $x$-coordinate of the point of gradient catastrophe of the Hopf solution. Outside the interval $[x^-(t), x^+(t)]$ the leading order asymptotics of $u(x,t,\epsilon)$ as $\epsilon\rightarrow 0$ is described by the solution of the Hopf equation (\ref{Hopfsol}).
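The implicit form (\ref{Hopfsol}) is straightforward to evaluate numerically for $t<t_c$ by solving the characteristic relation for $\xi$ at each point $(x,t)$, since the map $\xi\mapsto 6tu_0(\xi)+\xi$ is then monotone. A minimal sketch (Python with SciPy; the root bracket is an arbitrary choice of ours):

```python
# Evaluate the Hopf solution u(x,t) = u0(xi), x = 6 t u0(xi) + xi, by
# root-finding in xi; for t < t_c the characteristic map is monotone in xi,
# so the root is unique.
import numpy as np
from scipy.optimize import brentq

def u0(x):
    return -1.0 / np.cosh(x)**2       # the initial data used below

def hopf_solution(x, t, bracket=50.0):
    g = lambda xi: 6.0 * t * u0(xi) + xi - x
    return u0(brentq(g, -bracket, bracket))

t_c = np.sqrt(3.0) / 8.0              # breakup time for this u0
print(hopf_solution(0.0, 0.0))        # recovers u0(0) = -1 at t = 0
print(hopf_solution(-1.0, 0.5 * t_c))
```

For this initial datum the maximum of $-6u_0'(\xi)$ is attained at $\tanh\xi=-1/\sqrt3$ and equals $8/\sqrt3$, giving the breakup time $t_c=\sqrt3/8$ quoted in the numerical section below.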
Inside the interval $[x^-(t), x^+(t)]$ the solution $u(x,t,\epsilon)$ is approximately described, for small $\epsilon$, by the elliptic solution of KdV \cite{GP}, \cite{LL}, \cite{V2}, \cite{DVZ}, \begin{equation} \label{elliptic} u(x,t,\epsilon)\simeq \bar{u}+ 2\epsilon^2\frac{\partial^2}{\partial x^2}\log\theta\left(\dfrac{\sqrt{\beta_1-\beta_3}}{2\epsilon K(s)}[x-2 t(\beta_1+\beta_2+\beta_3) -q];\mathcal{T}\right) \end{equation} where now $\bar{u}=\bar{u}(x,t)$ takes the form \begin{equation} \label{ubar} \bar{u}=\beta_1+\beta_2+\beta_3+2\alpha, \end{equation} \begin{equation} \label{alpha} \alpha=-\beta_{1}+(\beta_{1}-\beta_{3})\frac{E(s)}{K(s)},\;\;\mathcal{T}=i\dfrac{K'(s)}{K(s)}, \;\; s^{2}=\frac{\beta_{2}-\beta_{3}}{\beta_{1}-\beta_{3}} \end{equation} with $K(s)$ and $E(s)$ the complete elliptic integrals of the first and second kind, $K'(s)=K(\sqrt{1-s^{2}})$; $\theta$ is the Jacobi elliptic theta function defined by the Fourier series \[ \theta(z;\mathcal{T})=\sum_{n\in\mathbb{Z}}e^{\pi i n^2\mathcal{T}+2\pi i nz}. \] For constant values of the $\beta_i$ the formula (\ref{elliptic}) is an exact solution of KdV well known in the theory of finite gap integration \cite{IM}, \cite{DN0}. However in the description of the leading order asymptotics of $u(x,t,\epsilon)$ as $\epsilon\rightarrow 0$, the quantities $\beta_i$ depend on $x$ and $t$ and evolve according to the Whitham equations \cite{W} \[ \dfrac{\partial}{\partial t}\beta_i+v_i\dfrac{\partial}{\partial x}\beta_i=0,\quad i=1,2,3, \] where the speeds $v_i$ are given by the formula \begin{equation} v_{i}=4\frac{\prod_{k\neq i}^{}(\beta_{i}-\beta_{k})}{\beta_{i}+\alpha}+2(\beta_1+\beta_{2}+\beta_{3}), \label{eq:la0} \end{equation} with $\alpha$ as in (\ref{alpha}). Lax and Levermore first derived, in the oscillatory zone, the expression (\ref{ubar}) for $\bar{u}=\bar{u}(x,t)$ which clearly does not satisfy the Hopf equation. 
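The building blocks of (\ref{elliptic}) are all directly computable; a minimal sketch (Python with SciPy, whose complete elliptic integrals take the parameter $m=s^{2}$ rather than the modulus $s$; the truncation of the theta series is ours, justified by the rapid decay $e^{-\pi n^{2}K'/K}$ of its terms):

```python
# Evaluate the Jacobi theta function theta(z;T) by truncating its Fourier
# series, with the period ratio T = i K'(s)/K(s) built from complete
# elliptic integrals.  NB: scipy's ellipk/ellipe take m = s^2.
import numpy as np
from scipy.special import ellipk, ellipe

def theta(z, tau, nmax=30):
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j*np.pi*n**2*tau + 2j*np.pi*n*z))

s = 0.6                                   # example modulus
m = s**2
K, E, Kp = ellipk(m), ellipe(m), ellipk(1.0 - m)   # K'(s) = K(sqrt(1-s^2))
tau = 1j * Kp / K                         # purely imaginary period ratio

z = 0.3
print(abs(theta(z + 1.0, tau) - theta(z, tau)))    # ~ 0 (periodicity)
print(abs(theta(-z, tau) - theta(z, tau)))         # ~ 0 (evenness)
```

The periodicity $\theta(z+1;\mathcal{T})=\theta(z;\mathcal{T})$ checked here is what makes (\ref{elliptic}) a modulated wavetrain with wavelength of order $\epsilon$.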
The theta function formula (\ref{elliptic}) for the leading order asymptotics of $u(x,t,\epsilon)$ as $\epsilon\rightarrow 0$, was obtained in the work of Venakides and the phase $q=q(\beta_1,\beta_2,\beta_3)$ was derived in the work of Deift, Venakides and Zhou \cite{DVZ}, using the steepest descent method for oscillatory Riemann-Hilbert problems \cite{DZh} \begin{equation} \label{q0} q(\beta_{1},\beta_{2},\beta_{3}) = \frac{1}{2\sqrt{2}\pi} \int_{-1}^{1}\int_{-1}^{1}d\mu d\nu \frac {f_-( \frac{1+\mu}{2}(\frac{1+\nu}{2}\beta_{1} +\frac{1-\nu}{2}\beta_{2})+\frac{1-\mu}{2}\beta_{3})}{\sqrt{1-\mu} \sqrt{1-\nu^{2}}}, \end{equation} where $f_-(y)$ is the inverse function of the decreasing part of the initial data. The above formula holds till some time $T>t_c$ (see \cite{DVZ} or I for times $t>T$). \noindent 3) Fei-Ran Tian proved that the description in 1) and 2) is generic for some time after the time $t_c$ of gradient catastrophe \cite{FRT1}. In I we discussed the case $u_{0}(x) = -\mbox{sech}^{2}x$ in detail as an example. The main results were that the asymptotic description is of the order $\mathcal{O}(\epsilon)$ close to the center of the Whitham zone, but that the approach gives considerably less satisfactory results near the edges of the Whitham zone and close to the breakup of the corresponding solution to the Hopf equation. In the present paper we address the behavior near the point of gradient catastrophe of the Hopf solution in more detail. In Fig.~\ref{fig2} we show the KdV solution and the corresponding asymptotic solution as given above for several values of the time near the critical time $t_{c}$. It can be seen that there are oscillations before $t_{c}$, and that the solution in the Whitham zone provides only a crude approximation of the KdV solution for small $t-t_{c}$. 
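The phase (\ref{q0}) is amenable to standard quadrature once the endpoint singularities are removed by the substitutions $\mu=1-s^{2}$, $\nu=\sin\theta$. A minimal sketch (Python with SciPy; passing $f_{-}$ as a callable is our choice). Two exact consequences of (\ref{q0}) serve as checks: a linear $f_{-}$ gives $q=(\beta_{1}+\beta_{2}+\beta_{3})/3$, and $\beta_{1}=\beta_{2}=\beta_{3}=\beta$ gives $q=f_{-}(\beta)$ for any $f_{-}$:

```python
# Quadrature for the phase q of (q0): the substitutions mu = 1 - s^2 and
# nu = sin(theta) turn the 1/sqrt endpoint singularities into a smooth
# integrand (dmu/sqrt(1-mu) = 2 ds, dnu/sqrt(1-nu^2) = dtheta).
import numpy as np
from scipy.integrate import dblquad

def q_phase(f_minus, b1, b2, b3):
    def integrand(theta, s):      # dblquad convention: integrand(inner, outer)
        mu, nu = 1.0 - s**2, np.sin(theta)
        arg = (0.5*(1 + mu)*(0.5*(1 + nu)*b1 + 0.5*(1 - nu)*b2)
               + 0.5*(1 - mu)*b3)
        return 2.0 * f_minus(arg)
    val, _ = dblquad(integrand, 0.0, np.sqrt(2.0),
                     lambda s: -np.pi/2, lambda s: np.pi/2)
    return val / (2.0*np.sqrt(2.0)*np.pi)

print(q_phase(lambda y: y, 1.0, 2.0, 3.0))       # (1+2+3)/3 = 2.0
print(q_phase(lambda y: y**2, 1.5, 1.5, 1.5))    # f_-(1.5) = 2.25
```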
\begin{figure}[!htb] \centering \epsfig{figure=break9.eps, width=\textwidth} \caption{The blue line is the solution of the KdV equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-2}$, and the purple line is the corresponding leading order asymptotics given by formulas (\ref{Hopfsol}) and (\ref{elliptic}). The plots are given for different times near the point of gradient catastrophe $(x_c,t_c)$ of the Hopf solution. Here $x_c\simeq -1.524 $, $t_c\simeq 0.216$.} \label{fig2} \end{figure} The situation does not change in principle if we consider smaller values of $\epsilon$ as can be seen from Fig.~\ref{figbreak4}. The solution shows the same qualitative behavior as in Fig.~\ref{fig2}, just on smaller scales in $t$ and $x$. \begin{figure}[!htb] \centering \epsfig{figure=break4.1e6.eps, width=\textwidth} \caption{KdV solution and asymptotic solution for $\epsilon=10^{-3}$ close to the breakup time.} \label{figbreak4} \end{figure} \subsection{Multiscale expansion} We give a brief summary of the results in \cite{dubcr} relevant for the KdV case we are interested in here. Near the point of gradient catastrophe $(x_c,t_c, u_c)$, the Hopf solution is generically given in lowest order by the cubic \begin{equation} \label{hopfsolc} x-x_c \simeq 6(t-t_c) u -k(u-u_c)^3,\quad k=-f_-'''(u_c)/6, \end{equation} because $6t_c+f_-'(u_c)=0$ and $f_-''(u_c)=0$. Here $f_-(u)$ is the inverse of the decreasing part of the initial data $u_0(x)$. Now let us consider $h_k=\dfrac{\delta H_k}{\delta u}$ where $H_k$ are the KdV Hamiltonians such that $h_k=u^{k+2}/(k+2)!+O(\epsilon^2)$. We have \[ h_{-1}=u,\;\;h_0=\dfrac{u^2}{2}+\dfrac{\epsilon^2}{6} u_{xx},\;\; h_1=\dfrac{1}{6}(u^3+\frac{\epsilon^2}{2}(u_x^2+2uu_{xx})+\dfrac{\epsilon^4}{10}u_{xxxx}), \] and the KdV equation is obtained from $u_t+6\partial_xh_0=0$. Then \[ x=6tu+a_0h_0+a_1h_1+\dots+a_kh_k, \] is a symmetry of the KdV equation \cite{DZ}.
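As a quick consistency check on the gradients $h_k$ listed above, $h_0$ indeed generates the KdV flow; symbolically (Python with SymPy):

```python
# Check that u_t + 6 d/dx h_0 = 0 with h_0 = u^2/2 + (eps^2/6) u_xx
# is exactly the KdV equation u_t + 6 u u_x + eps^2 u_xxx = 0.
import sympy as sp

x, t, eps = sp.symbols('x t epsilon')
u = sp.Function('u')(x, t)

h0 = u**2/2 + eps**2/6*sp.diff(u, x, 2)
flow = sp.diff(u, t) + 6*sp.diff(h0, x)
kdv = sp.diff(u, t) + 6*u*sp.diff(u, x) + eps**2*sp.diff(u, x, 3)

print(sp.simplify(flow - kdv))   # 0
```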
Setting $a_0=0,$ $a_1=-f_-'''(u_c)/6=k$ and $a_{k\geq 2}=0$, and making the shift $t\rightarrow t-t_c$, $u\rightarrow u-u_c$ and the Galilean transformation $x\rightarrow x-x_c-6(t-t_c)u_c$, we arrive at the fourth order equation of Painlev\'e type \begin{equation} \label{painleve} x-x_c-6(t-t_c)u_c = 6(t-t_c) (u-u_c) -k\left[(u-u_c)^3 +\epsilon^2 (\dfrac{ u_x^2}{2} +(u-u_c) u_{xx})+\dfrac{ \epsilon^4 }{10} u_{xxxx}\right] \end{equation} whose solution is an exact solution of the KdV equation and can be considered as a perturbation of the Hopf solution (\ref{hopfsolc}) near the point of gradient catastrophe $(x_c,t_c,u_c)$. The solution $u(x,t,\epsilon)$ of (\ref{painleve}) is related to the solution $U(X,T)$ of (\ref{PI2}) by the rescalings \begin{equation} \label{KdVrescaled} u(x,t,\epsilon)=u_c+\left(\dfrac{\epsilon}{ k}\right)^{2/7} U(X,T) \end{equation} where \begin{equation} \label{scalings} X =\dfrac{ x - x_c - 6 u_c (t-t_c)}{\epsilon^{\frac{6}{7}}k^{\frac17}},\quad T=\dfrac{t-t_c}{\epsilon^{\frac{4}{7}}k^{\frac37}}. \end{equation} According to the conjecture in \cite{dubcr}, the solution (\ref{KdVrescaled}) is an approximation modulo terms $O(\epsilon^{\frac{4}{7}})$ to the solution of the Cauchy problem (\ref{KdV}) for $(x,t,u)$ near the point of gradient catastrophe $(x_c,t_c,u_c)$ of the hyperbolic equation (\ref{Hopf}). \section{Numerical comparison} In this section we will present a comparison of numerical solutions to the KdV equation and asymptotic solutions arising from solutions to the Hopf and the Whitham equations as well as the Painlev\'e I2 equation as given above. Since we control the accuracy of the numerical solutions used, see I, \cite{numart1d} and the appendix, we ensure that the presented differences are entirely due to the analytical description and not due to numerical artifacts. We study the $\epsilon$-dependence of these differences by linear regression analysis. This will be done for nine values of $\epsilon$ between $10^{-1}$ and $10^{-3}$.
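The regression analysis used throughout is an ordinary least-squares fit of $\log_{10}\Delta$ against $\log_{10}\epsilon$; a minimal sketch of such a fit (Python with SciPy; the synthetic data with a known exponent are ours, for illustration only):

```python
# Least-squares fit of a scaling law Delta ~ C eps^a in log-log variables,
# returning the exponent a, the correlation coefficient r and the standard
# error sigma_a (here on synthetic data with exponent a = 2).
import numpy as np
from scipy.stats import linregress

eps = np.logspace(-1, -3, 9)          # nine values between 1e-1 and 1e-3
delta = 0.7 * eps**2                  # synthetic error data

fit = linregress(np.log10(eps), np.log10(delta))
a, r, sigma_a = fit.slope, fit.rvalue, fit.stderr
print(a, r, sigma_a)                  # recovers a = 2 with r = 1
```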
Obviously the numerical results are only valid for this range of parameters, but it is interesting to note the high statistical correlation of the scalings we observe. We consider the initial data \[ u_0(x)=-1/\cosh^2x. \] For this initial data \begin{equation} x_c=-\dfrac{\sqrt{3}}{2}+\log((\sqrt{3}-1)/\sqrt{2}), \;\;t_c=\dfrac{\sqrt{3}}{8},\;\; \;\;u_c=-2/3. \label{tcrit} \end{equation} \subsection{Hopf solution} We will first check whether the rescalings of the coordinates given in (\ref{KdVrescaled}) are consistent with the numerical results. It is known that the Hopf solution provides for times $t\ll t_{c}$ an asymptotic description of the KdV solution up to an error of the order $\epsilon^{2}$. This means that the $L_{\infty}$-norm of the difference between the two solutions decreases as $\epsilon^{2}$ for $\epsilon\rightarrow 0$. For $t=0.1$ we actually observe this dependence. More precisely, this difference $\Delta_{\infty}$ can be fitted with a straight line by a standard linear regression analysis, $-\log_{10}\Delta_{\infty}=-a\log_{10}\epsilon+b$, with $a=1.9979$, a correlation coefficient of $r=0.99999$, and standard error $\sigma_{a}=4.1\times10^{-3}$. Near the critical time $t_{c}$ this picture is known to change considerably. Dubrovin's conjecture \cite{dubcr} presented above suggests that the difference between Hopf and KdV solution near the critical point should scale roughly as $\epsilon^{2/7}$. In the following we will always compare solutions in the intervals \begin{equation} \label{interval} [x_{c}+6u_{c}(t-t_{c})-\alpha \epsilon^{6/7},x_{c}+6u_{c}(t-t_{c})+\alpha \epsilon^{6/7}] \end{equation} where $\alpha$ is an $\epsilon$-independent constant (typically we take $\alpha=3$). Numerically we find at the critical time that the $L_{\infty}$-norm of the difference between Hopf and KdV solution scales like $\epsilon^a$ where $a=0.2869$ ($2/7=0.2857\ldots$) with correlation coefficient $r=0.9995$ and standard error $\sigma_{a}=6.9\times10^{-3}$.
Thus we confirm the expected scaling behavior within numerical accuracy. We also test this difference for times close to $t_{c}$. The relations (\ref{KdVrescaled}) suggest, however, a rescaling of the time, i.e., to compare solutions for different values of $\epsilon$ at the same value of $T$. We compute the respective solutions for KdV times $t_{\pm}(\epsilon)=t_{c}\pm 0.1\epsilon^{4/7}$. Before breakup at $t_{-}$ we obtain $a=0.31$ with $r=0.999$ and $\sigma_{a}=9.8\times10^{-3}$, i.e., as expected a slightly larger value than $2/7$. After breakup at $t_{+}$ we find $a=0.26$ with $r=0.9995$ and $\sigma_{a}=6.6\times10^{-3}$. We remark that after the breakup time, the asymptotic solution is obtained by gluing the Hopf solution and the theta-functional solution (\ref{elliptic}). These results indicate that the scalings in (\ref{KdVrescaled}) are indeed observed by the KdV solution. We show the corresponding situation for $t_{-}$ for two values of $\epsilon$ in Fig.~\ref{figscalings}. \begin{figure}[!htb] \centering \epsfig{figure=scalings.eps, width=\textwidth} \caption{KdV solution (blue) and Hopf solution (green) at the times $t_{-}(\epsilon)$ in a rescaled interval for two values of $\epsilon$.} \label{figscalings} \end{figure} \subsection{Multiscale solution} In Fig.~\ref{figpain4breakg} we show the numerical solution of the KdV equation for the initial data $u_{0}$ and the corresponding PI2 solution (\ref{KdVrescaled}) for $\epsilon=10^{-2}$ close to breakup. It can be seen that the PI2 solution (\ref{KdVrescaled}) gives a correct description of the KdV solution close to the breakup point. For larger values of $|x-x_{c}|$ the multiscale solution is not a good approximation of the KdV solution.
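That the exponents in (\ref{scalings}) are forced by (\ref{painleve}) can be checked by simple power counting: under (\ref{KdVrescaled}), every term of (\ref{painleve}) acquires the common prefactor $\epsilon^{6/7}k^{1/7}$, and dividing it out leaves the parameter-free equation (\ref{PI2}). A sketch of this bookkeeping (Python with SymPy):

```python
# Power counting for the substitution (KdVrescaled)/(scalings): every term
# of (painleve) must carry the same prefactor eps^(6/7) k^(1/7), so that
# dividing it out leaves the parameter-free PI2 equation.
import sympy as sp

eps, k = sp.symbols('epsilon k', positive=True)
amp = (eps/k)**sp.Rational(2, 7)                       # u - u_c = amp * U
dX = eps**sp.Rational(-6, 7) * k**sp.Rational(-1, 7)   # d/dx = dX * d/dX
dT = eps**sp.Rational(4, 7) * k**sp.Rational(3, 7)     # t - t_c = dT * T

prefactors = [
    eps**sp.Rational(6, 7) * k**sp.Rational(1, 7),     # x - x_c - 6u_c(t-t_c)
    dT * amp,                                          # 6 (t-t_c)(u-u_c)
    k * amp**3,                                        # k (u-u_c)^3
    k * eps**2 * (amp*dX)**2,                          # k eps^2 u_x^2 / 2
    k * eps**2 * amp * amp*dX**2,                      # k eps^2 (u-u_c) u_xx
    k * eps**4 * amp * dX**4,                          # k eps^4 u_xxxx / 10
]
print([sp.simplify(p / prefactors[0]) for p in prefactors])   # all ones
```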
\begin{figure}[!htb] \centering \epsfig{figure=pain4breakg.eps, width=\textwidth} \caption{The blue line is the solution of the KdV equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-2}$, and the green line is the corresponding multiscale solution given by formula (\ref{KdVrescaled}). The plots are given for different times near the point of gradient catastrophe $(x_c,t_c)$ of the Hopf solution. Here $x_c\simeq -1.524 $, $t_c\simeq 0.216$.} \label{figpain4breakg} \end{figure} A similar situation is shown in Fig.~\ref{figpain4break1e6} for the case $\epsilon=10^{-3}$. Obviously the approximation is better for smaller $\epsilon$. Notice that the asymptotic description is always better near the leading edge than near the trailing edge. \begin{figure}[!htb] \centering \epsfig{figure=pain4break1e6.eps, width=\textwidth} \caption{The blue line is the solution of the KdV equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-3}$, and the green line is the corresponding multiscale solution given by formula (\ref{KdVrescaled}). The plots are given for different times near the point of gradient catastrophe $(x_c,t_c)$ of the Hopf solution.} \label{figpain4break1e6} \end{figure} \begin{figure}[!htb] \centering \epsfig{figure=pain4delta.eps, width=1.1\textwidth} \caption{The blue line is the difference between the solution of the KdV equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-2}$ and the multiscale solution, and the green line is the difference between the asymptotic solution and the KdV solution. The plots are given for different times near the point of gradient catastrophe $(x_c,t_c)$ of the Hopf solution. } \label{figpain4delta} \end{figure} In Fig.~\ref{figpain4delta} we plot in green the difference between the PI2 multiscale solution and the KdV solution and in blue the difference between the KdV solution and the asymptotic solutions (\ref{Hopfsol}) and (\ref{elliptic}). 
It is thus possible to identify a zone around $x_{c}$ in which the multiscale solution gives a better asymptotic description. The limiting values of this zone rescaled by $x_{c}$ are shown in Fig.~\ref{figwidthp12c} for the critical time. It can be seen that the zone always extends much further to the left (the direction of propagation) than to the right. \begin{figure}[!htb] \centering \epsfig{figure=widthp12c.eps, width=0.7\textwidth} \caption{Limiting values of the zone where the multiscale solution provides a better asymptotic description of the KdV solution than the Hopf solution for $t=t_{c}$. The $x$ values are rescaled with $x_{c}$.} \label{figwidthp12c} \end{figure} The width of this zone scales roughly as $\epsilon^{3/7}$; more precisely, we find $\epsilon^a$ with $a=0.468$, $r=0.981$ and $\sigma_{a}=0.073$. The numerically observed exponent is thus smaller than the exponent $6/7$ of the spatial rescaling in (\ref{scalings}). The matching of the multiscale and the Hopf solution can be seen in Fig.~\ref{figdelta2ec}. \begin{figure}[!htb] \centering \epsfig{figure=delta2ec.eps, width=0.7\textwidth} \caption{Difference of the KdV and the multiscale solution (blue) and the KdV and the Hopf solution (green) for the initial data $u_0(x)=-1/\cosh^2x$ at $t=t_{c}$ for two values of $\epsilon$.} \label{figdelta2ec} \end{figure} For larger times, the asymptotic solution (\ref{Hopfsol}) and (\ref{elliptic}) gives, as expected, a better description of the KdV solution, see Fig.~\ref{figpain4delta1e6.226} for $\epsilon=10^{-3}$ and $t=0.226$. Close to the leading edge, the oscillations are, however, better approximated by the multiscale solution.
\begin{figure}[!htb] \centering \epsfig{figure=pain4delta1e6.226.eps, width=1.1\textwidth} \caption{The blue line is the difference between the solution of the KdV equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-3}$ and the multiscale solution, and the green line is the difference between the asymptotic solution and the KdV solution. The plots are given for $t=0.226$.} \label{figpain4delta1e6.226} \end{figure} To study the scaling of the difference between the KdV and the multiscale solution, we compute the $L_{\infty}$ norm of the difference between the solutions in the rescaled $x$-interval (\ref{interval}) with $\alpha=3$. We find that this error scales at the critical time roughly like $\epsilon^{5/7}$. More precisely, we find a scaling $\epsilon^a$ where $a=0.708$ ($5/7=0.7143\ldots$) with correlation coefficient $r=0.9998$ and standard error $\sigma_{a}=0.012$. Before breakup at the times $t_{-}(\epsilon)$ we obtain $a=0.748$ with $r=0.9996$ and $\sigma_{a}=0.016$; after breakup at the times $t_{+}(\epsilon)$ we get $a=0.712$ with $r=0.9999$ and $\sigma_{a}=6.2\times10^{-3}$. Notice that the values for the scaling parameters are roughly independent of the precise value of the constant $\alpha$ which defines the length of the interval (\ref{interval}). For instance for $\alpha=2$, we find within the observed accuracy the same value. In \cite{CV} Claeys and Vanlessen showed that the corrections to the multiscale solution appear at order $\epsilon^{3/7}$. For the values of $\epsilon$ we could study for our KdV example, the corrections are apparently of order $\epsilon^{5/7}$. \section{Outlook} The Camassa-Holm equation \cite{CH} (see also \cite{Fo}) \begin{equation} \label{CH} u_t+6uu_x-\epsilon^2(u_{xxt}+4u_xu_{xx}+2uu_{xxx})=0 \end{equation} admits a bi-Hamiltonian description after the following Miura-type transformation \begin{equation}\label{cam-holm1} m=u-\epsilon^2 u_{xx}.
\end{equation} One of the Hamiltonian structures takes the form \begin{equation} \label{cam-holm-pb1} \{ m(x), m(y)\} =\delta'(x-y) -\epsilon^2 \delta'''(x-y) \end{equation} so that the Camassa-Holm flow can be written in the form \begin{equation} \label{hamCH} m_{t}=\{m(x), H\},\quad H= \int (u^3+uu_x^2)dx. \end{equation} To compare the Hamiltonian flow in (\ref{riem2}) with the one given in (\ref{hamCH}) one must first reduce the Poisson bracket to the standard form $\{ \tilde u(x), \tilde u( y)\}_1=\delta'(x-y)$ by the transformation \[ \tilde u =\left( 1-\epsilon^2 \partial_x^2\right)^{-1/2} m=m+\frac12 \epsilon^2 m_{xx} +\frac38 \epsilon^4 m_{xxxx}+\dots. \] After this transformation, the Camassa-Holm equation takes, up to terms of order $\epsilon^{4}$, the form \[ \tilde u_t +6\tilde u\, \tilde u_x +\epsilon^2 ( 8\tilde u_{x} \tilde u_{xx} + 4\tilde u\, \tilde u_{xxx}) +\epsilon^4( 20\, \tilde u_{xx} \tilde u_{xxx} + 12\, \tilde u_x \tilde u_{xxxx} +4\tilde u\, \tilde u_{xxxxx})+\dots=0, \] which is equivalent to (\ref{riem2}) after the substitution \[ c=48\tilde{u}, \quad p =2\tilde{u}. \] At the critical point $(x_c,t_c,u_c)$ the Camassa-Holm solution behaves according to the conjecture in \cite{dubcr} as \[ u(x,t,\epsilon) = u_c -\left(\dfrac{\epsilon^2|c_0|}{ k^2}\right)^{1/7} U(X,T)+O(\epsilon^{\frac{4}{7}}),\quad c_0=4u_c \] where \[ X =-\dfrac{1}{\epsilon}\left(\dfrac{\epsilon}{k|c_0^3|}\right)^{1/7} ( x - x_c - 6 u_c (t-t_c)),\quad T=\left(\dfrac{1}{\epsilon^4k^3c_0^2}\right)^{1/7}(t-t_c). \] In Fig.~\ref{figpainch9} we show the numerical solution to the CH equation for the initial data $u_{0}=-\mbox{sech}^{2}(x)$ and $\epsilon=10^{-2}$ at several values of time near the point of gradient catastrophe of the Hopf equation. It is interesting to compare this to the corresponding situation for the KdV equation in Fig.~\ref{figpain4breakg}.
It can be seen that there are no oscillations of the CH solution on the left side (the direction of propagation) of the critical point, whereas in the KdV case all oscillations are on this side. The quality of the approximation of the CH and the KdV solution by the multiscale solution is also different. In the KdV case, the solution is well described by the multiscale solution on the leading part, which includes the oscillations, whereas the approximation is less satisfactory on the trailing side. A similar behavior is observed in the CH case, but since the oscillations are now on the trailing side, they are not as well approximated as in the KdV case. The leading part of the solution near the critical point is, however, described in a better way. \begin{figure}[!htb] \centering \epsfig{figure=pain4ch9_1e4.eps, width=0.7\textwidth} \caption{The blue line is the solution of the CH equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-2}$, and the green line is the corresponding multiscale solution. The plots are given for different times near the point of gradient catastrophe $(x_c,t_c)$ of the Hopf solution. Here $x_c\simeq -1.524 $, $t_c\simeq 0.216$.} \label{figpainch9} \end{figure} The same qualitative behavior can also be observed for smaller $\epsilon$ in Fig.~\ref{figpainch4}, though the quality of the approximation increases as expected on the respective scales. Note that we plotted in Fig.~\ref{figpainch9} and Fig.~\ref{figpainch4} the CH solution instead of the function $\tilde{u}$, since there are no visible differences between the two for the values of $\epsilon$ used. \begin{figure}[!htb] \centering \epsfig{figure=pain4ch4_1e6.eps, width=0.7\textwidth} \caption{The blue line is the solution of the CH equation for the initial data $u_0(x)=-1/\cosh^2x$ and $\epsilon=10^{-3}$, and the green line is the corresponding multiscale solution.
The plots are given for different times near the point of gradient catastrophe $(x_c,t_c)$ of the Hopf solution.} \label{figpainch4} \end{figure}
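The operator expansion $\left(1-\epsilon^{2}\partial_{x}^{2}\right)^{-1/2}m=m+\frac12\epsilon^{2}m_{xx}+\frac38\epsilon^{4}m_{xxxx}+\dots$ used in the reduction above follows from the Taylor series of $(1-z)^{-1/2}$ applied to $z=\epsilon^{2}\partial_{x}^{2}$; a quick symbolic check of the coefficients (Python with SymPy):

```python
# The pseudodifferential operator (1 - eps^2 d_x^2)^(-1/2) acts term by term
# through the Taylor series of (1 - z)^(-1/2): coefficients 1, 1/2, 3/8, ...
import sympy as sp

z = sp.symbols('z')
series = sp.series((1 - z)**sp.Rational(-1, 2), z, 0, 3).removeO()
print([series.coeff(z, i) for i in range(3)])   # [1, 1/2, 3/8]
```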
\section{Introduction} Quantum entanglement is one of the most interesting and debated properties of quantum mechanics. It has become an essential resource for the field of quantum communication that has emerged in recent years, with potential applications such as quantum cryptography \cite{Bennett84,Ekert91} and quantum teleportation \cite{Bennett93}. The idea of quantum entanglement goes back to the early days of quantum theory, where it was initiated by Schr\"{o}dinger, Einstein, Podolsky and Rosen \cite{Sch35,EPR35} and was later extended by Bell \cite{Bell64} in the form of Bell inequalities. Quantification of multipartite state entanglement \cite{Lewen00,Dur99} is difficult and is a task that is directly linked to linear algebra, geometry and functional analysis. The definition of separability and entanglement of a multipartite state was introduced in \cite{Vedral97} following the definition for bipartite states, given in 1989 by Werner \cite{Werner89}. One widely used measure of entanglement for a pair of qubits is entanglement of formation \cite{Bennett96}. A closely related measure is concurrence, which gives an analytic formula for the entanglement of formation \cite{Wootters98}. In recent years, there have been several proposals to generalize this measure to general bipartite states, e.g., Uhlmann \cite{Uhlmann00} generalized the concept of concurrence by considering arbitrary conjugation; then Audenaert, Verstraete, and De Moor \cite{Audenaert} generalized this formula in the spirit of Uhlmann's work, by defining a concurrence vector for pure states. Another generalization of concurrence has been given by Rungta \emph{et al.} \cite{Rungta01} based on the idea of a superoperator called universal state inversion. Finally, Gerjuoy, Albeverio and Fei, Akhtarshenas, and Bhaktavatsala and Ravishankar \cite{Gerjuoy,Albeverio,Akhtarshenas,Bhaktavatsala} gave explicit expressions in terms of the amplitude coefficients of a pure bipartite state in any dimension.
In this paper, we put the concurrence in another perspective, namely we establish a relation between the Schwarz inequality and concurrence for bipartite states, and then extend this connection to multipartite states. We show that the generalized concurrence \cite{Albeverio} and the entanglement tensor \cite{Hosh4} for a three-partite state can be derived using the Schwarz inequality. Generalization of this relation to a multipartite state with more than three subsystems can be attempted in the same way as for the three-partite state, but it only gives information about the set of separable states and cannot completely quantify a general pure multipartite state. \section{Entanglement} In this section we will establish the notation for separable states and entangled states. Let us denote a general, pure, composite quantum system with $m$ subsystems $\mathcal{Q}=\mathcal{Q}^{p}_{m}(N_{1},N_{2},\ldots,N_{m}) =\mathcal{Q}_{1}\mathcal{Q}_{2}\cdots\mathcal{Q}_{m}$, consisting of a state \begin{equation}\label{Mstate} \ket{\Psi}=\sum^{N_{1}}_{i_{1}=1}\sum^{N_{2}}_{i_{2}=1}\cdots\sum^{N_{m}}_{i_{m}=1} \alpha_{i_{1},i_{2},\ldots,i_{m}} \ket{i_{1},i_{2},\ldots,i_{m}} \end{equation} defined on a Hilbert space \begin{eqnarray} \mathcal{H}_{\mathcal{Q}}&=&\mathcal{H}_{\mathcal{Q}_{1}}\otimes \mathcal{H}_{\mathcal{Q}_{2}}\otimes\cdots\otimes\mathcal{H}_{\mathcal{Q}_{m}}\\\nonumber &=&\mathbf{C}^{N_{1}}\otimes\mathbf{C}^{N_{2}}\otimes\cdots\otimes\mathbf{C}^{N_{m}}, \end{eqnarray} where the dimension of the $j$th Hilbert space is given by $N_{j}=\dim(\mathcal{H}_{\mathcal{Q}_{j}})$. We are going to use this notation throughout this paper, i.e., we denote a pure pair of qubits by $\mathcal{Q}^{p}_{2}(2,2)$. Next, let $\rho_{\mathcal{Q}}$ denote a density operator acting on $\mathcal{H}_{\mathcal{Q}}$.
The density operator $\rho_{\mathcal{Q}}$ is said to be fully separable, which we will denote by $\rho^{sep}_{\mathcal{Q}}$, with respect to the Hilbert space decomposition, if it can be written as \begin{equation}\label{eq:sep} \rho^{sep}_{\mathcal{Q}}=\sum^\mathrm{K}_{k=1}p_k \bigotimes^m_{j=1}\rho^k_{\mathcal{Q}_{j}},~\sum^{\mathrm{K}}_{k=1}p_{k}=1 \end{equation} for some positive integer $\mathrm{K}$, where $p_{k}$ are positive real numbers and $\rho^k_{\mathcal{Q}_{j}}$ denotes a density operator on the Hilbert space $\mathcal{H}_{\mathcal{Q}_{j}}$. If $\rho^{p}_{\mathcal{Q}}$ represents a pure, fully separable state, then $\mathrm{K}=1$. If a state is not fully separable, then it is called an entangled state. A completely nonseparable quantum system is one that in any basis must be written \begin{equation}\label{eq:completely} \rho^{nonsep}_{\mathcal{Q}}=\sum^\mathrm{K}_{k=1}p_k \rho^k_{\mathcal{Q}},~\sum^{\mathrm{K}}_{k=1}p_{k}=1, \end{equation} where $\mathcal{Q}=\mathcal{Q}^p_1(N_1 N_2 \cdots N_m)$. The simplest such completely nonseparable, generic states (that, moreover, are maximally entangled) are Bell states and $\mathrm{W}$-states. \section{The Schwarz inequality and concurrence} \label{Analytical} In this section, we will investigate the relation between concurrence, the Schwarz inequality, and the determinant minors of bipartite states, which are directly related to the geometry of the Hilbert space and the Segre variety \cite{Hosh5}. Let us begin by reviewing the Schwarz inequality on an inner product space such as a Hilbert space, and then use this inequality to relate it to the geometry of concurrence. Let $X_{1}=(\xi_{1},\xi_{2})$ and $X_{2}=(\eta_{1},\eta_{2})$ be two vectors defined on the complex Hilbert space $\mathcal{H}=\mathbf{C}^{2}$. Then $X_{1}$ and $X_{2}$ are parallel if and only if $\left|\xi_{1}\eta_{2}-\eta_{1}\xi_{2}\right|=0$.
We will prove this using the Schwarz inequality $\langle X_{1}\ket{X_{2}}\langle X_{2}\ket{X_{1}}\leq\|X_{1}\|^{2}\cdot\|X_{2}\|^{2}$ as follows: \begin{eqnarray} \langle X_{1}\ket{X_{2}}\langle X_{2}\ket{X_{1}}&=&(\xi_{1}\bar{\eta}_{1}+\xi_{2}\bar{\eta}_{2}) (\bar{\xi}_{1}\eta_{1}+\bar{\xi}_{2}\eta_{2})\\\nonumber &=& |\xi_{1}|^{2} |\eta_{1}|^{2} +\xi_{1}\bar{\eta}_{1}\bar{\xi}_{2}\eta_{2} +\xi_{2}\bar{\eta}_{2}\bar{\xi}_{1}\eta_{1}+|\xi_{2}|^{2} |\eta_{2}|^{2}, \end{eqnarray} where $\bar{\xi}$ denotes the complex conjugate of $\xi$. The product of the norms of these vectors is given by \begin{equation} \|X_{1}\|^{2}\cdot\|X_{2}\|^{2} = |\xi_{1}|^{2} |\eta_{1}|^{2}+|\xi_{1}|^{2} |\eta_{2}|^{2}+ |\xi_{2}|^{2} |\eta_{1}|^{2}+|\xi_{2}|^{2} |\eta_{2}|^{2}. \end{equation} If $X_{1}$ and $X_{2}$ are parallel, then we have $ \langle X_{1}\ket{X_{2}}\langle X_{2}\ket{X_{1}}=\|X_{1}\|^{2}\cdot\|X_{2}\|^{2} $, which implies that \begin{eqnarray} \xi_{1}\bar{\eta}_{1}\bar{\xi}_{2}\eta_{2} +\xi_{2}\bar{\eta}_{2}\bar{\xi}_{1}\eta_{1} &=& |\xi_{1}|^{2} |\eta_{2}|^{2}+ |\xi_{2}|^{2} |\eta_{1}|^{2} \Longrightarrow \left| \xi_{1}\eta_{2}-\eta_{1}\xi_{2} \right|^{2} = 0. \end{eqnarray} That is, $X_{1}$ and $X_{2}$ are parallel if, and only if, \begin{equation} \det\left(% \begin{array}{cc} \xi_{1} &\xi_{2} \\ \eta_{1} & \eta_{2}\\ \end{array}% \right)=0 , \end{equation} where $\det$ denotes the determinant. We note that the area of a parallelogram spanned by two vectors is equal to the value of their 2-by-2 determinant. Now we set out to generalize this simple result to a larger bipartite product space. Let $X_{1}=(\xi_{1},\xi_{2},\ldots,\xi_{N_{2}})$ and $X_{2}=(\eta_{1},\eta_{2},\ldots,\eta_{N_{2}})$ be two vectors defined on the Hilbert space $\mathcal{H}=\mathbf{C}^{N_{2}}$.
Again by using the Schwarz inequality, we get \begin{eqnarray} \langle X_{1}\ket{X_{2}}\langle X_{2}\ket{X_{1}}&=&\nonumber(\xi_{1}\bar{\eta}_{1}+\xi_{2}\bar{\eta}_{2}+\ldots+\xi_{N_{2}}\bar{\eta}_{N_{2}}) (\bar{\xi}_{1}\eta_{1}+\bar{\xi}_{2}\eta_{2}+\ldots+\bar{\xi}_{N_{2}}\eta_{N_{2}})\\\nonumber &=& |\xi_{1}|^{2} |\eta_{1}|^{2} +\xi_{1}\bar{\eta}_{1}\bar{\xi}_{2}\eta_{2} +\xi_{1}\bar{\eta}_{1}\bar{\xi}_{3}\eta_{3} +\xi_{2}\bar{\eta}_{2}\bar{\xi}_{1}\eta_{1} +|\xi_{2}|^{2} |\eta_{2}|^{2}\\\nonumber && +\ldots+ \xi_{N_{2}-1}\bar{\eta}_{N_{2}-1}\bar{\xi}_{N_{2}}\eta_{N_{2}} +\xi_{N_{2}}\bar{\eta}_{N_{2}}\bar{\xi}_{N_{2}-1}\eta_{N_{2}-1} \\ &&+|\xi_{N_{2}}|^{2}|\eta_{N_{2}}|^{2}, \end{eqnarray} and, in the same way as we have done above, we calculate the product of the norms of these vectors as follows: \begin{eqnarray} \|X_{1}\|^{2}\cdot\|X_{2}\|^{2}&=&\nonumber |\xi_{1}|^{2}(|\eta_{1}|^{2}+|\eta_{2}|^{2}+\ldots+|\eta_{N_{2}}|^{2})\\\nonumber &&+ |\xi_{2}|^{2}(|\eta_{1}|^{2}+|\eta_{2}|^{2}+\ldots+|\eta_{N_{2}}|^{2}) \\\nonumber &&+ \ldots+|\xi_{N_{2}}|^{2}(|\eta_{1}|^{2}+|\eta_{2}|^{2}+\ldots+|\eta_{N_{2}}|^{2}) \\\nonumber &=& |\xi_{1}|^{2}|\eta_{1}|^{2}+|\xi_{1}|^{2}|\eta_{2}|^{2}+\ldots+|\xi_{1}|^{2}|\eta_{N_{2}}|^{2} \\\nonumber &&+ |\xi_{2}|^{2}|\eta_{1}|^{2}+|\xi_{2}|^{2}|\eta_{2}|^{2}+\ldots+|\xi_{2}|^{2}|\eta_{N_{2}}|^{2} \\&& + \ldots+|\xi_{N_{2}}|^{2}|\eta_{1}|^{2}+|\xi_{N_{2}}|^{2}|\eta_{2}|^{2}+\ldots+|\xi_{N_{2}}|^{2}|\eta_{N_{2}}|^{2}.
\end{eqnarray} Again, if $X_{1}$ and $X_{2}$ are parallel, then we have $\langle X_{1}\ket{X_{2}}\langle X_{2}\ket{X_{1}}=\|X_{1}\|^{2}\cdot\|X_{2}\|^{2}$ which, after some simplification, can be rewritten as follows: \begin{eqnarray} &&\nonumber |\xi_{1}|^{2}|\eta_{2}|^{2}-\xi_{1}\eta_{2}\bar{\eta}_{1}\bar{\xi}_{2} -\xi_{2}\eta_{1}\bar{\eta}_{2}\bar{\xi}_{1}+|\xi_{2}|^{2}|\eta_{1}|^{2} +\ldots+|\xi_{N_{2}-1}|^{2}|\eta_{N_{2}}|^{2}\\\nonumber&&-\xi_{N_{2}-1}\eta_{N_{2}}\bar{\eta}_{N_{2}-1}\bar{\xi}_{N_{2}} -\xi_{N_{2}}\eta_{N_{2}-1}\bar{\eta}_{N_{2}}\bar{\xi}_{N_{2}-1}+|\xi_{N_{2}}|^{2}|\eta_{N_{2}-1}|^{2}\\\nonumber &&=\left|\xi_{1}\eta_{2}-\xi_{2}\eta_{1}\right|^{2} +\ldots+\left|\xi_{N_{2}-1}\eta_{N_{2}} -\xi_{N_{2}}\eta_{N_{2}-1}\right|^{2}=0. \end{eqnarray} That is, $X_{1}$ and $X_{2}$ are parallel if, and only if, \begin{equation} \left|\xi_{1}\eta_{2}-\eta_{1}\xi_{2}\right| =\left|\xi_{1}\eta_{3}-\eta_{1}\xi_{3}\right| =\cdots=\left|\xi_{N_{2}-1}\eta_{N_{2}}-\eta_{N_{2}-1}\xi_{N_{2}}\right|=0. \end{equation} This result implies that $$ \det\left(% \begin{array}{cc} \xi_{1} &\xi_{2} \\ \eta_{1} & \eta_{2}\\ \end{array}% \right) =\cdots =\det\left(% \begin{array}{cc} \xi_{N_{2}-1} &\xi_{N_{2}} \\ \eta_{N_{2}-1} & \eta_{N_{2}}\\ \end{array}% \right)=0 $$ if the vectors are parallel. To establish a relation between the Schwarz inequality and the concurrence, let $\mathcal{Q}^{p}_{2}(N_{1},N_{2})$ denote a pure, bipartite quantum system.
Then, the concurrence can be written as \cite{Albeverio} \begin{equation} \mathcal{C}(\mathcal{Q}^{p}_{2}(N_{1},N_{2})) = \left(\mathcal{N}\sum_{l_1>k_1}^{N_1}\sum_{k_1=1}^{N_1}\sum_{ l_2>k_2}^{N_2}\sum_{k_2=1}^{N_2}|\mathrm{T} \left ( \begin{array}{cc} k_1 & l_1 \\ k_2 & l_2 \end{array} \right )|^{2}\right)^{\frac{1}{2}} \label{eq:concurrence} \end{equation} where $\mathrm{T} \left ( \begin{array}{cc} k_1 & l_1 \\ k_2 & l_2 \end{array} \right ) = \det\left ( \begin{array}{cc} \alpha_{k_1, k_2} & \alpha_{k_1, l_2}\\ \alpha_{l_1, k_2} & \alpha_{l_1, l_2} \end{array} \right ) $ is a second order minor of the $2 \times N_{2}$ matrix \begin{equation}\label{eq:pq} \left(% \begin{array}{ccccc} \alpha_{k_1,1} &\alpha_{k_1,2} & \cdots & \alpha_{k_1,N_{2}-1} &\alpha_{k_1,N_{2}}\\ \alpha_{l_1,1} &\alpha_{l_1,2} & \cdots & \alpha_{l_1,N_{2}-1} &\alpha_{l_1,N_{2}}\\ \end{array}% \right), \end{equation} and where $\mathcal{N}$ is a normalization constant. We recognize the expression (\ref{eq:concurrence}) as the sum of the squared areas of all the parallelograms computed above. Hence, the concurrence is zero only if the Schwarz inequality is satisfied with equality for all pairs of vectors $X_{k_1}=(\alpha_{k_1,1},\alpha_{k_1,2},\ldots,\alpha_{k_1,N_{2}})$ and $X_{l_1}=(\alpha_{l_1,1},\alpha_{l_1,2},\ldots,\alpha_{l_1,N_{2}})$. This implies that all the vectors $X_{k_1}$ and $X_{l_1}$ are parallel. If so, the state is obviously separable because this means that the state can be written as \begin{equation} \ket{\Psi}=(\alpha_{1,1}\ket{1_1} + \ldots + \alpha_{N_1,1}\ket{{N_1}_1}) \otimes (\ket{1_2}+ \alpha_{1,2}/\alpha_{1,1} \ket{2_2} + \ldots + \alpha_{1,N_{2}}/\alpha_{1,1} \ket{{N_2}_2}). \end{equation} We also see that the concurrence for a bipartite, pure state, loosely speaking, has the geometrical interpretation of the summed pairwise deviation from parallelism of all the vectors $X_{k_1}$ and $X_{l_1}$.
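To make the parallelism criterion concrete, the following minimal Python sketch (our own illustration, not part of the original derivation; the normalization constant $\mathcal{N}$ is set to 1) evaluates the sum of squared second-order minors for a product state and for a Bell-like state:

```python
import itertools
import math

def concurrence_sq(A):
    """Unnormalized squared concurrence of a pure bipartite state whose
    coefficients alpha_{k1,k2} fill the N1 x N2 matrix A: the sum of the
    squared moduli of all 2x2 minors (normalization constant set to 1)."""
    N1, N2 = len(A), len(A[0])
    total = 0.0
    for k1, l1 in itertools.combinations(range(N1), 2):
        for k2, l2 in itertools.combinations(range(N2), 2):
            minor = A[k1][k2] * A[l1][l2] - A[k1][l2] * A[l1][k2]
            total += abs(minor) ** 2
    return total

# Product state: the rows X_{k1} are parallel, so every minor vanishes.
u = [1 / math.sqrt(2), 1j / math.sqrt(2)]   # first-subsystem amplitudes
v = [0.6, 0.8]                              # second-subsystem amplitudes
product_state = [[ui * vj for vj in v] for ui in u]

# Bell-like maximally entangled state: the rows are not parallel.
bell_state = [[1 / math.sqrt(2), 0], [0, 1 / math.sqrt(2)]]
```

A vanishing sum of squared minors signals parallel rows, hence a separable state; the Bell-like state instead gives a strictly positive value.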
\section{The Schwarz inequality and concurrence of a general pure three-partite state} Let us now see what happens if we consider the simplest example of a three-partite system. The simplest tripartite system, consisting of three qubits, is denoted $\mathcal{Q}^{p}_{3}(2,2,2)$. The concurrence of this state is then given by \cite{Albeverio} \begin{equation} \mathcal{C}(\mathcal{Q}^{p}_{3}(2,2,2)) = \left( \mathcal{N} \sum^{3}_{j=1}\sum_{ l>k}^{2,2}\sum_{k=1,1}^{2,2}\sum_{l_j>k_j}^2 \sum_{k_j=1}^{2} |\mathrm{T} \left ( \begin{array}{cc} k_j & l_j \\ k \neq k_j & l \neq l_j \end{array} \right )|^{2}\right)^{\frac{1}{2}}, \end{equation} where $\mathrm{T}\left ( \begin{array}{cc} k_1 & l_1 \\ k \neq k_1 & l \neq l_1 \end{array} \right )$ is a minor of the $2 \times 4$ matrix \begin{equation} \left(% \begin{array}{cccc} \alpha_{k_1,1,1} & \alpha_{k_1,1,2}&\alpha_{k_1,2,1}&\alpha_{k_1,2,2} \\ \alpha_{l_1,1,1} & \alpha_{l_1,1,2}&\alpha_{l_1,2,1}&\alpha_{l_1,2,2} \\ \end{array}% \right), \end{equation} and where the two-digit indices $k$ and $l$ run from $1,1$ to $2,2$, and where, of course, the only possibility is that $k_1=1$ and $l_1=2$. In the same manner, $\mathrm{T}\left ( \begin{array}{cc} k_3 & l_3 \\ k \neq k_3 & l \neq l_3 \end{array} \right )$ is a minor of the matrix \begin{equation} \left(% \begin{array}{cccc} \alpha_{1,1,k_3} & \alpha_{1,2,k_3}&\alpha_{2,1,k_3}&\alpha_{2,2,k_3} \\ \alpha_{1,1,l_3} & \alpha_{1,2,l_3}&\alpha_{2,1,l_3}&\alpha_{2,2,l_3} \\ \end{array}% \right) , \end{equation} etc. If we apply the Schwarz inequality to all the combinations of the above pairs of vectors, then we get the desired result. The interpretation of the result is that $\mathrm{T} \left ( \begin{array}{cc} k_j & l_j \\ k \neq k_j & l \neq l_j \end{array} \right )$ generates the minor determinants that establish whether system $\mathcal{Q}^{p}_j$ is separable from the rest of the system.
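The same bookkeeping can be checked numerically for three qubits. In the sketch below (again our own illustration, with $\mathcal{N}=1$), for each subsystem $j$ the coefficient tensor is flattened into a $2\times 4$ matrix and the squared minors are summed; a product state gives zero, while the GHZ state $(\ket{000}+\ket{111})/\sqrt{2}$ does not:

```python
import itertools
import math

def tripartite_concurrence_sq(alpha):
    """Unnormalized squared concurrence of a three-qubit pure state
    alpha[k1][k2][k3]: for each subsystem j = 1, 2, 3, group the other
    two indices into the columns of a 2 x 4 matrix and accumulate the
    squared moduli of its 2x2 minors (normalization constant set to 1)."""
    total = 0.0
    for j in range(3):
        rows = [[], []]
        for kj in range(2):
            for rest in itertools.product(range(2), repeat=2):
                idx = list(rest)
                idx.insert(j, kj)       # place the j-th index back
                rows[kj].append(alpha[idx[0]][idx[1]][idx[2]])
        for c1, c2 in itertools.combinations(range(4), 2):
            minor = rows[0][c1] * rows[1][c2] - rows[0][c2] * rows[1][c1]
            total += abs(minor) ** 2
    return total

s = 1 / math.sqrt(2)
# GHZ state (|000> + |111>)/sqrt(2): entangled across every bipartition.
ghz = [[[s, 0], [0, 0]], [[0, 0], [0, s]]]
# Product state |0> x |0> x |0>: all minors vanish.
prod = [[[1, 0], [0, 0]], [[0, 0], [0, 0]]]
```

Each of the three bipartitions of the GHZ state contributes one nonvanishing minor of modulus $1/2$, so the sum of squared minors is $3/4$.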
Again we can easily generalize the result above to a general, pure, three-partite state $\mathcal{Q}^{p}_{3}(N_{1},N_{2},N_{3})$. Then the concurrence of this state is given by \cite{Albeverio} \begin{equation} \mathcal{C}(\mathcal{Q}^{p}_{3}(N_{1},N_{2},N_{3}))=\left( \mathcal{N} \sum^{3}_{j=1}\sum_{ l>k}\sum_{k}\sum_{ l_j>k_j}^{N_j}\sum_{k_j=1}^{N_j}|\mathrm{T}\left ( \begin{array}{cc} k_j & l_j \\ k \neq k_j & l \neq l_j \end{array} \right )|^{2}\right)^{\frac{1}{2}}\end{equation} where, e.g., the indices $k$ and $l$ for $j=1$ run through the $N_2 N_3$ two-digit numbers $1,1$ to $N_2,N_3$. From the above discussion we can see that the equality in the Schwarz inequality can be used as a criterion for separability, and the deviation from equality, in the sense outlined above, can be used as a measure of entanglement which coincides with the generalized concurrence and our entanglement tensor for bi- and three-partite states. \section{Conclusion} We have discussed the relation between the Schwarz inequality (or, rather, equality) and concurrence for bi- and three-partite states and possible generalizations to multi-partite states. This relation helps us visualize the geometrical properties of concurrence and is directly linked to the geometry of the Hilbert space as a normed complex space with an inner product defined on it. Moreover, we have shown that the deviation from the Schwarz inequality upper bound (perhaps this bound can be called the ``Schwarz equality'') can be used as a measure of entanglement, via the concurrence, for bi- and three-partite states. \begin{flushleft} \textbf{Acknowledgments:} This work was supported by the Swedish Foundation for Strategic Research, the Swedish Research Council, and the Wenner-Gren Foundations. \end{flushleft}
\section{Introduction} There is a growing consensus, across many fields and applications, that forecasts should have a probabilistic nature \citep{gneiting14}. This is particularly true in decision-making scenarios where cost-loss analyses are designed to take into account the uncertainties associated with a given forecast \citep{murphy77,owens14}. Unfortunately, it is often the case that well established predictive models are completely deterministic and thus provide single-point estimates only. For example, in engineering and applied physics, models often rely on computer simulations. A typical strategy to assign confidence intervals to deterministic predictions is to perform ensemble forecasting, that is to repeat the same simulation with slightly different initial or boundary conditions \citep{gneiting05,leutbecher08}. However, this is rather expensive and it often requires a trade-off between computational cost and accuracy of the model, especially when there is a need for real-time predictions. Likewise, the most successful applications in machine learning techniques have focused on estimating target variables, with less emphasis on the estimation of the uncertainty of the prediction, even though uncertainty estimation is becoming an important topic in the machine learning community \citep{gal16}. \\ In this paper we focus on the problem of assigning uncertainties to single-point predictions, with a particular emphasis on the requirement of calibration. When dealing with a probabilistic forecast, calibration is as important as accuracy. Calibration, also known as reliability (for instance, in the meteorological literature), is the requirement that the probabilities should give an estimate of the expected frequencies of the event occurring, that is, a statistical consistency between predictions and observations \citep{gneiting2007,johnson09}.
We restrict our attention to predictive models that output a scalar continuous variable, and whose uncertainties are in general input-dependent. For the sake of simplicity, and for its widespread use, we assume that the probabilistic forecast that we want to generate is in the form of a Gaussian distribution. Hence, the problem can be cast in terms of the estimation of the input-dependent variance associated with a normal distribution centered around forecasted values provided by a model.\\ In the machine learning community, elegant and practical ways of deriving uncertainties based on non-parametric Bayesian methods are well established, either based on Bayesian neural networks \citep{mackay92,neal12,hernandez15}, deep learning \citep{gal16}, or Gaussian Processes (GPs) \citep{rasmussen06}. However, it is important to emphasize that whilst in the classical heteroskedastic regression problem, one is interested in learning simultaneously the mean function $f(\mathbf{x})$ and the variance $\sigma^2(\mathbf{x})$, here we assume that the mean function is provided by a black-box model (for instance, a physics simulation) that cannot easily be improved, hence the whole attention is focused on the variance estimation. This is realistic in several applied fields, where decades of work have resulted in very accurate physics-based models, which however suffer from the drawback of being completely deterministic. Hence, we decouple the problem of learning mean function and variance, focusing solely on the latter. Also, it is important to keep in mind that we aim at estimating the variance using a single mean function, and not an ensemble. \subsection{Summary of Contributions and Novelty} The task of generating uncertainties associated with black-box predictions, thus transforming a deterministic model into a probabilistic one, and simultaneously ensuring that such uncertainties are both accurate and calibrated is novel.
The closest early works in the machine learning literature that are worth mentioning are concerned with post-processing calibration. In that case, a model outputs probabilistic predictions that are not well-calibrated and the task is to re-calibrate these outputs by deriving a function $[0,1]\rightarrow [0,1]$ that maps the original probabilities to new, well-calibrated probabilities. Re-calibration has been studied extensively in the context of classification, with methods such as Platt scaling \citep{platt1999}, isotonic regression \citep{zadrozny2001}, and temperature scaling \citep{guo2017}. Applications to regression are less studied. A recent work is \citet{kuleshov2018}, where isotonic regression is used to map the predicted cumulative distribution function of a continuous target variable to the observed one, effectively re-calibrating the prediction. This approach has since been criticized for not being able to distinguish between informative and non-informative uncertainty predictions \citep{levi2019} and for not being able to ensure calibration for a specific prediction (but only in an average sense) \citep{song2019}. Finally, a relevant approach has recently been proposed in \citet{lakshminarayanan17}, building on the original idea of \citet{weigend94} of designing a neural network that outputs simultaneously mean and variance of a Gaussian distribution, by minimizing a proper score, namely the negative log likelihood of the predictive distribution. \citet{lakshminarayanan17} point out the importance of calibration of probabilistic models, even though in their work calibration is not explicitly enforced.\\ Overall, it appears that none of the previous works has recognized that calibration is only one aspect of a two-objective optimization problem.
In fact, we will demonstrate that calibration (reliability) is competing with accuracy (sharpness) and therefore one must seek the optimal trade-off between these two equally important qualities of a probabilistic forecast.\\ Our method is very general and does not depend on any particular choice for the black-box model that predicts the output targets (which indeed is not even required; all that is needed are the errors between predictions and real targets). The philosophy is to introduce a cost function which encodes a trade-off between the accuracy and the reliability of a probabilistic forecast. Assessing the goodness of a forecast through proper scores, such as the Negative Log Probability Density, or the Continuous Rank Probability Score, is a common practice in many applications, like weather predictions \citep{matheson76,brocker07}. Also, the notion that a probabilistic forecast should be well calibrated, or statistically consistent with observations, has been discussed at length in the atmospheric science literature \citep{murphy92,toth03}. However, the basic idea that these two metrics (accuracy and reliability) can be combined to estimate the empirical variance from a sample of observations, and possibly to reconstruct the underlying noise as a function of the inputs, has never been proposed. Moreover, as we will discuss, the two metrics are competing, when interpreted as functions of the variance only. Hence, this gives rise to a two-objective optimization problem, where one is interested in achieving a good trade-off between these two properties.\\ Our main contributions are the introduction of the Reliability Score (RS), that measures the discrepancy between empirical and ideal calibration, and the Accuracy-Reliability (AR) cost function. We show that for a Gaussian distribution the RS has a simple analytical formula.
The accuracy part of the AR cost function is measured by means of the Continuous Rank Probability Score, that we argue has better properties than the more standard Negative Log Probability Density.\\ The paper is organized as follows. We first introduce the Negative Logarithm of the Probability Density and the Continuous Rank Probability Score as scores for accuracy. We then comment on reliability and how to construct a reliability diagram for continuous probabilistic forecasts, and we show that accuracy does not imply reliability and indeed the two metrics are competing. We then introduce a new score to measure reliability for Gaussian distributions and the Accuracy-Reliability score. Finally, we show how the new score can be used to estimate uncertainty in both toy and real-world examples. \section{Loss functions for Accuracy} A standard way of estimating the empirical variance of a Gaussian distribution is by maximizing its likelihood with respect to a set of observations. In practice, one can use a loss function based on the Negative Logarithm of the Probability Density (NLPD): \begin{equation} {NLPD}(\varepsilon,\sigma)=\frac{\log\sigma^2}{2}+\frac{\varepsilon^2}{2\sigma^2}+\frac{\log 2\pi}{2}, \end{equation} where we define $\varepsilon=y^o-\mu$ as the error between a given observation $y^o$ and the corresponding prediction $\mu$. Here, we propose to use the Continuous Rank Probability Score (CRPS), in lieu of the NLPD. CRPS is a generalization of the well-known Brier score \citep{wilks11}, used to assess the probabilistic forecast of continuous scalar variables, when the forecast is given in terms of a probability density function, or its cumulative distribution. CRPS is defined as \begin{equation} {CRPS} = \int_{-\infty}^\infty \left[C(y) - H(y-y^o) \right]^2 dy \end{equation} where $C(y)$ is the cumulative distribution (cdf) of the forecast, $H(y)$ is the Heaviside function, and $y^o$ is the true (observed) value of the forecasted variable.
For Gaussian distributions, the forecast is simply given by the mean value $\mu$ and the variance $\sigma^2$, and in this case the CRPS can be calculated analytically \citep{gneiting05} as \begin{equation}\label{CRPS} {CRPS}(\mu,\sigma,y^o) = \sigma\left[\frac{y^o-\mu}{\sigma}\erf\left(\frac{y^o-\mu}{\sqrt{2}\sigma}\right) + \right. \left.\sqrt{\frac{2}{\pi}}\exp\left(-\frac{(y^o-\mu)^2}{2\sigma^2} \right) -\frac{1}{\sqrt{\pi}}\right] \end{equation} Several interesting properties of the CRPS have been studied in the literature. Notably, its decomposition into reliability and uncertainty has been shown in \citet{hersbach00}. There are several reasons for preferring CRPS to NLPD. They are both negatively oriented, but CRPS is equal to zero for a perfect forecast with no uncertainty (deterministic). Moreover, the CRPS has the same unit as the variable of interest, and it collapses to the Absolute Error $|y^o-\mu|$ for $\sigma\rightarrow 0$, that is when the forecast becomes deterministic. On the other hand, the limit $\sigma\rightarrow 0$ is problematic for NLPD (being finite only for $\varepsilon=0$). Figure \ref{NLPD_vs_CRPS} shows a graphical comparison between NLPD (top panel) and CRPS (bottom panel). Different curves show the isolines for the two scores, as a function of the error $\varepsilon$ (vertical axis) and the standard deviation $\sigma$ (horizontal axis). The black dashed line indicates the minimum value of the score, for a fixed value of $\varepsilon$. Because we are approaching the problem of variance estimation by assigning an empirical variance to single-point black-box predictions, it makes sense to minimize a score as a function of $\sigma$ only, for a fixed value of the error $\varepsilon=y^o-\mu$.
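The closed form in Eq. (\ref{CRPS}) is easy to sanity-check against a brute-force discretization of the defining integral; the following short script is our own check, not part of the original text:

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2), Eq. (3)."""
    z = (y - mu) / sigma
    return sigma * (z * math.erf(z / math.sqrt(2))
                    + math.sqrt(2 / math.pi) * math.exp(-z * z / 2)
                    - 1 / math.sqrt(math.pi))

def crps_numeric(mu, sigma, y, lo=-20.0, hi=20.0, n=200000):
    """Brute-force CRPS: midpoint-rule integral of [C(x) - H(x - y)]^2."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        cdf = 0.5 * (1 + math.erf((x - mu) / (math.sqrt(2) * sigma)))
        step = 1.0 if x >= y else 0.0
        total += (cdf - step) ** 2 * h
    return total
```

As a bonus, the deterministic limit is easy to probe: for very small $\sigma$ the closed form collapses to the absolute error $|y^o-\mu|$, as stated above.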
By differentiating Eq.(\ref{CRPS}) with respect to $\sigma$, one obtains \begin{equation} \frac{d {CRPS}}{d\sigma} = \sqrt{\frac{2}{\pi}}\exp\left(-\frac{\varepsilon^2}{2\sigma^2} \right) -\frac{1}{\sqrt{\pi}} \end{equation} and the minimizer is found to be $ \sigma_{{min, CRPS}}^{2} = {\varepsilon^2}/{{\log 2}}.$ Note that the minimizer for NLPD is $\sigma_{{min, NLPD}}^{2} = \varepsilon^2$. As it is evident from Figure \ref{NLPD_vs_CRPS}, CRPS penalizes under- and over-confident predictions in a much more symmetric way than NLPD. Both scores are defined for a single instance of forecast and observation, hence they are usually averaged over an ensemble of predictions, to obtain the score relative to a given model, for instance: $\overline{{CRPS}} = \frac{1}{N}\sum_k {CRPS}(\mu_k,\sigma_k,y^o_k)$. \section{Reliability} An important consideration is that scores such as NLPD and CRPS do not automatically enforce a correct model calibration. Calibration is the property of a probabilistic model that measures its statistical consistency with observations. For forecasts of discrete events, it measures whether an event predicted with probability $p$ occurs, on average, with frequency $p$. This concept can be extended to forecasts of a continuous scalar quantity by examining the so-called reliability diagram \citep{anderson96,hamill97,hamill01}. Note that in this paper we use the terms calibration and reliability interchangeably. A reliability diagram is produced in the following way. One collects the values of the probability predicted at all observed points, that is $P(y\leq y^o)$, which for a Gaussian distribution can be expressed analytically and we denote with $\Phi_i=\frac{1}{2}(\erf(\eta_i)+1)$, with $\eta_i=\varepsilon_i/(\sqrt{2}\sigma_i)$ being the standardized errors (the index $i$ denotes that the error is associated with the $i$-th observation/prediction in a set of size $N$).
The empirical cumulative distribution of $\Phi_i$, defined as $C(y)=\frac{1}{N}\sum_{i=1}^N H(y-\Phi_i)$ ($H$ is the Heaviside function), provides the reliability diagram, with the obvious interpretation of observed frequency as a function of the predicted probability (note that this method of producing a reliability diagram does not require binning). A perfect calibration shows in the reliability diagram as a straight diagonal line. \\ The motivating argument of this work is that two models with identical accuracy score (and we use here NLPD to illustrate the argument, but the same would be true for CRPS) can have remarkably different reliability diagrams. We show an example in Figure \ref{fig:ex_reliability}. 1000 data points have been generated as $\mathcal{N}(0,\sigma(x)^2)$, with $x\in[0,1]$ and $\sigma(x)=x+\frac{1}{2}$, as in the synthetic dataset proposed in \citet{goldberg98}. A model completely consistent with the data generation mechanism (i.e. with zero mean and variance $\sigma^2$) produces the blue line in the reliability diagram in the top panel, that is almost perfect calibration. However, one can generate a second model with a modified (wrong) variance $\tilde{\sigma}^2$ such that $ {NLPD}(\varepsilon,\tilde{\sigma})= {NLPD}(\varepsilon,\sigma)$, that is \begin{equation}\label{ex_reliability} \frac{\log\tilde{\sigma}^2}{2}+\frac{\varepsilon^2}{2\tilde{\sigma}^2} =\frac{\log\sigma^2}{2}+\frac{\varepsilon^2}{2\sigma^2} \end{equation} Eq. (\ref{ex_reliability}) always produces a solution $\tilde{\sigma}\neq\sigma$, as long as $\sigma^2\neq\varepsilon^2$ (the global minimum of NLPD, for fixed $\varepsilon$). Graphically this can be seen in Figure \ref{NLPD_vs_CRPS}: for a constant $\varepsilon$ value, there are two values of $\sigma$ on the same NLPD contour. The red line in the top panel of Figure \ref{fig:ex_reliability} has been derived from such a modified model $\mathcal{N}(0,\tilde{\sigma}^2)$, which is obviously mis-calibrated. 
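The second root of Eq. (\ref{ex_reliability}) is easy to obtain numerically. The sketch below is our own illustration, with arbitrarily chosen $\varepsilon$ and $\sigma$; it finds a $\tilde{\sigma}\neq\sigma$ with exactly the same NLPD:

```python
import math

def nlpd(eps, sigma):
    """Negative log probability density of N(0, sigma^2) evaluated at eps."""
    return (math.log(sigma ** 2) / 2 + eps ** 2 / (2 * sigma ** 2)
            + math.log(2 * math.pi) / 2)

def twin_sigma(eps, sigma, tol=1e-12):
    """Given sigma > |eps|, bisect for the second root sigma_tilde < |eps|
    of NLPD(eps, sigma_tilde) = NLPD(eps, sigma): for fixed eps, NLPD
    decreases monotonically from +infinity to its minimum on (0, |eps|)."""
    target = nlpd(eps, sigma)
    lo, hi = 1e-8, abs(eps)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if nlpd(eps, mid) > target:
            lo = mid          # still above the target value: move right
        else:
            hi = mid
    return (lo + hi) / 2

eps, sigma = 0.8, 1.5         # arbitrary values with sigma > |eps|
sigma_tilde = twin_sigma(eps, sigma)
```

The two standard deviations share the same NLPD, yet only one of them can be consistent with the data-generating process, which is exactly the mis-calibration mechanism illustrated in the figure.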
For this example NLPD=0.4 (equal for both cases). As a complementary argument, we show in the bottom panel of Figure \ref{fig:ex_reliability} the reliability diagram of several models, with decreasing values of NLPD. One can appreciate that progressively decreasing NLPD results in a worse and worse calibration (note that NLPD is negatively oriented). These models have been generated again starting from the perfectly calibrated synthetic model, progressively shifting the values assigned to $\sigma^2$, towards the global minimum $\sigma^2=\varepsilon^2$ (hence decreasing NLPD). Thus, minimizing a traditional cost function such as NLPD does not necessarily yield a well-calibrated model. Of course, we are not suggesting that any model generated by means of minimizing NLPD is inevitably mis-calibrated. However, unless explicitly enforced, calibration will be a by-product of other properties. Once again, the same is true for CRPS. \subsection{Reliability Score for Gaussian forecast} Reliability is a statistical property of a model, defined for a large enough ensemble of forecasts-observations. Here, we introduce the reliability score for normally distributed forecasts. In this case, we expect the standardized errors $\eta$ calculated over a sample of $N$ predictions-observations to have a standard normal distribution with cdf $\Phi(\eta)=\frac{1}{2}(\erf(\eta)+1)$. Hence we define the Reliability Score (RS) as: \begin{equation}\label{RS_1} {RS} = \int_{-\infty}^\infty \left[\Phi(\eta) - C(\eta)\right]^2 d\eta \end{equation} where $C(\eta)$ is the empirical cumulative distribution of the standardized errors $\eta$, that is \begin{equation} C(y) = \frac{1}{N}\sum_{i=1}^N H(y-\eta_i) \end{equation} with $\eta_i = (y^o_i-\mu_i)/(\sqrt{2}\sigma_i)$. Note that each error $(y^o_i-\mu_i)$ is standardized with respect to a different (input-dependent) $\sigma_i$.
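The definition (\ref{RS_1}) and the closed-form expression derived below can be cross-checked numerically; the following is our own sanity check, drawing the $\eta_i$ from the ideal $\mathcal{N}(0,\frac{1}{2})$ distribution:

```python
import math
import random

def rs_numeric(etas, lo=-12.0, hi=12.0, m=60000):
    """Brute-force Reliability Score: discretize the defining integral of
    [Phi(eta) - C(eta)]^2, with C the empirical cdf of the etas."""
    n = len(etas)
    h = (hi - lo) / m
    total = 0.0
    for k in range(m):
        x = lo + (k + 0.5) * h
        phi = 0.5 * (math.erf(x) + 1)
        c = sum(1 for e in etas if e <= x) / n
        total += (phi - c) ** 2 * h
    return total

def rs_analytic(etas):
    """Closed-form Reliability Score (the telescopic-series result below);
    etas must be sorted in ascending order."""
    n = len(etas)
    total = -0.5 * math.sqrt(2 / math.pi)
    for i, e in enumerate(etas, start=1):
        total += (e / n * (math.erf(e) + 1) - e / n ** 2 * (2 * i - 1)
                  + math.exp(-e * e) / (math.sqrt(math.pi) * n))
    return total

random.seed(0)
etas = sorted(random.gauss(0, 1 / math.sqrt(2)) for _ in range(20))
```

The two evaluations agree to within the discretization error of the grid, and the score is strictly positive because the empirical step-function cdf never coincides with the smooth $\Phi$.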
RS measures the divergence of the empirical distribution of standardized errors $\eta$ from a standard normal distribution. From now on we will use the convention that the set $\{\eta_1,\eta_2,\ldots \eta_N\}$ is sorted ($\eta_i\leq\eta_{i+1}$). Obviously this does not imply that $\mu_i$ or $\sigma_i$ are sorted as well. Interestingly, the integral in Eq. (\ref{RS_1}) can be calculated analytically, via expansion into a telescopic series, yielding: \begin{equation} {RS} = \sum_{i=1}^N \left[\frac{\eta_i}{N}\left(\erf(\eta_i)+1\right) - \frac{\eta_i}{N^2}(2i-1) + \frac{\exp(-\eta_i^2)}{\sqrt{\pi}N}\right] -\frac{1}{2}\sqrt{\frac{2}{\pi}}\label{RS} \end{equation} \normalsize Differentiating the $i$-th term of the above summation, RS$_i$, with respect to $\sigma_i$ (for fixed $\varepsilon_i$), one obtains \begin{equation} \frac{d{RS}_i}{d\sigma_i} = \frac{\eta_i}{N\sigma_i}\left(\frac{2i-1}{N}-\erf(\eta_i)-1 \right) \end{equation} Hence, ${RS}_i$ is minimized when the values $\sigma_{{min}}^{{RS}}$ satisfy \begin{equation}\label{optimal_eta} \erf(\eta_i)=\erf\left(\frac{\varepsilon_i}{\sqrt{2}\sigma_{{min}}^{{RS}}}\right) = \frac{2i-1}{N}-1 \end{equation} This could have been trivially derived by realizing that the distribution of $\eta_i$ that minimizes RS is the one such that the values $\Phi(\eta_i)$ are uniform in the interval $[0,1]$. \section{The Accuracy-Reliability cost function} The Accuracy-Reliability (AR) cost function introduced here follows from the simple principle that the variances $\sigma_i^2$ estimated from an ensemble of errors $\varepsilon_i$ should result in a model that is both accurate (with respect to the CRPS score), and reliable (with respect to the RS score). Clearly, this gives rise to a two-objective optimization problem. It is trivial to verify that CRPS and RS cannot simultaneously attain their minimum value (as was evident from Figure \ref{fig:ex_reliability}). 
Indeed, by minimizing the former, $\eta_i = \frac{1}{2}\sqrt{\log 4}$ for any $i$. On the other hand, a constant $\eta_i$ cannot result in a minimum for RS, according to Eq. (\ref{optimal_eta}). This demonstrates that methods that focus solely on re-calibration (any method of choice has an equivalent formulation in terms of minimizing RS) can possibly result in the deterioration of accuracy. In passing, we note that any cost function that is minimized (for constant $\varepsilon$) by a value of the variance $\sigma^2$ that is linear in $\varepsilon^2$ suffers this problem (because $\eta_i$ will be a constant). Finally, notice that trying to minimize RS as a function of $\sigma_i$ (for fixed errors $\varepsilon_i$) results in an ill-posed problem, because RS is solely expressed in terms of the standardized errors $\eta$. Hence, there is no unique solution for the variances that minimizes RS. Therefore, RS can be more appropriately thought of as a regularization term in the Accuracy-Reliability cost function. The simplest strategy to deal with multi-objective optimization problems is to scalarize the cost function, which we define here as \begin{equation}\label{AR} {AR} = \beta\cdot \overline{{CRPS}} + (1-\beta){RS}. \end{equation} We choose the scaling factor $\beta$ as \begin{equation}\label{beta} \beta={{RS}}_{min}/(\overline{{CRPS}}_{min} + {RS}_{min}). \end{equation} The minimum of $\overline{{CRPS}}$ is $\overline{{CRPS}}_{min}=\frac{\sqrt{\log 4}}{2N}\sum_{i=1}^N |\varepsilon_i|$, which is simply the mean of the absolute errors, rescaled by a constant. The minimum of RS follows from Eqs. (\ref{RS}) and (\ref{optimal_eta}): \begin{equation} {RS}_{min} = \frac{1}{\sqrt{\pi} N}\sum_{i=1}^N \exp\left(-\left[\erf^{-1}\left(\frac{2i-1}{N}-1\right)\right]^2\right)-\frac{1}{2}\sqrt{\frac{2}{\pi}} \end{equation} Notice that ${RS}_{min}$ is only a function of the size of the sample $N$, and it converges to zero for $N\rightarrow \infty$. The heuristic choice in Eq.
(\ref{beta}) is justified by the fact that the two scores might have different orders of magnitude, and therefore we rescale them in such a way that they are comparable in our cost function (\ref{AR}). We believe this to be a sensible choice, although there might be applications where one would like to weigh the two scores differently. In future work, we will explore the possibility of optimizing $\beta$ in a principled way, for instance constraining the difference between empirical and ideal reliability score to be within limits given by the dataset size $N$, or by making $\beta$ a learnable parameter. Finally, in our practical implementation, we neglect the last constant term in the definition (\ref{RS}) so that, for sufficiently large $N$, ${RS}_{min}\simeq \frac{1}{2}\sqrt{\frac{2}{\pi}}\simeq 0.4$. \section{Results} In summary, we want to estimate the input-dependent values of the empirical variances $\sigma_i^2$ associated with a sample of $N$ observations for which we know the errors $\varepsilon_i$. We do so by solving an optimization problem in which the set of estimated $\sigma_i$ minimizes the AR cost function defined in Eq. (\ref{AR}). This newly introduced cost function has a straightforward interpretation as the trade-off between accuracy and reliability, which are two essential but conflicting properties of probabilistic models. In practice, because we want to generate a model that is able to predict $\sigma^2$ as a function of the inputs $\mathbf{x}$ on any point of a domain, we introduce a structure that enforces a certain degree of smoothness of the unknown variance, in the form of a regression model.
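Putting the pieces together, a compact implementation of the AR cost can be sketched as follows (our own sketch; it follows the practical choices above, i.e. it drops the constant term in RS and uses ${RS}_{min}\simeq\frac{1}{2}\sqrt{2/\pi}$):

```python
import math
import random

def crps_mean(errors, sigmas):
    """Mean CRPS, Eq. (3), of Gaussian forecasts with errors eps_i."""
    total = 0.0
    for e, s in zip(errors, sigmas):
        z = e / s
        total += s * (z * math.erf(z / math.sqrt(2))
                      + math.sqrt(2 / math.pi) * math.exp(-z * z / 2)
                      - 1 / math.sqrt(math.pi))
    return total / len(errors)

def rs(errors, sigmas):
    """Reliability Score, Eq. (8), without the final constant term."""
    etas = sorted(e / (math.sqrt(2) * s) for e, s in zip(errors, sigmas))
    n = len(etas)
    total = 0.0
    for i, e in enumerate(etas, start=1):
        total += (e / n * (math.erf(e) + 1) - e / n ** 2 * (2 * i - 1)
                  + math.exp(-e * e) / (math.sqrt(math.pi) * n))
    return total

def ar_cost(errors, sigmas):
    """Accuracy-Reliability cost, Eqs. (10)-(11), with beta from Eq. (12)."""
    crps_min = (math.sqrt(math.log(4)) / 2
                * sum(abs(e) for e in errors) / len(errors))
    rs_min = 0.5 * math.sqrt(2 / math.pi)   # large-N value used in practice
    beta = rs_min / (crps_min + rs_min)
    return beta * crps_mean(errors, sigmas) + (1 - beta) * rs(errors, sigmas)

# Homoskedastic check: the cost at the true sigma beats a grossly inflated one.
random.seed(1)
s_true = 0.7
errors = [random.gauss(0, s_true) for _ in range(200)]
ar_true = ar_cost(errors, [s_true] * 200)
ar_off = ar_cost(errors, [10 * s_true] * 200)
```

For a homoskedastic sample the cost evaluated at the true standard deviation is, as expected, lower than at a grossly over-confident or inflated one.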
In the following we show some experiments on toy problems and on a multidimensional real-world dataset to demonstrate the simplicity, robustness and accuracy of the method.\\ \subsection{Toy problems} In order to facilitate comparison with previous works, we choose some of the datasets used in \citet{kersting07}, although for simplicity of implementation we rescale the standard deviation so that it is always smaller than or equal to 1. Since in our method we assume that a mean function is provided, for the toy problems we use the result of a standard (homoskedastic) Gaussian Process regression as $f(x)$. For all datasets the targets $y_i$ are sampled from a Gaussian distribution $\mathcal{N}(f(x),\sigma(x)^2)$. The first three datasets are one-dimensional in $x$, while in the fourth we will test the method on a five-dimensional space, thus showing the robustness of the proposed strategy.\\ {\bf G} dataset: $x \in [0,1]$, $f(x) = 2\sin(2\pi x)$, $\sigma(x) = \frac{1}{2}x+\frac{1}{2}$ \cite{goldberg98}. \\ {\bf Y} dataset: $x \in [0,1]$, $f(x) = 2(\exp(-30(x-0.25)^2)+\sin(\pi x^2))-2$, $\sigma(x) = \exp(\sin(2\pi x))/3$ \cite{yuan04}. \\ {\bf W} dataset: $x \in [0,\pi]$, $f(x) = \sin(2.5x)\sin(1.5x)$, $\sigma(x) = 0.01+0.25(1-\sin(2.5x))^2$ \cite{weigend94, williams96}. \\ {\bf 5D} dataset: $\mathbf{x} \in [0, 1]^5$, $f(\mathbf{x})=0$, $\sigma(\mathbf{x})=0.45(\cos(\pi + \sum_{i=1}^5 5x_i) + 1.2)$ \cite{genz84}. Examples of 100 points sampled from the {\bf G, Y, W} datasets are shown in Figure \ref{fig:toy_regression} (circles), along with the true mean function $f(x)$ (red), and the one predicted by a standard Gaussian Process regression model (blue). The bottom-right plot in Figure \ref{fig:toy_regression} shows the distribution of $\sigma$, which ranges in the interval $[0.09,0.99]$.\\ For the {\bf G}, {\bf Y}, and {\bf W} datasets the model is trained on 100 points uniformly sampled in the domain.
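For concreteness, a minimal generator for the {\bf G} dataset might look as follows (our own sketch; the Gaussian Process fit used for the mean function in the experiments is omitted):

```python
import math
import random

def make_G(n, seed=0):
    """Sample n points (x_i, y_i) from the G dataset: y ~ N(f(x), sigma(x)^2)
    with f(x) = 2 sin(2 pi x) and sigma(x) = x/2 + 1/2 on [0, 1]."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.random()
        f = 2 * math.sin(2 * math.pi * x)
        s = 0.5 * x + 0.5
        data.append((x, rng.gauss(f, s)))
    return data

pts = make_G(100)
```

The other toy datasets follow the same pattern, with their respective $f(x)$ and $\sigma(x)$ substituted in.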
The {\bf 5D} dataset is obviously more challenging, hence we use 10,000 points to train the model (note that this results in fewer points per dimension, compared to the one-dimensional tests). For all experiments we perform 100 independent runs. We have tested a neural network and a polynomial best fit as regression models. For simplicity, we choose a single neural network architecture, that we use for all the tests. We use a network with 2 hidden layers, respectively with 50 and 10 neurons. The activation functions are rectified linear (ReLU) and a symmetric saturating linear function, respectively. The output is given in terms of $\log\sigma$, to enforce positivity of $\sigma^2$. For all experiments, the datasets are randomly divided into training ($33\%$), validation ($33\%$) and test ($34\%$) sets. All the reported metrics are calculated on the test set only. The network is trained using a standard BFGS quasi-Newton algorithm, and the iterations are forcefully stopped when the loss function does not decrease for 10 successive iterations on the validation set. The only data needed are the inputs $\mathbf{x}_i$ and the corresponding errors $\varepsilon_i$. Finally, in order to avoid local minima due to the random initialization of the neural network weights, we train five independent networks and choose the one that yields the smallest cost function.\\ In the case of low-dimensional data one might want to try simpler and faster approaches than a neural network, especially if smoothness of the underlying function $\sigma(x)$ can be assumed. For the one-dimensional test cases ({\bf G, Y, W}) we have devised a simple polynomial best fit strategy. We assume that $\sigma(x)$ can be approximated by a polynomial of unknown order, equal to or smaller than 10: $\sigma(x)=\sum_{l=0}^{10} \theta_l x^l$, where in principle one or more $\theta_l$ can be equal to zero.
The vector $\Theta=\{\theta_0,\theta_1,\ldots,\theta_{10}\}$ is initialized with $\theta_0=const$ and all the others equal to zero. The constant can be chosen, for instance, as the standard deviation of the errors $\varepsilon$. The polynomial best fit is found by means of an iterative procedure (Algorithm 1). \begin{algorithm}[ht] \caption{Polynomial best fit} \label{alg:poly} \begin{algorithmic} \STATE {\bfseries Input:} data $x_i, \varepsilon_i$ \STATE Initialize $p = 0$, $\theta_0=const$, $P_{max}=10$, $tol$, $err=\infty$ \WHILE{$p\leq P_{max}$ \& $err>tol$} \STATE ${p=p+1}$ \STATE Initial guess for optimization $\Theta=\{\theta_0,\ldots,\theta_{p-1},0\}$ \STATE $\Theta=\text{argmin AR}(\sigma_i)$ (with $\sigma_i = \sum_{l=0}^p \theta_lx_i^l$) \STATE err = $||\text{AR}(\sigma(\Theta_{old})) - \text{AR}(\sigma(\Theta_{new}))||_2$ \ENDWHILE \end{algorithmic} \end{algorithm} In words, the algorithm finds the values of $\Theta$ for a given polynomial order that minimize the Accuracy-Reliability cost function. Then it tests the next higher order, using the previous solution as the initial guess. Whenever the difference between the solutions obtained with two successive orders is below a certain tolerance, the algorithm stops. The multidimensional optimization problem is solved by a BFGS quasi-Newton method with a cubic line search procedure. Note that whenever a given solution is found to yield a local minimum for the next polynomial order, the iterations are terminated. The results for the 1D datasets $\bf G, Y, W$ are shown in Figures \ref{G_dataset}--\ref{W_dataset}, in a way consistent with \citet{kersting07}. The red lines denote the true standard deviation $\sigma(x)$ used to generate the data. The black line indicates the values of the estimated $\sigma$ averaged over 100 independent runs, and the gray areas represent one and two standard deviations from the mean.
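The escalating-order loop of Algorithm \ref{alg:poly} can be sketched in Python as follows. The Accuracy-Reliability cost itself is defined earlier in the paper; here it enters as a user-supplied callable \texttt{ar\_cost(sigma, eps)}, and the stand-in cost used in the demo at the end is purely illustrative, not the actual AR function:

```python
import numpy as np
from scipy.optimize import minimize

def poly_best_fit(x, eps, ar_cost, p_max=10, tol=1e-6):
    """Escalating-order polynomial fit of sigma(x) (sketch of Algorithm 1).

    ar_cost(sigma, eps) is assumed to evaluate the cost for pointwise
    standard deviations `sigma` given the errors `eps`.
    """
    theta = np.array([np.std(eps)])          # theta_0 = const, all others zero
    prev_cost = np.inf
    for p in range(1, p_max + 1):
        theta = np.append(theta, 0.0)        # previous solution as initial guess
        # theta is ordered [theta_0, ..., theta_p]; polyval wants highest order first
        obj = lambda th: ar_cost(np.polyval(th[::-1], x), eps)
        res = minimize(obj, theta, method='BFGS')
        theta = res.x
        if abs(prev_cost - res.fun) < tol:   # next order did not help: stop
            break
        prev_cost = res.fun
    return theta

# Demo with a stand-in quadratic cost (NOT the AR cost): fit sigma(x) to |eps|.
x_demo = np.linspace(0.0, 1.0, 200)
eps_demo = np.random.default_rng(1).normal(0.0, 0.5 + 0.5 * x_demo)
theta = poly_best_fit(x_demo, eps_demo,
                      lambda s, e: float(np.mean((s - np.abs(e)) ** 2)))
```

Note that, as in the algorithm, the loop stops as soon as raising the polynomial order no longer decreases the cost.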
A certain spread in the results is due to different training sets (in each run 100 points are sampled independently) and, for the neural network, to random initialization. The top panels show the results obtained with the neural network, while the bottom panels show the results obtained with the polynomial fit. In all cases, except for the {\bf W} dataset (polynomial case, bottom panel), the results are very accurate. For the {\bf 5D} dataset it is impractical to compare graphically the real and estimated $\sigma(\mathbf{x})$ in the 5-dimensional domain. Instead, in Figure \ref{multiD_1} we show the probability density of the real versus predicted values of the standard deviation. Values are normalized such that the maximum value in the colormap for any value of predicted $\sigma$ is equal to one (i.e. along vertical lines). The red line shows a perfect prediction. The colormap has been generated by $10^6$ points, while the model has been trained with 10,000 points only. For this case, we have used an exact mean function (equal to zero), in order to focus exclusively on the estimation of the variance. We believe that this is an excellent result for a very challenging task, given the sparsity of the training set, which shows the robustness of the method. \subsection{Real-world datasets} We have tested our method on the same datasets used in \citep{hernandez15}. The only difference with the toy problems is that we use 70\% of the data for training, and we only use a neural network as regressor. The results reported in Table \ref{table:realdata} are computed over 50 independent runs. For each run, we first train a standard neural network to provide the mean function $f(\mathbf{x})$, by minimizing the mean squared error with respect to the targets.
We then compare our method against three different models (first row in Table \ref{table:realdata}): CRPS means that the variance is estimated by minimizing CRPS only; KM denotes a K-means method; RECAL indicates the recalibration method of \citep{kuleshov2018}; and AR denotes our method. The scores reported (second row) are the median values (calculated on the test set only) of CRPS and of the calibration error. To estimate the latter we derive the reliability diagram (in the way described in Section 2), and we compute the maximum distance to the optimal reliability (straight diagonal line). This is denoted, in Table \ref{table:realdata}, as Cal. err. (in percentage). For the K-means method (which is possibly the simplest baseline method) we have clustered the training data into $k$ groups, calculated the standard deviation $\sigma$ for each cluster, and assigned the same value of $\sigma$ to all test points belonging to a given cluster. We have run experiments with $k$ ranging from 1 to 10, and we report the minimum values obtained for CRPS, and the Cal. err. for the model that yields the best calibration. The RECAL method takes the $\sigma$ estimated by the AR method and applies the recalibration algorithm of \citep{kuleshov2018}. The training sets used for all methods are the same. The results obtained by using the AR cost function are always better calibrated than those obtained by minimizing CRPS only or by the KM method. In two cases only (the Protein and Wine datasets) does RECAL yield a slightly better calibration error. However, in both cases the accuracy (CRPS) of the RECAL method is penalized, and we believe that the best trade-off is still achieved by AR. In fact, AR offers the best trade-off between accuracy and calibration across all datasets, as expected. \begin{table*}[ht] \caption{Comparison between different methods on several multidimensional datasets. Median values are reported, calculated over 50 runs.
Best values are in bold.}\label{table:realdata} \centering \begin{scriptsize} \begin{tabular}{|ccc|cccc|cccc|} \toprule \multicolumn{3}{ |c| }{Method} & CRPS & RECAL & KM & AR & CRPS & RECAL & KM & AR\\ \multicolumn{3}{ |c| }{Score} & \multicolumn{4}{ |c| }{CRPS} & \multicolumn{4}{ |c| }{Cal. err. ($\%$)} \\ \midrule Dataset & Size & Dim. & \multicolumn{8}{ |c| }{} \\ \midrule Boston Housing & 506 & 13 & 0.25 & 0.25 & 0.25 & {\bf 0.23} & 26.2 & 20.6 & 17.5 & {\bf 16.7}\\ Concrete & 1,030 & 8 & 0.22 & 0.23 & 0.26 & {\bf 0.21} & 22.6 & 14.4 & 22.1 & {\bf 11.5}\\ Energy & 768 & 8 & 0.059 & 0.056 & 0.087 & {\bf 0.052} & 29.3 & 29.2 & 28.3 & {\bf 13.0}\\ Kin8nm & 8,192 & 8 & 0.17 & {\bf 0.16} & 0.24 & {\bf 0.16} & 15.9 & 8.3 & 25.5 & {\bf 5.8}\\ Power plant & 9,568 & 4 & 0.13 & 0.13 & 0.15 & {\bf 0.12} & 12.5 & 3.4 & 16.1 & {\bf 2.6}\\ Protein & 45,730 & 9 & 0.38 & 0.47 & 0.40 & {\bf 0.37} & 13.1 & {\bf 5.0} & 10.6 & 5.4\\ Wine & 1,599 & 11 & 0.48 & 0.50 & {\bf 0.46} & 0.48 & 16.0 & {\bf 7.9} & {8.0} & 8.3\\ Yacht & 308 & 6 & {\bf 0.06} & {\bf 0.06} & 0.19 & {\bf 0.06} & 26.0 & 24.3 & 36.6 & {\bf 19.5}\\ \bottomrule \end{tabular} \end{scriptsize} \end{table*} \section{Discussion and future work} We have presented a simple parametric model for estimating the input-dependent variance of probabilistic forecasts. We assume that the data is distributed as $\mathcal{N}(f(\mathbf{x}),\sigma(\mathbf{x})^2)$, and that an approximation of the mean function $f(\mathbf{x})$ is available (the details of the model that approximates the mean function are not important). In order to estimate the variance $\sigma^2(\mathbf{x})$, we propose to minimize the Accuracy-Reliability (AR) cost function, which depends only on $\sigma$, on the errors $\varepsilon$, and on the size of the training set $N$. We have shown that the classical method of minimizing the Negative Log Probability Density (NLPD) does not guarantee that the result will be well-calibrated.
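As a side note, the calibration error used as a score in Table \ref{table:realdata} can be sketched as follows. This is a hedged sketch that assumes the reliability diagram compares empirical and nominal coverage of the predicted Gaussians (via the probability integral transform); the exact construction described in Section 2 may differ in implementation details:

```python
import numpy as np
from scipy.special import erf

def calibration_error(y, mu, sigma):
    """Maximum distance (in %) between the reliability diagram and the diagonal."""
    z = (y - mu) / sigma
    pit = 0.5 * (1.0 + erf(z / np.sqrt(2.0)))   # predicted Gaussian CDF at each observation
    qs = np.linspace(0.0, 1.0, 101)             # nominal coverage levels
    empirical = np.array([(pit <= q).mean() for q in qs])
    return 100.0 * np.max(np.abs(empirical - qs))
```

A perfectly calibrated forecast yields a small value (sampling noise only), while an over- or under-dispersed $\sigma$ inflates the score.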
On the other hand, methods that exclusively focus on post-process calibration tend to spoil accuracy. Indeed, we have discussed how accuracy and reliability are two conflicting metrics for a probabilistic forecast and how the latter can serve as a regularization term for the former. We have shown that by using the new AR cost function, one is able to accurately discover the hidden noise function. Several tests for synthetic and real-world (large) datasets have been shown. An important point to notice is that the method will inherently attempt to correct any inaccuracy in $f(\mathbf{x})$ by assigning larger variances. For instance, the agreement between predicted and true values of the standard deviation $\sigma$ presented in Figures \ref{G_dataset}--\ref{W_dataset} must be understood within the limits of the approximation of the mean function (provided by a Gaussian Process regression in those toy examples). By decoupling the prediction of the mean function from the estimation of the variance, this method is not very expensive, and it is suitable for large datasets. Moreover, for the same reason this method is very appealing in all applications where the mean function is necessarily computed via an expensive black box, such as computer simulations, for which the de facto standard of uncertainty quantification is based on running a large (time-consuming and expensive) ensemble, and for which large datasets of archived runs are often available. Finally, the formulation is well suited for high-dimensional problems, since the cost function is calculated point-wise for any instance of prediction and observation.\\ Although very simple and highly efficient, the method is still fully parametric, and hence it bears the usual drawback of possibly dealing with a large number of choices for the model selection.
Interesting future directions will be to incorporate the Accuracy-Reliability cost function in a non-parametric Bayesian method for heteroskedastic regression and to generalize the constraint of Gaussian residuals. \bibliographystyle{plainnat}
\section{Introduction.}\label{intro} Detecting projective and affine equivalences implies recognizing whether or not two objects are the same in a certain setup. Also, finding the symmetries of an object is important in order to understand its shape, and also to efficiently visualize and store the information regarding the object. For these reasons, these questions have been treated in fields like Computer Vision, Computer Graphics, Computer Aided Geometric Design and Pattern Recognition. Several studies addressing the problem are, for instance, \citep{Bokeloh2009,Brass20043,Huang19961473,Lebmeir2008707,Lebmeir2009}; a more comprehensive review can be found in \citep{Alcazar201551}. In recent years several papers \citep{Alcazar2014715,Alcazar2014b199,Alcazar2014a269,Alcazar201551,Alcazar2019775,ALCAZAR2019302,Hauer2019424,BIZZARRI2020101794,Hauer201868,DBLP,JUTTLER2022571} have pursued these problems for rational curves and surfaces, using tools from Algebraic Geometry and Computer Algebra. In the case of curves, the main idea behind these approaches is the fact that projective or affine equivalences between the curves, and symmetries as a particular case, have a corresponding transformation in the parameter domain which must be a M\"obius transformation whenever the curves are properly, i.e. birationally, parametrized. Thus, the usual approach is to compute the M\"obius transformations, and derive the equivalences themselves from there. For projective equivalences, the algorithms in \citep{Hauer201868,BIZZARRI2020112438} follow this strategy and compute the M\"obius transformations by solving a polynomial system which is increasingly big as the degree of the curves involved in the computation grows. Solving {this} polynomial system implies using Gr\"obner bases, which results in higher complexity. 
In this paper we use a different approach following the idea in \citep{Alcazar201551}, where the classical curvature and torsion, two well-known differential invariants, are used to compute the symmetries of a space rational curve. In \citep{Alcazar201551} the M\"obius transformations are derived as special factors of a gcd of two polynomials, computed from the curvature and torsion functions. On one hand, this has the advantage of working with smaller polynomials, since taking the gcd already reduces the degree of the polynomial one has to analyze. On the other hand, one avoids solving polynomial systems by using factoring instead. In a similar way, in this paper we present a strategy for 3D space rational curves that also pursues the M\"obius transformations first. However, in order to compute them we introduce two differential invariants, which we call {\it projective curvatures}, that allow us to obtain the M\"obius transformations using an analogous procedure to that in \citep{Alcazar201551}, i.e. using gcd computing and factoring over the reals, without resorting to polynomial system solving. The projective curvatures are {inspired by} ideas from differential invariant theory \citep{MR836734,dolgachev_2003,olver_1995,mansfield_2010}. To this aim, we first present four {rational expressions} that completely characterize projective equivalence but that, however, are not well suited for computation, because they do not {have a good behavior with respect to M\"obius transformations; in the terminology of this paper, we express this by saying that they ``do not commute" with M\"obius transformations.} From here, we develop two more {rational expressions}, the {\it projective curvatures}, that do commute with M\"obius transformations, and we characterize projective equivalence between the curves using these curvatures.
{In particular, it is this good behavior with respect to M\"obius transformations that allows us to solve the problem by using gcd and factoring.} The experimentation carried out in \citet{maple} shows that our approach is efficient and works better than \citep{Hauer201868,BIZZARRI2020112438} as the degree of the curves grows. The structure of the paper is the following. In Section \ref{prelmn} we provide {some background on the problem treated in the paper, as well as some preliminary notions and results to be used later.} The main results behind the algorithm are developed in Section \ref{newmeth}, where we introduce several {rational expressions} to finally derive the projective curvatures, and the theorems relating them to the projective equivalences between the curves. The algorithm itself is provided in Section \ref{sec5}. We present the results of the experimentation carried out in \citet{maple} in Section \ref{Imp}, where a comparison with the results in \citep{Hauer201868,BIZZARRI2020112438} is also given. Finally, we close with our conclusion in Section \ref{sec-conclusion}. Several technical results and technical proofs are deferred to two appendices, so as to improve the reading of the paper. \section*{Acknowledgements} The first three authors are partially supported by the project FAY-2021-9648 of Scientific Research Projects Unit, Karadeniz Technical University. The first author would like to thank TUBITAK (The Scientific and Technological Research Council of Turkey) for their financial support during his doctorate studies. Juan G. Alc\'azar is supported by the grant PID2020-113192GB-I00 (Mathematical Visualization: Foundations, Algorithms and Applications) from the Spanish MICINN, and is also a member of the Research Group {\sc asynacs} (Ref. {\sc ccee2011/r34}). Juan G. Alc\'azar and U\u{g}ur G\"oz\"utok were also supported by TUBITAK for a short research visit to Yildiz Technical University in Istanbul (Turkey).
{The authors are also thankful to the reviewers for their comments, which made it possible to improve the paper with respect to an earlier version.} \section{Preliminaries.}\label{prelmn} \subsection{General notions and assumptions.}\label{subsec1} For the sake of comparability, in general we will follow the notation in \citep{Hauer201868}. Thus, let $\bm{C_1}$ and $\bm{C_2}$ be two parametric rational curves embedded in the three real projective space ${{\Bbb P}^3(\mathbb{R})}$. The points $\bm{x}\in{{\Bbb P}^3(\mathbb{R})}$ are represented by $\bm{x}=(x_0,x_1,x_2,x_3)^T$, where the $x_i$ are real numbers and correspond to the \emph{homogeneous} coordinates of $\bm{x}$. In particular, whenever $\lambda\neq 0$, the vectors $\bm{x}$ and $\lambda \bm{x}$ represent the same point in ${{\Bbb P}^3(\mathbb{R})}$. The curves $\bm{C_1}$ and $\bm{C_2}$ are defined by means of parametrizations \begin{align*} \bm{p}:{\Bbb P}^1(\mathbb{R})\rightarrow \mathbf{C_1} \subset{{\Bbb P}^3(\mathbb{R})}, &&(t_0,t_1)\rightarrow \bm{p}(t_0,t_1)=(p_0(t_0,t_1),p_1(t_0,t_1),p_2(t_0,t_1),p_3(t_0,t_1)), \\ \bm{q}:{\Bbb P}^1(\mathbb{R})\rightarrow \mathbf{C_2} \subset{{\Bbb P}^3(\mathbb{R})}, &&(t_0,t_1)\rightarrow \bm{q}(t_0,t_1)=(q_0(t_0,t_1),q_1(t_0,t_1),q_2(t_0,t_1),q_3(t_0,t_1)), \end{align*} where ${\Bbb P}^1(\mathbb{R})$ denotes the real projective line. {The components of each parametric map} are homogeneous polynomials of degree $n$, \begin{align*} p_i(t_0,t_1)=\sum_{j=0}^{n} c_{j,i}t_0^{n-j}t_1^{j} \textit{ and } q_i(t_0,t_1)=\sum_{j=0}^{n} c'_{j,i}t_0^{n-j}t_1^{j}, \end{align*} with $i\in \{0,1,2,3\}$, and $c_{j,i},c'_{j,i}\in {\Bbb R}$. Additionally, we denote \begin{align*} \bm{c_j}=(c_{j,0},c_{j,1},c_{j,2},c_{j,3})^T, \; \bm{c'_{j}}=(c'_{j,0},c'_{j,1},c'_{j,2},c'_{j,3})^T, \end{align*} which will be referred to as the \emph{coefficient vectors} of the curves. 
{\color{black} For instance, consider the parametrization \begin{equation*} \bm{p}(t_0,t_1)=\begin{pmatrix} t_0^4+t_1^4 \\ 4t_0^3t_1 \\ -8t_0^2t_1^2 \\ t_0t_1^3-2t_1^4 \end{pmatrix}. \end{equation*} The coefficient vectors are $\bm{c_0}=(1,0,0,0)^T$, $\bm{c_1}=(0,4,0,0)^T$, $\bm{c_2}=(0,0,-8,0)^T$, $\bm{c_3}=(0,0,0,1)^T$, $\bm{c_4}=(1,0,0,-2)^T$. Hence, we can rewrite $\bm{p}$ as \begin{equation*} \bm{p}(t_0,t_1)=\begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & -8 & 0 & 0 \\ 0 & 0& 0 & 1& -2 \end{pmatrix}\begin{pmatrix} t_0^4 \\ t_0^3t_1\\ t_0^2t_1^2\\ t_0t_1^3\\ t_1^4 \end{pmatrix}. \end{equation*} } Furthermore, we make the following assumptions on $\bm{C_1}$ and $\bm{C_2}$; we will refer later to these hypotheses as \emph{hypotheses (i-iv)}. \begin{itemize} \item [(i)] The parametrizations $\bm{p}$ and $\bm{q}$ defining $\bm{C_1}$ and $\bm{C_2}$ are \emph{proper}, i.e. birational, so that almost all points in $\bm{C_i}$ {are reached by the parametrizations}. It is well-known that every rational curve can be reparametrized to obtain a proper parametrization \cite[see][]{Sendra2008}. \item [(ii)] The parametrizations $\bm{p}$ and $\bm{q}$ are in reduced form, i.e., \begin{equation*} gcd(p_0(t_0,t_1),p_1(t_0,t_1),p_2(t_0,t_1),p_3(t_0,t_1)) =gcd(q_0(t_0,t_1),q_1(t_0,t_1),q_2(t_0,t_1),q_3(t_0,t_1))=1. \end{equation*} \item [(iii)] Both parametrizations $\bm{p}$ and $\bm{q}$ have the same degree $n$. Notice that since projective transformations preserve the degree, the degree of projectively equivalent curves must be equal. Furthermore, we assume $n\geq 4$. \item [(iv)] None of the $\bm{C_i}$ is contained in a hyperplane. Consequently, the matrices $(c_{j,k})$, $(c'_{j,k})$ formed by the coefficient vectors $\bm{c_j}$ and $\bm{c'_{j}}$ have rank $4$ \citep{Hauer201868}. \end{itemize} \begin{remark}\label{isnotzero} Notice that because of these assumptions, the coefficient vectors $\bm{c_0}$ and $\bm{c'_{0}}$ cannot be identically zero. 
\end{remark} A \emph{projectivity} is a mapping $f$ defined in {${\Bbb P}^3(\mathbb{R})$} such that \begin{align*} f:{{\Bbb P}^3(\mathbb{R})}\rightarrow {{\Bbb P}^3(\mathbb{R})}:\bm{x}\mapsto f(\bm{x})=M\cdot\bm{x}, \end{align*} where $M=(m_{ij})_{0\leq i,j\leq 3}$ is a non-singular $4\times 4$ matrix. If $m_{00}\neq 0,\,\, m_{01}=m_{02}=m_{03}=0$, then $f$ is an affine transformation. {Observe that the matrix $M$ is defined up to a nonzero scalar, so that $M,\mu M$ with $\mu\in {\Bbb R}-\{0\}$ define the same projectivity.} Then we have the following definition. \begin{definition}\label{def3} Two curves $\bm{C_1}$ and $\bm{C_2}$ are said to be \emph{projectively equivalent} if there exists a projectivity $f$ such that $f(\bm{C_1})=\bm{C_2}$. A curve $\bm{C}$ has a \emph{projective symmetry} if there exists a non-trivial projectivity $f$ such that $f(\bm{C})=\bm{C}$. \end{definition} It is well-known \citep{Sendra2008,Hauer201868,BIZZARRI2020112438} that any two proper parametrizations of a rational curve are related by a linear rational transformation \begin{equation}\label{eq-moeb} \varphi:{\Bbb P}^1(\mathbb{R})\to {\Bbb P}^1(\mathbb{R}),\quad (t_0,t_1)\to\varphi(t_0,t_1)=(at_0+bt_1,ct_0+dt_1), \end{equation} with $ad-bc\neq 0$. {Notice that $a,b,c,d$ are defined up to a common nonzero scalar factor.} The mapping $\varphi$ is called a \emph{M\"obius transformation}. This fact is essential to prove the following result, which is used in \citep{Hauer201868,BIZZARRI2020112438}. \begin{theorem}\label{theo0.1} Two rational curves $\bm{C_1},\bm{C_2}$ properly parametrized by $\bm{p}$ and $\bm{q}$ are projectively equivalent if and only if there exist a non-singular $4\times 4$ matrix $M$ and a M\"obius transformation $\varphi(t_0,t_1)=(at_0+bt_1,ct_0+dt_1)$ with $ad-bc\neq 0$ such that \begin{equation}\label{eq0.1} M\bm{p}={\bm{q}\circ\varphi}. 
\end{equation} \end{theorem} \begin{color}{black} \begin{remark}\label{lambd} In general, projective equivalence implies that $M\bm{p}=\lambda(\bm{q}\circ \varphi)$ with $\lambda\neq 0$. However, since we are working in a projective setting, we can always safely assume that $\lambda=1$; this is the case in Eq. \eqref{eq0.1}. \end{remark} \end{color} \subsection{{A notion of invariance.}}\label{subsec2.2} {\color{black} We are interested in building rational expressions in terms of the components of the parametrizations that are invariant under projectivities, and that can help us recognize when two given rational curves are projectively equivalent. In more detail, let $\bm{u}=\bm{u}(t_0,t_1)$, $\bm{v}=\bm{v}(t_0,t_1)$ be two homogeneous parametrizations in ${\Bbb P}^3({\Bbb R})$, defining two curves $\bm{D_1},\bm{D_2}$, and assume that $M \bm{u}=\bm{v}$ with $\mbox{det}(M)\neq 0$. Therefore, according to Definition \ref{def3} the curves $\bm{D_1},\bm{D_2}$ are projectively equivalent. We want to build expressions $I_i$, which are rational in the components of ${\boldsymbol{u}}$ and its derivatives, so that $I_i(\bm{u})=I_i(M\bm{u})$ for all non-singular $4\times 4$ matrices $M$. This helps to recognize whether or not $\bm{u},\bm{v}$ satisfy that $M \bm{u}=\bm{v}$ for some $M$, since $I_i(\bm{u})=I_i(\bm{v})$ are necessary conditions for this. We say that $I_i$ is \emph{invariant} under projectivities, and we will often refer to $I_i$ simply as an \emph{invariant}. Since the $I_i$ are rational functions of ${\boldsymbol{u}}$ and its derivatives, when considering rational parametrizations ${\boldsymbol{u}}$, the $I_i$ are, in turn, rational functions. In order to build expressions of this type, we will use two ingredients. The first one has to do with determinants. 
Let ${\boldsymbol{w}}_i\in {\Bbb R}^n$ for $i=1,\ldots,n$, and let $\Vert{\boldsymbol{w}}_1\,\ldots\,{\boldsymbol{w}}_n\Vert$ denote the determinant of the vectors ${\boldsymbol{w}}_i$; we will keep this notation in the rest of the paper. Let $A\in {\mathcal M}_{n\times n}({\Bbb R})$ be a matrix whose determinant $\mbox{det}(A)$ is nonzero. Then we recall that \begin{equation}\label{ref-det} \Vert A{\boldsymbol{w}}_1\, \ldots \, A{\boldsymbol{w}}_n\Vert=\mbox{det}(A) \Vert{\boldsymbol{w}}_1\, \ldots\,{\boldsymbol{w}}_n\Vert. \end{equation} The second ingredient is differentiation. Let us denote, and we will also keep this notation in the rest of the paper, \begin{equation}\label{notation} \bm{u}_{t_0^k t_1^l}=\dfrac{\partial^{k+l} \bm{u}}{\partial t_0^k\partial t_1^l}(t_0,t_1), \end{equation} and analogously for ${\boldsymbol{v}}$. By repeatedly differentiating the equality $M \bm{u}=\bm{v}$ with respect to $t_0,t_1$, we get that \begin{equation}\label{ing2} M \bm{u}_{t_0^k t_1^l}= \bm{v}_{t_0^k t_1^l} \end{equation} for any choice $(k,l)\in ({\Bbb Z}^+\cup \{0\})\times ({\Bbb Z}^+\cup \{0\})$. Now let us see how to use these two ingredients to build rational expressions with the desired property. We will show it with one example. Let us pick four different elements $(k,l)\in ({\Bbb Z}^+\cup \{0\})\times ({\Bbb Z}^+\cup \{0\})$, say $\{(4,0), (0,1),(2,0),(3,0)\}$, which according to Eq. \eqref{notation} correspond to the derivatives $\bm{u}_{t_0^4} , \bm{u}_{t_1}, \bm{u}_{t_0^2}, \bm{u}_{t_0^3}$. Because of Eq. \eqref{ref-det}, we get that \begin{equation}\label{ba1} \Vert M\bm{u}_{t_0^4}\, M \bm{u}_{t_1}\, M\bm{u}_{t_0^2}\, M\bm{u}_{t_0^3}\Vert=\mbox{det}(M) \Vert\bm{u}_{t_0^4} \,\bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3}\Vert. \end{equation} Taking Eq. 
\eqref{ing2} into account, \begin{equation}\label{ba2} \Vert M\bm{u}_{t_0^4}\, M \bm{u}_{t_1}\, M\bm{u}_{t_0^2}\, M\bm{u}_{t_0^3}\Vert= \Vert\bm{v}_{t_0^4} \, \bm{v}_{t_1}\, \bm{v}_{t_0^2}\, \bm{v}_{t_0^3}\Vert. \end{equation} Therefore, from Eq. \eqref{ba1} and Eq. \eqref{ba2}, we get that \begin{equation}\label{ba3} \mbox{det}(M)\Vert\bm{u}_{t_0^4} \, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3}\Vert=\Vert\bm{v}_{t_0^4} \, \bm{v}_{t_1}\, \bm{v}_{t_0^2}\, \bm{v}_{t_0^3}\Vert. \end{equation} Any other choice of four different elements $(k,l)\in ({\Bbb Z}^+\cup \{0\})\times ({\Bbb Z}^+\cup \{0\})$ provides a relationship similar to Eq. \eqref{ba3}. For instance, if we pick $\{(1,0),(0,1),(2,0),(3,0)\}$ then we get \begin{equation}\label{ba4} \mbox{det}(M)\Vert\bm{u}_{t_0} \, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3}\Vert= \Vert\bm{v}_{t_0} \, \bm{v}_{t_1}\, \bm{v}_{t_0^2}\, \bm{v}_{t_0^3}\Vert. \end{equation} Finally, if we divide Eq. \eqref{ba3} and Eq. \eqref{ba4}, assuming that the determinants in the denominators do not vanish, $\mbox{det}(M)$ is canceled out and we obtain \begin{equation}\label{thezero} \dfrac{\Vert \bm{u}_{t_0^4} \, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}=\dfrac{\Vert \bm{v}_{t_0^4} \, \bm{v}_{t_1}\, \bm{v}_{t_0^2}\, \bm{v}_{t_0^3} \Vert}{\Vert{\boldsymbol{v}}_{t_0}\, {\boldsymbol{v}}_{t_1}\, {\boldsymbol{v}}_{t_0^2}\, {\boldsymbol{v}}_{t_0^3}\Vert} \end{equation} Thus, the rational expression at the left of Eq. 
\eqref{thezero}, which we denote as \begin{equation}\label{thefirst} I_1({\boldsymbol{u}})=\dfrac{\Vert \bm{u}_{t_0^4} \, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}, \end{equation} is invariant under projectivities, because $I_1({\boldsymbol{u}})=I_1(M{\boldsymbol{u}})$ for any non-singular $4\times 4$ matrix $M$; hence, $M{\boldsymbol{u}}={\boldsymbol{v}}$ implies that $I_1({\boldsymbol{u}})=I_1({\boldsymbol{v}})$. Other choices for $(k,l)$ give rise to other invariants, some of which we will be using in the next sections. If we have two parametrizations ${\boldsymbol{u}},{\boldsymbol{v}}$ and several invariants $I_i$, $i=1,\ldots,m$, $I_i({\boldsymbol{u}})=I_i({\boldsymbol{v}})$ are necessary conditions for the equality $M{\boldsymbol{u}}={\boldsymbol{v}}$ to hold. In the next section we will see that a good choice of invariants can ensure that these conditions are also sufficient. \begin{remark} Certainly, from Theorem \ref{theo0.1} we observe that what we want to recognize is not a relationship like $M{\boldsymbol{u}}={\boldsymbol{v}}$, which could be addressed directly, but $M{\boldsymbol{u}}={\boldsymbol{v}}\circ\varphi$, with $\varphi$ an unknown M\"obius transformation. It is the presence of $\varphi$ that makes the problem more difficult. But even though other tools will also be required, as we will see in the next section the notion of invariant addressed here will be crucial. \end{remark} } \section{A new method to detect projective equivalence.}\label{newmeth} In this section we consider two curves $\bm{C_1},\bm{C_2}$, defined by homogeneous parametrizations $\bm{p}$ and $\bm{q}$, satisfying the assumptions in Subsection \ref{subsec1}.
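Before developing the strategy, note that the invariance property of Subsection \ref{subsec2.2} can be verified symbolically. The sketch below evaluates $I_1$ from Eq. \eqref{thefirst} on the example quartic parametrization of Subsection \ref{subsec1}, before and after applying a non-singular matrix $M$; the particular $M$ chosen here is arbitrary and serves only as an illustration:

```python
import sympy as sp

t0, t1 = sp.symbols('t0 t1')

# Example quartic parametrization from Subsection 2.1 (degree n = 4).
u = sp.Matrix([t0**4 + t1**4, 4*t0**3*t1, -8*t0**2*t1**2, t0*t1**3 - 2*t1**4])

def I1(w):
    """I_1 = ||w_{t0^4} w_{t1} w_{t0^2} w_{t0^3}|| / ||w_{t0} w_{t1} w_{t0^2} w_{t0^3}||."""
    num = sp.Matrix.hstack(w.diff(t0, 4), w.diff(t1), w.diff(t0, 2), w.diff(t0, 3)).det()
    den = sp.Matrix.hstack(w.diff(t0), w.diff(t1), w.diff(t0, 2), w.diff(t0, 3)).det()
    return sp.cancel(num / den)

# An arbitrary non-singular matrix standing in for a projectivity.
M = sp.Matrix([[1, 2, 0, 1], [0, 1, 3, 0], [1, 0, 1, 2], [2, 1, 0, 1]])
assert M.det() != 0

# det(M) cancels between numerator and denominator, so I_1(M u) = I_1(u).
assert sp.simplify(I1(M * u) - I1(u)) == 0
```

The same check works for any invariant built, as in Subsection \ref{subsec2.2}, as a quotient of two $4\times 4$ determinants of derivatives.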
\subsection{Overall strategy.}\label{subsec3.1} As in previous approaches, our strategy takes advantage of Theorem \ref{theo0.1}, and proceeds by first computing the M\"obius transformation $\varphi$ in the statement of Theorem \ref{theo0.1}. If no such transformation is found, the curves $\bm{C_1},\bm{C_2}$ are not projectively equivalent. Otherwise, the matrix $M$ defining the projectivity between $\bm{C_1},\bm{C_2}$ is determined from $\varphi$; we will see that in our case this just amounts to performing a matrix multiplication. The main ideas in our approach, which we will develop in order later, are the following. \begin{itemize} \item [(A)] {\it Invariant functions.} We start with four {rational expressions in terms of the components of a parametrization and its derivatives}, that we denote by $I_i$, $i\in \{1,2,3,4\}$. These {expressions} are defined as quotients of certain determinants, {and are invariant under projectivities in the sense of Subsection \ref{subsec2.2}; their invariance can be proven with the same strategy used in Subsection \ref{subsec2.2}, and follows from the property for determinants recalled in Eq. \eqref{ref-det}.} From Theorem \ref{theo0.1}, if $\bm{p},\bm{q}$ correspond to projectively equivalent curves then $M\bm{p}=\bm{q}\circ\varphi$, with $\varphi$ a M\"obius transformation. {Then, using the notion of invariance provided in Subsection \ref{subsec2.2}}, we have \begin{equation}\label{neweq} I_i(\bm{p})=I_i(\bm{q}\circ \varphi) \end{equation} for $i\in \{1,2,3,4\}$. These are necessary conditions for $\bm{C_1},\bm{C_2}$ to be projectively equivalent, and will be revealed to be also sufficient. The equations that stem from Eq. \eqref{neweq} would give rise to a polynomial system in the parameters of $\varphi$. However, this system has a high order, and therefore solving it implies a high computational cost that we want to avoid. 
\item [(B)] {\it Projective curvatures.} In order to avoid solving a big polynomial system, we will derive, from the $I_i$, two more rational expressions $\kappa_1,\kappa_2$, {also invariant in the sense of Subsection \ref{subsec2.2}}, that we will refer to as \emph{projective curvatures}. Since $\kappa_1,\kappa_2$ are also invariants, $\kappa_1,\kappa_2$ do satisfy that \begin{equation}\label{neweq2} \kappa_i(\bm{p})=\kappa_i(\bm{q}\circ \varphi) \end{equation} for $i=1,2$. But the advantage of the $\kappa_i$ is that while in general $I_i(\bm{q}\circ \varphi)\neq I_i(\bm{q})\circ \varphi$, the $\kappa_i$ do satisfy $\kappa_i(\bm{q}\circ \varphi)=\kappa_i(\bm{q})\circ \varphi${; because of this, we say that the $\kappa_i$ {\it commute} with M\"obius transformations, while the $I_i$ do not}. By taking together the two relationships in Eq. \eqref{neweq2} for $i=1,2$ we can find the whole $\varphi$ as a special quadratic factor of the gcd of two polynomials built from the $\kappa_i$. Thus, to compute $\varphi$ we just need {to employ} gcd computing and factoring, and we do not need to solve any polynomial system. This idea was inspired by the strategy in \citet{Alcazar201551} to compute the symmetries of a rational space curve, where the classical curvature and torsion are used in a similar way. {The term {\it projective curvatures} refers to the fact that the $\kappa_i$ play here a role similar to the role played by the classical curvature and torsion to identify a curve up to rigid motions.} \item [(C)] {\it Projective equivalences.} Once $\varphi$ is obtained, the nonsingular matrix $M$ defining the projective equivalence can be computed. This just requires performing matrix multiplications involving the parametrizations ${\boldsymbol{p}}$ and ${\boldsymbol{q}}\circ \varphi$. 
\end{itemize} \subsection{{Invariants}.}\label{subsec3.2} In the rest of the paper, we will use the notation $\Vert{\boldsymbol{w}}_1\,\ldots\,{\boldsymbol{w}}_n\Vert$ for the determinant of $n$ vectors ${\boldsymbol{w}}_i\in {\Bbb R}^n$, and the notation $[{\boldsymbol{w}}_1\,\ldots\,{\boldsymbol{w}}_n]$ for the $n\times n$ matrix whose columns are the ${\boldsymbol{w}}_i$. Additionally, we will use the notation for partial derivatives introduced in Eq. \eqref{notation}. In order to motivate {the rational expressions we are going to introduce}, we consider first two homogeneous parametrizations ${\boldsymbol{u}}(t_0,t_1),{\boldsymbol{v}}(t_0,t_1)$ of curves of degree $n$ in ${{\Bbb P}^3(\mathbb{R})}$ defining two projectively equivalent curves, such that \begin{equation}\label{Meq} M{\boldsymbol{u}}(t_0,t_1)={\boldsymbol{v}}(t_0,t_1) \end{equation} where $M$ represents a projectivity; notice that because of Theorem \ref{theo0.1}, what we are pursuing is exactly Eq. \eqref{Meq}, with ${\boldsymbol{u}}:={\boldsymbol{p}}$, and ${\boldsymbol{v}}:={\boldsymbol{q}}\circ \varphi$ (where $\varphi$ is unknown). Now let $D({\boldsymbol{u}})(t_0,t_1),D({\boldsymbol{v}})(t_0,t_1)$ be the matrices defined as \begin{equation}\label{Dmatrices} D({\boldsymbol{u}})=[ {\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}],\quad D({\boldsymbol{v}})=[ {\boldsymbol{v}}_{t_0}\, {\boldsymbol{v}}_{t_1}\, {\boldsymbol{v}}_{t_0^2}\, {\boldsymbol{v}}_{t_0^3}]. \end{equation} Because of Eq. \eqref{Meq}, one can see that $M\cdot D({\boldsymbol{u}})=D({\boldsymbol{v}})$. Assume that the determinant of $D({\boldsymbol{u}})$ is not identically zero, i.e. that $\Vert {\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert$ does not vanish identically; later, in Lemma \ref{lem9}, we will see that this holds in our case. Then \begin{equation}\label{MD} M=D({\boldsymbol{v}})(D({\boldsymbol{u}}))^{-1}. 
\end{equation} Now let us differentiate $D({\boldsymbol{v}})(D({\boldsymbol{u}}))^{-1}$ with respect to $t_k$, $k=0,1$; we get \begin{equation}\label{long} \begin{split} \dfrac{\partial (D(\bm{v})\cdot (D(\bm{u}))^{-1})}{\partial t_k} &= \dfrac{\partial D(\bm{v})}{\partial t_k}\cdot (D(\bm{u}))^{-1} +D(\bm{v})\cdot \dfrac{\partial (D(\bm{u}))^{-1}}{\partial t_k} \\ & =\dfrac{\partial D(\bm{v})}{\partial t_k}\cdot (D(\bm{u}))^{-1}-D(\bm{v})(D(\bm{u}))^{-1}\cdot \dfrac{\partial D(\bm{u})}{\partial t_k}\cdot (D(\bm{u}))^{-1} \\ & =D(\bm{v})\cdot \left((D(\bm{v}))^{-1}\cdot\dfrac{\partial D(\bm{v})}{\partial t_k}-(D(\bm{u}))^{-1}\cdot\dfrac{\partial D(\bm{u})}{\partial t_k}\right)\cdot (D(\bm{u}))^{-1}. \end{split} \end{equation} Since $M=D({\boldsymbol{v}})(D({\boldsymbol{u}}))^{-1}$, {and $M$ is a constant matrix,} the matrices defined by the derivatives at the left-hand side of the above expression are identically zero, and therefore \begin{equation}\label{eq55} (D(\bm{u}))^{-1}\cdot\dfrac{\partial D(\bm{u})}{\partial t_k}=(D(\bm{v}))^{-1}\cdot\dfrac{\partial D(\bm{v})}{\partial t_k} \end{equation} for $k=0,1$. We need to take a closer look at the matrices $U_k,V_k$, defined as \begin{equation}\label{UVk} U_k=(D(\bm{u}))^{-1}\cdot\dfrac{\partial D(\bm{u})}{\partial t_k},\mbox{ }V_k=(D(\bm{v}))^{-1}\cdot\dfrac{\partial D(\bm{v})}{\partial t_k}. \end{equation} \ifnum \value{num}=1 {Notice that, according to Eq. \eqref{eq55}, $U_k=V_k$.
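As a computational sanity check, the identity in Eq. \eqref{eq55} can be verified with a computer algebra system; the following SymPy sketch uses an assumed test quartic ${\boldsymbol{u}}$ and an assumed projectivity matrix $M$ (neither taken from this paper), sets ${\boldsymbol{v}}=M{\boldsymbol{u}}$, and checks that $U_k=V_k$ for $k=0,1$:

```python
import sympy as sp

t0, t1 = sp.symbols('t0 t1')

# assumed test data: a quartic curve u and a projectivity M, with v = M*u
u = sp.Matrix([t0**4, t0**3*t1, t0*t1**3, t1**4])
M = sp.Matrix([[1, 2, 0, 1], [0, 1, 1, 0], [3, 0, 1, 0], [0, 0, 2, 1]])
v = M * u

def D(w):
    # D(w) = [w_{t0}  w_{t1}  w_{t0^2}  w_{t0^3}], as in Eq. (Dmatrices)
    return sp.Matrix.hstack(w.diff(t0), w.diff(t1), w.diff(t0, 2), w.diff(t0, 3))

Du, Dv = D(u), D(v)
Dui, Dvi = Du.inv(), Dv.inv()

# U_k = D(u)^{-1} dD(u)/dt_k must coincide with V_k = D(v)^{-1} dD(v)/dt_k,
# since D(v) = M*D(u) and the constant M cancels
for t in (t0, t1):
    Uk = Dui * Du.diff(t)
    Vk = Dvi * Dv.diff(t)
    assert (Uk - Vk).applyfunc(sp.cancel) == sp.zeros(4, 4)
```

The check succeeds because $D({\boldsymbol{v}})=MD({\boldsymbol{u}})$, so the constant factor $M$ cancels in $V_k$.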
Now {performing elementary but lengthy calculations, one can check that (see \cite[Section 3.2.1]{Gozutok2022} for a detailed deduction)}, \small \begin{equation}\label{Uk0} \begin{array}{cc} U_0=\begin{bmatrix} 0 & \frac{n-1}{t_1} & 0 & \dfrac{\Vert \bm{u}_{t_0^4} \, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert} \\ 0 & 0 & 0 & \dfrac{\Vert \bm{u}_{t_0} \, \bm{u}_{t_0^4}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}\\ 1 & -\frac{t_0}{t_1} & 0 & \dfrac{\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, \bm{u}_{t_0^4}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}\\ 0 & 0 & 1 & \dfrac{\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^4}\Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert} \end{bmatrix}, & U_1=\begin{bmatrix} \frac{n-1}{t_1} & -\frac{(n-1)t_0}{t_1^2} & 0 & -\frac{t_0}{t_1} \dfrac{\Vert \bm{u}_{t_0^4} \, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}\\ 0 & \frac{n-1}{t_1} & 0 & -\frac{t_0}{t_1}\dfrac{\Vert \bm{u}_{t_0} \, \bm{u}_{t_0^4}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}\\ -\frac{t_0}{t_1} & \frac{t_0^2}{t_1^2} & \frac{n-2}{t_1} & -\frac{t_0}{t_1}\dfrac{\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, \bm{u}_{t_0^4}\, \bm{u}_{t_0^3} \Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert}\\ 0 & 0 & -\frac{t_0}{t_1} & \frac{n-3}{t_1}-\frac{t_0}{t_1}\dfrac{\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, 
\bm{u}_{t_0^2}\, \bm{u}_{t_0^4}\Vert}{\Vert{\boldsymbol{u}}_{t_0}\, {\boldsymbol{u}}_{t_1}\, {\boldsymbol{u}}_{t_0^2}\, {\boldsymbol{u}}_{t_0^3}\Vert} \end{bmatrix} \end{array} \end{equation} \normalsize The expressions for $V_0,V_1$ are the same, after replacing ${\boldsymbol{u}}$ by ${\boldsymbol{v}}$. In the fourth column of $U_0$ we observe four quotients of determinants, the first of them being the quotient in Eq. \eqref{thefirst} (see Subsection \ref{subsec2.2}), which also arise in the fourth column of $V_0$. All these quotients {correspond to rational expressions which are invariant under projectivities in the sense of Subsection \ref{subsec2.2}}: this follows from the fact that $U_k=V_k$ for $k=0,1$, but also from the arguments in Subsection \ref{subsec2.2}. These four quotients are the {invariants} that we are going to use.} \else { \subsubsection{Demonstrating $U_k=V_k$.} In this subsection, let us show $U_k=V_k$ for $k=0,1$. First let us show that for all $k=0,1$, \begin{equation}\label{eq0.3} (D(\bm{p}))^{-1}\cdot\dfrac{\partial D(\bm{p})}{\partial t_k}=(D(\bm{q}))^{-1}\cdot\dfrac{\partial D(\bm{q})}{\partial t_k}, \end{equation} where $(D(\bm{p}))^{-1}$ denotes the inverse matrix of $D(\bm{p})$. Let $(D(\bm{p}))^{-1}\cdot\dfrac{\partial D(\bm{p})}{\partial t_k}=U_k$ for an unknown matrix $U_k$, $k=0,1$. Then $D(\bm{p})\cdot U_k=\dfrac{\partial D(\bm{p})}{\partial t_k}$. Denote the $j$th column of the matrix $D(\bm{p})$ by $D(\bm{p})_j$, and similarly denote the $j$th column of the matrix $U_k$ by $U_k^j$ for $1\leq j\leq 4$. So we have $8$ systems each of which corresponds to a pair of the values $j$ and $k$, namely \begin{equation}\label{eq0.4} D(\bm{p})\cdot U_k^j=\dfrac{\partial D(\bm{p})_j}{\partial t_k}, \end{equation} for $k=0,1$ and $1\leq j\leq 4$. The system corresponding to each pair is linear in the components of $U_k^j$. 
On the other hand, since the coefficient matrix $D(\bm{p})$ of each system is nonsingular ($\Delta(\bm{p})\neq 0$), each system has only one solution. The solutions to the systems are \begin{equation}\label{eq0.5} U_k^j=\begin{bmatrix} \dfrac{\lVert \dfrac{\partial D(\bm{p})_j}{\partial t_k}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0} \, \dfrac{\partial D(\bm{p})_j}{\partial t_k}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \dfrac{\partial D(\bm{p})_j}{\partial t_k}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \dfrac{\partial D(\bm{p})_j}{\partial t_k} \rVert}{\Delta(\bm{p})} \\ \end{bmatrix}. \end{equation} Using Euler's homogeneous function theorem, we conclude that for $k=0$ and $j<4$, \begin{equation}\label{eq0.6} U_0^1=\begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix},\,\, U_0^2=\begin{bmatrix} \frac{n-1}{t_1} \\ 0 \\ -\frac{t_0}{t_1} \\ 0 \end{bmatrix},\,\, U_0^3=\begin{bmatrix} 0 \\ 0\\ 0\\ 1 \end{bmatrix} \end{equation} and for $k=1$ and $j<4$, \begin{equation}\label{eq0.7} U_1^1=\begin{bmatrix} \frac{n-1}{t_1} \\ 0 \\ -\frac{t_0}{t_1} \\ 0 \end{bmatrix},\,\, U_1^2=\begin{bmatrix} -\frac{(n-1)t_0}{t_1^2} \\ \frac{n-1}{t_1} \\ \frac{t_0^2}{t_1^2} \\ 0 \end{bmatrix},\,\, U_1^3=\begin{bmatrix} 0 \\ 0\\ \frac{n-2}{t_1}\\ -\frac{t_0}{t_1} \end{bmatrix}. \end{equation} It is seen that for $k=0,1$ and $j<4$, the $U_k^j$ do not depend on $\bm{p}$. Now for $k=0$ and $j=4$, we have $\dfrac{\partial D(\bm{p})_4}{\partial t_0}=\dfrac{\partial \bm{p}_{t_0^3}}{\partial t_0}=\bm{p}_{t_0^4}$.
We obtain \begin{equation}\label{eq0.8} U_0^4=\begin{bmatrix} \dfrac{\lVert \bm{p}_{t_0^4} \, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0} \, \bm{p}_{t_0^4}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^4}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^4} \rVert}{\Delta(\bm{p})} \\ \end{bmatrix}=\begin{bmatrix} I_1(\bm{p}) \\ I_2(\bm{p})\\ I_3(\bm{p})\\ I_4(\bm{p}) \end{bmatrix}. \end{equation} Similarly, for $k=1$ and $j=4$, we have $\dfrac{\partial D(\bm{p})_4}{\partial t_1}=\dfrac{\partial \bm{p}_{t_0^3}}{\partial t_1}=\bm{p}_{t_0^3t_1}=\frac{n-3}{t_1}\bm{p}_{t_0^3}-\dfrac{t_0}{t_1}\bm{p}_{t_0^4}$. We obtain \begin{equation}\label{eq0.9} U_1^4=\begin{bmatrix} \dfrac{\lVert \frac{n-3}{t_1}\bm{p}_{t_0^3}-\dfrac{t_0}{t_1}\bm{p}_{t_0^4} \, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0} \, \frac{n-3}{t_1}\bm{p}_{t_0^3}-\dfrac{t_0}{t_1}\bm{p}_{t_0^4}\, \bm{p}_{t_0^2}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \frac{n-3}{t_1}\bm{p}_{t_0^3}-\dfrac{t_0}{t_1}\bm{p}_{t_0^4}\, \bm{p}_{t_0^3} \rVert}{\Delta(\bm{p})} \\ \dfrac{\lVert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \frac{n-3}{t_1}\bm{p}_{t_0^3}-\dfrac{t_0}{t_1}\bm{p}_{t_0^4} \rVert}{\Delta(\bm{p})} \\ \end{bmatrix}=\begin{bmatrix} -\frac{t_0}{t_1} I_1(\bm{p}) \\ -\frac{t_0}{t_1} I_2(\bm{p})\\ -\frac{t_0}{t_1} I_3(\bm{p})\\ \frac{n-3}{t_1}-\frac{t_0}{t_1} I_4(\bm{p}) \end{bmatrix}. \end{equation} Now let $(D(\bm{q}))^{-1}\cdot\dfrac{\partial D(\bm{q})}{\partial t_k}=V_k$ for an unknown matrix $V_k$, $k=0,1$. Then $D(\bm{q})\cdot V_k=\dfrac{\partial D(\bm{q})}{\partial t_k}$.
Denote the $j$th column of the matrix $D(\bm{q})$ by $D(\bm{q})_j$, and similarly denote the $j$th column of the matrix $V_k$ by $V_k^j$ for $1\leq j\leq 4$. Similarly, we again have $8$ systems each of which corresponds to a pair of the values $j$ and $k$ for $\bm{q}$. For the solutions $V_k^j$ we have $U_k^j=V_k^j$ for all $k$ and $j<4$, since $U_k^j$ and $V_k^j$ do not depend on the parametrizations. In addition, similar operations lead to \begin{equation}\label{eq0.10} V_0^4=\begin{bmatrix} I_1(\bm{q}) \\ I_2(\bm{q})\\ I_3(\bm{q})\\ I_4(\bm{q}) \end{bmatrix}, \end{equation} and \begin{equation}\label{eq0.11} V_1^4=\begin{bmatrix} -\frac{t_0}{t_1} I_1(\bm{q}) \\ -\frac{t_0}{t_1} I_2(\bm{q})\\ -\frac{t_0}{t_1} I_3(\bm{q})\\ \frac{n-3}{t_1}-\frac{t_0}{t_1} I_4(\bm{q}) \end{bmatrix}. \end{equation} Since $I_i(\bm{p})=I_i(\bm{q})$ for all $1\leq i\leq 4$, $U_k^4=V_k^4$ for all $k$. Therefore $U_k=V_k$ for all $k$. } \fi Thus, let us denote \begin{align} &A_1(\bm{u}):=\Vert \bm{u}_{t_0^4}\, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert,A_2(\bm{u}):=\Vert \bm{u}_{t_0}\, \bm{u}_{t_0^4}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert, A_3(\bm{u}):=\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, \bm{u}_{t_0^4}\, \bm{u}_{t_0^3} \Vert, \nonumber \\ &A_4(\bm{u}):=\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^4} \Vert, \Delta(\bm{u}):=\Vert \bm{u}_{t_0}\, \bm{u}_{t_1}\, \bm{u}_{t_0^2}\, \bm{u}_{t_0^3} \Vert. \label{As} \end{align} Then we {have the} following four rational expressions, which are the expressions arising in Eq. \eqref{Uk0}: \begin{equation}\label{first-inv} I_1(\bm{u}):=\dfrac{A_1(\bm{u})}{\Delta(\bm{u})},\, I_2(\bm{u}):=\dfrac{A_2(\bm{u})}{\Delta(\bm{u})},\, I_3(\bm{u}):=\dfrac{A_3(\bm{u})}{\Delta(\bm{u})},\, I_4(\bm{u}):=\dfrac{A_4(\bm{u})}{\Delta(\bm{u})}. \end{equation} The following lemma, proven in \ref{AppendixA}, guarantees that under our hypotheses the {rational expressions in Eq. \eqref{first-inv} are well defined}.
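The invariance of the quotients in Eq. \eqref{first-inv} under projectivities follows from $A_i(M{\boldsymbol{u}})=\det(M)A_i({\boldsymbol{u}})$ and $\Delta(M{\boldsymbol{u}})=\det(M)\Delta({\boldsymbol{u}})$, so the factor $\det(M)$ cancels. A short SymPy sketch makes this concrete; the test curve and the projectivity below are assumed data, not taken from this paper:

```python
import sympy as sp

t0, t1 = sp.symbols('t0 t1')

def invariants(w):
    # I_i = A_i/Delta as in Eq. (first-inv): A_i is obtained by replacing
    # the i-th column of D(w) = [w_{t0} w_{t1} w_{t0^2} w_{t0^3}] by w_{t0^4}
    Dw = sp.Matrix.hstack(w.diff(t0), w.diff(t1), w.diff(t0, 2), w.diff(t0, 3))
    w4, Delta = w.diff(t0, 4), Dw.det()
    out = []
    for i in range(4):
        Ai = Dw.copy()
        Ai[:, i] = w4
        out.append(sp.cancel(Ai.det() / Delta))
    return out

# assumed test data
u = sp.Matrix([t0**4, t0**3*t1, t0*t1**3, t1**4])
M = sp.Matrix([[1, 2, 0, 1], [0, 1, 1, 0], [3, 0, 1, 0], [0, 0, 2, 1]])

# I_i(M*u) = I_i(u): both A_i and Delta pick up the same factor det(M)
for a, b in zip(invariants(u), invariants(M * u)):
    assert sp.cancel(a - b) == 0
```
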
\begin{lemma}\label{lem9} Let $\bm{C}$ in ${{\Bbb P}^3(\mathbb{R})}$ be a rational algebraic curve properly parametrized by $\bm{u}$, satisfying the hypotheses (i-iv) in Subsection \ref{subsec1}. Then $\Delta(\bm{u})$ is not identically zero. \end{lemma} \vspace{0.3 cm} Now let us go back to our curves $\bm{C_1},\bm{C_2}$, defined by homogeneous parametrizations $\bm{p}$ and $\bm{q}$ of degree $n\geq 4$, as in Subsection \ref{subsec1}. We recall that we are assuming that $\bm{p},\bm{q}$ satisfy hypotheses (i-iv) in Subsection \ref{subsec1}. {The fact that the expressions in Eq. \eqref{first-inv} are invariant in the sense of Subsection \ref{subsec2.2} implies that $I_i({\boldsymbol{p}})=I_i(M{\boldsymbol{p}})=I_i({\boldsymbol{q}}\circ \varphi)$ for $i=1,2,3,4$. Thus, the existence of a M\"obius transformation $\varphi$ such that $I_i({\boldsymbol{p}})=I_i({\boldsymbol{q}}\circ \varphi)$ for $i=1,2,3,4$ is a necessary condition for $\bm{C_1},\bm{C_2}$ to be projectively equivalent. However, the next theorem proves that in this case, these conditions are also sufficient, which is a consequence of how these invariants were chosen. The intuitive idea is as follows: the conditions $I_i({\boldsymbol{p}})=I_i({\boldsymbol{q}}\circ \varphi)$ ensure that Eq. \eqref{UVk}, with ${\boldsymbol{p}},{\boldsymbol{q}}\circ \varphi$ replacing ${\boldsymbol{u}},{\boldsymbol{v}}$, is satisfied. From here, Eq. \eqref{eq55} is also satisfied, and therefore the derivatives at the left-hand side of Eq. \eqref{long} are zero. Thus, Eq. \eqref{MD} must be satisfied by some constant matrix $M$, with ${\boldsymbol{p}},{\boldsymbol{q}}\circ \varphi$ replacing ${\boldsymbol{u}},{\boldsymbol{v}}$. Theorem \ref{theo0.1} does the rest.} \begin{theorem}\label{teo29} Let $\bm{C}_1,\bm{C}_2$ be two rational algebraic curves properly parametrized by $\bm{p},\bm{q}$ satisfying hypotheses $(i-iv)$.
Then $\bm{C}_1,\bm{C}_2$ are projectively equivalent if and only if there exists a M\"obius transformation $\varphi$ such that \begin{equation}\label{Ipqmob} I_i({\boldsymbol{p}})=I_i({\boldsymbol{q}}\circ \varphi) \end{equation} for $i\in\{1,2,3,4\}$. \end{theorem} \begin{proof} The implication $(\Rightarrow)$ follows from Theorem \ref{theo0.1} and the discussion at the beginning of this subsection. So let us focus on $(\Leftarrow)$. Let ${\boldsymbol{u}}:={\boldsymbol{p}}$, ${\boldsymbol{v}}:={\boldsymbol{q}}\circ \varphi$. Since by hypothesis $I_i({\boldsymbol{u}})=I_i({\boldsymbol{v}})$ for $i=1,2,3,4$, taking Eq. \eqref{Uk0} into account we have $U_k=V_k$ for $k=0,1$, so Eq. \eqref{eq55} holds for $k=0,1$; notice that since the determinants of $D({\boldsymbol{u}}), D({\boldsymbol{v}})$ are, precisely, $\Delta({\boldsymbol{u}}),\Delta({\boldsymbol{v}})$, by Lemma \ref{lem9} the inverses $D({\boldsymbol{u}})^{-1}, D({\boldsymbol{v}})^{-1}$ exist. Hence, by Eq. \eqref{long}, the partial derivatives of $D(\bm{v})\cdot (D(\bm{u}))^{-1}$ vanish identically, and therefore the matrix $D(\bm{v})\cdot (D(\bm{u}))^{-1}$ is a constant nonsingular matrix $M$. Thus, $M\cdot D(\bm{u})=D(\bm{v})$, so $M \bm{u}_{t_0}=\bm{v}_{t_0}$ and $M \bm{u}_{t_1}=\bm{v}_{t_1}$. Using Euler's Homogeneous Function Theorem, we have \begin{equation*} n\bm{v}=t_0\bm{v}_{t_0}+t_1\bm{v}_{t_1}=t_0M \bm{u}_{t_0}+t_1M \bm{u}_{t_1}=M (t_0\bm{u}_{t_0}+t_1\bm{u}_{t_1})=nM \bm{u}, \end{equation*} so $M \bm{u}=\bm{v}$. \end{proof} The relationships in Eq. \eqref{Ipqmob} lead to a polynomial system in the parameters of the M\"obius transformation $\varphi${, which we might try to solve to find $\varphi$}. However, this system has a high degree. Because of this, we will derive other {rational expressions, also invariant,} that we call \emph{projective curvatures} {and that will allow us to solve our problem without using polynomial system solving}. This is done in the next subsection. \subsection{Projective curvatures.} \label{subsec-proj-curv} A first question when examining the relationships in Eq.
\eqref{Ipqmob} is how the $I_i({\boldsymbol{q}})$ change when ${\boldsymbol{q}}$ is composed with $\varphi$. Writing $\varphi(t_0,t_1)=(at_0+bt_1,ct_0+dt_1)=(u,v)$, calling $\delta=ad-bc\neq 0$ and using the Chain Rule, we get that \begin{align} v^4 I_1(\bm{q}\circ\varphi) &=c^3(n-1)(n-2)(n-3)(3v+dt_1)+c^2(n-1)(n-2)\delta t_1(2v+dt_1)I_4(\bm{q})\circ\varphi\nonumber \\ & -c(n-1)\delta ^2t_1^2(v+dt_1)I_3(\bm{q})\circ\varphi+\delta ^3t_1^4(dI_1(\bm{q})\circ\varphi-bI_2(\bm{q})\circ\varphi)\label{eq17} \\ v^4I_2(\bm{q}\circ\varphi) &=-c^4(n-1)(n-2)(n-3)t_1-c^3(n-1)(n-2)\delta t_1^2I_4(\bm{q})\circ\varphi\nonumber \\ & +c^2(n-1)\delta ^2t_1^3I_3(\bm{q})\circ\varphi+\delta ^3t_1^4(aI_2(\bm{q})\circ\varphi-cI_1(\bm{q})\circ\varphi)\label{eq18} \\ v^2I_3(\bm{q}\circ\varphi) &=-6c^2(n-2)(n-3)-3c(n-2)\delta t_1I_4(\bm{q})\circ\varphi+\delta ^2t_1^2I_3(\bm{q})\circ\varphi\label{eq19} \\ vI_4(\bm{q}\circ\varphi) &=4c(n-3)+\delta t_1I_4(\bm{q})\circ\varphi \label{eq20}. \end{align} From these expressions we see that the $I_i$ do not commute with $\varphi$, i.e. in general $I_i({\boldsymbol{q}}\circ \varphi)\neq I_i({\boldsymbol{q}})\circ \varphi$. We aim to find {rational functions} that do commute with $\varphi$. In order to do that, we substitute the equations \eqref{eq17}-\eqref{eq20} into the relationships in Eq. 
\eqref{Ipqmob}, and we get \begin{align} v^4 I_1(\bm{p})(t_0,t_1) &=c^3(n-1)(n-2)(n-3)(3v+dt_1)+c^2(n-1)(n-2)\delta t_1(2v+dt_1)I_4(\bm{q})(u,v)\nonumber \\ & -c(n-1)\delta ^2t_1^2(v+dt_1)I_3(\bm{q})(u,v)+\delta ^3t_1^4(dI_1(\bm{q})(u,v)-bI_2(\bm{q})(u,v))\label{eq23} \\ v^4I_2(\bm{p})(t_0,t_1) &=-c^4(n-1)(n-2)(n-3)t_1-c^3(n-1)(n-2)\delta t_1^2I_4(\bm{q})(u,v)\nonumber \\ & +c^2(n-1)\delta ^2t_1^3I_3(\bm{q})(u,v)+\delta ^3t_1^4(aI_2(\bm{q})(u,v)-cI_1(\bm{q})(u,v))\label{eq24} \\ v^2I_3(\bm{p})(t_0,t_1) &=-6c^2(n-2)(n-3)-3c(n-2)\delta t_1I_4(\bm{q})(u,v)+\delta ^2t_1^2I_3(\bm{q})(u,v)\label{eq25} \\ vI_4(\bm{p})(t_0,t_1) &=4c(n-3)+\delta t_1I_4(\bm{q})(u,v) \label{eq26}, \end{align} {where we have performed the substitution $u:=at_0+bt_1$, $v:=ct_0+dt_1$ for the components of the M\"obius transformation $\varphi$ (see Eq. \eqref{eq-moeb}). At this point $u,v$ are considered as independent variables, i.e. we forget about the dependence between $u,v$ and $t_0,t_1$.} {Next, we} want to eliminate the parameters $a,b,c,d$ from these equations, something that we can {express} in terms of polynomial ideals. Indeed, let us write $J_i=I_i(\bm{q})(u,v)$ for $i\in \{1,2,3,4\}$, and let us denote the $I_i(\bm{p})$ by just $I_i$. Then after clearing denominators, the equations \eqref{eq23}-\eqref{eq26} generate a polynomial ideal ${\mathcal I}$ of \[ {\Bbb R}[a,b,c,d,t_1,v,I_1,I_2,I_3,I_4,J_1,J_2,J_3,J_4]. \] Eliminating $a,b,c,d$ from equations \eqref{eq23}-\eqref{eq26} amounts to finding elements in the elimination ideal \[ {\mathcal I}^\star={\mathcal I}\cap{\Bbb R}[t_1,v,I_1,I_2,I_3,I_4,J_1,J_2,J_3,J_4]. \] \ifnum \value{num}=1 In our case, this can be done by hand, without using Gr\"obner bases; the process consists of several easy, but lengthy, substitutions and manipulations \cite[see][Section 3.3.1]{Gozutok2022}.
Eventually we get the expressions} \else { \subsubsection{Eliminating variables.} In our case, this can be done by hand, without using Gr\"obner bases; the process consists of the following easy, but lengthy, substitutions and manipulations. Now we are ready to eliminate the coefficients $a,b,c,d$ from the above system, in order to find a system of the form $\left\lbrace \kappa_1(\bm{p})=\kappa_1(\bm{q})(u,v),\kappa_2(\bm{p})=\kappa_2(\bm{q})(u,v)\right\rbrace$. We first eliminate $a,b,d$ from \eqref{eq23} and \eqref{eq24}: multiplying \eqref{eq23} by $t_1$ and \eqref{eq24} by $-t_0$ and summing the results, we have \begin{align} v^4I_0(\bm{p})(t_0,t_1)&=4c^3(n-1)(n-2)(n-3)t_1v+3c^2(n-1)(n-2)\delta t_1^2vI_4(\bm{q})(u,v) \nonumber\\ &-2c(n-1)\delta ^2t_1^3vI_3(\bm{q})(u,v)+\delta ^3t_1^4I_0(\bm{q})(u,v), \label{eq32} \end{align} where $I_0(\bm{p})(t_0,t_1)=t_1I_1(\bm{p})(t_0,t_1)-t_0I_2(\bm{p})(t_0,t_1)$. Again, to eliminate $a$ from \eqref{eq24} we use $av-cu=\delta t_1$. Substituting $a=\dfrac{\delta t_1+cu}{v}$ in \eqref{eq24}, we obtain \begin{align} v^5I_2(\bm{p})(t_0,t_1)&=-c^4(n-1)(n-2)(n-3)t_1v-c^3(n-1)(n-2)\delta t_1^2vI_4(\bm{q})(u,v)\nonumber \\ &+c^2(n-1)\delta ^2t_1^3vI_3(\bm{q})(u,v)-c\delta^3t_1^4I_0(\bm{q})(u,v)+\delta^4t_1^5I_2(\bm{q})(u,v).\label{eq33} \end{align} We know that the coefficients of the M\"obius transformation satisfy $\delta =ad-bc\neq 0$. Then there is a number $s\neq 0$ such that $\delta s=1$. Now we use this in the equations \eqref{eq25}, \eqref{eq26}, \eqref{eq32} and \eqref{eq33} in order to eliminate the parameters $\delta ,c$.
Thus multiplying the equations \eqref{eq26}, \eqref{eq25}, \eqref{eq32}, \eqref{eq33} by $s$, $s^2$, $s^3$, $s^4$, respectively, we have \begin{align} svI_4(\bm{p})(t_0,t_1) &=4(cs)(n-3)+t_1I_4(\bm{q})(u,v) \label{eq34}\\ s^2v^2I_3(\bm{p})(t_0,t_1) &=-6(cs)^2(n-2)(n-3)-3(cs)(n-2) t_1I_4(\bm{q})(u,v)+t_1^2I_3(\bm{q})(u,v)\label{eq35} \\ s^3v^4I_0(\bm{p})(t_0,t_1) &=4(cs)^3(n-1)(n-2)(n-3)t_1v+3(cs)^2(n-1)(n-2)t_1^2vI_4(\bm{q})(u,v)\nonumber \\ &-2(cs)(n-1)t_1^3vI_3(\bm{q})(u,v)+t_1^4I_0(\bm{q})(u,v)\label{eq36} \\ s^4v^5I_2(\bm{p})(t_0,t_1) &=-(cs)^4(n-1)(n-2)(n-3)t_1v-(cs)^3(n-1)(n-2)t_1^2vI_4(\bm{q})(u,v)\nonumber\\ &+(cs)^2(n-1)t_1^3vI_3(\bm{q})(u,v)-(cs)t_1^4I_0(\bm{q})(u,v)+t_1^5I_2(\bm{q})(u,v).\label{eq37} \end{align} From now on, unless otherwise stated explicitly, we denote $I_i(\bm{q})(u,v)$ by $J_i(\bm{q})$ and drop $(t_0,t_1)$ from the function $I_i(\bm{p})(t_0,t_1)$ for the sake of shortening the equations. We are ready to eliminate $c$ from the above equations. Let us get $cs$ from the first equation and write it as \begin{equation*} cs=\dfrac{vsI_4(\bm{p})-t_1J_4(\bm{q})}{4(n-3)}. \end{equation*} Substituting $cs$ in the equations \eqref{eq35}, \eqref{eq36}, \eqref{eq37} and using Theorem \ref{teo17}, we have \begin{equation}\label{eq38} s^2=\dfrac{t_1^2(8(n-3)J_3(\bm{q})+3(n-2)J_4^2(\bm{q}))}{v^2(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))}, \end{equation} \begin{align} s^3 &=\dfrac{t_1^4(8(n-3)^2J_0(\bm{q})+4(n-1)(n-3)vJ_3(\bm{q})J_4(\bm{q})+(n-1)(n-2)vJ_4^3(\bm{q}))}{v^4(8(n-3)^2I_0(\bm{p})+4(n-1)(n-3)t_1I_3(\bm{p})I_4(\bm{p})+(n-1)(n-2)t_1I_4^3(\bm{p}))},\label{eq39} \end{align} \small \begin{align} s^4 &=\dfrac{t_1^5(256(n-3)^3J_2(\bm{q})+64(n-3)^2J_0(\bm{q})J_4(\bm{q})+16(n-1)(n-3)vJ_3(\bm{q})J_4^2(\bm{q})+3(n-1)(n-2)vJ_4^4(\bm{q}))}{v^5(256(n-3)^3I_2(\bm{p})+64(n-3)^2I_0(\bm{p})I_4(\bm{p})+16(n-1)(n-3)t_1I_3(\bm{p})I_4^2(\bm{p})+3(n-1)(n-2)t_1I_4^4(\bm{p}))},\label{eq40} \end{align} \normalsize respectively.
Eliminating $s$ by equating the cube of \eqref{eq38} and the square of \eqref{eq39}, we obtain \begin{align} &\dfrac{(8(n-3)^2I_0(\bm{p})+4(n-1)(n-3)t_1I_3(\bm{p})I_4(\bm{p})+(n-1)(n-2)t_1I_4^3(\bm{p}))^2}{t_1^2(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))^3} \nonumber\\ &=\dfrac{(8(n-3)^2J_0(\bm{q})+4(n-1)(n-3)vJ_3(\bm{q})J_4(\bm{q})+(n-1)(n-2)vJ_4^3(\bm{q}))^2}{v^2(8(n-3)J_3(\bm{q})+3(n-2)J_4^2(\bm{q}))^3}. \label{eq41} \end{align} And by equating the square of \eqref{eq38} and \eqref{eq40}, we obtain \begin{align} &\dfrac{256(n-3)^3I_2(\bm{p})+64(n-3)^2I_0(\bm{p})I_4(\bm{p})+16(n-1)(n-3)t_1I_3(\bm{p})I_4^2(\bm{p})+3(n-1)(n-2)t_1I_4^4(\bm{p})}{t_1(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))^2}\nonumber\\ &=\dfrac{256(n-3)^3J_2(\bm{q})+64(n-3)^2J_0(\bm{q})J_4(\bm{q})+16(n-1)(n-3)vJ_3(\bm{q})J_4^2(\bm{q})+3(n-1)(n-2)vJ_4^4(\bm{q})}{v(8(n-3)J_3(\bm{q})+3(n-2)J_4^2(\bm{q}))^2}.\label{eq42} \end{align} Eventually we get } \fi \small \begin{align} &\dfrac{(8(n-3)^2I_0(\bm{p})+4(n-1)(n-3)t_1I_3(\bm{p})I_4(\bm{p})+(n-1)(n-2)t_1I_4^3(\bm{p}))^2}{t_1^2(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))^3} \nonumber\\ &=\dfrac{(8(n-3)^2J_0(\bm{q})+4(n-1)(n-3)vJ_3(\bm{q})J_4(\bm{q})+(n-1)(n-2)vJ_4^3(\bm{q}))^2}{v^2(8(n-3)J_3(\bm{q})+3(n-2)J_4^2(\bm{q}))^3}, \label{eq41*} \end{align} \normalsize and \small \begin{align} &\dfrac{256(n-3)^3I_2(\bm{p})+64(n-3)^2I_0(\bm{p})I_4(\bm{p})+16(n-1)(n-3)t_1I_3(\bm{p})I_4^2(\bm{p})+3(n-1)(n-2)t_1I_4^4(\bm{p})}{t_1(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))^2}\nonumber\\ &=\dfrac{256(n-3)^3J_2(\bm{q})+64(n-3)^2J_0(\bm{q})J_4(\bm{q})+16(n-1)(n-3)vJ_3(\bm{q})J_4^2(\bm{q})+3(n-1)(n-2)vJ_4^4(\bm{q})}{v(8(n-3)J_3(\bm{q})+3(n-2)J_4^2(\bm{q}))^2},\label{eq42*} \end{align} \normalsize where \[ I_0({\boldsymbol{p}})= t_1I_1({\boldsymbol{p}})-t_0I_2({\boldsymbol{p}}),\mbox{ }J_0({\boldsymbol{q}})= vJ_1({\boldsymbol{q}})-uJ_2({\boldsymbol{q}}). \] Notice that Eq.
\eqref{eq41*} and \eqref{eq42*} have a very special structure: if we examine the right-hand side and the left-hand side of each of these equations, we detect the same function but evaluated at $(t_0,t_1)$, at the left, and at $(u,v)$, at the right. This motivates our definition of the following two functions, which we call \emph{projective curvatures}: \small \begin{equation}\label{kappas} \begin{split} & \kappa_1(\bm{p})=\dfrac{(8(n-3)^2I_0(\bm{p})+4(n-1)(n-3)t_1I_3(\bm{p})I_4(\bm{p})+(n-1)(n-2)t_1I_4^3(\bm{p}))^2}{t_1^2(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))^3}, \\ & \kappa_2(\bm{p})=\dfrac{256(n-3)^3I_2(\bm{p})+64(n-3)^2I_0(\bm{p})I_4(\bm{p})+16(n-1)(n-3)t_1I_3(\bm{p})I_4^2(\bm{p})+3(n-1)(n-2)t_1I_4^4(\bm{p})}{t_1(8(n-3)I_3(\bm{p})+3(n-2)I_4^2(\bm{p}))^2}. \end{split} \end{equation} \normalsize \begin{remark} \label{otherproj} Notice that there are additional possibilities for projective curvatures, other than $\kappa_1,\kappa_2$ in Eq. \eqref{kappas}. What we really want are elements in the ideal ${\mathcal I}^\star$ which correspond to the subtraction of the evaluations of a certain rational function at $t_1,I_1,I_2,I_3,I_4$ and at $v,J_1,J_2,J_3,J_4$, respectively. We do not yet have a complete theoretical explanation of why the ideal ${\mathcal I}^\star$ contains such elements. This probably requires a further look into the theory of differential invariants {(see \citep{MR836734,dolgachev_2003,olver_1995,mansfield_2010} for further information on this topic)}. \end{remark} The next result follows directly from Eq. \eqref{eq41*} and \eqref{eq42*}. \begin{lemma}\label{lem22} Let $\bm{C}$ be a rational algebraic curve properly parametrized by $\bm{p}$ satisfying hypotheses (i-iv) and let $\varphi(t_0,t_1)=(at_0+bt_1,ct_0+dt_1)$ be a M\"obius transformation with $ad-bc\neq 0$. The following equalities hold. \begin{itemize} \item[i.] $\kappa_1(\bm{p}\circ \varphi)=\kappa_1(\bm{p})\circ \varphi$, \item[ii.]
$\kappa_2(\bm{p}\circ \varphi)=\kappa_2(\bm{p})\circ \varphi$. \end{itemize} \end{lemma} The fact that $\kappa_1,\kappa_2$ are well defined follows from the following result, which is proven in \ref{AppendixB}. In fact, in \ref{AppendixB} we prove a stronger result which implies this lemma, namely that the $I_i$, $i\in\{1,2,3,4\}$, are algebraically independent. \begin{lemma} \label{denomnot} The denominators in $\kappa_1,\kappa_2$ do not identically vanish, and therefore $\kappa_1,\kappa_2$ are well defined. \end{lemma} Now we are ready to present our main result, that characterizes the projective equivalences of rational $3D$ curves in terms of the rational invariant functions $\kappa_1$ and $\kappa_2$. \begin{theorem}\label{teo23} Let $\bm{C}_1,\bm{C}_2$ be two rational algebraic curves properly parametrized by $\bm{p},\bm{q}$ satisfying hypotheses (i-iv). Then $\bm{C}_1, \bm{C}_2$ are projectively equivalent if and only if there exist {two linear functions $u:=at_0+bt_1,v:=ct_0+dt_1$, with $ad-bc\neq 0$, satisfying the following equations} \begin{align} \kappa_1(\bm{p})(t_0,t_1)&=\kappa_1(\bm{q})(u,v) \label{eq44} \\ \kappa_2(\bm{p})(t_0,t_1)&=\kappa_2(\bm{q})(u,v), \label{eq45} \end{align} and such that $D({\boldsymbol{q}} \circ \varphi)(D({\boldsymbol{p}}))^{-1}${, with $\varphi (t_0,t_1)=(at_0+bt_1,ct_0+dt_1)$,} is a constant matrix $M$. Furthermore, $f({\boldsymbol{x}})=M\cdot{\boldsymbol{x}}$ is a projective equivalence between $\bm{C}_1, \bm{C}_2$. \end{theorem} \begin{proof} $(\Rightarrow)$ From Theorem \ref{theo0.1}, there exists a M\"obius transformation $\varphi$ such that $M\cdot{\boldsymbol{p}}={\boldsymbol{q}}\circ \varphi$; furthermore, from the discussion at the beginning of Subsection \ref{subsec3.2}, $M=D({\boldsymbol{q}} \circ \varphi)(D({\boldsymbol{p}}))^{-1}$. By Theorem \ref{teo29}, $I_i({\boldsymbol{p}})=I_i({\boldsymbol{q}} \circ \varphi)$ for $i\in\{1,2,3,4\}$, and therefore Eq. \eqref{eq41*} and \eqref{eq42*} hold. 
{Picking $u,v$ as the components of $\varphi$, the} rest follows from the definition of $\kappa_1,\kappa_2$. $(\Leftarrow)$ From the proof of the implication $``\Leftarrow"$ in Theorem \ref{teo29}, if $D({\boldsymbol{q}} \circ \varphi)(D({\boldsymbol{p}}))^{-1}=M$, with $M$ constant, then $M\cdot{\boldsymbol{p}}={\boldsymbol{q}}\circ \varphi$, so $f({\boldsymbol{x}})=M\cdot{\boldsymbol{x}}$ is a projective equivalence between $\bm{C}_1,\bm{C}_2$. \end{proof} \section{The algorithm.}\label{sec5} In this section we will see how to turn the result in Theorem \ref{teo23} into an algorithm to detect projective equivalence. \begin{color}{black}Before proceeding, let us provide some insight into the main idea. It is probably clearer to consider the problem in an affine setting. The projective curvatures $\kappa_1,\kappa_2$ introduced in Eq. \eqref{kappas}, in an affine setting, correspond to two rational functions \[ \tilde{\kappa}_1(t),\mbox{ }\tilde{\kappa}_2(t), \] where now $t$ is an affine parameter. Furthermore, from Theorem \ref{teo23} we know that \begin{equation}\label{aux01} \tilde{\kappa}_1(t)=\tilde{\kappa}_1(\tilde{\varphi}(t)),\mbox{ }\tilde{\kappa}_2(t)=\tilde{\kappa}_2(\tilde{\varphi}(t)) \end{equation} where \begin{equation}\label{aux02} \tilde{\varphi}(t)=\dfrac{at+b}{ct+d}, \end{equation} which is also an affine rational function, corresponding to the M\"obius function $\varphi$ in Theorem \ref{teo23}. Now let $s$ be a new variable, and consider the expressions \begin{equation}\label{aux11} \tilde{\kappa}_1(t)-\tilde{\kappa}_1(s)=0,\mbox{ }\tilde{\kappa}_2(t)-\tilde{\kappa}_2(s)=0. \end{equation} After clearing denominators, one can see the expressions in Eq. \eqref{aux11} as defining two algebraic curves in the $(t,s)$ plane. Because of Eq. \eqref{aux01}, all the $(t,s)$ points satisfying that $s=\tilde{\varphi}(t)$ are points of these two curves.
Thus, these two curves have infinitely many points in common, so by Bezout's Theorem they must share a factor. And this factor is, precisely, the factor obtained from $s-\tilde{\varphi}(t)=0$ after clearing denominators. We just need to formalize this idea, and work projectively. \end{color} In order to do this, first we write \begin{align} &\kappa_1(\bm{p})(t_0,t_1)=\dfrac{U(t_0,t_1)}{V(t_0,t_1)} &\kappa_2(\bm{p})(t_0,t_1)=\dfrac{Y(t_0,t_1)}{Z(t_0,t_1)}, \label{eq46} \\ &\kappa_1(\bm{q})(t_0,t_1)=\dfrac{\bar{U}(t_0,t_1)}{\bar{V}(t_0,t_1)} &\kappa_2(\bm{q})(t_0,t_1)=\dfrac{\bar{Y}(t_0,t_1)}{\bar{Z}(t_0,t_1)}, \label{eq47} \end{align} where $U,V,Y,Z$ and $\bar{U},\bar{V},\bar{Y},\bar{Z}$ are homogeneous polynomials such that $\gcd(U,V)=1$, $\gcd(Y,Z)=1$, $\gcd(\bar{U},\bar{V})=1$ and $\gcd(\bar{Y},\bar{Z})=1$. From Theorem \ref{teo23} we know that if the curves are projectively equivalent, then \begin{equation}\label{eq48} \kappa_1(\bm{p})(t_0,t_1)-\kappa_1(\bm{q})(u,v)=0,\quad \kappa_2(\bm{p})(t_0,t_1)-\kappa_2(\bm{q})(u,v)=0 \end{equation} {must hold for two functions $u:=at_0+bt_1,v:=ct_0+dt_1$.} Clearing the denominators of these equations, {where we see $u,v$ as independent variables from $t_0,t_1$,} we define two homogeneous polynomials $E_1$ and $E_2$ in $t_0,t_1,u,v$, \begin{align} E_1(t_0,t_1,u,v)&:=U(t_0,t_1)\bar{V}(u,v)-V(t_0,t_1)\bar{U}(u,v) \label{eq49} \\ E_2(t_0,t_1,u,v)&:=Y(t_0,t_1)\bar{Z}(u,v)-Z(t_0,t_1)\bar{Y}(u,v). \label{eq50} \end{align} We are interested in the common factors of $E_1$ and $E_2$. Thus, let us write \begin{equation}\label{eq51} G(t_0,t_1,u,v):=\gcd(E_1(t_0,t_1,u,v),E_2(t_0,t_1,u,v)). \end{equation} Finally, for an arbitrary M\"obius transformation $\varphi(t_0,t_1)=(at_0+bt_1,ct_0+dt_1)=(u,v)$, $ad-bc\neq 0$, we say that \begin{equation}\label{eq52} F(t_0,t_1,u,v)=u(ct_0+dt_1)-v(at_0+bt_1) \end{equation} is the associated \emph{M\"obius-like factor}. Notice that the condition $ad-bc\neq 0$ guarantees that $F$ is irreducible. 
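To make the notion of a M\"obius-like factor concrete, the following SymPy helper (an illustrative sketch, not the implementation used for this paper) decides whether a given factor of $G$ matches Eq. \eqref{eq52}, i.e. whether it equals $u(ct_0+dt_1)-v(at_0+bt_1)$ for some $a,b,c,d$ with $ad-bc\neq 0$, and if so recovers $(a,b,c,d)$ up to a constant:

```python
import sympy as sp

t0, t1, u, v = sp.symbols('t0 t1 u v')

def mobius_from_factor(F):
    """Match F against u*(c*t0 + d*t1) - v*(a*t0 + b*t1); return (a, b, c, d)
    with a*d - b*c != 0, or None if F is not Mobius-like."""
    P = sp.Poly(sp.expand(F), t0, t1, u, v)
    # a Mobius-like factor only contains the monomials t0*u, t1*u, t0*v, t1*v
    allowed = {(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 1), (0, 1, 0, 1)}
    if any(m not in allowed for m in P.monoms()):
        return None
    c, d = P.coeff_monomial(t0*u), P.coeff_monomial(t1*u)
    a, b = -P.coeff_monomial(t0*v), -P.coeff_monomial(t1*v)
    if sp.simplify(a*d - b*c) == 0:
        return None
    return a, b, c, d

# the factor t0*v - t1*u is Mobius-like and corresponds (up to a constant)
# to the identity transformation
assert mobius_from_factor(t0*v - t1*u) == (-1, 0, 0, -1)
# a bilinear factor with a*d - b*c = 0 is rejected
assert mobius_from_factor(t0*u) is None
```

Since M\"obius transformations are projective, the tuple $(a,b,c,d)$ is only determined up to a nonzero multiple, which is harmless for the algorithm.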
Then we have the following result. \begin{theorem}\label{teo24} Let $\bm{C}_1,\bm{C}_2$ be two rational algebraic curves properly parametrized by $\bm{p},\bm{q}$ satisfying hypotheses (i-iv), and let $G$ be as in Eq. \eqref{eq51}. If $\bm{C}_1$ and $\bm{C}_2$ are projectively equivalent then there exists a M\"obius-like factor $F$ such that $F$ divides $G$. \end{theorem} \begin{color}{black} \begin{proof} From Theorem \ref{teo23}, the zeroset of $F$ is included in the zerosets of the expressions in Eq. \eqref{eq48}, and therefore in the zerosets of $E_1,E_2$. Thus, the zeroset of $F$ is included in the zeroset of $G$, and therefore, by Study's Lemma (see Section 6.13 in \cite{Fisher}), $F$ divides $G$. \end{proof} \end{color} Thus, in order to compute the M\"obius transformation $\varphi$, we just need to compute the polynomial $G(t_0,t_1,u,v)$ in Eq. \eqref{eq51}, factor it, and look for the M\"obius-like factors. In general we need to factor over the reals, which can be efficiently done with the command {\tt AFactors} in \citet{maple}. Once the $\varphi$ are found, we check whether or not $D({\boldsymbol{q}}\circ \varphi)(D({\boldsymbol{p}}))^{-1}$ is constant: in the affirmative case, $M=D({\boldsymbol{q}}\circ \varphi)(D({\boldsymbol{p}}))^{-1}$ defines a projectivity between the curves. For this last part, it is computationally cheaper to compute $D(({\boldsymbol{q}}\circ \varphi)(t_0))(D({\boldsymbol{p}}(t_0)))^{-1}$ for some $t_0\in {\Bbb R}$, and then check whether or not $M\cdot{\boldsymbol{p}}={\boldsymbol{q}}\circ \varphi$ holds. Therefore, we get the following algorithm, {\tt Prj3D}, to check whether or not two given rational curves are projectively equivalent. In order to execute the algorithm, we need that not both $\kappa_1,\kappa_2$ are constant. We conjecture that the space curves with both $\kappa_i$ constant may be related to $W$-curves \citep{Sasaki1937}, but at this point we must leave this case out of our study. 
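The final verification step, sampling $M$ at one parameter value and then checking $M\bm{p}=\bm{q}\circ\varphi$ symbolically, can be sketched in SymPy as follows. The curve $\bm{p}$, the projectivity $M_0$ and the M\"obius map $\varphi(t_0,t_1)=(t_0+t_1,t_0+2t_1)$ below are assumed test data, chosen so that $\bm{q}\circ\varphi=M_0\bm{p}$ by construction:

```python
import sympy as sp

t0, t1, s0, s1 = sp.symbols('t0 t1 s0 s1')

def D(w):
    return sp.Matrix.hstack(w.diff(t0), w.diff(t1), w.diff(t0, 2), w.diff(t0, 3))

# assumed test data: q is M0*p reparametrized by the inverse of
# phi(t0,t1) = (t0 + t1, t0 + 2*t1), so q o phi = M0*p by construction
p = sp.Matrix([t0**4, t0**3*t1, t0*t1**3, t1**4])
M0 = sp.Matrix([[1, 2, 0, 1], [0, 1, 1, 0], [3, 0, 1, 0], [0, 0, 2, 1]])
q = (M0 * p).subs([(t0, 2*s0 - s1), (t1, -s0 + s1)]).subs([(s0, t0), (s1, t1)])
qphi = q.subs([(t0, s0 + s1), (t1, s0 + 2*s1)]).subs([(s0, t0), (s1, t1)])

# sample the candidate matrix at one parameter value where Delta(p) != 0 ...
sample = {t0: 2, t1: 1}
M = D(qphi).subs(sample) * (D(p).subs(sample)).inv()
# ... then confirm symbolically that M*p = q o phi, so x -> M*x maps C1 to C2
assert sp.expand(M * p - qphi) == sp.zeros(4, 1)
assert M - M0 == sp.zeros(4, 4)
```

Sampling avoids inverting $D(\bm{p})$ symbolically; the symbolic check $M\bm{p}=\bm{q}\circ\varphi$ then certifies the answer.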
\begin{algorithm}[H] \caption*{\textbf{Algorithm} $Prj3D$} \textbf{Input:} \textit{Two proper parametrizations $\bm{p}$ and $\bm{q}$ in homogeneous coordinates such that not both projective curvatures $\kappa_1$, $\kappa_2$ are constant} \\ \textbf{Output:} \textit{Either the list of M\"obius transformations and projectivities, or the warning: ``The curves are not projectively equivalent''} \begin{algorithmic}[1] \Procedure{$Prj3D$}{$\bm{p},\bm{q}$} \State {Compute the set of factors $FS$ of the polynomial $G$ that is defined at \eqref{eq51}.} \Comment{Here we use the {\tt AFactors} function} \State Check $FS$ to find the set $MF$ of M\"obius-like factors \If{$MF=\emptyset$} \Return ``The curves are not projectively equivalent.'' \Else \State Compute the set $MS$ of M\"obius transformations corresponding to $MF$ \For{$\varphi\in MS$} \State Check if $D(\bm{q}(\varphi))D(\bm{p})^{-1}$ is a constant matrix $M$. \State In the affirmative case, {\bf return} the projectivity defined by $M$, and the corresponding $\varphi$. \EndFor \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \begin{color}{black} \begin{remark}\label{rem-multiples} Notice that M\"obius-like factors are computed up to a constant, which is coherent with the fact that $\varphi$, which is a projective transformation, is also defined up to a constant. Because of this, the matrix $M$ derived by the algorithm {\tt Prj3D} is also defined up to a constant, i.e. if $F$ is a M\"obius-like factor, which in turn provides a matrix $M$, then by picking $\lambda F$ with $\lambda\neq 1$ instead, in general we obtain a multiple $\mu M$ of $M$. However, both $M$ and $\mu M$ define the same projectivity. \end{remark} \end{color} Below we provide a detailed example to illustrate the steps of the method.
\begin{example} Consider the curves given by the rational parametrizations \small \begin{equation*} \bm{p}(t_0,t_1)=\begin{pmatrix} (t_0 - t_1)^4 + 16t_0^4 - 8t_0^3(t_0 - t_1) + 4t_0^2(t_0 - t_1)^2 \\ 4t_0^2(t_0 - t_1)^2 \\ 8t_0^3(t_0 - t_1) \\ 2t_0(t_0 - t_1)((t_0 - t_1)^2 + 4t_0^2) \end{pmatrix},\quad \bm{q}(t_0,t_1)=\begin{pmatrix} (t_0 - t_1)^4 + 16t_0^4 \\ 2t_0(t_0 - t_1)((t_0 - t_1)^2 + 4t_0^2) \\ 2(t_0 - t_1)^3t_0 \\ 4t_0^2(t_0 - t_1)^2 \end{pmatrix}. \end{equation*} \normalsize \noindent The projective curvatures are, in this case, \small \begin{align*} \kappa_1(\bm{p})(t_0,t_1)&=\frac{\left(17 t_0^{4}-4 t_0^{3} {t_1}+6 t_0^{2} t_1^{2}-4 {t_0} \,t_1^{3}+t_1^{4}\right)^{2}}{384 t_0^{4} \left({t_0}-{t_1}\right)^{2} \left(t_0^{2}-2 {t_0} {t_1}+t_1^{2}\right)} \\ \kappa_2(\bm{p})(t_0,t_1)&=\frac{273 t_0^{8}-72 t_0^{7} {t_1}+124 t_0^{6} t_1^{2}-120 t_0^{5} t_1^{3}+86 t_0^{4} t_1^{4}-56 t_0^{3} t_1^{5}+28 t_0^{2} t_1^{6}-8 {t_0} \,t_1^{7}+t_1^{8}}{96 \left(t_0^{2}-2 {t_0} {t_1}+t_1^{2}\right)^{2} t_0^{4}} \end{align*} \normalsize \noindent Thus we get \begin{align*} E_1&=384 \left(17 t_0^{4}-4 t_0^{3} {t_1}+6 t_0^{2} t_1^{2}-4 {t_0} \,t_1^{3}+t_1^{4}\right)^{2} u^{4} \left(-v+u\right)^{2} \left(u^{2}-2 u v+v^{2}\right)\\ &-384 t_0^{4} \left({t_0}-{t_1}\right)^{2} \left(t_0^{2}-2 {t_0} {t_1}+t_1^{2}\right) \left(17 u^{4}-4 u^{3} v+6 u^{2} v^{2}-4 u \,v^{3}+v^{4}\right)^{2} \\ E_2&=96 \left(273 t_0^{8}-72 t_0^{7} {t_1}+124 t_0^{6} t_1^{2}-120 t_0^{5} t_1^{3}+86 t_0^{4} t_1^{4}-56 t_0^{3} t_1^{5}+28 t_0^{2} t_1^{6}-8 {t_0} \,t_1^{7}+t_1^{8}\right)\\ & \left(u^{2}-2 u v+v^{2}\right)^{2} u^{4}-96 \left(t_0^{2}-2 {t_0} {t_1}+t_1^{2}\right)^{2} t_0^{4}\\ & \left(273 u^{8}-72 u^{7} v+124 u^{6} v^{2}-120 u^{5} v^{3}+86 u^{4} v^{4}-56 u^{3} v^{5}+28 u^{2} v^{6}-8 u \,v^{7}+v^{8}\right) \end{align*} \noindent The computation of $G=\gcd(E_1,E_2)$ yields \begin{align*} G(t_0,t_1,u,v)&=\left({t_0} v-u {t_1}\right) \left(3 {t_0} u+{t_0} v+u {t_1}-{t_1} v\right) \left(5 {t_0} 
u-{t_0} v-u {t_1}+{t_1} v\right) \left(2 {t_0} u-{t_0} v-u {t_1}\right) \\ &\left(2 t_0^{2} u^{2}-2 t_0^{2} u v+t_0^{2} v^{2}-2 u^{2} {t_0} {t_1}+u^{2} t_1^{2}\right) \\ &\left(17 t_0^{2} u^{2}-2 t_0^{2} u v+t_0^{2} v^{2}-2 u^{2} {t_0} {t_1}+4 {t_0} {t_1} u v-2 {t_0} {t_1} \,v^{2}+u^{2} t_1^{2}-2 t_1^{2} u v+t_1^{2} v^{2}\right) \end{align*} \noindent Factoring $G$, we get the following M\"obius-like factors: \begin{align*} f_1&={t_0} u-\frac{1}{2} {t_0} v-\frac{1}{2} {t_1} u \\ f_2&={t_0} u+\frac{1}{3} {t_0} v+\frac{1}{3} {t_1} u-\frac{1}{3} {t_1} v \\ f_3&={t_0} v-{t_1} u \\ f_4&={t_0} u-\frac{1}{5} {t_0} v-\frac{1}{5} {t_1} u+\frac{1}{5} {t_1} v, \end{align*} which correspond to the following four M\"obius transformations \begin{align*} \varphi_1(t_0,t_1)&=(t_0,2t_0-t_1),& \varphi_2(t_0,t_1)&=(-t_0+t_1,3t_0+t_1), \\ \varphi_3(t_0,t_1)&=(t_0,t_1),& \varphi_4(t_0,t_1)&=(t_0-t_1,5t_0-t_1). \end{align*} For $i\in\{1,2,3,4\}$, the product $D(q(\varphi_i))D(p)^{-1}$ yields a constant matrix $M_i$, so we get four projectivities $f({\boldsymbol{x}})=M_i\cdot{\boldsymbol{x}}$ between the curves defined by $\bm{p}$ and $\bm{q}$ corresponding to \begin{align*} M_1=\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & -1 \\ 0 & 1 & 0 & 0 \end{pmatrix}, &\quad M_2=\begin{pmatrix} 16 & -16 & 16 & 0 \\ 0 & 0 & 0 & 16 \\ 0 & 0 & 16 & 0 \\ 0 & 16 & 0 & 0 \end{pmatrix} \\ M_3=\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix}, &\quad M_4=\begin{pmatrix} 16 & -16 & 16 & 0 \\ 0 & 0 & 0 & -16 \\ 0 & 0 & -16 & 0 \\ 0 & 16 & 0 & 0 \end{pmatrix}. \end{align*} \end{example} \section{Implementation and Performance.}\label{Imp} The algorithm \texttt{Prj3D} was implemented in the computer algebra system \citet{maple}, and was tested on a PC with a $3.6$ GHz Intel Core i$7$ processor and $32$ GB RAM. In order to factor the gcd we use \citet{maple}'s \texttt{AFactors} function, since in general we want to factor over the reals. 
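The equivalence found in the example above can be double-checked numerically: since $\varphi_3$ is the identity, $M_3\,\bm{p}(t_0,t_1)=\bm{q}(t_0,t_1)$ should hold identically. The following exact-integer Python check is our own sanity test, independent of the Maple code:

```python
# p, q and M3 are taken from the worked example in the text.
def p(t0, t1):
    d = t0 - t1
    return [d**4 + 16*t0**4 - 8*t0**3*d + 4*t0**2*d**2,
            4*t0**2*d**2,
            8*t0**3*d,
            2*t0*d*(d**2 + 4*t0**2)]

def q(t0, t1):
    d = t0 - t1
    return [d**4 + 16*t0**4,
            2*t0*d*(d**2 + 4*t0**2),
            2*d**3*t0,
            4*t0**2*d**2]

M3 = [[1, -1, 1, 0],
      [0, 0, 0, 1],
      [0, 0, -1, 1],
      [0, 1, 0, 0]]

def apply(M, v):
    """Matrix-vector product with exact integer arithmetic."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# phi_3 is the identity, so M3 * p(t0,t1) must equal q(t0,t1) at every sample.
assert all(apply(M3, p(t0, t1)) == q(t0, t1)
           for t0, t1 in [(1, 0), (2, 3), (-1, 5), (7, -2)])
```

For the other projectivities the same check applies with $\bm{p}$ precomposed with $\varphi_i$.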
We want to explicitly mention that the \citet{maple} command {\tt AFactors} works very well in practice. In fact, in our experimentation we observed that most of the time is spent computing the $\gcd$ of the polynomials $E_1$ and $E_2$. Technical details, examples and source codes of the procedures are provided on the first author's personal website \citep{website}. In this section, first we provide tables and examples to compare the performance of our algorithm with the algorithms in \citep{Hauer201868,BIZZARRI2020112438}. Then we provide a more detailed analysis of our own implementation. We recall that the bitsize $\tau$ of an integer $k$ is the integer $\tau=\lceil \log_2 k\rceil+1$. If the bitsize of an integer is $\tau$, then the number of decimal digits of the integer is approximately $d=\lfloor (\tau-1)\log_{10} 2\rfloor+1$. By an abuse of notation, in this section we use $\tau$ to represent the maximum bitsize of the coefficients of the components of the parametrization corresponding to a curve. \subsection{Comparison of the Results.}\label{subsec6.1} To the best of our knowledge, there are two simple and efficient algorithms to detect the projective equivalences of $3D$ rational curves \citep{Hauer201868,BIZZARRI2020112438}. Although their methods differ, in both cases the authors rely on Gr\"obner bases to solve a polynomial system on the coefficients of the M\"obius transformations corresponding to the equivalences. Thus, in both methods most of the time is spent computing the Gr\"obner basis of the system, which is considerably large. In contrast, our method does not require solving any polynomial system. Instead, our algorithm computes the M\"obius-like factors by factoring a polynomial of small degree, compared to the degrees in the polynomials involved in the methods \citep{Hauer201868,BIZZARRI2020112438}.
The reason is that the polynomial that we need to factor is a gcd of two polynomials where the projective curvatures $\kappa_1$ and $\kappa_2$ are involved. In order to compare our results with those in \citep{Hauer201868,BIZZARRI2020112438}, we provide two tables, Table \ref{table3} and Table \ref{table4}, with the timing $t_h$ corresponding to the so-called ``reduced method'' in \citep{Hauer201868}, the timing $t_b$ corresponding to \citet{BIZZARRI2020112438}, and the timing $t_{\mbox{our}}$ corresponding to our algorithm. We consider both projective equivalences and symmetries. Since \citet{BIZZARRI2020112438} provide no implementation or tests in their paper, we implemented this algorithm in \citet{maple} to compare with our own, and the timings $t_b$ we include are the timings obtained with this implementation. For \citep{Hauer201868} we just reproduce the timings in their paper, taking into account that the machine in \citep{Hauer201868} is similar to ours. We understand that the comparison is unfair because \citep{Hauer201868} uses {\tt Singular} to compute Gr\"obner bases, but perhaps the very fact that we do not use the power of {\tt Singular}, which we do not need because we do not compute any Gr\"obner basis, may partially compensate for this unfairness. The results in Table \ref{table3} and Table \ref{table4} show that as the degree of the parametrizations grows, the timings for our algorithm grow much less than the timings for \citep{Hauer201868,BIZZARRI2020112438}, in accordance with the fact that Gr\"obner basis computation has exponential complexity. Let us present the results corresponding to Table \ref{table3}. The parametrizations used in this table are given in Table \ref{table2}; the first three are taken from \citep{Hauer201868}. Here we have highlighted in blue the best timing for each example. One may notice that our method always beats \citet{BIZZARRI2020112438}, while \citep{Hauer201868} is better for the first two examples, of small degree.
\begin{table}[H] \centering \begin{tabular}{l l} Degree & Parametrization \\ \hline \\ $4$ & $\scalemath{0.8}{\left(\begin{array}{c} t_0^{4}+t_1^{4} \\ t_0^{3} t_1 +t_0 \,t_1^{3} \\ t_0 \,t_1^{3} \\ t_0^{2} t_1^{2} \end{array}\right)}$\\ \\ $6$ & $\scalemath{0.8}{\left(\begin{array}{c} 125 t_0^{6}+450 t_0^{5} t_1 +690 t_0^{4} t_1^{2}+576 t_0^{3} t_1^{3}+276 t_0^{2} t_1^{4}+72 t_0 \,t_1^{5}+8 t_1^{6} \\ -27 t_0^{6}-54 t_0^{5} t_1 -36 t_0^{4} t_1^{2}-8 t_0^{3} t_1^{3} \\ 64 t_0^{6}+288 t_0^{5} t_1 +528 t_0^{4} t_1^{2}+504 t_0^{3} t_1^{3}+264 t_0^{2} t_1^{4}+72 {t_0} \,t_1^{5}+8 t_1^{6} \\ 21 t_0^{6}+122 t_0^{5} {t_1} +216 t_0^{4} t_1^{2}+168 t_0^{3} t_1^{3}+60 t_0^{2} t_1^{4}+8 {t_0} \,t_1^{5} \end{array}\right)}$ \\ \\ $8$ & $\scalemath{0.8}{\left(\begin{array}{c} 625 t_0^{8}+3000 t_0^{7} {t_1} +6400 t_0^{6} t_1^{2}+7920 t_0^{5} t_1^{3}+6216 t_0^{4} t_1^{4}+3168 t_0^{3} t_1^{5}+1024 t_0^{2} t_1^{6}+192 {t_0} \,t_1^{7}+16 t_1^{8} \\ -2027 t_0^{8}-8392 t_0^{7} {t_1} -14344 t_0^{6} t_1^{2}-12768 t_0^{5} t_1^{3}-5960 t_0^{4} t_1^{4}-1056 t_0^{3} t_1^{5}+224 t_0^{2} t_1^{6}+128 {t_0} \,t_1^{7}+16 t_1^{8} \\ 1664 t_0^{8}+7744 t_0^{7} {t_1} +16288 t_0^{6} t_1^{2}+20528 t_0^{5} t_1^{3}+17040 t_0^{4} t_1^{4}+9472 t_0^{3} t_1^{5}+3392 t_0^{2} t_1^{6}+704 {t_0} \,t_1^{7}+64 t_1^{8} \\ 405 t_0^{8}+1080 t_0^{7} {t_1} +1080 t_0^{6} t_1^{2}+480 t_0^{5} t_1^{3}+80 t_0^{4} t_1^{4} \end{array}\right)}$\\ \\ $9$ & $\scalemath{0.8}{\left(\begin{array}{c} t_0^{9} \\ t_1^{9} \\ t_0^{8} {t_1} +t_0^{6} t_1^{3} \\ t_0^{6} t_1^{3}+t_0^{4} t_1^{5} \end{array}\right)}$ \\ \\ $10$ & $\scalemath{0.8}{\left(\begin{array}{c} 49 t_0^{10}-22 t_0^{9} {t_1} +87 t_0^{8} t_1^{2}+84 t_0^{7} t_1^{3}+75 t_0^{6} t_1^{4}-96 t_0^{5} t_1^{5}-28 t_0^{4} t_1^{6}-76 t_0^{3} t_1^{7}-36 t_0^{2} t_1^{8}-55 {t_0} \,t_1^{9}+27 t_1^{10} \\ 97 t_0^{10}-97 t_0^{9} {t_1} -73 t_0^{8} t_1^{2}+57 t_0^{7} t_1^{3}+73 t_0^{6} t_1^{4}+64 t_0^{5} t_1^{5}-20 t_0^{4} t_1^{6}+85 t_0^{3} t_1^{7}+99 t_0^{2} t_1^{8}+57 {t_0} 
\,t_1^{9}+96 t_1^{10} \\ 74 t_0^{10}-69 t_0^{9} {t_1} -9 t_0^{8} t_1^{2}+47 t_0^{7} t_1^{3}+44 t_0^{6} t_1^{4}-62 t_0^{5} t_1^{5}+8 t_0^{4} t_1^{6}-84 t_0^{3} t_1^{7}+38 t_0^{2} t_1^{8}-{t_0} \,t_1^{9}+55 t_1^{10} \\ -35 t_0^{10}-35 t_0^{9} {t_1} +63 t_0^{8} t_1^{2}+41 t_0^{7} t_1^{3}+16 t_0^{6} t_1^{4}-77 t_0^{5} t_1^{5}+76 t_0^{4} t_1^{6}+95 t_0^{3} t_1^{7}+56 t_0^{2} t_1^{8}-16 {t_0} \,t_1^{9}-95 t_1^{10}\end{array}\right)}$ \\ \\ $11$ & $\scalemath{0.8}{\left(\begin{array}{c} -62 t_0^{11}-16 t_0^{10} {t_1} +68 t_0^{9} t_1^{2}-15 t_0^{8} t_1^{3}-31 t_0^{7} t_1^{4}+62 t_0^{6} t_1^{5}-14 t_0^{5} t_1^{6}+67 t_0^{4} t_1^{7}+49 t_0^{3} t_1^{8}+52 t_0^{2} t_1^{9}-20 {t_0} \,t_1^{10}-74 t_1^{11} \\ -19 t_0^{11}-68 t_0^{10} {t_1} -48 t_0^{9} t_1^{2}+45 t_0^{8} t_1^{3}+59 t_0^{7} t_1^{4}-96 t_0^{6} t_1^{5}-6 t_0^{5} t_1^{6}+89 t_0^{4} t_1^{7}+41 t_0^{3} t_1^{8}+20 t_0^{2} t_1^{9}+ 25 {t_0} \,t_1^{10} \\ -80 t_0^{11}+42 t_0^{10} {t_1} -67 t_0^{9} t_1^{2}+63 t_0^{8} t_1^{3}-81 t_0^{7} t_1^{4}+76 t_0^{6} t_1^{5}-44 t_0^{5} t_1^{6}-59 t_0^{4} t_1^{7}-11 t_0^{3} t_1^{8}-75 t_0^{2} t_1^{9}-84 {t_0} \,t_1^{10}+47 t_1^{11} \\ -27 t_0^{11}-34 t_0^{10} {t_1} +96 t_0^{9} t_1^{2}+82 t_0^{8} t_1^{3}-58 t_0^{7} t_1^{4}+59 t_0^{6} t_1^{5}+36 t_0^{5} t_1^{6}+33 t_0^{4} t_1^{7}+35 t_0^{3} t_1^{8}+27 t_0^{2} t_1^{9}+46 {t_0} \,t_1^{10}+19 t_1^{11} \end{array}\right)}$ \\ \\ $12$ & $\scalemath{0.8}{\left(\begin{array}{c} -62 t_0^{12}-26 t_0^{11} {t_1} +46 t_0^{10} t_1^{2}+65 t_0^{9} t_1^{3}-51 t_0^{8} t_1^{4}+60 t_0^{7} t_1^{5}-56 t_0^{6} t_1^{6}-46 t_0^{5} t_1^{7}+86 t_0^{4} t_1^{8}-31 t_0^{3} t_1^{9}+84 t_0^{2} t_1^{10}+5 {t_0} \,t_1^{11}+25 t_1^{12} \\ -17 t_0^{12}+79 t_0^{11} {t_1} +73 t_0^{10} t_1^{2}-78 t_0^{9} t_1^{3}+13 t_0^{8} t_1^{4}+93 t_0^{7} t_1^{5}+64 t_0^{6} t_1^{6}-70 t_0^{5} t_1^{7}-71 t_0^{4} t_1^{8}-51 t_0^{3} t_1^{9}-71 t_0^{2} t_1^{10}+10 {t_0}t_1^{11} \\ -76 t_0^{12}-25 t_0^{11} {t_1} +38 t_0^{10} t_1^{2}+89 t_0^{9} t_1^{3}-92 t_0^{8} t_1^{4}-84 t_0^{7} t_1^{5}-77 
t_0^{6} t_1^{6}-34 t_0^{5} t_1^{7}-20 t_0^{4} t_1^{8}+73 t_0^{3} t_1^{9}-94 t_0^{2} t_1^{10}+99 {t_0}t_1^{11}+18 t_1^{12} \\ 39 t_0^{12}-77 t_0^{11} {t_1} -70 t_0^{10} t_1^{2}-49 t_0^{9} t_1^{3}-46 t_0^{8} t_1^{4}+34 t_0^{7} t_1^{5}-84 t_0^{6} t_1^{6}+98 t_0^{5} t_1^{7}+41 t_0^{4} t_1^{8}-46 t_0^{3} t_1^{9}+13 t_0^{2} t_1^{10}-3 {t_0} t_1^{11}+8 t_1^{12} \end{array}\right)}$ \end{tabular} \caption{Parametrizations of the curves considered in Section \ref{subsec6.1}} \label{table2} \end{table} \begin{table}[H] \centering \begin{tabular}{r r r r r r r r} \hline & \multicolumn{1}{c}{$\sharp$ of} &\multicolumn{1}{c}{$t_b$} & \multicolumn{1}{c}{$t_b$} & \multicolumn{1}{c}{$t_h$} & \multicolumn{1}{c}{$t_h$} & \multicolumn{1}{c}{$t_{\mbox{our}}$} & \multicolumn{1}{c}{$t_{\mbox{our}}$} \\ Deg. & Eqvl. & Symm. & Eqvl. & Symm. & Eqvl. & Symm. & Eqvl.\\ \hline $4$ & $4$ & $0.344$ & $0.703$ & \textcolor{blue}{$0.01$} & \textcolor{blue}{$0.01$} & $0.078$ & $0.219$ \\ $6$ & $4$ & $1.391$ & $2.547$ & \textcolor{blue}{$0.06$} & \textcolor{blue}{$0.02$} & $0.078$ & $0.172$ \\ $8$ & $2$ & $3.094$ & $2.500$ & $37$ & $0.78$ & \textcolor{blue}{$0.063$} & \textcolor{blue}{$0.188$} \\ $9$ & $2$ & $1.140$ & $1.000$ & & & \textcolor{blue}{$0.016$} & \textcolor{blue}{$0.031$} \\ $10$ & $1$ & $14.750$ & $10.000$ & & & \textcolor{blue}{$0.422$} & \textcolor{blue}{$0.375$} \\ $11$ & $1$ & $31.625$ & $21.172$ & & & \textcolor{blue}{$0.421$} & \textcolor{blue}{$0.547$} \\ $12$ & $1$ & $40.313$ & $41.437$ & & & \textcolor{blue}{$0.625$} & \textcolor{blue}{$0.531$} \\ \hline \end{tabular} \caption{CPU time in seconds for projective symmetries and equivalences for the curves represented by the parametrizations in Table \ref{table2}} \label{table3} \end{table} Now let us introduce Table \ref{table4}. In this table we test random curves with a fixed bitsize $3<\tau<4$ (coefficients range between $-10$ and $10$) as in \citep{Hauer201868}. The first six examples are taken from \citep{Hauer201868}.
Again we have highlighted in blue the best timing among the methods in \citep{BIZZARRI2020112438}, \citep{Hauer201868} and ours. Our method is only beaten in the first example, of degree 4. For higher degrees not only is our algorithm better, but the growth of the timings is much slower. \begin{table}[H] \centering \begin{tabular}{r r r r r r r} \hline & \multicolumn{1}{c}{$t_b$} & \multicolumn{1}{c}{$t_b$} & \multicolumn{1}{c}{$t_h$} & \multicolumn{1}{c}{$t_h$} & \multicolumn{1}{c}{$t_{\mbox{our}}$} & \multicolumn{1}{c}{$t_{\mbox{our}}$} \\ Deg. & Symm. & Eqvl. & Symm. & Eqvl. & Symm. & Eqvl.\\ \hline $4$ & $0.390$ & \textcolor{blue}{$0.400$} & \textcolor{blue}{$0.04$} & \textcolor{blue}{$0.4$} & $0.687$ & $0.860$ \\ $5$ & $0.110$ & $0.172$ & $1$ & $1.6$ & \textcolor{blue}{$0.015$} & \textcolor{blue}{$0.016$} \\ $6$ & $0.234$ & $0.359$ & $8.4$ & $1.2$ & \textcolor{blue}{$0.047$} & \textcolor{blue}{$0.031$} \\ $7$ & $0.610$ & $1.047$ & $37$ & $8.6$ & \textcolor{blue}{$0.187$} & \textcolor{blue}{$0.063$} \\ $8$ & $1.579$ & $2.546$ & $150$ & $310$ & \textcolor{blue}{$0.125$} & \textcolor{blue}{$0.110$} \\ $9$ & $4.844$ & $4.969$ & $670$ & $1700$ & \textcolor{blue}{$0.297$} & \textcolor{blue}{$0.343$} \\ $10$ & $10.439$ & $10.484$ & & & \textcolor{blue}{$0.496$} & \textcolor{blue}{$0.391$} \\ $11$ & $22.265$ & $22.438$ & & & \textcolor{blue}{$0.625$} & \textcolor{blue}{$0.453$} \\ $12$ & $42.625$ & $42.797$ & & & \textcolor{blue}{$0.906$} & \textcolor{blue}{$0.547$} \\ \hline \end{tabular} \caption{CPU time in seconds for projective equivalences and symmetries of random curves with fixed bitsize ($3<\tau<4$)} \label{table4} \end{table} \subsection{Further Tests.} The tables given in this subsection are provided to better understand the performance of our method and to assist performance testing in similar future studies. These tables list timings for homogeneous curve parametrizations with various degrees $m$ and coefficients with bitsizes at most $\tau$.
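The bitsize convention recalled above can be made concrete with a small Python sketch (our own illustration, not part of the Maple implementation):

```python
import math

def bitsize(k):
    """Bitsize tau = ceil(log2 k) + 1 of a positive integer k, as in the text."""
    return math.ceil(math.log2(k)) + 1

def approx_digits(tau):
    """Approximate number of decimal digits of an integer of bitsize tau."""
    return math.floor((tau - 1) * math.log10(2)) + 1
```

For instance, $k=1000$ has bitsize $\tau=11$ and the approximation gives $4$ digits, matching the actual digit count.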
\subsubsection{Projective Equivalences and Symmetries of Random Curves.} In order to generate projectively equivalent curves, we apply the following non-singular matrix and M\"obius transformation to a random parametrization $\bm{q}$ of degree $m$ and bitsize $\tau$. \begin{equation*} M=\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},\quad \varphi(t_0,t_1)=(-t_0+t_1,2t_0). \end{equation*} \noindent Thus, taking $\bm{p}=M\bm{q}(\varphi)$, we run $\texttt{Prj3D}(\bm{p},\bm{q})$ to get the results for projective equivalences, shown in Table \ref{table5}, and $\texttt{Prj3D}(\bm{q},\bm{q})$ for the results in Table \ref{table6} (symmetries); since $\bm{q}$ is randomly generated, in general only the trivial symmetry is expected. Looking at Table \ref{table5} and Table \ref{table6} one observes a smooth increase in the timings for $m\geq 5$; however $m=4$ has, comparatively, higher timings because for degree four curves the homogeneous polynomials $E_1$ and $E_2$ have more redundant common factors than for higher degrees.
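The construction of such a test pair can be sketched in Python; $M$ and $\varphi$ are as above, while the random generator and its parameters are our own illustrative assumptions:

```python
import random

# M and phi from the text: p = M * q(phi), with phi(t0,t1) = (-t0 + t1, 2*t0).
M = [[1, -1, 1, 0],
     [0, 0, 0, -1],
     [0, 0, -1, 0],
     [0, 1, 0, 0]]

def random_param(m, tau, seed=0):
    """Random degree-m homogeneous parametrization with coefficients of
    bitsize at most tau, returned as a callable (t0, t1) -> 4-vector."""
    rng = random.Random(seed)
    bound = 2 ** (tau - 1) - 1
    rows = [[rng.randint(-bound, bound) for _ in range(m + 1)] for _ in range(4)]
    def q(t0, t1):
        return [sum(c * t0 ** (m - i) * t1 ** i for i, c in enumerate(row))
                for row in rows]
    return q

def equivalent_copy(q):
    """p = M * q(phi), projectively equivalent to q by construction."""
    def p(t0, t1):
        qv = q(-t0 + t1, 2 * t0)
        return [sum(M[i][j] * qv[j] for j in range(4)) for i in range(4)]
    return p
```

Running \texttt{Prj3D} on such a pair should then recover (a multiple of) $\varphi$ and $M$.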
\begin{table}[H] \centering \begin{tabular}{r r r r r r r r} \hline $m$ & $\tau=4$ & $\tau=8$ & $\tau=16$ & $\tau=32$ & $\tau=64$ & $\tau=128$ & $\tau=256$ \\ \hline $4$ & $0.703$ & $0.641$ & $1.500$ & $3.140$ & $4.640$ & $10.281$ & $85.989$ \\ $6$ & $0.062$ & $0.062$ & $0.047$ & $0.063$ & $0.110$ & $0.203$ & $0.531$ \\ $8$ & $0.109$ & $0.125$ & $0.140$ & $0.172$ & $0.969$ & $1.469$ & $3.578$ \\ $10$ & $0.343$ & $0.531$ & $0.250$ & $0.344$ & $1.000$ & $2.203$ & $6.000$ \\ $12$ & $0.641$ & $0.718$ & $0.891$ & $0.860$ & $2.063$ & $3.078$ & $10.719$ \\ $14$ & $0.890$ & $1.188$ & $1.313$ & $1.641$ & $2.922$ & $5.719$ & $15.704$ \\ $16$ & $1.218$ & $1.172$ & $1.593$ & $1.875$ & $3.437$ & $7.484$ & $23.828$ \\ $18$ & $1.797$ & $1.844$ & $2.313$ & $2.688$ & $5.656$ & $9.890$ & $32.985$ \\ $20$ & $2.344$ & $2.125$ & $3.281$ & $4.219$ & $7.297$ & $14.203$ & $46.282$ \\ $22$ & $2.985$ & $3.609$ & $4.203$ & $5.391$ & $8.781$ & $18.062$ & $65.000$ \\ $24$ & $4.125$ & $4.672$ & $4.859$ & $6.344$ & $11.110$ & $20.954$ & $74.766$ \\ \hline \end{tabular} \caption{CPU times in seconds for projective equivalences of random curves with various degrees $m$ and bitsizes at most $\tau$} \label{table5} \end{table} \begin{table}[H] \centering \begin{tabular}{r r r r r r r r} \hline $m$ & $\tau=4$ & $\tau=8$ & $\tau=16$ & $\tau=32$ & $\tau=64$ & $\tau=128$ & $\tau=256$ \\ \hline $4$ & $0.688$ & $1.438$ & $0.797$ & $2.078$ & $3.125$ & $9.891$ & $219.750$ \\ $6$ & $0.110$ & $0.016$ & $0.047$ & $0.062$ & $0.093$ & $0.172$ & $0.531$ \\ $8$ & $0.110$ & $0.109$ & $0.438$ & $0.625$ & $0.250$ & $0.593$ & $2.344$ \\ $10$ & $0.344$ & $0.235$ & $0.562$ & $0.781$ & $1.047$ & $2.297$ & $5.203$ \\ $12$ & $0.547$ & $0.750$ & $0.812$ & $1.609$ & $2.281$ & $3.547$ & $9.281$ \\ $14$ & $0.688$ & $0.922$ & $1.672$ & $1.546$ & $3.297$ & $5.172$ & $17.531$ \\ $16$ & $1.297$ & $1.609$ & $1.828$ & $2.672$ & $5.219$ & $7.360$ & $22.156$ \\ $18$ & $2.047$ & $1.750$ & $2.156$ & $3.281$ & $6.797$ & $10.907$ & $34.718$ \\ $20$ & $2.562$ & $2.281$ & $3.516$ & $4.687$ & $8.656$ & $13.906$ & $45.093$ \\ $22$ & $3.375$ & $3.469$ & $4.735$ & $5.500$ & $11.609$ & $16.859$ & $57.500$ \\ $24$ & $4.093$ & $4.703$ & $5.391$ & $7.375$ & $12.781$ & $22.343$ & $75.469$ \\ \hline \end{tabular} \caption{CPU times in seconds for projective symmetries (only trivial symmetry) of random curves with various degrees $m$ and bitsizes at most $\tau$} \label{table6} \end{table} \subsubsection{Projective Symmetries of Random Curves with Central Inversion.} To analyze the effect of an additional non-trivial symmetry, we considered random parametrizations $\bm{p}(t_0,t_1)=(\bm{p}_0(t_0,t_1),\bm{p}_1(t_0,t_1),\bm{p}_2(t_0,t_1), \bm{p}_3(t_0,t_1))$ with a symmetric $\bm{p}_0(t_0,t_1)$ and an anti-symmetric triple $\bm{p}_1(t_0,t_1)$, $\bm{p}_2(t_0,t_1)$ and $\bm{p}_3(t_0,t_1)$ of the same even degree $m$ and with bitsize at most $\tau$, i.e. of the form \begin{align*} \bm{p}_0(t_0,t_1)& =c_{0,0}t_0^m+c_{1,0}t_0^{m-1}t_1+\cdots+c_{1,0}t_0t_1^{m-1}+c_{0,0}t_1^m \\ \bm{p}_i(t_0,t_1)& =c_{0,i}t_0^m+c_{1,i}t_0^{m-1}t_1+\cdots-c_{1,i}t_0t_1^{m-1}-c_{0,i}t_1^m , \end{align*} with $c_{\frac{m}{2},i}=0$ for all $i\in\{1,2,3\}$. Since $\bm{p}(t_1,t_0)=(\bm{p}_0(t_0,t_1),-\bm{p}_1(t_0,t_1),-\bm{p}_2(t_0,t_1),-\bm{p}_3(t_0,t_1))$, such homogeneous parametric curves are invariant under a central inversion with respect to the origin. Table \ref{table7} lists the timings to detect projective symmetries (central inversions, in this case) of random curves with various degrees $m$ and bitsizes at most $\tau$. As expected, one can see that the computation times remain within the same order of magnitude as in the previous tables.
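The construction of such centrally symmetric test parametrizations can be sketched in Python (our own illustration; the coefficient generator is a placeholder):

```python
import random

def central_symmetric_param(m, tau, seed=0):
    """Random even-degree-m parametrization with p0 palindromic and p1, p2, p3
    anti-palindromic with middle coefficient zero, so that
    p(t1, t0) = (p0, -p1, -p2, -p3)(t0, t1)."""
    assert m % 2 == 0
    rng = random.Random(seed)
    bound = 2 ** (tau - 1) - 1
    def sym_row():
        h = [rng.randint(-bound, bound) for _ in range(m // 2)]
        return h + [rng.randint(-bound, bound)] + h[::-1]
    def antisym_row():
        h = [rng.randint(-bound, bound) for _ in range(m // 2)]
        return h + [0] + [-c for c in h[::-1]]
    rows = [sym_row()] + [antisym_row() for _ in range(3)]
    def p(t0, t1):
        return [sum(c * t0 ** (m - i) * t1 ** i for i, c in enumerate(row))
                for row in rows]
    return p
```

Swapping $t_0$ and $t_1$ then fixes the first coordinate and negates the other three, as required for the central inversion.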
\begin{table}[H] \centering \begin{tabular}{r r r r r r r r} \hline $m$ & $\tau=4$ & $\tau=8$ & $\tau=16$ & $\tau=32$ & $\tau=64$ & $\tau=128$ & $\tau=256$ \\ \hline $8$ & $0.078$ & $0.078$ & $0.078$ & $0.093$ & $0.141$ & $0.500$ & $1.109$ \\ $10$ & $0.234$ & $0.172$ & $0.313$ & $0.453$ & $0.781$ & $1.609$ & $4.516$ \\ $12$ & $0.360$ & $0.516$ & $0.531$ & $0.704$ & $1.282$ & $3.031$ & $8.140$ \\ $14$ & $0.625$ & $0.812$ & $0.703$ & $1.078$ & $2.062$ & $5.000$ & $13.032$ \\ $16$ & $0.921$ & $1.047$ & $1.203$ & $1.937$ & $3.125$ & $7.172$ & $20.109$ \\ $18$ & $1.329$ & $1.250$ & $1.578$ & $2.516$ & $4.969$ & $9.141$ & $28.922$ \\ $20$ & $1.765$ & $1.922$ & $2.282$ & $3.390$ & $5.594$ & $14.000$ & $39.125$ \\ \hline \end{tabular} \caption{CPU times in seconds for projective symmetries (central inversion) of random curves with various degrees $m$ and bitsizes at most $\tau$} \label{table7} \end{table} \subsubsection{Projective Equivalences of Non-equivalent Curves.} In the last table that we present here, Table \ref{table8}, we generate both curves randomly, so in general no projective equivalence is expected. Table \ref{table8} shows the computation times for non-equivalent random curves with various degrees $m$ and bitsizes at most $\tau$. As expected, the timings are faster than those of Tables \ref{table5}, \ref{table6} and \ref{table7}. The reason is that in most cases the gcd $G$ is constant and therefore the algorithm finishes earlier.
\begin{table}[H] \centering \begin{tabular}{r r r r r r r r} \hline $m$ & $\tau=4$ & $\tau=8$ & $\tau=16$ & $\tau=32$ & $\tau=64$ & $\tau=128$ & $\tau=256$ \\ \hline $4$ & $0.016$ & $0.015$ & $0.016$ & $0.015$ & $0.015$ & $0.015$ & $0.015$ \\ $6$ & $0.094$ & $0.329$ & $0.031$ & $0.047$ & $0.047$ & $0.046$ & $0.688$ \\ $8$ & $0.062$ & $0.062$ & $0.078$ & $0.094$ & $0.110$ & $0.829$ & $0.328$ \\ $10$ & $0.313$ & $0.141$ & $0.157$ & $0.453$ & $0.796$ & $0.328$ & $0.656$ \\ $12$ & $0.281$ & $0.250$ & $0.718$ & $0.297$ & $0.391$ & $0.937$ & $1.343$ \\ $14$ & $0.547$ & $0.625$ & $0.391$ & $0.703$ & $0.953$ & $1.344$ & $2.500$ \\ $16$ & $0.922$ & $0.547$ & $0.969$ & $0.609$ & $1.031$ & $1.937$ & $3.595$ \\ $18$ & $1.062$ & $1.046$ & $1.047$ & $1.313$ & $1.500$ & $2.922$ & $4.266$ \\ $20$ & $1.438$ & $1.375$ & $1.312$ & $1.890$ & $2.344$ & $3.594$ & $6.359$ \\ $22$ & $2.109$ & $1.704$ & $1.609$ & $2.187$ & $3.219$ & $4.453$ & $8.891$ \\ $24$ & $1.719$ & $2.343$ & $2.391$ & $2.782$ & $4.234$ & $6.735$ & $11.078$ \\ \hline \end{tabular} \caption{CPU times in seconds for non-equivalent random curves with various degrees $m$ and bitsizes at most $\tau$} \label{table8} \end{table} \subsubsection{Effect of the Bitsize and Degree on the Algorithm.} Our implementation can deal with curves of degree $24$ and bitsize $256$ at the same time. When we attempt to solve the problem for higher degrees and bitsizes at the same time, the computer runs out of memory. However, by fixing either the bitsize or the degree we are able to go further and explore the limits of the method. Here we present the results of two different tests on random homogeneous parametrizations, one for a fixed bitsize and one for a fixed degree. In these tests the second parametrization is obtained by applying a projective transformation and a M\"obius transformation to the first, random, parametrization.
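The power-law fits reported in the next paragraph were obtained with Maple's \texttt{PowerFit}; the same fit is a least-squares linear regression on logarithms, which can be sketched in plain Python (our stdlib stand-in, not the actual tool):

```python
import math

def power_fit(ms, ts):
    """Least-squares fit of t ~ alpha * m**beta via regression of log t on log m."""
    xs = [math.log(m) for m in ms]
    ys = [math.log(t) for t in ts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    alpha = math.exp(ybar - beta * xbar)
    return alpha, beta
```

On noiseless synthetic data the fit recovers the exponent and prefactor exactly, up to floating-point error.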
For the first test we fix the bitsize at $4$, and increase the degree up to $128$; for the second test, we fix the degree at $8$, and increase the bitsize up to $2^{12}$. The results are shown in Figure \ref{fig1}; Figure \ref{left} exhibits log plots of CPU times against the degree, and Figure \ref{right} exhibits non-log plots of CPU times against the coefficient bitsizes. The data were analysed using the \texttt{PowerFit} function of the \texttt{Statistics} package of \citet{maple}. Thus, as a function of the degree $m$, the CPU time $t$ satisfies \begin{equation}\label{eq64} t \sim \alpha m^\beta, \;\;\; \alpha \approx 2.0\times 10^{-4}, \;\;\; \beta \approx 3.1, \end{equation} and as a function of the bitsize $\tau$, the CPU time $t$ satisfies \begin{equation}\label{eq65} t \sim \alpha \tau^\beta, \;\;\; \alpha \approx 5.7\times 10^{-2}, \;\;\; \beta \approx 0.6. \end{equation} \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth}\centering \centering \includegraphics[width=\textwidth]{RandomCentral} \caption{$t$ versus $m$} \label{left} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth}\centering \centering \includegraphics[width=\textwidth]{RandomCentral2} \caption{$t$ versus $\tau$} \label{right} \end{subfigure} \caption{\ref{left}: CPU time $t$ in seconds versus the degree $m$ with a fixed bitsize $\tau=4$. The asterisks represent the computed timings for each degree and the line represents the fit by the power law \eqref{eq64}. \ref{right}: CPU time $t$ in seconds versus the bitsize $\tau$ with a fixed degree $m=8$. The asterisks represent the computed timings for each bitsize and the line represents the fit by the power law \eqref{eq65}.} \label{fig1} \end{figure} \section{Conclusion and Future Work.}\label{sec-conclusion} We have presented a new approach to the problem of detecting projective equivalences of space rational curves.
The method is inspired by the ideas developed in \citep{Alcazar201551} for computing symmetries of 3D space rational curves, as well as by the theory of differential invariants. The method proceeds by introducing two rational functions, called projective curvatures, so that the projectivities between the curves are derived after computing the M\"obius-like factors of two polynomials built from the projective curvatures. From an algorithmic point of view, it only requires computing a gcd and factoring a polynomial of relatively small degree, and therefore differs from previous approaches, where large polynomial systems were used. The experimentation carried out with \citet{maple} shows that the method is efficient and works better than previous approaches as the degree of the curves involved in the computation grows. \begin{color}{black} We conjecture that the method is generalizable to other dimensions, transformation groups and parametric varieties (e.g. surfaces). The essential requirement is to know what kind of transformation we have in the parameter space (in the case treated in this paper, M\"obius transformations). The sketched general scheme is: \begin{itemize} \item [(1)] {\it Generate invariants} (in the case treated in this paper, the $I_i$). In general, these invariants will not have a nice behavior with respect to the transformations in the parameter space; that is what we mean when we speak about ``commuting with M\"obius transformations'', in our case. \item [(2)] From the invariants in (1), {\it generate other invariants that behave nicely with respect to the transformations in the parameter space}. In our case, these are the $\kappa_i$. In general, this is a problem of eliminating variables cleverly. \item [(3)] From the invariants in (2), {\it find efficiently the transformation in the parameter space}. In our case, this is done by gcd computation and factoring.
\end{itemize} This is a general outline that needs to be adapted depending on the dimension, the transformation group and the varieties involved. We do not have at the moment a proof that this scheme always works, although our ongoing investigation suggests that it certainly succeeds for planar rational curves. So the theoretical study of the viability, generality and correctness of the suggested strategy is an open question that requires further investigation. For rational curves in dimension $n$, the $I_i$ in step (1) would be \begin{align*} I_1(\bm{p}):=\dfrac{\Vert \bm{p}_{t_0^{n+1}}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \cdots \bm{p}_{t_0^n} \Vert}{\Vert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \cdots \bm{p}_{t_0^n} \Vert},\, I_2(\bm{p}):=\dfrac{\Vert \bm{p}_{t_0}\, \bm{p}_{t_0^{n+1}}\, \bm{p}_{t_0^2}\, \cdots \bm{p}_{t_0^n} \Vert}{\Vert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \cdots \bm{p}_{t_0^n} \Vert},\ldots, I_{n+1}(\bm{p}):=\dfrac{\Vert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \cdots \bm{p}_{t_0^{n+1}} \Vert}{\Vert \bm{p}_{t_0}\, \bm{p}_{t_1}\, \bm{p}_{t_0^2}\, \cdots \bm{p}_{t_0^n} \Vert}. \end{align*} However, deriving the functions in step (2) for a general $n$ is more complicated and requires further research. \end{color} Additionally, the method opens other interesting theoretical questions, such as the geometric interpretation of the curvatures introduced in this paper, and the study of the curves where these curvatures are constant, a particular case that the algorithm in this paper cannot handle.
\section{Introduction} \label{intro} The organization control in MRS for manufacturing and logistics is mostly centralized in the state of the art, with few exceptions such as \cite{dec1, slidingDec, panRobotics, FarinelliMRS} where the architecture is decentralized, as it is considerably easier to realize than distributed control. However, the decentralized architecture in \cite{panRobotics} focuses only on route planning, generating feasible, sub-optimal and collision-free paths for multiple MRs. Thus, the system architecture is not general enough to handle different kinds of control and planning functions. In \cite{slidingDec}, a linear dynamic model is generated for the specific task of collectively transporting a load in automated factories, and the sliding mode controller is provided through non-linear terms with bounds. This kind of stochastic control using a dynamic model of the robot is useful in simple cases where the controller depends on minimal information available to the robot, unaware of the dynamics of the environment. On the other hand, various investigations have used partially observable Markov decision processes (POMDPs) to solve general decentralized control and planning problems in MRS \cite{auctionPOMDP, mitPOMDP}, building on the general concepts of multi-agent systems (MAS). However, these solutions are computationally expensive and provide sub-optimal results, and the requirements of scalability and robustness in a smart factory are not met by them. Likewise, the problems solved in the latter case using POMDPs ignore the aspect of improving cooperative functions based on the performances or states of individual MRs and their environment. Thus, the problem persists: MRS need a decentralized system architecture that is scalable and robust, yet computationally inexpensive.
The robot computational architecture in an MRS is classified into 4 categories, such as, deliberative, reactive, hybrid and behavior-based \cite{Michaud2016}. Though, all the four categories of control have their advantages and limitations, each contribute with interesting but different insights and not a single approach is ideal \cite{Michaud2016}. Nevertheless, current demands of smart factories are adaptability, real-time response and modularity which is served excellently by behavior-based control \cite{Michaud2016}. Moreover, decentralized control can be efficiently designed, implemented and handled through behavior-based systems. Numerous investigations have been carried out recently towards solving the problems of formation control in MRS using behavior-based systems \cite{Lee2018}, while much attention is not paid toward behavior-based control of MRS in logistics and transportation tasks. There are few investigations where this direction is investigated \cite{icraBehavior}, \cite{gonzalez2016behavioral}, but these behavior-based MRSs lack the much needed involvement of support and help by other MRs to each MR in decision making. This help can be successful when cost incurred by one robot for a particular performance can be shared to other robots to estimate their costs to carry out the same performance. These costs arise through the different battery and floor conditions while performances. Thus, the time to complete performance is dependent on all these factors and thus reflects the costs incurred. When movement is a performance, travel time of an edge is the performance time. When travel times can be estimated to obtain close-to-real values, they become different than heuristics costs and depict the real states which are impossible to obtain from heuristics. Nevertheless, a good estimation is dependent on historical data which are close in time. 
But, there are situations when all the travel times for one or more edge(s) are not available for the entire duration of operation of the MRS to an individual robot. Then, it is imperative for that robot to gather the necessary travel times from others in the system as a reference observation. We demonstrate this concept with the following example. \begin{figure}[h] \centering\includegraphics[width=1\linewidth]{examplePic.jpg} \caption{Problem: An Example Scenario} \label{figExmplCollaborative} \end{figure} Figure~\ref{figExmplCollaborative} illustrates a scaled down MRs-based internal transportation system in a factory. Traversing a path is considered as a task in this example. Let, at any instance of time, $t_0$, $A$1 is assigned to carry some material to $P$1 through the computed path marked by the dotted line. Again, at time $t_m$ ($m$ > 0), $A$1 needs to carry same material to $P$1. But at $t_m$, $A$1 will need more time and energy to reach $P$1 than at $t_0$ due to mainly two reasons. First, the battery capability of $A$1 has decreased due to execution of previous tasks. Also, the condition of the part of the floor, designated by the given path, can get deteriorated (as marked by black dotted lines). As said previously, travel times of edges depict states of battery and floor condition \citet{wafPragna}. So, the travel times at previous instances are useful to estimate travel time at $t_m$, if only battery state has changed. But, condition of floor has also changed. This can only be anticipated through travel times at $t_m$ if the robot has traversed that part of the floor in the previous or nearly previous time instance. Nevertheless, travel time from other MRs who has traversed that part in nearly previous instance can be useful to $A$1, along with its own travel times at previous instances to estimate its travel time at current. Thus, travel times of these two sources are useful to estimate it's future travel time. 
This work addresses this area of investigation where each MR get information like travel time of edges from other MRs in order to make better decisions. The above example explains that the amount of time and energy required to complete a task has an existing correlation with state of charge of batteries and environmental conditions. These time and energy can be formed as cost coefficients to express state of battery's charge and environment. These cost coefficients can be of various forms like travel time of edges, rotating time, loading time, \textit{et~cetera} depending on the functions. Further, they can serve as a deciding factor in several planning decisions for better cost efficient decisions. However, these cost coefficients need to be either known apriori or estimated to be used in decision making. In case of knowing apriori, observations of these costs in various forms like travel time of edges, rotating time, \textit{et~cetera} need to be measured for all possibilities, which is not only cumbersome but also impractical. Hence, estimating them during run-time is a good solution. But, estimation requires observation of the same at previous instances. The observations values can be gathered from the beginning of first decision making and can be used in subsequent calls for estimation. The first few iterations of decision making is a learning phase to gather few observations to start the estimation. But, an MR may need to estimate the travel time of one or more edge which it did not traverse previously. This can be mitigated by sharing the observation value from other MR who has travelled that edge in nearly previous instance. This way the knowledge sharing can help an MR to estimate the travel cost for an unexplored edge at current instance. In the example, $A$1 has travelled the edges in the region (marked by dotted line) towards $P$1 long back at $t_0$ and hence it does not have the latest information about its condition at $t_m$. 
In this case, travel time of edges, annotated with time stamps, along the region marked by dotted line from other MRs who has travelled it in nearly previous instance, must be communicated to $A$1, so that $A$1 can utilize it while estimating it's own cost at $t_m$. The travel times have inherent contexts like time stamp and the edge between pair of nodes. This underlying context has been exploited to form a semantic knowledge sharing mechanism to communicate the costs of edges inform of travel times in this work. This is instrumental in deriving more accurate estimates of travel times to ascertain cost at current time in each MR. This improves decisions in each MR for efficiency where MRs help each other to gather states of environment and other factors. Moreover, all the MRs are autonomous and have their own control separately, which make the whole system decentralized. This type of control is implemented using a behavior-based system, to utilize the benefits of both decentralized architecture and behavior-based system. The subsequent sections elaborate on the background (Section~\ref{bckgrnd}), problem statement and contribution (Section~\ref{probContr}), methodology (Section~\ref{methControl} and Section~\ref{semantics}) and implementation (Section~\ref{implControl} and Section~\ref{ontology}). Results of utilising the proposed methodology is tabulated and analyzed on Section~\ref{res}, while discussions and conclusions are put forward in \label{bckgrnd} \section{Problem statement and contribution} \label{probContr} This work addresses the problem of building a decentralized system architecture where the planning decisions can be based on the dynamically changing state of MRs and their environment. This paves the way towards robustness and scalability in the MRS. 
One of the most suitable methods of control for MRS is behavior-based system as it can handle significant dynamic changes with fast response and enforce adaptability, few of the major requirements of current smart factories \cite{Michaud2016}. The system architecture for the MRS in current work is developed using concepts of behavior-based system with specific behaviors for planing and task execution. Current work implements an MRS for automated logistics where each robot need to transport materials to designated placeholders or racks, termed as ports. Also, \textbf{reaching a particular port} by an MR is considered as a task, along with route computing being considered as a decision making process. Thus an MR is required to traverse from one node to another in a floor, described by a topological map. This enables the MRs to perform single task at a time. \begin{figure}[h] \centering\includegraphics[scale = 0.38]{probDesc.jpg} \caption{Problem description} \label{map} \end{figure} The travel time of each arc (like $a_{a,b}$, $a_{f,d}$) in a floor map given in Figure~\ref{map} is influenced by energy exhaustion, condition of floors, physical parameters of robot, among others, which incurs cost. Thus time to traverse an arc by an MR or \textit{travel time} can be conceptualized as its cost coefficients. In this work, \textit{travel time} is considered as weight or cost for an edge. This is formalized as $X_{p,q}(k)$ to denote travel cost from $n_p$ to $n_q$, where $k$ is the instance of time of traversing an edge. $X_{p,q}(k)$ is time-varying from the perspective that at a particular instance of the time, the cost of that particular arc is dependent on battery discharge and condition of the floor which changes over passage of time. 
A path $P$ is formed as a series of arcs between connecting nodes for an MR and thus $P$ can be defined as \begin{equation} \label{pathL} P = \langle a_{a,b}, a_{b,e}, a_{e,g}, a_{g,j}, ...........\rangle \end{equation} Now, the cost of traversing $P$ can be written in a form $C_P$ of \begin{equation} \label{pathC} C_P = \langle X_{a,b}(k), X_{b,e}(k), X_{e,g}(k),...,... ...........\rangle \end{equation} The elements of $C_P$ are required to be identified for each call of path planning. From now on, $X_{p,q}(k)$ will be written as $X(k)$ for simplicity. For continuous performance of the MR, path needs to be computed for the MR after it reaches a destination. Let, at $i$th call of path planning, path cost was \begin{equation} \label{pathCI} C^i_P = \langle X^i_{a,b},..., X^i_{e,g}, ....., X^i_{q,r}, ..... \rangle \end{equation} Now, in any instance, an MR may need to traverse an arc which it had traversed in previous instances. Let at $j$th ($j$\textgreater$i$+1) call of path planning, estimation of $X(k)$ for $a_{e,g}$ is required. As , the MR do not have the observation of $X(k)$ for $a_{e,g}$ at the previous instance. It can only use the $X(k)$ for $a_{e,g}$ obtained during traversing $P$ after $i$th call. In this scenario, the obtained estimate can be significantly inaccurate which has the potential to produce inaccurately optimized path. Thus, the observation of $X(k)$ for $a_{e,g}$ at $j$th call can be also obtained from other robots performing in the system which has traversed that arc in the previous instance or in a nearly previous past instance. Moreover, at $j$th call, estimation of $X(k)$ for some edge may be needed which that MR has not been yet traversed during all its previous traversals. This can be also solved by fetching the observation data of $X(k)$ for the required edge from other robots which have traversed that at previous or nearly previous instances. 
Improved estimated values of $X(k)$ can be obtained by transferring the right knowledge from one robot to another, which will generate more cost efficient decision. Thus, an information sharing framework is imperative to be formed in order to improve the estimation and in turn improve the decisions where robots can support and help each other in their decisions. The contribution of this work include the following \begin{itemize} \item A completely decentralised system architecture is developed based on behavior-based system in a hierarchical model for each robot, which ensures scalability and improves robustness \item A semantic knowledge sharing mechanism is devised in each robot to share estimated values of travel times of one robot to others. This eventually helps in obtaining better estimates of travel times in each robot, which produces for more optimal path with minimum path costs. \end{itemize} \section{Behavior-based decentralised multi-robot system} \label{controller} \subsection{Methodology} \label{methControl} The objective of this work is to form an MRS with a decentralized flow of control, suitable to logistics. The flow of control is based on the concept of sub-sumption, where each robot has the same sub-sumption model. Each robot is capable of taking the decision itself, with the capability of gathering information about the environment from other MRs. This sub-sumption model achieves the goal making each MR autonomous. The sub-sumption model involves the control structure to be organized in layers one above the other with increasing level of competencies and each level can interact with all other levels with messages. This technique of flow of control is described in Figure~\ref{agvControl}, which consists of two major control layers. \begin{figure}[h] \centering\includegraphics[scale = 0.38]{subSumptionCntrl.jpg} \caption{Controller architecture} \label{agvControl} \end{figure} The top most layer is $L$1 level and the $L$0 level is below it. 
The $L$0 level is divided into two sub levels $L$0.1 and $L$0.0 levels respectively. The $L$1 level is the agent level control layer where it functions on all the agents in the transportation system and is engaged in controlling more complex functions like finding path, organizing task, finding destination poses, \textit{et~cetera} for each of the robots. The $L$0 level functions on each of the agents individually and controls the movements. Each robot has its own $L$0.0 and $L$0.1 levels respectively which controls the movements in each of them. Here, the $L$0.0 level communicates with the $L$0.1 level and have no communication with the $L$1 level. The $L$0.1 level is the intermediate level which communicates both with $L$0.0 level and $L$1 level. The control levels functions in co-ordination with each other to control the movements of the robots in the environment \cite{Norouzi2013}. Thus, essentially the MRs in the system are autonomous. Moreover, the top most $L$1 level is responsible for intelligent decision making for task assignments and path traversal, based on the available \textit{travel time}, which represents knowledge about the individual robot and the environment. \subsection{Implementation} \label{implControl} The control technique \cite{Ismael2015},\cite{Ismael2017}, described in previous section, is achieved using behaviors as the building block of both decision-making level ($L$1 level) and action execution level ($L$0 level). Separate sets of behaviors are designed for two layers as illustrated in Figure~\ref{masLayers}, which is based on the control framework proposed by R. Brooks in \cite{brookControl86}. The hierarchical control framework has three behavioral levels and each level has an objective and a corresponding output, formed as commands or replies. Moreover, the process of execution of all levels start simultaneously. 
However, the output in the form of commands from the highest level ($L$1) need to pass on to the next priority level ($L$0.1) for it to start execution, and similar process is followed in $L$0.0 level. This happens because of the hierarchical framework and the command from the highest-priority behavioral level is required as input to process the low-priority behavioral level. On the other hand, the replies from the lower level act as a feed back to the control rules of the higher level which determines the final decision and output of the control framework. \begin{figure}[h] \centering\includegraphics[scale = 0.33]{behaviorPic.jpg} \caption{Behaviors for the layers of control} \label{masLayers} \end{figure} The general design of an MR is considered in the prototype system which consists of servo-motors to rotate wheels and camera. The sensing is conducted with infra-red sensors and camera. The beagle-bone forms the processor for the robot. Each MR in the system is autonomous with its own three level of behavioral control framework. The following are the behaviors developed in each level. \begin{itemize} \item In $L$0.0 level, actuation behaviors are developed. This behavior conducts the starting of motors for wheel rotation, camera movements and infra-red sensor movements. The commands which refer to target poses are obtained from $L$0.1 level in this behavior using extended finite state machines to conduct the movement of the individual robots. The sensor readings are transferred to $L$0.1 for processing as feedback of commands. \item In $L$0.1 level, three behaviors are developed. They are generating target poses from high level commands like destination port, finding obstacle and obstacle avoidance, processing sensor data to be used in planning and decision in $L$1. All these behaviors are developed using extended finite state machine. \item In $L$1 level, decision making behaviors are developed which are finding paths and assigning tasks. 
These behaviors are developed using extended finite stacked-state machine. Also, behavior of maintaining and sharing the knowledge of \textit{travel time} is developed. More details about the sharing mechanism is provided in next sections. \end{itemize} All these three levels of behavior correspond to a singular behavior for a single robot and this is repeated for each MR. Thus, the control flow in the MRS is decentralized. The knowledge sharing mechanism is incorporated in the behavior of $L$ so that each robot can communicate through them. The highest level ($L$1 level) is implemented on the desktop computer in our model to reduce communication costs among the $L$1 level agents in each MR. The next two lower levels are implemented on body on individual MR using embedded system techniques. The decisions for planning need the information about the states of itself and environment. As behavior in $L$ level conducts the process of decision, the knowledge sharing process is realized in $L$ in each level which provides each robot an opportunity to seek help about states of environment from other MRs. As discussed in Section~\ref{intro}, travel time $X$($k$) (Section~\ref{probContr}) for a particular edge provides the necessary representation of state of robots and environment. A direct correlation has been found between $X$($k$) with state of charge of batteries and conditions of floor in the prototype system. This is depicted in Figure~\ref{disvstt}. Part (a) plots the cell voltage of Li-ion batteries over time, Part (b) plots the progressive mean of observed values of $X$($k$) for $m$th edge with the change of state of charge of batteries and Part (c) plots the observed value of $X$($k$) for the same edge with both the change of state of the charge of batteries and the floor condition. The floor is changed from rough at the beginning to smooth during the experiment. 
\begin{figure}[h] \centering\includegraphics[scale = 0.6]{discharge_traveltime.jpg} \caption{change pic} \label{disvstt} \end{figure} The plot (b) shows that progressive mean of $X$($k$) increase first, then steadily decrease and then increase gradually till complete discharge. Thus values of $X$($k$) first increase due to sudden fall of cell voltage at beginning, then decreasing fast due to cell voltage increasing fast to a steady level and the values gradually decrease towards complete discharge of batteries. Thus, a correlation between $X$($k$) is observed through plots (a) and (b). On the other hand, the increase in progressive mean of $X$($k$) is longer than that of plot (b) at equal battery capacity. The longer increase of values of $X$($k$) in (c) can be attributed to the rough floor, as more energy is required to traverse in rough surface. Plot of $X$($k$) in different conditions of floor demonstrate that travel time can reflect not only state of charge of batteries \cite{wafPragna} but also environmental conditions. During the run-time of MRS, the estimation of $X$($k$) is conducted for all necessary edges while finding the optimal path. Thus, estimated values of $X$($k$) will be generated at every instance of control decisions, producing a pool of estimated values. More significantly, every estimated value of travel time has inherent context associated with it, which when shared with the other MRs help in the estimation of $X$($k$) in them. This concept is elaborated in the next section. \section{Information sharing in behavior-based control} \label{infoShare} \subsection{Semantics in travel time} \label{semTT} An MRS is dynamic as its states change over time. Also, it is evolving as it gathers more and more knowledge about its states through the course of its operation. Moreover, the source of knowledge of an MRS is distributed to each of its constituent robots. The behavior in $L$1 has the role of finding paths using Dijkstra's algorithm. 
Dijkstra's algorithm needs to know the estimated $X$($k$) for the concerned edges to decide the path as $X$($k$) designates the cost of traveling the edge. Now, there are possibilities when an MR has not yet traversed many edges. The estimation of $X$($k$) for these edges depends on the obtained travel cost of them from other MRs. Thus, knowledge sharing mechanism improves the estimation of $X$($k$) for accuracy. This will be instrumental to Dijkstra's algorithm to produce better optimal paths with minimum path costs. The $X$($k$)s originate from each MR depending on the instance of travelling, zone of the floor, previously traversed edge, neighboring AGVs, state of charge, \textit{et~cetera}. All these factors provide context to the estimated values of $X$($k$). \begin{figure}[h] \centering\includegraphics[scale = 0.36]{examplePic.jpg} \caption{change pic} \label{explSemtc} \end{figure} For example, in Figure~\ref{explSemtc}, the $X$($k$) of an edge by $A$1 at $t_m$ will be different than that at $t_0$ due to discharging of batteries as explained in Figure~\ref{disvstt}. On the other hand, $X$($k$) for $n$th edge ($n$ $\neq$ $m$) by $A$2 in a different zone (marked by double dotted line) will be different than that of $m$th edge, though both $m$th and $n$th edge can have same lenghth. This happens due to different states of floor. Moreover, $X$($k$) for $n$th edge by $A$1 will be different than that by $A$2 at any $t_i$ because of differently discharged batteries for different previous tasks. Thus, estimated travel time provides contextual information representing state of charge, condition of floor, instance of travelling. These values of $X$($k$) at a particular instance for a particular edge of one MR provide contextual information about cost for that edge to other MRs when communicated. Hence, semantics can be built from these knowledge of travel time as they have inherent contextual information. 
They convey information about the costs of traversing through different edges in the topological map, which describes the factory floor. \subsection{Using semantics for knowledge sharing} \label{semantics} Semantic relationships are built in this work to form semantic sentences in order to fetch the values of travel times with the inherent contextual information. This concept is illustrated in Figure~\ref{semPic}. \begin{figure}[h] \centering \framebox{\parbox{3.3in} \includegraphics[scale = 0.47]{ontoFig2.jpg} }} \caption{An example of semantic relationship in MRS} \label{semPic} \end{figure} From the above example, a semantic sentence can be derived as \begin{itemize} \label{egSentence} \item Cost from node $N_a$ to node $N_b$ is $X_{a,b}$ at time instance $k$ \end{itemize} where, $k$ is the instance of estimation. $N_a$ and $N_b$ refer to specific nodes, travel time $X_{a,b}$ refer to specific kind of cost. Cost refer to specific kind of utility expenditure while performing the task. Thus, \textbf{cost} establishes the relationship between nodes $N_a$ and $N_b$ and \textbf{travel time} $X_{a,b}$. When the system knows the meaning of \textbf{nodes}, \textbf{utility cost}, \textbf{travel time}, then the above sentence will convey some meaning to the system. This is precisely the method of developing semantics in the MRS in order to convey the contextual meaning instilled in travel time to the $L$ level controller. \subsection{Ontology to represent semantics} \label{ontForSem} The most traditional, flexible and useful method of representing knowledge using semantics is expressions based on subject, predicate and object logic \cite{SegaranPTSW}. Positioning the obtained knowledge is the next progressive step which is defined in philosophical terms as ontology. Ontology helps to create order and define relationships among things useful to an application. 
A domain specific ontology is developed in this work to efficiently store, access and communicate meaningful semantics across all the MRs in the system regarding the real-time travel costs of edges. There are significant advantages of implementing ontology for the already mentioned application of this work. \begin{itemize} \label{dbComparison} \item \textbf{Conceptualization of information}: An ontology is defined explicitly to form a specification for a shared conceptualization of a pool of knowledge \cite{sageOnto}, \cite{ontoGRUBER1993}, \cite{ontoSTUDER1998}. Ontologies define the concepts of the domain formally and explicitly making further modifications or reversals less cumbersome. \item \textbf{Data representation}: Ontology is based on dynamic data representation where a new instance definition is not constrained to a definite rule. Thus adding new elements is easy and fast as and when required. This virtue of ontology is essentially beneficial to share the knowledge of travel time in MRS. The number of travel time grows with the increase of operation time. Moreover, reasoners in ontology solve the problem of data parity, integrity and adhering to constraints. When a new element is added to an ontology, the reasoner performs to check the integrity of the information. This capability of ontology makes the knowledge sharing method in the MRS flexible yet robust. Data addition in MRS is not required to be done on all instances and when it is added the reasoner checks for data integrity and new information can be added smoothly without adhering to rules, previously defined. \item \textbf{Modeling technique}: Ontology possesses the capability to express semantic concepts. In case of MRS, conveying the contextual information inherent to any cost parameter like travel time requires this semantic expressiveness than just defining or extracting data. 
Moreover, the pool of knowledge gathered in the MRS through travel time or similar parameters need to be reused which is only possible through the descriptive logic models of ontology. \end{itemize} In nutshell, ontology provide an unrestricted framework to represent a machine readable reality, which assumes that information can be explicitly defined, shared, reused or distributed. Moreover, information can also be interchanged and used to make deductions or queries. Such representation is imperative for representing the travel time for reasons described above. \subsection{Application of ontology} \label{ontology} Semantics is an efficient way to communicate enough meaning which can actuate some action. The focus on representing semantic data is through entities. Semantic models are property oriented and semantic entities are members of a class. Semantic classes are defined on properties, it is also possible to define classes in terms of value of a property. A property type is \textit{object property} when it signifies some abstract property like character, contribution, virtue \textit{et~cetera}. A property type is a \textit{data property} when it signifies some literal value. On the other hand, classes can have any of the type of properties. The subclasses are defined which can avail all the properties of the superclass. The properties have range and domain. Range is the source type of a property, while domain is the destination type of the property. \begin{figure}[ht] \centering \framebox{\parbox{3.3in} \includegraphics[scale = 0.2]{ontoFinal.jpg} }} \caption{Ontology} \label{desOnto} \end{figure} Based on these concepts, the ontology stores and shares the knowledge of travel time (Figure~\ref{desOnto}). The ontology has two types of classes (\textbf{owl:Class}), \textbf{NS:Edge} and \textbf{NS:Node}, as shown in Figure~\ref{desOnto}. Thus, \textbf{NS:Edge} and \textbf{NS:Node} are subclasses of \textbf{owl:Class}. 
There are two properties a class can possess, \textbf{owl:ObjectProperty} and \textbf{owl:DatatypeProperty}. \textbf{NS:Origin} and \textbf{NS:Destination} are of types of \textbf{owl:ObjectProperty}, while \textbf{NS:tt}, \textbf{timeStamped} are of types of \textbf{owl:DatatypeProperty}. The range of \textbf{NS:Origin} is subclass \textbf{NS:Node}, being the source type of a property, while domain is \textbf{NS:Edge} being the destination type of the property. Similarly, the range of \textbf{NS:Destination} is subclass \textbf{NS:Node}, being the source type of a property, while domain is \textbf{NS:Edge} being the destination type of the property. On the other hand, the range of \textbf{NS:tt} is a float, being the source type of a property, while domain is \textbf{NS:Edge} being the destination type of the property. Similar is the case for \textbf{timeStamped}. The tupled relationships are formed by using these domain and range connections. For example, let $m$th edge be between nodes $n_g$ and $n_h$. $X$($k$) for $m$th edge at $k$ can be formed as \textbf{NS:tt} value at \textbf{timeStamped} value $k$ for the $m$th individual of subclass \textbf{NS:Edge} whose \textbf{NS:Origin} is individual $n_g$ of subclass \textbf{NS:Node} and \textbf{NS:Destination} is individual $n_h$ of subclass \textbf{NS:Node}. This semantic sentence can be disintegrated into several subject, predicate and object logic to derive the necessary $X$($k$). For example, \begin{itemize} \item individual $m$th edge is of type \textbf{NS:Edge} \item individual $n_g$ is of type \textbf{NS:Node} \item individual $n_h$ is of type \textbf{NS:Node} \item $m$th edge has \textbf{NS:Origin} $n_g$ \item $m$th edge has \textbf{NS:Destination} $n_h$ \item $m$th edge has \textbf{NS:tt} $X$($k$) \item $m$th edge has \textbf{timeStamped} $k$ \end{itemize} This way the \textbf{owl:ObjectProperty} and \textbf{owl:DatatypeProperty} of the subclass \textbf{NS:Edge} provides the $X$($k$) for the $m$th edge. 
Also, the $X$($k$) gets a context about its edge (between a pair of nodes) and time stamp. The advantage of this ontology lies in this formation, as discussed in previous Section~\ref{dbComparison}, where any new element can be inserted through these property formations without being restrained semantically. With the use of ontology, travel time $X$($k$) can be efficiently stored annotated with a pair of nodes demarcating the edge and the time stamp of traversing it. The structure illustrated in Figure~\ref{desOnto} shows the formation of ontology which is replicated in each robot in the MRS. Thus, when the information of travel cost for any edge for any time instance is required by any MR, $X$($k$) for that edge at the required time stamp can be retrieved from ontology of other MRs. This shared information from other MRs can provide as observation or historical data for those edges which either have not been yet travelled or have been travelled long back. This helps in achieving accurate estimates of $X$($k$) of these edges. For example, in Figure~\ref{explSemtc}, when $A$2 requires to estimate $X$($k$) for edges through the marked zone (marked by dotted line), the historical observation data of $X$($k$) in that zone can be obtained from the ontology of $A$1 whi h as traversed those edges in previous or nearly previous instance. The estimated values at current instance become more accurate using $X$($k$) of the same edges by $A$1 at previous instances. This information can be sought by the $L$1 level behaviors in any MR to other $L$1 level behaviors in other MRs. Thus, this ontology fulfills the mechanism of knowledge sharing inside the $L$ level behaviors. A co-operative approach in achieved through this knowledge sharing for better cost efficient decisions in each MR, which in turn enhances the cost efficiency of the MRS. 
\section{Retrieval of travel time and its use in estimation} The sharing of travel time among all MRs is implemented through an ontology in each of them, to generate better estimates of travel time in all (Section~\ref{need2share}). This section describes the methodology of using the travel times of others in the estimation process of an MR. The travel time of an MR is modelled using a bi-linear state dependent time series \cite{priestley1988}, which is described in Section~\ref{exp2} in Chapter~\ref{costPathPln} and reproduced here for convenience. The bi-linear model, provided in equation~\ref{bilinear}, models the change of travel costs depending upon all the previous travel costs.\\ \begin{align} \label{bilinear} &X(k)+a_1X(k-1)+\dots+a_jX(k-j)\\ \nonumber &=\xi_k+b_1\xi(k-1)+\dots+b_l\xi(k-l)\\ \nonumber &+\sum_{r=1}^{l}\sum_{z=1}^{j} c_{rz}\xi(k-r)X(k-z) \nonumber \end{align} The model described in equation~\ref{bilinear} is a special case of the general class of non-linear models called state dependent models (SDM) \cite{priestley1988}. In equation~\ref{bilinear}, $X$($k$) denotes the edge travel cost at $k$ and $\xi_k$ denotes the inherent variation of the edge travel cost. In equation~\ref{bilinear}, $X$($k$) depends on the previous values of $X$ and $\xi$, whose numbers are given by the variables $j$ and $l$. A fixed number of previous values of $X$ and $\xi$ is used for estimation of the current $X$, like a window which moves forward with time. The fixed size of this window is termed the \textit{regression number}; it is chosen as a design parameter and designated by $j$ and $l$. The double summation over $X$ and $\xi$ in equation~\ref{bilinear} provides the nonlinear variation of $X$ due to the state of batteries and changes in the environment. The state space form of the bi-linear model is given in equation~\ref{bStateEqn} and equation~\ref{bObsEqn}. 
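Before turning to the state-space form, the bilinear recursion of equation~\ref{bilinear} can be illustrated by a short simulation. The coefficients $a_i$, $b_i$, $c_{rz}$, the initial values and the innovation distribution below are synthetic and purely illustrative:

```python
import random

def bilinear_step(X_hist, xi_hist, xi_k, a, b, c):
    """One step of the bilinear recursion:
    X(k) = -sum_i a_i X(k-i) + xi_k + sum_i b_i xi(k-i)
           + sum_r sum_z c_rz xi(k-r) X(k-z).
    X_hist[-i] plays X(k-i) and xi_hist[-i] plays xi(k-i)."""
    j, l = len(a), len(b)
    x = xi_k - sum(a[i - 1] * X_hist[-i] for i in range(1, j + 1))
    x += sum(b[i - 1] * xi_hist[-i] for i in range(1, l + 1))
    x += sum(c[r - 1][z - 1] * xi_hist[-r] * X_hist[-z]
             for r in range(1, l + 1) for z in range(1, j + 1))
    return x

random.seed(0)
a, b = [-0.5, 0.1], [0.3, 0.2]          # regression number j = l = 2 (synthetic)
c = [[0.05, 0.0], [0.0, 0.05]]          # bilinear coefficients c_rz (synthetic)
X, xi = [10.0, 10.5], [0.0, 0.0]        # initial travel costs and innovations
for _ in range(20):
    xi.append(random.gauss(0.0, 0.5))   # current innovation xi_k
    X.append(bilinear_step(X, xi[:-1], xi[-1], a, b, c))
print(len(X), X[-1])
```

Only the window of the last $j$ values of $X$ and $l$ values of $\xi$ enters each step, matching the moving-window interpretation of the regression number.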
\begin{align} \label{bStateEqn} &s(k) = F(s(k-1))s(k-1) + V\xi_k+G\omega_{k-1}\\ \label{bObsEqn} &Y(k) = Hs(k-1) + \xi_k+ \eta_k \end{align} Equation~\ref{bStateEqn} is the state equation, which provides the next state from the current state. In equation~\ref{bStateEqn}, the state vector $s$($k$) is of the form $(1, \xi_{k-l+1},\dots, \xi_k, X_{k-j+1},\dots, X_k)^T$. The state vector contains the edge costs obtained progressively over time, from $X_{k-j+1}$ to $X_k$. The variable $\xi$ provides the values of innovation, or evolution, of the edge costs over time as the exploration proceeds. Here, $j$ denotes the number of previous edge costs to be included in the state vector, among all edges included in the path till the $k$th instance, and $l$ denotes the number of previous evolution values of these edges. The $\xi$ values are specific to each MR and originate from the changes in travel time of that particular MR. The values of $\xi$ are obtained by sampling the observation data of travel time. This observation data is obtained for the static online estimation of travel time (Section~\ref{exp1} in Chapter~\ref{costPathPln}). The $\xi$ values obtained through this method represent the projection of the change of travel time. Although this sampling method does not produce perfect data to represent the change of travel time, it is adequate for this simple case where the cost factor of only one task is considered; it should be improved for the case where cost factors of two or more tasks are to be considered. The matrices of equation~\ref{bStateEqn} are $F$, $V$ and $G$, which are explained in the following. 
\[ F = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & \vdots & 0 & 0 & \dots & 0 & 0\\ 0 & 0 & 1 & \dots & 0 & \vdots & 0 & 0 & \dots & 0 & 0\\ 0 & 0 & 0 & \dots & 1 & \vdots & 0 & 0 & \dots & 0 & 0\\ 0 & 0 & 0 & \dots & 0 & \vdots & 0 & 0 & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \dots & \vdots & \vdots & \vdots & \vdots & \dots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & 0 & \vdots & 0 & 1 & \dots & 0 & 0\\ 0 & 0 & 0 & \dots & 0 & \vdots & 0 & 0 & 1 & \dots & 0\\ 0 & 0 & 0 & \dots & 0 & \vdots & 0 & 0 & 0 & \dots & 1\\ \mu & \psi_l & \psi_{l-1} & \dots & \psi_1 & \vdots & -\phi_j & -\phi_{j-1} & \dots & -\phi_2 & -\phi_1 \end{bmatrix} \] The number of rows of $F$ is given by $(2 \times regression\_no + 1)$. The matrix $F$ introduces several new terms, $\psi$, $\phi$ and $\mu$, which are explained below. 
The $\psi$ terms are given in equation~\ref{psiterms}: \begin{equation} \label{psiterms} \psi_l = b_l + \sum_{i=1}^{l} c_{li}X(k-i) \end{equation} All the $\phi$ terms in $F$ are constants. The term $\mu$ is the average value of $X$ till the $k$th instance. Thus, the state transition matrix $F$ depends on the travel times of the previously traversed edges. The matrix $V$ is given by \[ V = \begin{bmatrix} 0 & 0 & 0 & \dots & 1 & \vdots & 0 & 0 & \dots & 1 \end{bmatrix}^T \] The number of rows of $V$ is again given by $(2 \times regression\_no + 1)$. Equation~\ref{bObsEqn} is the observation equation, which forms the observation for the current instance. The matrix $H$ in equation~\ref{bObsEqn} is given by \[ H = \begin{bmatrix} 0 & 0 & 0 & \dots & 0 & \vdots & 0 & 0 & \dots & 1 \end{bmatrix} \] The observation is formed by multiplying the matrix $H$ with the state vector $s$ and adding the innovation at the current instance. In both equation~\ref{bStateEqn} and equation~\ref{bObsEqn}, $s$($k$-1) denotes the state vector at the previous instance; it is of the form $(1, \xi_{(k-1)-l+1},\dots, \xi_{k-1}, X_{(k-1)-j+1},\dots, X_{k-1})^T$. The $X$ values in this vector are the travel times obtained for the edges which are already explored and included in the path. However, the available travel times may not be enough to fill all the entries up to the previous instance. The travel times for the previous instance which are not available are therefore gathered from other MRs: the relevant edge costs are queried in the ontologies of the other MRs, and after retrieval the data are filled into the state vector for both equation~\ref{bStateEqn} and equation~\ref{bObsEqn}. Both equations contain $\xi$, which corresponds to the innovation or change of travel time. Thus, this factor plays the role of projecting the travel time of the particular MR at a particular instance. 
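As a concrete check of the observation equation, the following sketch forms $s$($k$-1) and $Y$($k$) with synthetic numbers ($j = l = 3$; every numeric value is made up for the illustration):

```python
# Synthetic illustration of s(k-1) and the observation equation;
# j = l = 3 and all numeric values below are illustrative.
j = l = 3
xi = [0.12, -0.05, 0.08]          # xi(k-3), xi(k-2), xi(k-1)
X = [10.0, 10.75, 10.5]           # X(k-3),  X(k-2),  X(k-1)
s_km1 = [1.0] + xi + X            # s(k-1) = (1, xi_{k-3}, ..., X_{k-1})^T

H = [0.0] * (2 * j) + [1.0]       # H = [0 0 ... 0 1] selects the last entry
Hs = sum(h * s for h, s in zip(H, s_km1))
assert Hs == X[-1]                # the product H s(k-1) is exactly X_{k-1}

xi_k, eta_k = 0.25, 0.125         # current innovation and observation error
Y_k = Hs + xi_k + eta_k           # Y(k) = H s(k-1) + xi_k + eta_k
print(Y_k)                        # -> 10.875
```

The assertion makes explicit that $H$ merely extracts $X_{k-1}$ from the state, so the observation is the previous travel time plus the current innovation and error.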
In equation~\ref{bStateEqn}, this factor contributes not only to the formation of the state but also to the prediction of the next state. In equation~\ref{bObsEqn}, $\xi_k$ is added to the product of $H$ and $s$($k$-1) to form the observation. The product of $H$ and $s$($k$-1) is $X_{k-1}$; the addition of $X_{k-1}$, $\xi_k$ and the error term $\eta_k$ produces the observation $Y$($k$). In this way, the travel times of other MRs are used in the model for estimation of travel time. The estimation is done by Kalman filtering. The equations obtained after applying Kalman filtering to this bi-linear model are explained in Section~\ref{exp2} in Chapter~\ref{costPathPln}. The same process is continued to obtain the travel times of the relevant edges. These travel times are the instruments to decide the path using Dijkstra's algorithm. The whole process is summarized in Algorithm~\ref{sharingDiijkstra}. \begin{algorithm} \caption{Dijkstra's algorithm using dynamic estimation of travel time} \label{sharingDiijkstra} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \underline{Initialise\_Single\_Source} $(V,E,s)$\\%\; \Input{$V$-list of nodes, $E$-list of edges, $s$-source node} \Output{$d$[$v$]-attribute for each node, $\pi$[$v$]-predecessor of each node} \SetAlgoLined \For{each $x_i \in V$}{ $d$[$x_i$] = infinity\\ $\pi$[$x_i$] = NIL\\ } $d$[$s$] = 0\\ \underline{findedgedCost} $(u,v,j)$\\%\; \Input{$u$-current node, $v$-neighbor node} \Output{$w$-estimated travel\_time (cost) from $u$ to $v$} $findPredEdge$($u$)\\ $prev_x$ := $x_{prevEdge}$\\ $w$ = $estimateKF$($prev_x$,$j$,$X$)\\ \underline{findPredEdge} $(u)$\\%\; \Input{$u$-current node} \Output{$prevEdge$-edge connecting $u$ and $predU$} $prevEdge$ = edge between $u$ and $predU$\\ \underline{estimateKF} $(prev_x,j,X)$\\%\; \Input{$prev_x$-$x_{j-1}$, $j$-instance for estimation, $X$-observation variable} \Output{$x_j$-travel cost at current $j$ for current edge} Apply Kalman filtering to find $s_j$ and return 
$x_j$ \underline{Relax} $(u,v,w)$\\%\; \Input{$u$-current node, $v$-neighbor node, $w$-estimated travel\_time (cost) from $u$ to $v$} \Output{$d$[$v$]-attribute for each node, $\pi$[$v$]-predecessor of each node} \If{$d$[$v$] $> d$[$u$] + $w$($u,v$)}{ $d$[$v$] = $d$[$u$] + $w$($u,v$)\\ $\pi$[$v$] = $u$\\ } \underline{Main} $(V, E, w, s)$\\%\; \Input{$V$-list of nodes, $E$-list of edges, $w$-edge weight matrix, $s$-source node} \Output{$\pi$[$v$]-predecessor of each node} $P$ := NIL\\ $Q$ := $V$\\ $j$ := 0\\ \While{$Q \neq \emptyset$}{ $j$ = $j$+1\\ $u$ := Extract min ($Q$)\\ $P$ := $P \cup \{u\}$\\ \For{each $v \in Adj$[$u$]}{ $w$ = $findedgedCost$($u$,$v$,$j$)\\ $relax$($u$,$v$,$w$)\\ } } \end{algorithm} \section{Experiment and Results} \label{res} This work proposes a behavior-based control method which uses online estimated \textit{travel time} as a decision parameter for computing optimal routes between pairs of ports. \subsection{Experiment-I: Behavior-based decentralized control system for MRS based logistics} \label{exp1} A prototype multi-robot system for logistics in a factory is developed based on the proposed behavior-based decentralized planning and control method. The experimentation platform is briefly described in this section to provide an elaborate explanation of the experiment. A scaled-down prototype of an automated indoor logistics system is built. \begin{figure}[h] \centering\includegraphics[scale = 0.36]{envrn.jpg} \caption{Environment of MRS} \label{envrn} \end{figure} An environment has been developed using uniform-sized boxes, as shown in Figure~\ref{envrn}, for the robots to work in; each robot performs a single task at a time and is hence named a single-task robot. The boxes create a closed labyrinth path to navigate, and designated ports are marked on the boxes. The floor is described in three different topological maps. These maps are provided in Figure~\ref{3maps}. 
\begin{figure*}[t] \centering\includegraphics[scale = 0.28]{newMapsOnt.jpg} \caption{The three topology maps} \label{3maps} \end{figure*} The control structure is the same in each MR and consists of two layers of sub-sumption structure (Section~\ref{controller}). The lowest $L$0.0 level is implemented in the body of each MR, inside the beagle board which forms the main processor of each robot. The middle $L$0.1 level and $L$1 are implemented in desktop PCs, where each level is separated for each MR. Thus, the entire two-layer control architecture is formed through the designated behaviors (Fig~\ref{masLayers}) for each MR in the system. The MRs carry out the tasks of pick-up or drop and of carrying materials between different pairs of ports. The $L$1 level controller in each MR is responsible for the planning decisions to make them reach the designated ports. The optimal path between different pairs of ports is found using Dijkstra's algorithm. These functions are carried out through the \textit{DECISION BEHAVIOR} (Fig~\ref{masLayers}) in the $L$1 level. Dijkstra's algorithm uses $X$($k$) as the weight of an edge at every step of forming the path. $X$($k$) is estimated in real time by Kalman filtering at the required $k$ in each MR. It was stated in Section~\ref{intro} that online estimation of $X$($k$) requires an observation of the same at $k$-1. In this experiment, these observations are gathered from the beginning of the first decision making and are used in subsequent calls for estimation. However, an MR may need to estimate the travel time of one or more edges which it did not traverse previously or has traversed long back. In this case, the observation of $X$($k$) for the concerned edge is not available. In this experiment, an observation of $X$($k$) for the concerned edge at $k$-1 could not always be obtained, and thus the available observation for the concerned edge is used. Thus, the $X$($k$)s are estimated solely based on the historical observations of the concerned MR. 
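The interplay between Dijkstra's algorithm and on-demand cost estimation can be sketched as follows. The estimator callback stands in for the Kalman-filter estimate of $X$($k$); the toy graph and its costs are illustrative, not taken from the experiment:

```python
import heapq

def dijkstra_dynamic(nodes, adj, source, estimate_cost):
    """Dijkstra's algorithm where each edge weight is supplied on demand by
    an estimator (standing in for the Kalman-filter estimate of X(k)).
    adj: node -> list of neighbour nodes; estimate_cost(u, v, k) -> weight."""
    d = {v: float("inf") for v in nodes}
    pred = {v: None for v in nodes}
    d[source] = 0.0
    heap, k, done = [(0.0, source)], 0, set()
    while heap:
        du, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        k += 1                                  # estimation instance
        for v in adj.get(u, []):
            w = estimate_cost(u, v, k)          # estimated travel time X(k)
            if d[u] + w < d[v]:                 # relaxation step
                d[v] = d[u] + w
                pred[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, pred

# Toy graph; fixed costs stand in for the online estimates.
adj = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
costs = {("s", "a"): 1.0, ("s", "b"): 4.0, ("a", "t"): 2.0, ("b", "t"): 1.0}
d, pred = dijkstra_dynamic(adj.keys(), adj, "s", lambda u, v, k: costs[(u, v)])
print(d["t"], pred["t"])  # -> 3.0 a
```

In the actual system the callback would invoke the Kalman filter, so the weight of an edge can differ between planning instances.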
Dijkstra's algorithm uses the estimated $X$($k$)s for each edge. It then chooses the predecessor node of the current node, from which arrival at the current node becomes least cost-expending. This way the optimal path is formed using $X$($k$). These paths are shown in Section~\ref{res2}. \subsection{Results-I} \label{res1} The resultant paths (Figure~\ref{outputPath}) form a high-level command or macro-command to be transferred to the next lower level $L$0.1. \begin{figure*}[t] \centering\includegraphics[scale = 0.28]{paths3maps.jpg} \caption{The resultant paths in the three maps} \label{outputPath} \end{figure*} In $L$0.1, these high-level commands or paths are disintegrated into macro-actions like \textbf{turn-left}, \textbf{turn-right}, \textbf{move-ahead}, \textbf{stop-left}, \textbf{stop-right}, \textit{et~cetera}. This disintegration happens through the functions of the three behaviors in the $L$0.1 level (Fig~\ref{masLayers}). Further, these macro-actions are processed in the $L$0.1 level to produce simple low-level commands like \textbf{GO-$\langle$angle$\rangle$-$\langle$distance$\rangle$} which can be easily understood by the lowest level controller $L$0.0. These low-level commands enable the behaviors in $L$0.0 to perform accurate and prompt servo actions to generate movements in the MR. This process is shown in Figure~\ref{pathByMR}, where the MR traverses the path. Figure~\ref{pathByMR} shows different sections of the path in different steps\footnote{Video of path traversal is available at .....} from beginning (Part (A)) to end (Part (H)). In this way, the decentralized control is performed, where each MR is the master of its own decisions, a facility enabled by the two-layer sub-sumption control architecture based on behaviors. \subsection{Experiment-II} \label{exp2} In this experiment, ontological data sharing is incorporated. The MRs are made to traverse repeatedly between different pairs of nodes. 
The pairs are designated in advance from a list, in order to suit the carriage necessity. The route computation between different pairs of nodes is done as in Experiment~I in Section~\ref{exp1}, using online estimated values of $X$($k$) as edge weights. Online estimation of $X$($k$) at $k$ requires an observation of the same at ($k$-1). In both Experiment~I and Experiment~II, these observations are gathered from the beginning of the first decision making and are used in subsequent calls for path planning. However, when an MR needs to estimate the travel time of one or more edges which it did not traverse previously, the available observation of $X$ for the concerned edge is from some previous instance, which in many cases may not be ($k$-1) or close to it. These observations of the distant past are used in Experiment~I for estimating $X$($k$) at $k$, and thus generate less accurate estimates. This is mitigated in Experiment~II by sharing the observation value from another MR which has travelled that edge at a nearly previous instance. This way, the observation of $X$ for the concerned edge at ($k$-1), or close to it, is available during estimation at $k$. The knowledge sharing contributes to estimating the travel cost of an unexplored edge at the current instance in an MR. The behavior in the $L$1 layer of each robot can ask the $L$1 level of other neighboring robots for observation values of $X$($k$) whenever required. The $X$($k$)s are estimated for the necessary edges using observations either from the MR itself or from its neighbors. Meanwhile, before deployment of the behavior-based system and ontology, some legacy data of travel times for different edges at different instances are obtained. This pool of data, gathered by recording the travel times during the operation of the MRS, corresponds to the experiences of the MRs in the system. The estimated $X$($k$) values are compared to these legacy data to measure the accuracy, which is discussed in Section~\ref{res2}. 
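The sharing step can be sketched as follows: each MR keeps observations keyed by (edge, instance), and when its own latest observation for an edge is stale it asks a neighbour's store for a fresher one. The store contents and robot names below are illustrative:

```python
# Schematic sketch of observation sharing between MRs (values illustrative):
# each MR keeps observations {(edge, k): X(k)}.
def latest(store, edge):
    """Latest observation for an edge as (k, X(k)), or None if absent."""
    obs = [(k, x) for (e, k), x in store.items() if e == edge]
    return max(obs) if obs else None    # max over k, since tuples sort by k first

def best_observation(own, neighbours, edge):
    """Freshest observation of the edge across the MR and its neighbours."""
    candidates = [latest(s, edge) for s in [own] + neighbours]
    candidates = [c for c in candidates if c is not None]
    return max(candidates) if candidates else None

a1 = {("e7", 2): 11.9, ("e7", 9): 12.4}   # A1 traversed e7 recently (k = 9)
a2 = {("e7", 1): 11.0}                    # A2's own data for e7 is stale
print(best_observation(a2, [a1], "e7"))   # -> (9, 12.4)
```

In the implementation this lookup is a query against the neighbour's ontology rather than a Python dictionary, but the selection rule, taking the observation with the largest time stamp, is the same.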
On the other hand, the estimated $X$($k$) values of the relevant edges are used by Dijkstra's algorithm as edge weights. These estimates are the main instrument at every step of deciding the predecessor of the current node: Dijkstra's algorithm makes a node the predecessor of the current one when the weight or cost from the former to the latter becomes minimum. Thus, an accurately estimated value of $X$($k$) plays a vital role in deciding the predecessor of the current node, and in turn in deciding the path. More accurate estimates contribute to generating paths with less total cost. Optimal paths are obtained with (Experiment~II) and without (Experiment~I) sharing the $X$($k$) values. These paths are compared in Section~\ref{res2}. \subsection{Results-II} \label{res2} This section tabulates the results of Experiment~II. The $X$($k$)s were estimated for each required edge at every step of Dijkstra's algorithm. These estimates are obtained using a non-linear model and Kalman filtering, with observation data being shared from other MRs. They are compared with those of Experiment~I, where estimation is done without sharing data among MRs. Figure~\ref{accuracy} plots the comparison of the estimated $X$($k$) at different $k$ for different edges against the legacy data obtained. This section also illustrates the comparison of the paths and their costs obtained with and without sharing the travel times in the MRS. The path planning is done for 100 repetitions while increasing the $regression\_no$ from 4 to 7. \begin{figure}[h] \centering\includegraphics[scale = 0.25]{allMapBars.jpg} \caption{Average of total path costs} \label{pathCostsMap} \end{figure} Figure~\ref{pathCostsMap} illustrates the average total path costs of 100 paths obtained in both Experiment~I and Experiment~II for four MRs operating in all three maps (Figure~\ref{3maps}). 
The average path costs of 100 paths obtained by sharing (Experiment~II) and not sharing (Experiment~I) travel times are plotted for each regression number, namely $Reg$4, $Reg$5, $Reg$6 and $Reg$7. For each regression number in Map~1, the average path cost obtained through collective intelligence is 40\% less than the average path cost obtained without it. For each MR, the average total path cost is almost the same, or varies within a small margin, as the regression number increases. The reason for this is the lack of variation in environmental conditions: the travel times vary with battery condition and floor, and no other factor affecting travel time could be incorporated in the laboratory set-up. On the other hand, the saving in total path cost is the same for all MRs in a single map. This signifies that the paths found through collective intelligence in each MR are 40\% more cost-efficient than the paths obtained without it. Thus, collective intelligence using travel time helps to find more cost-efficient paths in an MRS. The average path costs decrease in Experiment~II because, through collective intelligence, more relevant observations of travel times are obtained in each MR. These values are instrumental in obtaining more accurate estimates of travel time. As a matter of fact, more accurate estimated values result in more optimal paths with less cost than those obtained in Experiment~I. A few examples of these paths are discussed in the next section. Moreover, the saving in total path cost is consistent across all the maps. Thus, the travel time is estimated better due to the sharing of travel times from other MRs, and this holds for all the representative structures of the floor. This signifies that more accurate estimation is possible through collective intelligence, independently of the structure of the floor. 
\subsection{Analysis of obtained paths} This section illustrates a few paths obtained in Experiment~I and Experiment~II under the same conditions of regression number and MR. \begin{figure}[h] \centering\includegraphics[scale = 0.37]{pathsA1Map1Ont.jpg} \caption{Paths found by MR~1 in Map~1} \label{pathA1M1} \end{figure} Figure~\ref{pathA1M1} plots two paths, $P_A$ and $P_B$, obtained in Map~1 for MR~1. $P_A$ and $P_B$ have the same source and destination. $P_A$ is obtained in Experiment~I in the third iteration of path planning, while $P_B$ is obtained in Experiment~II at the same iteration. Thus, they are both obtained at the same battery level and in the same map. Still, the two paths are different and have different total path costs. As described in Section~\ref{probContr}, $C_P$ denotes the cost of a path. $C_{PA}$ and $C_{PB}$ denote the costs of $P_A$ and $P_B$ of Figure~\ref{pathA1M1}, respectively. The results show $C_{PA}$ = 66.5326 and $C_{PB}$ = 39.5385. Thus, \begin{align} C_{PB} < C_{PA}~by~40\% \nonumber \end{align} \begin{figure}[h] \centering\includegraphics[scale = 0.37]{pathsA2Map2Ont.jpg} \caption{Paths found by MR~2 in Map~2} \label{pathA2M2} \end{figure} Figure~\ref{pathA2M2} plots two paths, $P_C$ and $P_D$, obtained in Map~2 for MR~2. $P_C$ and $P_D$ have the same source and destination. $P_C$ is obtained in Experiment~I in the third iteration of path planning, while $P_D$ is obtained in Experiment~II at the same iteration. Thus, they are both obtained at the same battery level and in the same map. Still, the two paths are different and have different total path costs. $C_{PC}$ and $C_{PD}$ denote the costs of $P_C$ and $P_D$ of Figure~\ref{pathA2M2}, respectively. The results show $C_{PC}$ = 58.0729 and $C_{PD}$ = 33.5707. 
Thus, \begin{align} C_{PD} < C_{PC}~by~42\% \nonumber \end{align} From these two comparisons, it is evident that after sharing the travel times among the MRs, the paths obtained in each MR improve and are of lower cost than those obtained without the sharing. \section{Discussion and Conclusion} \label{discConc} A new method to compute the cost parameter to be used in the transportation and automation industry is proposed. With this new method, the parameters now reflect the states of the individual robots, their batteries and their environment. They usually arise locally at the robots as a result of the performance of tasks. In planning, the current state of the robots and the environment plays a crucial role. The usual practice is to decide a path using Euclidean distance, and a path is considered optimal when it has optimal length or distance. Many industries (like BlueBotics \cite{Blue:2009}) use topology maps to describe the floor and employ a depth-first search to generate a length-optimal path. However, the true cost of traversing a path is not accounted for in this case. The cost involved in traversing a path arises from the condition of the floor, the state of the batteries and the mechanical parts of the robots. It is intuitive that an edge of the same length will incur more cost on a rough floor than on a smooth one. Thus, travel time is a better tool to decide a path than heuristics based on Euclidean distance. Without sharing, the decision making of each robot is based solely on its own travel costs. In the dynamic estimation process, there is the possibility of not being able to learn observations of a few edges due to lack of experience; also, the observation gathered for a particular edge may be too old to be relevant at the current instance of estimation. To address this, the sharing of travel time is incorporated, enabling data of travel time to pass from one MR to others. This enables each MR to generate more accurate estimates of travel times. \bibliographystyle{plain}
\section{Introduction} \label{sec-intro} In this paper, we consider the classification and embedding problems for matchbox manifolds, from the viewpoint of Lipschitz pseudogroups, and develop invariants which are obstructions to realizing a matchbox manifold as a minimal set. Matchbox manifolds are a class of continua that occur naturally in the study of dynamical systems, and in foliation theory as exceptional minimal sets. The overall goal of this research program is to develop tools for the classification of these spaces, and to understand which matchbox manifolds are homeomorphic to exceptional minimal sets for $C^r$-foliations, for $r \geq 1$. We first discuss an important motivation for interest in this program of study. \begin{prob}[Sondow \cite{Sondow1975}] \label{problem4} When is a smooth connected $n$-manifold $L$ without boundary diffeomorphic to a leaf of a foliation $\F_M$ of a compact smooth manifold $M$? \end{prob} For the case where $L$ has dimension $n=1$, the problem is trivial. Also, for dimension $n=2$, Cantwell and Conlon showed in \cite{CC1987} that any surface without boundary is diffeomorphic to a leaf of a smooth codimension-$1$ foliation of a compact $3$-manifold. On the other hand, Ghys \cite{Ghys1985} and Inaba, Nishimori, Takamura and Tsuchiya \cite{INTT1985} constructed $3$-manifolds which are not homeomorphic to a leaf of any codimension-$1$ foliation of a compact manifold. Souza and Schweitzer \cite{SouzaSchweitzer2013} give further examples, in higher dimensions, of manifolds which cannot be leaves in codimension one. The non-embedding examples by these authors are essentially the only known results on Problem~\ref{problem4} in this generality, and they are for codimension-one foliations. There is a natural variant of Problem~\ref{problem4}, posed in the 1974 ICM address by Sullivan \cite{Sullivan1975}: \begin{prob}\label{problem5} Let $L$ be a complete Riemannian smooth manifold without boundary. 
When is $L$ \emph{quasi-isometric} to a leaf of a $C^r$-foliation $\F_M$ of a compact smooth manifold $M$, for $r\geq 1$? \end{prob} A quasi-isometric embedding of $L$ must preserve its quasi-invariant geometric properties, which can be used to construct obstructions to such an embedding. For example, Cantwell and Conlon studied in \cite{CC1977}, \cite{CC1978} how the asymptotic behavior of the metric on $L$ is related to the dynamics of the leaf in a codimension-one foliation. The work of Phillips and Sullivan in \cite{PS1981} introduced the asymptotic Euler class of a non-compact Riemannian $2$-manifold $L$ which has subexponential volume growth rate, and showed this can be used as an obstruction to a quasi-isometric embedding of $L$ as a leaf, depending on the topology of the ambient manifold $M$. This result was generalized by Januszkiewicz in \cite{Januszkiewicz1984} to obtain obstructions in terms of the \emph{asymptotic Pontrjagin numbers} of an open Riemannian $n$-manifold with subexponential volume growth rate, for $n =4k$ with $k \geq 1$. In an alternate direction, Attie and Hurder in \cite{AttieHurder1996} introduced an invariant of open manifolds, its ``leaf entropy'', or ``asymptotic leaf complexity'', and gave examples of open manifolds with exponential volume growth rate that cannot be quasi-isometric to a leaf in a foliation of any codimension. Examples of surfaces with exponential growth rate that cannot be quasi-isometric to a leaf were constructed by Schweitzer in \cite{Schweitzer1995} and Zeghib in \cite{Zeghib1994}, using a variant of the approach in \cite{AttieHurder1996}. The work of Schweitzer \cite{Schweitzer2011} exhibits further examples of complete Riemannian manifolds which are not quasi-isometric to a leaf in any codimension-one foliation. The work of the author and Lukina \cite{HL2014} generalizes the results of \cite{AttieHurder1996} to the broader class of matchbox manifolds. 
The non-embedding results mentioned above rely on the simple strategy that a leaf in a compact foliated manifold $M$ has some type of recurrence property, and the idea is to formulate such a property, \emph{intrinsic} to $L$, which cannot be satisfied if $L$ is homeomorphic to a leaf, or possibly quasi-isometric to a leaf. Each such criterion for \emph{non-recurrence} then yields non-embeddability results. A leaf $L$ contained in a minimal set ${\mathfrak{M}}$ for a foliation $\F_M$ on a compact manifold $M$ has much stronger recurrence properties. For example, Cass observed in \cite{Cass1985} that such a leaf must be ``quasi-homogeneous'', and that this property is an invariant of the quasi-isometry class of a Riemannian metric on $L$. He consequently gave examples of complete Riemannian manifolds, including leaves of foliations, which cannot be quasi-isometric to a leaf in a minimal set. For example, Cass showed that any non-compact leaf in a Reeb foliation of ${\mathbb S}^3$ cannot be realized as a leaf of a minimal set in any codimension. The question raised by Cass' work suggests a variant of the above questions, where we consider the closure ${\mathfrak{M}} = \overline{L}$ of a non-compact leaf $L \subset M$, where ${\mathfrak{M}}$ has the structure of a \emph{foliated space}. The formal definition of a foliated space ${\mathfrak{M}}$ was given by Moore and Schochet \cite[Chapter 2]{MS2006}, as part of their development of a general formulation of the Connes measured leafwise-index theorem \cite{Connes1994}. Candel and Conlon \cite[Chapter 11]{CandelConlon2000} further developed the theory of foliated spaces, and gave many interesting examples. We are particularly interested in those cases where the transverse model space for the foliated space ${\mathfrak{M}}$ is totally disconnected. 
A compact connected foliated space ${\mathfrak{M}}$ with totally disconnected transversals is called a ``matchbox manifold'', in accordance with terminology introduced in continua theory \cite{AO1991,AO1995,AM1988}. A matchbox manifold with $2$-dimensional leaves is a lamination by surfaces, as defined in \cite{Ghys1999,LM1997}. If all leaves of ${\mathfrak{M}}$ are dense, then it is called a \emph{minimal matchbox manifold}. A compact minimal set ${\mathfrak{M}} \subset M$ for a foliation $\F_M$ on a manifold $M$ yields a foliated space with foliation $\F = \F_M | {\mathfrak{M}}$. If the minimal set is exceptional, then ${\mathfrak{M}}$ is a minimal matchbox manifold. The formal definition and some basic properties of matchbox manifolds are discussed in Section~\ref{sec-foliated}. The leaves of the foliation $\F$ of a foliated space ${\mathfrak{M}}$ admit a smooth Riemannian metric, and for each leaf $L \subset {\mathfrak{M}}$ there is a well-defined quasi-isometry class of Riemannian metrics on $L$. The obstructions used in the works above, to show that a particular Riemannian manifold $L$ cannot be quasi-isometric to a leaf of a foliation of a compact manifold $M$, also provide obstructions to realizing $L$ as a leaf in a compact foliated space ${\mathfrak{M}}$. The following problem is addressed in this work: \begin{prob}\label{problem7} Let ${\mathfrak{M}}$ be a minimal matchbox manifold. Does there exist a homeomorphism of ${\mathfrak{M}}$ to an exceptional minimal set of a $C^r$-foliation $\F_M$ of a manifold $M$, for $r \geq 1$? \end{prob} When such an embedding exists, then each leaf $L \subset {\mathfrak{M}}$ is quasi-isometric to a leaf of $\F_M$. 
If the leaf $L$ is dense in ${\mathfrak{M}}$ and ${\mathfrak{M}}$ is non-embeddable, then this gives a criterion for the non-embedding of $L$, one that depends not just on the intrinsic geometry and topology of $L$, but includes ``extrinsic properties'' of $L$ in ${\mathfrak{M}}$, such as the transverse geometry and dynamics of the foliated space ${\mathfrak{M}}$. Observe that if ${\mathfrak{M}}$ is an invariant set for a $C^r$-foliation $\F_M$ of a Riemannian manifold $M$, where $r \geq 1$, then the holonomy maps for the foliation $\F$ on ${\mathfrak{M}}$ are induced by the holonomy maps of $\F_M$, and there is a metric on the transversals to ${\mathfrak{M}}$ such that the holonomy maps of $\F$ are Lipschitz, as discussed in Section~\ref{sec-Lipschitz}. Problem~\ref{problem7} can thus be reformulated as follows. \begin{prob}\label{problem8} Let ${\mathfrak{M}}$ be a Lipschitz matchbox manifold. Find obstructions to the existence of a foliated Lipschitz embedding $\iota \colon {\mathfrak{M}} \to M$, where $M$ has a $C^r$-foliation $\F_M$ with $r \geq 1$. \end{prob} This problem can also be considered as asking for a characterization of the Lipschitz structures which can arise for the transverse Cantor sets to exotic minimal sets in $C^r$-foliations. For example, in the case of a foliation obtained from the suspension of a diffeomorphism of the circle ${\mathbb S}^1$, McDuff studied in \cite{McDuff1981} the question: which Cantor sets embedded in ${\mathbb S}^1$ are the invariant sets for $C^{1+\alpha}$-diffeomorphisms of the circle? The general observations and results of this paper are combined in Section~\ref{sec-nonembedding} to yield the following non-embedding results. \begin{thm}\label{thm-noLip1} There exist Lipschitz matchbox manifolds which are not homeomorphic to the minimal set of any $C^1$-foliation. \end{thm} \begin{thm}\label{thm-noLip2} There exist minimal matchbox manifolds which are not homeomorphic to the minimal set of any $C^1$-foliation. 
\end{thm} Many further questions and problems are posed throughout the text, which is organized as follows. Section~\ref{sec-foliated} collects together some definitions and results concerning matchbox manifolds that we use in the paper. More details can be found in the works \cite{CandelConlon2000,ClarkHurder2013,CHL2013a,CHL2013b,MS2006}. Section~\ref{sec-foliated} is rather dense, and can be skipped if the reader is only interested in Cantor pseudogroup actions. Section~\ref{sec-dynamics} gives some definitions concerning the dynamical properties of Cantor pseudogroup actions. Then in Section~\ref{sec-Lipschitz} the Lipschitz property for pseudogroup actions is introduced. The main result of this section is a proof that an embedding of a matchbox manifold as an exceptional minimal set in a $C^1$-foliation yields a Lipschitz structure on it. Section~\ref{sec-foliations} discusses some examples from the literature of embeddings of matchbox manifolds as exceptional minimal sets for foliations. In Section~\ref{sec-solenoids}, the notions of \emph{normal}, \emph{weak} and \emph{generalized} solenoids are introduced. These are basic examples for the study of minimal matchbox manifolds. Section~\ref{sec-fusion} introduces an operation on minimal matchbox manifolds, called their ``fusion'', which amalgamates their pseudogroups. The fusion process is inspired by the method introduced by Lukina in \cite{Lukina2012}. The fusion process is used to construct the examples in Section~\ref{sec-nonembedding} of minimal pseudogroup Cantor actions, which cannot be homeomorphic to an exceptional minimal set in any $C^1$-foliation. Finally, in Section~\ref{sec-classification}, \emph{Morita equivalence} and \emph{Lipschitz equivalence} of minimal Lipschitz pseudogroups are introduced. The problem of the classification of matchbox manifolds up to Lipschitz equivalence is considered for the special case of weak solenoids. 
\section{Foliated spaces and matchbox manifolds} \label{sec-foliated} We recall the notions of foliated spaces and matchbox manifolds, and their basic properties. The book by Moore and Schochet in \cite[Chapter 2]{MS2006} introduced foliated spaces, as part of their development of a general form of the Connes measured leafwise index theorem. The textbook by Candel and Conlon \cite[Chapter 11]{CandelConlon2000} further develops the theory, with many examples. Matchbox manifolds are a special class of connected foliated spaces, which have totally disconnected transversal spaces. The papers \cite{ClarkHurder2011,ClarkHurder2013,CHL2013a,CHL2013b,CHL2013c,CHL2014} discuss the topology and dynamics of \emph{matchbox manifolds}, especially with the goal of classifying these spaces up to homeomorphism. First we recall some basic notions. A topological space $\Omega$ is a \emph{continuum}, if it is \emph{compact, connected, and metrizable}. A Cantor set ${\mathfrak{X}}$ is a non-empty, compact, perfect and totally disconnected set. A set $V \subset {\mathfrak{X}}$ is \emph{clopen} if it is both open and closed, and a topological space is totally disconnected if and only if it admits a basis for its topology consisting of clopen sets. The definition of a foliated space is modeled on the definition of a smooth foliation. \begin{defn} \label{def-fs} A \emph{foliated space of dimension $n$} is a compact metric space ${\mathfrak{M}}$, such that there exists a separable metric space ${\mathfrak{X}}$, and for each $x \in {\mathfrak{M}}$ there is a compact subset ${\mathfrak{X}}_x \subset {\mathfrak{X}}$, an open subset $U_x \subset {\mathfrak{M}}$, and a homeomorphism defined on the closure ${\varphi}_x \colon {\overline{U}}_x \to [-1,1]^n \times {\mathfrak{X}}_x$ such that ${\varphi}_x(x) = (0, w_x)$ where $w_x \in int({\mathfrak{X}}_x)$. 
Moreover, it is assumed that each ${\varphi}_x$ admits an extension to a foliated homeomorphism ${\widehat \varphi}_x \colon {\widehat U}_x \to (-2,2)^n \times {\mathfrak{X}}_x$ where ${\overline{U}}_x \subset {\widehat U}_x$. The space ${\mathfrak{X}}_x$ is called the \emph{local transverse model} at $x$. \end{defn} Let $\pi_x \colon {\overline{U}}_x \to {\mathfrak{X}}_x$ denote the composition of ${\varphi}_x$ with projection onto the second factor. For $w \in {\mathfrak{X}}_x$ the set ${\mathcal P}_x(w) = \pi_x^{-1}(w) \subset {\overline{U}}_x$ is called a \emph{plaque} for the coordinate chart ${\varphi}_x$. We adopt the notation, for $z \in {\overline{U}}_x$, that ${\mathcal P}_x(z) = {\mathcal P}_x(\pi_x(z))$, so that $z \in {\mathcal P}_x(z)$. Note that each plaque ${\mathcal P}_x(w)$ for $w \in {\mathfrak{X}}_x$ is given the topology so that the restriction ${\varphi}_x \colon {\mathcal P}_x(w) \to [-1,1]^n \times \{w\}$ is a homeomorphism. Then $int ({\mathcal P}_x(w)) = {\varphi}_x^{-1}((-1,1)^n \times \{w\})$. Let $U_x = int ({\overline{U}}_x) = {\varphi}_x^{-1}((-1,1)^n \times int({\mathfrak{X}}_x))$. Note that if $z \in U_x \cap U_y$, then $int({\mathcal P}_x(z)) \cap int( {\mathcal P}_y(z))$ is an open subset of both ${\mathcal P}_x(z) $ and ${\mathcal P}_y(z)$. The collection of sets $${\mathcal V} = \{ {\varphi}_x^{-1}(V \times \{w\}) \mid x \in {\mathfrak{M}} ~, ~ w \in {\mathfrak{X}}_x ~, ~ V \subset (-1,1)^n ~ {\rm open}\}$$ forms the basis for the \emph{fine topology} of ${\mathfrak{M}}$. The connected components of the fine topology are called \emph{leaves}, and define the foliation $\F$ of ${\mathfrak{M}}$. Let $L_x \subset {\mathfrak{M}}$ denote the leaf of $\F$ containing $x \in {\mathfrak{M}}$. 
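As an informal aside, the local product structure of a foliation chart can be modeled computationally. The following Python sketch (a toy model constructed here for illustration only; the names \texttt{chart}, \texttt{pi} and \texttt{plaque} are ours, not from the literature) represents a one-dimensional chart $\overline{U}_x \cong [-1,1] \times {\mathfrak{X}}_x$ with the transverse factor approximated by depth-$3$ binary addresses, and recovers the plaques as the fibers of the transverse projection.

```python
# Toy model of a foliation chart U ≅ [-1,1] × X, where X is a depth-3
# approximation of a Cantor transversal by binary addresses (hypothetical).
from itertools import product

DEPTH = 3
X = ["".join(bits) for bits in product("01", repeat=DEPTH)]  # clopen "points"
GRID = [i / 10 for i in range(-10, 11)]                      # sample of [-1,1]

chart = [(t, w) for t in GRID for w in X]   # sampled points of the chart

def pi(point):
    """Transverse projection pi_x: chart -> X (second factor)."""
    return point[1]

def plaque(w):
    """The plaque P_x(w) = pi^{-1}(w): a copy of [-1,1] at address w."""
    return [p for p in chart if pi(p) == w]

# Each plaque is a copy of the sampled interval.
assert len(plaque("010")) == len(GRID)
# Distinct plaques are disjoint: the chart is the disjoint union of plaques.
assert sum(len(plaque(w)) for w in X) == len(chart)
```

Each plaque is a copy of the sampled interval, and the chart decomposes as the disjoint union of its plaques, mirroring the decomposition of ${\overline{U}}_x$ into the sets ${\mathcal P}_x(w)$.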
\begin{defn} \label{def-sfs} A \emph{smooth foliated space} is a foliated space ${\mathfrak{M}}$ as above, for which there exists a choice of local charts ${\varphi}_x \colon {\overline{U}}_x \to [-1,1]^n \times {\mathfrak{X}}_x$ such that for all $x,y \in {\mathfrak{M}}$ with $z \in U_x \cap U_y$, there exists an open set $z \in V_z \subset U_x \cap U_y$ such that ${\mathcal P}_x(z) \cap V_z$ and ${\mathcal P}_y(z) \cap V_z$ are connected open sets, and the composition $\displaystyle \psi_{x,y;z} \equiv {\varphi}_y \circ {\varphi}_x ^{-1}\colon {\varphi}_x({\mathcal P}_x (z) \cap V_z) \to {\varphi}_y({\mathcal P}_y (z) \cap V_z)$ is a smooth map, where ${\varphi}_x({\mathcal P}_x (z) \cap V_z) \subset {\mathbb R}^n \times \{w\} \cong {\mathbb R}^n$ and ${\varphi}_y({\mathcal P}_y (z) \cap V_z) \subset {\mathbb R}^n \times \{w'\} \cong {\mathbb R}^n$. The maps $\psi_{x,y;z}$ are assumed to depend continuously on $z$ in the $C^{\infty}$-topology on maps between subsets of ${\mathbb R}^n$. \end{defn} A map $f \colon {\mathfrak{M}} \to {\mathbb R}$ is said to be \emph{smooth} if for each flow box ${\varphi}_x \colon {\overline{U}}_x \to [-1,1]^n \times {\mathfrak{X}}_x$ and $w \in {\mathfrak{X}}_x$ the composition $y \mapsto f \circ {\varphi}_x^{-1}(y, w)$ is a smooth function of $y \in (-1,1)^n$, and depends continuously on $w$ in the $C^{\infty}$-topology on maps of the plaque coordinates $y$. As noted in \cite{MS2006} and \cite[Chapter 11]{CandelConlon2000}, this allows one to define smooth partitions of unity, vector bundles, and tensors for smooth foliated spaces. In particular, one can define leafwise Riemannian metrics. We recall a standard result, whose proof for foliated spaces can be found in \cite[Theorem~11.4.3]{CandelConlon2000}. \begin{thm}\label{thm-riemannian} Let ${\mathfrak{M}}$ be a smooth foliated space. 
Then there exists a leafwise Riemannian metric for $\F$, such that for each $x \in {\mathfrak{M}}$, $L_x$ inherits the structure of a complete Riemannian manifold with bounded geometry, and the Riemannian geometry of $L_x$ depends continuously on $x$. In particular, each leaf $L_x$ has the structure of a complete Riemannian manifold with bounded geometry. \end{thm} Bounded geometry implies, for example, that for each $x \in {\mathfrak{M}}$, there is a leafwise exponential map $\exp^{\F}_x \colon T_x\F \to L_x$ which is a surjection, and the composition $\exp^{\F}_x \colon T_x\F \to L_x \subset {\mathfrak{M}}$ depends continuously on $x$ in the compact-open topology on maps. \begin{defn} \label{def-mm} A \emph{matchbox manifold} is a smooth foliated connected space ${\mathfrak{M}}$, such that its transverse model space ${\mathfrak{X}}$ is totally disconnected, and for each $x \in {\mathfrak{M}}$, the transverse model space ${\mathfrak{X}}_x \subset {\mathfrak{X}}$ in Definition~\ref{def-fs} is a clopen subset. \end{defn} All matchbox manifolds are assumed to be smooth with a given leafwise Riemannian metric. The space ${\mathfrak{M}}$ is assumed to be metrizable, and we fix a choice for the metric $\dM$ on ${\mathfrak{M}}$. The leafwise Riemannian metric $\dF$ is continuous with respect to the metric $\dM$ on ${\mathfrak{M}}$, but otherwise the two metrics can be chosen independently. The metric $\dM$ is used to define the metric topology on ${\mathfrak{M}}$, while the metric $\dF$ depends on an independent choice of the Riemannian metric on leaves. An important difference between a foliated matchbox manifold and a smooth foliated manifold is that the local foliation charts for a matchbox manifold are not connected, and so must be chosen appropriately to ensure that each chart is ``local''. We introduce the following conventions. 
For $x \in {\mathfrak{M}}$ and $\e > 0$, let $D_{{\mathfrak{M}}}(x, \e) = \{ y \in {\mathfrak{M}} \mid \dM(x, y) \leq \e\}$ be the closed $\e$-ball about $x$ in ${\mathfrak{M}}$, and $B_{{\mathfrak{M}}}(x, \e) = \{ y \in {\mathfrak{M}} \mid \dM(x, y) < \e\}$ the open $\e$-ball about $x$. Similarly, for $w \in {\mathfrak{X}}$ and $\e > 0$, let $D_{{\mathfrak{X}}}(w, \e) = \{ w' \in {\mathfrak{X}} \mid d_{{\mathfrak{X}}}(w, w') \leq \e\}$ be the closed $\e$-ball about $w$ in ${\mathfrak{X}}$, and $B_{{\mathfrak{X}}}(w, \e) = \{ w' \in {\mathfrak{X}} \mid d_{{\mathfrak{X}}}(w, w') < \e\}$ the open $\e$-ball about $w$. Given a leaf $L$ and a piecewise $C^1$-path $\gamma \colon [0,1] \to L$, let $\| \gamma \|_{\F}$ denote its path-length for the leafwise Riemannian metric. Then give $L \subset {\mathfrak{M}}$ the path-length metric: if $x, y \in L$ then set $$\dF(x,y) = \inf \left\{\| \gamma\|_{\F} \mid \gamma \colon [0,1] \to L ~{\rm is ~ piecewise ~~ C^1}~, ~ \gamma(0) = x ~, ~ \gamma(1) = y ~, ~ \gamma(t) \in L \quad \forall ~ 0 \leq t \leq 1\right\},$$ and otherwise, if $x,y \in {\mathfrak{M}}$ are not on the same leaf, then set $\dF(x,y) = \infty$. For each $x \in {\mathfrak{M}}$ and $r > 0$, let $D_{\F}(x, r) = \{y \in L_x \mid \dF(x,y) \leq r\}$. For each $x \in {\mathfrak{M}}$, the {Gauss Lemma} implies that there exists $\lambda_x > 0$ such that $D_{\F}(x, \lambda_x)$ is a \emph{strongly convex} subset for the metric $\dF$. That is, for any pair of points $y,y' \in D_{\dF}(x, \lambda_x)$ there is a unique shortest geodesic segment in $L_x$ joining $y$ and $y'$ and contained in $D_{\F}(x, \lambda_x)$ (cf. \cite[Chapter 3, Proposition 4.2]{doCarmo1992}, or \cite[Theorem 9.9]{Helgason1978}). Then for all $0 < \lambda < \lambda_x$ the disk $D_{\F}(x, \lambda)$ is also strongly convex. 
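The path-length metric $\dF$ admits a simple discrete approximation. In the following Python sketch (an illustration of ours, not part of the theory above), a leaf is approximated by a weighted graph of sample points, the leafwise distance is the shortest path length, and points in different components, playing the role of distinct leaves, are at distance $\infty$.

```python
# Hedged sketch: the leafwise path-length metric d_F, approximated on a
# finite graph of sample points (edges = short geodesic segments).
import heapq, math

def path_length_metric(vertices, edges, x, y):
    """Shortest path length from x to y; math.inf if in different components."""
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = {v: math.inf for v in vertices}
    dist[x] = 0.0
    heap = [(0.0, x)]
    while heap:                      # standard Dijkstra search
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist[y]

# Two "leaves": {a,b,c} in one component, {p,q} in another.
V = ["a", "b", "c", "p", "q"]
E = [("a", "b", 1.0), ("b", "c", 1.0), ("p", "q", 2.0)]
assert path_length_metric(V, E, "a", "c") == 2.0
assert path_length_metric(V, E, "a", "p") == math.inf   # different leaves
```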
The leafwise metrics have uniformly bounded geometry, so we obtain: \begin{lemma}\label{lem-stronglyconvex} There exists $\lambda_{\mathcal F} > 0$ such that for all $x \in {\mathfrak{M}}$, $D_{\F}(x, \lambda_{\mathcal F})$ is strongly convex. \end{lemma} The following proposition summarizes results in \cite[sections 2.1 - 2.2]{ClarkHurder2013}. \begin{prop}\label{prop-regular} For a smooth foliated space ${\mathfrak{M}}$, given ${\epsilon_{{\mathfrak{M}}}} > 0$, there exist constants $\lambda_{\mathcal F}>0$ and $0< \delta^{\F}_{\cU} < \lambda_{\mathcal F}/5$, and a covering of ${\mathfrak{M}}$ by foliation charts $\displaystyle \left\{{\varphi}_i \colon {\overline{U}}_i \to [-1,1]^n \times {\mathfrak{X}}_i \mid 1 \leq i \leq \nu \right\}$ with the following properties: For each $1 \leq i \leq \nu$, let $\pi_i = \pi_{x_i} \colon {\overline{U}}_i \to {\mathfrak{X}}_i$ be the projection, then \begin{enumerate} \item Interior: $U_i \equiv int({\overline{U}}_i) = {\varphi}_i^{-1}\left( (-1,1)^n \times B_{{\mathfrak{X}}}(w_i, \e_i)\right)$, where $w_i \in {\mathfrak{X}}_i$ and $\e_i>0$. \item Locality: for $x_i \equiv {\varphi}_i^{-1}(0, w_i) \in {\mathfrak{M}}$, ${\overline{U}}_i \subset B_{{\mathfrak{M}}}(x_i, {\epsilon_{{\mathfrak{M}}}})$. \end{enumerate} For $z \in {\overline{U}}_i$, the \emph{plaque} of the chart ${\varphi}_i$ through $z$ is denoted by ${\mathcal P}_i(z) = {\mathcal P}_i(\pi_i(z)) \subset {\overline{U}}_i$. 
\begin{enumerate}\setcounter{enumi}{2} \item Convexity: the plaques of ${\varphi}_i$ are strongly convex subsets for the leafwise metric. \item Uniformity: for $w \in {\mathfrak{X}}_i$ let $x_{w} = {\varphi}_{x_i}^{-1}(0 , w)$, then \begin{equation}\label{eq-Fdelta} D_{\F}(x_{w} , \delta^{\F}_{\cU}/2) ~ \subset ~ {\mathcal P}_i(w) ~ \subset ~ D_{\F}(x_{w} , \delta^{\F}_{\cU}) \end{equation} \item \label{item-clopen} The projection $\pi_i(U_i \cap U_j) = {\mathfrak{X}}_{i,j} \subset {\mathfrak{X}}_i$ is a clopen subset for all $1 \leq i, j \leq \nu$. \end{enumerate} A \emph{regular foliated covering} of ${\mathfrak{M}}$ is one that satisfies the above conditions (\ref{prop-regular}.1) to (\ref{prop-regular}.5). \end{prop} This technical result highlights one of the main issues with foliated spaces and matchbox manifolds: in contrast with smooth foliations of compact manifolds, one has to assume or prove for these more exotic foliated spaces many of the regularity properties that are used in the study of foliations. We assume in the following that a regular foliated covering of ${\mathfrak{M}}$ as in Proposition~\ref{prop-regular} has been chosen. Let ${\mathcal U} = \{U_{1}, \ldots , U_{\nu}\}$ denote the corresponding open covering of ${\mathfrak{M}}$. We can assume that the spaces ${\mathfrak{X}}_i$ form a \emph{disjoint clopen covering} of ${\mathfrak{X}}$, so that $\displaystyle {\mathfrak{X}} = {\mathfrak{X}}_1 \dot{\cup} \cdots \dot{\cup} {\mathfrak{X}}_{\nu}$. Let $\eU > 0$ be a Lebesgue number for ${\mathcal U}$. That is, given any $z \in {\mathfrak{M}}$ there exists some index $1 \leq i_z \leq \nu$ such that the open metric ball $B_{{\mathfrak{M}}}(z, \eU) \subset U_{i_z}$. 
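A Lebesgue number for a finite cover can likewise be approximated numerically. The following Python sketch (our own toy illustration; the halving search is just one simple strategy) computes, for a finite sample of a metric space and a finite cover, a radius $\e > 0$ such that every open $\e$-ball about a sample point lies in a single element of the cover.

```python
# Informal sketch (not from the text): estimating a Lebesgue number for a
# finite open cover, on a finite sample of a metric space.
def lebesgue_number(points, dist, cover):
    """Return some eps > 0 such that every open eps-ball about a sample
    point lies in one element of the cover (halving until it works)."""
    def ball(z, eps):
        return {p for p in points if dist(z, p) < eps}
    def works(eps):
        return all(any(ball(z, eps) <= U for U in cover) for z in points)
    eps = max(dist(p, q) for p in points for q in points)
    while eps > 1e-9 and not works(eps):
        eps /= 2.0
    return eps if works(eps) else 0.0

# Sample of [0,9] covered by two overlapping sets.
pts = list(range(10))
cover = [set(range(0, 6)), set(range(4, 10))]
eps = lebesgue_number(pts, lambda a, b: abs(a - b), cover)
assert eps > 0 and all(
    any({p for p in pts if abs(z - p) < eps} <= U for U in cover) for z in pts
)
```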
For $1 \leq i \leq \nu$, let $ \lambda_i \colon {\overline{U}}_i \to [-1,1]^n$ be the projection, so that for each $z \in U_i$ the restriction $\lambda_i \colon {\mathcal P}_i(z) \to [-1,1]^n$ is a smooth coordinate system on the plaque. For each $1 \leq i \leq \nu$ the set ${\mathcal T}_i = {\varphi}_i^{-1}(0 , {\mathfrak{X}}_i)$ is a compact transversal to $\F$. Without loss of generality, we can assume that the transversals $\displaystyle \{ {\mathcal T}_{1} , \ldots , {\mathcal T}_{\nu} \}$ are pairwise disjoint in ${\mathfrak{M}}$. Then define sections \begin{equation}\label{eq-taui} \tau_i \colon {\mathfrak{X}}_i \to {\overline{U}}_i ~ , ~ {\rm defined ~ by} ~ \tau_i(\xi) = {\varphi}_i^{-1}(0 , \xi) ~ , ~ {\rm so ~ that} ~ \pi_i(\tau_i(\xi)) = \xi. \end{equation} Then ${\mathcal T}_i = {\mathcal T}_{x_i}$ is the image of $\tau_i$ and we let ${\mathcal T} = {\mathcal T}_1 \cup \cdots \cup {\mathcal T}_{\nu} \subset {\mathfrak{M}}$ denote their disjoint union, and $\tau \colon {\mathfrak{X}} \to {\mathcal T}$ the union of the maps $\tau_i$. A map $f \colon {\mathfrak{M}} \to {\mathfrak{M}}'$ between foliated spaces is said to be a \emph{foliated map} if the image of each leaf of $\F$ is contained in a leaf of $\F'$. If ${\mathfrak{M}}'$ is a matchbox manifold, then each leaf of $\F$ is path connected, so its image is path connected, hence must be contained in a leaf of $\F'$. Thus, \begin{lemma} \label{lem-foliated1} Let ${\mathfrak{M}}$ and ${\mathfrak{M}}'$ be matchbox manifolds, and $f \colon {\mathfrak{M}}' \to {\mathfrak{M}}$ a continuous map. Then $f$ maps the leaves of $\F'$ to leaves of $\F$. In particular, any homeomorphism $f \colon {\mathfrak{M}}' \to {\mathfrak{M}}$ of matchbox manifolds is a foliated map. \hfill $\Box$ \end{lemma} A \emph{leafwise path} is a continuous map $\gamma \colon [0,1] \to {\mathfrak{M}}$ such that there is a leaf $L$ of $\F$ for which $\gamma(t) \in L$ for all $0 \leq t \leq 1$. 
If ${\mathfrak{M}}$ is a matchbox manifold, and $\gamma \colon [0,1] \to {\mathfrak{M}}$ is continuous, then $\gamma$ is a leafwise path by Lemma~\ref{lem-foliated1}. In the following, we will assume that all paths are piecewise differentiable. The holonomy pseudogroup of a smooth foliated manifold $(M, \F)$ generalizes the concept of a Poincar\'{e} section for a flow, which induces a discrete dynamical system associated to the flow. Associated to a leafwise path $\gamma$ is a holonomy map $h_{\gamma}$, which is a local homeomorphism on the transversal space. For a matchbox manifold $({\mathfrak{M}}, \F)$ the holonomy along a leafwise path is defined analogously. We briefly recall below the ideas and notations of the construction of holonomy maps for matchbox manifolds; further details and proofs are given in \cite{ClarkHurder2013,CHL2013a}. A pair of indices $(i,j)$, $1 \leq i,j \leq \nu$, is said to be \emph{admissible} if $U_i \cap U_j \ne \emptyset$. For $(i,j)$ admissible, set ${\mathfrak{X}}_{i,j} = \pi_i(U_i \cap U_j) \subset {\mathfrak{X}}_i$. The regularity of foliation charts implies that plaques are either disjoint, or have connected intersection. For $(i,j)$ admissible, there is a well-defined transverse change of coordinates homeomorphism $h_{i,j} \colon {\mathfrak{X}}_{i,j} \to {\mathfrak{X}}_{j,i}$ with domain $\Dom(h_{i,j}) = {\mathfrak{X}}_{i,j}$ and range $R(h_{i,j}) = \Dom(h_{j,i}) = {\mathfrak{X}}_{j,i}$. By definition they satisfy $h_{i,i} = Id$, $h_{i,j}^{-1} = h_{j,i}$, and if $U_i \cap U_j\cap U_k \ne \emptyset$ then $h_{k,j} \circ h_{j,i} = h_{k,i}$ on their common domain of definition. Note that the domain and range of $h_{i,j}$ are clopen subsets of ${\mathfrak{X}}$ by Proposition~\ref{prop-regular}.\ref{item-clopen}. 
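The algebraic conditions on the maps $h_{i,j}$ can be made concrete in a toy model. In the Python sketch below (an illustration of ours; the transversal is approximated by four binary addresses), the change-of-coordinates maps are partial bijections between clopen sets, stored as dictionaries, and composition is taken on the maximal common domain.

```python
# Toy transverse change-of-coordinates maps h_{i,j}, modeled as partial
# bijections (dicts) between clopen subsets of a finite transversal.
X = {"00", "01", "10", "11"}

# h_{1,2} defined on the clopen set {w : w starts with 0}, flipping last bit.
h12 = {"00": "01", "01": "00"}
h21 = {v: k for k, v in h12.items()}     # h_{2,1} = h_{1,2}^{-1}

def compose(g, f):
    """g∘f on the maximal domain {w in Dom(f) : f(w) in Dom(g)}."""
    return {w: g[f[w]] for w in f if f[w] in g}

assert compose(h21, h12) == {w: w for w in h12}   # h_{2,1}∘h_{1,2} = Id on Dom
assert set(h12) <= X and set(h12.values()) <= X   # clopen domain and range
```

The identity $h_{i,j}^{-1} = h_{j,i}$ and the cocycle relation then become immediate dictionary computations.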
Recall that for $1 \leq i \leq \nu$, $\tau_i \colon {\mathfrak{X}}_i \to {\mathcal T}_i$ denotes the transverse section for the coordinate chart $U_i$, where ${\mathcal T} = {\mathcal T}_1 \cup \cdots \cup {\mathcal T}_{\nu} \subset {\mathfrak{M}}$ denotes their disjoint union, and $\pi \colon {\mathcal T} \to {\mathfrak{X}}$ is the coordinate projection restricted to ${\mathcal T}$ which is a homeomorphism, with $\tau \colon {\mathfrak{X}} \to {\mathcal T}$ its inverse. The \emph{holonomy pseudogroup} $\cGF$ of $\F$ is the topological pseudogroup modeled on ${\mathfrak{X}}$ generated by the elements of $\cGF^{(1)} = \{h_{j,i} \mid (i,j) ~{\rm admissible}\}$. We also define a subpseudogroup $\displaystyle \cGF^* \subset \cGF$ which is based on the holonomy along paths. A sequence ${\mathcal I} = (i_0, i_1, \ldots , i_{\alpha})$ is \emph{admissible} if each pair $(i_{\ell -1}, i_{\ell})$ is admissible for $1 \leq \ell \leq \alpha$, and the composition $\displaystyle h_{{\mathcal I}} = h_{i_{\alpha}, i_{\alpha-1}} \circ \cdots \circ h_{i_1, i_0}$ has non-empty domain $\Dom(h_{{\mathcal I}})$, which is defined to be the maximal clopen subset of ${\mathfrak{X}}_{i_0}$ for which the compositions are defined. Given an open subset $U \subset \Dom(h_{{\mathcal I}})$ define the restriction $h_{{\mathcal I}} | U \in \cGF$. Introduce \begin{equation}\label{eq-restrictedpseudogroup} \cGF^* = \left\{ h_{{\mathcal I}} | U \mid {\mathcal I} ~ {\rm admissible~ and} ~ U \subset \Dom(h_{{\mathcal I}}) \right\} \subset \cGF ~ . \end{equation} The range of $g = h_{{\mathcal I}} | U$ is the open set $R(g) = h_{{\mathcal I}}(U) \subset {\mathfrak{X}}_{i_{\alpha}} \subset {\mathfrak{X}}$. Note that each map $g \in \cGF^*$ admits a continuous extension $\overline{g} \colon \overline{\Dom(g)} = \overline{U} \to {\mathfrak{X}}_{i_{\alpha}}$ as $\Dom( h_{{\mathcal I}})$ is a clopen set for each ${\mathcal I}$. Let ${\mathcal I} = (i_0, i_1, \ldots , i_{\alpha})$ be an admissible sequence. 
For each $1 \leq \ell \leq \alpha$, set ${\mathcal I}_{\ell} = (i_0, i_1, \ldots, i_{\ell})$, and let $h_{{\mathcal I}_{\ell}}$ denote the corresponding holonomy map. For $\ell = 0$, let ${\mathcal I}_0 = (i_0 , i_0)$. Note that $h_{{\mathcal I}_{\alpha}} = h_{{\mathcal I}}$ and $h_{{\mathcal I}_{0}} = Id \colon {\mathfrak{X}}_0 \to {\mathfrak{X}}_0$. Given $w \in \Dom(h_{{\mathcal I}})$, let $x = \tau_{i_0}(w) \in L_{w}$. For each $0 \leq \ell \leq \alpha$, set $w_{\ell} = h_{{\mathcal I}_{\ell}}(w)$ and $x_{\ell}= \tau_{i_{\ell}}(w_{\ell})$. Recall that ${\mathcal P}_{i_{\ell}}(x_{\ell}) = {\mathcal P}_{i_{\ell}}(w_{\ell})$, where each ${\mathcal P}_{i_{\ell}}(w_{\ell})$ is a strongly convex subset of the leaf $L_w$ in the leafwise metric $d_{\F}$. Introduce the \emph{plaque chain} \begin{equation}\label{eq-plaquechain} {\mathcal P}_{{\mathcal I}}(w) = \{{\mathcal P}_{i_0}(w_0), {\mathcal P}_{i_1}(w_1), \ldots , {\mathcal P}_{i_{\alpha}}(w_{\alpha}) \} ~ . \end{equation} Adopt the notation ${\mathcal P}_{{\mathcal I}}(x) \equiv {\mathcal P}_{{\mathcal I}}(w)$. Intuitively, a plaque chain ${\mathcal P}_{{\mathcal I}}(x)$ is a sequence of successively overlapping convex ``tiles'' in $L_{w}$ starting at $x = \tau_{i_0}(w)$, ending at $y = x_{\alpha} = \tau_{i_{\alpha}}(w_{\alpha})$, and with each ${\mathcal P}_{i_{\ell}}(x_{\ell})$ ``centered'' on the point $x_{\ell} = \tau_{i_{\ell}}(w_{\ell})$. Let $\gamma \colon [0,1] \to {\mathfrak{M}}$ be a path. Set $x_0 = \gamma(0) \in U_{i_0}$, $w = \pi(x_0)$ and $x = \tau(w) \in {\mathcal T}_{i_0}$. Let ${\mathcal I}$ be an admissible sequence with $w \in \Dom(h_{{\mathcal I}})$. 
We say that $({\mathcal I} , w)$ \emph{covers} $\gamma$ if the domain of $\gamma$ admits a partition $0 = s_0 < s_1 < \cdots < s_{\alpha} = 1$ such that ${\mathcal P}_{{\mathcal I}}(w)$ satisfies \begin{equation}\label{eq-cover} \gamma([s_{\ell} , s_{\ell + 1}]) \subset {\mathcal P}_{i_{\ell}}(\xi_{\ell}) ~ , ~ 0 \leq \ell < \alpha, ~ {\rm and} ~ \gamma(1) \in {\mathcal P}_{i_{\alpha}}(\xi_{\alpha}) . \end{equation} For a path $\gamma$, we construct an admissible sequence ${\mathcal I} = (i_0, i_1, \ldots, i_{\alpha})$ with $w \in \Dom(h_{{\mathcal I}})$ so that $({\mathcal I} , w)$ covers $\gamma$, and has ``uniform domains''. Inductively choose a partition of the interval $[0,1]$, say $0 = s_0 < s_1 < \cdots < s_{\alpha} = 1$, such that for each $0 \leq \ell \leq \alpha$, $$\gamma([s_{\ell}, s_{\ell + 1}]) \subset D_{\F}(x_{\ell}, \epsilon^{\F}_{\cU}) \quad , \quad x_{\ell} = \gamma(s_{\ell}).$$ As a notational convenience, we have let $s_{\alpha+1} = s_{\alpha}$, so that $\gamma([s_{\alpha}, s_{\alpha + 1}]) = x_{\alpha}$. Choose $s_{\ell + 1}$ to be the largest value of $s_{\ell} < s \leq 1$ such that $\dF(\gamma(s_{\ell}), \gamma(t)) \leq \epsilon^{\F}_{\cU}$ for all $s_{\ell} \leq t \leq s$, then $\alpha \leq \| \gamma \|/\epsilon^{\F}_{\cU}$. For each $0 \leq \ell \leq \alpha$, choose an index $1 \leq i_{\ell} \leq \nu$ so that $ B_{{\mathfrak{M}}}(x_{\ell}, \eU) \subset U_{i_{\ell}}$. Note that, for all $s_{\ell} \leq t \leq s_{\ell +1}$, $B_{{\mathfrak{M}}}(\gamma(t), \eU/2) \subset U_{i_{\ell}}$, so that $x_{\ell+1} \in U_{i_{\ell}} \cap U_{i_{\ell +1}}$. It follows that ${\mathcal I}_{\gamma} = (i_0, i_1, \ldots, i_{\alpha})$ is an admissible sequence. Set $h_{\gamma} = h_{{\mathcal I}_{\gamma}}$ and note that $h_{\gamma}(w) = w'$. 
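The holonomy $h_{\gamma} = h_{{\mathcal I}_{\gamma}}$ of a covered path is a composition of the generating maps along the admissible sequence, and its domain is the maximal clopen set on which all partial compositions are defined. The short Python sketch below (a toy model of ours, continuing the dictionary representation of partial maps) shows how the maximal domain shrinks under composition.

```python
# Sketch: the holonomy h_I of an admissible sequence I = (i_0,...,i_alpha)
# as the composition of generating partial maps on the maximal domain.
from functools import reduce

def compose(g, f):
    """g∘f on the maximal domain {w in Dom(f) : f(w) in Dom(g)}."""
    return {w: g[f[w]] for w in f if f[w] in g}

def holonomy(chain):
    """h_I = h_{i_a,i_{a-1}} ∘ ... ∘ h_{i_1,i_0}; chain lists maps in order."""
    return reduce(lambda acc, h: compose(h, acc), chain[1:], chain[0])

h10 = {"a": "b", "c": "d"}
h21 = {"b": "c"}            # defined on a smaller clopen set
hI = holonomy([h10, h21])
assert hI == {"a": "c"}     # the maximal domain shrinks to {a}
```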
Next, consider paths $\gamma, \gamma' \colon [0,1] \to {\mathfrak{M}}$ with $x = \gamma(0) = \gamma'(0)$ and $y = \gamma(1) = \gamma'(1)$. Suppose that $\gamma$ and $\gamma'$ are homotopic relative endpoints. That is, assume there exists a continuous map $H \colon [0,1] \times [0,1] \to {\mathfrak{M}}$ with $$H(0,t) = \gamma(t) ~, ~ H(1,t) = \gamma'(t) ~ , ~ H(s,0) = x ~ {\rm and} ~ H(s,1) = y \quad {\rm for ~ all} ~ 0 \leq s \leq 1$$ Then there exist partitions $0 = s_0 < s_1 < \cdots < s_{\beta} = 1$ and $0 = t_0 < t_1 < \cdots < t_{\alpha} = 1$ such that for each pair of indices $0 \leq j < \beta$ and $0 \leq k < \alpha$, there is an index $1 \leq i(j,k)\leq \nu$ such that $$H([s_j,s_{j+1}] \times [t_k, t_{k+1}] ) \subset D_{\F}(H(s_j, t_k), \epsilon^{\F}_{\cU}) \subset U_{i(j,k)}$$ A standard argument then yields the following basic fact about holonomy maps. \begin{lemma}\label{lem-homotopy} Let $\gamma, \gamma' \colon [0,1] \to {\mathfrak{M}}$ be paths with $x = \gamma(0) = \gamma'(0)$ and $y = \gamma(1) = \gamma'(1)$, and suppose they are homotopic relative endpoints. Then the induced holonomy maps $h_{\gamma}$ and $h_{\gamma'}$ agree on an open neighborhood of $\xi_0 = \pi_{i_0}(x)$. \end{lemma} Next consider the \emph{groupoid} formed by germs of maps in $\cGF$. Let $U, U', V, V' \subset {\mathfrak{X}}$ be open subsets with $w \in U \cap U'$. Given homeomorphisms $h \colon U \to V$ and $h' \colon U' \to V'$ with $h(w) = h'(w)$, then $h$ and $h'$ have the same \emph{germ at $w$}, and write $h \sim_w h'$, if there exists an open neighborhood $w \in W \subset U \cap U'$ such that $h | W= h' |W$. Note that $\sim_w$ defines an equivalence relation. \begin{defn}\label{def-germ} The \emph{germ of $h$ at $w$} is the equivalence class $[h]_w$ under the relation ~$\sim_w$. The map $h \colon U \to V$ is called a \emph{representative} of $[h]_w$. 
The point $w$ is called the source of $[h]_w$ and denoted $s([h]_w)$, while $w' = h(w)$ is called the range of $[h]_w$ and denoted $r([h]_w)$. \end{defn} The collection of all such germs $[h]_w$ for $h \in \cGF$ and $w \in \Dom(h)$, forms the \emph{holonomy groupoid} $\GF$, which has the natural topology associated to sheaves of maps over ${\mathfrak{X}}$. Let $\cRF \subset {\mathfrak{X}} \times {\mathfrak{X}}$ denote the equivalence relation on ${\mathfrak{X}}$ induced by $\F$, where $(w,w') \in \cRF$ if and only if $w,w'$ correspond to points on the same leaf of $\F$. The product map $s \times r \colon \GF \to \cRF$ is \'etale; that is, the map is a local homeomorphism with discrete fibers. These notions were introduced by Haefliger for foliations \cite{Haefliger1958,Haefliger1984}, and naturally extend to the case of matchbox manifolds. We introduce a convenient notation for elements of $\GF$. Let $(w,w') \in \cRF$, and let $\gamma$ denote a path from $x = \tau(w)$ to $y = \tau(w')$. We may assume that $\gamma$ is a geodesic for the leafwise metric, and let $[h_{\gamma}]_w$ (or sometimes just $\gamma_w$) denote the germ at $w$ of the holonomy map defined by $\gamma$. It follows that there is a well-defined surjective homomorphism, the \emph{holonomy map}, \begin{equation}\label{eq-holodef} h_{\F,x} \colon \pi_1(L_x , x) \to \Gamma_w^w \equiv \left\{ [g]_w \in \GF \mid r([g]_w) =w \right\} \end{equation} Moreover, if $y,z \in L$ then the homomorphism $h_{\F , y}$ is conjugate (by an element of $\cGF$) to the homomorphism $h_{\F , z}$. A leaf $L$ is said to have \emph{non-trivial germinal holonomy} if for some $y \in L$, the homomorphism $h_{\F , y}$ is non-trivial. If the homomorphism $h_{\F , y}$ is trivial, then we say that $L_y$ is a \emph{leaf without holonomy}. This property depends only on $L$, and not the choice of $y \in L$. \begin{lemma}\label{lem-homotopymin} Given a path $\gamma \colon [0,1] \to {\mathfrak{M}}$ with $x = \gamma(0)$ and $y = \gamma(1)$. 
Suppose that $L_x$ is a leaf without holonomy. Then there exists a leafwise geodesic segment $\gamma' \colon [0,1] \to {\mathfrak{M}}$ with $x = \gamma'(0)$ and $y = \gamma'(1)$, such that $\|\gamma' \| = \dF(x,y)$, and $h_{\gamma}$ and $h_{\gamma'}$ agree on an open neighborhood of $\xi_0$. \end{lemma} \proof The leaf $L_x$ containing $x$ is a complete Riemannian manifold, so there exists a geodesic segment $\gamma'$ which is length minimizing between $x$ and $y$. Then the holonomy maps $h_{\gamma}$ and $h_{\gamma'}$ agree on an open neighborhood of $\xi_0 = \pi_{i_0}(x)$ by the definition of germinal holonomy. \endproof Next, we introduce the filtrations of $\cGF^*$ by word length, and of $\GF$ by path length, then derive estimates comparing these notions of length. For $\alpha \geq 1$, let $\cGF^{(\alpha)}$ be the collection of holonomy homeomorphisms $h_{{\mathcal I}} | U \in \cGF^*$ determined by admissible paths ${\mathcal I} = (i_0,\ldots,i_k)$ such that $k \leq \alpha$ and $U \subset \Dom(h_{{\mathcal I}})$ is open. For each $\alpha$, let $C(\alpha)$ denote the number of admissible sequences of length at most $\alpha$. As there are at most $\nu^2$ admissible pairs $(i,j)$, we have the basic estimate that $C(\alpha) \leq \nu^{2 \alpha}$. This upper bound estimate grows exponentially with $\alpha$, though the exact growth rate of $C(\alpha)$ may be much less. For each $g \in \cGF^*$ there is some $\alpha$ such that $g \in \cGF^{(\alpha)}$. Let $\|g\|$ denote the least such $\alpha$, which is called the \emph{word length} of $g$. Note that $\cGF^{(1)}$ generates $\cGF^*$. We use the word length on $\cGF^*$ to define the word length on $\GF$, where for $\gamma_w \in \GF$, set \begin{equation} \| \gamma_w \| ~ = ~ \min ~ \left\{ \| g \| \mid [g]_w = \gamma_w ~ {\rm for}~ g \in \cGF^* \right\} . 
\end{equation} Introduce the \emph{path length} of $\gamma_w \in \GF$, by considering the infimum of the lengths $\| \gamma'\|$ for all piecewise smooth curves $\gamma'$ for which $\gamma_w' = \gamma_w$. That is, \begin{equation}\label{eq-groupoidpathlength} \ell(\gamma_w) ~ = ~ \inf ~ \left\{ \| \gamma' \| \mid \gamma'_w = \gamma_w \right\} . \end{equation} Note that if $L_w$ is a leaf without holonomy, set $x = \tau(w)$ and $y = \tau(w')$, then Lemma~\ref{lem-homotopymin} implies that $\ell(\gamma_w) = \dF(x,y)$. This yields a fundamental estimate, whose proof can be found in \cite{CHL2013b}: \begin{lemma}\label{lem-comparisons} Let $[g]_w \in \GF$ where $w$ corresponds to a leaf without holonomy. Then \begin{equation}\label{eq-comparisons} \dF(x,y)/2\delta^{\F}_{\cU} ~ \leq ~ \| [g]_w \| ~ \leq ~ 1 + \dF(x,y)/\epsilon^{\F}_{\cU} \end{equation} \end{lemma} \section{Pseudogroup Dynamics} \label{sec-dynamics} In this section, we consider some aspects of the topological dynamics of pseudogroups, which are useful for obtaining dynamical invariants for the pseudogroup $\cGX$ associated to a matchbox manifold. The sources \cite{CandelConlon2000,Hurder2014,Walczak2004} give more detailed discussions. The study of the dynamics of a pseudogroup $\cGX$ acting on ${\mathfrak{X}}$ is a generalization of the study of continuous actions of finitely-generated groups on Cantor sets, though it differs in some fundamental ways. For a group action, each $\gamma \in \Gamma$ defines a homeomorphism $h_{\gamma} \colon {\mathfrak{X}} \to {\mathfrak{X}}$. For a pseudogroup action, given $g \in \cGX$ and $w \in \Dom(g)$, there is some clopen neighborhood $w \in U \subset \Dom(g)$ for which $g | U = h_{{\mathcal I}} | U$ where ${\mathcal I}$ is an admissible sequence with $w \in \Dom(h_{{\mathcal I}})$. 
By the definition of a pseudogroup, every $g \in \cGX$ is the ``union'' of such maps, and the dynamical properties of the action may reflect the fact that the domains of the actions are not all of ${\mathfrak{X}}$. We first recall some basic definitions. \begin{defn}\label{def-lipequiv} A pseudogroup $\cGX$ acting on a Cantor set ${\mathfrak{X}}$ is \emph{compactly generated}, if there exist two collections of \emph{clopen} subsets, $\{U_1, \ldots, U_k\}$ and $\{V_1, \ldots, V_k\}$ of ${\mathfrak{X}}$, and homeomorphisms $\{h_i \colon U_i \to V_i \mid 1 \leq i \leq k\}$ which generate all elements of $\cGX$. The collection of maps $\cGX^*$ is defined to be all compositions of the generators on the maximal domains for which the composition is defined. \end{defn} Let $\dX$ be a metric on ${\mathfrak{X}}$ which defines the topology on the space. \begin{defn} \label{def-expansive} The action of a compactly generated pseudogroup $\cGX$ on ${\mathfrak{X}}$ is \emph{expansive}, or more properly \emph{$\e$-expansive}, if there exists $\e > 0$ such that for all $w, w' \in {\mathfrak{X}}$, there exists $g \in \cGX^*$ with $w, w' \in D(g)$ such that $\dX(g(w), g(w')) \geq \e$. \end{defn} \smallskip \begin{defn} \label{def-equicontinuous} The action of a compactly generated pseudogroup $\cGX$ on ${\mathfrak{X}}$ is \emph{equicontinuous} if for all $\e > 0$, there exists $\delta > 0$ such that for all $g \in \cGX^*$, if $w, w' \in D(g)$ and $\dX(w,w') < \delta$, then $\dX(g(w), g(w')) < \e$. Thus, $\cGX^*$ is equicontinuous as a family of local group actions. \end{defn} The \emph{geometric entropy} for pseudogroup actions, introduced by Ghys, Langevin and Walczak \cite{GLW1988}, gives a measure of the ``exponential complexity'' of the orbits of the action. See also the discussion of entropy for pseudogroup actions in Candel and Conlon \cite[\S 13.2B]{CandelConlon2000}, and in Walczak \cite{Walczak2004}. 
The key idea is the notion of $\e$-separated sets, due to Bowen \cite{Bowen1971}. Let $\e > 0$ and $\ell > 0$. A subset ${\mathcal E} \subset {\mathfrak{X}}$ is said to be $(\dX, \e, \ell)$-separated if for all $w,w' \in {\mathcal E} \cap {\mathfrak{X}}_i$ with $w \ne w'$, there exists $g \in \cGX^*$ with $w,w' \in \mathrm{Dom}(g) \subset {\mathfrak{X}}_i$, and $\|g\|_w \leq \ell $ so that $\dX(g(w), g(w')) \geq \e$. If $w \in {\mathfrak{X}}_i$ and $w' \in {\mathfrak{X}}_j$ for $i \ne j$ then they are $(\dX, \e, \ell)$-separated by default. The ``expansion growth function'' counts the maximal cardinality of such sets: $$h(\cGX, \dX, \e, \ell) = \max \{ \# {\mathcal E} \mid {\mathcal E} \subset {\mathfrak{X}} ~ \text{is} ~ (\dX, \e,\ell) \text{-separated} \} ~ . $$ The entropy is then defined to be the exponential growth type of the expansion growth function: $$ h(\cGX, \dX, \e) = \limsup_{\ell \to \infty} ~ \ln \left\{ h(\cGX, \dX, \e, \ell) \right\}/ \ell \quad , \quad h(\cGX,\dX) = \lim_{\e \to 0} ~ h(\cGX, \dX, \e) ~ . $$ Note that the quantity $h(\cGX, \dX) \geq 0$, and it may take the value $h(\cGX, \dX) = \infty$. We recall two key properties of pseudogroup entropy. The first property follows directly from the definition of entropy. \begin{prop}[Proposition~2.6, \cite{GLW1988}]\label{prop-metricentropy1} Let $\cGX$ be a compactly generated pseudogroup, acting on the compact space ${\mathfrak{X}}$ with the metric $\dX$. Then the geometric entropy $h(\cGX,\dX)$ is independent of the choice of metric $\dX$. \end{prop} The second property is an exercise using standard properties of the pseudogroup length function. \begin{prop}[Exercise~13.2.21, \cite{CandelConlon2000}]\label{prop-metricentropy2} Let $\cGX$ be a compactly generated pseudogroup, acting on a compact space ${\mathfrak{X}}$ with the metric $\dX$. Then the property that $h(\cGX,\dX)$ is either zero, finite, or infinite, is independent of the choice of generating set for $\cGX$.
\end{prop} \section{Lipschitz foliations and geometry} \label{sec-Lipschitz} In this section, we define the Lipschitz property for matchbox manifolds ${\mathfrak{M}}$. The basic result is that if ${\mathfrak{M}}$ is homeomorphic to an exceptional minimal set in a $C^1$-foliation, then its transversal space ${\mathfrak{X}}$ has a metric for which the induced pseudogroup ${\mathcal G}_{\mathfrak{X}}$ is Lipschitz. It is a standard fact that there is a \emph{unique} Cantor set, up to \emph{homeomorphism}. That is, any two compact, perfect, totally disconnected and non-empty sets are homeomorphic. See \cite[Chapter~12]{Moise1977} for a proof and discussion of this result. In particular, for a given Cantor set ${\mathfrak{X}}$, any non-empty clopen subset $U \subset {\mathfrak{X}}$ is homeomorphic to ${\mathfrak{X}}$. Two metrics $\dX$ and $\dXp$ are \emph{Lipschitz equivalent} if for some $C \geq 1$, they satisfy the condition: \begin{equation} C^{-1} \cdot \dX(x,y) ~ \leq ~ \dXp(x,y) ~ \leq ~ C \cdot \dX(x,y) \quad {\rm for ~ all} ~ x,y \in {\mathfrak{X}} \end{equation} On the other hand, there are many possible metrics on ${\mathfrak{X}}$ which are compatible with its topology but need not be Lipschitz equivalent to each other. The study of the \emph{Lipschitz geometry} of the pair $({\mathfrak{X}}, \dX)$ investigates the geometric properties common to all metrics in the Lipschitz class of the given metric $\dX$. Problem~\ref{problem8} can be rephrased as asking for characterizations of the transverse Lipschitz geometry of exceptional minimal sets. We next consider the Lipschitz property of matchbox manifolds. The choice of a regular foliated covering $\displaystyle \left\{{\varphi}_i \colon {\overline{U}}_i \to [-1,1]^n \times {\mathfrak{X}}_i \mid 1 \leq i \leq \nu \right\}$ for the matchbox manifold ${\mathfrak{M}}$, as in Proposition~\ref{prop-regular}, yields the pseudogroup $\cGX$ which acts via homeomorphisms on the transversal space ${\mathfrak{X}}$ to $\F$.
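To illustrate that metrics defining the same topology need not be Lipschitz equivalent, consider the following standard example, stated here for the sequence space model of the Cantor set. Let ${\mathfrak{X}} = \{0,1\}^{\mathbb N}$, and for $w \ne w'$ let $n(w,w')$ denote the least index at which the sequences differ. The metrics \begin{equation*} \dX(w,w') ~ = ~ 2^{-n(w,w')} \quad {\rm and} \quad \dXp(w,w') ~ = ~ 3^{-n(w,w')} \end{equation*} both induce the product topology on ${\mathfrak{X}}$, but the ratio $\dXp(w,w')/\dX(w,w') = (2/3)^{n(w,w')}$ tends to $0$, so no constant $C \geq 1$ as above exists. Note that $\dXp = (\dX)^{\beta}$ for $\beta = \ln 3/\ln 2$, so the two metrics are H\"older equivalent, a strictly weaker relation than Lipschitz equivalence.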
\begin{defn} \label{def-Lipschitz} The action of a compactly generated pseudogroup $\cGX$ is \emph{C-Lipschitz} with respect to $\dX$, if there exists a generating set $\{h_i \colon U_i \to V_i \mid 1 \leq i \leq k\}$ as in Definition~\ref{def-lipequiv}, and $C \geq 1$, such that for each $1 \leq i \leq k$ and for all $w, w' \in U_i$ we have \begin{equation}\label{eq-Lipschitz} C^{-1} \cdot \dX(w,w') \leq \dX(h_i(w), h_i(w')) \leq C \cdot \dX(w,w') ~ . \end{equation} \end{defn} The condition \eqref{eq-Lipschitz} is equivalent to saying that $\cGX$ is generated by \emph{bi-Lipschitz homeomorphisms}, though we use the term Lipschitz for the action of the pseudogroup $\cGX$. Recall that $\tau \colon {\mathfrak{X}} \to {\mathcal T} \subset {\mathfrak{M}}$ is the transversal to $\F$ associated to a regular covering of ${\mathfrak{M}}$. Let $\dX$ be the metric induced on ${\mathfrak{X}}$ by the restriction of $\dM$ on ${\mathfrak{M}}$ to the image of $\tau \colon {\mathfrak{X}} \to {\mathcal T}$. The claim of the following result is intuitively clear, but its proof requires care with the subtleties of working with foliation charts that have totally disconnected transversals. \begin{prop}\label{prop-lipembed} Let ${\mathfrak{M}}$ be a minimal matchbox manifold, and $M$ a smooth Riemannian manifold with a $C^1$-foliation $\F_M$, and ${\mathcal Z} \subset M$ an exceptional minimal set for $\F_M$. If there exists a homeomorphism $f \colon {\mathfrak{M}} \to {\mathcal Z}$, then there exists a metric $\dX$ on ${\mathfrak{X}}$ such that the action of the holonomy pseudogroup $\cGX$ on ${\mathfrak{X}}$ is Lipschitz. \end{prop} \proof The map $f$ maps leaves of $\F$ to leaves of $\F_M$ by Lemma~\ref{lem-foliated1}, and as $f$ is a homeomorphism onto its image, this implies the restriction of $f$ to a leaf $L$ of $\F$ is a homeomorphism onto a leaf ${\mathcal L}$ of $\F_M$, in the restricted topology on ${\mathcal Z}$.
Choose a good covering $\displaystyle \{ \phi_{\alpha} \colon V_{\alpha} \to (-1,1)^n \times (-1,1)^q \mid 1 \leq {\alpha} \leq k\}$ for the foliation $\F$ of $M$, as in \cite{CandelConlon2000}, where $n$ is the leaf dimension of $\F$, and $q$ is the codimension of $\F$ in $M$. Set $T_{\alpha} = \phi_{\alpha}^{-1}(\{0\} \times (-1,1)^q)$, then the union $\displaystyle T = T_1 \cup \cdots \cup T_k$ is a complete transversal for $\F$. We can assume without loss of generality that the closures of the transversals are disjoint. The Riemannian metric on $TM$ restricts to a Riemannian metric on each $T_{\alpha}$ and thus defines a path-length metric denoted by $d_{T_{\alpha}}$ on each submanifold $T_{\alpha} \subset M$. Extend the metrics on each $T_{\alpha}$ to a metric $d_T$ on $T$, by declaring $d_T(u,v) = 1$ if $u \in T_{\alpha}$ and $v \in T_{\beta}$ for $\alpha \ne \beta$. Recall that for $(\alpha,\beta)$ admissible, the overlap of plaques in the charts $V_{\alpha}$ and $V_{\beta}$ defines the holonomy map $g_{\alpha, \beta}$. The assumption that $\F$ is a $C^1$-foliation implies that $g_{\alpha, \beta}$ is a $C^1$-map from an open subset of $T_{\alpha}$ to an open set of $T_{\beta}$. For each $u \in \Dom(g_{\alpha, \beta})$, let $D_u(g_{\alpha, \beta})$ denote the matrix of differentials for $g_{\alpha, \beta}$ at $u \in \Dom(g_{\alpha, \beta})$, with respect to the framing of the tangent spaces to the sections $T_{\alpha}$ induced by the coordinate charts. Let $\| D_u (g_{\alpha, \beta}) \|$ denote the matrix sup-norm of $D_u (g_{\alpha, \beta}) $ with respect to the Riemannian metric induced on the sections. The assumption that we have a good covering implies that the maps $g_{\alpha, \beta}$ admit continuous $C^1$-extensions, so the norms $\| D_u (g_{\alpha, \beta}) \|$ have uniform upper bounds for all admissible pairs $(\alpha, \beta)$ and all $u \in \Dom(g_{\alpha, \beta})$. 
Define: \begin{equation} C_{\F}' = \max \left\{ \| D_u (g_{\alpha, \beta}) \| \mid (\alpha, \beta)~ {\rm admissible} ~, ~ u \in \Dom(g_{\alpha, \beta}) \right\} ~ < ~ \infty \end{equation} It follows that the pseudogroup for $\F$ defined by the maps $\{g_{\alpha, \beta} \mid (\alpha, \beta)~ {\rm admissible} \}$ is $C_{\F}'$-Lipschitz. Recall that ${\mathcal T}_i \subset {\mathfrak{M}}$, for $1 \leq i \leq \nu$, are the Cantor transversals to ${\mathfrak{M}}$ defined by a good covering for ${\mathfrak{M}}$, as in Definition~\ref{def-fs}. For each $x \in {\mathcal T}_i$ there exists $1 \leq \alpha \leq k$ with $f(x) \in V_{\alpha}$, and thus a clopen neighborhood $W(i,x,\alpha) \subset {\mathcal T}_i$ for which $f(W(i,x,\alpha)) \subset V_{\alpha}$. If $W(i,x,\alpha)$ is sufficiently small, then the plaque projection of the image, $\pi_{\alpha} \colon f(W(i,x,\alpha)) \to T_{\alpha}$, is a homeomorphism onto its image, and so the metric $d_{T_{\alpha}}$ on $T_{\alpha}$ induces a metric on $W(i,x,\alpha)$. As each ${\mathcal T}_i$ is compact, we can choose a finite covering $\{ W_{k} \}$ of the union ${\mathcal T} = {\mathcal T}_1 \cup \cdots \cup {\mathcal T}_{\nu}$ where each $W_{k} = W(i,x,\alpha)$ for appropriate $(i,x,\alpha)$. It may happen that for $x,y \in W_k$ there is an admissible pair $(i,j)$ for the covering of ${\mathfrak{M}}$ such that $f(h_{i,j}(x))$ and $f(h_{i,j}(y))$ are not contained in the same foliation chart $V_{\ell}$. However, as there are only a finite number of admissible pairs $(i,j)$ for the covering of ${\mathfrak{M}}$ by foliation charts, we can refine the finite clopen covering $\{ W_{k} \}$ of ${\mathcal T}$, so that for each admissible pair $(i,j)$ and all $x, y \in W_k$ in the domain of $h_{i,j}$, the images $f(h_{i,j}(x))$ and $f(h_{i,j}(y))$ are contained in a common foliation chart. Then for each $W_k$ and $x \in W_k$ there is an index $\alpha$ such that $f(x) \in V_{\alpha}$ and $\pi_{\alpha}(f(x)) \in T_{\alpha}$.
We then obtain a metric $d_{{\mathcal T}}$ on ${\mathcal T}$ by setting, $$d_{{\mathcal T}}(x,y) = d_{T_{\alpha}}(\pi_{\alpha}(f(x)), \pi_{\alpha}(f(y))) \quad {\rm if} \quad x,y \in W_k, $$ and $d_{{\mathcal T}}(x,y) = 1$ otherwise. The metric $d_{{\mathcal T}}$ induces a metric on ${\mathfrak{X}}$, denoted by $\dX$. We claim there exists $C_{\F} \geq 1$ such that the action of $\cGX$ on ${\mathfrak{X}}$ is $C_{\F}$-Lipschitz for $\dX$ and the generating set $\{ h_{i,j} \mid (i,j) ~ {\rm admissible} \}$. Suppose that $x,y \in W_{k}$; then $f(h_{i,j}(x))$ and $f(h_{i,j}(y))$ are contained in the same foliation chart $V_{\ell}$ by construction. Note that $x$ and $h_{i,j}(x)$ are contained in the same leaf of $\F$, so their images $f(x)$ and $f(h_{i,j}(x))$ are contained in the same leaf of $\F_M$. Thus, there is a plaque chain of length at most $\lambda_{f,x}$ between these two points. The same holds for the point $y$, so there is a plaque-chain of length at most $\lambda_{f,y}$ between $f(y)$ and $f(h_{i,j}(y))$. By the compactness of ${\mathcal T}$, there is a uniform upper bound $\lambda_f$ for all such pairs. Thus, by Lemma~\ref{lem-lipalpha} we have the estimate for $x,y \in W_{k}$ with projections $w = \pi(x), w' = \pi(y) \in {\mathfrak{X}}_i$, and $C_{\F}'' = (C_{\F}')^{\lambda_f}$, \begin{equation}\label{eq-Lipschitzlambda} (C_{\F}'')^{-1} \cdot \dX(w,w') \leq \dX(h_{i,j}(w), h_{i,j}(w')) \leq C_{\F}'' \cdot \dX(w,w') ~ . \end{equation} If $x,y$ do not belong to the same clopen set $W_{k}$, then $\dX(w,w') = 1$ by definition, so there exists $C_{\F}''' \geq 1$ such that \eqref{eq-Lipschitzlambda} holds for such pairs. Set $C_{\F} = \max \{C_{\F}'', C_{\F}'''\}$, and the claim follows. \endproof \medskip We next give some properties of Lipschitz pseudogroups and their entropy. The following is an immediate consequence of the definitions. \begin{lemma}\label{lem-lipalpha} Suppose that the action of $\cGX$ on ${\mathfrak{X}}$ is $C$-Lipschitz with respect to $\dX$.
Then for all $g \in \cGX^*$ with word length $\| g \| \leq \alpha$, and $w,w' \in \Dom(g)$ we have \begin{equation}\label{eq-Lipschitzalpha} C^{-\alpha} \cdot \dX(w,w') ~ \leq ~ \dX(g(w), g(w')) ~ \leq ~ C^{\alpha} \cdot \dX(w,w') ~ . \end{equation} \end{lemma} We recall an application of Proposition~2.7 in \cite{GLW1988}, which gives conditions for $h(\cGX,\dX) < \infty$. \begin{prop} \label{prop-entfinite} Let ${\mathfrak{X}} \subset {\mathbb R}^q$ be an embedded Cantor set, with metric $\dX$ obtained by the restriction of the standard metric on ${\mathbb R}^q$. Let $\cGX$ be a finitely generated pseudogroup, with generators $\{h_i \colon U_i \to V_i \mid 1 \leq i \leq k\}$, such that each $h_i$ is the restriction of a $C^1$ diffeomorphism defined on an open neighborhood in ${\mathbb R}^q$ of the compact set $U_i$. Then the action of $\cGX$ with the metric $\dX$ is Lipschitz, and the geometric entropy $h(\cGX, \dX) < \infty$. \end{prop} \begin{cor} \label{cor-lipembed} Let ${\mathfrak{M}}$ be a matchbox manifold which embeds as an exceptional minimal set for a $C^1$-foliation $\F_M$ on a compact smooth manifold $M$, as in Proposition~\ref{prop-lipembed}. Then there is a transverse metric $\dX$ on ${\mathfrak{X}}$ such that $h(\cGX, \dX) < \infty$. \end{cor} \proof Let $\dX$ be the metric on ${\mathfrak{X}}$ constructed in the proof of Proposition~\ref{prop-lipembed}. Then ${\mathfrak{X}}$ is covered by disjoint clopen sets for which $\dX$ is the pull-back of the metric on transversals to the foliation $\F_M$, so by Proposition~\ref{prop-metricentropy2} the entropy for the pseudogroup defined by $\F_M$ restricted to the image of ${\mathfrak{M}}$ and the entropy for $\cGX$ are either both zero, both finite, or both infinite. Proposition~\ref{prop-entfinite} implies that both entropies are either zero or finite.
\endproof Note that by Proposition~\ref{prop-metricentropy1}, the entropy $h(\cGX, \dX)$ is independent of the choice of metric $\dX$ on ${\mathfrak{X}}$, as long as the metric defines the topology of ${\mathfrak{X}}$. Thus by Corollary~\ref{cor-lipembed} we have: \begin{cor} \label{cor-lipembed2} Let ${\mathfrak{M}}$ be a matchbox manifold with pseudogroup $\cGX$ for some regular covering of ${\mathfrak{M}}$. If there exists a metric $\dX$ on ${\mathfrak{X}}$ for which $h(\cGX, \dX) = \infty$, then ${\mathfrak{M}}$ is not homeomorphic to an invariant set for any $C^1$-foliation. \end{cor} It is well-known that the entropy of a smooth non-singular flow on a compact manifold, when restricted to a compact invariant set ${\mathcal Z} \subset M$, is related to the Hausdorff dimension of ${\mathcal Z}$, as in \cite{LY1985a,LY1985b}. For a Lipschitz pseudogroup $(\cGX, \dX)$, the box and Hausdorff dimensions of ${\mathfrak{X}}$ with respect to $\dX$ are both well-defined, as in \cite{Edgar1990}, and they depend only on the Lipschitz equivalence class of the metric $\dX$. While there is no known direct relation between these dimensions and $h(\cGX, \dX)$, analogous to the results for flows, there is a finiteness result based on a concept related to volume doubling for metric spaces (see \cite{Assouad1983,BonkSchramm2000,BuyaloSchroeder2007}). \begin{defn}\label{def-doubling} A complete metric space $({\mathfrak{X}}, \dX)$ has the \emph{doubling property}, if there exists a constant $C > 1$, such that for every $x \in {\mathfrak{X}}$, $r > 0$, and integer $n > 0$, the closed ball $B_{{\mathfrak{X}}}(x,r)$ of radius $r$ about $x$ admits a covering by $C^n$ balls of radius $r/2^n$. \end{defn} Note that if $({\mathfrak{X}}, \dX)$ has the doubling property, then it has finite box dimension as well.
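For example, the following standard observation shows that embedded Cantor sets are doubling. If ${\mathfrak{X}} \subset {\mathbb R}$ carries the restricted metric, then a closed ball $B_{{\mathfrak{X}}}(x,r)$ is contained in an interval of length $2r$, which is the union of $2^{n+1}$ subintervals of length $r/2^n$; each subinterval which meets ${\mathfrak{X}}$ is contained in a closed ball of radius $r/2^n$ centered at one of its points in ${\mathfrak{X}}$. Since $2^{n+1} \leq 4^n$ for $n \geq 1$, the constant $C = 4$ suffices in Definition~\ref{def-doubling}. The same covering argument applies to any embedded Cantor set ${\mathfrak{X}} \subset {\mathbb R}^q$, with a constant $C = C(q)$, which is consistent with the finiteness of entropy in Proposition~\ref{prop-entfinite}.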
The proof of \cite[Proposition~13.2.14]{CandelConlon2000} adapts directly to give: \begin{prop} \label{prop-entfinite3} If $\cGX$ is a compactly generated Lipschitz pseudogroup such that $({\mathfrak{X}}, \dX)$ has the doubling property, then $h(\cGX, \dX) < \infty$. \end{prop} Thus, one approach to constructing matchbox manifolds which cannot embed into a smooth foliation is to consider examples for which the transversal model space ${\mathfrak{X}}$ has infinite box dimension, for some metric. This will be discussed further in Section~\ref{sec-nonembedding}. The Hausdorff dimension of the transversal Cantor set to an exceptional minimal set for a $C^1$-foliation is well-studied, especially for foliations of codimension-one as in Cantwell and Conlon \cite{CC1988}, Matsumoto \cite{Matsumoto1988}, Gelfert and Rams \cite{GelfertRams2009}, and Bi\'{s} and Urbanski \cite{BisUrbanski2008}. Hausdorff dimension is also well-studied for Cantor sets defined by contracting Iterated Function Systems (or \emph{IFS}'s), and the more general class of self-similar fractals. For example, the works of Rams and his coauthors \cite{CrovisierRams2006,GelfertRams2009}, and the works of Rao, Ruan, Wang and Xi \cite{RRX2006,RRW2012}, are closely related to the study of the Lipschitz geometry of foliation minimal sets. \begin{prob} Let ${\mathfrak{M}}$ be a Lipschitz matchbox manifold, with induced Lipschitz pseudogroup $(\cGX, \dX)$, and suppose $0 < h(\cGX, \dX) < \infty$. Find properties of the Lipschitz geometry of ${\mathfrak{X}}$ which must be satisfied if the metric $\dX$ is induced by an embedding of ${\mathfrak{M}}$ into a $C^r$-foliation, for $r \geq 1$. \end{prob} One can also define finer metric conditions on the action of a pseudogroup $\cGX$, such as the Zygmund condition of \cite{HK1990}, which can be used to define ``quasi-conformal'' properties of homeomorphisms, as in \cite{GardnerSullivan1992,MackayTyson2010,Pansu1989,TukiaVaisala1980,TysonWu2006}.
The study of the Lipschitz properties of Gromov hyperbolic groups acting on their boundaries is a very well-developed subject; see for example \cite{BuyaloSchroeder2007,KapovichBenakli2001}. \section{Examples from foliations} \label{sec-foliations} In this section, we recall some examples of minimal matchbox manifolds which are realized as exceptional minimal sets in $C^r$-foliations, for $r \geq 1$. We first consider the case for foliations of codimension-one, for which the strongest results have been proven. The prototypical example is the well-known construction by Denjoy: \begin{thm}[Denjoy \cite{Denjoy1932}] \label{thm-denjoy} There exists a $C^1$-diffeomorphism $f$ of the circle ${\mathbb S}^1$ with no fixed points, and with a non-empty wandering set $W$ such that the induced action of $f$ on the complement ${\bf K} = {\mathbb S}^1 - W$ gives a minimal action, ${\varphi} \colon {\mathbb Z} \times {\bf K} \to {\bf K}$, called a \emph{Denjoy minimal system}. \end{thm} The $C^1$-hypothesis on the diffeomorphism $f$ is far from optimal. For example, McDuff \cite{McDuff1981} formulated a set of necessary and sufficient conditions on an embedded Cantor set ${\bf K} \subset {\mathbb S}^1$ so that it is an invariant set of a $C^{1+\alpha}$-diffeomorphism $f \colon {\mathbb S}^1 \to {\mathbb S}^1$, for $0 < \alpha <1$. Other optimal conditions on the derivative of a diffeomorphism $f \colon {\mathbb S}^1 \to {\mathbb S}^1$ such that it admits a Cantor minimal set are discussed in Hu and Sullivan \cite{HuSullivan1997}. The Denjoy example played a fundamental role in the construction of counter-examples to the Seifert Conjecture, which enabled Schweitzer in \cite{Schweitzer1974} to construct the first $C^1$-examples of flows on $3$-manifolds without periodic orbits.
Schweitzer's construction embedded a suspension of the Denjoy minimal set as an isolated minimal set for a flow contained in a plug embedded in ${\mathbb R}^3$, and motivated Harrison's construction \cite{Harrison1988,Harrison1989} of a $C^{2+\alpha}$-flow in ${\mathbb R}^3$ with an \emph{isolated} minimal limit set homeomorphic to a suspension of the Denjoy set, for $\alpha < 1$. On the other hand, Knill constructed in \cite{Knill1981} a smooth diffeomorphism of the $2$-dimensional annulus with a minimal set homeomorphic to the Denjoy set, so the suspension of this diffeomorphism yields a codimension-$2$ smooth foliation defined by a flow, with a minimal set homeomorphic to the Denjoy minimal set in ${\mathbb T}^2$. Note that the Denjoy set is contained in the closure of the periodic orbits for the Knill diffeomorphism, so this example is not sufficient for constructing smooth counter-examples to the Seifert Conjecture. The Knill example illustrates that the degree of differentiability $r$ for a $C^r$-embedding of a Cantor minimal system may depend on the codimension, as well as the dynamical behavior of the action in open neighborhoods. In some cases, there are analogs of the above results for the case of a finitely-generated group acting minimally on a Cantor set. For example, Pixton gave a generalization of the Denjoy construction: \begin{thm}[Pixton \cite{Pixton1977}] \label{thm-pixton} Suppose that $0 < \alpha < 1/(n+1)$. Then there exists a $C^{1+\alpha}$-action of ${\mathbb Z}^n$ on the circle ${\mathbb S}^1$ with no fixed points and with a non-empty wandering set $W$ so that the complement ${\bf K} = {\mathbb S}^1 - W$ is a Cantor set which is minimal for the restricted action. \end{thm} The Pixton-type examples have been further studied by Deroin, Kleptsyn and Navas in \cite{DKN2007}, and Kleptsyn and Navas in \cite{KleptsynNavas2008}.
Note that the suspension of such actions of ${\mathbb Z}^n$ on ${\mathbb S}^1$ yields foliations with exceptional minimal sets, whose leaves are diffeomorphic to ${\mathbb R}^n$. Sacksteder proved in \cite{Sacksteder1965} that if ${\mathcal Z} \subset M$ is an exceptional minimal set for a codimension-one $C^2$-foliation $\F_M$ of a compact manifold $M$, then some leaf in ${\mathcal Z}$ must have an element of holonomy which is a transverse contraction, and thus ${\mathcal Z}$ cannot be of ``Denjoy type''. A special class of such examples, the \emph{Markov minimal sets}, was studied by Hector \cite{HecHir1981,Hector1983}, Cantwell and Conlon \cite{CC1988}, and Matsumoto \cite{Matsumoto1988}. It remains an open problem to characterize the embeddings of Cantor minimal systems in $C^r$-foliations of codimension-one, for $r \geq 1$ (see \cite{Hurder2002}). There are various constructions of $C^r$-foliations of codimension $q \geq 2$ with minimal sets which are matchbox manifolds. Given a finitely-generated group $\Gamma$ and a $C^r$-action ${\varphi} \colon \Gamma \times N \to N$ on a compact manifold $N$ of dimension $q$, the suspension of the action (see \cite{CN1985}) yields a $C^r$-foliation of codimension-$q$. In general, it is impossible to determine if such an action ${\varphi}$ has an invariant Cantor set on which the action is minimal, except in very special cases. For example, consider a discrete subgroup $\Gamma \subset G$ of the rank one connected Lie group $G = SO(q+1,1)$. The boundary at infinity for the associated symmetric space ${\mathbb H}^{q+1} = SO(q+1,1)/O(q+1)$ is diffeomorphic to ${\mathbb S}^q$. If the limit set of $\Gamma$ in ${\mathbb S}^q$ is a Cantor set, as holds for a Schottky subgroup, then the action of $\Gamma$ on its limit set defines a minimal Cantor action, and the suspension of this action is a minimal matchbox manifold embedded in the smooth foliation associated to the action of $\Gamma$ on ${\mathbb S}^q$.
The Williams solenoids were introduced in the papers \cite{Williams1967,Williams1974}. Williams proved that if $f\colon M \to M$ is an Axiom A diffeomorphism of a compact manifold $M$ with an expanding attractor $\Omega \subset M$, then $\Omega$ admits a stationary presentation, as defined in the next section, and so is homeomorphic to a generalized solenoid. The unstable manifolds for $f$ restricted to an open neighborhood $U$ of $\Omega$ form a $C^{0,\infty}$-foliation of $U$. That is, the foliation has $C^0$-pseudogroup maps, with smoothly embedded leaves, and $\Omega$ is the unique minimal set. \section{Solenoids} \label{sec-solenoids} In this section, we describe the constructions of \emph{weak}, \emph{normal} and \emph{generalized solenoids}, and recall some of their properties. We also give a construction of metrics on the transverse Cantor sets for which the holonomy action is by isometries, and hence equicontinuous. There are many open questions about when such examples can be realized as exceptional minimal sets for $C^r$-foliations. A \emph{presentation} is a collection ${\mathcal P} = \{ p_{\ell+1} \colon M_{\ell+1} \to M_{\ell} \mid \ell \geq 0\}$, where each $M_{\ell}$ is a connected compact simplicial complex of dimension $n$, and each \emph{bonding} map $p_{\ell +1}$ is a proper surjective map of simplicial complexes with discrete fibers. For $\ell \geq 0$ and $x \in M_{\ell}$, the set $p_{\ell +1}^{-1}(x) \subset M_{\ell +1}$ is compact and discrete, so its cardinality $\# \, p_{\ell +1}^{-1}(x) < \infty$; this cardinality need not be constant in $\ell$ or $x$. Associated to the presentation ${\mathcal P}$ is an inverse limit space, called a \emph{generalized solenoid}, \begin{equation}\label{eq-presentationinvlim} {\mathcal S}_{{\mathcal P}} \equiv \lim_{\longleftarrow} ~ \{ p_{\ell +1} \colon M_{\ell +1} \to M_{\ell}\} ~ \subset \prod_{\ell \geq 0} ~ M_{\ell} ~ .
\end{equation} By definition, for a sequence $\{x_{\ell} \in M_{\ell} \mid \ell \geq 0\}$, we have \begin{equation}\label{eq-presentationinvlim2} x = (x_0, x_1, \ldots ) \in {\mathcal S}_{{\mathcal P}} ~ \Longleftrightarrow ~ p_{\ell}(x_{\ell}) = x_{\ell-1} ~ {\rm for ~ all} ~ \ell \geq 1 ~. \end{equation} The set ${\mathcal S}_{{\mathcal P}}$ is given the relative topology, induced from the product topology, so that ${\mathcal S}_{{\mathcal P}}$ is itself compact and connected. For example, if $M_{\ell} = {\mathbb S}^1$ for each $\ell \geq 0$, and the map $p_{\ell}$ is a proper covering map of degree $m_{\ell} > 1$ for $\ell \geq 1$, then ${\mathcal S}_{{\mathcal P}}$ is an example of a \emph{classic solenoid}, discovered independently by van~Dantzig \cite{vanDantzig1930} and Vietoris \cite{Vietoris1927}. We say the presentation ${\mathcal P}$ is \emph{stationary} if $M_{\ell} = M_0$ for all $\ell \geq 0$, and the bonding maps $p_{\ell} = p_1$ for all $\ell \geq 1$. A solenoid ${\mathcal S}_{{\mathcal P}}$ obtained from a stationary presentation ${\mathcal P}$ has a self-map $\sigma$ defined by the shift, $\displaystyle \sigma(x_0, x_1, \ldots ) = (x_1, x_2, \ldots )$. The map $\sigma$ can be considered as a type of expanding map on ${\mathcal S}_{{\mathcal P}}$, though in fact it may be expanding only in some directions, as discussed in Section~3 of \cite{BHS2006}. By the work of Mouron \cite{Mouron2009,Mouron2011}, these are the only examples of $1$-dimensional solenoids with an expanding map. The case for expanding maps of generalized $1$-dimensional solenoids is much richer, as described in the work of Williams \cite{Williams1967,Williams1970}, which classifies the stationary inverse limits defined by expanding maps of branched $1$-manifolds.
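For a concrete illustration, consider the stationary presentation for the dyadic solenoid, with $M_{\ell} = {\mathbb S}^1 = \{z \in {\mathbb C} \mid |z| = 1\}$ and bonding maps $p_{\ell}(z) = z^2$ for all $\ell \geq 1$. Then \begin{equation*} {\mathcal S}_{{\mathcal P}} ~ = ~ \left\{ (z_0, z_1, z_2, \ldots ) \mid z_{\ell}^2 = z_{\ell -1} ~ {\rm for ~ all} ~ \ell \geq 1 \right\} ~ \subset ~ \prod_{\ell \geq 0} ~ {\mathbb S}^1 ~ , \end{equation*} and the shift $\sigma(z_0, z_1, z_2, \ldots ) = (z_1, z_2, \ldots )$ is a homeomorphism of ${\mathcal S}_{{\mathcal P}}$, with inverse $\sigma^{-1}(z_0, z_1, \ldots ) = (z_0^2, z_0, z_1, \ldots )$. Note that the inverse shift covers the degree-$2$ map of the base circle, in that the $0$-th coordinate of $\sigma^{-1}(z_0, z_1, \ldots)$ is $z_0^2$.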
If $M_{\ell}$ is a compact manifold without boundary for each $\ell \geq 0$, and the map $p_{\ell}$ is a proper covering map of degree $m_{\ell} > 1$ for $\ell \geq 1$, then ${\mathcal S}_{{\mathcal P}}$ is said to be a \emph{weak solenoid}. This generalization of $1$-dimensional solenoids was originally considered in the papers by McCord \cite{McCord1965} and Schori \cite{Schori1966}. In particular, McCord showed in \cite{McCord1965} that ${\mathcal S}_{{\mathcal P}}$ has a local product structure, hence: \begin{prop}\label{prop-solenoidsMM} Let ${\mathcal S}_{{\mathcal P}}$ be a weak solenoid, whose base space $M_0$ is a compact manifold of dimension $n \geq 1$. Then ${\mathcal S}_{{\mathcal P}}$ is a minimal matchbox manifold of dimension $n$. \end{prop} Associated to a presentation ${\mathcal P}$ of compact manifolds is a sequence of proper surjective maps $$q_{\ell} = p_{1} \circ \cdots \circ p_{\ell -1} \circ p_{\ell} \colon M_{\ell} \to M_0 ~ .$$ For each $\ell \geq 0$, projection onto the $\ell$-th factor in the product $\displaystyle \prod_{\ell \geq 0} ~ M_{\ell}$ in \eqref{eq-presentationinvlim} yields a fibration map denoted by $\Pi_{\ell} \colon {\mathcal S}_{{\mathcal P}} \to M_{\ell}$, and for $\ell \geq 1$ we have $\Pi_0 = q_{\ell} \circ \Pi_{\ell} \colon {\mathcal S}_{{\mathcal P}} \to M_0$. A choice of a basepoint $x \in {\mathcal S}_{{\mathcal P}}$ gives basepoints $x_{\ell} = \Pi_{\ell}(x) \in M_{\ell}$, and we define ${\mathcal H}^x_{\ell} = \pi_1(M_{\ell}, x_{\ell})$. Let ${\mathfrak{X}}_x = \Pi_0^{-1}(x_0)$ denote the fiber of $\Pi_0$ containing $x$, which is a Cantor set by the assumption on the cardinality of the fibers of each map $p_{\ell}$. A presentation ${\mathcal P}$ is said to be \emph{normal} if, given a basepoint $x \in {\mathcal S}_{{\mathcal P}}$, for each $\ell \geq 1$ the image subgroup of the map $\displaystyle (q_{\ell} )_{\#} \colon {\mathcal H}^x_{\ell} \longrightarrow {\mathcal H}^x_{0}$ is a normal subgroup.
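As an illustration of the definition, consider again a presentation of a classic solenoid, with $M_{\ell} = {\mathbb S}^1$ and each $p_{\ell}$ a covering of degree $m_{\ell} > 1$. Then ${\mathcal H}^x_{0} \cong {\mathbb Z}$, and the image subgroups are \begin{equation*} (q_{\ell})_{\#} \left( {\mathcal H}^x_{\ell} \right) ~ = ~ (m_1 m_2 \cdots m_{\ell}) \, {\mathbb Z} ~ \subset ~ {\mathbb Z} ~ , \end{equation*} which are normal subgroups since ${\mathbb Z}$ is abelian, so every presentation of a classic solenoid is normal. In contrast, when the base manifold $M_0$ has non-abelian fundamental group, the image subgroups $(q_{\ell})_{\#}({\mathcal H}^x_{\ell})$ need not be normal in ${\mathcal H}^x_{0}$, and this distinction is the source of the difference between weak and normal solenoids discussed below.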
Then each quotient $G^x_{\ell} = {\mathcal H}^x_{0}/{\mathcal H}^x_{\ell}$ is a finite group, and there are surjections $G^x_{\ell +1} \to G^x_{\ell}$. The fiber ${\mathfrak{X}}_x$ is then naturally identified with the \emph{Cantor group} defined by the inverse limit, \begin{equation}\label{eq-Galoisfiber} G^x_{\infty} = \lim_{\longleftarrow} ~ \{ p_{\ell +1} \colon G^x_{\ell +1} \to G^x_{\ell }\} ~ \subset \prod_{\ell \geq 0} ~ G^x_{\ell} ~ . \end{equation} The fundamental group ${\mathcal H}^x_0$ acts on the fiber $G^x_{\infty}$ via the coordinate-wise multiplication on the product in \eqref{eq-Galoisfiber}. In the case of the Vietoris solenoid, where each map $p_{\ell} \colon {\mathbb S}^1 \to {\mathbb S}^1$ is a double cover, the fiber $G^x_{\infty}$ is the dyadic group. More generally, a solenoid ${\mathcal S}_{{\mathcal P}}$ is said to be a \emph{normal} (or \emph{McCord}) \emph{solenoid} if the tower of coverings in the presentation is normal, and thus the fiber over $x_0 \in M_0$ of the map ${\mathcal S}_{{\mathcal P}} \to M_0$ is the Cantor group $G^x_{\infty}$. \begin{lemma}\label{lem-denseaction} Let ${\mathcal P}$ be a presentation of a weak solenoid ${\mathcal S}_{{\mathcal P}}$, choose a basepoint $x \in {\mathcal S}_{{\mathcal P}}$ and set ${\mathfrak{X}}_x = \Pi_0^{-1}(x_0)$, and recall that ${\mathcal H}_0^x = \pi_1(M_0,x_0)$. Then the left action of ${\mathcal H}_0^x$ on ${\mathfrak{X}}_x$ is minimal. \end{lemma} \proof The left action of ${\mathcal H}_0^x$ on each quotient space $X_{\ell} = {\mathcal H}^x_{0}/{\mathcal H}^x_{\ell}$ is transitive, so the orbits are dense in the product topology for ${\mathfrak{X}}_x$. \endproof Let $\widetilde{M}_0$ denote the universal covering of the compact manifold $M_0$.
Associated to the left action of ${\mathcal H}_0^x$ on ${\mathfrak{X}}_x$ is a suspension minimal matchbox manifold \begin{equation}\label{eq-suspensionfols} {\mathfrak{M}} = \widetilde{M}_0 \times {\mathfrak{X}}_x / (y_0 \cdot g^{-1}, w) \sim (y_0 , g \cdot w) \quad {\rm for }~ y_0 \in \widetilde{M}_0 , ~ w \in {\mathfrak{X}}_x , ~ g \in {\mathcal H}_0^x ~. \end{equation} Given coverings $\pi' \colon M' \to M$ and $\pi'' \colon M'' \to M$, such that the subgroups $$\displaystyle \pi_{\#}'(\pi_1(M',x')) = \pi_{\#}''(\pi_1(M'', x'')) \subset \pi_1(M,x) ,$$ agree, there is a natural homeomorphism of coverings $M' \cong M''$ which is defined using the path lifting property. From this, it easily follows (see \cite{ClarkHurder2013}) that: \begin{prop}\label{prop-weaksuspensions} Let ${\mathcal S}_{{\mathcal P}}$ be a weak solenoid with base space $M_0$ where $M_0$ is a compact manifold of dimension $n \geq 1$. Then there is a foliated homeomorphism ${\mathcal S}_{{\mathcal P}} \cong {\mathfrak{M}}$. \end{prop} \begin{cor}\label{cor-weaksuspensions} The homeomorphism type of a weak solenoid ${\mathcal S}_{{\mathcal P}}$ is completely determined by the base manifold $M_0$ and the descending chain of subgroups \begin{equation} {\mathcal H}^x_{0} ~ \supset ~ {\mathcal H}^x_{1} ~ \supset ~ {\mathcal H}^x_{2} ~ \supset ~ {\mathcal H}^x_{3} ~ \supset ~ \cdots \end{equation} \end{cor} Note that the intersection $\displaystyle {\mathcal H}^x_{\infty} \equiv \bigcap_{\ell \geq 1} ~ {\mathcal H}^x_{\ell}$ is the fundamental group of the typical leaf of ${\mathcal S}_{{\mathcal P}}$. If this intersection group is trivial, then all leaves of the foliation $\F$ for ${\mathfrak{M}} \cong {\mathcal S}_{{\mathcal P}}$ are isometric to the universal covering of the base manifold $M_0$. The presentation ${\mathcal P}$ of an inverse limit ${\mathcal S}_{{\mathcal P}}$ can be used to construct a ``natural'' metric on the space, which is well-adapted to Lipschitz maps between such spaces.
This has been studied in detail in the works by Miyata and Watanabe \cite{MiyataWatanabe2002,MiyataWatanabe2003a,MiyataWatanabe2003b,MiyataWatanabe2003c,Miyata2009}. In the case of weak solenoids, this construction of natural metrics adapted to the resolution takes on a simplified form. Let ${\mathcal S}_{{\mathcal P}}$ be a weak solenoid, with notations as above. Then each quotient $X_{\ell} = {\mathcal H}^x_{0}/{\mathcal H}^x_{\ell}$ is a finite set with a transitive left action of the fundamental group ${\mathcal H}^x_{0}$. Let $d_{\ell}$ denote the discrete metric on $X_{\ell}$, where $d_{\ell}(u,v) =1$ unless $u=v$, for $u,v \in X_{\ell}$. Observe that ${\mathcal H}^x_{0}$ acts by isometries for the metric $d_{\ell}$. Choose a sequence $\{a_{\ell} \mid a_{\ell} > 0\}$ with total sum $1$, and define a metric on ${\mathfrak{X}}_x$ by setting, for $u,v \in {\mathfrak{X}}_x$ with $u = (x_0, u_1, u_2, \ldots)$ and $v = (x_0, v_1, v_2, \ldots)$, \begin{equation}\label{eq-canonicalmetric} \dX(u,v) = a_1 d_1(u_1, v_1) + a_2 d_2(u_2 , v_2) + \cdots \end{equation} Then $\dX$ is invariant under the action of ${\mathcal H}^x_0$, so the holonomy for the fibration $\Pi_0 \colon {\mathcal S}_{{\mathcal P}} \to M_0$ acts by isometries for this metric on ${\mathfrak{X}}_x$. It may happen that we have two presentations ${\mathcal P}$ and ${\mathcal P}'$ over the same base manifold $M_0$ such that their inverse limits are homeomorphic as fibrations, $h \colon {\mathcal S}_{{\mathcal P}} \cong {\mathcal S}_{{\mathcal P}'}$. However, the map $h$ need not be Lipschitz on fibers for the metrics associated to the presentations as above, as will be seen in the examples in Section~\ref{sec-classification}. The normal solenoids have a nice characterization among the matchbox manifolds. A continuum $\Omega$ is \emph{homogeneous} if its group of homeomorphisms acts transitively on it.
That is, given any two points $x,y \in \Omega$, there is a homeomorphism $\displaystyle h \colon \Omega \to \Omega$ such that $h(x) = y$. It was shown in \cite{ClarkHurder2013} that: \begin{thm}\label{thm-homogeneous} Let ${\mathfrak{M}}$ be a homogeneous matchbox manifold. Then ${\mathfrak{M}}$ is homeomorphic to a normal solenoid ${\mathcal S}_{{\mathcal P}}$ as foliated spaces. \end{thm} The normal solenoids are the analogs, in codimension-zero foliation theory, of the transversely parallelizable (TP) equicontinuous foliations in a topological version of Molino theory for smooth foliations of manifolds \cite{ALMG2013}. Note that all leaves in a normal solenoid are homeomorphic, as the spaces are homogeneous. In the case of weak solenoids, the leaves of $\F$ need not be homeomorphic, and the works \cite{CFL2010,DDMN2010} give examples where the leaves of $\F$ have differing numbers of ends. There is no analog of this behavior in the context of smooth Riemannian foliations on manifolds. Now consider a matchbox manifold ${\mathfrak{M}}$ of dimension $n$ whose associated pseudogroup $\cGX$ is not equicontinuous. This type of matchbox manifold arises in the study of the tiling spaces associated to aperiodic tilings of ${\mathbb R}^n$ with finite local complexity, and also as foliation minimal sets. For example, the Hirsch examples in \cite{Hirsch1975} (see also \cite{BHS2006}) yield real analytic foliations of codimension-one with exceptional minimal sets and expansive holonomy pseudogroups. Also, the exceptional minimal sets for the Denjoy and Pixton examples discussed in Section~\ref{sec-foliations} have the property that all of their leaves are diffeomorphic to ${\mathbb R}^n$, and so they are without leafwise holonomy, but the global holonomy pseudogroup $\cGX$ associated to them is not equicontinuous.
The next result implies that each of these minimal sets admits a presentation of the form \eqref{eq-presentationinvlim}. \begin{thm}[\cite{CHL2013b}] \label{thm-shapemm} Let ${\mathfrak{M}}$ be a minimal matchbox manifold without germinal holonomy. Then there exists a presentation ${\mathcal P}$ by simplicial maps between compact branched manifolds, such that ${\mathfrak{M}}$ is homeomorphic to ${\mathcal S}_{{\mathcal P}}$ as foliated spaces. \end{thm} \begin{cor} \label{cor-shapemm} Let ${\mathfrak{M}}$ be an exceptional minimal set for a $C^1$-foliation $\F$ of a compact manifold $M$. If all leaves of $\F | {\mathfrak{M}}$ are simply connected, then there is a homeomorphism of ${\mathfrak{M}}$ with the inverse limit space ${\mathcal S}_{{\mathcal P}}$ defined by a presentation ${\mathcal P}$, given by simplicial maps between compact branched manifolds. \end{cor} In the case of the Denjoy and Pixton examples given in Theorems~\ref{thm-denjoy} and \ref{thm-pixton}, the geometry of their construction implies that the presentation ${\mathcal P}$ one obtains is stationary. \begin{prob}\label{prob-stationary} Let ${\mathfrak{M}}$ be an exceptional minimal set for a $C^r$-foliation $\F$ of a compact manifold $M$, where $r \geq 1$, and assume that ${\mathfrak{M}}$ is without holonomy. Find conditions on the holonomy pseudogroup $\cGX$ for $\F$ which are sufficient to imply that ${\mathfrak{M}}$ admits a stationary presentation. \end{prob} One approach to this problem is to ask whether the existence of approximations to the foliation $\F$ on ${\mathfrak{M}}$ by the compact branched manifolds $M_{\ell} = M_0$ of a stationary presentation ${\mathcal P}$ implies some form of ``finiteness'' for the holonomy maps of the pseudogroup $\cGX$. Such finiteness conditions may be derived, for example, from the induced action of the shift map $\sigma$ on the tower of maps in the presentation.
Then one would try to ``fill in'' the approximations with a foliation on an open neighborhood. Such a result would be reminiscent of the approach to showing the vanishing of the Godbillon-Vey class by Duminy and Sergiescu in \cite{DS1981}. Theorem~\ref{thm-shapemm} is a generalization of a celebrated result by Anderson and Putnam in \cite{AP1998} for tiling spaces. Given a repetitive, aperiodic tiling of the Euclidean space ${\mathbb R}^n$ with finite local complexity, the associated tiling space $\Omega$ is defined as the closure of the set of translations by ${\mathbb R}^n$ of the given tiling, in an appropriate topology on the space of tilings of ${\mathbb R}^n$. The space $\Omega$ is a matchbox manifold in our sense, whose leaves are defined by a free action of ${\mathbb R}^n$ on $\Omega$ (see, for example, \cite{PFS2009,SW2003,Sadun2008}). A remarkable result in the theory of tilings of ${\mathbb R}^n$ is that the tiling space $\Omega$ admits a presentation as the inverse limit of a tower of branched flat manifolds \cite{AP1998,Sadun2003, Sadun2008}, where the branched manifolds are the union of finite collections of tiles. Other generalizations of the Anderson-Putnam theorem have been given. For example, the work of Benedetti and Gambaudo in \cite{BG2003} discusses the construction of towers for special classes of matchbox manifolds \emph{with possibly non-trivial but finite holonomy}, where the leaves are defined by a locally-free action of a connected Lie group $G$. Their work suggests what appears to be a difficult problem: \begin{prob}\label{prob-torsion} Let ${\mathfrak{M}}$ be a minimal matchbox manifold with leaves having non-trivial holonomy. Show that ${\mathfrak{M}}$ is homeomorphic to an inverse limit ${\mathcal S}_{{\mathcal P}}$ for some modified notion of presentations by branched manifolds, which takes into account the leafwise holonomy groups.
\end{prob} Note that a solution to this problem would yield a presentation for an exceptional minimal set in a $C^2$-foliation of codimension-one, which, by the results of Sacksteder \cite{Sacksteder1965}, always has leaves with holonomy. The existence of such a presentation would provide an alternate approach to the celebrated result of Duminy on the ends of leaves in exceptional minimal sets \cite{CantwellConlon2002}. Theorem~5.8 in the paper \cite{LR2013} states a solution to Problem~\ref{prob-torsion}, though it seems that the claimed result conflicts with the results of \cite{BG2003} for a model of generalized tiling spaces defined by $G$-actions with non-trivial holonomy. Also, the results of Section~6 of the same paper conflict with other established results concerning weak solenoids. \begin{prob}\label{prob-weaksolenoids} Given a weak solenoid ${\mathcal S}_{{\mathcal P}}$ with presentation ${\mathcal P}$ and associated transverse metric given by \eqref{eq-canonicalmetric}, does there exist a Lipschitz embedding of ${\mathcal S}_{{\mathcal P}}$ as an exceptional minimal set for a $C^r$-foliation of a smooth manifold $M$? \end{prob} The problem is of interest whether $M$ is assumed compact, or open without boundary, and for any $r \geq 1$. All known results are for the case where the base $M_0 = {\mathbb T}^n$ is a torus, for $n \geq 1$. The example of type DE (\emph{Derived from Expanding} maps) described by Smale in \cite[p.~788]{Smale1967} gives an embedding of the dyadic solenoid over ${\mathbb S}^1$, realized as a basic set which is an attractor for a smooth diffeomorphism. More general realizations of $1$-dimensional solenoids as minimal sets for smooth flows were constructed in the works by Gambaudo, Sullivan and Tresser \cite{GST1994}, and Markus and Meyer \cite{MM1980}.
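As an aside, the monodromy of the dyadic solenoid is easy to experiment with: its fiber is the dyadic group $\varprojlim \, {\mathbb Z}/2^{\ell}{\mathbb Z}$, and the generator of ${\mathcal H}_0 = {\mathbb Z}$ acts as the ``odometer''. The following Python sketch is our own illustration, not part of the constructions cited above; it truncates the inverse limit at a finite depth, takes the weights $a_{\ell} = 2^{-\ell-1}$ in a metric of the form \eqref{eq-canonicalmetric}, and checks that the odometer acts by isometries and that its orbit meets every cylinder set of a fixed depth.

```python
DEPTH = 10                                     # truncate the inverse limit here
A = [2.0 ** -(l + 1) for l in range(DEPTH)]    # weights a_l, total sum < 1

def coords(n):
    """Point of the truncated dyadic group: residues of n mod 2^(l+1)."""
    return tuple(n % 2 ** (l + 1) for l in range(DEPTH))

def odometer(u):
    """Monodromy generator: add 1 in each finite quotient Z / 2^(l+1) Z."""
    return tuple((c + 1) % 2 ** (l + 1) for l, c in enumerate(u))

def d(u, v):
    """Truncated metric of the form (eq-canonicalmetric), discrete d_l."""
    return sum(a for a, cu, cv in zip(A, u, v) if cu != cv)

u, v = coords(3), coords(17)
# The action preserves agreement in each coordinate, so it is an isometry:
assert d(odometer(u), odometer(v)) == d(u, v)

# The orbit of 0 visits every depth-3 cylinder set (all 8 residues mod 8):
orbit, x = set(), coords(0)
for _ in range(2 ** DEPTH):
    orbit.add(x[:3])
    x = odometer(x)
assert len(orbit) == 8
```

The weights and truncation depth are arbitrary choices, so the numerical values of the metric have no intrinsic meaning; only the isometry and orbit-density properties exhibited do.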
The case when the base manifold $M_0 = {\mathbb T}^n$ for $n \geq 2$ was studied by the author with Clark in \cite{ClarkHurder2011}, where it was shown that for every presentation ${\mathcal P}$ there exists a refinement ${\mathcal P}'$ which can be realized in a $C^r$-foliation. That is, every topological type can be realized, though the metric induced on the inverse limit depends on the presentation ${\mathcal P}$. All of the known examples of weak solenoids which embed as exceptional minimal sets for $C^2$-foliations have abelian fundamental group ${\mathcal H}^x_0$, and so are normal solenoids. It seems plausible, based on the proofs in \cite{ClarkHurder2011}, to conjecture that if a weak solenoid admits an embedding in a $C^2$-foliation, then it must be a normal solenoid with nilpotent covering groups. It also seems possible that an even stronger conclusion holds, that the covering groups for such a smoothly embedded solenoid must be abelian. \section{Fusion of Cantor minimal systems} \label{sec-fusion} There is a well-known method, called \emph{tubularization}, of amalgamating the holonomy pseudogroups of two foliations $\F_1, \F_2$ of codimension-one with the same leaf dimension. We recall this method briefly, then introduce the analogue of this technique for minimal matchbox manifolds, to obtain the \emph{fusion} of their holonomy pseudogroups. Assume we are given two foliations $\F_1$ and $\F_2$, on manifolds $M_1$ and $M_2$, each with leaf dimension $n$ and codimension one. We assume that their normal bundles are oriented, and there are given smooth embeddings $\eta_i \colon {\mathbb S}^1 \to M_i$ which are transverse to $\F_i$ for $i=1,2$. For $\e > 0$ small, let ${\mathcal E}(\eta_i, \e) \subset M_i$ be the closed $\e$-disk neighborhood of the image of the map $\eta_i$, where we assume $\e> 0$ is chosen so that ${\mathcal E}(\eta_i, \e)$ is an embedded submanifold with boundary diffeomorphic to ${\mathbb S}^1 \times {\mathbb S}^{n-1}$.
Then the restriction of $\F_i$ to ${\mathcal E}(\eta_i, \e)$ is a foliation whose leaves are closed $n$-disks, parametrized by ${\mathbb S}^1$ via the transversal $\eta_i$. The choice of a diffeomorphism ${\varphi} \colon {\mathbb S}^1 \to {\mathbb S}^1$ extends to give a foliated map ${\widehat \varphi} \colon {\mathcal E}(\eta_1, \e) \to {\mathcal E}(\eta_2, \e)$, which we use to identify the boundaries $\partial {\mathcal E}(\eta_1, \e)$ and $\partial {\mathcal E}(\eta_2, \e)$. Denote the resulting surgered manifold by $M = M_1 \#_{{\varphi}} M_2$. Then $M$ has a foliation of codimension-one, whose foliation pseudogroup is the amalgamation, or ``pseudogroup free product'', of the pseudogroups for $\F_1$ and $\F_2$. This very useful construction has many applications \cite{CN1985,CandelConlon2000,HecHir1981,Lawson1977}. For foliations with codimension $q > 1$, the tubularization method is not so commonly used, as the existence of a compact manifold $N$ and embeddings $\eta_i \colon N \to M_i$ transverse to the given foliations is a highly exceptional condition to assume. The tubularization method is often replaced with the method of \emph{spinnable structures} of Tamura \cite{Tamura1972}, or the \emph{open book} method as in \cite{Lawson1971,Winkelnkemper1973}. Next, we define the analog of tubularization for Cantor pseudogroups. We first describe this construction for group actions. Assume there are given actions ${\varphi}_i \colon \Gamma_i \times {\bf K}_i \to {\bf K}_i$ for $i=1,2$, of finitely generated groups $\Gamma_i$ on Cantor sets ${\bf K}_i$. Choose clopen subsets $V_i \subset {\bf K}_i$ and a homeomorphism $h \colon V_1 \to V_2$. Define the Cantor set ${\bf K} = {\bf K}_1 \#_h {\bf K}_2$ obtained from the disjoint union ${\bf K}_1 \cup {\bf K}_2$ by identifying the clopen subsets $V_1$ and $V_2$ using the map $h$.
For $\gamma \in \Gamma_1$, the action of $\gamma$ on ${\bf K}$ is via ${\varphi}_1(\gamma)$ on ${\bf K}_1$, and is the identity on the complement ${\bf K}_2 - V_2$. Analogously, the action of ${\varphi}_2$ extends to an action of the elements of $\Gamma_2$ on ${\bf K}$. This produces an action ${\varphi}$ of the free product $\Gamma_1 * \Gamma_2$ on ${\bf K}$. Note that if $V_1 = {\bf K}_1$ and $V_2 = {\bf K}_2$ then this process just combines the generators of ${\varphi}_1(\Gamma_1)$ with the conjugates by $h$ of the generators of ${\varphi}_2(\Gamma_2)$. If each of the actions ${\varphi}_i$ is minimal, then the action of ${\varphi}$ on ${\bf K}$ is also minimal. In the case where ${\mathcal G}_{{\bf K}_1}$ is a pseudogroup acting on ${\bf K}_1$ and ${\mathcal G}_{{\bf K}_2}$ is a pseudogroup acting on ${\bf K}_2$, then the amalgamation of their actions over a homeomorphism $h \colon V_1 \to V_2$ is actually simpler, as there is no need to extend the domains of the local actions. If the action ${\varphi}_i$ is realized as the holonomy of a suspension matchbox manifold ${\mathfrak{M}}_i$ as in \eqref{eq-suspensionfols}, then the action of ${\varphi}$ is realized as the holonomy of a surgered matchbox manifold ${\mathfrak{M}} = {\mathfrak{M}}_1 \#_h {\mathfrak{M}}_2$ constructed analogously to the method described above for codimension-one foliations. This construction is analogous to the construction of a new graph matchbox manifold, from two given graph matchbox manifolds, which was introduced by Lukina in \cite{Lukina2012} as part of her study of the dynamics of examples obtained by the Ghys-Kenyon construction. Lukina called this process ``fusion'', and we adopt the same terminology for the process described here. \begin{defn}\label{def-fusion} Let ${\mathfrak{M}}_i$ be minimal matchbox manifolds with transversals ${\mathfrak{X}}_i$ for $i =1,2$. Choose clopen subsets $V_i \subset {\mathfrak{X}}_i$ and a homeomorphism $h \colon V_1 \to V_2$.
Then the minimal matchbox manifold ${\mathfrak{M}} = {\mathfrak{M}}_1 \#_h {\mathfrak{M}}_2$ is said to be the \emph{fusion} of ${\mathfrak{M}}_1$ with ${\mathfrak{M}}_2$ over $h$. \end{defn} The concept of fusion for matchbox manifolds illustrates some of their fundamental differences with smooth foliations. A clopen transversal for a smooth foliation must be a compact submanifold without boundary, which does not always exist, while the above fusion construction can always be defined, along with many variations of it. Here is an interesting basic question: \begin{prob}\label{prob-fusiondynamics} How are the dynamical properties of a fusion ${\mathfrak{M}} = {\mathfrak{M}}_1 \#_h {\mathfrak{M}}_2$ related to the dynamical properties of the factors ${\mathfrak{M}}_1$ and ${\mathfrak{M}}_2$? In particular, describe the geometric structure of the leaves in ${\mathfrak{M}}$, in terms of the structure of the leaves of the factors ${\mathfrak{M}}_1$ and ${\mathfrak{M}}_2$ and the fusion map $h \colon V_1 \to V_2$ between transversals. Show that the theory of hierarchies for the leaves of graph matchbox manifolds in Lukina \cite{Lukina2012} also applies for fusion in the context of matchbox manifolds. \end{prob} \section{Non-embeddable matchbox manifolds} \label{sec-nonembedding} In this section, we construct examples of Lipschitz pseudogroups $(\cGX, \dX)$ which cannot arise from an embedding of a matchbox manifold into a $C^1$-foliation. All of the pseudogroups constructed can be realized as the holonomy of a matchbox manifold ${\mathfrak{M}}$, using the suspension construction described in \cite{LRL2013}. Thus, the resulting matchbox manifolds ${\mathfrak{M}}$ do not embed as closed invariant sets for any $C^1$-foliation. There are many variations on the constructions, which show that there is a wide variety of non-embeddable matchbox manifolds.
The idea of the construction is to produce a Lipschitz pseudogroup $\cGX$ with infinite entropy, $h(\cGX, \dX) = \infty$, so that by Corollary~\ref{cor-lipembed} the associated suspension matchbox manifold is not homeomorphic to an exceptional minimal set. Achieving infinite entropy with Lipschitz generators for $\cGX$ requires that the space $({\mathfrak{X}}, \dX)$ have infinite Hausdorff dimension. The first step, then, is the construction of the model for the metric Cantor set $({\mathfrak{X}}, \dX)$, which is based on the construction of \emph{graph matchbox manifolds}, as introduced by Ghys in \cite{Ghys1999}, and studied in \cite{Blanc2003,LR2011,LRL2013,Lukina2014}. Let ${\mathcal T}$ be an infinite connected tree with bounded valence. The example that we consider here is the Cayley graph ${\mathcal T}_n$ for the free group on $n$ generators, ${\mathbb F}_n = {\mathbb Z} * \cdots * {\mathbb Z}$, for $n \geq 2$. Choose a basepoint $e \in {\mathcal T}$. Each edge of ${\mathcal T}$ is homeomorphic to $[0,1]$ so inherits a metric from ${\mathbb R}$. Then give ${\mathcal T}$ the path length metric, and let $B_{{\mathcal T}}(x,n) \subset {\mathcal T}$ denote the closed ball of radius $n$ centered at $x \in {\mathcal T}$. Thus, if $x$ is a vertex of the tree, then $B_{{\mathcal T}}(x,n)$ is a connected subtree of ${\mathcal T}$. We say that a subtree $T \subset {\mathcal T}$ has a \emph{dead end} if there is a vertex $x \in T$ which is contained in a unique edge. Let $X$ be the set of all connected subtrees $T \subset {\mathcal T}$ which have no dead ends, and such that $e \in T$. Define the metric $d_X$ on $X$ by declaring that, for $T, T' \in X$, $$d_X(T, T') \leq 2^{-n} ~ \Longleftrightarrow ~ B_{{\mathcal T}}(e,n) \cap T = B_{{\mathcal T}}(e,n) \cap T'. $$ Let ${\mathfrak{X}}$ denote the closure of $X$ in this metric, then ${\mathfrak{X}}$ is a totally disconnected space. A point $z \in {\mathfrak{X}}$ is then a subtree of ${\mathcal T}$ which contains the basepoint $e$.
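To illustrate the metric $d_X$ concretely, the following Python sketch (ours, not taken from the references) encodes a subtree of the Cayley graph of ${\mathbb F}_2$ as a prefix-closed set of reduced words in the generators $a, b$ and their inverses $A, B$, and evaluates $d_X$ as $2^{-n}$ for the largest radius $n$ at which the balls about the basepoint $e$ agree. The finite trees used here have dead ends, so they are only truncations of points of $X$, but truncations determine the metric to any chosen depth.

```python
def ball_restriction(tree, n):
    """Vertices of the subtree within distance n of the basepoint e.

    A subtree is encoded as a set of reduced words over 'aAbB', closed
    under taking prefixes and containing the empty word '' (= e)."""
    return frozenset(w for w in tree if len(w) <= n)

def d_X(T1, T2, max_depth=64):
    """Ultrametric: 2^{-n} for the largest n with equal ball restrictions
    (returns 0.0 if the trees agree out to max_depth)."""
    for n in range(max_depth + 1):
        if ball_restriction(T1, n) != ball_restriction(T2, n):
            return 2.0 ** -(n - 1)   # the trees agreed up to radius n-1
    return 0.0

axis_a  = {'', 'a', 'aa', 'aaa', 'A', 'AA', 'AAA'}   # segment of the a-axis
axis_ab = axis_a | {'b', 'ab'}                       # branches off near e

assert d_X(axis_a, axis_a) == 0.0
assert d_X(axis_a, axis_ab) == 1.0    # they differ already in the radius-1 ball
```

Being an ultrametric, $d_X$ satisfies $d_X(T,T'') \leq \max(d_X(T,T'), d_X(T',T''))$, which the reader can verify on further examples.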
In the case where ${\mathcal T}_n$ is the Cayley graph of ${\mathbb F}_n$, we denote the closure of the space of subtrees of ${\mathcal T}_n$ as above by ${\mathfrak{X}}_n$. The ``no dead end'' assumption on the subtrees implies that ${\mathfrak{X}}_n$ has no isolated points, hence is a Cantor set. Let $d_{{\mathfrak{X}}_n}$ denote the induced metric on ${\mathfrak{X}}_n$. Then we have: \begin{thm}[Lukina \cite{Lukina2014}]\label{thm-infdim} For $n \geq 2$, the metric space $({\mathfrak{X}}_n , d_{{\mathfrak{X}}_n})$ has infinite Hausdorff dimension. \end{thm} The translation action of ${\mathbb F}_n$ on ${\mathcal T}_n$ defines a pseudogroup ${\mathcal G}_{{\mathfrak{X}}_n}$ acting on ${\mathfrak{X}}_n$, where a word $\gamma \in {\mathbb F}_n$ acts on the pointed subtree $(T,e)$ if $\gamma \cdot e \in T$, so that $(\gamma^{-1} \cdot T, e) \in {\mathfrak{X}}_n$. This action is discussed further in \cite{LRL2013,Lukina2014}. In particular, the action is Lipschitz for the metric $d_{{\mathfrak{X}}_n}$, with constant $C = 2$. Lukina shows in \cite{Lukina2012} that there exists a dense orbit for this action, so the pseudogroup is transitive. However, the periodic orbits for the action of ${\mathcal G}_{{\mathfrak{X}}_n}$ are dense, so the action is not minimal. The proof of Theorem~\ref{thm-infdim} in \cite{Lukina2014} essentially shows the following, with the details given in \cite{HL2014}: \begin{thm}\label{thm-infent} For $n \geq 2$, $h({\mathcal G}_{{\mathfrak{X}}_n}, d_{{\mathfrak{X}}_n}) = \infty$ for the metric space $({\mathfrak{X}}_n , d_{{\mathfrak{X}}_n})$. \end{thm} The suspension construction for pseudogroups given in \cite{LRL2013} constructs a $2$-dimensional matchbox manifold ${\mathfrak{M}}_n$ whose holonomy pseudogroup is ${\mathcal G}_{{\mathfrak{X}}_n}$.
Thus, combining Theorem~\ref{thm-infent} with Corollary~\ref{cor-lipembed} and Propositions~\ref{prop-metricentropy1} and \ref{prop-metricentropy2}, we have the consequence: \begin{thm}\label{thm-noembedMn} For $n \geq 2$, the transitive Lipschitz matchbox manifold ${\mathfrak{M}}_n$ is not homeomorphic to an invariant subset of any $C^1$-foliation $\F_M$ of a manifold $M$. \end{thm} Now consider a minimal Cantor action ${\varphi}_2 \colon {\mathfrak{X}}_2 \to {\mathfrak{X}}_2$ for some Cantor set ${\mathfrak{X}}_2$. For example, let ${\varphi}_2$ be a Denjoy type homeomorphism. Since any two Cantor sets are homeomorphic, there is a homeomorphism $h \colon {\mathfrak{X}}_n \to {\mathfrak{X}}_2$, and we can form the fusion of the action of ${\mathcal G}_{{\mathfrak{X}}_n}$ with that of ${\varphi}_2$. That is, we adjoin the action of ${\widehat \varphi}_2 \equiv h^{-1} \circ {\varphi}_2 \circ h$ to the action of ${\mathcal G}_{{\mathfrak{X}}_n}$ on ${\mathfrak{X}}_n$ to obtain a minimal action of the fusion pseudogroup, denoted by $\widehat{{\mathcal G}}_{{\mathfrak{X}}_n}$. Let $\widehat{{\mathfrak{M}}}_n$ denote the suspension matchbox manifold obtained from $\widehat{{\mathcal G}}_{{\mathfrak{X}}_n}$. The action of ${\widehat \varphi}_2$ is not assumed to be Lipschitz, but we have in any case: \begin{thm}\label{thm-noembedMn2} For $n \geq 2$, the minimal matchbox manifold $\widehat{{\mathfrak{M}}}_n$ is not homeomorphic to an invariant subset of any $C^1$-foliation $\F_M$ of a manifold $M$. \end{thm} \proof Suppose that $\widehat{{\mathfrak{M}}}_n$ is homeomorphic to an invariant subset ${\mathcal Z} \subset M$ of a $C^1$-foliation $\F_M$ on $M$. Then ${\mathcal Z}$ must be a saturated subset, and every leaf is dense as this is true for $\widehat{{\mathfrak{M}}}_n$. Moreover, the transversals to $\widehat{{\mathfrak{M}}}_n$ are Cantor sets, so ${\mathcal Z}$ must be an exceptional minimal set for $\F_M$.
Then by Proposition~\ref{prop-lipembed}, the embedding induces a metric $d_{{\mathfrak{X}}_n}'$ on ${\mathfrak{X}}_n$ such that $\widehat{{\mathcal G}}_{{\mathfrak{X}}_n}$ is a Lipschitz pseudogroup for this metric. By construction, $\widehat{{\mathcal G}}_{{\mathfrak{X}}_n}$ contains ${\mathcal G}_{{\mathfrak{X}}_n}$ as a sub-pseudogroup, and so $$h(\widehat{{\mathcal G}}_{{\mathfrak{X}}_n}, d_{{\mathfrak{X}}_n}') ~ \geq ~ h({\mathcal G}_{{\mathfrak{X}}_n}, d_{{\mathfrak{X}}_n}') ~ = ~ h({\mathcal G}_{{\mathfrak{X}}_n}, d_{{\mathfrak{X}}_n}) ~ = ~\infty$$ where we use Proposition~\ref{prop-metricentropy1}. But this contradicts Corollary~\ref{cor-lipembed}. \endproof These two examples suggest the following: \begin{prob} Show that there is no metric $d_{{\mathfrak{X}}_n}''$ on ${\mathfrak{X}}_n$ for which the action of $\widehat{{\mathcal G}}_{{\mathfrak{X}}_n}$ is Lipschitz. \end{prob} It seems very likely that this has a positive solution, that no such metric can exist, though the proof of this fact may require some new insights or techniques. \medskip We conclude this section with another remark, and a question. Recall that Problem~\ref{problem8} asks for obstructions to the existence of an embedding $\iota \colon {\mathfrak{M}} \to M$ of a Lipschitz matchbox manifold as an exceptional minimal set for a $C^1$-foliation $\F$ on $M$. Such an embedding implies in particular that the transverse Cantor set ${\mathfrak{X}}$ admits a Lipschitz embedding into the Euclidean space ${\mathbb R}^q$. The question of when a metric space admits a Lipschitz embedding in ${\mathbb R}^q$ dates from the 1928 paper \cite{Bouligand1928}, and is certainly well-studied. For example, the doubling property in Definition~\ref{def-doubling} of Assouad \cite{Assouad1983}, and the weakening of this condition by Olson and Robinson \cite{OlsonRobinson2010}, give embedding criteria for metric spaces.
These are types of ``asymptotic small-scale homogeneity'' properties of the metric $\dX$, which suggests an alternate approach to the Lipschitz embedding problem for minimal pseudogroups. \begin{prob} \label{prob-doubling} Let ${\mathfrak{X}}$ be a Cantor space with metric $\dX$. Let $\cGX$ be a compactly-generated pseudogroup acting minimally on ${\mathfrak{X}}$, and which is Lipschitz with respect to $\dX$. If the metric $\dX$ satisfies some version of the doubling condition, so that $({\mathfrak{X}}, \dX)$ admits a Lipschitz embedding into some ${\mathbb R}^q$, does there also exist an embedding such that $\cGX$ is obtained by the restriction of some $C^1$-pseudogroup acting on an open neighborhood of the embedded Cantor set? \end{prob} For a Cantor set ${\mathfrak{X}}$ with an ultrametric $\dX$, the Lipschitz embedding problem for $({\mathfrak{X}}, \dX)$ has been solved for various special cases. The work of Julien and Savinien in \cite{JS2011} estimates the Hausdorff dimension for a self-similar Cantor set with an ultrametric, and they derive estimates for its Lipschitz embedding dimension. The embedding properties of ultrametrics on Cantor sets which are the boundary of a hyperbolic group are discussed by Buyalo and Schroeder in \cite[Chapter 8]{BuyaloSchroeder2007}. In both of these cases, it seems likely that the answer to Problem~\ref{prob-doubling} is positive. In general, one expects the solution to be more complicated, as is almost always the case with Cantor sets. Finally, recall that every Cantor set embeds in ${\mathbb R}^2$, and any two Cantor sets in ${\mathbb R}^2$ are carried one to the other by a homeomorphism of ${\mathbb R}^2$. This classical fact, due to Brouwer, is proved in detail by Moise in Chapter 12 of \cite{Moise1977}. It has been used to construct topological embeddings of solenoids in codimension-two foliations, as in the work of Clark and Fokkink \cite{ClarkFokkink2004}.
On the other hand, the tameness property of Cantor sets in ${\mathbb R}^2$ does not hold for all Cantor sets embedded in ${\mathbb R}^3$. \emph{Antoine's necklace} is the classical example of this, as discussed in Chapter 18 of \cite{Moise1977}, and in Section~4.6 of \cite{HockingYoung1988}. It seems natural to ask the naive question: \begin{prob} \label{prob-antoine} Let ${\mathfrak{A}}$ denote the Antoine Cantor set embedded in ${\mathbb R}^3$, with the metric $\dA$ on ${\mathfrak{A}}$ induced by the restriction of the Euclidean metric. Does there exist an exceptional minimal set for a $C^1$-foliation of codimension three, whose transverse model space is Lipschitz equivalent to $({\mathfrak{A}}, \dA)$? \end{prob} \section{Classification of Lipschitz solenoids} \label{sec-classification} In this section, we define \emph{Morita equivalence} and \emph{Lipschitz equivalence} of minimal pseudogroups, and consider the problem of Lipschitz classification for the special case of normal solenoids. While the condition of Morita equivalence is well-known and studied, Lipschitz equivalence seems less commonly studied, except possibly for group and semi-group actions on their boundaries. Let $\cGX$ be a minimal pseudogroup acting on a Cantor space ${\mathfrak{X}}$, and let $V \subset {\mathfrak{X}}$ be a clopen subset. The induced pseudogroup $\cGX | V$ is defined as the subcollection of all maps in $\cGX$ with domain and range in $V$. The following is then the adaptation of the notion of Morita equivalence of groupoids, as in Haefliger \cite{Haefliger1984}, to the context of minimal Cantor actions. \begin{defn} Let $\cGX$ be a minimal pseudogroup acting on the Cantor set ${\mathfrak{X}}$ via Lipschitz homeomorphisms with respect to the metric $\dX$. Likewise, let $\cGY$ be a minimal pseudogroup acting on the Cantor set ${\mathfrak{Y}}$ via Lipschitz homeomorphisms with respect to the metric $\dY$.
Then \begin{enumerate} \item $(\cGX, {\mathfrak{X}}, \dX)$ is \emph{Morita equivalent} to $(\cGY, {\mathfrak{Y}}, \dY)$ if there exist clopen subsets $V \subset {\mathfrak{X}}$ and $W \subset {\mathfrak{Y}}$, and a homeomorphism $h \colon V \to W$ which conjugates $\cGX | V$ to $\cGY | W$. \item $(\cGX, {\mathfrak{X}}, \dX)$ is \emph{Lipschitz equivalent} to $(\cGY, {\mathfrak{Y}}, \dY)$ if the conjugation $h$ is Lipschitz. \end{enumerate} \end{defn} Morita equivalence is sometimes called \emph{return equivalence} in the literature \cite{AO1995,Fokkink1991,CHL2013c}. Morita equivalence is a basic notion for the study of $C^*$-algebra invariants for foliation groupoids, as discussed by Renault \cite{Renault1980} and Connes \cite{Connes1994}. Lipschitz equivalence is a basic notion for the study of \emph{metric non-commutative geometry} \cite{Connes1994}. The strongest results for classification, up to Morita equivalence, have been obtained for $1$-dimensional minimal matchbox manifolds. Fokkink showed in his thesis \cite{Fokkink1991} (see also Barge and Williams \cite{BargeWilliams2000}) that if $f_1, f_2$ are $C^1$-actions on ${\mathbb S}^1$, each of which has a Cantor minimal set, then the induced minimal Cantor actions are Morita equivalent if and only if they have rotation numbers which are conjugate under the linear fractional action of $SL(2,{\mathbb Z})$ on ${\mathbb R}$. This implies there are uncountably many non-homeomorphic minimal matchbox manifolds which embed as minimal sets for $C^1$-foliations of ${\mathbb T}^2$. There is a higher-dimensional version of this result for torus-like matchbox manifolds, proved in \cite{CHL2013c}. See the papers \cite{BargeDiamond2001,BargeSwanson2007,BargeMartensen2011} for the classification of $1$-dimensional minimal matchbox manifolds embedded in compact surfaces, which are necessarily not solenoids.
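Fokkink's criterion can be tested in examples by the classical theorem of Serret: two irrational numbers are equivalent under the fractional linear action of $GL(2,{\mathbb Z})$ if and only if their continued fraction expansions have a common tail. The sketch below is our own finite-depth illustration of this test; the window and shift bounds are arbitrary, so it certifies a common tail only up to those depths, and is not a decision procedure for arbitrary irrationals.

```python
def tails_agree(cf1, cf2, window=30, max_shift=10):
    """Test (to finite depth) whether two continued fraction expansions
    have a common tail.  cf1, cf2 map an index n to the n-th partial
    quotient of the expansion."""
    s1 = [cf1(n) for n in range(window + max_shift)]
    s2 = [cf2(n) for n in range(window + max_shift)]
    # Compare length-`window` slices starting at all small shifts:
    return any(s1[i:i + window] == s2[j:j + window]
               for i in range(max_shift) for j in range(max_shift))

golden    = lambda n: 1                    # phi      = [1; 1, 1, 1, ...]
golden_p1 = lambda n: 2 if n == 0 else 1   # 1 + phi  = [2; 1, 1, 1, ...]
sqrt2     = lambda n: 1 if n == 0 else 2   # sqrt(2)  = [1; 2, 2, 2, ...]

assert tails_agree(golden, golden_p1)      # equivalent rotation numbers
assert not tails_agree(golden, sqrt2)      # inequivalent rotation numbers
```

For eventually periodic expansions, as with the quadratic irrationals above, a finite window already determines the answer.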
In general, the classification problem modulo orbit equivalence is unsolvable for the pseudogroups associated to minimal matchbox manifolds of dimension $n \geq 2$, since already the normal solenoids with base manifold ${\mathbb T}^n$, for $n \geq 2$, are not classifiable. See \cite{Hjorth2000, KechrisMiller2004, Thomas2001, Thomas2003} for discussions of the undecidability of the Borel classification problem up to orbit equivalence. The advantage of considering Lipschitz equivalence of pseudogroup actions is that, while the equivalence is more refined, it can also be more practical to determine when two actions are not Lipschitz equivalent. We discuss the difference between Morita and Lipschitz classification in the case of the weak solenoids, where there is a well-known criterion for Morita equivalence. First, we recall the criterion for when two weak solenoids are homeomorphic, as described in \cite[Section~9]{ClarkHurder2013}, based on a result of Mioduszewski \cite{Mioduszewski1963}. Assume that we are given two presentations, where all spaces $\{M_{\ell} \mid \ell \geq 0\}$ and $\{N_{\ell} \mid \ell \geq 0\}$ are compact oriented manifolds, and all bonding maps are orientation-preserving coverings, \begin{equation} {\mathcal P} = \{p_{\ell+1} \colon M_{\ell+1} \to M_{\ell} \mid \ell \geq 0\} \quad, \quad {\mathcal Q} = \{q_{\ell+1} \colon N_{\ell+1} \to N_{\ell} \mid \ell \geq 0\} \end{equation} which define weak solenoids ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal Q}}$ as in \eqref{eq-presentationinvlim}, respectively. Choose basepoints $\ovx \in {\mathcal S}_{{\mathcal P}}$ and $\ovy \in {\mathcal S}_{{\mathcal Q}}$. We consider the special case where $M_0 = N_0$, as the more general case easily reduces to this one, and the key issues are more evident in this special case.
Let $\Pi_{\ell}^{{\mathcal P}} \colon {\mathcal S}_{{\mathcal P}} \to M_{\ell}$ denote the fibration map onto the factor $M_{\ell}$ for ${\mathcal S}_{{\mathcal P}}$, and $\Pi_{\ell}^{{\mathcal Q}} \colon {\mathcal S}_{{\mathcal Q}} \to N_{\ell}$ that for ${\mathcal S}_{{\mathcal Q}}$. We can assume that $x_0 = y_0$ in $M_0$, where $x_0 = \Pi_0^{{\mathcal P}}(\ovx)$ and $y_0 = \Pi_0^{{\mathcal Q}}(\ovy)$, then set ${\mathcal H}_0 = \pi_1(M_0 , x_0)$, where we suppress the dependence on basepoints. Define the subgroups ${\mathcal H}_{\ell} \subset {\mathcal H}_0$ which are the images of the groups $\pi_1(M_{\ell}, x_{\ell})$ under the maps $\displaystyle (p_{\ell} )_{\#} $ associated to the presentation ${\mathcal P}$, and let ${\mathcal G}_{\ell} \subset {\mathcal H}_0$ be the corresponding images of the groups $\pi_1(N_{\ell}, y_{\ell})$ under the maps associated to ${\mathcal Q}$. Then we obtain two nested sequences of subgroups \begin{table}[htdp] \begin{center} \begin{tabular}{cccccccccc} $\cdots \subset$ & ${\mathcal H}_{\ell+1}$ & $\subset$ & ${\mathcal H}_{\ell}$ & $\subset$ & $\cdots$ & $\subset$ & ${\mathcal H}_{1}$ & $\subset$ &${\mathcal H}_{0}$ \\ & & & & & & & & & $\parallel$ \\ $\cdots \subset$ & ${\mathcal G}_{\ell+1}$ & $\subset$ & ${\mathcal G}_{\ell}$ & $\subset$ & $\cdots$ & $\subset$ & ${\mathcal G}_{1}$ & $\subset$ & ${\mathcal G}_{0}$ \end{tabular} \end{center} \end{table}% The proof of the following result can be found in the papers \cite{McCord1965,Mioduszewski1963,Rogers1970,Schori1966}. \begin{thm}\label{thm-classifying1} The weak solenoids ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal Q}}$ are basepoint homeomorphic if and only if there exists $\ell_0 \geq 0$ and $\nu_0 \geq 0$, such that for every $\ell \geq \ell_0$ there exists $\nu_{\ell} \geq \nu_0$ with ${\mathcal G}_{\nu_{\ell}} \subset {\mathcal H}_{\ell}$, and for every $\nu \geq \nu_0$ there exists $\ell_{\nu} \geq \ell_0$ with ${\mathcal H}_{\ell_{\nu}} \subset {\mathcal G}_{\nu}$. 
\end{thm} The condition on the subgroup chains in Theorem~\ref{thm-classifying1} is called \emph{tower equivalence} of the chains. Let ${\mathfrak{X}}$ denote the fiber of $\Pi_0^{{\mathcal P}}$ over $x_0$, and ${\mathfrak{Y}}$ the fiber of $\Pi_0^{{\mathcal Q}}$ over $y_0$. Then the monodromy of the fibration $\Pi_0^{{\mathcal P}}$ defines the action of ${\mathcal H}_0$ on ${\mathfrak{X}}$, and the action of ${\mathcal H}_0 = {\mathcal G}_0$ on ${\mathfrak{Y}}$ is defined by the monodromy of $\Pi_0^{{\mathcal Q}}$. Results of Clark, Lukina and the author then yield: \begin{thm}[\cite{ClarkHurder2013}] \label{thm-classifying2} If the weak solenoids ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal Q}}$ are basepoint homeomorphic, with $M_0 = N_0$, then the holonomy actions of ${\mathcal H}_0$ on ${\mathfrak{X}}$ and on ${\mathfrak{Y}}$ are Morita equivalent. \end{thm} \begin{thm}[\cite{CHL2013c}] \label{thm-classifying3} If the weak solenoids ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal Q}}$ have base manifold $M_0 = N_0 = {\mathbb T}^n$, and the holonomy actions of ${\mathcal H}_0$ on ${\mathfrak{X}}$ and on ${\mathfrak{Y}}$ are Morita equivalent, then ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal Q}}$ are basepoint homeomorphic. \end{thm} It follows that the classification problem for matchbox manifolds which are homeomorphic to a normal solenoid with base ${\mathbb T}^n$ reduces to the study of the Morita equivalence class of their holonomy pseudogroups, which by Theorem~\ref{thm-classifying1} reduces to a problem concerning the tower equivalence of subgroup chains in ${\mathbb Z}^n$. The classification problem for subgroup chains is not Borel for $n \geq 2$. In the case of classical Vietoris solenoids, where $M_0 = {\mathbb S}^1$ and ${\mathcal H}_0 = {\mathbb Z}$, the classification is much more straightforward. 
For each $\ell > 0$ there exist integers $m_{\ell} > 1$ and $n_{\ell} > 1$, defined recursively, so that ${\mathcal H}_{\ell} = \langle m_1 m_2 \cdots m_{\ell}\rangle \subset {\mathbb Z}$, and ${\mathcal G}_{\ell} = \langle n_1 n_2 \cdots n_{\ell}\rangle \subset {\mathbb Z}$. Let $P$ be the set of all prime factors of the integers $\{m_{\ell} \mid \ell > 0\}$, counted with multiplicity, and let $Q$ be the same for the integers $\{n_{\ell} \mid \ell > 0\}$. For example, for the dyadic solenoid, the set $P = \{2,2,2,\ldots\}$ is an infinite collection of copies of the prime $2$. These infinite sets of primes $P$ and $Q$ are ordered by the sequence in which they appear in the factorizations of the covering degrees $m_{\ell}$ and $n_{\ell}$. If the two sets $P$ and $Q$ are in \emph{bijective} correspondence, then it is an exercise to show that the tower equivalence condition of Theorem~\ref{thm-classifying1} is satisfied for the presentations ${\mathcal P}$ and ${\mathcal Q}$, which yields the classification of Vietoris solenoids up to homeomorphism by Bing \cite{Bing1960} and McCord \cite{McCord1965} (see also Aarts and Fokkink \cite{AF1991}), and also the classification up to Morita equivalence of the associated minimal ${\mathbb Z}$-actions on the Cantor set fibers. However, for the metrics on the Cantor sections ${\mathfrak{X}} \subset {\mathfrak{M}} = {\mathcal S}_{{\mathcal P}}$ and ${\mathfrak{Y}} \subset {\mathfrak{N}} = {\mathcal S}_{{\mathcal Q}}$ as defined by the formula in \eqref{eq-canonicalmetric}, it is evident that if the bijection $\sigma \colon P \leftrightarrow Q$ displaces elements by increasingly large amounts with respect to their orderings, then the induced map between the fibers, $h_{\sigma} \colon {\mathfrak{X}} \cong {\mathfrak{Y}}$, will not be Lipschitz. This motivates introducing the following invariant of a tower equivalence. 
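For chains of subgroups of ${\mathbb Z}$, the tower equivalence condition of Theorem~\ref{thm-classifying1} is a divisibility condition on the cumulative products of the covering degrees, and can be checked mechanically on finite truncations of the data. The following Python sketch (the function name is our own) illustrates this, using that ${\mathcal G}_{\nu} \subset {\mathcal H}_{\ell}$ holds exactly when $m_1 \cdots m_{\ell}$ divides $n_1 \cdots n_{\nu}$:

```python
from itertools import accumulate
from operator import mul

def tower_equivalent_finite(m, n):
    """Test the tower equivalence condition of Theorem thm-classifying1
    on finite truncations, for H_ell = <m_1 ... m_ell> and
    G_nu = <n_1 ... n_nu> as subgroups of Z.  Here G_nu is contained
    in H_ell exactly when m_1 ... m_ell divides n_1 ... n_nu."""
    M = list(accumulate(m, mul))  # generators of H_1, H_2, ...
    N = list(accumulate(n, mul))  # generators of G_1, G_2, ...
    # every H_ell must contain some G_nu, and every G_nu some H_ell
    forward = all(any(Nv % Ml == 0 for Nv in N) for Ml in M)
    backward = all(any(Ml % Nv == 0 for Ml in M) for Nv in N)
    return forward and backward

# the dyadic solenoid (degrees 2,2,2,...) against degrees 4,4,4,...:
# both chains consist of powers of 2, and the chains interleave
print(tower_equivalent_finite([2] * 12, [4] * 6))   # True
# dyadic against triadic: no inclusions hold in either direction
print(tower_equivalent_finite([2] * 8, [3] * 8))    # False
```

A finite truncation can of course only certify failure, as in the dyadic versus triadic case, or give evidence for equivalence of the infinite chains.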
Let ${\mathcal P}$ and ${\mathcal Q}$ be presentations with common base manifold $M_0$, and suppose there exists a tower equivalence between them. That is, there exists $\ell_0 \geq 0$ and $\nu_0 \geq 0$, such that for every $\ell \geq \ell_0$ there exists $\nu_{\ell} \geq \nu_0$ with ${\mathcal G}_{\nu_{\ell}} \subset {\mathcal H}_{\ell}$, and for every $\nu \geq \nu_0$ there exists $\ell_{\nu} \geq \ell_0$ with ${\mathcal H}_{\ell_{\nu}} \subset {\mathcal G}_{\nu}$. Define the \emph{displacement} of these indexing functions $\ell \mapsto \nu_{\ell}$ and $\nu \mapsto \ell_{\nu}$ to be \begin{equation} {\rm Disp}(\ell_{\nu}, \nu_{\ell}) = \max \left\{ \sup \left\{ |\ell_{\nu} - \nu| \ \mid \nu \geq \nu_0 \right\}~ , ~ \sup \left\{ |\nu_{\ell} - \ell| \ \mid \ell \geq \ell_0 \right\} \right\} \end{equation} If ${\rm Disp}(\ell_{\nu}, \nu_{\ell}) < \infty$, then we say that ${\mathcal P}$ and ${\mathcal Q}$ are \emph{bounded tower equivalent}. \begin{thm}\label{thm-Lipequivalent} Let ${\mathcal P}$ and ${\mathcal Q}$ be presentations with common base manifold $M_0$, and suppose there exists a tower equivalence between them, defined by maps $\ell \mapsto \nu_{\ell}$ and $\nu \mapsto \ell_{\nu}$. Let the fiber metrics be defined by the formula \eqref{eq-canonicalmetric} with $a_{\ell} = 3^{-\ell}$. Then the action of ${\mathcal H}_0$ on the fiber ${\mathfrak{X}}$ of $\Pi_0^{{\mathcal P}}$ is Lipschitz equivalent to the action of ${\mathcal H}_0$ on the fiber ${\mathfrak{Y}}$ of $\Pi_0^{{\mathcal Q}}$ if and only if ${\mathcal P}$ and ${\mathcal Q}$ are bounded tower equivalent. \end{thm} The proof that ${\rm Disp}(\ell_{\nu}, \nu_{\ell}) < \infty$ implies Lipschitz equivalence for the metrics defined by \eqref{eq-canonicalmetric} with $a_{\ell} = 3^{-\ell}$ is an exercise in the definitions, using the expression \eqref{eq-Galoisfiber} for the metric on the fibers. 
The converse direction, that Lipschitz equivalence implies bounded tower equivalence, follows from the works of Miyata and Watanabe \cite{MiyataWatanabe2002,MiyataWatanabe2003a}. We give a simple example of Theorem~\ref{thm-Lipequivalent}, in the case of Vietoris solenoids. With the notation as above, suppose the covering degrees $m_{\ell}$ for the presentation ${\mathcal P}$ with base $M_0 = {\mathbb S}^1$ are given by $m_{\ell} = 2$ for $\ell$ odd, and $m_{\ell} = 3$ for $\ell$ even. Let the covering degrees for the presentation ${\mathcal Q}$ be given by the sequence $\{n_1, n_2, n_3, \ldots\} = \{2,3,2,2,3,2,2,2,2,3, \ldots\}$. In general, the $\ell$-th cover of degree $3$ is followed by $2^{\ell}$ covers of degree $2$. Then these two sequences are clearly tower equivalent, but every tower equivalence between them has infinite displacement. It follows that the matchbox manifolds ${\mathfrak{M}} = {\mathcal S}_{{\mathcal P}}$ and ${\mathfrak{N}} = {\mathcal S}_{{\mathcal Q}}$ are homeomorphic, but are not Lipschitz equivalent.
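The growth of the displacement in this example can be made explicit. The following Python sketch (helper names are our own) computes, for each $\ell$, the minimal $\nu_{\ell}$ with ${\mathcal G}_{\nu_{\ell}} \subset {\mathcal H}_{\ell}$, i.e. the least $\nu$ for which $m_1 \cdots m_{\ell}$ divides $n_1 \cdots n_{\nu}$. The gap $\nu_{\ell} - \ell$ is forced to grow, since ${\mathcal H}_{\ell}$ requires $\lfloor \ell/2 \rfloor$ factors of $3$, while the $k$-th factor of $3$ occurs exponentially late in the sequence $\{n_{\ell}\}$:

```python
from itertools import accumulate
from operator import mul

def q_degrees(num_threes):
    """Covering degrees 2,3,2,2,3,2,2,2,2,3,...: the k-th cover of
    degree 3 is followed by 2**k covers of degree 2."""
    seq = [2]
    for k in range(1, num_threes + 1):
        seq += [3] + [2] * 2**k
    return seq

def minimal_nu(m_prods, n_prods, ell):
    """Least nu with G_nu contained in H_ell, i.e. with m_1...m_ell
    dividing n_1...n_nu (None if the truncation is too short)."""
    target = m_prods[ell - 1]
    for nu, prod in enumerate(n_prods, start=1):
        if prod % target == 0:
            return nu
    return None

m = [2 if l % 2 == 1 else 3 for l in range(1, 11)]   # 2,3,2,3,...
n = q_degrees(5)
m_prods = list(accumulate(m, mul))
n_prods = list(accumulate(n, mul))
gaps = [(ell, minimal_nu(m_prods, n_prods, ell)) for ell in (2, 4, 6, 8, 10)]
print(gaps)   # [(2, 2), (4, 5), (6, 10), (8, 19), (10, 36)]
```

The differences $\nu_{\ell} - \ell = 0, 1, 4, 11, 26, \ldots$ are unbounded, and since any valid choice of $\nu_{\ell}$ is at least the minimal one, no choice of indexing functions has finite displacement.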
\part{} \parttoc \thispagestyle{empty} \pagebreak{} \clearpage \addcontentsline{toc}{section}{Acknowledgments} \section*{Acknowledgments} Adding a grain of sand to one of the very many and ever-growing summits of the mountain range known as `human knowledge' is truly a great privilege. The climb, however, is usually no easy task, and the one that produced this thesis was no exception. Here, I would like to pause and thank those who supported me en route. First, I am grateful to my advisor Ady Stern, who suggested an unorthodox but tailor-made research direction, and collaborated with me on the first and most challenging part of the work. Ady is a formidable physicist as well as a generous advisor, and provided a rare combination of support and freedom in my research. It was also wonderful to work under someone who views laughter as a way of life. Subsequent work was performed in mostly long-distance but close collaborations with Sergej Moroz, Carlos Hoyos, Félix Rose, Zohar Ringel and Adam Smith. In particular, Sergej, Carlos and Zohar served as additional mentors, each with his own unique style of doing research. I also benefited greatly from extensive discussions with Paul Wiegmann, Andrey Gromov, Ryan Thorngren, Weihan Hsiao, Semyon Klevtsov, Barry Bradlyn and Thomas Kvorning. The members of my PhD advisory committee, Micha Berkooz and David Mross, as well as my senior group members, Yuval Baum, Ion Cosma Fulga, Jinhong Park, Raquel Queiroz and Tobias Holder, provided much needed physical and meta-physical advice along the way. Our administrative staff, Hava Shapira, Merav Laniado, Einav Yaish, Inna Dombrovsky, Yuval Toledo and Yuri Magidov, sustained an incredibly efficient and warm work environment. 
I also thank my fellow graduate students Eyal Leviatan, Ori Katz, Dan Klein, Avraham Moriel, Shaked Rozen, Asaf Miron, Yotam Shpira, Adar Sharon, Dan Dviri and, of course, Yuval Rosenberg, who dragged me to the Weizmann institute when we were kids, and got me hooked on physics. Zooming out, I am grateful to my parents Sharona and Gabi, for their continued support in whatever I choose to do, and to my wife and best friend Adi, for making my life happy and balanced. Since we became parents, my work would not have been possible without Adi's backing, in particular since the spreading of Coronavirus, which eliminated some of our support systems, as well as the distinction between work and home. Finally, I thank our boys Adam and Shlomi for their smiles, laughter, and curiosity - a reminder of why I was drawn to science in the first place. \pagebreak{} \clearpage \addcontentsline{toc}{section}{Publications} \section*{Publications \label{sec:List-of-publications}} This thesis is based on the following publications: \begin{itemize} \item Reference \citep{PhysRevB.98.064503}: Omri Golan and Ady Stern. Probing topological superconductors with emergent gravity. \href{https://journals.aps.org/prb/abstract/10.1103/PhysRevB.98.064503}{Phys. Rev. B, 98:064503}, 2018. \item Reference \citep{PhysRevB.100.104512}: Omri Golan, Carlos Hoyos, and Sergej Moroz. Boundary central charge from bulk odd viscosity: Chiral superfluids. \href{https://journals.aps.org/prb/abstract/10.1103/PhysRevB.100.104512}{Phys. Rev. B, 100:104512}, 2019. \item Reference \citep{PhysRevResearch.2.043032}: Omri Golan, Adam Smith, and Zohar Ringel. Intrinsic sign problem in fermionic and bosonic chiral topological matter. \href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.043032}{Phys. Rev. Research, 2:043032}, 2020. \end{itemize} Complementary results are obtained in: \begin{itemize} \item Reference \citep{10.21468/SciPostPhys.9.1.006}: Félix Rose, Omri Golan, and Sergej Moroz. 
Hall viscosity and conductivity of two-dimensional chiral superconductors. \href{https://scipost.org/10.21468/SciPostPhys.9.1.006}{SciPost Phys., 9:6}, 2020. \item Reference \citep{PhysRevResearch.2.033515}: Adam Smith, Omri Golan, and Zohar Ringel. Intrinsic sign problems in topological quantum field theories. \href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033515}{Phys. Rev. Research, 2:033515}, 2020. \end{itemize} \thispagestyle{empty} \pagebreak{} \pagenumbering{arabic} \section{Introduction and summary\label{sec:Introduction}} \subsection{Overview} The study of \textit{topological phases of matter} began in 1980, when the Hall conductivity in a two-dimensional electron gas was measured to be an integer multiple of $e^{2}/h$, to within a relative error of $10^{-7}$ \citep{RevModPhys.58.519}, subsequently reduced below $10^{-10}$ \cite{vonKlitzing2017}. Following this discovery, it was theoretically understood that in many-body quantum systems, certain physical observables must be \textit{precisely} quantized, under the right circumstances \citep{avron1983homotopy,thouless1982quantized}. Around the same time, quantum field theorists extensively studied the phenomena of \textit{anomalies} \citep{alvarez1984gravitational,alvarez1985anomalies,bertlmann2000anomalies}, where classical symmetries and conservation laws are quantum mechanically violated, and discovered the seemingly exotic \textit{anomaly inflow mechanism} \citep{callan1985anomalies,naculich1988axionic}, which physically interprets anomalies in terms of \textit{topological effective actions} in higher space-time dimensions. It was only later understood that topological effective actions and anomalies actually capture the essential physics of topological phases of matter, and even classify them \citep{read2000paired,wen2013classifying,ryu2012electromagnetic,RevModPhys.88.035001,freed2016reflection,PhysRevB.98.035151}. 
In particular, 2+1D gapped chiral topological phases are characterized by a gravitational Chern-Simons (gCS) action \citep{Chern-Simons,jackiw2003chern,kraus2006holographic,witten2007three,stone2012gravitational} and corresponding 1+1D gravitational anomalies \citep{alvarez1984gravitational,bertlmann2000anomalies,bastianelli2006path}, having the chiral central charge $c$ as a precisely quantized coefficient, or topological invariant. The chiral central charge is of particular importance in chiral superfluids and superconductors \citep{read2000paired,volovik2009universe}, where $U\left(1\right)$ particle-number symmetry is broken spontaneously or explicitly, and $c$ is, in some cases, the only topological invariant characterizing the system at low energy. However, as opposed to topological invariants related to gauge fields for internal symmetries in place of gravity, the concrete physical implications of $c$ (and even its very definition) in the context of condensed matter physics are quite subtle, and have been the subject of ongoing research and controversy \citep{volovik1990gravitational,read2000paired,haldane2009hall,haldane2011geometrical,wang2011topological,qin2011energy,ryu2012electromagnetic,you2014theory,abanov2014electromagnetic,gromov2014density,shitade2014bulk,shitade2014heat,PhysRevB.90.045123,gromov2015framing,gromov2015thermal,bradlyn2015low,bradlyn2015topological,klevtsov2015geometric,gromov2016boundary,gromov2017bimetric,gromov2017investigating,klevtsov2017laughlin,klevtsov2017lowest,klevtsov2017quantum,nakai2017laughlin,wiegmann2018inner,Cappelli_2018,schine2018measuring,kapustin2019thermal,hu2020microscopic}. The first goal of this thesis is to physically interpret the chiral central charge in the context of chiral superfluids and superconductors, where it is of particular importance, but has nevertheless remained poorly understood. This goal is pursued in Sec.\ref{sec:Main-section-1:}-\ref{sec:Main-section-2:}. 
A seemingly unrelated aspect of chiral topological phases is the complexity of simulating them on classical (as opposed to quantum) computers. It is generally believed that chiral topological matter is 'hard' to simulate efficiently with classical resources. Concretely, it is known that chiral topological phases do not admit local commuting projector Hamiltonians \citep{PhysRevB.89.195130,potter2015protection,PhysRevB.98.165104,PhysRevB.97.245144,kapustin2019thermal}, nor do they admit local Hamiltonians with a PEPS state as an exact ground state \citep{PhysRevLett.111.236805,PhysRevB.90.115133,PhysRevB.92.205307,PhysRevB.98.184409}. We will be interested in quantum Monte Carlo (QMC) simulations, arguably the most important tools in computational many-body quantum physics \citep{RevModPhys.67.279,Assaad,PhysRevX.3.031010,PhysRevD.88.021701,kaul2013bridging,GazitE6987,berg2019monte,li2019sign}, and in the infamous \textit{sign problem}, which is the generic obstruction to an efficient QMC simulation \citep{troyer2005computational,marvian2018computational,klassen2019hardness}. The accumulated experience of QMC practitioners suggests that the sign problem has never been solved in chiral topological matter. Since these phases are abstractly defined by their non-vanishing chiral central charge, one may suspect that the chiral central charge and related gravitational phenomena pose an obstruction to sign-problem-free QMC. 
Such an obstruction is termed \textit{intrinsic sign problem} \citep{hastings2016quantum,ringel2017quantized}, and is of interest beyond the context of chiral topological matter, as it is widely accepted that long-standing open problems in many-body quantum physics, such as the nature of high-temperature superconductivity \citep{Santos_2003,PhysRevB.80.075116,PhysRevX.5.041041,kantian2019understanding}, dense nuclear matter \citep{Hands_2000,PhysRevD.66.074507,10.1093/ptep/ptx018}, and the fractional quantum Hall state at filling $5/2$ \citep{banerjee2018observation,PhysRevB.98.045112,PhysRevLett.121.026801,PhysRevB.97.121406,PhysRevB.99.085309,hu2020microscopic}, remain open because no solution to the sign problem in a relevant model has thus far been found. Since the aforementioned open problems are all fermionic, we are particularly motivated to study the possibility of intrinsic sign problems in fermionic matter. The second goal of this thesis, pursued in Sec.\ref{sec:Main-section-3:}, is to establish the existence of an intrinsic sign problem in chiral topological phases of matter, based on their non-vanishing chiral central charge, and with an emphasis on fermionic systems. The following Sec.\ref{subsec:Chiral-topological-matter}-\ref{subsec:Complexity-of-simulating} introduce the central concepts described above in more detail, pose the main questions we address in this thesis, and summarize the answers we find. \subsection{Chiral topological matter \label{subsec:Chiral-topological-matter}} A gapped local many-body quantum Hamiltonian is said to be in a topological phase of matter if it cannot be deformed to a trivial reference Hamiltonian, without closing the energy gap or violating locality. If a symmetry is enforced, only symmetric deformations are considered, and it is additionally required that the symmetry is not spontaneously broken \citep{wang2017symmetric,zeng2019quantum}. 
For Hamiltonians defined on a lattice, a natural trivial Hamiltonian is given by the atomic limit of decoupled lattice sites, where the symmetry acts independently on each site. In this thesis we consider both lattice and continuum models. Topological phases with a unique ground state on the 2-dimensional torus exist only with a prescribed symmetry group\footnote{A subtle point is that the minimal symmetry group for fermionic systems is fermion parity - the $\mathbb{Z}_2$ group generated by $(-1)^N$, where $N$ is the fermion number. This should be contrasted with bosonic systems, which may have no symmetries.} and are termed symmetry protected topological phases (SPTs) \citep{chen2011symmetry,Kapustin_2015,Kapustin_2017}. When such phases are placed on the cylinder, they support anomalous boundary degrees of freedom which cannot be realized on isolated 1-dimensional spatial manifolds, as well as corresponding quantized bulk response coefficients. Notable examples are the integer quantum Hall states, topological insulators, and topological superconductors \citep{qi2011topological}. Topological superconductivity and superfluidity will be discussed in detail in Sec.\ref{subsec:Chiral-superfluids-and}. Topological phases with a degenerate ground state subspace on the torus are termed topologically ordered, or symmetry enriched if a symmetry is enforced \citep{doi:10.1142/S0217979290000139,PhysRevB.82.155138}. Beyond the phenomena exhibited by SPTs, these support localized quasiparticle excitations with anyonic statistics and fractional charge under the symmetry group. Notable examples are fractional quantum Hall states \citep{nayak2008non,PhysRevLett.110.067208}, quantum spin liquids \citep{Savary_2016}, and fractional topological insulators \citep{PhysRevLett.103.196803,doi:10.1146/annurev-conmatphys-031115-011559}. 
In this thesis, we consider \textit{chiral} topological phases, where the boundary degrees of freedom that appear on the cylinder propagate unidirectionally. At energies small compared with the bulk gap, the boundary can be described by a chiral conformal field theory (CFT) \citep{ginsparg1988applied,di1996conformal}, while the bulk reduces to a chiral topological field theory (TFT) \citep{kitaev2006anyons,freed2016reflection}, see Fig.\ref{fig:Chiral-topological-phases}(a). Such phases may be bosonic or fermionic, and may be protected or enriched by an on-site symmetry, but we will not make use of this symmetry in our analysis - only the chirality of the phase will be used. A notable example of chiral topological phases is given by Chern insulators \citep{PhysRevLett.61.2015,qi2008topological,ryu2010topological}: SPTs protected by the $U\left(1\right)$ fermion number symmetry, which admit free-fermion Hamiltonians. The single particle spectrum of a Chern insulator on the cylinder is depicted in Fig.\ref{fig:Chiral-topological-phases}(b). Another notable example is given by the topologically ordered Kitaev spin liquids \citep{kitaev2006anyons,takagi2019concept}, which can be described by Majorana fermions with a single particle spectrum similar to Fig.\ref{fig:Chiral-topological-phases}(b), coupled to a $\mathbb{Z}_{2}$ (fermion-parity) gauge field. Note that the velocity $v$ of the boundary CFT is a non-universal parameter which generically changes as the microscopic Hamiltonian is deformed. More generally, different chiral branches may have different velocities. \begin{figure}[!th] \begin{centering} \includegraphics[width=0.6\columnwidth]{TopoPhase.pdf} \par\end{centering} \caption{Chiral topological phases of matter on the cylinder. (a) The low energy description of a chiral topological phase is comprised of two counter-propagating chiral conformal field theories (CFTs) on the boundary, and a chiral topological field theory (TFT) in the bulk. 
(b) Examples: schematic single-particle spectrum of a Chern insulator and of the Majorana fermions describing a Kitaev spin liquid. Assuming discrete translational symmetry with spacing $a$ in the $x$ direction, one can plot the single-particle eigen-energies $\varepsilon$ on the cylinder as a function of (quasi) momentum $k_{x}$. This reveals an integer number of chiral dispersion branches whose eigen-states are supported on one of the two boundary components. In the Chern insulator (Kitaev spin liquid) these correspond to the Weyl (Majorana-Weyl) fermion CFT, with $c=\pm1$ ($c=\pm1/2$) per branch. The velocity, $v=\left|\partial\varepsilon/\partial k_{x}\right|$ at the chemical potential $\mu$, is a non-universal parameter. \label{fig:Chiral-topological-phases} } \end{figure} The chirality of the boundary CFT and bulk TFT is manifested by their non-vanishing chiral central charge $c$, which is rational and \textit{universal} - it is a topological invariant with respect to continuous deformations of the Hamiltonian which preserve locality and the bulk energy gap, and therefore constant throughout a topological phase \citep{Witten_1989,kitaev2006anyons,gromov2015framing,bradlyn2015topological}. On the boundary $c$ is defined with respect to an orientation of the cylinder, so the two boundary components have opposite chiral central charges. Since, as described below, $c$ is much better understood from the boundary perspective, we sometimes refer to it as the \textit{boundary} chiral central charge. A main theme of this thesis is the study of $c$ from the \textit{bulk} perspective, and the relation between the two perspectives implied by the anomaly inflow mechanism. 
\subsection{Geometric physics in chiral topological matter\label{subsec:Geometric-physics-in}} The non-vanishing of $c$ implies a number of geometric, or 'gravitational', physical phenomena \citep{ginsparg1988applied,di1996conformal,abanov2014electromagnetic,klevtsov2015geometric,bradlyn2015topological,gromov2016boundary}. In particular, the boundary supports a non-vanishing energy current $J_{E}$, which receives a correction \begin{align} J_{E}\left(T\right) & =J_{E}\left(0\right)+2\pi T^{2}\frac{c}{24},\label{eq:1-2} \end{align} at a temperature $T>0$, and in the thermodynamic limit $L=\infty$, where $L$ is the circumference of the cylinder. Note that we set $k_{\text{B}}=1$ and $\hbar=1$ throughout. Within CFT, this correction is universal since it is independent of $v$. Taking the two counter-propagating boundary components of the cylinder into account, and placing these at slightly different temperatures, leads to a thermal Hall conductance $K_{H}=c\pi T/6$ \citep{kane1997quantized,read2000paired,cappelli2002thermal}, a prediction that recently led to the first measurements of $c$ \citep{jezouin2013quantum,banerjee2017observed,banerjee2018observation,Kasahara:2018aa}. In analogy with Eq.\eqref{eq:1-2}, the boundary of a chiral topological phase also supports a non-vanishing ground state (or $T=0$) momentum density $p\left(L\right)$, which receives a universal correction on a cylinder with finite circumference $L<\infty$. The details of this finite-size correction will be described in Sec.\ref{sec:Main-section-3:}, where it is used to relate the chiral central charge (as well as the topological spins of anyon excitations) to the complexity of simulating chiral topological matter on classical computers. Abstractly, both $T>0$ and $L<\infty$ corrections described above follow directly from the (chiral) Virasoro anomaly, or Virasoro central extension, which defines $c$ in 2D CFT. 
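In natural units ($\hbar = k_{\text{B}} = 1$), the universal $T^2$ correction in Eq.\eqref{eq:1-2} and the resulting thermal Hall conductance are elementary to evaluate. The following is our own minimal numeric illustration, not taken from the references:

```python
import math

def energy_current_correction(c, T):
    """Universal correction J_E(T) - J_E(0) = 2*pi*T^2*c/24 to the
    boundary energy current, in units hbar = k_B = 1."""
    return 2 * math.pi * T**2 * c / 24

def thermal_hall_conductance(c, T):
    """K_H = c*pi*T/6: to leading order, the net heat current between
    the two counter-propagating boundaries at temperatures T and
    T + dT is K_H * dT."""
    return c * math.pi * T / 6

# K_H / T is proportional to c alone, e.g. c = 1 for a single Weyl
# branch (Chern insulator) and c = 1/2 for a Majorana-Weyl branch
for c in (1.0, 0.5):
    print(c, thermal_hall_conductance(c, T=1.0))
```

Note that $K_H$ is precisely the temperature derivative of the correction in Eq.\eqref{eq:1-2}, which is how the difference of the two edge currents produces the quantized thermal Hall response.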
Equivalently, these corrections can be understood in terms of the 'global' gravitational anomaly - the complex phase accumulated by a CFT partition function on the torus under a Dehn twist \citep{ginsparg1988applied,di1996conformal}. This anomaly is termed 'global' since the Dehn twist is a large coordinate transformation, or more accurately, an element of the diffeomorphism group of the torus, which lies outside of the connected component of the identity. The Dehn twist is therefore the geometric analog of the large $U\left(1\right)$ gauge transformation used in the celebrated Laughlin argument, and an attempt has been made to follow this analogy and produce a 'thermal Laughlin argument' \citep{nakai2017laughlin}. The chiral central charge also implies a 'local', or 'perturbative' gravitational anomaly, which, at least in the context of relativistic QFT in curved space-time, physically corresponds to the non-conservation of energy-momentum in the presence of curvature gradients \citep{alvarez1984gravitational,bertlmann2000anomalies,bastianelli2006path}. Through the anomaly inflow mechanism, or in more physical terms, through bulk+boundary energy-momentum conservation, this boundary anomaly implies a gravitational Chern-Simons (gCS) term in the effective action describing the 2+1D bulk of a chiral topological phase \citep{Chern-Simons,jackiw2003chern,kraus2006holographic,witten2007three,stone2012gravitational}\footnote{Whether the gCS term matches the boundary\textit{ global} gravitational anomaly as well is, to the best of my knowledge, an open problem.}. In turn, the gCS term implies a quantized energy-momentum-stress response to curvature gradients, in the bulk \footnote{In fact, the gCS contribution to the energy-momentum-stress tensor is proportional to the mathematically important Cotton tensor of the metric \citep{perez2010conserved}. }. 
Though the gCS term is relatively well understood in the context of relativistic QFT, its concrete physical content in the non-relativistic setting of condensed matter physics is quite subtle, due to the following reasons: \begin{enumerate} \item Physically, the actual gravitational field of the earth is usually negligible in condensed matter experiments. It is therefore clear that the adjective 'gravitational' used above cannot be taken literally, and requires further interpretation. Namely, one must find a physical probe relevant in condensed matter experiments, which will somehow mimic the effects of a strong gravitational field. This scenario is often referred to as 'analog gravity' or 'emergent gravity' \citep{volovik2009universe}. Mathematically, this corresponds to a physically accessible geometric structure on the space-time occupied by the system of interest, in the spirit of general relativity\footnote{We use the words geometry and gravity interchangeably from here on.}. The most straightforward example is given by strain - a physical deformation of the sample on which the system resides \citep{avron1995viscosity}. An additional set of examples is given by spin-2 inhomogeneities \citep{gromov2017geometric} and collective excitations \citep{volovik1990gravitational,volovik2009universe,haldane2009hall,haldane2011geometrical,you2014theory,gromov2017bimetric}. Finally, Luttinger's trick relates temperature gradients to an applied gravitational field \citep{luttinger1964theory,cooper1997thermoelectric,shitade2014heat,gromov2015thermal,bradlyn2015low,nakai2016finite,nakai2017laughlin}. \item Fundamentally, the coupling of a system to gravity generally depends on its global space-time symmetries in the absence of gravity. For example, relativistic systems will couple differently from Galilean invariant systems. Even when the spatial symmetries are fixed, the gravitational background may vary, e.g.\ Riemannian vs. 
Riemann-Cartan geometry, which are both relativistic. Moreover, for systems defined on a lattice, there is no definite, or universal, prescription for a coupling to gravity at all, as opposed to lattice gauge fields which are very well understood. The coupling of a system to gravity therefore relies on more refined information than that used to classify topological phases of matter. In particular, known results in relativistic QFT do not directly apply to the non-relativistic condensed matter systems we are interested in. \item Technically, when describing gravity in terms of a metric, the gCS term is third order in derivatives, so obtaining effective actions that contain it \textit{consistently}, i.e.\ account for \textit{all} possible terms up to the same order, is nontrivial. \end{enumerate} Naturally, the pioneering approaches to the above difficulties were based on an adaptation of known results in relativistic QFT \citep{volovik1990gravitational,read2000paired,ryu2012electromagnetic} (see also \cite{wang2011topological,palumbo2016holographic}), an approach that we carefully and critically follow in Sec.\ref{sec:Main-section-1:}. A much more advanced treatment developed over the past decade, primarily in the context of quantum Hall states \citep{haldane2009hall,haldane2011geometrical,you2014theory,abanov2014electromagnetic,gromov2014density,PhysRevB.90.045123,gromov2015framing,gromov2015thermal,bradlyn2015low,bradlyn2015topological,klevtsov2015geometric,gromov2016boundary,gromov2017bimetric,gromov2017investigating,klevtsov2017laughlin,klevtsov2017lowest,klevtsov2017quantum,wiegmann2018inner,Cappelli_2018,schine2018measuring,kapustin2019thermal,hu2020microscopic}. 
In particular, a \textit{non-relativistic} gCS term arises in quantum Hall states, and produces corrections to the odd viscosity (introduced below) at finite wave-vector, and in curved background \citep{abanov2014electromagnetic,bradlyn2015topological,klevtsov2015geometric,gromov2015framing,klevtsov2017quantum}. We follow this observation in Sec.\ref{sec:Main-section-2:}. We note a couple of additional central results from the literature: \begin{enumerate} \item In quantum Hall states, the chiral central charge contributes to the braiding statistics and angular momentum of conical defects \citep{can2016emergent,gromov2016geometric}; the latter was recently observed in an optical realization of integer quantum Hall states \citep{schine2016synthetic,schine2018measuring}. \item The gCS term is not directly related to $K_{H}=c\pi T/6$ through Luttinger's trick, simply because it is too high in derivatives of the background metric \citep{stone2012gravitational}. Moreover, careful analysis in quantum Hall states shows that $K_{H}$ receives no bulk contribution at all \citep{gromov2015thermal,bradlyn2015low}, and is therefore purely a boundary phenomenon, as explained below Eq.\eqref{eq:1-2}. Nevertheless, derivatives of $K_{H}$ can be computed from the bulk Hamiltonian \`a la Luttinger \citep{cooper1997thermoelectric,qin2011energy,bradlyn2015low}, resulting in a relative topological invariant for gapped lattice systems \citep{kapustin2019thermal}. The latter gives a rigorous 2+1D lattice definition for the chiral central charge. 
\end{enumerate} \subsection{Chiral superfluids and superconductors \label{subsec:Chiral-superfluids-and}} An important class of 2+1D chiral topological phases appears in chiral superfluids and superconductors (CSFs and CSCs, or CSF/Cs), where the ground state is a condensate of Cooper pairs of fermions, which are spinning around their centre of mass with a non-vanishing angular momentum $\ell\in\mathbb{Z}$ \citep{read2000paired,volovik2009universe}, see Fig.\ref{fig:Chiral-Superfluid}. As reviewed below, CSF/Cs appear in a wide range of physical systems, all of which have been the subject of extensive and continued research effort, going back to the classic body of work on superfluid $^{3}\text{He}$ \citep{vollhardt2013superfluid}. The interest in CSF/Cs comes from two fronts, a fermionic/topological front, and a bosonic/symmetry-breaking front, both resulting directly from the $\ell$-wave condensate. A central theme of this thesis is the intricate interplay between these two facets of CSF/Cs. \begin{figure}[!th] \begin{centering} \includegraphics[width=0.35\columnwidth]{ChiralSuperfluid.pdf} \par\end{centering} \caption{Chiral superfluids and superconductors (CSF/Cs) are comprised of fermions $\psi$, which form Cooper pairs with non-vanishing relative angular momentum $\ell\in\mathbb{Z}$ (red arrows), in units of $\hbar$. As a result, CSF/Cs support boundary degrees of freedom (dashed orange) with a chiral central charge $c\in\left(\ell/2\right)\mathbb{Z}$. \label{fig:Chiral-Superfluid}} \end{figure} On the fermionic/topological front, the $\ell$-wave condensate leads to an energy gap for single fermion excitations, which form a chiral SPT phase of matter: a topological superconductor \citep{Kallin_2016,Sato_2017}. 
Topological superconductors are of interest since their chiral central charge $c\in\left(\ell/2\right)\mathbb{Z}$ can be half-integer, indicating the presence of chiral Majorana spinors on boundaries, each contributing an additive $\pm1/2$ to $c$, where the sign depends on their chirality. In turn, this implies the presence of Majorana bound states, or zero modes, in the cores of vortices \citep{read2000paired}. The observation of Majorana fermions, which are their own anti-particles, and may not exist in nature as elementary particles, is of fundamental interest. Moreover, Majorana bound states are closely related to non-abelian Ising anyons \citep{moore1991nonabelions}, which have been proposed as building blocks for topological quantum computers \citep{kitaev2003fault,nayak2008non}. On the bosonic/symmetry-breaking front, the $\ell$-wave condensate implies an exotic symmetry breaking pattern, which leads to an unusual spectrum of bosonic excitations, and as a result, an unusual hydrodynamic description. In more detail, the condensation of $\ell$-wave Cooper pairs corresponds to a non-vanishing ground-state expectation value for the operator $\psi^{\dagger}\left(\partial_{x}\pm i\partial_{y}\right)^{\left|\ell\right|}\psi^{\dagger}$, where $\psi^{\dagger}$ is a fermion creation operator\footnote{Due to Fermi statistics, $\ell$ must be odd if $\psi$ is spin-less. An even $\ell$ requires spin-full fermions forming spin-less (singlet) Cooper pairs, $\psi_{\uparrow}^{\dagger}\left(\partial_{x}\pm i\partial_{y}\right)^{\left|\ell\right|}\psi_{\downarrow}^{\dagger}$. Spin-full fermions can also form spin-1 (triplet) Cooper pairs with odd $\ell$, as is the case in $^{3}\text{He-A}$. Since the geometric physics we are interested in is independent of the spin of the Cooper pair, we restrict attention to spin-less fermions for odd $\ell$, and write our expressions per spin component for even $\ell$.} and $\pm=\text{sgn}\left(\ell\right)$. 
An $\ell$-wave condensate implies the breaking of time reversal symmetry $T$ and parity (spatial reflection) $P$ down to $PT$, and of the symmetry groups generated by particle number $N$ and angular momentum $L$ down to a diagonal subgroup \begin{align} & U\left(1\right)_{N}\times SO\left(2\right)_{L}\rightarrow U\left(1\right)_{L-\left(\ell/2\right)N}.\label{eq:2-1-1} \end{align} In CSFs, this symmetry breaking occurs spontaneously, due to a symmetric two-body attractive interaction between fermions. This phenomenon is generic, at least from the perspective of perturbative Fermi-surface renormalization group \citep{shankar1994renormalization}. Thin films of $^{3}\text{He-A}$ are experimentally accessible $p$-wave CSFs \citep{Levitin841,PhysRevLett.109.215301,Ikegami59,Zhelev_2017}, and there are many proposals for the realization of various $\ell$-wave CSFs in cold atoms \citep{PhysRevLett.101.160401,PhysRevLett.103.020401,PhysRevA.86.013639,PhysRevA.87.053609,PhysRevLett.115.225301,BOUDJEMAA20171745,Hao_2017}. The spontaneous symmetry breaking \eqref{eq:2-1-1} implies a single Goldstone field, charged under the broken generator $N+\left(\ell/2\right)L$, as well as massive Higgs fields, which are $U\left(1\right)_{N}$-neutral, and carry angular momentum $0$ and $\pm2\ell$ \citep{brusov1981superfluidity,Volovik_2013,Sauls:2017aa,PhysRevB.98.064503,hsiao2018universal}. In particular, $p$-wave ($\ell=\pm1$) superfluids support Higgs fields of angular momentum $0$ and $\pm2$, which form a spatial metric, including a non-relativistic analog of the graviton \citep{volovik1990gravitational,read2000paired,volovik2009universe}. This observation will play a central role in Sec.\ref{sec:Main-section-1:}. The angular momentum $\ell/2$ carried by the Goldstone field leads to a $P,T$-odd hydrodynamic description, including an odd (or Hall) viscosity, which is introduced below, and studied in Sec.\ref{sec:Main-section-2:}. 
An \textit{intrinsic} CSC is obtained if the $U\left(1\right)_{N}$ symmetry is gauged, by coupling to a dynamical gauge field. This gauge field physically corresponds to the 3+1D electromagnetic interaction between electrons, which are themselves confined to a 2+1D lattice of ions. Experimental evidence for chiral superconductivity was recently reported in Ref.\citep{Jiao:2020aa}. One may also consider an emergent 2+1D gauge field, with a Chern-Simons and/or Maxwell dynamics. In particular, this leads to CSFs of 'composite fermions' \citep{read2000paired,PhysRevX.5.031027,Son_2018}, including field theoretic descriptions of the non-abelian candidates for the fractional quantum Hall state observed at filling $5/2$ \citep{banerjee2018observation}, a subject of ongoing debate \citep{PhysRevB.98.045112,PhysRevLett.121.026801,PhysRevB.97.121406,PhysRevB.98.167401,PhysRevB.99.085309}. The symmetry breaking pattern \eqref{eq:2-1-1} may also occur explicitly, due to the proximity of a conventional $s$-wave superconductor (SC) to a 2+1D spin-orbit coupled metal, in which case we speak of a \textit{proximity induced} CSC, an observation of which was reported in Refs.\citep{PhysRevLett.114.017001,M_nard_2017}. Note that in this case the Goldstone and Higgs fields can be viewed as non-dynamical. Despite the large body of work on boundary Majorana fermions in CSF/Cs, the bulk geometric physics corresponding to these through anomaly inflow, and presumably captured by a gCS action, remains poorly understood, due to the difficulties mentioned in Sec.\ref{subsec:Geometric-physics-in}. In fact, most existing statements, though made in truly pioneering and seminal work \citep{volovik1990gravitational,read2000paired,ryu2012electromagnetic}, are speculative, and are primarily based on an inaccurate adaptation of known results in relativistic QFT to $p$-wave CSF/Cs. 
An understanding of the bulk geometric physics is of particular importance since, in the simplest case of spin-less fermions with no additional internal symmetry, the only charge carried by the boundary Majorana fermions is energy-momentum, and the only boundary anomalies and bulk topological effective actions are therefore gravitational\footnote{For spin-full $p$-wave CSFs, one can exploit $SU\left(2\right)$ spin rotation symmetry, and does not have to resort to gravitational probes \citep{volovik1989fractional,read2000paired,stone2004edge}.}. In particular, the boundary Majorana fermions are always $U\left(1\right)_{N}$-neutral, and it follows that no $U\left(1\right)_{N}$ boundary anomaly or bulk topological effective action occurs\footnote{An exception to this rule occurs in Galilean invariant systems, where momentum and $U\left(1\right)_{N}$-current are identified, as we will see in Sec.\ref{sec:Main-section-2:}.}. Motivated by this state of affairs, the goal of Sec.\ref{sec:Main-section-1:}-\ref{sec:Main-section-2:} is to turn the insightful ideas of Refs.\citep{volovik1990gravitational,read2000paired,ryu2012electromagnetic} into concrete physical predictions. As a first approach to the problem, in Sec.\ref{sec:Main-section-1:} we follow Refs.\citep{volovik1990gravitational,read2000paired,ryu2012electromagnetic} and utilize the low energy relativistic description of $p$-wave CSF/Cs, which exists because the $p$-wave condensate $\psi^{\dagger}\left(\partial_{x}\pm i\partial_{y}\right)\psi^{\dagger}$ is first order in derivatives. The main questions we ask are: \begin{quote} \textit{What type of space-time geometry emerges in the low energy relativistic description of $p$-wave superfluids and superconductors? What are the physical implications of the emergent relativistic geometry for these non-relativistic systems? 
} \end{quote} Our answer to the first question is that the fermionic excitations in $p$-wave CSF/Cs correspond at low energy to a massive relativistic Majorana spinor, which is minimally coupled to an emergent Riemann-Cartan geometry. This geometry is described by the $p$-wave order parameter $\Delta^{i}\sim\delta^{ij}\psi^{\dagger}\partial_{j}\psi^{\dagger}$, made up of the Goldstone and Higgs fields, as well as a $U\left(1\right)_{N}$ gauge field. As opposed to the Riemannian geometry previously believed to emerge \citep{volovik1990gravitational,read2000paired,ryu2012electromagnetic}, Riemann-Cartan space-times are characterized by a non-vanishing torsion tensor, in addition to the curvature tensor \citep{ortin2004gravity}. In condensed matter physics (or elasticity theory), torsion is well known to describe the density of lattice dislocations \citep{hughes2011torsional,hughes2013torsional,parrikar2014torsion,geracie2014hall}\footnote{Similarly, curvature traditionally describes the density of lattice disclinations, as well as the curving of a two-dimensional material in three dimensional space. It is also known that temperature gradients correspond to time-like torsion via Luttinger's trick \citep{shitade2014heat,bradlyn2015low}.}, and our results provide a new mechanism by which torsion can emerge - due to the symmetry breaking pattern \eqref{eq:2-1-1} at $\ell=\pm1$. The above statements are relevant if one aims at studying relativistic fermions in nontrivial space-times using table-top experiments \citep{Kim2017}, or if one hopes that emergent relativistic geometry in condensed matter can answer fundamental questions about the seemingly relativistic geometry of our universe \citep{volovik2009universe}. Here, however, we are interested in answering the second question posed above. 
As expected, a gCS term appears in the low energy effective action of $p$-wave CSF/Cs, and we find that it produces a precisely quantized bulk energy-momentum-stress response to the $p$-wave Higgs fields. Accordingly, a (perturbative) gravitational anomaly that depends on the Higgs fields appears on the boundary, implying a $c$-dependent transfer of energy-momentum between bulk and boundary. The emergence of torsion leads to additional interesting terms in the bulk effective action. In particular, a non-topological 'gravitational \textit{pseudo} Chern-Simons' term produces an energy-momentum-stress response closely mimicking that of the gCS term, and we show how to disentangle the two responses in order to extract $c$ from bulk measurements. In lattice models, the low energy description consists of an even number of relativistic Majorana spinors - a fermion doubling phenomenon. Surprisingly, we find that these spinors experience slightly different emergent geometries. As a result, additional 'gravitational Chern-Simons difference' terms are possible, which are again not of topological origin, but nevertheless imply responses which must be carefully distinguished from those of the gCS term. All other terms in the bulk effective action are either higher in derivatives, or are lower in derivatives but naively diverge within the relativistic description. The latter 'UV-sensitive' terms cannot be reliably interpreted based on the relativistic description, and require a non-relativistic treatment. In particular, the relativistic, UV-sensitive, and somewhat controversial 'torsional Hall viscosity' \citep{hughes2011torsional,hughes2013torsional,parrikar2014torsion,geracie2014hall,hoyos2014hall,bradlyn2015low} is found in Sec.\ref{sec:Main-section-2:} to correspond to the non-relativistic and well understood odd (or Hall) viscosity of CSF/Cs \citep{Read:2009aa,read2011hall,hoyos2014effective,shitade2014bulk,moroz2015effective}, which is introduced below. 
Before continuing, we note that there has been considerable recent interest in torsional physics in condensed matter, in systems described at low energy by 3+1D Weyl (or Majorana-Weyl) spinors, namely Weyl semi-metals and $^{3}$He-A \citep{PhysRevResearch.2.033269,PhysRevLett.124.117002,PhysRevB.101.125201,PhysRevB.101.165201,laurila2020torsional,huang2020torsional}, and in Kitaev's honeycomb model \citep{PhysRevB.101.245116,PhysRevB.102.125152}. \subsection{Odd viscosity\label{subsec:Odd-viscosity}} The odd (or Hall) viscosity $\eta_{\text{o}}$ is a non-dissipative, time reversal odd, stress response to strain-rate \citep{PhysRevLett.75.697,Avron1998,PhysRevB.86.245309,hoyos2014hall,PhysRevE.89.043019}, which can appear even in superfluids (SFs) and incompressible (or gapped) fluids, where the more familiar and intuitive dissipative viscosity vanishes. The observable signatures of $\eta_{\text{o}}$ are actively studied in a variety of systems \citep{PhysRevE.90.063005,PhysRevB.94.125427,PhysRevLett.119.226602,PhysRevLett.118.226601,PhysRevB.96.174524,PhysRevFluids.2.094101,banerjee2017odd,bogatskiy2018edge,holder2019unified,PhysRevLett.122.128001}, and this activity recently led to its measurement in a colloidal chiral fluid \citep{soni2018free} and in graphene's electron fluid under a magnetic field \citep{Berdyugineaau0685}. In isotropic 2+1 dimensional fluids, the odd viscosity tensor at zero wave-vector ($\mathbf{q}=0$) reduces to a single component. In analogy with the celebrated quantization of the odd (or Hall) conductivity in the quantum Hall (QH) effect \citep{thouless1982quantized,avron1983homotopy,golterman1993chern,qi2008topological,Nobel-2016,mera2017topological}, this component obeys a quantization condition \begin{align} & \eta_{\text{o}}^{\left(1\right)}=-\left(\hbar/2\right)s\cdot n_{0},\;s\in\mathbb{Q},\label{eq:1-1} \end{align} in incompressible quantum fluids, such as integer and fractional QH states \citep{PhysRevLett.75.697,Read:2009aa,read2011hall}. 
Here $n_{0}$ is the ground state density, and $s$ is a rational-valued topological invariant\footnote{In fact, an $SO\left(2\right)_{L}$-symmetry-protected topological invariant.}, labeling the many-body ground state, which can be interpreted as the average angular momentum per particle (in units of $\hbar$, which is henceforth set to 1). Remarkably, Eq.\eqref{eq:1-1} also holds in CSFs, though they are compressible. Computing $\eta_{\text{o}}^{\left(1\right)}$ in an $\ell$-wave CSF, one finds Eq.\eqref{eq:1-1} with the intuitive angular momentum per fermion, $s=\ell/2$ \citep{Read:2009aa,read2011hall,hoyos2014effective,shitade2014bulk,moroz2015effective}. Thus, a measurement of $\eta_{\text{o}}^{\left(1\right)}$ at $\mathbf{q}=\mathbf{0}$ can be used to obtain the angular momentum $\ell$ of the Cooper pair, but carries no additional information. It is therefore clear that the symmetry breaking pattern \eqref{eq:2-1-1} which defines $\ell$, rather than ground-state topology, is the origin of the quantization $s=\ell/2$ in CSFs \footnote{Accordingly, the quantization of $s$ is broken in a mixture of CSFs with different $\ell$s, where $U\left(1\right)_{N}\times SO\left(2\right)_{L}$ is completely broken. In the mixture $s\equiv-2\eta_{\text{o}}^{\left(1\right)}/n=\sum_{i}n_{i}\left(\ell_{i}/2\right)/\sum_{i}n_{i}$ retains its meaning as an average angular momentum per particle, but is no longer quantized. This should be contrasted with multicomponent QH states \citep{bradlyn2015topological}, where all $n_{i}$s are proportional to the same applied magnetic field through the filling factors $\nu_{i}\in\mathbb{Q}$, and $s$ remains quantized.}. 
Nevertheless, the gapped fermions in a CSF do carry non-trivial ground-state topology labeled by the central charge $c\in\left(\ell/2\right)\mathbb{Z}$, and, based on results in quantum Hall states \citep{abanov2014electromagnetic,klevtsov2015geometric,bradlyn2015topological}, a $c$-dependent correction to $\eta_{\text{o}}^{\left(1\right)}$ of Eq.\eqref{eq:1-1} is therefore expected to appear at small non-zero wave-vector, \begin{align} & \delta\eta_{\text{o}}^{\left(1\right)}\left(\mathbf{q}\right)=-\frac{c}{24}\frac{1}{4\pi}q^{2}.\label{eq:2-1-2} \end{align} This raises the questions: \begin{quote} \textit{In chiral superfluids, can the boundary chiral central charge be extracted from a measurement of the bulk odd viscosity? Can it be extracted from any bulk measurement?} \end{quote} Providing a definite answer to these questions is the main goal of Sec.\ref{sec:Main-section-2:}, and requires a fully non-relativistic treatment of CSFs. The main reason for this is that the relativistic low energy description misses most of the physics of the Goldstone field. Analysis of Goldstone physics in CSFs was undertaken in Refs.\citep{volovik1988quantized,goryo1998abelian,goryo1999observation,furusaki2001spontaneous,stone2004edge,roy2008collective,lutchyn2008gauge,ariad2015effective}, most of which revolve around the non-vanishing, yet non-quantized, Hall (or odd) conductivity in CSFs. More recently, Refs.\citep{hoyos2014effective,moroz2015effective} considered CSFs in curved (or strained) space, following the pioneering work \citep{son2006general} on $s$-wave ($\ell=0$) SFs. These works demonstrated that the Goldstone field, owing to its charge $L+\left(\ell/2\right)N$, produces the $\mathbf{q}=\mathbf{0}$ odd viscosity \eqref{eq:1-1}, and it is therefore natural to expect that a $q^{2}$ correction similar to \eqref{eq:2-1-2} will also be produced. 
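The combination of Eq.\eqref{eq:1-1} with $s=\ell/2$ and the expected correction \eqref{eq:2-1-2} is simple enough to evaluate numerically. The following is a minimal sketch (the function name and the additive combination of the two contributions are ours, and \eqref{eq:2-1-2} is, at this stage of the text, only an expectation), in units $\hbar=1$:

```python
import math

def odd_viscosity(n0, ell, c, q):
    """Odd viscosity of an ell-wave CSF at small wave-vector q:
    the quantized q = 0 value of Eq. (1-1) with s = ell/2, plus the
    conjectured c-dependent q^2 correction of Eq. (2-1-2). Units: hbar = 1;
    n0 is the ground-state density."""
    s = ell / 2                                  # angular momentum per fermion
    eta0 = -0.5 * s * n0                         # Eq. (1-1)
    delta = -(c / 24) * q ** 2 / (4 * math.pi)   # Eq. (2-1-2)
    return eta0 + delta
```

For a $p$-wave CSF ($\ell=1$, $s=1/2$) the $\mathbf{q}=\mathbf{0}$ value reduces to $-n_{0}/4$, independent of $c$, which is why the $q^{2}$ correction is the quantity of interest.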
Nevertheless, Refs.\citep{hoyos2014effective,moroz2015effective} did not consider the derivative expansion to the high order at which $q^{2}$ corrections to $\eta_{\text{o}}$ would appear, nor did they detect any bulk signature of $c$ at lower orders. In Sec.\ref{sec:Main-section-2:} we obtain a low energy effective field theory that consistently captures both the chiral Goldstone mode and the gCS term, thus unifying and extending the seemingly unrelated analysis of Refs.\citep{son2006general,hoyos2014effective,moroz2015effective} and Sec.\ref{sec:Main-section-1:}. Using the theory we show that $c$ cannot be extracted from a measurement of the odd viscosity alone, as suggested by Eq.\eqref{eq:2-1-2}. Nevertheless, a related observable, termed 'improved odd viscosity', does allow for the bulk measurement of $c$. Additional results of the same spirit are found in Galilean invariant CSFs. \subsection{Quantum Monte Carlo sign problems in chiral topological matter \label{subsec:Complexity-of-simulating}} Utilizing a random sampling of phase-space according to the Boltzmann probability distribution, Monte Carlo simulations are arguably the most powerful tools for numerically evaluating thermal averages in classical many-body physics \citep{doi:10.1080/01621459.1949.10483310}. Though the phase-space of an $N$-body system scales exponentially with $N$, a Monte Carlo approximation with a fixed desired error is usually obtained in polynomial time \citep{troyer2005computational,barahona1982computational}. In \textit{Quantum} Monte Carlo (QMC), one attempts to perform Monte Carlo computations of thermal averages in quantum many-body systems, by following the heuristic idea that quantum systems in $d$ dimensions are equivalent to classical systems in $d+1$ dimensions \citep{Assaad,li2019sign}. 
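For concreteness, the classical Monte Carlo sampling described above can be sketched in a few lines. The toy model (a periodic 1D Ising chain with $J=1$) and all names below are ours, chosen only to illustrate Boltzmann-weighted Metropolis sampling:

```python
import math
import random

def metropolis_ising_1d(L=32, T=2.0, sweeps=2000, seed=0):
    """Sample a periodic 1D Ising chain (J = 1) with single-spin Metropolis
    updates: a proposed flip is accepted with probability min(1, e^{-dE/T}),
    so configurations are visited with their Boltzmann weights."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(L)]
    for _ in range(sweeps * L):
        i = rng.randrange(L)
        # Energy change of flipping spin i (nearest-neighbour coupling).
        dE = 2 * spins[i] * (spins[(i - 1) % L] + spins[(i + 1) % L])
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    return spins
```

Thermal averages are then estimated by averaging observables over the sampled configurations, with a statistical error that shrinks polynomially in the number of samples.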
The difficulty with any such quantum to classical mapping, henceforth referred to as a \textit{method}, is the infamous \textit{sign problem}, where the mapping can produce complex, rather than non-negative, Boltzmann weights $p$, which do not correspond to a probability distribution. Faced with a sign problem, one can try to change the method used and obtain $p\geq0$, thus \textit{curing the sign problem} \citep{marvian2018computational,klassen2019hardness}. Alternatively, one can perform QMC using the weights $\left|p\right|$, which is often done but generically leads to an exponential computational complexity in evaluating physical observables, limiting one's ability to simulate large systems at low temperatures \citep{troyer2005computational}. Conceptually, the sign problem can be understood as an obstruction to mapping quantum systems to classical systems, and accordingly, from a number of complexity theoretic perspectives, a generic curing algorithm in polynomial time is not believed to exist \citep{troyer2005computational,bravyi2006complexity,hastings2016quantum,marvian2018computational,hangleiter2019easing,klassen2019hardness}. In many-body physics, however, one is mostly interested in universal phenomena, i.e.\ phases of matter and the transitions between them, and therefore representative Hamiltonians which are free of the sign problem (henceforth 'sign-free') often suffice \citep{kaul2013bridging}. In fact, QMC simulations continue to produce unparalleled results, in all branches of many-body quantum physics, precisely because new sign-free models are constantly being discovered \citep{RevModPhys.67.279,PhysRevLett.83.3116,PhysRevX.3.031010,PhysRevD.88.021701,kaul2013bridging,GazitE6987,berg2019monte,li2019sign}. 
Designing sign-free models requires \textit{design principles} (or ``de-sign'' principles) \citep{kaul2013bridging,wang2015split} - easily verifiable properties that, if satisfied by a Hamiltonian and method, lead to a sign-free representation of the corresponding partition function. An important example is the condition $\bra{i}H\ket{j}\leq0$ where $i\neq j$ label a local basis, which implies non-negative weights $p$ in a wide range of methods \citep{kaul2013bridging,hangleiter2019easing}. Hamiltonians satisfying this condition in a given basis are known as \textit{stoquastic} \citep{bravyi2006complexity}, and have proven very useful in both application and theory of QMC in bosonic (or spin, or 'qudit') systems \citep{kaul2013bridging,troyer2005computational,bravyi2006complexity,hastings2016quantum,marvian2018computational,hangleiter2019easing,klassen2019hardness}. Fermionic Hamiltonians are not expected to be stoquastic in any local basis \citep{troyer2005computational,li2019sign}, and alternative methods, collectively known as determinantal quantum Monte Carlo (DQMC), are therefore used \citep{PhysRevD.24.2278,Assaad,Santos_2003,li2019sign,berg2019monte}. The search for design principles that apply to DQMC, and applications thereof, has naturally played the dominant role in tackling the sign problem in fermionic systems, and has seen considerable progress in recent years \citep{chandrasekharan2013fermion,wang2015split,li2016majorana,wei2016majorana,wei2017semigroup,berg2019monte,li2019sign}. Nevertheless, long standing open problems in quantum many-body physics continue to defy solution, and remain inaccessible for QMC. 
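The stoquasticity condition $\left\langle i\right|H\left|j\right\rangle \leq0$ for $i\neq j$ is trivial to verify for a Hamiltonian given as a real matrix in a fixed basis; a minimal sketch (function name ours):

```python
def is_stoquastic(H, tol=1e-12):
    """Check the design principle quoted above: a real Hamiltonian matrix H
    (list of rows) is stoquastic in the given basis if every off-diagonal
    entry <i|H|j>, i != j, is <= 0 up to a numerical tolerance."""
    n = len(H)
    return all(H[i][j] <= tol for i in range(n) for j in range(n) if i != j)
```

For example, a single spin in a transverse field, $H=\sigma^{z}-h\sigma^{x}$ with $h>0$, is stoquastic in the $\sigma^{z}$ basis, while flipping the sign of the off-diagonal entries spoils the condition.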
These open problems include the nature of high temperature superconductivity and the associated repulsive Hubbard model \citep{Santos_2003,PhysRevB.80.075116,PhysRevX.5.041041,kantian2019understanding}, dense nuclear matter and the associated lattice QCD at finite baryon density \citep{Hands_2000,PhysRevD.66.074507,10.1093/ptep/ptx018}, and the enigmatic fractional quantum Hall state at filling $5/2$ and its associated Coulomb Hamiltonian \citep{banerjee2018observation,PhysRevB.98.045112,PhysRevLett.121.026801,PhysRevB.97.121406,PhysRevB.99.085309,hu2020microscopic}, all of which are fermionic. One may wonder if there is a fundamental reason that no design principle applying to the above open problems has so far been found, despite intense research efforts. More generally, \begin{quote} \textit{Are there phases of matter that do not admit a sign-free representative? Are there physical properties that cannot be exhibited by sign-free models?} \end{quote} We refer to such phases of matter, where the sign problem simply cannot be cured, as having an \textit{intrinsic sign problem} \citep{hastings2016quantum}. From a practical perspective, intrinsic sign problems may prove useful in directing research efforts and computational resources. From a fundamental perspective, intrinsic sign problems identify certain phases of matter as inherently quantum - their physical properties cannot be reproduced by a partition function with positive Boltzmann weights. To the best of our knowledge, the first intrinsic sign problem was discovered by Hastings \citep{hastings2016quantum}, who proved that no stoquastic, commuting projector, Hamiltonians exist for the 'doubled semion' phase \citep{PhysRevB.71.045110}, which is bosonic and topologically ordered. In Ref.\citep{PhysRevResearch.2.033515}, we generalize this result considerably - excluding the possibility of stoquastic Hamiltonians in a broad class of bosonic non-chiral topological phases of matter. 
Additionally, Ref.\citep{ringel2017quantized} demonstrated, based on the algebraic structure of edge excitations, that no translationally invariant stoquastic Hamiltonians exist for bosonic chiral topological phases. In Sec.\ref{sec:Main-section-3:}, we will establish a new criterion for intrinsic sign problems in chiral topological matter, and take the first step in analyzing intrinsic sign problems in fermionic systems. First, based on the well established 'momentum polarization' method for characterizing chiral topological matter \citep{PhysRevB.88.195412,PhysRevLett.110.236801,PhysRevB.90.045123,PhysRevB.92.165127}, we obtain a variant of the result of Ref.\citep{ringel2017quantized} - excluding the possibility of stoquastic Hamiltonians in a broad class of bosonic chiral topological phases. We then develop a formalism with which we obtain analogous results for systems comprised of both bosons \textit{and} fermions - excluding the possibility of sign-free DQMC simulations. \begin{table*}[h] \renewcommand*{\arraystretch}{1.3} \resizebox{\textwidth}{!}{ \begin{tabular}{lllll} \hline \hline \textbf{Phase of matter} & \textbf{Parameterization} & $\boldsymbol{c}$ & $\boldsymbol{\left\{h_{a}\right\}}$ & \textbf{Intrinsic sign problem?} \tabularnewline \hline Laughlin (B) \citep{hu2020microscopic} & Filling $1/q,\;(q\in2\mathbb{N})$ & $1$ & $\left\{ a^{2}/2q\right\} _{a=0}^{q-1}$ & In $98.5\%$ of first $10^3$ \tabularnewline Laughlin (F) \citep{hu2020microscopic} & Filling $1/q,\;(q\in2\mathbb{N}-1)$ & $1$ & $\left\{ \left(a+1/2\right)^{2}/2q\right\} _{a=0}^{q-1}$ & In $96.7\%$ of first $10^3$ \tabularnewline Chern insulator (F) \citep{PhysRevResearch.2.043032} & Chern number $\nu\in\mathbb{Z}$ & $\nu$ & $\left\{ \nu/8\right\} $ & For $\nu\notin 12\mathbb{Z}$ \tabularnewline $\ell$-wave superconductor (F) \citep{PhysRevB.100.104512} & Pairing channel $\ell\in2\mathbb{Z}-1$ & $-\ell/2$ & $\left\{ -\ell/16\right\} $ & Yes \tabularnewline Kitaev spin liquid (B) 
\citep{kitaev2006anyons} & Chern number $\nu\in2\mathbb{Z}-1$ & $\nu/2$ & $\left\{ 0,1/2,\nu/16\right\} $ & Yes \tabularnewline $SU\left(2\right)_{k}$ Chern-Simons (B) \citep{bonderson2007non} & Level $k\in\mathbb{N}$ & $3k/\left(k+2\right)$ & $\left\{ a\left(a+2\right)/4\left(k+2\right)\right\} _{a=0}^{k}$ & In $91.6\%$ of first $10^3$ \tabularnewline $E_{8}$ $K$-matrix (B) \citep{PhysRevB.94.155113} & Stack of $n\in\mathbb{N}$ copies& $8n$ & $\left\{ 0\right\} $ & For $n\notin 3\mathbb{N}$ \tabularnewline Fibonacci anyon model (B) \citep{bonderson2007non} & & $14/5$ (mod $8$)& $\left\{ 0,2/5\right\} $ & Yes \tabularnewline Pfaffian (F) \citep{hsin2020effective} & & $3/2$ & $\left\{ 0,1/2,1/4,3/4,1/8,5/8 \right\}$ & Yes \tabularnewline PH-Pfaffian (F) \citep{hsin2020effective} & & $1/2$ & $\left\{ 0,0,1/2,1/2,1/4,3/4 \right\}$ & Yes \tabularnewline Anti-Pfaffian (F) \citep{hsin2020effective} & & $-1/2$ & $\left\{ 0,1/2,1/4,3/4,3/8,7/8 \right\}$ & Yes \tabularnewline \hline \hline \end{tabular} } \caption{Examples of intrinsic sign problems based on the criterion $e^{2\pi ic/24}\protect\notin\left\{ \theta_{a}\right\} $, in terms of the chiral central charge $c$ and the topological spins $\theta_{a}=e^{2\pi ih_{a}}$. The number of spins $h_a$ is equal to the dimension of the ground state subspace on the torus. We mark bosonic/fermionic phases by (B/F). The quantum Hall Laughlin phases correspond to $U(1)_q$ Chern-Simons theories. The $\ell$-wave superconductor is chiral, e.g.\ $p+ip$ for $\ell=1$, and comprised of a single flavour of spin-less fermions. The data shown refers only to the SPT phase formed by gapped fermionic excitations, see Sec.\ref{subsec:Chiral-superfluids-and}. Data for the spin-full case is identical to that of the Chern insulator, with $-\ell$ odd (even) in place of $\nu$, for triplet (singlet) pairing. 
The modulo 8 ambiguity in the central charge of the Fibonacci anyon model corresponds to the stacking of a given realization with copies of the $E_{8}$ $K$-matrix phase. Data for the three quantum Hall Pfaffian phases is given at the minimal filling 1/2. The physical filling 5/2 is obtained by stacking with a $\nu=2$ Chern insulator, and an intrinsic sign problem appears in this case as well. \label{tab:1} } \end{table*} All of the above mentioned topological phases are gapped, 2+1 dimensional, and described at low energy by a topological field theory \citep{doi:10.1142/S0129055X90000107,kitaev2006anyons,freed2016reflection}. The class of such phases in which we find an intrinsic sign problem is defined in terms of robust data characterizing them: the chiral central charge $c$, a rational number, as well as the set $\left\{ \theta_{a}\right\} $ of topological spins of anyons, a subset of roots of unity. Namely, we find that \begin{quote} \nopagebreak[0] \textit{An intrinsic sign problem exists if $e^{2\pi ic/24}$ is not the topological spin of some anyon, i.e $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $. } \end{quote} The above criterion applies to most chiral topological phases, see Table \ref{tab:1} for examples. In particular, we identify an intrinsic sign problem in $96.7\%$ of the first one-thousand fermionic Laughlin phases, and in all non-abelian Kitaev spin liquids. We also find intrinsic sign problems in $91.6\%$ of the first one-thousand $SU\left(2\right)_{k}$ Chern-Simons theories. Since, for $k\neq1,2,4$, these allow for universal quantum computation by manipulation of anyons \citep{Freedman:2002aa,nayak2008non}, our results support the strong belief that quantum computation cannot be simulated with classical resources, in polynomial time \citep{Arute:2019aa}. 
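Given the data $\left(c,\left\{ h_{a}\right\} \right)$ of Table \ref{tab:1}, the criterion can be evaluated with exact rational arithmetic; a minimal sketch (function name ours):

```python
from fractions import Fraction as F

def intrinsic_sign_problem(c, spins):
    """Criterion quoted above: an intrinsic sign problem exists iff
    exp(2*pi*i*c/24) is not the topological spin of any anyon, i.e. iff
    c/24 - h_a is non-integer for every h_a. Pass c and the h_a as
    Fractions, so the check is exact."""
    return all((c / 24 - h) % 1 != 0 for h in spins)

# A few rows of Table 1:
# Chern insulator: c = nu, spins {nu/8}  ->  sign problem iff nu not in 12Z.
# Fibonacci anyon model: c = 14/5, spins {0, 2/5}  ->  sign problem.
```

For instance, the Chern insulator entry of the table follows because $\nu/24-\nu/8=-\nu/12$ is an integer exactly when $\nu\in12\mathbb{Z}$.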
This conclusion is strengthened by examining the Fibonacci anyon model, which is known to be universal for quantum computation \citep{nayak2008non}, and is found to be intrinsically sign-problematic. We stress that both $c$ and $\left\{ \theta_{a}\right\} $ have clear observable signatures in both the bulk and boundary of chiral topological matter, some of which have been experimentally observed. The chiral central charge was extensively discussed in the previous section, including its observation in Refs.\citep{jezouin2013quantum,banerjee2017observed,banerjee2018observation,Kasahara:2018aa,schine2018measuring}. The topological spins determine the exchange statistics of anyons, predicted to appear in interferometry experiments \citep{nayak2008non}. Experimental observation remained elusive \citep{PhysRevLett.122.246801} until it was recently reported in the Laughlin 1/3 quantum Hall state \citep{Nakamura:2020aa}. Additionally, a measurement of anyonic statistics via current correlations \citep{PhysRevLett.116.156802} was recently reported in the Laughlin 1/3 state \citep{Bartolomei173}. \pagebreak{} \section{Probing topological superconductors with emergent relativistic gravity\label{sec:Main-section-1:}} In this section, we restrict our attention to spin-less $p$-wave chiral superfluids and superconductors (CSF/Cs), and analyze the relativistic geometry, or gravity, that emerges at low energy. We seek answers to the questions posed in Sec.\ref{subsec:Chiral-superfluids-and}. \subsection{Approach and main results\label{subsec:Results0}} \subsubsection{Model and approach\label{subsec:Results1}} As a starting point, we consider a simple model for spin-less $p$-wave CSF/Cs \citep{Volovik:1988aa}.
The action is given by \begin{align} S\left[\psi;\Delta\right]= & \int\text{d}^{2+1}x\left[\psi^{\dagger}\left(i\partial_{t}+\frac{\delta^{ij}\partial_{i}\partial_{j}}{2m^{*}}-m\right)\psi+\left(\frac{1}{2}\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c\right)\right],\label{eq:13} \end{align} and describes the coupling of a spin-less fermion $\psi$, with effective mass $m^{*}$ and chemical potential $-m$, to a $p$-wave order parameter $\Delta=\left(\Delta^{x},\Delta^{y}\right)\in\mathbb{C}^{2}$. The order parameter corresponds to the condensate of Cooper pairs described in Sec.\ref{subsec:Chiral-superfluids-and}. In a proximity induced CSC, the order parameter can be thought of as a non-dynamical background field, as it appears in Eq.\eqref{eq:13}. In intrinsic CSCs, and in CSFs, the order parameter is a quantum field which mediates an attractive interaction between fermions, and a treatment of the dynamics of $\Delta$ is deferred to Sec.\ref{sec:Conclusion-and-discussion} and \ref{sec:Main-section-2:}. The ground-state, or unperturbed, configuration of $\Delta$ is the $p_{x}\pm ip_{y}$ configuration $\Delta=\Delta_{0}e^{i\theta}\left(1,\pm i\right)$, where $\Delta_{0}>0$ and $\theta$ are constants. The phase $\theta$ corresponds to the Goldstone field implied by Eq.\eqref{eq:2-1-1}, while the orientation (or chirality) $o=\pm$ corresponds to the breaking of reflection and time reversal symmetries to their product, $P\times T\rightarrow PT$. One may view the model \eqref{eq:13} as 'microscopic', as will be done in Sec.\ref{sec:Main-section-2:}, but here we will think of it as the low energy description of a lattice model, introduced and analyzed in Sec.\ref{sec:Lattice-model}. 
In the 'relativistic regime' where the order parameter is much larger than the single particle scales, the lattice model is essentially a lattice regularization of four, generically massive, relativistic Majorana spinors, centered at the particle-hole invariant points $k=-k$ in the Brillouin zone. Around each of these four points, the low energy description is given by an action of the form \eqref{eq:13}. In the relativistic regime the effective mass $m^{*}$ is large, and in the limit $m^{*}\rightarrow\infty$ Eq.\eqref{eq:13} reduces to a relativistic action, with mass $m$ and speed of light $c_{\text{light}}=\Delta_{0}/\hbar$, for the Nambu spinor $\Psi^{\dagger}=\left(\psi^{\dagger},\psi\right)$, which is a Majorana spinor. The different Majorana spinors, associated with the four particle-hole invariant points, have different orientations $o_{n}$ and masses $m_{n}$, where $1\leq n\leq4$. The chiral central charge of the lattice model can be deduced from its Chern number \citep{read2000paired,kitaev2006anyons,volovik2009universe,ryu2010topological}. The $n$th Majorana spinor contributes $c_{n}=o_{n}\text{sgn}\left(m_{n}\right)/4$, and summing over $n$ one obtains the central charge of the lattice model $c=\sum_{n=1}^{4}c_{n}=\sum_{n=1}^{4}o_{n}\text{sgn}\left(m_{n}\right)/4$, which gives the topological phase diagram purely in terms of the low energy relativistic data $o_{n},m_{n}$, see Sec.\ref{sec:Lattice-model}. This formula motivates a study of the geometric physics associated with $c$, purely within the low energy relativistic description, which we now pursue. In order to access the physics associated with $c$, we perturb the order parameter out of the $p_{x}\pm ip_{y}$ configuration, and treat $\Delta=\left(\Delta^{x},\Delta^{y}\right)\in\mathbb{C}^{2}$ as a general space-time dependent field. This is analogous to applying an electromagnetic field in order to measure a quantized Hall conductivity in the quantum Hall effect. 
Following the observations of Refs.\cite{volovik2009universe,read2000paired}, we show in Sec.\ref{sec:Emergent-Riemann-Cartan-geometry} that, in the relativistic limit, the Majorana spinor $\Psi$ experiences such a general order parameter (along with a general $U\left(1\right)_{N}$ gauge field) as a non-trivial gravitational background, namely Riemann-Cartan geometry. See also Sec.\ref{subsec:The-order-parameter} for the basics of this statement. Some of the emergent geometry is described by the (inverse) spatial metric \begin{eqnarray} & & g^{ij}=-\Delta^{(i}\Delta^{j)*},\label{eq:emergent metric} \end{eqnarray} where the brackets denote symmetrization, and the sign is a matter of convention. The metric $g^{ij}$ corresponds to the Higgs field included in the order parameter. Parameterizing $\Delta=e^{i\theta}\left(\left|\Delta^{x}\right|,e^{i\phi}\left|\Delta^{y}\right|\right)$ with the overall phase $\theta$ and relative phase $\phi\in\left(-\pi,\pi\right]$, the metric is independent of $\theta$ and of the orientation $o=\text{sgn}\phi=\pm$, which splits order parameters into $p_{x}+ip_{y}$-like and $p_{x}-ip_{y}$-like. Note that in the $p_{x}\pm ip_{y}$ configuration the metric is Euclidean, $g^{ij}=-\Delta_{0}^{2}\delta^{ij}$. For our purposes it is important that the metric be perturbed out of this form, and in particular it is not enough to take the $p_{x}\pm ip_{y}$ configuration with a space-time dependent Goldstone field $\theta$. \begin{figure}[!th] \begin{centering} \includegraphics[scale=0.4]{NewComparison.pdf} \par\end{centering} \caption{A comparison of the integer quantum Hall effect (IQHE) and its energy-momentum analog in $p$-wave CSF/Cs. (a) In the IQHE there is a perpendicular electric current $\left\langle J\right\rangle $ in response to an applied electric field $E$, with a quantized Hall conductivity, proportional to the Chern number $\nu\in \mathbb{Z}$, as encoded in a $U(1)_N$ Chern-Simons term.
(b) In $p$-wave CSF/Cs, an energy current $\left\langle J_{E}\right\rangle $ flows in response to a space dependent order parameter $\Delta$, as encoded in a gravitational Chern-Simons term. Derivatives of the curvature $\tilde{\mathcal{R}}$ associated with $\Delta$ play the role of the electric field in the IQHE, and $\left\langle J_{E}\right\rangle $ is perpendicular to the curvature gradient $\nabla\tilde{\mathcal{R}}$. The ratio between the magnitudes of $\left\langle J_{E}\right\rangle $ and $\nabla\tilde{\mathcal{R}}$ is quantized, and proportional to the chiral central charge $c\in (1/2)\mathbb{Z}$. As described in the text, the spontaneous breaking of $U(1)_N$ symmetry in $p$-wave CSF/Cs allows for a gravitational \textit{pseudo} Chern-Simons term, encoding closely related bulk responses, which are not topological in nature. (c) The quantized Hall conductivity implies the existence of a chiral boundary fermion with a $U\left(1\right)_N$ anomaly, which can be described as a Weyl fermion at low energy. (d) The analogous response in $p$-wave CSF/Cs implies the existence of a boundary chiral Majorana fermion with a gravitational anomaly, which can be described as a Majorana-Weyl fermion at low energy.\label{fig:A-comparison-of-1}} \end{figure} \subsubsection{Topological bulk responses from a gravitational Chern-Simons term \label{subsubsec:Topological bulk responses from a gravitational Chern-Simons term}} Using the mapping of $p$-wave CSF/Cs to relativistic Majorana fermions in Riemann-Cartan space-time, we compute and analyze in Sec.\ref{sec:Bulk-response} the effective action obtained by integrating over the bulk fermions in the presence of a general order parameter $\Delta$, and $U\left(1\right)_{N}$ gauge field. Here we discuss the main physical implications of this effective action. 
As already explained in Sec.\ref{subsec:Chiral-superfluids-and}, we only describe UV-insensitive physics, which can be reliably understood within the low energy relativistic description. This physics is controlled by dimensionless coefficients, including the chiral central charge $c$ in which we are primarily interested. As expected, the effective action contains a gCS term \citep{Chern-Simons,jackiw2003chern,kraus2006holographic,witten2007three,perez2010conserved,stone2012gravitational} \begin{align} S_{\text{gCS}}= & \alpha\int\text{tr}\left(\tilde{\Gamma}\wedge\text{d}\tilde{\Gamma}+\frac{2}{3}\tilde{\Gamma}\wedge\tilde{\Gamma}\wedge\tilde{\Gamma}\right), \end{align} with coefficient $\alpha=\frac{c}{96\pi}\in\frac{1}{192\pi}\mathbb{Z}$, and where $\tilde{\Gamma}$ is the Christoffel symbol of the space-time metric obtained from Eq.\eqref{eq:emergent metric}, see Sec.\ref{subsec:Effective-action-for}. Although we obtain this result in the limit $m^{*}\rightarrow\infty$, we expect it to hold throughout the phase diagram. This is based on known arguments for the quantization of the coefficient $\alpha$ due to symmetry, and on the relation with the boundary gravitational anomaly described below. The gCS term implies a topological bulk response, where energy-momentum currents and densities appear due to a space-time dependent order parameter. To see this in the simplest setting, assume that the order parameter is time independent, and that the relative phase is $\phi=\pm\frac{\pi}{2}$, as in the $p_{x}\pm ip_{y}$ configuration, so that $\Delta=e^{i\theta}\left(\left|\Delta^{x}\right|,\pm i\left|\Delta^{y}\right|\right)$, $o=\pm$. 
Then the metric is time independent, and takes the simple form \begin{eqnarray} & & g^{ij}=-\begin{pmatrix}\left|\Delta^{x}\right|^{2} & 0\\ 0 & \left|\Delta^{y}\right|^{2} \end{pmatrix}.\label{eq:3-3} \end{eqnarray} On this background, we find the following contributions to the expectation values of the fermionic energy current $J_{E}^{i}$, and momentum density $P_{i}$ \footnote{$P_{x}$ ($P_{y}$) is the density of the $x$ ($y$) component of momentum.}, \begin{eqnarray} \left\langle J_{E}^{i}\right\rangle _{\text{gCS}}&=&-\frac{c}{96\pi}\frac{1}{\hbar}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}},\label{eq:4}\\ \left\langle P_{i}\right\rangle _{\text{gCS}}&=&-\frac{c}{96\pi}\hbar g_{ik}\varepsilon^{kj}\partial_{j}\tilde{\mathcal{R}}.\nonumber \end{eqnarray} Here $\tilde{\mathcal{R}}$ is the curvature, or Ricci scalar, of the metric $g_{ij}$, which is the inverse of $g^{ij}$, and $\varepsilon^{xy}=-\varepsilon^{yx}=1$. The curvature for the above metric is given explicitly by \begin{eqnarray} & \tilde{\mathcal{R}} & =-2\left|\Delta^{x}\right|\left|\Delta^{y}\right|\left(\partial_{y}\left(\frac{\left|\Delta^{y}\right|\partial_{y}\left|\Delta^{x}\right|}{\left|\Delta^{x}\right|^{2}}\right)+\partial_{x}\left(\frac{\left|\Delta^{x}\right|\partial_{x}\left|\Delta^{y}\right|}{\left|\Delta^{y}\right|^{2}}\right)\right).\label{eq:3-8-8} \end{eqnarray} It is a nonlinear expression in the order parameter, which is second order in derivatives. Thus the responses \eqref{eq:4} are third order in derivatives, and start at linear order but include nonlinear contributions as well. The first equation in \eqref{eq:4} is analogous to the response $\left\langle J^{i}\right\rangle =\frac{\nu}{2\pi}\varepsilon^{ij}E_{j}$ of the IQHE, see Fig.\ref{fig:A-comparison-of-1}(a),(b). The second equation is analogous to the dual response $\left\langle \rho\right\rangle =\frac{\nu}{2\pi}B$.
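As a consistency check (a sketch added here for illustration, assuming only the metric \eqref{eq:3-3} and the standard Riemannian formulas), the quoted curvature can be reproduced symbolically: writing $A=\left|\Delta^{x}\right|$ and $B=\left|\Delta^{y}\right|$, one computes the Ricci scalar of $g_{ij}$ and compares with Eq.\eqref{eq:3-8-8}.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
A = sp.Function('A', positive=True)(x, y)   # A = |Delta^x|
B = sp.Function('B', positive=True)(x, y)   # B = |Delta^y|
coords = (x, y)

# spatial metric g_ij: the inverse of g^ij = -diag(A^2, B^2) of Eq. (eq:3-3)
g = sp.diag(-1/A**2, -1/B**2)
ginv = sp.diag(-A**2, -B**2)

def christoffel(k, i, j):
    """Gamma^k_ij of the metric g."""
    return sum(ginv[k, l]*(sp.diff(g[l, i], coords[j]) + sp.diff(g[l, j], coords[i])
                           - sp.diff(g[i, j], coords[l])) for l in range(2))/2

def ricci(i, j):
    """R_ij = d_k Gamma^k_ij - d_j Gamma^k_ik + Gamma^k_kl Gamma^l_ij - Gamma^k_jl Gamma^l_ik."""
    r = 0
    for k in range(2):
        r += sp.diff(christoffel(k, i, j), coords[k]) - sp.diff(christoffel(k, i, k), coords[j])
        for l in range(2):
            r += christoffel(k, k, l)*christoffel(l, i, j) - christoffel(k, j, l)*christoffel(l, i, k)
    return r

# Ricci scalar of g_ij, versus the closed form quoted in Eq. (eq:3-8-8)
R = sum(ginv[i, j]*ricci(i, j) for i in range(2) for j in range(2))
R_quoted = -2*A*B*(sp.diff(B*sp.diff(A, y)/A**2, y) + sp.diff(A*sp.diff(B, x)/B**2, x))
assert sp.simplify(R - R_quoted) == 0
```

Note that the overall sign of the metric drops out of the Christoffel symbols but flips the Ricci scalar, so the convention chosen in Eq.\eqref{eq:emergent metric} matters for the sign of $\tilde{\mathcal{R}}$.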
\subsubsection{Additional bulk responses from a gravitational pseudo Chern-Simons term\label{subsubsec:Additional bulk responses from a gravitational pseudo}} Apart from the gCS term, the effective action obtained by integrating over the bulk fermions also contains an additional term of interest, which we refer to as a gravitational \textit{pseudo} Chern-Simons term (gpCS). To the best of our knowledge, the gpCS term has not appeared previously in the context of $p$-wave CSF/Cs. It is possible because $U\left(1\right)_N$ symmetry is spontaneously broken in $p$-wave CSF/Cs. In the geometric point of view, this translates to the emergent geometry in $p$-wave CSF/Cs being not only curved but also torsion-full. The gpCS term produces bulk responses which are closely related to those of gCS, despite it being fully gauge invariant. This gauge invariance implies that it is not associated with a boundary anomaly, nor does its coefficient $\beta$ need to be quantized. Hence, gpCS does not encode \textit{topological} bulk responses. Remarkably, we find that $\beta$ is quantized and identical to the coefficient $\alpha=\frac{c}{96\pi}$ of the gCS term in the limit of $m^*\rightarrow\infty$, but we do not expect this value to hold outside of this limit. Let us now describe the bulk responses from gpCS, setting $\beta=\frac{c}{96\pi}$. First, we find the following contributions to the fermionic energy current and momentum density, \begin{eqnarray} \left\langle J_{E}^{i}\right\rangle _{\text{gpCS}}&=&\frac{c}{96\pi}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}},\label{eq:11-1}\\ \left\langle P_{i}\right\rangle _{\text{gpCS}}&=&-\frac{c}{96\pi}g_{ik}\varepsilon^{kj}\partial_{j}\tilde{\mathcal{R}}.\nonumber \end{eqnarray} Up to the sign difference in the first equation, these responses are the same as those from gCS \eqref{eq:4}. As opposed to gCS, the gpCS term also contributes to the fermionic charge density $\rho=-\psi^{\dagger}\psi$. 
For the bulk responses we have written thus far, every Majorana spinor contributed $c_{n}=\frac{o_{n}}{4}\text{sgn}\left(m_{n}\right)$, and summing over $n$ produced the central charge $c$. For the density response this is not the case. Here, the $n$th Majorana spinor contributes \begin{eqnarray} \left\langle \rho\right\rangle _{\text{gpCS}}=\frac{o_{n}c_{n}}{24\pi}\sqrt{g}\tilde{\mathcal{R}},\label{eq:9-1} \end{eqnarray} where $\sqrt{g}=\sqrt{\text{det}g_{ij}}$ is the emergent volume element. The orientation $o_{n}$ in Eq. (\ref{eq:9-1}) makes the sum over the four Majorana spinors different from the central charge, $\sum_{n=1}^{4}o_{n}c_{n}=\sum_{n=1}^{4}\frac{1}{2}\text{sgn}\left(m_{n}\right)\neq c$. The appearance of $o_{n}$ can be understood by considering the effect of time reversal, since both the density and curvature are time reversal even. The response \eqref{eq:9-1} also holds when the order parameter is time dependent, in which case $\tilde{\mathcal{R}}$ will also contain time derivatives. One then finds a time dependent density, but there is no corresponding current response, which is due to the breaking of $U\left(1\right)_{N}$ symmetry. To gain some insight into the expressions we have written thus far, we write the operators $P,J_{E}$ more explicitly. For each Majorana spinor (suppressing the index $n$), \begin{eqnarray} P_{j}&=&\frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{j}}\psi,\label{eq:10-5}\\ J_{E}^{j}&=&g^{jk}P_{k}+\frac{o}{2}\partial_{k}\left(\frac{1}{\sqrt{g}}\varepsilon^{jk}\rho\right)+O\left(\frac{1}{m^{*}}\right).\nonumber \end{eqnarray} The momentum density is the familiar expression for free fermions, but in the energy current we have only written explicitly contributions that survive the limit $m^{*}\rightarrow\infty$. These contributions are only possible due to the $p$-wave pairing, and are of order $\Delta^{2}$.
From the relation \eqref{eq:10-5} between $J_{E}$, $P$ and $\rho$ we can understand that the equality $\left\langle J_{E}^{j}\right\rangle _{\text{gCS}}=g^{jk}\left\langle P_{k}\right\rangle _{\text{gCS}}$ expressed in equation \eqref{eq:4} is a result of the vanishing contribution of gCS to the density $\rho$. We can also understand the sign difference between the first and second line of \eqref{eq:11-1} as a result of \eqref{eq:9-1}. The important point is that a measurement of the charge density $\rho$ can be used to fix the value of the coefficient $\beta$, which is generically unquantized, and thus separate the contributions of gpCS to $P,J_{E}$, from those of gCS. In this manner, one can overcome the obscuring of gCS by gpCS. \subsubsection{Bulk-boundary correspondence from gravitational anomaly } Among the two terms in the bulk effective action which we described in Sec.\ref{subsubsec:Topological bulk responses from a gravitational Chern-Simons term}-\ref{subsubsec:Additional bulk responses from a gravitational pseudo}, only gCS is related to the boundary gravitational anomaly. This relation can be explicitly analyzed in the case where $\Delta=\Delta_{0}e^{i\theta\left(t,x\right)}\left(1+f\left(x,t\right),i\right)$ is a perturbation of the $p_{x}+ip_{y}$ configuration with small $f$, and there is a domain wall (or boundary) at $y=0$ where the value of $c$ jumps. For simplicity, assume $c=1/2$ for $y<0$ and $c=0$ for $y>0$. This situation is illustrated in Fig.\ref{fig:A-comparison-of-1}(d). 
In Appendix \ref{sec:Boundary-fermions-and} we derive the action for the boundary, or edge mode, \begin{align} S_{\text{e}}=\frac{i}{2}\int\mbox{d}t\text{d}x\tilde{\xi}\left(\partial_{t}-\left|\Delta^{x}\left(t,x\right)\right|\partial_{x}\right)\tilde{\xi}, \end{align} which describes a chiral $D=1+1$ Majorana fermion $\tilde{\xi}$ localized on the boundary, with a space-time dependent velocity $\left|\Delta^{x}\left(x,t\right)\right|=\Delta_{0}\left|1+f\left(x,t\right)\right|$. Classically, the edge fermion $\tilde{\xi}$ conserves energy-momentum in the following sense, \begin{eqnarray} \partial_{\beta}t_{\mbox{e}\;\alpha}^{\beta}+\partial_{\alpha}\mathcal{L}_{\mbox{e}}=0.\label{eq:12-1} \end{eqnarray} Here $t_{\mbox{e}}$ is the canonical energy-momentum tensor for $\tilde{\xi}$, with indices $\alpha,\beta=t,x$, and $\mathcal{L}_{\mbox{e}}$ is the edge Lagrangian, $S_{\text{e}}=\int\text{d}t\mathcal{L}_{\mbox{e}}$, see Sec.\ref{subsec:Energy-momentum}. For $\alpha=t$ ($\alpha=x$), Eq.\eqref{eq:12-1} describes the sense in which the edge fermion conserves energy (momentum) classically. The source term $\partial_{\alpha}\mathcal{L}_{\mbox{e}}$ follows from the space-time dependence of $\mathcal{L}_{\text{e}}$ through $\Delta^{x}$. Quantum mechanically, the action $S_{\text{e}}$ is known to have a gravitational anomaly, which means that energy-momentum is not conserved at the quantum level \cite{bertlmann2000anomalies}. In the context of emergent gravity, this implies that Eq.\eqref{eq:12-1} is violated for the expectation values, \begin{align} \partial_{\beta}\left\langle t_{\mbox{e}\;\alpha}^{\beta}\right\rangle +\partial_{\alpha}\left\langle \mathcal{L}_{\mbox{e}}\right\rangle =-\frac{c}{96\pi}g_{\alpha\gamma}\varepsilon^{\gamma\beta y}\partial_{\beta}\tilde{\mathcal{R}}.\label{eq:12} \end{align} This equation is written with $\hbar=1$ and $c_{\mbox{light}}=\Delta_{0}/\hbar=1$ for simplicity. 
Since $\Delta^x$ depends on time, $\tilde{\mathcal{R}}$ is not the curvature of the spatial metric $g_{ij}$, but of a corresponding space-time metric $g_{\mu\nu}$ \eqref{eq:10-1}, and is given by $\tilde{\mathcal{R}}=\ddot{f}-2\dot{f}^{2}+O(f\ddot{f},f\dot{f}^{2})$ in this case. Note that time dependence in this example is crucial. From gCS we find for $\Delta=\Delta_{0}e^{i\theta\left(t,x\right)}\left(1+f\left(x,t\right),i\right)$ the bulk energy-momentum tensor \begin{eqnarray} & & \left\langle t_{\;\alpha}^{y}\right\rangle _{\text{gCS}}=-\frac{c}{96\pi}g_{\alpha\gamma}\varepsilon^{\gamma\beta y}\partial_{\beta}\tilde{\mathcal{R}},\label{eq:13b} \end{eqnarray} which explains the anomaly as the inflow of energy-momentum from the bulk to the boundary, \begin{eqnarray} & & \partial_{\beta}\left\langle t_{\mbox{e}\;\alpha}^{\beta}\right\rangle +\partial_{\alpha}\left\langle \mathcal{L}_{\mbox{e}}\right\rangle =\left\langle t_{\;\alpha}^{y}\right\rangle _{\text{gCS}}. \end{eqnarray} Since $c$ jumps from 1/2 to 0 at $y=0$, the energy-momentum current \eqref{eq:13b} stops at the boundary and does not extend to the $y>0$ region. The gravitationally anomalous boundary mode is then essential for the conservation of total energy-momentum to hold. As this example shows, bulk-boundary correspondence follows from bulk+boundary conservation of energy-momentum in the presence of a space-time dependent order parameter. \subsection{Lattice model\label{sec:Lattice-model} } In this section we review and slightly generalize a simple lattice model for a $p$-wave SC \cite{bernevig2013topological}, which will serve as our microscopic starting point. We describe its band structure and its symmetry protected topological phases, and also explain some of the basics of the emergent geometry which can be seen in this setting.
The Hamiltonian is given in real space by \begin{eqnarray} H=-\frac{1}{2}\sum_{\boldsymbol{l}}\left[t\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+x}+t\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+y}+\mu\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}}\right. +\left.\delta^{x}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+x}^{\dagger} + \delta^{y}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+y}^{\dagger}+h.c\right].\label{eq:2-1} \end{eqnarray} Here the sum is over all lattice sites $\boldsymbol{l}\in L$ of a two-dimensional square lattice $L=a\mathbb{Z}\times a\mathbb{Z}$, with a lattice spacing $a$. $\psi_{\boldsymbol{l}}^{\dagger},\psi_{\boldsymbol{l}}$ are creation and annihilation operators for spin-less fermions on the lattice, with the canonical anticommutators $\left\{ \psi_{\boldsymbol{l}}^{\dagger},\psi_{\boldsymbol{l}'}\right\} =\delta_{\boldsymbol{l}\boldsymbol{l}'}$. $\boldsymbol{l}+x$ denotes the nearest neighboring site to $\boldsymbol{l}$ in the $x$ direction. The hopping amplitude $t$ is real and $\mu$ is the chemical potential. Apart from the single particle terms $t\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+x}+t\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+y}+\mu$, there is also the pairing term $\delta^{x}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+x}^{\dagger}+\delta^{y}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+y}^{\dagger}$ , with the order parameter $\delta=\left(\delta^{x},\delta^{y}\right)\in\mathbb{\mathbb{C}}^{2}$. We think of $\delta$ as resulting from a Hubbard-Stratonovich decoupling of interactions, in which case we refer to it as intrinsic, or as being induced by proximity to an $s$-wave SC. In both cases we treat $\delta$ as a bosonic background field that couples to the fermions. The generic order parameter is charged under a few symmetries of the single particle terms.
The order parameter has charge 2 under the global $U\left(1\right)$ group generated by $Q=-\sum_{\boldsymbol{l}}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}}$, in the sense that $e^{-i\alpha Q}H\left(e^{2i\alpha}\delta\right)e^{i\alpha Q}=H\left(\delta\right)$, which physically represents the electromagnetic charge $-2$ of Cooper pairs\footnote{Since $\delta$ has charge 2, $H$ commutes with the fermion parity $\left(-1\right)^{Q}$. The ground state of $H$ will therefore be labelled by a fermion parity eigenvalue $\pm1$, in addition to the topological label which is the Chern number \cite{read2000paired,kitaev2009periodic}. Fermion parity is a subtle quantity in the thermodynamic limit, and will not be important in the following.}. The order parameter is also charged under time reversal $T$, which is an antiunitary transformation satisfying $T^{2}=1$, that acts as the complex conjugation of coefficients in the Fock basis corresponding to $\psi_{\boldsymbol{l}},\psi_{\boldsymbol{l}}^{\dagger}$. The equation $T^{-1}H\left(\delta^{*}\right)T=H\left(\delta\right)$ shows $\delta\mapsto\delta^{*}$ under time reversal. Finally, $\delta$ is also charged under the point group symmetry of the lattice, which for the square lattice is the Dihedral group $D_{4}$. The continuum analog of this is that the order parameter is charged under spatial rotations and reflections, and more generally, under space-time transformations (diffeomorphisms), which is due to the orbital angular momentum 1 of Cooper pairs in a $p$-wave SC. This observation will be important for our analysis, and will be discussed further below. In an intrinsic $p_{x}\pm ip_{y}$ SC, the configuration of $\delta$ which minimizes the ground state energy is given by $\delta=\delta_{0}e^{i\theta}\left(1,\pm i\right)$, where $\delta_{0}>0$ is determined by the minimization, but the sign $o=\pm1$ and the phase $\theta$ (which dynamically corresponds to a Goldstone mode) are left undetermined.
See \cite{volovik2009universe} for a pedagogical discussion of a closely related model within mean field theory. A choice of $\theta$ and $o$ corresponds to a spontaneous symmetry breaking of the group $U\left(1\right)\rtimes\left\{ 1,T\right\} $ including both the $U\left(1\right)$ and time reversal transformations. More accurately, in the $p_{x}\pm ip_{y}$ SC, the group $\left(U\left(1\right)\rtimes\left\{ 1,T\right\} \right)\times D_{4}$ is spontaneously broken down to a certain diagonal subgroup. We discuss the continuum analog of this and its implications in section \ref{subsec:Energy-momentum}. Crucially, we do not restrict $\delta$ to the $p_{x}\pm ip_{y}$ configuration, and treat it as a general two component complex vector $\delta=\left(\delta^{x},\delta^{y}\right)\in\mathbb{\mathbb{C}}^{2}$. In the following we will take $\delta$ to be space-time dependent, $\delta\mapsto\delta_{\boldsymbol{l}}\left(t\right)$, and show that this space-time dependence can be thought of as a perturbation to which there is a topological response, but for now we assume $\delta$ is constant.
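As a quick numerical illustration (a sketch with arbitrarily chosen parameter values, added here and not part of the original analysis), the Hamiltonian \eqref{eq:2-1} can be diagonalized directly in its BdG form on a small torus, confirming that in the $p_{x}+ip_{y}$ configuration the bulk gap closes at the transition $\mu=2t$ discussed below.

```python
import numpy as np

def bdg_spectrum(n, t, mu, delta_x, delta_y):
    """Spectrum of the lattice Hamiltonian (eq:2-1) on an n x n torus,
    dropping the overall 1/2 of the BdG form (irrelevant for gap closings)."""
    I = np.eye(n)
    s = np.roll(I, -1, axis=1)                # one-site shift with periodic BCs
    Sx, Sy = np.kron(s, I), np.kron(I, s)     # shifts in x and y
    # single-particle part: hopping t and chemical potential mu
    A = -0.5*t*(Sx + Sx.T) - 0.5*t*(Sy + Sy.T) - mu*np.eye(n*n)
    # antisymmetrized p-wave pairing with order parameter (delta_x, delta_y)
    B = -0.5*delta_x*(Sx - Sx.T) - 0.5*delta_y*(Sy - Sy.T)
    H = np.block([[A, B], [-B.conj(), -A.T]])
    return np.linalg.eigvalsh(H)

# p_x + i p_y configuration: gapped inside the phase, gapless at mu = 2t
gap_inside = np.abs(bdg_spectrum(10, 1.0, 1.0, 0.8, 0.8j)).min()
gap_transition = np.abs(bdg_spectrum(10, 1.0, 2.0, 0.8, 0.8j)).min()
```

The zero mode at $\mu=2t$ appears at the particle-hole invariant momentum $a\boldsymbol{q}=\left(\pi,\pi\right)$, where both the single-particle dispersion and the pairing vanish, in agreement with the band structure analysis of the next subsection.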
\subsubsection{\label{subsec:Band-structure-and}Band structure and phase diagram } Writing the Hamiltonian \eqref{eq:2-1} in Fourier space, and in the BdG form in terms of the Nambu spinor $\Psi_{\boldsymbol{q}}=\left(\psi_{\boldsymbol{q}},\psi_{-\boldsymbol{q}}^{\dagger}\right)^{T}$ we find \begin{align} H & =\frac{1}{2}\int_{BZ}\frac{\mbox{d}^{2}\boldsymbol{q}}{\left(2\pi\right)^{2}}\Psi_{\boldsymbol{q}}^{\dagger}\begin{pmatrix}h_{\boldsymbol{q}} & \delta_{\boldsymbol{q}}\\ \delta_{\boldsymbol{q}}^{*} & -h_{\boldsymbol{q}} \end{pmatrix}\Psi_{\boldsymbol{q}}+const\nonumber \\ & =\frac{1}{2}\int_{BZ}\frac{\mbox{d}^{2}\boldsymbol{q}}{\left(2\pi\right)^{2}}\Psi_{\boldsymbol{q}}^{\dagger}\left(\boldsymbol{d}_{\boldsymbol{q}}\cdot\boldsymbol{\sigma}\right)\Psi_{\boldsymbol{q}}+const,\label{eq:3} \end{align} with $h_{\boldsymbol{q}}=-t\cos\left(aq_{x}\right)-t\cos\left(aq_{y}\right)-\mu$ real and symmetric, and $\delta_{\boldsymbol{q}}=-i\delta^{x}\sin\left(aq_{x}\right)-i\delta^{y}\sin\left(aq_{y}\right)$ complex and anti-symmetric. Here $\boldsymbol{\sigma}=\left(\sigma^{x},\sigma^{y},\sigma^{z}\right)$ is the vector of Pauli matrices and $BZ$ is the Brillouin zone $BZ=\left(\mathbb{R}/\frac{2\pi}{a}\mathbb{Z}\right)^{2}$. By definition, the Nambu spinor obeys the reality condition $\Psi_{\boldsymbol{q}}^{\dagger}=\left(\sigma^{x}\Psi_{-\boldsymbol{q}}\right)^{T}$, and is therefore a Majorana spinor, see appendix \ref{subsec:Charge-conjugation-(Appendix)}. Accordingly, the BdG Hamiltonian is particle-hole (or charge conjugation) symmetric, $\sigma^{x}H\left(\boldsymbol{q}\right)^{*}\sigma^{x}=-H\left(-\boldsymbol{q}\right)$, and therefore belongs to symmetry class D of the Altland-Zirnbauer classification of free fermion Hamiltonians \cite{ryu2010topological}. The constant in \eqref{eq:3} is $\frac{1}{2}\text{tr}h=\frac{V}{2}\int\frac{\text{d}^{2}\boldsymbol{q}}{\left(2\pi\right)^{2}}h_{\boldsymbol{q}}$ where $V$ is the infinite volume. 
This operator ordering correction is important as it contributes to physical quantities such as the energy density and charge density, but we will mostly keep it implicit in the following. The BdG band structure is given by $E_{\boldsymbol{q},\pm}=\pm\frac{1}{2}E_{\boldsymbol{q}}$ where \begin{eqnarray} E_{\boldsymbol{q}}=\left|\boldsymbol{d}_{\boldsymbol{q}}\right|=\sqrt{h_{\boldsymbol{q}}^{2}+\left|\delta_{\boldsymbol{q}}\right|^{2}}. \end{eqnarray} For the $p_{x}\pm ip_{y}$ configuration $\left|\delta_{\boldsymbol{q}}\right|^{2}=\delta_{0}^{2}\left(\sin^{2}aq_{x}+\sin^{2}aq_{y}\right)$, and therefore $E_{\boldsymbol{q}}$ can only vanish at the particle-hole invariant points $a\boldsymbol{K}^{\left(1\right)}=\left(0,0\right),a\boldsymbol{K}^{\left(2\right)}=\left(0,\pi\right),a\boldsymbol{K}^{\left(3\right)}=\left(\pi,\pi\right),a\boldsymbol{K}^{\left(4\right)}=\left(\pi,0\right)$, which happens when $\mu=-2t,0,2t,0$. Representative band structures are plotted in Fig.\ref{fig:Generic-band-structure}. For $\delta_{0}\ll t$ the spectrum takes the form of a gapped single particle Fermi surface with gap $\sim\delta_{0}$, while for $\delta_{0}\gg t$ one obtains four regulated relativistic fermions centered at the points $\boldsymbol{K}^{\left(n\right)},\;1\leq n\leq4$ with masses $m_{n}=-2t-\mu,-\mu,2t-\mu,-\mu$, speed of light $c_{\text{light}}=a\delta_{0}/\hbar$, bandwidth $\sim\delta_{0}$ and momentum cutoff $\sim a^{-1}$. \begin{figure}[!th] \begin{centering} \subfloat[]{ \includegraphics[width=0.35\columnwidth]{NewFermiSurfaceSpectrum} } \subfloat[]{ \includegraphics[width=0.35\columnwidth]{NewRelativisticSpectrum} } \par\end{centering} \caption{Generic band structure of the lattice model. (a) When the order parameter is much smaller than the single particle bandwidth $\delta\ll t$, the spectrum takes the form of a gapped single particle Fermi surface with gap $\sim\delta$.
This regime describes the onset of superconductivity, and it is appropriate to refer to $\delta$ as the ``gap function''. (b) When the order parameter is much larger than the single particle scales $\delta\gg t,\mu$, the spectrum takes the form of four regulated relativistic fermions centered at the particle-hole invariant points $\left(0,0\right),\left(0,\pi\right),\left(\pi,0\right),\left(\pi,\pi\right)$, in units of the inverse lattice spacing $a^{-1}$. We will be working in this regime. \label{fig:Generic-band-structure}} \end{figure} With generic $\mu,\delta_{0}$ the spectrum is gapped, and the Chern number $\nu$ labeling the different topological phases is well defined. It can be calculated by $\nu=\int_{BZ}\frac{\text{d}^{2}k}{2\pi}\text{tr}\left(\mathcal{F}\right)$ where $\mathcal{F}$ is the Berry curvature on the Brillouin zone $BZ$ \cite{ryu2010topological}. A more general definition is $\nu=\frac{1}{24\pi^{2}}\int_{\mathbb{R}\times BZ}\text{tr}\left(G\text{d}G^{-1}\right)^{3}$\footnote{More explicitly, $\nu=\frac{1}{24\pi^{2}}\mbox{tr}\int_{\mathbb{R}\times BZ}\mbox{d}^{3}k\varepsilon^{\alpha\beta\gamma}\left(G\partial_{\alpha}G^{-1}\right)\left(G\partial_{\beta}G^{-1}\right)\left(G\partial_{\gamma}G^{-1}\right)$.}, where $G\left(k_{0},k_{x},k_{y}\right)$ is the single particle propagator \cite{volovik2009universe}, which remains valid in the presence of weak interactions, as long as the gap does not close. For two band Hamiltonians such as \eqref{eq:3}, $\nu$ reduces to the homotopy type of the map $\hat{\boldsymbol{d}}_{\boldsymbol{q}}=\boldsymbol{d}_{\boldsymbol{q}}/\left|\boldsymbol{d}_{\boldsymbol{q}}\right|$ from $BZ$ (which is a flat torus) to the sphere, \begin{align} \nu=\frac{1}{4\pi}\int_{BZ}\text{d}^{2}\boldsymbol{q}\,\hat{\boldsymbol{d}}_{\boldsymbol{q}}\cdot\left(\partial_{q_{x}}\hat{\boldsymbol{d}}_{\boldsymbol{q}}\times\partial_{q_{y}}\hat{\boldsymbol{d}}_{\boldsymbol{q}}\right)\in\mathbb{Z}.
\end{align} One obtains $\nu=0$ for $\left|\mu\right|>2t$, $\nu=\pm1$ for $\mu\in\left(0,2t\right)$ and $\nu=\mp1$ for $\mu\in\left(-2t,0\right)$. The topological phase diagram is plotted in Fig.\ref{fig:Phase-Diagram}(a). Away from the $p_{x}\pm ip_{y}$ configuration, the topological phase diagram is essentially unchanged. For $\text{Im}\left(\delta^{x*}\delta^{y}\right)\neq0$, gap closings happen at the same points $\boldsymbol{K}^{\left(n\right)}$ and the same values of $\mu$ described above. $\nu$ takes the same values, with the orientation $o=\text{sgn}\left(\text{Im}\left(\delta^{x*}\delta^{y}\right)\right)$, described below, generalizing the sign $\pm1$ that characterizes the $p_{x}\pm ip_{y}$ configuration. For $\text{Im}\left(\delta^{x*}\delta^{y}\right)=0$ the spectrum is always gapless. The topological phase diagram is most easily understood from the formula $\nu=\frac{1}{2}\sum_{n=1}^{4}o_{n}\text{sgn}\left(m_{n}\right)$ where $o_{n}=\pm1$ are orientations associated with the relativistic fermions which we describe below \cite{sticlet2012edge}. It will also be useful to consider a slight generalization of the single particle part of the lattice model, with anisotropic hopping $t^{x}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+x}+t^{y}\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}+y}$. This changes the masses to $m_{1}=-\left(t^{x}+t^{y}\right)-\mu,m_{2}=t^{x}-t^{y}-\mu,m_{3}=t^{x}+t^{y}-\mu,m_{4}=-\left(t^{x}-t^{y}\right)-\mu$. In particular, the degeneracy between the masses $m_{2},m_{4}$ is lifted, and additional trivial phases appear around $\mu=0$. See Fig.\ref{fig:Phase-Diagram}(b).
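As a cross-check of this phase diagram, the Chern number can be evaluated numerically from the winding of $\hat{\boldsymbol{d}}_{\boldsymbol{q}}$. The following sketch (our illustration, not part of the original analysis) assumes the d-vector convention $\boldsymbol{d}_{\boldsymbol{q}}=\left(\text{Re}\,\delta_{\boldsymbol{q}},-\text{Im}\,\delta_{\boldsymbol{q}},h_{\boldsymbol{q}}\right)$ with $h_{\boldsymbol{q}}=-t\left(\cos q_{x}+\cos q_{y}\right)-\mu$ and $a=1$, which reproduces the masses $m_{n}$ above, and accumulates the solid angles swept by $\hat{\boldsymbol{d}}$ over the Brillouin zone:

```python
import numpy as np

def d_vector(qx, qy, t=1.0, mu=0.0, delta=(1.0, 1.0j)):
    # Assumed convention: d_q = (Re delta_q, -Im delta_q, h_q), with
    # h_q = -t (cos qx + cos qy) - mu, so the masses at the four
    # particle-hole invariant points are -2t-mu, -mu, 2t-mu, -mu.
    dqx, dqy = delta
    delta_q = dqx*np.sin(qx) + dqy*np.sin(qy)
    h_q = -t*(np.cos(qx) + np.cos(qy)) - mu
    return np.stack([delta_q.real, -delta_q.imag, h_q], axis=-1)

def chern_number(t=1.0, mu=0.0, delta=(1.0, 1.0j), N=100):
    q = np.linspace(-np.pi, np.pi, N, endpoint=False)
    qx, qy = np.meshgrid(q, q, indexing='ij')
    d = d_vector(qx, qy, t, mu, delta)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Berg-Luscher construction: the degree of q -> d_hat(q) is the sum
    # of signed solid angles of the spherical triangles on each plaquette.
    d1, d2 = d, np.roll(d, -1, axis=0)
    d3 = np.roll(d2, -1, axis=1)
    d4 = np.roll(d, -1, axis=1)
    total = 0.0
    for a, b, c in [(d1, d2, d3), (d1, d3, d4)]:
        num = np.einsum('...i,...i->...', a, np.cross(b, c))
        den = 1 + np.einsum('...i,...i->...', a, b) \
                + np.einsum('...i,...i->...', b, c) \
                + np.einsum('...i,...i->...', c, a)
        total += 2*np.arctan2(num, den).sum()
    return total / (4*np.pi)
```

With these conventions one finds $\nu=0$ for $\left|\mu\right|>2t$ and $\left|\nu\right|=1$ in the topological phases, with opposite signs for the two orientations and for $\mu\in\left(0,2t\right)$ versus $\mu\in\left(-2t,0\right)$; the overall sign depends on the chosen conventions.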
\begin{figure}[!th] \begin{centering} \subfloat[]{ \includegraphics[width=0.35\columnwidth]{PahseDiagram1.pdf} } \subfloat[]{ \includegraphics[width=0.35\columnwidth]{PahseDiagram2.pdf} } \par\end{centering} \caption{The topological phase diagram of the lattice model is simplest to understand from the formula $\nu=\frac{1}{2}\sum_{n=1}^{4}o_{n}\text{sgn}\left(m_{n}\right)$ for the Chern number in terms of the masses and orientations of low energy relativistic fermions. (a) Topological phase diagram for isotropic hopping $t$. Units on the vertical axis are arbitrary; the topological phase diagram only depends on the orientation $o=\text{sgn}\left(\text{Im}\left(\delta^{x*}\delta^{y}\right)\right)$. (b) Topological phase diagram for anisotropic hopping $t^{x}\protect\neq t^{y}$; additional trivial phases exist around $\mu=0$. Here $t=\frac{t^{x}+t^{y}}{2}$. \label{fig:Phase-Diagram}} \end{figure} \subsubsection{Basics of the emergent geometry \label{subsec:The-order-parameter}} A key insight which we will extensively use, originally due to Volovik, is that the order parameter is in fact a \textit{vielbein}. In the present space-time independent situation, this vielbein is just a $2\times2$ matrix which generically will be invertible \begin{eqnarray} & & e_{A}^{\;\;j}=\left(\begin{array}{cc} \mbox{Re}(\delta^{x}) & \mbox{Re}(\delta^{y})\\ \mbox{Im}(\delta^{x}) & \mbox{Im}(\delta^{y}) \end{array}\right)\in GL\left(2\right),\label{eq:5-1} \end{eqnarray} where $A=1,2,\;j=x,y$. More accurately, $e_{A}^{\;\;j}$ is invertible if $\text{det}\left(e_{A}^{\;\;i}\right)=\text{Im}\left(\delta^{x*}\delta^{y}\right)\neq0$. We refer to an order parameter as singular if $\text{Im}\left(\delta^{x*}\delta^{y}\right)=0$.
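The identity $\text{det}\left(e_{A}^{\;\;i}\right)=\text{Im}\left(\delta^{x*}\delta^{y}\right)$ behind this singularity criterion is immediate to check numerically; the following sketch uses arbitrary illustrative values of $\left(\delta^{x},\delta^{y}\right)$:

```python
import numpy as np

def vielbein(delta_x, delta_y):
    # e_A^j of eq. (5-1), built from the two complex order parameter components.
    return np.array([[delta_x.real, delta_y.real],
                     [delta_x.imag, delta_y.imag]])

def orientation(delta_x, delta_y):
    # o = sgn(det e) = sgn(Im(delta_x^* delta_y)); zero marks a singular
    # order parameter.
    return np.sign(np.linalg.det(vielbein(delta_x, delta_y)))
```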
From the vielbein one can calculate a metric, which in the present situation is a general symmetric positive semidefinite matrix \begin{align} g^{ij}=e_{A}^{\;\;i}\delta^{AB}e_{B}^{\;\;j}=\delta^{(i}\delta^{j)*}\label{eq:6-2} =\left(\begin{array}{cc} \left|\delta^{x}\right|^{2} & \mbox{Re}\left(\delta^{x}\delta^{y*}\right)\\ \mbox{Re}\left(\delta^{x}\delta^{y*}\right) & \left|\delta^{y}\right|^{2} \end{array}\right).\nonumber \end{align} Every vielbein determines a metric uniquely, but the converse is not true. Vielbeins $e,\tilde{e}$ that are related by an internal reflection and rotation $e_{A}^{\;j}=\tilde{e}{}_{B}^{\;\;j}L_{\;A}^{B}$ with $L\in O\left(2\right)$ give rise to the same metric. By diagonalization, it is also clear that any metric can be written in terms of a vielbein. Therefore the set of (constant) metrics can be parameterized by the coset $GL\left(2\right)/O\left(2\right)$. To see this explicitly we parameterize $\delta=e^{i\theta}\left(\left|\delta^{x}\right|,e^{i\phi}\left|\delta^{y}\right|\right)$ with the overall phase $\theta$ and relative phase $\phi\in\left(-\pi,\pi\right]$. Then \begin{align} g^{ij}=\left(\begin{array}{cc} \left|\delta^{x}\right|^{2} & \left|\delta^{x}\right|\left|\delta^{y}\right|\cos\phi\\ \left|\delta^{x}\right|\left|\delta^{y}\right|\cos\phi & \left|\delta^{y}\right|^{2} \end{array}\right) \end{align} is independent of $\theta$ which parametrizes $SO\left(2\right)$ and $\text{sgn}\phi$ which parametrizes $O\left(2\right)/SO\left(2\right)$. Note that the group $O\left(2\right)$ of internal rotations and reflections is just $U\left(1\right)\rtimes\left\{ 1,T\right\} $ acting on $e_{A}^{\;\;j}$. In more detail, $\delta\mapsto e^{2i\alpha}\delta$ (or $\delta\mapsto\delta^{*}$) corresponds to $e_{A}^{\;\;i}\mapsto L_{\;A}^{B}e_{B}^{\;\;i}$ with \begin{align} L=\begin{pmatrix}\cos2\alpha & \sin2\alpha\\ -\sin2\alpha & \cos2\alpha \end{pmatrix} \left(\text{or } L=\begin{pmatrix}1 & 0\\ 0 & -1 \end{pmatrix}\right).
\end{align} The internal reflections, corresponding to a reversal of time, flip the \textit{orientation} of the vielbein $o=\text{sgn}\left(\text{det}\left(e_{A}^{\;\;i}\right)\right)=\text{sgn}\left(\text{Im}\left(\delta^{x*}\delta^{y}\right)\right)$, and therefore every quantity that depends on $o$ is time reversal odd. We will also refer to $o$ as the orientation of the order parameter. An order parameter with a positive (negative) orientation can be thought of as $p_{x}+ip_{y}$-like ($p_{x}-ip_{y}$-like). For the $p_{x}\pm ip_{y}$ configuration, $\delta=e^{i\theta}\delta_{0}\left(1,\pm i\right)$, one obtains a scalar metric $g^{ij}=\delta_{0}^{2}\delta^{ij}$, independent of the phase $\theta$ and the orientation $o=\pm1$. We see that $\theta,o$ correspond precisely to the $O\left(2\right)=U\left(1\right)\rtimes\left\{ 1,T\right\} $ degrees of freedom of the vielbein to which the metric is blind. Thus the metric $g^{ij}$ corresponds to the Higgs part of the order parameter, by which we mean the part of the order parameter on which the ground state energy depends, in the intrinsic case. The fact that $U\left(1\right)$ transformations map to internal rotations also appears naturally in the BdG formalism which we will use in the following. Consider the Nambu spinor $\Psi=\left(\psi,\psi^{\dagger}\right)^{T}$. It follows from the $U\left(1\right)$ action $\psi\mapsto e^{i\alpha}\psi$ that $\Psi\mapsto e^{i\alpha\sigma^{z}}\Psi$ where $\sigma^{z}$ is the Pauli matrix. We see that $U\left(1\right)$ acts on $\Psi$ as a spin rotation. Moreover, the fact that $\delta$ has charge $2$ while $\psi$ has charge 1 implies $e$ is an $SO\left(2\right)$ vector while $\Psi$ is a spinor.
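The blindness of the metric to the $O\left(2\right)$ degrees of freedom can be verified directly: under $\delta\mapsto e^{2i\alpha}\delta$ or $\delta\mapsto\delta^{*}$ the symmetrized product $\delta^{(i}\delta^{j)*}$ is unchanged. A short numerical sketch (illustrative values):

```python
import numpy as np

def metric(delta):
    # Higgs part: g^{ij} = delta^{(i} delta^{j)*} (weight-1/2 symmetrization),
    # which is real, symmetric and positive semidefinite.
    d = np.asarray(delta, dtype=complex)
    return 0.5*(np.outer(d, d.conj()) + np.outer(d.conj(), d)).real
```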
\subsubsection{Non-relativistic continuum limit \label{subsec:Coupling-the-Lattice}} Consider the lattice model \eqref{eq:2-1}, with a general space time dependent order parameter $\delta_{\boldsymbol{l}}=\left(\delta_{\boldsymbol{l}}^{x}\left(t\right),\delta_{\boldsymbol{l}}^{y}\left(t\right)\right)$, and minimally coupled to electromagnetism, \begin{eqnarray} H=-\frac{1}{2}\sum_{\boldsymbol{l}}\left[t\psi_{\boldsymbol{l}}^{\dagger}e^{iA_{\boldsymbol{l},\boldsymbol{l}+x}}\psi_{\boldsymbol{l}+x}+\left(\mu_{\boldsymbol{l}}+A_{t,\boldsymbol{l}}\right)\psi_{\boldsymbol{l}}^{\dagger}\psi_{\boldsymbol{l}}\right. +\left.\delta_{\boldsymbol{l}}^{x}\psi_{\boldsymbol{l}}^{\dagger}e^{iA_{\boldsymbol{l},\boldsymbol{l}+x}}\psi_{\boldsymbol{l}+x}^{\dagger}+\left(x\leftrightarrow y\right)+h.c\right].\label{eq:8} \end{eqnarray} Here $A_{\boldsymbol{l},\boldsymbol{l}'},A_{t,\boldsymbol{l}}$ are the components of a $U\left(1\right)$ gauge field describing background electromagnetism, on the discrete space and continuous time. We will work in the relativistic regime $\delta_{0}\gg t,\mu$ where $\delta_{0}$ is a characteristic scale for $\delta$. To obtain a continuum description, we split $BZ$ into four quadrants $BZ=\cup_{n=1}^{4}BZ^{\left(n\right)}$ centered around the four points $\boldsymbol{K}^{\left(n\right)}$, and decompose the fermion operator $\psi_{\boldsymbol{l}}$ as a sum $\psi_{\boldsymbol{l}}=\sum_{n=1}^{4}\psi_{\boldsymbol{l}}^{\left(n\right)}e^{i\boldsymbol{K}^{\left(n\right)}\cdot\boldsymbol{l}}$, where $\psi_{\boldsymbol{l}}^{\left(n\right)}e^{i\boldsymbol{K}^{\left(n\right)}\cdot\boldsymbol{l}}$ has non zero Fourier modes only in $BZ^{\left(n\right)}$. Thus the fermions $\psi^{\left(n\right)}$ all have non zero Fourier modes only in $BZ^{\left(n\right)}-\boldsymbol{K}^{\left(n\right)}=\left[-\frac{\pi}{2a},\frac{\pi}{2a}\right]^{2}$. 
This restriction of the quasi momenta provides the fermions $\psi^{\left(n\right)}$ with a \textit{physical} cutoff $\sim a^{-1}$, which will be important when we compare results from the continuum description to the lattice model. Assuming $\mu,\delta,A$ have small derivatives relative to $a^{-1}$, the inter-fermion terms in $H$ can be neglected and $H$ splits into a sum $H\approx\sum_{n=1}^{4}H^{\left(n\right)}$, with $H^{\left(n\right)}$ a Hamiltonian for $\psi^{\left(n\right)}$. We then expand the Hamiltonians $H^{\left(n\right)}$ in small $\psi^{\left(n\right)}$ derivatives relative to $a^{-1}$. The resulting Hamiltonian, focusing on the point $\boldsymbol{K}^{\left(1\right)}=\left(0,0\right)$, is the $p$-wave superfluid (SF) Hamiltonian \begin{eqnarray} H_{\text{SF}}=\int\text{d}^{2}x\left[\psi^{\dagger}\left(-\frac{D^{2}}{2m^{*}}+m-A_{t}\right)\psi\right.\label{eq:9} -\left.\left(\frac{1}{2}\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c\right)\right], \end{eqnarray} where the fermion field has been redefined such that $\left\{ \psi^{\dagger}\left(x\right),\psi\left(x'\right)\right\} =\delta^{\left(2\right)}\left(x-x'\right)$. Here $D_{\mu}=\partial_{\mu}-iA_{\mu}$ is the $U\left(1\right)$-covariant derivative, with the connection $A=A_{j}\text{d}x^{j}$ related to $A_{\boldsymbol{l},\boldsymbol{l}'}$ by $A_{\boldsymbol{l},\boldsymbol{l}'}=\int_{\boldsymbol{l}}^{\boldsymbol{l}'}A$, and $D^{2}=\delta^{ij}D_{i}D_{j}$ with $i,j=x,y$. Note the appearance of the flat background spatial metric $\delta^{ij}$. The effective mass is related to the hopping amplitude by $1/m^{*}=a^{2}t$, and the order parameter is $\Delta=a\delta$, so it is essentially the lattice order parameter. The chemical potential for the $p$-wave SF is $-m$. The coupling to $A$ in the pairing term is lost, since $\psi^{\dagger}\psi^{\dagger}=0$.
For this reason it is a derivative and not a covariant derivative that appears in $\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}$, and one can verify that this term is gauge invariant. Moreover, due to the anti-commutator $\left\{ \psi^{\dagger}\left(x\right),\psi^{\dagger}\left(y\right)\right\} =0$ any operator put between two $\psi^{\dagger}$s is anti-symmetrized, and in particular $\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}=\psi^{\dagger}\frac{1}{2}\left\{ \Delta^{j},\partial_{j}\right\} \psi^{\dagger}$ where $\left\{ \Delta^{j},\partial_{j}\right\} $ is the anti-commutator of differential operators. This Hamiltonian is essentially the one considered in \cite{read2000paired} for the $p$-wave SF. The corresponding action is the $p$-wave SF action \begin{eqnarray} S_{\text{SF}}\left[\psi,\Delta,A\right]=\int\text{d}^{2+1}x\left[\psi^{\dagger}\left(iD_{t}+\frac{D^{2}}{2m^{*}}-m\right)\psi\right. +\left.\left(\frac{1}{2}\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c\right)\right],\label{eq:10} \end{eqnarray} in which $\psi,\psi^{\dagger}$ are no longer fermion operators, but independent Grassmann valued fields, $\left\{ \psi\left(x\right),\psi^{\dagger}\left(x'\right)\right\} =0$. This action comes equipped with a momentum cutoff $\Lambda_{UV}\sim a^{-1}$ inherited from the lattice model. For the other points $\boldsymbol{K}^{\left(2\right)},\boldsymbol{K}^{\left(3\right)},\boldsymbol{K}^{\left(4\right)}$ the SF action obtained is slightly different. The chemical potential for the $n$th fermion is $-m_{n}$. The order parameter for the $n$th fermion is $\Delta_{\left(n\right)}^{x}=a\delta^{x}e^{iaK_{x}^{\left(n\right)}},\;\Delta_{\left(n\right)}^{y}=a\delta^{y}e^{iaK_{y}^{\left(n\right)}}$, and we note that $e^{iaK_{j}^{\left(n\right)}}=\pm1$.
The order parameters for $a\boldsymbol{K}^{\left(1\right)}=\left(0,0\right),\;a\boldsymbol{K}^{\left(3\right)}=\left(\pi,\pi\right)$ are related by an overall sign, which is a $U\left(1\right)$ transformation, and so are the order parameters for $a\boldsymbol{K}^{\left(2\right)}=\left(0,\pi\right),a\boldsymbol{K}^{\left(4\right)}=\left(\pi,0\right)$. Thus the order parameters for $n=1,3$ are physically indistinguishable, and so are order parameters for $n=2,4$. The order parameters for $n=1$ and $n=2$ are however physically distinct. First, the orientations $o_{n}=\text{sgn}\left(\text{Im}\left(\Delta_{\left(n\right)}^{x*}\Delta_{\left(n\right)}^{y}\right)\right)$ are different, with $o_{1}=-o_{2}$. Second, the metrics $g_{\left(n\right)}^{ij}=\Delta_{\left(n\right)}^{(i}\Delta_{\left(n\right)}^{j)*}$ are generically different, with the same diagonal components, but $g_{\left(1\right)}^{xy}=-g_{\left(2\right)}^{xy}$. We note that if the relative phase between $\delta^{x}$ and $\delta^{y}$ is $\pm\pi/2$, as in the $p_{x}\pm ip_{y}$ configuration, then all metrics $g_{\left(n\right)}^{ij}$ are diagonal and therefore equal. These differences between the orientations and metrics of the different lattice fermions will be important later on. Similarly, the effective mass tensor, which for $n=1$ in \eqref{eq:9} is $\left(M^{-1}\right)^{ij}=\frac{\delta^{ij}}{m^{*}}$, has different signatures for different $n$, but this will not be important for our analysis. For now we continue working with the action \eqref{eq:10} for the $n=1$ fermion, keeping the other lattice fermions implicit until section \ref{subsec:Summing-over-lattice}. \subsubsection{Relativistic continuum limit \label{subsec:Relativistic-limit-of}} Since we work in the relativistic regime $\delta\gg t,\mu$ we can treat the term $\psi^{\dagger}\frac{D^{2}}{2m^{*}}\psi$ as a perturbation and compute quantities to zeroth order in $1/m^{*}$.
Then $S_{\text{SF}}$ reduces to what we refer to as the relativistic limit of the $p$-wave SF action, given in BdG form by \begin{eqnarray} S_{\text{rSF}}\left[\psi,\Delta,A\right]\label{eq:14-0} =\frac{1}{2}\int\mbox{d}^{2+1}x\Psi^{\dagger}\begin{pmatrix}i\partial_{t}+A_{t}-m & \frac{1}{2}\left\{ \Delta^{j},\partial_{j}\right\} \\ -\frac{1}{2}\left\{ \Delta^{*j},\partial_{j}\right\} & i\partial_{t}-A_{t}+m \end{pmatrix}\Psi. \end{eqnarray} It is well known that when $\Delta$ takes the $p_{x}\pm ip_{y}$ configuration $\Delta=\Delta_{0}e^{i\theta}\left(1,\pm i\right)$ and $A=0$ this action is that of a relativistic Majorana spinor in Minkowski space-time, with mass $m$ and speed of light $c_{\text{light}}=\frac{\Delta_{0}}{\hbar}$. In the following, we will see that for general $\Delta$ and $A$, \eqref{eq:14-0} is the action of a relativistic Majorana spinor in curved and torsion-full space-time. We will sometimes refer to the relativistic limit as $m^{*}\rightarrow\infty$, though this is somewhat loose, because in the relativistic regime $m^{*}$ is large and $m$ is small. Before we go on to analyze the $p$-wave SF in the relativistic limit, it is worth considering which parts of the physics of the $p$-wave SF are captured by the relativistic limit, and which are not. First, the coupling to $A_{x},A_{y}$ is lost, so the relativistic limit is blind to the magnetic field. Since superconductors are usually defined by their interaction with the magnetic field, the relativistic limit is actually insufficient to describe the properties of the $p$-wave SF as a superconductor. Of course, a treatment of superconductivity also requires the dynamics of $\Delta$. Likewise, the term $\frac{1}{2m^{*}}\psi^{\dagger}D^{2}\psi=\frac{1}{2m^{*}}\psi^{\dagger}\delta^{ij}D_{i}D_{j}\psi$ seems to be the only term in $S_{\text{SF}}$ that includes the flat background metric $\delta^{ij}$, describing the real geometry of space.
It appears that the relativistic limit is insufficient to describe the response of the system to a change in the real geometry of space\footnote{In fact, some of the response to the real geometry can be obtained, see our discussion, section \ref{sec:Conclusion-and-discussion}.}. Nevertheless, as is well known, the relativistic limit does suffice to determine the topological phases of the $p$-wave SC as a free (and weakly interacting) fermion system. Indeed, the Chern number labeling the different topological phases can be calculated by the formula $\nu=\frac{1}{2}\sum_{n=1}^{4}o_{n}\text{sgn}\left(m_{n}\right)$, which only uses data from the relativistic limit. Here the sum is over the four particle-hole invariant points of the lattice model, with orientations $o_{n}$ and masses $m_{n}$. This suggests that at least some physical properties characterizing the different free fermion topological phases can be obtained from the relativistic limit. Indeed, in the following we will see how a topological bulk response and a corresponding boundary anomaly can be obtained within the relativistic limit. \subsection{Emergent Riemann-Cartan geometry\label{sec:Emergent-Riemann-Cartan-geometry}} We argue that \eqref{eq:14-0} is precisely the action which describes a relativistic massive Majorana spinor in a curved and torsion-full background known as Riemann-Cartan (RC) geometry, with a particular form of background fields. We refer the reader to \cite{ortin2004gravity} parts I.1 and I.4.4 for a review of RC geometry and the coupling of fermions to it, and provide only the necessary details here, focusing on the implications for the $p$-wave SF. For simplicity we work locally and in coordinates, and we defer the treatment of global aspects to appendix \ref{subsec:Global-structures-and}.
The action describing the dynamics of a Majorana spinor on RC background in 2+1 dimensional space-time can be written as \begin{eqnarray} S_{\text{RC}}\left[\chi,e,\omega\right]\label{eq:43-1} =\frac{1}{2}\int\mbox{d}^{2+1}x\left|e\right|\overline{\chi}\left[\frac{i}{2}e_{a}^{\;\mu}\left(\gamma^{a}D_{\mu}-\overleftarrow{D_{\mu}}\gamma^{a}\right)-m\right]\chi. \end{eqnarray} Here $\chi$ is a Majorana spinor with mass $m$ obeying, as a field operator, the canonical anti-commutation relation $\left\{ \chi\left(x\right),\chi\left(y\right)\right\} =\frac{\delta^{\left(2\right)}\left(x-y\right)}{\left|e\left(x\right)\right|}$, where we suppressed spinor indices. As a Grassmann field $\left\{ \chi\left(x\right),\chi\left(y\right)\right\} =0$. The field $e_{a}^{\;\mu}$ is an inverse vielbein which is an invertible matrix at each point in space-time. The indices $a,b,\dots\in\left\{ 0,1,2\right\} $ are $SO\left(1,2\right)$ (Lorentz) indices which we refer to as internal indices, while $\mu,\nu,\dots\in\left\{ t,x,y\right\} $ are coordinate indices. We will also use $A,B,\dots\in\left\{ 1,2\right\} $ for spatial internal indices and $i,j,\dots\in\left\{ x,y\right\} $ for spatial coordinate indices. The vielbein $e_{\;\mu}^{a}$ is the inverse of $e_{a}^{\;\mu}$, such that $e_{\;\mu}^{a}e_{a}^{\;\nu}=\delta_{\mu}^{\nu},\;e_{\;\mu}^{a}e_{b}^{\;\mu}=\delta_{b}^{a}$. It is often useful to view the vielbein as a set of linearly independent (local) one-forms $e^{a}=e_{\;\mu}^{a}\text{d}x^{\mu}$. The metric corresponding to the vielbein is $g_{\mu\nu}=e_{\;\mu}^{a}\eta_{ab}e_{\;\nu}^{b}$ and the inverse metric is $g^{\mu\nu}=e_{a}^{\;\mu}\eta^{ab}e_{b}^{\;\nu}$, where $\eta_{ab}=\eta^{ab}=\text{diag}\left[1,-1,-1\right]$ is the flat Minkowski metric. Internal indices are raised and lowered using $\eta$, while coordinate indices are raised and lowered using $g$ and its inverse.
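The index relations above are compact but easy to get wrong; the following numerical sketch (a random, generically invertible vielbein, our illustration) verifies the relations between $e$, its inverse, $g$, and the volume element $\left|e\right|$:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0])      # flat Minkowski metric diag[1,-1,-1]
e = rng.normal(size=(3, 3))           # vielbein e^a_mu (rows a, columns mu)
e_inv = np.linalg.inv(e)              # inverse vielbein e_a^mu (rows mu, columns a)

g = e.T @ eta @ e                     # g_{mu nu} = e^a_mu eta_ab e^b_nu
g_inv = e_inv @ eta @ e_inv.T         # g^{mu nu} = e_a^mu eta^ab e_b^nu
vol = abs(np.linalg.det(e))           # |e| = |det e^a_mu|
```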
Using $e$ one can replace internal indices with coordinate indices and vice versa, e.g $v^{a}=e_{\;\mu}^{a}v^{\mu}$. The volume element is defined by $\left|e\right|=\left|\text{det}e_{\;\mu}^{a}\right|=\sqrt{g}$. $\left\{ \gamma^{a}\right\} _{a=0}^{2}$ are gamma matrices obeying $\left\{ \gamma^{a},\gamma^{b}\right\} =2\eta^{ab}$, and we will work with $\gamma^{0}=\sigma^{z},\;\gamma^{1}=-i\sigma^{x},\;\gamma^{2}=i\sigma^{y}$\footnote{The gamma matrices form a basis for the Clifford algebra associated with $\eta$. The above choice of basis is a matter of convention. }. The covariant derivative $D_{\mu}=\partial_{\mu}+\omega_{\mu}$\footnote{We use the notation $D$ for spin, Lorentz, and $U\left(1\right)$ covariant derivatives in any representation, and the exact meaning should be clear from the field $D$ acts on. } contains the spin connection $\omega_{\mu}=\frac{1}{2}\omega_{ab\mu}\Sigma^{ab}$, where $\Sigma^{ab}=\frac{1}{4}\left[\gamma^{a},\gamma^{b}\right]$ generate the spin group $Spin\left(1,2\right)$ which is the double cover of the Lorentz group $SO\left(1,2\right)$. Note that $\omega_{ab\mu}=-\omega_{ba\mu}$ and therefore $\omega_{\;b\mu}^{a}$ is an $SO\left(1,2\right)$ connection. It follows that $\omega$ is metric compatible, $D_{\mu}\eta_{ab}=0$. It is often useful to work (locally) with a connection one-form $\omega=\omega_{\mu}\text{d}x^{\mu}$. $\overline{\chi}$ is the Dirac conjugate defined as in Minkowski space-time $\overline{\chi}=\chi^{\dagger}\gamma^{0}$. The derivative $\overleftarrow{D_{\mu}}$ acts only on $\overline{\chi}$ and is explicitly given by $\chi\overleftarrow{D_{\mu}}=\partial_{\mu}\overline{\chi}-\overline{\chi}\omega_{\mu}$. 
Our statement is that $S_{\text{RC}}\left[\chi,e,\omega\right]$ evaluated on the fields \begin{eqnarray} \chi=\left|e\right|^{-1/2}\Psi,\hspace{7bp}\label{17} e_{a}^{\;\mu}=\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & \mbox{Re}(\Delta^{x}) & \mbox{Re}(\Delta^{y})\\ 0 & \mbox{Im}(\Delta^{x}) & \text{Im}(\Delta^{y}) \end{array}\right),\hspace{7bp}\omega_{\mu}=-2A_{\mu}\Sigma^{12}, \end{eqnarray} reduces precisely to $S_{\text{rSF}}\left[\psi,\Delta,A\right]$ of equation \eqref{eq:14-0}, where one must keep in mind that $S_{\text{RC}}$ is written in relativistic units where $\hbar=1$ and $c_{\text{light}}=\Delta_{0}/\hbar=1$, which we will use in the following. Moreover, the functional integral over $\chi$ is equal to the functional integral over $\Psi$. This refines the original statement by Volovik and subsequent work by Read and Green \cite{read2000paired}. We defer the proof to appendices \ref{subsec:Equivalent-forms-of} and \ref{subsec:Equality-of-path}, where we also address certain subtleties that arise. Here we describe the particular RC geometry that follows from \eqref{17}, and attempt to provide some intuition for this geometric description of the $p$-wave SF. Starting with the vielbein, note that the only nontrivial part of $e_{a}^{\;\mu}$ is the spatial part $e_{A}^{\;j}$, which is just the order parameter $\Delta$, as in \eqref{eq:5-1}. The inverse metric we obtain from our vielbein is \begin{eqnarray} g^{\mu\nu}=e_{a}^{\;\mu}\eta^{ab}e_{b}^{\;\nu}\label{eq:10-1} =\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & -\left|\Delta^{x}\right|^{2} & -\mbox{Re}\left(\Delta^{x}\Delta^{*y}\right)\\ 0 & -\mbox{Re}\left(\Delta^{x}\Delta^{*y}\right) & -\left|\Delta^{y}\right|^{2} \end{array}\right), \end{eqnarray} where the spatial part $g^{ij}=-\Delta^{(i}\Delta^{j)*}$ is the Higgs part of the order parameter, as in \eqref{eq:6-2}. For the $p_{x}\pm ip_{y}$ configuration the metric reduces to the Minkowski metric. 
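The following sketch builds $g^{\mu\nu}$ from the inverse vielbein \eqref{17} and checks \eqref{eq:10-1}, including the reduction to the Minkowski metric for the $p_{x}\pm ip_{y}$ configuration (illustrative values, relativistic units with $c_{\text{light}}=1$):

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0])   # flat Minkowski metric diag[1,-1,-1]

def inverse_metric(Dx, Dy):
    # Inverse vielbein e_a^mu of eq. (17): trivial time row, order
    # parameter components in the spatial block (rows a, columns mu).
    e_inv = np.array([[1.0, 0.0, 0.0],
                      [0.0, Dx.real, Dy.real],
                      [0.0, Dx.imag, Dy.imag]])
    return e_inv.T @ ETA @ e_inv   # g^{mu nu} = e_a^mu eta^{ab} e_b^nu
```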
If $\Delta$ is time independent $g^{\mu\nu}$ describes a Riemannian geometry which is trivial in the time direction, but we allow for a time dependent $\Delta$. A metric of the form \eqref{eq:10-1} is said to be in Gaussian normal coordinates with respect to space \cite{carroll2004spacetime}. The $U\left(1\right)$ connection $A_{\mu}$ maps to a $Spin\left(2\right)$ connection $\omega_{\mu}=-2A_{\mu}\Sigma^{12}=-iA_{\mu}\sigma^{z}$ which corresponds to spatial spin rotations. This is a special case of the general $Spin\left(1,2\right)$ connection which appears in RC geometry. The fact that $U\left(1\right)$ transformations map to spin rotations when acting on the Nambu spinor $\Psi$ is a general feature of the BdG formalism as was already discussed in section \ref{subsec:The-order-parameter}. From the spin connection $\omega$ it is natural to construct a curvature, which is a matrix valued two-form defined by $R_{\;b}^{a}=\text{d}\omega_{\;b}^{a}+\omega_{\;c}^{a}\wedge\omega_{\;b}^{c}$. In local coordinates $x^{\mu}$ it can be written as $R_{\;b}^{a}=\frac{1}{2}R_{\;b\mu\nu}^{a}\text{d}x^{\mu}\wedge\text{d}x^{\nu}$, where the components are given explicitly by $R_{\;b\mu\nu}^{a}=\partial_{\mu}\omega_{\;b\nu}^{a}-\partial_{\nu}\omega_{\;b\mu}^{a}+\omega_{\;c\mu}^{a}\omega_{\;b\nu}^{c}-\omega_{\;c\nu}^{a}\omega_{\;b\mu}^{c}$. It follows from \eqref{17} that in our case the only non zero components are \begin{eqnarray} R_{12}=-R_{21}=-2F, \end{eqnarray} where the two-form $F=\text{d}A$ is the $U\left(1\right)$ field strength, or curvature, comprised of the electric and magnetic fields. \subsubsection{Torsion and additional geometric quantities} Since we treat $A$ and $\Delta$ as independent background fields, so are the spin connection $\omega$ and vielbein $e$. This situation is referred to as the first order vielbein formalism for gravity \cite{ortin2004gravity}.
Apart from the metric $g$ and the curvature $R$ which we already described, there are a few more geometric quantities which can be constructed from $e,\omega$, and that will be used in the following. These additional quantities revolve around the notion of torsion. The torsion tensor $T$ is an important geometrical quantity, but a pragmatic way to view it is as a useful parameterization for the set of all spin connections $\omega$, for a fixed vielbein $e$. Thus one can work with the variables $e,T$ instead of $e,\omega$. We will see later on that the bulk responses in the $p$-wave SC are easier to describe using $e,T$. This is analogous to, and as we will see, generalizes, the situation in $s$-wave SC, where the independent degrees of freedom are $A$ and $\Delta=\left|\Delta\right|e^{i\theta}$, but it is natural to change variables and work with $\Delta$ and $D_{\mu}\theta=\partial_{\mu}\theta-2A_{\mu}$ instead. We now provide the details. The torsion tensor, or two-form, is defined in terms of $e,\omega$ as $T^{a}=De^{a}$, or in coordinates $T_{\mu\nu}^{a}=2D_{[\mu}e_{\nu]}^{a}$. Since our temporal vielbein $e^{0}=\text{d}t$ is trivial and the connection $\omega$ is only an $SO\left(2\right)$ connection, $T^{0}=0$ for all $A$ and $\Delta$. All other components of the torsion are in general non trivial, and are given by $T_{ij}^{A}=D_{i}e_{\;j}^{A}-D_{j}e_{\;i}^{A},\;T_{ti}^{A}=-T_{it}^{A}=D_{t}e_{\;i}^{A}$. This describes the simple change of variables from $\omega$ to $T$. Going from $T$ back to $\omega$ is slightly more complicated, and is done as follows. One starts by finding the $\omega$ that corresponds to $T=0$. The solution is the unique torsion free spin connection $\tilde{\omega}=\tilde{\omega}\left(e\right)$ which we refer to as the Levi-Civita (LC) spin connection\footnote{The unique torsion free spin connection $\tilde{\omega}$ is also referred to as the Cartan connection in the literature.}.
This connection is given explicitly by $\tilde{\omega}_{abc}=\frac{1}{2}\left(\xi_{abc}+\xi_{bca}-\xi_{cab}\right)$ where $\xi_{\;bc}^{a}=2e_{b}^{\;\mu}e_{c}^{\;\nu}\partial_{[\mu}e_{\;\nu]}^{a}$. Now, for a general $\omega$ the difference $C_{\;b\mu}^{a}=\omega_{\;b\mu}^{a}-\tilde{\omega}_{\;b\mu}^{a}$ is referred to as the contorsion tensor, or one-form. It carries the same information as $T$ and the two are related by $T^{a}=C_{\;b}^{a}\wedge e^{b}$ ($T_{\mu\nu}^{a}=2C_{\;b[\mu}^{a}e_{\;\nu]}^{b}$) and $C_{\mu\alpha\nu}=\frac{1}{2}\left(T_{\alpha\mu\nu}+T_{\mu\nu\alpha}-T_{\nu\alpha\mu}\right)$. One can then reconstruct $\omega$ from $e,T$ as $\omega=\tilde{\omega}\left(e\right)+C\left(e,T\right)$. Note that $\omega,\tilde{\omega}$ are both connections, but $C,T$ are tensors. For the $p_{x}\pm ip_{y}$ configuration $\Delta=\Delta_{0}e^{i\theta}\left(1,\pm i\right)$ one finds $\tilde{\omega}_{12\mu}=-\tilde{\omega}_{21\mu}=-\partial_{\mu}\theta$ (with all other components vanishing), and it follows that $C_{12\mu}=D_{\mu}\theta$. These are familiar quantities in the theory of superconductivity, and one can view $\tilde{\omega}$ and $C$ as generalizations of these. General formulas are given in appendix \ref{subsec:Explicit-formulas-for}. Using $\tilde{\omega}$ one can define a covariant derivative $\tilde{D}$ and curvature $\tilde{R}$ just as $D$ and $R$ are constructed from $\omega$. The quantity $\tilde{R}_{\;\nu\rho\sigma}^{\mu}$ is the usual Riemann tensor of Riemannian geometry and general relativity. Note that $\tilde{R}_{\;\nu\rho\sigma}^{\mu}$ depends solely on $g$ which is the Higgs part of the order parameter $\Delta$. Since $g$ is flat in the $p_{x}\pm ip_{y}$ configuration, we conclude that a non vanishing Riemann tensor requires a deviation of the Higgs part of $\Delta$ from the $p_{x}\pm ip_{y}$ configuration. 
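The explicit formula for $\tilde{\omega}$ can be checked symbolically. The sketch below (in Python/SymPy, our illustration) implements $\xi$ and $\tilde{\omega}_{abc}$ for the $p_{x}+ip_{y}$ configuration with a space-time dependent phase $\theta$ and constant $\Delta_{0}$, using the index conventions of \eqref{17}, and recovers $\tilde{\omega}_{12\mu}=-\partial_{\mu}\theta$ for this orientation:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = (t, x, y)
theta = sp.Function('theta')(t, x, y)    # space-time dependent overall phase
D0 = sp.Symbol('Delta_0', positive=True)
c, s = sp.cos(theta), sp.sin(theta)
eta = sp.diag(1, -1, -1)

# Inverse vielbein e_a^mu of eq. (17) for Delta = D0*exp(i*theta)*(1, +i)
# (rows a, columns mu).
M = sp.Matrix([[1, 0, 0],
               [0, D0*c, -D0*s],
               [0, D0*s, D0*c]])
E = sp.simplify(M.inv().T)               # vielbein e^a_mu (rows a, columns mu)

def xi(a, b, cc):
    # xi^a_{bc} = 2 e_b^mu e_c^nu d_[mu e^a_nu]
    return sum(M[b, mu]*M[cc, nu]*(sp.diff(E[a, nu], coords[mu])
                                   - sp.diff(E[a, mu], coords[nu]))
               for mu in range(3) for nu in range(3))

def xi_low(a, b, cc):                    # lower the first index with eta
    return sum(eta[a, d]*xi(d, b, cc) for d in range(3))

def omega_lc(a, b, cc):
    # Levi-Civita spin connection: omega_{abc} = (xi_abc + xi_bca - xi_cab)/2
    return (xi_low(a, b, cc) + xi_low(b, cc, a) - xi_low(cc, a, b))/2

# Convert the last index to a coordinate index: omega_{ab mu} = omega_{abc} e^c_mu
omega12 = [sp.simplify(sum(omega_lc(1, 2, cc)*E[cc, mu] for cc in range(3)))
           for mu in range(3)]
```

For the opposite orientation $\Delta=\Delta_{0}e^{i\theta}\left(1,-i\right)$ the result flips sign, consistent with $\tilde{\omega}$ being orientation sensitive.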
As in Riemannian geometry we can define the Ricci tensor $\tilde{\mathcal{R}}_{\nu\sigma}=\tilde{R}_{\;\nu\mu\sigma}^{\mu}$ and Ricci scalar $\tilde{\mathcal{R}}=\tilde{\mathcal{R}}_{\;\nu}^{\nu}$. Examples for the calculation of $\tilde{\mathcal{R}}$ in terms of $\Delta$ were given in section \ref{subsubsec:Topological bulk responses from a gravitational Chern-Simons term}. Another important quantity which can be constructed from $e,\omega$ is the affine connection $\Gamma_{\;\beta\mu}^{\alpha}=e_{a}^{\;\alpha}\left(\partial_{\mu}e_{\;\beta}^{a}+\omega_{\;b\mu}^{a}e_{\;\beta}^{b}\right)=e_{a}^{\;\alpha}D_{\mu}e_{\;\beta}^{a}$, or affine connection (local) one-form $\Gamma_{\;\beta}^{\alpha}=\Gamma_{\;\beta\mu}^{\alpha}\text{d}x^{\mu}$. It is not difficult to see that $T$ is the anti symmetric part of $\Gamma$, $T_{\;\mu\nu}^{\rho}=\Gamma_{\;\mu\nu}^{\rho}-\Gamma_{\;\nu\mu}^{\rho}$, and it follows that the LC affine connection $\tilde{\Gamma}_{\;\beta\mu}^{\alpha}=e_{a}^{\;\alpha}\tilde{D}_{\mu}e_{\;\beta}^{a}$, for which $T=0$, is symmetric in its two lower indices. This is the usual metric compatible and torsion free connection of Riemannian geometry, given by the Christoffel symbol $\tilde{\Gamma}_{\alpha\beta\mu}=\frac{1}{2}\left(\partial_{\mu}g_{\beta\alpha}+\partial_{\beta}g_{\alpha\mu}-\partial_{\alpha}g_{\mu\beta}\right)$. $\Gamma$ appears in covariant derivatives of tensors with coordinate indices, for example $\nabla_{\mu}v^{\alpha}=\partial_{\mu}v^{\alpha}+\Gamma_{\;\beta\mu}^{\alpha}v^{\beta}$, $\nabla_{\mu}v_{\alpha}=\partial_{\mu}v_{\alpha}-v_{\beta}\Gamma_{\;\alpha\mu}^{\beta}$, and so on. We also denote by $\nabla$ the total covariant derivative of tensors with both coordinate and internal indices, which includes both $\omega$ and $\Gamma$. Thus, for example, $\nabla_{\mu}v_{\;\nu}^{a}=\partial_{\mu}v_{\;\nu}^{a}+\omega_{\;b\mu}^{a}v_{\;\nu}^{b}-v_{\;\alpha}^{a}\Gamma_{\;\nu\mu}^{\alpha}=D_{\mu}v_{\;\nu}^{a}-v_{\;\alpha}^{a}\Gamma_{\;\nu\mu}^{\alpha}$.
The most important occurrence of $\nabla$ is in the identity $\nabla_{\nu}e_{\;\mu}^{a}=0$, which follows from the definition of $\Gamma$ in this formalism, and is sometimes called the first vielbein postulate. It means that the covariant derivative $\nabla$ commutes with index manipulation performed using $e,\eta$ and $g$. To obtain more intuition for what $\Gamma$ is from the $p$-wave SC point of view, we can write it as $\Gamma_{\;a\mu}^{\alpha}=-D_{\mu}e_{a}^{\;\alpha}$. Then it is clear that the non vanishing components of $\Gamma_{\;a\mu}^{\alpha}$ are given by $\Gamma_{\;1\mu}^{j}+i\Gamma_{\;2\mu}^{j}=-D_{\mu}\Delta^{j}$. \subsection{Symmetries, currents, and conservation laws \label{sec:Symmetries,-currents,-and}} In order to map fermionic observables in the $p$-wave SF to those of a Majorana fermion in RC space-time, it is useful to map the symmetries and the corresponding conservation laws between the two. We start with $S_{\text{SF}}$, and then review the analysis of $S_{\text{RC}}$ and show how it maps to that of $S_{\text{SF}}$, in the relativistic limit. The bottom line is that there is a sense in which electric charge and energy-momentum are conserved in a $p$-wave SC, and this maps to the sense in which spin and energy-momentum are conserved for a Majorana spinor in RC space-time.
\subsubsection{Symmetries, currents, and conservation laws of the $p$-wave superfluid action \label{subsec:Symmetries,-currents,-and}} \paragraph{Electric charge } $U\left(1\right)$ gauge transformations act on $\psi,\Delta,A$ by \begin{align} \psi\mapsto e^{i\alpha}\psi,\;\Delta\mapsto e^{2i\alpha}\Delta,\;A_{\mu}\mapsto A_{\mu}+\partial_{\mu}\alpha.\label{eq:6.1} \end{align} This symmetry of $S_{\text{SF}}\left[\psi,\Delta,A\right]$ implies a conservation law for electric charge, \begin{eqnarray} \partial_{\mu}J^{\mu}=-i\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c,\label{eq:15} \end{eqnarray} where $J^{\mu}=-\frac{\delta S}{\delta A_{\mu}}$ is the fermion electric current. Since $A_{\mu}$ does not enter the pairing term, $J^{\mu}$ is the same as in the normal state where $\Delta=0$, \begin{eqnarray} J^{t}=-\psi^{\dagger}\psi,\;J^{j}=-\frac{\delta^{jk}}{m^{*}}\frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{k}}\psi.\label{16} \end{eqnarray} Here $\psi^{\dagger}\overleftrightarrow{D_{k}}\psi=\psi^{\dagger}D_{k}\psi-\left(D_{k}\psi^{\dagger}\right)\psi$. The conservation law \eqref{eq:15} shows that the fermionic charge alone is not conserved due to the exchange of charge between the fermions $\psi$ and Cooper pairs $\Delta$. If one adds a ($U\left(1\right)$ gauge invariant) term $S'\left[\Delta,A\right]$ to the action and considers $\Delta$ as a dynamical field, then it is possible to use the equation of motion $\frac{\delta\left(S'+S\right)}{\delta\Delta}=0$ for $\Delta$ and the definition $J_{\Delta}^{\mu}=-\frac{\delta S'}{\delta A_{\mu}}$ of the Cooper pair current in order to rewrite \eqref{eq:15} as $\partial_{\mu}\left(J^{\mu}+J_{\Delta}^{\mu}\right)=0$. This expresses the conservation of total charge in the $p$-wave SC. 
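The covariance of the derivatives under \eqref{eq:6.1} can be verified symbolically: $D_{\mu}\psi$ picks up the phase $e^{i\alpha}$ and $D_{\mu}\Delta^{j}=\left(\partial_{\mu}-2iA_{\mu}\right)\Delta^{j}$ the phase $e^{2i\alpha}$, which is what makes the pairing term gauge invariant. A minimal sympy sketch in $1+1$ dimensions (the dimension and the sign convention $D_{\mu}=\partial_{\mu}-iqA_{\mu}$ are our assumptions for illustration):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
X = (t, x)
psi = sp.Function('psi')(t, x)      # charge-1 fermion field (treated as commuting for this check)
Delta = sp.Function('Delta')(t, x)  # charge-2 order parameter component
alpha = sp.Function('alpha')(t, x)  # gauge parameter
A = [sp.Function(f'A{mu}')(t, x) for mu in range(2)]

d = lambda f, mu: sp.diff(f, X[mu])
D = lambda f, mu, a, q: d(f, mu) - sp.I*q*a[mu]*f  # covariant derivative of charge q

# Gauge-transformed fields, eq. (6.1)
psi2 = sp.exp(sp.I*alpha)*psi
Delta2 = sp.exp(2*sp.I*alpha)*Delta
A2 = [A[mu] + d(alpha, mu) for mu in range(2)]

for mu in range(2):
    # D_mu psi transforms covariantly with charge 1 ...
    assert sp.expand(D(psi2, mu, A2, 1) - sp.exp(sp.I*alpha)*D(psi, mu, A, 1)) == 0
    # ... and D_mu Delta with charge 2
    assert sp.expand(D(Delta2, mu, A2, 2) - sp.exp(2*sp.I*alpha)*D(Delta, mu, A, 2)) == 0
```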
\paragraph{Energy-momentum \label{subsec:Energy-momentum}} Energy and momentum are at the heart of our analysis, and obtaining the correct expressions for these quantities, as well as interpreting correctly the conservation laws they satisfy, will be crucial. In flat space, one usually starts with the canonical energy-momentum tensor. For a Lagrangian $\mathcal{L}\left(\phi,\partial\phi,x\right)$, where $\phi$ is any fermionic or bosonic field, it is given by \begin{eqnarray} & & t_{\;\nu}^{\mu}=\frac{\partial\mathcal{L}}{\partial\partial_{\mu}\phi}\partial_{\nu}\phi-\delta_{\nu}^{\mu}\mathcal{L}, \end{eqnarray} and satisfies, on the equation of motion for $\phi$, \begin{eqnarray} & & \partial_{\mu}t_{\;\nu}^{\mu}=-\partial_{\nu}\mathcal{L},\label{eq:18} \end{eqnarray} which can be obtained from Noether's first theorem for space-time translations. Thus $t_{\;\nu}^{\mu}$ is conserved if and only if the Lagrangian is independent of the coordinate $x^{\nu}$. This motivates the identification of $t_{\;t}^{\mu}$ as the energy current, and of $t_{\;j}^{\mu}$ as the current of the $j$th component of momentum ($j$-momentum). $t_{\;t}^{t}$ is just the Hamiltonian density, or energy density, and $t_{\;j}^{t}$ is the $j$-momentum density. It is well known, however, that the canonical energy-momentum tensor may fail to be gauge invariant, symmetric in its indices, or traceless, in situations where these properties are physically required, and it is also sensitive to the addition of total derivatives to the Lagrangian. To obtain the physical energy-momentum tensor one can either ``improve'' $t_{\;\nu}^{\mu}$ or appeal to a geometric (gravitational) definition which directly provides the physical energy-momentum tensor \cite{ortin2004gravity,forger2004currents}.
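As a sanity check of the canonical construction, the sketch below verifies symbolically, for a free $1+1$D scalar used as a stand-in field (our toy choice, not the $p$-wave action), that $\partial_{\mu}t_{\;\nu}^{\mu}$ is proportional to the equation of motion and hence vanishes on-shell:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
X = (t, x)
phi = sp.Function('phi')(t, x)
d = lambda f, mu: sp.diff(f, X[mu])

# Free 1+1D scalar Lagrangian (an assumed toy model for illustration)
L = sp.Rational(1, 2)*(d(phi, 0)**2 - d(phi, 1)**2)

# dL/d(d_mu phi), computed by hand for this L
dLd = [d(phi, 0), -d(phi, 1)]

# Canonical tensor t^mu_nu = dL/d(d_mu phi) d_nu phi - delta^mu_nu L
tmn = [[dLd[m]*d(phi, n) - (L if m == n else 0) for n in range(2)] for m in range(2)]

# Euler-Lagrange equation: box phi = 0
eom = d(d(phi, 0), 0) - d(d(phi, 1), 1)

for nu in range(2):
    div = sum(d(tmn[m][nu], m) for m in range(2))
    # The divergence is proportional to the equation of motion, hence zero on-shell
    assert sp.simplify(div - eom*d(phi, nu)) == 0
```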
We will comment on the coupling of the $p$-wave SF to a real background geometry in our discussion, section \ref{sec:Conclusion-and-discussion}, but here we fix the background geometry to be flat, and instead continue by introducing the $U\left(1\right)$-covariant canonical energy-momentum tensor. It can be shown to coincide with the physical energy-momentum tensor obtained by coupling the $p$-wave SF to a real background geometry. Since we work with a fixed flat background geometry in this section, we will only consider space-time transformations which are symmetries of this background, and it will suffice to consider space-time translations and spatial rotations. The $U\left(1\right)$-covariant canonical energy-momentum tensor is relevant in the following situation. Assume that the $x$ dependence in $\mathcal{L}$ is only through a $U\left(1\right)$ gauge field to which $\phi$ is minimally coupled, $\mathcal{L}\left(\phi,\partial\phi,x\right)=\mathcal{L}\left(\phi,D\phi\right)$. Then, $t_{\;\nu}^{\mu}$ is not gauge invariant, and therefore physically ambiguous. This is reflected in the conservation law \eqref{eq:18} which takes the non covariant form \begin{eqnarray} \partial_{\mu}t_{\;\nu}^{\mu}=J^{\mu}\partial_{\nu}A_{\mu},\label{19} \end{eqnarray} where $J^{\mu}=-\frac{\partial\mathcal{L}}{\partial A_{\mu}}$ is the $U\left(1\right)$ current. This lack of gauge invariance is to be expected, as this conservation law follows from translational symmetry, and translations do not commute with gauge transformations. Instead, one should use $U\left(1\right)$-covariant space-time translations, which are translations from $x$ to $x+a$ followed by a $U\left(1\right)$ parallel transport from $x+a$ back to $x$, $\phi\left(x\right)\mapsto e^{iq\int_{x-a}^{x}A}\phi\left(x-a\right)$ where $\phi\mapsto e^{iq\alpha}\phi$ under $U\left(1\right)$ and the integral is over the straight line from $x-a$ to $x$.
This is still a symmetry because the additional $e^{iq\int_{x-a}^{x}A}$ is just a gauge transformation. The conservation law that follows from this modified action of translations is \begin{eqnarray} \partial_{\mu}t_{\text{cov}\;\nu}^{\mu}=F_{\nu\mu}J^{\mu},\label{eq:21} \end{eqnarray} where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the electromagnetic field strength, and \begin{eqnarray} t_{\text{cov}\;\nu}^{\mu}=\frac{\partial\mathcal{L}}{\partial D_{\mu}\phi}D_{\nu}\phi-\delta_{\nu}^{\mu}\mathcal{L}=t_{\;\nu}^{\mu}-J^{\mu}A_{\nu}\label{eq:32-00} \end{eqnarray} is the $U\left(1\right)$-covariant version of $t_{\;\nu}^{\mu}$, which we refer to as the $U\left(1\right)$-covariant canonical energy-momentum tensor. The right hand side of \eqref{eq:21} is just the usual Lorentz force, which acts as a source of $U\left(1\right)$-covariant energy-momentum. We stress that the covariant and non covariant conservation laws are equivalent, as can be verified by using the fact that $\partial_{\mu}J^{\mu}=0$ in this case. Both hold in any gauge, but in \eqref{eq:21} all quantities are gauge invariant. 
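The claimed equivalence of the covariant and non covariant conservation laws, given $\partial_{\mu}J^{\mu}=0$, is a pointwise algebraic identity and can be checked symbolically. The sketch below treats $t_{\;\nu}^{\mu}$, $J^{\mu}$ and $A_{\mu}$ as unspecified functions in $1+1$D, with index placement handled naively via the flat metric (a toy setup):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
X = (t, x)

# Unspecified component functions of the gauge field, current, and canonical tensor
A = [sp.Function(f'A{m}')(t, x) for m in range(2)]
J = [sp.Function(f'J{m}')(t, x) for m in range(2)]
tmn = [[sp.Function(f't{m}{n}')(t, x) for n in range(2)] for m in range(2)]

d = lambda f, mu: sp.diff(f, X[mu])
divJ = sum(d(J[m], m) for m in range(2))

for nu in range(2):
    # Non-covariant law (19): d_mu t^mu_nu - J^mu d_nu A_mu
    noncov = sum(d(tmn[m][nu], m) for m in range(2)) - sum(J[m]*d(A[m], nu) for m in range(2))
    # Covariant tensor (32-00): t_cov^mu_nu = t^mu_nu - J^mu A_nu
    tcov = [tmn[m][nu] - J[m]*A[nu] for m in range(2)]
    # Covariant law (21): d_mu t_cov^mu_nu - F_{nu mu} J^mu
    cov = (sum(d(tcov[m], m) for m in range(2))
           - sum((d(A[m], nu) - d(A[nu], m))*J[m] for m in range(2)))
    # The two laws differ by -A_nu d_mu J^mu, so they agree whenever the current is conserved
    assert sp.expand(cov - noncov + A[nu]*divJ) == 0
```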
For the $p$-wave SF one obtains the $U\left(1\right)$-covariant energy-momentum tensor \begin{eqnarray} t_{\text{cov}\;t}^{t}&=&\frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{t}}\psi-\mathcal{L}\label{25-2}\\ &=& \frac{\delta^{ij}D_{i}\psi^{\dagger}D_{j}\psi}{2m^{*}}+m\psi^{\dagger}\psi-\left(\frac{1}{2}\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c\right),\nonumber\\ t_{\text{cov}\;j}^{t}&=&\frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{j}}\psi,\nonumber \\ t_{\text{cov}\;t}^{i}&=&-\frac{\delta^{ik}\left(D_{k}\psi\right)^{\dagger}D_{t}\psi}{2m^{*}}+\frac{1}{2}\psi^{\dagger}\Delta^{i}\partial_{t}\psi^{\dagger}+h.c,\nonumber \\ t_{\text{cov}\;j}^{i}&=&-\frac{\delta^{ik}\left(D_{k}\psi\right)^{\dagger}D_{j}\psi}{2m^{*}}+\frac{1}{2}\psi^{\dagger}\Delta^{i}\partial_{j}\psi^{\dagger}+h.c-\delta_{j}^{i}\mathcal{L}.\nonumber \end{eqnarray} The $U\left(1\right)$-covariant conservation law is slightly more complicated than \eqref{eq:21} due to the additional background field $\Delta$, \begin{align} \partial_{\mu}t_{\text{cov}\;\nu}^{\mu}=\frac{1}{2}\psi^{\dagger}\partial_{j}\psi^{\dagger}D_{\nu}\Delta^{j}+h.c+F_{\nu\mu}J^{\mu},\label{32} \end{align} where we have used the $U\left(1\right)$ conservation law \eqref{eq:15}, and $D_{\mu}\Delta^{j}=\left(\partial_{\mu}-2iA_{\mu}\right)\Delta^{j}$. This conservation law shows that ($U\left(1\right)$-covariant) fermionic energy-momentum is not conserved due to the exchange of energy-momentum with the background fields $A,\Delta$. Apart from the Lorentz force there is an additional source term due to the space-time dependence of $\Delta$. As in the case of the electric charge, if one considers $\Delta$ as a dynamical field and uses its equation of motion, \eqref{32} can be written as\footnote{$t_{\Delta\;\text{cov}\;\nu}^{\mu}$ is the $U\left(1\right)$-covariant energy-momentum tensor of Cooper pairs.
It is defined by \eqref{eq:32-00} with $\phi=\Delta$ and $\mathcal{L}=\mathcal{L}'\left(\Delta,\Delta^{*},D\Delta,D\Delta^{*}\right)$ being the (gauge invariant) term added to the $p$-wave SF Lagrangian. Here it is important that the coupling of $\Delta$ to $\psi$ in \eqref{eq:10} can be written without derivatives of $\Delta$. } \begin{align} \partial_{\mu}\left(t_{\text{cov}\;\nu}^{\mu}+t_{\Delta\;\text{cov}\;\nu}^{\mu}\right)=F_{\nu\mu}\left(J^{\mu}+J_{\Delta}^{\mu}\right),\label{33} \end{align} which is of the general form \eqref{eq:21}. Note that the spatial part $t_{\text{cov}\;j}^{i}$ is not symmetric, \begin{align} t_{\text{cov}\;y}^{x}-t_{\text{cov}\;x}^{y}=\frac{1}{2}\psi^{\dagger}\left(\Delta^{x}\partial_{y}-\Delta^{y}\partial_{x}\right)\psi^{\dagger}+h.c, \label{eq:28-0} \end{align} which physically represents an exchange of angular momentum between $\Delta$ and $\psi$, possible because of the intrinsic angular momentum of Cooper pairs in a $p$-wave SC. Explicitly, the ($U\left(1\right)$-covariant) angular momentum current is given by $J_{\varphi}^{\mu}=t_{\text{cov}\;\varphi}^{\mu}=t_{\text{cov}\;\nu}^{\mu}\zeta^{\nu}$ where $\zeta=x\partial_{y}-y\partial_{x}=\partial_{\varphi}$ is the generator of spatial rotations around $x=y=0$, and $\varphi$ is the polar angle. From \eqref{32} and \eqref{eq:28-0} we find its conservation law \begin{align} \partial_{\mu}J_{\varphi}^{\mu}=\left(\frac{1}{2}\psi^{\dagger}\partial_{j}\psi^{\dagger}D_{\varphi}\Delta^{j}+h.c+F_{\varphi\mu}J^{\mu}\right)\label{eq:36}\\ +\frac{1}{2}\psi^{\dagger}\left(\Delta^{x}\partial_{y}-\Delta^{y}\partial_{x}\right)\psi^{\dagger}+h.c,\nonumber \end{align} which shows that even when the Lorentz force in the $\varphi$ direction vanishes and $\Delta$ is ($U\left(1\right)$-covariantly) constant in the $\varphi$ direction, $\Delta$ still acts as a source for fermionic angular momentum, due to the last term.
Even though fermionic angular momentum is never strictly conserved in a $p$-wave SF, it is well known that a certain combination of fermionic charge and fermionic angular momentum can be strictly conserved \cite{shitade2014bulk,tada2015orbital,volovik2015orbital,shitade2015orbital}. Indeed, using \eqref{eq:36} and \eqref{eq:15}, \begin{align} &\partial_{\mu}\left(J_{\varphi}^{\mu}\mp\frac{1}{2}J^{\mu}\right)=\left(\frac{1}{2}\psi^{\dagger}\partial_{j}\psi^{\dagger}D_{\varphi}\Delta^{j}+h.c+F_{\varphi\mu}J^{\mu}\right)\nonumber \\ & \pm\frac{i}{2}\psi^{\dagger}\left(\Delta^{x}\pm i\Delta^{y}\right)\left(\partial_{x}\mp i\partial_{y}\right)\psi^{\dagger}+h.c. \end{align} We see that when $F_{\varphi\mu}=0$, $D_{\varphi}\Delta=0$ and $\Delta^{y}=\pm i\Delta^{x}$, the above current is strictly conserved \begin{eqnarray} & & \partial_{\mu}\left(J_{\varphi}^{\mu}\mp\frac{1}{2}J^{\mu}\right)=0, \end{eqnarray} which occurs in the generalized $p_{x}\pm ip_{y}$ configuration $\Delta=e^{i\theta\left(r,t\right)}\Delta_{0}\left(r,t\right)\left(1,\pm i\right)$, written in the gauge $A_{\varphi}=0$, and where $r=\sqrt{x^{2}+y^{2}}$. This conservation law follows from the symmetry of the generalized $p_{x}\pm ip_{y}$ configuration under the combination of a spatial rotation by an angle $\alpha$ and a $U\left(1\right)$ transformation by a phase $\mp\alpha/2$. \subsubsection{Symmetries, currents, and conservation laws in the geometric description\label{subsec:Currents,-symmetries,-and}} The symmetries and conservation laws for Dirac fermions have been described recently in \cite{hughes2013torsional}. Here we review the essential details (for Majorana fermions) and focus on the mapping to the symmetries and conservation laws of the $p$-wave SF action \eqref{eq:14-0}, which were described in section \ref{subsec:Symmetries,-currents,-and}. 
\paragraph{Currents in the geometric description \label{currents}} The natural currents in the geometric description are defined by the functional derivatives of the action $S_{\text{RC}}$ with respect to the background fields $e,\omega$, \begin{eqnarray} \mathsf{J}_{\;a}^{\mu}=\frac{1}{\left|e\right|}\frac{\delta S_{\text{RC}}}{\delta e_{\;\mu}^{a}},\;\mathsf{J}^{ab\mu}=\frac{1}{\left|e\right|}\frac{\delta S_{\text{RC}}}{\delta\omega_{ab\mu}}.\label{eq:54} \end{eqnarray} $\mathsf{J}_{\;a}^{\mu}$ is the energy-momentum tensor and $\mathsf{J}^{ab\mu}$ is the spin current. Note that we use $\mathsf{J}$ as opposed to $J$ to distinguish the geometric currents from the $p$-wave SF currents described in the previous section, though the two are related as shown below. Calculating the geometric currents for the action \eqref{eq:43-1} one obtains \begin{eqnarray} 2\mathsf{J}_{\;a}^{\mu}&=&\mathcal{L}_{\text{RC}}e_{a}^{\;\mu}-\frac{i}{2}\overline{\chi}\left(\gamma^{\mu}D_{a}-\overleftarrow{D_{a}}\gamma^{\mu}\right)\chi,\label{eq:55}\\ 2\mathsf{J}^{ab\mu}&=&-\frac{1}{4}\overline{\chi}\chi e_{c}^{\;\mu}\varepsilon^{abc},\nonumber \end{eqnarray} where $\mathcal{L}_{\text{RC}}=\overline{\chi}\left[\frac{i}{2}e_{a}^{\;\mu}\left(\gamma^{a}D_{\mu}-\overleftarrow{D_{\mu}}\gamma^{a}\right)-m\right]\chi$ is (twice) the Lagrangian, which vanishes on the $\chi$ equation of motion. We see that $\mathsf{J}_{\;a}^{\mu}$ is essentially the $SO\left(1,2\right)$-covariant version of the canonical energy-momentum tensor of the spinor $\chi$. We also see that the spin current $\mathsf{J}^{ab\mu}$ has a particularly simple form in $D=2+1$: it is just the spin density $\frac{1}{2}\overline{\chi}\chi$ times a tensor $-\frac{1}{2}e_{c}^{\;\mu}\varepsilon^{abc}$ that only depends on the background field $e$.
Using the expressions \eqref{17} for the geometric fields we find that $\mathsf{J}_{\;a}^{\mu},\;\mathsf{J}^{ab\mu}$ are related simply to the electric current and the ($U\left(1\right)$-covariant) canonical energy-momentum tensor described in section \ref{subsec:Symmetries,-currents,-and}, in the limit $m^{*}\rightarrow\infty$, \begin{eqnarray} & & J^{\mu}=4\left|e\right|\mathsf{J}^{12\mu}=-\psi^{\dagger}\psi\delta_{t}^{\mu},\label{eq:56}\\ & & t_{\text{cov}\;\nu}^{\mu}=-\left|e\right|\mathsf{J}_{\;\nu}^{\mu}=\begin{cases} \frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{\nu}}\psi & \mu=t\\ \frac{1}{2}\psi^{\dagger}\Delta^{j}\partial_{\nu}\psi^{\dagger}+h.c & \mu=j \end{cases}.\nonumber \end{eqnarray} Here we have simplified $t_{\text{cov}}$ using the equation of motion for $\psi$, and one can also use the equation of motion to remove time derivatives and obtain Schr\"odinger picture operators. For example, $t_{\text{cov}\;t}^{t}=\frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{t}}\psi=m\psi^{\dagger}\psi-\left(\frac{1}{2}\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c\right)$ is just the ($U\left(1\right)$-covariant) Hamiltonian density in the relativistic limit. The expression for the energy current $t_{\text{cov}\;t}^{i}$ is more complicated, and it is convenient to write it using some of the geometric quantities introduced above \begin{align} t_{\text{cov }t}^{j}=g^{jk}\frac{i}{2}\psi^{\dagger}\overleftrightarrow{D_{k}}\psi-&\frac{o}{2}\partial_{k}\left(\frac{1}{\left|e\right|}\varepsilon^{jk}\psi^{\dagger}\psi\right)\label{eq:49}\\ -&\left(\psi^{\dagger}\psi\right)g^{jk}C_{12k}.\nonumber \end{align} This is an expression for the energy current in terms of the momentum and charge densities, and it will be obtained below as a consequence of Lorentz symmetry in the relativistic limit. We now describe the symmetries of the action \eqref{eq:43-1} and the conservation laws they imply for these currents.
As expected, these conservation laws turn out to be essentially the ones derived in section \ref{subsec:Symmetries,-currents,-and}, in the relativistic limit. \paragraph{Spin \label{spin}} The Lorentz Lie algebra $so\left(1,2\right)$ consists of matrices $\theta\in\mathbb{R}^{3\times3}$ with entries $\theta_{\;b}^{a}$ such that $\theta_{ab}=-\theta_{ba}$. These can be spanned as $\theta=\frac{1}{2}\theta_{ab}L^{ab}$ where the generators $L^{ab}=-L^{ba}$ are defined such that $\eta L^{ab}$ is the antisymmetric matrix with $1$ ($-1$) at position $a,b$ ($b,a$) and zero elsewhere. The spinor representation of $\theta$ is \begin{eqnarray} \hat{\theta}=\frac{1}{2}\theta_{ab}\Sigma^{ab},\;\Sigma^{ab}=\frac{1}{4}\left[\gamma^{a},\gamma^{b}\right]. \end{eqnarray} Local Lorentz transformations act on $\chi,e,\omega$ by \begin{eqnarray} \chi&\mapsto& e^{-\hat{\theta}}\chi,\;e_{a}^{\;\mu}\mapsto e_{b}^{\;\mu}\left(e^{\theta}\right)_{\;a}^{b},\nonumber\\ \omega_{\mu}&\mapsto& e^{-\hat{\theta}}\left(\partial_{\mu}+\omega_{\mu}\right)e^{\hat{\theta}}.\label{eq:20-00} \end{eqnarray} The subgroup of $SO\left(1,2\right)$ that is physical in the $p$-wave SC is the $SO\left(2\right)$ generated by $L^{12}$. Using the relations \eqref{17} between the $p$-wave SC fields and the geometric fields, and choosing $\theta=\theta_{12}L^{12}=-2\alpha L^{12}$, the transformation law \eqref{eq:20-00} reduces to the $U\left(1\right)$ transformation \eqref{eq:6.1}, \begin{align} \psi\mapsto e^{i\alpha}\psi,\;\Delta\mapsto e^{2i\alpha}\Delta,\;A_{\mu}\mapsto A_{\mu}+\partial_{\mu}\alpha. \end{align} The factor of 2 in $\theta_{12}=-2\alpha$ shows that $U(1)$ actually maps to $Spin(2)$, the double cover of $SO(2)$. Moreover, the fact that $\Delta$ has $U\left(1\right)$ charge 2 while $\psi$ has $U\left(1\right)$ charge 1 corresponds to $e_{a}^{\;\mu}$ being an $SO\left(1,2\right)$ vector while $\chi$ is an $SO\left(1,2\right)$ spinor.
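These algebraic statements are easy to verify numerically. The sketch below picks one common $2+1$D representation, $\gamma^{0}=\sigma^{3}$, $\gamma^{1,2}=i\sigma^{1,2}$ (the text does not fix a representation, so this is an assumption), checks the Clifford algebra $\left\{\gamma^{a},\gamma^{b}\right\}=2\eta^{ab}$, shows that $\Sigma^{12}=-\frac{i}{2}\sigma^{3}$, so that $\theta_{12}=-2\alpha$ indeed produces the phases $e^{\mp i\alpha}$ on the spinor components, and verifies at the Lie algebra level that $\theta_{\;b}^{a}=\eta^{ac}\theta_{cb}$ with antisymmetric $\theta_{ab}$ preserves $\eta$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [s3, 1j*s1, 1j*s2]       # assumed representation, signature (+,-,-)
eta = np.diag([1.0, -1.0, -1.0])

# Clifford algebra: {gamma^a, gamma^b} = 2 eta^{ab}
for a in range(3):
    for b in range(3):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2*eta[a, b]*np.eye(2))

# Spinor generators Sigma^{ab} = (1/4)[gamma^a, gamma^b]
Sigma = [[(gamma[a] @ gamma[b] - gamma[b] @ gamma[a])/4 for b in range(3)]
         for a in range(3)]

# Sigma^{12} = -(i/2) sigma^3: with theta_hat = theta_12 Sigma^{12} and theta_12 = -2 alpha,
# e^{-theta_hat} = e^{-i alpha sigma^3} = diag(e^{-i alpha}, e^{+i alpha})
assert np.allclose(Sigma[1][2], -0.5j*s3)

# Lie algebra of O(1,2): theta^a_b = eta^{ac} theta_{cb} with theta_{ab} antisymmetric
# satisfies theta^T eta + eta theta = 0, so exp(theta) preserves eta
rng = np.random.default_rng(0)
th_low = rng.normal(size=(3, 3))
th_low = th_low - th_low.T       # theta_{ab} = -theta_{ba}
theta = eta @ th_low             # raise the first index (eta^{-1} = eta here)
assert np.allclose(theta.T @ eta + eta @ theta, 0)
```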
The Lie algebra version of \eqref{eq:20-00} is \begin{align} \delta\chi=-\frac{1}{2}\theta_{ab}\Sigma^{ab}\chi,\;\delta e_{\;\mu}^{a}=-\theta_{\;b}^{a}e_{\;\mu}^{b},\;\delta\omega_{\;b\mu}^{a}=D_{\mu}\theta_{\;b}^{a}. \end{align} Invariance of $S_{\text{RC}}$ under this variation implies the conservation law \begin{eqnarray} \nabla_{\mu}\mathsf{J}^{ab\mu}-\mathsf{J}^{ab\rho}T_{\mu\rho}^{\mu}=\mathsf{J}^{[ab]},\label{eq:57} \end{eqnarray} valid on the equations of motion for $\chi$ \cite{hughes2013torsional,bradlyn2015low}. This conservation law relates the antisymmetric part of the energy-momentum tensor to the divergence of the spin current. Essentially, the energy-momentum tensor is not symmetric due to the presence of the background field $\omega$ which transforms under $SO\left(1,2\right)$. From a different point of view, the vielbein $e$ acts as a source for the fermionic spin current since it is charged under $SO\left(1,2\right)$. Inserting the expressions \eqref{17} into the $\left(a,b\right)=\left(1,2\right)$ component of \eqref{eq:57} we obtain \eqref{eq:15}, \begin{eqnarray} & & \partial_{\mu}J^{\mu}=-i\psi^{\dagger}\Delta^{j}\partial_{j}\psi^{\dagger}+h.c.\label{eq:15-1} \end{eqnarray} The other components of \eqref{eq:57} follow from the symmetry under local boosts, which is only a symmetry of $S_{\text{SF}}$ when $m^{*}\rightarrow\infty$. These can be used to obtain the formula \eqref{eq:49} for the energy current of the $p$-wave SF, in the limit $m^{*}\rightarrow\infty$, in terms of the momentum and charge densities. \paragraph{Energy-momentum\label{subsec:Diffeomorphism-symmetry}} A diffeomorphism is a smooth invertible map between manifolds. We consider only diffeomorphisms from space-time to itself and denote the group of such maps by $Diff$.
Since the flat background metric $\delta^{ij}$ decouples in the relativistic limit, it makes sense to consider all diffeomorphisms, and not restrict to symmetries of $\delta^{ij}$ as we did in section \ref{subsec:Energy-momentum}. Locally, diffeomorphisms can be described by coordinate transformations $x\mapsto x'=f\left(x\right)$. The Lie algebra is that of vector fields $\zeta^{\nu}\left(x\right)$, which means diffeomorphisms in the connected component of the identity $Diff_{0}$ can be written as $f\left(x\right)=f_{1}\left(x\right)$ where $f_{\varepsilon}\left(x\right)=\exp_{x}\left(\varepsilon\zeta\right)=x+\varepsilon\zeta\left(x\right)+O\left(\varepsilon^{2}\right)$ is the flow of $\zeta$ \cite{nakahara2003geometry}. $Diff$ acts on the geometric fields by the pullback \begin{eqnarray} \chi\left(x\right)&\mapsto&\chi\left(f\left(x\right)\right),\;e_{\;\mu}^{a}\left(x\right)\mapsto\partial_{\mu}f^{\nu}e_{\;\nu}^{a}\left(f\left(x\right)\right),\nonumber\\ \omega_{\mu}\left(x\right)&\mapsto&\partial_{\mu}f^{\nu}\omega_{\nu}\left(f\left(x\right)\right).\label{eq:51} \end{eqnarray} The action of $Diff$ on the $p$-wave SF fields is similar, and follows from \eqref{eq:51} supplemented by the dictionary \eqref{17}. For $f\in Diff_{0}$ generated by $\zeta$, the Lie algebra version of \eqref{eq:51} is given by the Lie derivative, \begin{eqnarray} \delta\chi&=&\mathcal{L}_{\zeta}\chi=\zeta^{\mu}\partial_{\mu}\chi,\label{eq:53-0}\\ \delta e_{\;\mu}^{a}&=&\mathcal{L}_{\zeta}e_{\;\mu}^{a}=\partial_{\mu}\zeta^{\nu}e_{\;\nu}^{a}+\zeta^{\nu}\partial_{\nu}e_{\;\mu}^{a},\nonumber \\ \delta\omega_{\mu}&=&\mathcal{L}_{\zeta}\omega_{\mu}=\partial_{\mu}\zeta^{\nu}\omega_{\nu}+\zeta^{\nu}\partial_{\nu}\omega_{\mu}.\nonumber \end{eqnarray} Since these variations are not Lorentz covariant, they will give rise to a conservation law which is not Lorentz covariant.
This follows from the fact that the naive $Diff$ action \eqref{eq:51} does not commute with Lorentz gauge transformations, as was described for the simpler case of translations and $U\left(1\right)$ gauge transformations in section \ref{subsec:Energy-momentum}. Instead, one should use the Lorentz-covariant $Diff$ action, which is the pull back from $f\left(x\right)$ to $x$ followed by a Lorentz parallel transport from $f\left(x\right)$ to $x$ along the integral curve $\gamma_{x,\zeta}\left(\varepsilon\right)=\exp_{x}\left(\varepsilon\zeta\right)=f_{\varepsilon}\left(x\right)$, \begin{eqnarray} \chi\left(x\right)&\mapsto& P\chi\left(f\left(x\right)\right),\label{eq:62}\\ e_{\;\mu}^{a}\left(x\right)&\mapsto& P_{\;b}^{a}\partial_{\mu}f^{\nu}e_{\;\nu}^{b}\left(f\left(x\right)\right),\nonumber\\ \omega_{\mu}\left(x\right)&\mapsto& P\left[\partial_{\mu}f^{\nu}\omega_{\nu}\left(f\left(x\right)\right)+\partial_{\mu}\right]P^{-1},\nonumber \end{eqnarray} where $P=\mathcal{P}\exp\left(-\int_{\gamma_{x,\zeta}}\omega\right)$ is the spin parallel transport given by the path ordered exponential, and $P_{\;b}^{a}$ is the corresponding vector representation. At the Lie algebra level, this modification of \eqref{eq:51} amounts to an infinitesimal Lorentz gauge transformation generated by $\theta_{ab}=-\zeta^{\rho}\omega_{ab\rho}$, which modifies \eqref{eq:53-0} to the covariant expressions \begin{eqnarray} \delta\chi&=&\zeta^{\mu}\nabla_{\mu}\chi,\label{eq:53-1}\\ \delta e_{\;\mu}^{a}&=&\nabla_{\mu}\zeta^{a}-T_{\mu\nu}^{a}\zeta^{\nu},\nonumber \\ \delta\omega_{ab\mu}&=&\zeta^{\nu}R_{ab\nu\mu}.\nonumber \end{eqnarray} Since the usual $Diff$ and Lorentz actions on the fields are both symmetries of $S_{\text{RC}}$, so is the Lorentz-covariant $Diff$ action.
This leads directly to the conservation law \begin{eqnarray} & & \nabla_{\mu}\mathsf{J}_{\;\nu}^{\mu}-\mathsf{J}_{\;\nu}^{\rho}T_{\mu\rho}^{\mu}=T_{\nu\mu}^{b}\mathsf{J}_{\;b}^{\mu}+R_{bc\nu\mu}\mathsf{J}^{bc\mu},\label{eq:71} \end{eqnarray} valid on the equations of motion for $\chi$ \cite{bradlyn2015low,hughes2013torsional}. We find it useful to rewrite \eqref{eq:71} in a way which isolates the effect of torsion, \begin{eqnarray} & & \tilde{\nabla}_{\mu}\mathsf{J}_{\;\nu}^{\mu}=C_{ab\nu}\mathsf{J}^{[ab]}+R_{ab\nu\mu}\mathsf{J}^{ab\mu},\label{eq:60} \end{eqnarray} where we note that the curvature also depends on the torsion, $R=\tilde{R}+\tilde{D}C+C\wedge C$. Equation \eqref{eq:71} can also be massaged to the non-covariant form \begin{align} \partial_{\mu}\left(\left|e\right|\mathsf{J}_{\;\nu}^{\mu}\right)=\left(e_{a}^{\;\rho}D_{\nu}e_{\;\mu}^{a}\right)\left|e\right|\mathsf{J}_{\;\rho}^{\mu}+R_{\nu\mu ab}\left|e\right|\mathsf{J}^{ab\mu}.\label{eq:66} \end{align} Using the dictionary \eqref{17} and the subsequent paragraph, and \eqref{eq:56}, this reduces to \begin{align} \partial_{\mu}t_{\text{cov}\;\nu}^{\mu}=\left(D_{\nu}\Delta^{j}\right)\frac{1}{2}\psi^{\dagger}\partial_{j}\psi^{\dagger}+h.c+F_{\nu\mu}J^{\mu}, \end{align} which is just the energy-momentum conservation law \eqref{32} for the $p$-wave SF (with $m^{*}\rightarrow\infty$). Writing the conservation law in the form \eqref{eq:66} may not seem natural from the geometric point of view because it uses the partial derivative as opposed to a covariant derivative. It is, however, natural from the $p$-wave SC point of view, where space-time is actually flat and $e$ is viewed as a bosonic field, the order parameter $\Delta$, with no geometric role. This point is important in the context of the gravitational anomaly in the $p$-wave SC, see Appendix \ref{subsec:Boundary-gravitational-anomaly}.
Similar statements hold for other mechanisms for emergent/analogue gravity, see section I.6 of \cite{volovik2009universe} and \cite{keser2016analogue}, and were also made in the gravitational context without reference to emergent phenomena \cite{leclerc2006canonical}. \subsection{Bulk response\label{sec:Bulk-response} } \subsubsection{Currents from effective action\label{subsec:Bulk-response-from}} The effective action for the background fields is obtained by integrating over the spinless fermion $\psi$, \begin{eqnarray} e^{iW_{\text{SF}}\left[\Delta,A\right]}=\int\text{D}\psi^{\dagger}\text{D}\psi e^{iS_{\text{SF}}\left[\psi,\psi^{\dagger},\Delta,A\right]}. \end{eqnarray} The integral is a fermionic coherent state functional integral, over the Grassmann valued fields $\psi,\psi^{\dagger}$, and the action $S_{\text{SF}}$ is given in \eqref{eq:10}. As described in section \ref{sec:Emergent-Riemann-Cartan-geometry}, in the relativistic limit $W_{\text{SF}}$ is equal to the effective action obtained by integrating over a Majorana fermion coupled to RC geometry, \begin{eqnarray} e^{iW_{\text{SF}}\left[\Delta,A\right]}&=&e^{iW_{\text{RC}}\left[e,\omega\right]}\label{eq:73-1}\\ &=&\int\text{D}\left(\left|e\right|^{1/2}\chi\right)e^{iS_{\text{RC}}\left[\chi,e,\omega\right]},\nonumber \end{eqnarray} where $e,\omega$ are given in terms of $\Delta,A$ by \eqref{17}.
It follows from the definition \eqref{eq:54} of the spin current $\mathsf{J}^{ab\mu}$ and the energy-momentum tensor $\mathsf{J}_{\;a}^{\mu}$ as functional derivatives of $S_{\text{RC}}$ that their ground state expectation values are given by \begin{eqnarray} & & \left\langle \mathsf{J}_{\;a}^{\mu}\right\rangle =\frac{1}{\left|e\right|}\frac{\delta W_{\text{RC}}}{\delta e_{\;\mu}^{a}},\;\left\langle \mathsf{J}^{ab\mu}\right\rangle =\frac{1}{\left|e\right|}\frac{\delta W_{\text{RC}}}{\delta\omega_{ab\mu}}.\label{eq:54-1} \end{eqnarray} Using the mapping \eqref{eq:56} between $\mathsf{J}_{\;a}^{\mu},\;\mathsf{J}^{ab\mu}$ and $t_{\text{cov}\;\nu}^{\mu},\;J^{\mu}$ we see that \begin{eqnarray} & & \left\langle J^{\mu}\right\rangle =4\left|e\right|\left\langle \mathsf{J}^{12\mu}\right\rangle =4\frac{\delta W_{\text{RC}}\left[e,\omega\right]}{\delta\omega_{12\mu}},\label{eq:71-1}\\ & & \left\langle t_{\text{cov}\;\nu}^{\mu}\right\rangle =-\left|e\right|e_{\;\nu}^{a}\left\langle \mathsf{J}_{\;a}^{\mu}\right\rangle =-e_{\;\nu}^{a}\frac{\delta W_{\text{RC}}\left[e,\omega\right]}{\delta e_{\;\mu}^{a}}.\nonumber \end{eqnarray} This is the recipe we will use to obtain the expectation values $\left\langle J^{\mu}\right\rangle ,\left\langle t_{\text{cov}\;\nu}^{\mu}\right\rangle $ from the effective action $W_{\text{RC}}$ for a Majorana spinor in RC space-time. Note that in \eqref{eq:71-1} there are derivatives with respect to all components of the vielbein, not just the spatial ones which we can physically obtain from $\Delta$. For this reason, to get all components of $\left\langle t_{\text{cov}\;\nu}^{\mu}\right\rangle $, we should obtain $W_{\text{RC}}$ for general $e$, take the functional derivative in \eqref{eq:71-1}, and only then set $e$ to the configuration obtained from $\Delta$ according to \eqref{17}. 
From the $p$-wave SF point of view, this corresponds to the introduction of a fictitious background field $e_{0}^{\;\mu}$ which enters $S_{\text{SF}}$ by generalizing $\psi^{\dagger}iD_{t}\psi$ to $\psi^{\dagger}\frac{i}{2}e_{0}^{\;\mu}\overleftrightarrow{D_{\mu}}\psi$, and setting $e_{0}^{\;\mu}=\delta_{t}^{\mu}$ at the end of the calculation, as in \cite{bradlyn2015low}. Before we move on, we offer some intuition for the expressions \eqref{eq:71-1}. The first equation in \eqref{eq:71-1} follows from the definition $J^{\mu}=-\frac{\delta S_{\text{SF}}}{\delta A_{\mu}}$ of the electric current and the simple relation $\omega_{12\mu}=-\omega_{21\mu}=-2A_{\mu}$ between the spin connection and the $U\left(1\right)$ connection. The second equation in \eqref{eq:71-1} is slightly trickier. It implies that the (relativistic part of the) energy-momentum tensor $t_{\text{cov}\;\nu}^{\mu}$ is given by a functional derivative with respect to the order parameter $\Delta$, because $\Delta$ is essentially the vielbein $e$. This may seem strange, and it is certainly not the case in an $s$-wave SC, where $\frac{\delta H}{\delta\Delta}\sim\psi_{\uparrow}^{\dagger}\psi_{\downarrow}^{\dagger}$ has nothing to do with energy-momentum. In a $p$-wave SC, the operator $\frac{\delta H}{\delta\Delta^{j}}\sim\psi^{\dagger}\partial_{j}\psi^{\dagger}$ contains a spatial derivative which hints that it is related to fermionic momentum. More accurately, we see from \eqref{25-2} that the operator $\psi^{\dagger}\partial_{j}\psi^{\dagger}$ enters the energy-momentum tensor in a $p$-wave SC. 
\subsubsection{Effective action from perturbation theory } \paragraph{Setup and generalities} We consider the effective action for a $p$-wave SF on the plane $\mathbb{R}^{2}$, with the corresponding space-time manifold $M_{3}=\mathbb{R}_{t}\times\mathbb{R}^{2}$, by using perturbation theory around the $p_{x}\pm ip_{y}$ configuration $\Delta=\Delta_{0}e^{i\theta}\left(1,\pm i\right)$ with no electromagnetic fields $\partial_{\mu}\theta-2A_{\mu}=0$. After $U\left(1\right)$ gauge fixing $\theta=0$\footnote{In doing so we are ignoring the possibility of vortices, see \cite{ariad2015effective}.}, we obtain $\Delta=\Delta_{0}\left(1,\pm i\right),\;A=0$. Let us start with the $p_{x}+ip_{y}$ configuration, which has a positive orientation, in which case the corresponding (gauge fixed) vielbein and spin connection are just $e_{a}^{\;\mu}=\delta_{a}^{\mu}$ and $\omega_{ab\mu}=0$. A perturbation of the $p_{x}+ip_{y}$ configuration corresponds to $e_{a}^{\;\mu}=\delta_{a}^{\mu}+h_{a}^{\;\mu}$ with a small $h$ and to a small spin connection $\omega_{ab\mu}$. In other words, a perturbation of the $p_{x}+ip_{y}$ configuration without electromagnetic fields corresponds to a perturbation of flat and torsion-less space-time. The effective action for a Dirac spinor in a background RC geometry was recently calculated perturbatively around flat and torsionless space-time, with a positive orientation, in the context of geometric responses of Chern insulators \cite{hughes2013torsional,parrikar2014torsion}. This is equal to $2W_{\text{RC}}$ where $W_{\text{RC}}$ is the effective action for a Majorana spinor in RC geometry. At this point it seems that we can apply these results in order to obtain the effective action for the $p$-wave SC, in the relativistic limit. There is, however, an additional ingredient in the perturbative calculation of the effective action which we did not yet discuss, which is the renormalization scheme used to handle diverging integrals.
We refer to terms in the effective action that involve diverging integrals as \textit{UV sensitive}. The values one obtains for such terms depend on the details of the renormalization scheme, or in other words, on microscopic details that are not included in the continuum action. For us, the continuum description is simply an approximation to the lattice model, where space is a lattice but time is continuous. This implies a physical cutoff $\Lambda_{UV}$ for wave-vectors, but not for frequencies. In particular, such a scheme is not Lorentz invariant, even though the action in the relativistic limit is. Lorentz symmetry is in any case broken down to spatial $SO\left(2\right)$ for finite $m^{*}$. For these reasons, UV sensitive terms in the effective action $W_{\text{RC}}$ for the $p$-wave SC will be assigned different values than those obtained before, using a fully relativistic scheme. The perturbative calculation within the renormalization scheme outlined above is described in appendix \ref{subsec:Perturbative-calculation-of}, where we also demonstrate that it produces physical quantities that approximate those of the lattice model, and compare to the fully relativistic schemes used in previous works. In the following we will focus on the UV \textit{insensitive} part of the effective action, and in doing so we will obtain results which are essentially\footnote{See the discussion of $O\left(\frac{m}{\Lambda_{UV}}\right)$ corrections below.} independent of microscopic details that do not appear in the continuum action. We start by quoting the fully relativistic results of \cite{hughes2013torsional,parrikar2014torsion}, and then restrict our attention to the UV insensitive part of the effective action, and describe the physics of the $p$-wave SC it encodes. 
\paragraph{Effective action for a single Majorana spinor \label{subsec:Effective-action-for}} The results of \cite{hughes2013torsional,parrikar2014torsion} can be written as \begin{eqnarray} 2W_{\text{RC}}\left[e,\omega\right]&=&\frac{\kappa_{H}}{2}\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)\label{eq:72}\\ &+&\frac{\zeta_{H}}{2}\int_{M_{3}}e^{a}De_{a}-\frac{\kappa_{H}}{2}\int_{M_{3}}\tilde{\mathcal{R}}e^{a}De_{a}\nonumber\\ &+&\frac{1}{2\kappa_{N}}\int_{M_{3}}\left(\tilde{\mathcal{R}}-2\Lambda+\frac{3}{2}c^{2}\right)\left|e\right|\mbox{d}^{3}x+\cdots\nonumber \end{eqnarray} where \begin{eqnarray} Q_{3}\left(\tilde{\omega}\right)=\text{tr}\left(\tilde{\omega}\text{d}\tilde{\omega}+\frac{2}{3}\tilde{\omega}^{3}\right) \end{eqnarray} is the Chern-Simons (local) 3-form, $c=C_{abc}\varepsilon^{abc}$ is the totally antisymmetric piece of the contorsion tensor, and $\kappa_{H},\zeta_{H},1/\kappa_{N},\Lambda/\kappa_{N}$ are coefficients that will be discussed further below. The first two lines of \eqref{eq:72} are written in terms of differential forms, and the third line is written in terms of scalars. By scalars we mean $Diff$ invariant objects. In the differential forms the wedge product is implicit, as it will be from now on, so $\tilde{\omega}\wedge\text{d}\tilde{\omega}$ is written as $\tilde{\omega}\text{d}\tilde{\omega}$ and so on. 
The integrals over differential forms can be written as integrals over pseudo-scalars, \begin{eqnarray} & & e^{a}De_{a}=\left(e_{\;\alpha}^{a}D_{\beta}e_{a\gamma}\frac{1}{\left|e\right|}\varepsilon^{\alpha\beta\gamma}\right)\left|e\right|\text{d}^{3}x=-oc\left|e\right|\text{d}^{3}x,\label{eq:76}\\ & & Q_{3}\left(\tilde{\omega}\right)=\left(\tilde{\omega}_{\;b\alpha}^{a}\partial_{\beta}\tilde{\omega}_{\;a\gamma}^{b}+\frac{2}{3}\tilde{\omega}_{\;b\alpha}^{a}\tilde{\omega}_{\;c\beta}^{b}\tilde{\omega}_{\;a\gamma}^{c}\right)\frac{1}{\left|e\right|}\varepsilon^{\alpha\beta\gamma}\left|e\right|\text{d}^{3}x,\nonumber \end{eqnarray} which are only invariant under the orientation preserving subgroup of $Diff$ which we denote $Diff_{+}$. Here $o=\text{sgn}\left(\text{det}\left(e\right)\right)$ is the orientation of $e$. These expressions are odd under orientation reversing diffeomorphisms because so are $o$ and the pseudo-tensor $\frac{1}{\left|e\right|}\varepsilon^{\alpha\beta\gamma}$ \footnote{In this section $\varepsilon$ always stands for the usual totally antisymmetric symbol, normalized to 1. Thus $\varepsilon^{123}=\varepsilon^{xyt}=\varepsilon_{xyt}=1$. Note that $\varepsilon^{abc}$ is an $SO\left(1,2\right)$ tensor, and an $O\left(1,2\right)$ pseudo-tensor, while $\varepsilon^{\mu\nu\rho}=\text{det}\left(e\right)e_{a}^{\;\mu}e_{b}^{\;\nu}e_{c}^{\;\rho}\varepsilon^{abc}$ is a (coordinate) tensor density, $\frac{1}{\text{det}\left(e\right)}\varepsilon^{\mu\nu\rho}$ is a tensor and $\frac{1}{\left|e\right|}\varepsilon^{\mu\nu\rho}=\frac{1}{\left|\text{det}\left(e\right)\right|}\varepsilon^{\mu\nu\rho}$ is a pseudo-tensor. }. Equation \eqref{eq:72} can be expanded in the perturbations $h_{a}^{\;\mu}$ and $\omega_{ab\mu}$ to reveal the order in perturbation theory at which the different terms arise, see appendix \ref{subsec:Perturbative-calculation-of}. 
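The footnote's statement that $\frac{1}{\text{det}\left(e\right)}\varepsilon^{\mu\nu\rho}$ is a tensor rests on the determinant identity $\varepsilon^{abc}e_{a}^{\;\mu}e_{b}^{\;\nu}e_{c}^{\;\rho}=\text{det}\left(e_{a}^{\;\mu}\right)\varepsilon^{\mu\nu\rho}$. As a quick numerical sanity check (our illustration, not part of the original text, with a random invertible matrix standing in for $e_{a}^{\;\mu}$):

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.normal(size=(3, 3))              # stand-in for e_a^mu (random, invertible)

# totally antisymmetric symbol, eps[0,1,2] = +1
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# determinant identity: eps^{abc} e_a^mu e_b^nu e_c^rho = det(e) eps^{mu nu rho}
lhs = np.einsum('abc,am,bn,cr->mnr', eps, e, e, e)
assert np.allclose(lhs, np.linalg.det(e) * eps)
```

Since the identity holds for any invertible matrix, dividing by the determinant returns the bare symbol, which is exactly the statement that $\frac{1}{\text{det}\left(e\right)}\varepsilon^{\mu\nu\rho}$ transforms as a tensor while $\varepsilon^{\mu\nu\rho}$ alone is a tensor density.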
Additionally, at every order in the perturbations the effective action can be expanded in powers of derivatives of the perturbations over the mass $m$. The terms written explicitly above show up at first and second order in $h,\omega$ and at up to third order in their derivatives. They also include higher order corrections that make them $Diff_{+}$ and Lorentz gauge invariant, or invariant up to total derivatives. All other contributions denoted by $+\cdots$ are at least third order in the perturbations or fourth order in derivatives. Such a splitting is not unique \cite{hughes2013torsional}, but the form \eqref{eq:72} has been chosen because it is well suited for the study of the bulk responses. Let us now describe the different terms in \eqref{eq:72}. The first term is the gravitational Chern-Simons (gCS) term. It has a similar structure to the more familiar $U\left(1\right)$ CS term $\int A\text{d}A$, and is in fact an $SO\left(1,2\right)$ CS term, but note that the LC spin connection $\tilde{\omega}$ is a functional of the vielbein $e$. It is important that the spin connection in gCS is not $\omega$, since through $\omega_{\mu}=-2A_{\mu}\Sigma^{12}$ this would imply a quantized Hall conductivity in a $p$-wave SC, which does not exist \cite{read2000paired,stone2004edge}. As it is written in \eqref{eq:72}, gCS is invariant under $Diff_{+}$, but not under $SO\left(1,2\right)$ if $M_{3}$ has a boundary. This is the boundary $SO\left(1,2\right)$ anomaly, which is discussed further in section \ref{subsec:Gauge-symmetry-of}. 
Using the relation $\tilde{\Gamma}_{\;\beta\mu}^{\alpha}=e_{a}^{\;\alpha}\left(\delta_{b}^{a}\partial_{\mu}+\tilde{\omega}_{\;b\mu}^{a}\right)e_{\;\beta}^{b}$ between $\tilde{\Gamma}$ and $\tilde{\omega}$, one can derive an important formula, \begin{align} Q_{3}\left(\tilde{\Gamma}\right)-Q_{3}\left(\tilde{\omega}\right)=\text{tr}\left[\frac{1}{3}\left(e\text{d}e^{-1}\right)^{3}+\text{d}\left(\text{d}e^{-1}e\tilde{\Gamma}\right)\right],\nonumber\\\label{eq:70} \end{align} where unusually, $e=\left(e_{\;\mu}^{a}\right)$ is treated in this expression as a matrix valued function \cite{kraus2006holographic,stone2012gravitational}. The variation with respect to $e$ of the two terms on the right hand side is a total derivative, which means that they are irrelevant for the purpose of calculating bulk responses. One can therefore use $Q_{3}\left(\tilde{\Gamma}\right)$, which only depends on the metric $g_{\mu\nu}$, instead of $Q_{3}\left(\tilde{\omega}\right)$. The form $\int_{M_{3}}Q_{3}\left(\tilde{\Gamma}\right)$ of gCS is invariant under $SO\left(1,2\right)$ but not under $Diff_{+}$, as opposed to $\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)$. Thus the right hand side of \eqref{eq:70} has the effect of shifting the boundary anomaly from $SO\left(1,2\right)$ to $Diff$. The second term in \eqref{eq:72} has a structure similar to a CS term with $e^{a}$ playing the role of a connection, and indeed some authors refer to it as such \cite{zanelli2012chern}. Nevertheless, it is $SO\left(1,2\right)$ and $Diff_{+}$ invariant, as can be seen from \eqref{eq:76}. This term was related to the torsional Hall viscosity in \cite{hughes2013torsional}, where it was discussed extensively. The third term in \eqref{eq:72} is also $SO\left(1,2\right)$ and $Diff_{+}$ invariant. We refer to this term as \textit{gravitational pseudo Chern-Simons} (gpCS), to indicate its similarity to gCS, and the fact that it is not a Chern-Simons term. 
The similarity between gCS and gpCS is demonstrated and put in a broader context in the discussion, section \ref{sec:Conclusion-and-discussion}. In section \ref{subsec:Calculation-of-bulk} we will see that gCS and gpCS produce similar contributions to bulk responses. For now, we simply note that both terms are second order in $h$ and third order in derivatives of $h$. The third line in \eqref{eq:72} contains the Einstein-Hilbert action with a cosmological constant $\Lambda$ familiar from general relativity, and an additional torsional contribution $\propto c^{2}$. The coefficient $1/\kappa_{N}$ of the Einstein-Hilbert term is usually related to a Newton's constant $G_{N}=\kappa_{N}/8\pi$. Note that in Riemannian geometry, where torsion vanishes and $\omega=\tilde{\omega}$, only the gCS term, the Einstein-Hilbert term, and the cosmological constant survive. The coefficients $\kappa_{H},\zeta_{H},1/\kappa_{N},\Lambda/\kappa_{N}$ are given by frequency and wave-vector integrals that arise within the perturbative calculation, and are described in appendix \ref{subsec:Perturbative-calculation-of}. In particular $\zeta_{H},1/\kappa_{N},\Lambda/\kappa_{N}$ are dimension-full, with mass dimensions $2,1,3$, and naively diverge. In other words, they are UV sensitive. On the other hand, $\kappa_{H}$ is dimensionless and UV insensitive. With no regularization, one finds \begin{eqnarray} \kappa_{H}=\frac{1}{48\pi}\frac{\text{sgn}\left(m\right)o}{2}. 
\end{eqnarray} Thus, the effective action for a single Majorana spinor can be written as \begin{align} W_{\text{RC}}\left[e,\omega\right]=\frac{1/2}{96\pi}\frac{\text{sgn}\left(m\right)o}{2}W\left[e,\omega\right]+\cdots\label{eq:77} \end{align} where \begin{align} W\left[e,\omega\right]=\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)-\int_{M_{3}}\tilde{\mathcal{R}}e^{a}De_{a}\label{eq:RelEffAction} \end{align} is the sum of gCS and gpCS, and the dots include UV sensitive terms, or terms of a higher order in derivatives or perturbations, as described above. Since the lattice model implies a finite physical cutoff $\Lambda_{UV}$ for wave-vectors, \eqref{eq:77} is exact only for $m/\Lambda_{UV}\rightarrow0$. For non-zero $m$ there are small $O\left(m/\Lambda_{UV}\right)$ corrections\footnote{All expressions here are with $\hbar=c_{\text{light}}=1$. Restoring units one finds $\frac{m}{\Lambda_{\text{UV}}}\sim\frac{\text{max}\left(t,\mu\right)}{\delta}$ and so $\frac{m}{\Lambda_{\text{UV}}}\ll1$ in the relativistic regime.} to \eqref{eq:77}. We will keep these corrections implicit for now, and come back to them in section \ref{subsec:Gauge-symmetry-of}. \paragraph{Summing over Majorana spinors \label{subsec:Summing-over-lattice}} As discussed in Appendix \ref{sec:Lattice-model}, the continuum description of the $p$-wave SC includes four Majorana spinors labeled by $1\leq n\leq4$, with masses $m_{n}$, which are coupled to vielbeins $e_{\left(n\right)}$. Let us repeat the necessary details. The vielbein $e_{\left(1\right)}$ is associated with the order parameter $\delta$ of the underlying lattice model, as in \eqref{17}, up to an unimportant rescaling by the lattice spacing $a$. For this reason we treat it as a fundamental vielbein and write $e=e_{\left(1\right)}$ in some expressions. The other vielbeins $\left(e_{\left(n\right)}\right)_{a}^{\;\mu}$ are obtained from $e$ by multiplying one of the columns $\mu=x,y$ or both by $-1$. 
This implies that $o=o_{1}=o_{3}=-o_{2}=-o_{4}$, and that the metrics $g_{\left(n\right)}^{\mu\nu}$ are identical apart from $g^{xy}=g_{\left(1\right)}^{xy}=g_{\left(3\right)}^{xy}=-g_{\left(2\right)}^{xy}=-g_{\left(4\right)}^{xy}$. With this in mind, we can sum over the four Majorana spinors and obtain an effective action for the $p$-wave SC, \begin{eqnarray} W_{\text{SC}}\left[e,\omega\right]&=&\sum_{n=1}^{4}W_{\text{RC}}\left[e_{\left(n\right)},\omega\right]\label{eq:77-1-1}\\ &=&\frac{1/2}{96\pi}\sum_{n=1}^{4}\frac{\text{sgn}\left(m_{n}\right)o_{n}}{2}W\left[e_{\left(n\right)},\omega\right]+\cdots\nonumber \end{eqnarray} Note that the Chern number of the lattice model is given by $\nu=\sum_{n=1}^{4}\text{sgn}\left(m_{n}\right)o_{n}/2$, but since $W$ also depends on the different vielbeins $e_{\left(n\right)}$, \eqref{eq:77-1-1} does not depend only on $\nu$ in the general case. Some simplification is possible, however. Since $e_{\left(1\right)}=e_{\left(3\right)}$ and $e_{\left(2\right)}=e_{\left(4\right)}$ up to a space-time independent $SO\left(2\right)$ ($U\left(1\right)$) transformation, \begin{eqnarray} W_{\text{SC}}\left[e,\omega\right]&=&\sum_{l=1}^{2}\frac{\nu_{l}/2}{96\pi}W\left[e_{\left(l\right)},\omega\right]+\cdots\label{eq:77-3}\\ &=&\sum_{l=1}^{2}\frac{\nu_{l}/2}{96\pi}\int_{M_{3}}Q_{3}\left(\tilde{\omega}{}_{\left(l\right)}\right)+\cdots\nonumber \end{eqnarray} where in the second line we have only written explicitly the gCS terms. Here we defined \begin{align} \nu_{1}&=\frac{o_{1}}{2}\left(\text{sgn}\left(m_{1}\right)+\text{sgn}\left(m_{3}\right)\right),\\ \nu_{2}&=\frac{o_{2}}{2}\left(\text{sgn}\left(m_{2}\right)+\text{sgn}\left(m_{4}\right)\right),\nonumber \end{align} which are both integers, $\nu_{1},\nu_{2}\in\mathbb{Z}.$ The Chern number of the lattice model is given by the sum $\nu=\nu_{1}+\nu_{2}$. Thus the lattice model seems to behave like a bi-layer, with layer index $l=1,2$. 
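The sign pattern above can be checked directly on the spatial $2\times2$ block of the vielbein. The sketch below (our illustration; a Euclidean spatial block is used since only the relative signs between the $e_{\left(n\right)}$ matter here) flips the columns $\mu=x,y$ as described and verifies that only the orientation and $g^{xy}$ change sign:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
e = sp.Matrix([[a, b], [c, d]])   # spatial block of e_a^mu (rows: frame index a, columns: mu)

# e_(1) = e; e_(2) and e_(4) flip one column (mu = x or mu = y); e_(3) flips both
flips = [sp.eye(2), sp.diag(-1, 1), sp.diag(-1, -1), sp.diag(1, -1)]
es = [e*f for f in flips]

gs = [ei.T*ei for ei in es]                           # g^{mu nu} = sum_a e_a^mu e_a^nu
os_ = [sp.simplify(ei.det()/e.det()) for ei in es]    # orientations relative to e_(1)

assert os_ == [1, -1, 1, -1]                          # o1 = o3 = -o2 = -o4
assert all(gi[0, 0] == gs[0][0, 0] and gi[1, 1] == gs[0][1, 1] for gi in gs)  # g^xx, g^yy identical
assert [sp.simplify(gi[0, 1]/gs[0][0, 1]) for gi in gs] == [1, -1, 1, -1]     # only g^xy flips
```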
In the topological phases of the model $\nu_{1}=0$, $\nu=\nu_{2}=\pm1$, and so \begin{eqnarray} & W_{\text{SC}}\left[e,\omega\right] & =\frac{\nu/2}{96\pi}W\left[e_{\left(2\right)},\omega\right]+\cdots\label{eq:77-4}\\ & & =\frac{\nu/2}{96\pi}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)+\cdots\nonumber \end{eqnarray} where again, in the second line we have only written explicitly the gCS term. This result is close to what one may have guessed. In the topological phases with Chern number $\nu\neq0$, the effective action contains a single gCS term, with coefficient $\frac{\nu/2}{96\pi}$. A result of this form has been anticipated in \cite{volovik1990gravitational,read2000paired,wang2011topological,ryu2012electromagnetic,palumbo2016holographic}, but there are a few details which are important to note. First, apart from gCS, $W$ also contains a gpCS term of the form $\int_{M_{3}}\tilde{\mathcal{R}}e^{a}De_{a}$, which is possible due to the emergent torsion. Second, the connection that appears in the CS form $Q_{3}$ is a LC connection, and not the torsion-full connection $\omega$. Moreover, this LC connection is not $\tilde{\omega}$, but a modification of it, $\tilde{\omega}{}_{\left(2\right)}$, where the subscript $\left(2\right)$ indicates the effect of the multiple Majorana spinors in the continuum description of the lattice model. Third, the geometric fields $e,\omega$ are given by $\Delta,A$. In the trivial phases $\nu_{1}=-\nu_{2}\in\left\{ -1,0,1\right\} $, $\nu=0$, and we find \begin{eqnarray} & W_{\text{SC}}\left[e,\omega\right] & =\frac{\nu_{1}/2}{96\pi}\left[W\left[e_{\left(1\right)},\omega\right]-W\left[e_{\left(2\right)},\omega\right]\right]+\cdots\label{eq:77-5}\\ & & =\frac{\nu_{1}/2}{96\pi}\left[\int_{M_{3}}Q_{3}\left(\tilde{\omega}{}_{\left(1\right)}\right)-\int_{M_{3}}Q_{3}\left(\tilde{\omega}{}_{\left(2\right)}\right)\right]+\cdots\nonumber \end{eqnarray} This result is quite surprising. 
Instead of containing no gCS terms, some trivial phases contain the difference of two such terms, with slightly different spin connections. One may wonder if these trivial phases are really trivial after all. This is part of a larger issue which we now address. \subsubsection{Symmetries of the effective action \label{subsec:Gauge-symmetry-of}} By considering the gauge symmetry of the effective action we can reconstruct the topological phase diagram appearing in Fig.\ref{fig:Phase-Diagram} from \eqref{eq:77-3}. This will also help us understand which of our results are special to the relativistic limit, and which should hold throughout the phase diagram. By gauge symmetry we refer in this section to the $SO\left(2\right)$ subgroup of $SO\left(1,2\right)$, which corresponds to the physical $U\left(1\right)$ symmetry of the $p$-wave SC. Equation \eqref{eq:70} shows that we can equivalently consider $Diff$ symmetry. The physical reason for this equivalence is that the $p$-wave order parameter is charged under both symmetries, and therefore maps them to one another. The effective action was calculated within perturbation theory on the space-time manifold $M_{3}=\mathbb{R}_{t}\times\mathbb{R}^{2}$, but for this discussion, we use its locality to assume it remains locally valid on more general $M_{3}$, which may be closed (compact and without a boundary) or have a boundary. A closed space-time is most simply obtained by working on $M_{3}=\mathbb{R}_{t}\times M_{2}$ with $M_{2}$ closed, and with background fields $\Delta,A$ which are periodic in time, such that $\mathbb{R}_{t}$ can be compactified to a circle. As described in appendix \ref{subsec:Global-structures-and}, a non singular order parameter endows $M_{3}$ with an orientation and a spin structure, and in particular requires that $M_{2}$ be orientable \cite{quelle2016edge}, which we assume. Thus, for example, we exclude the possibility of $M_{2}$ being the Möbius strip. 
Moreover, a non singular order parameter on a closed $M_{2}$ requires that $M_{2}$ contain $\left(g-1\right)o$ magnetic monopoles \cite{read2000paired}, where $g$ is the genus of $M_{2}$, and we assume that this condition is satisfied. For example, if $M_{2}$ is the sphere then it must contain a single monopole or anti-monopole depending on the orientation $o$ \cite{kraus2009majorana,moroz2016chiral}. \paragraph{Quantization of coefficients\label{subsec:quantization}} The first fact about the gCS term that we will need is that gauge symmetry of $\alpha\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)$ for all closed $M_{3}$ requires that $\alpha$ be quantized such that $\alpha\in\frac{1}{192\pi}\mathbb{Z}$, see equation (2.27) of \cite{witten2007three}. In order to understand how generic our result \eqref{eq:77-3} is, we will check what quantization condition on $\alpha_{1},\alpha_{2}$ is required for gauge symmetry of $\alpha_{1}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(1\right)}\right)+\alpha_{2}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)$ on all closed $M_{3}$. Following the arguments of \cite{witten2007three} we find that $\alpha_{1}+\alpha_{2}\in\frac{1}{192\pi}\mathbb{Z}$, but $\alpha_{1},\alpha_{2}\in\mathbb{R}$ are not separately restricted, see appendix \ref{subsec:quntization-of-coefficients}. It is therefore natural to define $\alpha=\alpha_{1}+\alpha_{2}$ and rewrite \begin{eqnarray} && \alpha_{1}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(1\right)}\right)+\alpha_{2}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)\\ &&=\alpha\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)+\alpha_{1}\int_{M_{3}}\left[Q_{3}\left(\tilde{\omega}_{\left(1\right)}\right)-Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)\right],\nonumber \end{eqnarray} where $\alpha\in\frac{1}{192\pi}\mathbb{Z}$ but $\alpha_{1}\in\mathbb{R}$. 
Comparing with the result \eqref{eq:77-3}, we identify $\alpha=\frac{\nu/2}{96\pi}$, $\alpha_{1}=\frac{\nu_{1}/2}{96\pi}$, and we conclude that $\nu$ must be precisely an integer and equal to the Chern number, while $\nu_{1}$ need not be quantized. We therefore interpret the $O\left(m/\Lambda_{\text{UV}}\right)$ corrections to $\alpha=\frac{\nu/2}{96\pi}$ produced in our computation as artifacts of our approximations\footnote{Specifically, in obtaining the relativistic continuum approximation we split the Brillouin zone $BZ$ into four quadrants and linearized the lattice Hamiltonian \eqref{eq:3} in every quadrant. Applying any integral formula for the Chern number to the approximate Hamiltonian will give a result $\nu_{\text{apprx}}=\frac{1}{2}\sum_{n=1}^{4}o_{n}\text{sgn}\left(m_{n}\right)+O\left(m/\Lambda_{\text{UV}}\right)$ which is only approximately quantized in the relativistic regime, simply because the approximate Hamiltonian is discontinuous on $BZ$. Nevertheless, the known quantization $\nu\in\mathbb{Z}$ and the fact that $\nu_{\text{apprx}}\approx\nu$ are enough to obtain the exact result $\nu=\frac{1}{2}\sum_{n=1}^{4}o_{n}\text{sgn}\left(m_{n}\right)$.}, which must vanish due to gauge invariance. On the other hand, we interpret the quantization $\alpha_{1}=\frac{\nu_{1}/2}{96\pi}$ as a special property of the relativistic limit with both $m^{*}\rightarrow\infty$ and $m\rightarrow0$, which should not hold throughout the phase diagram. So far we have only considered gCS terms. As already explained, the gpCS term is gauge invariant on any $M_{3}$, and we therefore see no reason for the quantization of its coefficient. Explicitly, $-\beta\int_{M_{3}}\tilde{\mathcal{R}}e^{a}De_{a}$ is gauge invariant for all $\beta\in\mathbb{R}$. Thus we interpret the approximate quantization of the coefficients of gpCS terms as a special property of the relativistic limit, which should not hold throughout the phase diagram. 
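The integrality of $\nu_{1},\nu_{2}$ and the splitting $\nu=\nu_{1}+\nu_{2}$ of the Chern number can be verified by brute force over all sign assignments (an illustrative check of the combinatorics, not part of the derivation):

```python
from itertools import product

# s_n = sgn(m_n); relative orientations obey o_1 = o_3 = -o_2 = -o_4
results = []
for o1 in (1, -1):
    o = (o1, -o1, o1, -o1)
    for s in product((1, -1), repeat=4):
        nu1 = o[0]*(s[0] + s[2])//2            # nu_1 = (o_1/2)(sgn m_1 + sgn m_3)
        nu2 = o[1]*(s[1] + s[3])//2            # nu_2 = (o_2/2)(sgn m_2 + sgn m_4)
        nu = sum(oi*si for oi, si in zip(o, s))//2   # footnote formula for the Chern number
        results.append((nu1, nu2, nu))

# nu_1, nu_2 are integers in {-1, 0, 1} and the Chern number splits as nu = nu_1 + nu_2
assert all(n1 in (-1, 0, 1) and n2 in (-1, 0, 1) for n1, n2, _ in results)
assert all(n == n1 + n2 for n1, n2, n in results)
```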
We note that even for a relativistic spinor any $\beta\in\mathbb{R}$ can be obtained, by adding a non minimal coupling to torsion \cite{hughes2013torsional}. In light of the above, it is natural to interpret \eqref{eq:77-3} as a special case of \begin{eqnarray} && W_{\text{SC}}\left[e,\omega\right]=\frac{\nu/2}{96\pi}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)\label{eq:80-1}\\ &&+\alpha_{1}\int_{M_{3}}\left[Q_{3}\left(\tilde{\omega}_{\left(1\right)}\right)-Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)\right]\nonumber\\ &&-\beta_{1}\int_{M_{3}}\tilde{\mathcal{R}}_{\left(1\right)}e_{\left(1\right)}^{a}De_{\left(1\right)a}-\beta_{2}\int_{M_{3}}\tilde{\mathcal{R}}_{\left(2\right)}e_{\left(2\right)}^{a}De_{\left(2\right)a}+\cdots\nonumber \end{eqnarray} where $\nu\in\mathbb{Z}$ is the Chern number and $\alpha_{1},\beta_{1},\beta_{2}$ are additional, non quantized, yet dimensionless, response coefficients. In the relativistic limit $\alpha_{1},\beta_{1},\beta_{2}$ happen to be quantized, but this is not generic. Only the first gCS term encodes topological bulk responses, proportional to the Chern number $\nu$, and below we will see that only this term is related to an edge anomaly. We can also write \eqref{eq:80-1} more symmetrically, \begin{eqnarray} && W_{\text{SC}}\left[e,\omega\right]\label{eq:81-1}\\ &&=\sum_{l=1}^{2}\left[\alpha_{l}\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(l\right)}\right)-\beta_{l}\int_{M_{3}}\tilde{\mathcal{R}}_{\left(l\right)}e_{\left(l\right)}^{a}De_{\left(l\right)a}\right]+\cdots\nonumber \end{eqnarray} but here we must keep in mind the quantization condition $\alpha_{1}+\alpha_{2}=\frac{\nu/2}{96\pi}\in\frac{1}{192\pi}\mathbb{Z}$. This equation should be compared with the result in the relativistic limit \eqref{eq:77-3}, where $\alpha_{l},\beta_{l}$ are all quantized, and $\alpha_{l}=\beta_{l}$. 
We note that the quantization of $\alpha_{l},\beta_{l}$ in the relativistic limit can be understood on dimensional grounds: in this limit there are simply not enough dimension-full quantities which can be used to construct dimensionless quantities, beyond $\text{sgn}\left(m_{n}\right)$ and $o_{n}$. Of course, this does not explain why $\alpha_{l}=\beta_{l}$ in the relativistic limit. \paragraph{Boundary anomalies\label{subsec:Boundary-anomalies}} We can strengthen the above conclusions by considering space-times $M_{3}$ with a boundary. The second fact about the gCS term that we will need is that it is not gauge invariant when $M_{3}$ has a boundary, even with a properly quantized coefficient. In more detail, the $SO\left(2\right)$ variation of gCS is given by \begin{align} \delta_{\theta}\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)=-\text{tr}\int_{\partial M_{3}}\mbox{d}\theta\tilde{\omega}. \end{align} Up to normalization, the boundary term above is called the consistent Lorentz anomaly, which is one of the forms in which the gravitational anomaly manifests itself \cite{bertlmann2000anomalies}\footnote{Generally speaking, \textit{consistent} anomalies are given by symmetry variations of functionals. We will also discuss below the more physical \textit{covariant} anomalies, which correspond to the actual inflow of some charge from bulk to boundary.}. The anomaly $\text{tr}\int_{\partial M_{3}}\mbox{d}\theta\tilde{\omega}$ is a local functional that can either be written as the gauge variation of a local bulk functional, as it is written above, or as the gauge variation of a \textit{nonlocal} boundary functional $F\left[\tilde{\omega}\right]$, such that $\delta_{\theta}F\left[\tilde{\omega}\right]=\text{tr}\int_{\partial M_{3}}\mbox{d}\theta\tilde{\omega}$, but cannot be written as the gauge variation of a local boundary functional \cite{manes1985algebraic}. 
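The boundary term above follows from the standard gauge variation of the Chern-Simons form; we sketch the (standard) algebra for completeness. Under an infinitesimal gauge transformation $\delta_{\theta}\tilde{\omega}=\text{d}\theta+\left[\tilde{\omega},\theta\right]$ one finds $\delta_{\theta}Q_{3}\left(\tilde{\omega}\right)=\text{d}\,\text{tr}\left(\theta\,\text{d}\tilde{\omega}\right)$, so that \begin{align} \delta_{\theta}\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)=\int_{\partial M_{3}}\text{tr}\left(\theta\,\text{d}\tilde{\omega}\right)=-\int_{\partial M_{3}}\text{tr}\left(\text{d}\theta\,\tilde{\omega}\right), \end{align} where the last step integrates by parts, the exact term $\int_{\partial M_{3}}\text{d}\,\text{tr}\left(\theta\tilde{\omega}\right)$ dropping on the closed surface $\partial M_{3}$. 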
The difference of two gCS terms is also not gauge invariant, \begin{align} & \delta_{\theta}\left[\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(1\right)}\right)-\int_{M_{3}}Q_{3}\left(\tilde{\omega}_{\left(2\right)}\right)\right]\\ &=-\text{tr}\int_{\partial M_{3}}\mbox{d}\theta\left(\tilde{\omega}_{\left(1\right)}-\tilde{\omega}_{\left(2\right)}\right),\nonumber \end{align} but here there is a local boundary term that can produce the same variation, given by $\text{tr}\left(\tilde{\omega}_{(1)}\tilde{\omega}_{(2)}\right)$. The physical interpretation is as follows. Since $F\left[\tilde{\omega}\right]$ is non local it can be interpreted as the effective action obtained by integrating over a gapless, or massless, boundary field coupled to $e$. These are the boundary chiral Majorana fermions of the $p$-wave SC. The statement that $F$ cannot be local implies that this boundary field cannot be gapped. In this manner the existence of the gCS term in the bulk effective action, with a coefficient that is fixed within a topological phase, implies the existence of gapless degrees of freedom that cannot be gapped within a topological phase. We study this bulk-boundary correspondence in more detail in Appendix \ref{sec:Boundary-fermions-and}. Naively, the difference of two gCS terms implies the existence of two boundary fermions with opposite chiralities, one of which is coupled to $e_{\left(1\right)}$ and the other coupled to $e_{\left(2\right)}$. The boundary term $\int_{\partial M_{3}}\text{tr}\left(\tilde{\omega}_{\left(1\right)}\tilde{\omega}_{\left(2\right)}\right)$ can only be generated if the two counter propagating fermions are coupled, and its locality indicates that this coupling can open a gap. Thus the term $\int_{\partial M_{3}}\text{tr}\left(\tilde{\omega}_{\left(1\right)}\tilde{\omega}_{\left(2\right)}\right)$ represents the effect of a generic interaction between two counter propagating chiral Majorana fermions. 
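That the local boundary term reproduces the required variation is a one-line check. For the abelian $SO\left(2\right)$ variation $\delta_{\theta}\tilde{\omega}_{\left(l\right)}=\text{d}\theta$ (the commutator terms drop under the trace), \begin{align} \delta_{\theta}\int_{\partial M_{3}}\text{tr}\left(\tilde{\omega}_{\left(1\right)}\tilde{\omega}_{\left(2\right)}\right)=\int_{\partial M_{3}}\text{tr}\left(\text{d}\theta\,\tilde{\omega}_{\left(2\right)}-\text{d}\theta\,\tilde{\omega}_{\left(1\right)}\right)=-\int_{\partial M_{3}}\text{tr}\left(\text{d}\theta\left(\tilde{\omega}_{\left(1\right)}-\tilde{\omega}_{\left(2\right)}\right)\right), \end{align} using $\text{tr}\left(\tilde{\omega}_{\left(1\right)}\text{d}\theta\right)=-\text{tr}\left(\text{d}\theta\,\tilde{\omega}_{\left(1\right)}\right)$ for one-forms, which indeed matches the variation of the difference of the two gCS terms. 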
Again, as opposed to the gCS term, the gpCS term is gauge invariant on any $M_{3}$, and is therefore unrelated to edge anomalies. Thus, in the effective action \eqref{eq:80-1}, only the first gCS term is related to an edge anomaly. \paragraph{Time reversal and reflection symmetry of the effective action} Time reversal $T$ and reflection $R$ are discussed in appendices \ref{subsec:Spatial-reflections-and} and \ref{subsec:relativisitc Spatial-reflection-and}. The orientation $o$ of the order parameter is odd under both $T,R$, and it follows that so are the coefficients $\nu_{l}$. Therefore $\nu_{l}$ are $T,R$-odd response coefficients. More generally, $\alpha_{l},\beta_{l}$ in \eqref{eq:81-1} are $T,R$-odd response coefficients. As described in section \ref{subsec:Effective-action-for}, integrals over differential forms are also odd under the orientation reversing diffeomorphisms $T,R$, and therefore $W_{\text{SC}}$ is invariant under $T,R$. \subsubsection{Calculation of currents\label{subsec:Calculation-of-bulk}} To derive the currents we start with the expression \begin{align} \alpha_{1}\int_{M_{3}}Q_{3}\left(\tilde{\omega}\right)-\beta_{1}\int_{M_{3}}\tilde{\mathcal{R}}e^{a}De_{a}+\cdots\label{eq:72-1} \end{align} which is the effective action for the layer $l=1$. We then sum the results over $l=1,2$, as in \eqref{eq:81-1}, to get the full low energy response of the lattice model, keeping in mind that $\alpha_{1}+\alpha_{2}=\frac{\nu/2}{96\pi}\in\frac{1}{192\pi}\mathbb{Z}$. \paragraph{Bulk response from gravitational Chern-Simons terms\label{subsec:Currents-from-the}} For the purpose of calculating the contribution of gCS to the bulk energy-momentum tensor it is easier to use $Q_{3}\left(\tilde{\Gamma}\right)$ instead of $Q_{3}\left(\tilde{\omega}\right)$. 
The result is \cite{jackiw2003chern,perez2010conserved,stone2012gravitational} \begin{align} \left\langle \mathsf{J}_{\;a}^{\mu}\right\rangle _{\text{gCS}}=\frac{1}{\left|e\right|}\frac{\delta}{\delta e_{\;\mu}^{a}}\left[\alpha_{1}\int_{M_{3}}Q_{3}\left(\tilde{\Gamma}\right)\right]=4\alpha_{1}\tilde{C}_{\;a}^{\mu},\label{eq:74} \end{align} where $\tilde{C}$ is the Cotton tensor, which can be written as \begin{eqnarray} \tilde{C}^{\mu\nu}=-\frac{1}{\sqrt{g}}\varepsilon^{\rho\sigma(\mu}\tilde{\nabla}_{\rho}\tilde{\mathcal{R}}_{\sigma}^{\nu)}. \end{eqnarray} Relevant properties of the Cotton tensor are $\tilde{\nabla}_{\mu}\tilde{C}^{\mu\nu}=0$, $\tilde{C}_{\;\mu}^{\mu}=0$, and $\tilde{C}^{[\mu\nu]}=0$. It follows from \eqref{eq:74} that \begin{align} \left\langle t_{\text{cov}\;\nu}^{\mu}\right\rangle _{\text{gCS}}=-\left|e\right|\left\langle \mathsf{J}_{\;\nu}^{\mu}\right\rangle _{\text{gCS}}=-4\alpha_{1}\left|e\right|\tilde{C}_{\;\nu}^{\mu}.\label{eq:110} \end{align} For order parameters of the form \begin{eqnarray} & & \Delta=e^{i\theta}\left(\left|\Delta^{x}\right|,\pm i\left|\Delta^{y}\right|\right)\label{eq:110-10} \end{eqnarray} the metrics for both layers $l=1,2$ are identical. Since $\tilde{C}$ only depends on the metric it follows that for such order parameters the summation over $l=1,2$ gives \begin{align} \left\langle t_{\text{cov}\;\nu}^{\mu}\right\rangle _{\text{gCS}}=-\left|e\right|\left\langle \mathsf{J}_{\;\nu}^{\mu}\right\rangle _{\text{gCS}}=-\frac{\nu/2}{96\pi}4\left|e\right|\tilde{C}_{\;\nu}^{\mu}.\label{eq:110-1} \end{align} Put differently, the difference of gCS terms in \eqref{eq:81-1}, with coefficient $\alpha_{1}$, does not produce a bulk response for such order parameters. This provides a simple way to separate the topological invariant $\nu$ from the non quantized $\alpha_{1}$. 
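The three quoted properties of the Cotton tensor can be verified symbolically for a sample metric, directly from the definition above. The sketch below (our illustration; the Riemannian metric $\text{diag}\left(1,e^{2x_{0}},e^{4x_{0}}\right)$ is an arbitrary choice) builds the Levi-Civita connection and Ricci tensor, forms $\tilde{C}^{\mu\nu}$, and checks symmetry, tracelessness and covariant conservation:

```python
import sympy as sp

# sample Riemannian 3-metric (coordinates x0, x1, x2); the choice is arbitrary
x0, x1, x2 = sp.symbols('x0 x1 x2', real=True)
X = [x0, x1, x2]
g = sp.diag(1, sp.exp(2*x0), sp.exp(4*x0))
ginv = g.inv()
sqrtg = sp.sqrt(g.det())

# Levi-Civita Christoffel symbols Gamma^a_{bc}
def christoffel(a, b, c):
    return sp.Rational(1, 2)*sum(ginv[a, d]*(sp.diff(g[d, b], X[c])
                                             + sp.diff(g[d, c], X[b])
                                             - sp.diff(g[b, c], X[d])) for d in range(3))

Gam = [[[sp.simplify(christoffel(a, b, c)) for c in range(3)]
        for b in range(3)] for a in range(3)]

# Ricci tensor R_{bd} = R^a_{bad}
def riemann(a, b, c, d):
    return (sp.diff(Gam[a][b][d], X[c]) - sp.diff(Gam[a][b][c], X[d])
            + sum(Gam[a][c][e]*Gam[e][b][d] - Gam[a][d][e]*Gam[e][b][c] for e in range(3)))

Ric = sp.Matrix(3, 3, lambda b, d: sp.simplify(sum(riemann(a, b, a, d) for a in range(3))))
Rmix = sp.simplify(ginv*Ric)                       # mixed Ricci tensor R^mu_nu

# covariant derivative nabla_r R^n_s
def dRic(r, n, s):
    return (sp.diff(Rmix[n, s], X[r])
            + sum(Gam[n][r][l]*Rmix[l, s] - Gam[l][r][s]*Rmix[n, l] for l in range(3)))

# Cotton tensor C^{mu nu} = -(1/sqrt(g)) eps^{rho sigma (mu} nabla_rho R_sigma^{nu)}
def cotton(m, n):
    return sp.simplify(-sp.Rational(1, 2)/sqrtg*sum(
        sp.LeviCivita(r, s, m)*dRic(r, n, s) + sp.LeviCivita(r, s, n)*dRic(r, m, s)
        for r in range(3) for s in range(3)))

C = sp.Matrix(3, 3, cotton)

# properties quoted in the text: symmetric, traceless, covariantly conserved
assert sp.simplify(C - C.T) == sp.zeros(3, 3)
assert sp.simplify(sum(g[m, n]*C[m, n] for m in range(3) for n in range(3))) == 0
for n in range(3):
    div = sum(sp.diff(C[m, n], X[m])
              + sum(Gam[m][m][l]*C[l, n] + Gam[n][m][l]*C[m, l] for l in range(3))
              for m in range(3))
    assert sp.simplify(div) == 0
```

Any overall sign or normalization convention drops out of these three checks, since they are linear in $\tilde{C}$.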
The Cotton tensor takes a simpler form if the geometry is a product geometry, where the metric is of the form $\text{d}s^{2}=g_{\alpha\beta}\left(x^{\alpha}\right)\text{d}x^{\alpha}\text{d}x^{\beta}+\sigma\text{d}z^{2}$. Here $\sigma=\pm1$ depends on whether $z$ is a space-like or time-like coordinate, and we will use both in the following. The two coordinates $x^{\alpha}$ are space-like if $z$ is time-like and mixed if $z$ is space-like. In this case the curvature is determined by the curvature scalar, which corresponds to the curvature scalar of the two dimensional metric $g_{\alpha\beta}$. In particular $\mathcal{R}_{\;\beta}^{\alpha}=\frac{1}{2}\mathcal{R}\delta_{\beta}^{\alpha}$ and the other components of $\mathcal{R}_{\;\nu}^{\mu}$ vanish. Then \begin{align} \left\langle \mathsf{J}^{\alpha z}\right\rangle _{\text{gCS}}=\left\langle \mathsf{J}^{z\alpha}\right\rangle _{\text{gCS}}=\alpha_{1}\frac{1}{\left|e\right|}\varepsilon^{z\alpha\beta}\partial_{\beta}\tilde{\mathcal{R}},\label{eq:111} \end{align} and the other components vanish. In terms of $t_{\text{cov}\;\nu}^{\mu}$, \begin{eqnarray} & & \left\langle t_{\text{cov}\;z}^{\alpha}\right\rangle _{\text{gCS}}=-\alpha_{1}\sigma\varepsilon^{z\alpha\beta}\partial_{\beta}\tilde{\mathcal{R}},\\ & & \left\langle t_{\text{cov}\;\alpha}^{z}\right\rangle _{\text{gCS}}=-\alpha_{1}g_{\alpha\beta}\varepsilon^{z\beta\gamma}\partial_{\gamma}\tilde{\mathcal{R}}.\nonumber \end{eqnarray} Taking $z=t$ is natural in the context of the $p$-wave SC, since the emergent metric \eqref{eq:10-1} is always a product metric if $\Delta$ is time independent. 
Then, with a general time independent order parameter, \begin{eqnarray} & & \left\langle J_{E}^{i}\right\rangle _{\text{gCS}}=\left\langle t_{\text{cov}\;t}^{i}\right\rangle _{\text{gCS}}=-\alpha_{1}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}}\label{eq:87-2},\\ & & \left\langle P_{i}\right\rangle _{\text{gCS}}=\left\langle t_{\text{cov}\;i}^{t}\right\rangle _{\text{gCS}}=-\alpha_{1}g_{ik}\varepsilon^{kj}\partial_{j}\tilde{\mathcal{R}},\nonumber \end{eqnarray} where $\tilde{\mathcal{R}}$ is the curvature associated with the spatial metric $g^{ij}=-\Delta^{(i}\Delta^{j)*}$. Again, for order parameters of the form \eqref{eq:110-10} the metrics for both layers $l=1,2$ are identical, and the summation over $l=1,2$ produces \begin{eqnarray} & & \left\langle J_{E}^{i}\right\rangle _{\text{gCS}}=-\frac{\nu/2}{96\pi}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}}\label{eq:87-2-2},\\ & & \left\langle P_{i}\right\rangle _{\text{gCS}}=-\frac{\nu/2}{96\pi}g_{ik}\varepsilon^{kj}\partial_{j}\tilde{\mathcal{R}}.\nonumber \end{eqnarray} These are the topological bulk responses described in section \ref{subsubsec:Topological bulk responses from a gravitational Chern-Simons term}. It is also useful to consider order parameters of the form \begin{eqnarray} & & \Delta=\Delta_{0}e^{i\theta}\left(1,e^{i\phi}\right),\label{eq:87-9} \end{eqnarray} where $\phi$ is space dependent. Here the metrics satisfy $g^{xy}=g_{\left(1\right)}^{xy}=-g_{\left(2\right)}^{xy}=\Delta_{0}^{2}\cos\phi$, with the other components constant, and therefore the Ricci scalars satisfy $\mathcal{R}=\mathcal{R}_{\left(1\right)}=-\mathcal{R}_{\left(2\right)}$. 
The summation over $l=1,2$ for such order parameters then gives \begin{eqnarray} & & \left\langle J_{E}^{i}\right\rangle _{\text{gCS}}=-\left(\alpha_{1}-\alpha_{2}\right)\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}}.\label{eq:87-2-1} \end{eqnarray} Unlike the sum $\alpha_{1}+\alpha_{2}=\frac{\nu/2}{96\pi}$, the difference $\alpha_{1}-\alpha_{2}=2\alpha_{1}-\frac{\nu/2}{96\pi}$ is not quantized. The response \eqref{eq:87-2-1} is therefore not a topological bulk response. Measuring $\left\langle J_{E}\right\rangle $ for an order parameter such that $\mathcal{R}=\mathcal{R}_{\left(1\right)}=\mathcal{R}_{\left(2\right)}$, and then for an order parameter such that $\mathcal{R}=\mathcal{R}_{\left(1\right)}=-\mathcal{R}_{\left(2\right)}$, allows one to fix both $\alpha_{1},\alpha_{2}$, or both $\nu$ and $\alpha_{1}$. To demonstrate how closely \eqref{eq:87-2-1} can resemble a topological bulk response, we go back to the lattice model. In the relativistic limit we found that some trivial phases, where $\nu=0$, have $\alpha_{1}=\frac{\nu_{1}/2}{192\pi}\neq0$. It follows that these trivial phases have \textit{in the relativistic limit} a quantized response \begin{align} \left\langle J_{E}^{i}\right\rangle _{\text{gCS}}=-2\alpha_{1}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}}=-\frac{\nu_{1}}{96\pi}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}}, \end{align} for order parameters $\Delta=\Delta_{0}e^{i\theta}\left(1,e^{i\phi}\right)$. Another case of interest is when $z$ is a spatial coordinate. As an example, we take $z=y$. This decomposition is less natural in the $p$-wave SC, as can be seen from \eqref{eq:10-1}. It allows for time dependence, but restricts the configurations the order parameter can take at any given time. 
A simple example of an order parameter that gives rise to a product metric with respect to $y$ is $\Delta=\Delta_{0}e^{i\theta\left(t,x\right)}\left(1+f\left(t,x\right),\pm i\right)$, which is a perturbation of the $p_{x}\pm ip_{y}$ configuration with a small real function $f$. Then \begin{eqnarray} & & \left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gCS}}=-\frac{\nu/2}{96\pi}g_{\alpha\beta}\varepsilon^{\beta\gamma y}\partial_{\gamma}\tilde{\mathcal{R}},\label{eq:104} \end{eqnarray} where we have summed over $l=1,2$. This is an interesting contribution to the $x$-momentum current and energy current in the $y$ direction. If we consider, as in Fig.\ref{fig:A-comparison-of-1}, a boundary or domain wall at $y=0$, between a topological phase and a trivial phase where $\nu=0$, we see that there is an inflow of energy and $x$-momentum into the boundary from the topological phase. This shows that energy and $x$-momentum are accumulated on the boundary, at least locally, which corresponds to the boundary gravitational anomaly. We complete the analysis of this situation from the boundary point of view in Appendix \ref{subsec:Implication-for-the}. 
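To see why this order parameter yields a product metric with respect to $y$, one can check the spatial components of the symmetrized bilinear $\Delta^{(i}\Delta^{j)*}$ symbolically (up to the overall sign convention, which does not affect the product structure). The sketch below, with names of our own choosing, shows that the off-diagonal component vanishes, the $yy$ component is constant, and only the $xx$ component depends on $\left(t,x\right)$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)
Delta0 = sp.symbols('Delta_0', positive=True)
theta = sp.Function('theta', real=True)(t, x)
f = sp.Function('f', real=True)(t, x)

# Order parameter Delta = Delta0 e^{i theta(t,x)} (1 + f(t,x), +i)  (upper sign)
Delta = {x: Delta0 * sp.exp(sp.I * theta) * (1 + f),
         y: Delta0 * sp.exp(sp.I * theta) * sp.I}

def g_inv(i, j):
    # symmetrized bilinear Delta^{(i} Delta^{j)*} (overall sign convention dropped)
    return sp.simplify((Delta[i] * sp.conjugate(Delta[j])
                        + Delta[j] * sp.conjugate(Delta[i])) / 2)

gxy = g_inv(x, y)  # vanishes identically
gyy = g_inv(y, y)  # constant, Delta_0^2
gxx = g_inv(x, x)  # Delta_0^2 (1+f)^2, a function of (t,x) only
```

Since no component depends on $y$ and the $y$ direction decouples, the metric is indeed of the product form with $z=y$.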
\paragraph{Bulk response from the gravitational pseudo Chern-Simons term \label{subsec:Additional-contributions}} The gpCS term $-\beta_{1}\int_{M_{3}}\tilde{\mathcal{R}}e^{a}De_{a}$ contributes to the energy-momentum tensor, and also provides a contribution to the spin density, \begin{align} \left\langle \mathsf{J}^{\mu\nu}\right\rangle _{\text{gpCS}}=&\beta_{1}\left\{\frac{1}{\left|e\right|}\varepsilon^{\mu\nu\rho}\partial_{\rho}\tilde{\mathcal{R}}-\frac{1}{\left|e\right|}\varepsilon^{\mu\rho\sigma}\tilde{\mathcal{R}}T_{\rho\sigma}^{\nu}\right.\label{eq:92-1}+\left.2o\left[\left(\tilde{\nabla}^{\mu}\tilde{\nabla}^{\nu}-g^{\mu\nu}\tilde{\nabla}^{2}\right)-\tilde{\mathcal{R}}^{\mu\nu}\right]c \vphantom{\frac{1}{\left|e\right|}} \right\}.\\ \left\langle \mathsf{J}^{ab\mu}\right\rangle _{\text{gpCS}}=&\beta_{1}o\tilde{\mathcal{R}}\varepsilon^{abc}e_{c}^{\;\mu}.\nonumber \end{align} These are calculated in appendix \ref{subsec:Calculation-of-certain}. Using \eqref{eq:71-1}, the above contribution to the spin density corresponds to a contribution to the charge density, \begin{eqnarray} & & \left\langle J^{t}\right\rangle _{\text{gpCS}}=4\beta_{1}o\left|e\right|\tilde{\mathcal{R}}.\label{eq:98} \end{eqnarray} The most notable feature of this density is that it is not accompanied by a current, even for time dependent background fields, where $\partial_{\mu}\left\langle J^{\mu}\right\rangle =\partial_{t}\left\langle J^{t}\right\rangle \neq0$. This represents the non-conservation of fermionic charge in a $p$-wave SC \eqref{eq:15}. The appearance of $o$ can be understood from \eqref{eq:76}. One can also understand the appearance of $o$ based on time reversal symmetry. Since both $J^{t}$ and $\tilde{\mathcal{R}}$ are time reversal even, the coefficient of the above response cannot be $\beta_{1}$, which is time reversal odd. 
We now discuss the energy-momentum contributions $\left\langle \mathsf{J}^{\mu\nu}\right\rangle _{\text{gpCS}}$ in \eqref{eq:92-1}, with the purpose of comparing them to the gCS contributions $\left\langle \mathsf{J}^{\mu\nu}\right\rangle _{\text{gCS}}$. To do this in the simplest setting, we restrict to a product geometry with respect to the coordinate $z$ as described in the previous section. We will also assume for simplicity that torsion vanishes, and generalize to non-zero torsion in appendix \ref{subsec:Calculation-of-certain}. For a torsion-less product geometry $\left\langle \mathsf{J}^{\mu\nu}\right\rangle _{\text{gpCS}}$ reduces to \begin{align} -\left\langle \mathsf{J}^{\alpha z}\right\rangle _{\text{gpCS}}=\left\langle \mathsf{J}^{z\alpha}\right\rangle _{\text{gpCS}}=\beta_{1}\frac{1}{\left|e\right|}\varepsilon^{z\alpha\beta}\partial_{\beta}\tilde{\mathcal{R}}.\label{eq:94-1} \end{align} Note that while the gpCS term vanishes in a torsion-less geometry, the currents it produces, given by its functional derivatives, do not. Comparing with \eqref{eq:111}, we see that $\left\langle \mathsf{J}^{z\alpha}\right\rangle _{\text{gpCS}}\propto\left\langle \mathsf{J}^{z\alpha}\right\rangle _{\text{gCS}}$, while $\left\langle \mathsf{J}^{\alpha z}\right\rangle _{\text{gpCS}}\propto-\left\langle \mathsf{J}^{\alpha z}\right\rangle _{\text{gCS}}$, with proportionality constant $\alpha_{1}/\beta_{1}$, which goes to 1 in the relativistic limit. This demonstrates the similarity between the gpCS and gCS terms. 
In particular, we find in a time independent situation the following contributions to the energy current and momentum density, \begin{eqnarray} & & \left\langle J_{E}^{i}\right\rangle _{\text{gpCS}}=\left\langle t_{\text{cov}\;t}^{i}\right\rangle _{\text{gpCS}}=\beta_{1}\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}},\label{eq:92-2}\\ & & \left\langle P_{i}\right\rangle _{\text{gpCS}}=\left\langle t_{\text{cov}\;i}^{t}\right\rangle _{\text{gpCS}}=-\beta_{1}g_{ik}\varepsilon^{kj}\partial_{j}\tilde{\mathcal{R}}.\nonumber \end{eqnarray} Comparing with \eqref{eq:87-2}, we see that $\left\langle P_{i}\right\rangle _{\text{gpCS}}\propto\left\langle P_{i}\right\rangle _{\text{gCS}}$, while $\left\langle J_{E}^{i}\right\rangle _{\text{gpCS}}\propto-\left\langle J_{E}^{i}\right\rangle _{\text{gCS}}$. This sign difference can be understood from the density response \eqref{eq:98}, and the relation \eqref{eq:49} between the operators $J_{E}$ and $P$, in the relativistic limit. With vanishing torsion it reduces to \begin{eqnarray} & & J_{E}^{j}-g^{jk}P_{k}=\frac{o}{2}\varepsilon^{jk}\partial_{k}\left(\frac{1}{\left|e\right|}J^{t}\right). \end{eqnarray} Thus the gCS contributions \eqref{eq:87-2} satisfy $\left\langle J_{E}^{j}\right\rangle _{\text{gCS}}-g^{jk}\left\langle P_{k}\right\rangle _{\text{gCS}}=0$ because gCS does not contribute to the density. On the other hand, the gpCS does contribute to the density, which is why $\left\langle J_{E}^{j}\right\rangle _{\text{gpCS}}-g^{jk}\left\langle P_{k}\right\rangle _{\text{gpCS}}\neq0$. This conclusion holds regardless of the value of the coefficient $\beta_{1}$ of gpCS. One can therefore fix the value of $\beta_{1}$ by a measurement of the density, and thus separate the topological bulk responses (gCS) from the non-topological bulk responses (gpCS). More accurately, we have seen that the lattice model behaves as a bi-layer with layer index $l=1,2$, and there are actually two coefficients $\beta_{1},\beta_{2}$. 
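As a consistency check of the relation between $J_{E}$, $P$ and $J^{t}$ above (our algebra, using $o^{2}=1$ and that $o$ is constant), one can substitute \eqref{eq:92-2} and \eqref{eq:98}: \begin{align} \left\langle J_{E}^{j}\right\rangle _{\text{gpCS}}-g^{jk}\left\langle P_{k}\right\rangle _{\text{gpCS}} & =\beta_{1}\varepsilon^{jk}\partial_{k}\tilde{\mathcal{R}}+\beta_{1}g^{jk}g_{kl}\varepsilon^{lm}\partial_{m}\tilde{\mathcal{R}}=2\beta_{1}\varepsilon^{jk}\partial_{k}\tilde{\mathcal{R}},\nonumber \\ \frac{o}{2}\varepsilon^{jk}\partial_{k}\left(\frac{1}{\left|e\right|}\left\langle J^{t}\right\rangle _{\text{gpCS}}\right) & =\frac{o}{2}\varepsilon^{jk}\partial_{k}\left(4\beta_{1}o\tilde{\mathcal{R}}\right)=2\beta_{1}\varepsilon^{jk}\partial_{k}\tilde{\mathcal{R}}, \end{align} so the two sides agree, while for the gCS contributions both sides vanish.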
As in the previous section, one can extract both $\beta_{1},\beta_{2}$ by first considering an order parameter \eqref{eq:110-10} such that $\mathcal{R}=\mathcal{R}_{\left(1\right)}=\mathcal{R}_{\left(2\right)}$, and then considering an order parameter \eqref{eq:87-9} such that $\mathcal{R}=\mathcal{R}_{\left(1\right)}=-\mathcal{R}_{\left(2\right)}$. Another case of interest is when $z$ is a spatial coordinate, and as in the previous section we take $z=y$, $\Delta=\Delta_{0}e^{i\theta\left(t,x\right)}\left(1+f\left(t,x\right),\pm i\right)$. We then find from \eqref{eq:94-1}, $\left\langle \mathsf{J}^{y\alpha}\right\rangle _{\text{gpCS}}=\beta_{1}\frac{1}{\left|e\right|}\varepsilon^{y\alpha\beta}\partial_{\beta}\tilde{\mathcal{R}}$, or \begin{eqnarray} & & \left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gpCS}}=-\beta_{1}g_{\alpha\beta}\varepsilon^{\beta\gamma y}\partial_{\gamma}\tilde{\mathcal{R}}.\label{eq:104-1} \end{eqnarray} In the presence of a boundary (or domain wall) at $y=0$, this describes an inflow of energy and $x$-momentum from the bulk to the boundary, such that $\left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gpCS}}\propto\left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gCS}}$. After summing over $l=1,2$ one finds the proportionality constant $\frac{\alpha_{1}+\alpha_{2}}{\beta_{1}+\beta_{2}}$, which goes to 1 in the relativistic limit. Nevertheless, we argue that $\left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gCS}}$ corresponds to a boundary gravitational anomaly while $\left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gpCS}}$ does not, in accordance with section \ref{subsec:Boundary-anomalies}. The relation between gCS and the boundary gravitational anomaly is well known within the gravitational description, and is described from the $p$-wave SC point of view in Appendix \ref{subsec:Implication-for-the}. 
The fact that $\left\langle t_{\text{cov}\;\alpha}^{y}\right\rangle _{\text{gpCS}}$ is unrelated to any boundary anomaly follows from the $SO\left(1,2\right)$ and $Diff$ invariance of the gpCS term. Due to this invariance the bulk gpCS term produces not only the bulk currents \eqref{eq:92-1}, but also boundary currents, such that bulk+boundary energy-momentum is conserved. In a product geometry with $z=y$ we find the boundary currents \begin{align} \left\langle \mathsf{j}^{\alpha\beta}\right\rangle _{\text{gpCS}}=&-\beta_{1}\frac{1}{\left|e\right|}\varepsilon^{\alpha\beta y}\tilde{\mathcal{R}},\label{eq:98-3}\\ \left\langle \mathsf{j}^{ab\mu}\right\rangle _{\text{gpCS}}=&0,\nonumber \end{align} which are calculated in appendix \ref{subsec:Calculation-of-certain}. We see that \begin{align} \tilde{\nabla}_{\alpha}\left\langle \mathsf{j}^{\alpha\beta}\right\rangle _{\text{gpCS}}=\left\langle \mathsf{J}^{y\beta}\right\rangle _{\text{gpCS}}.\label{eq:99} \end{align} This conservation law is the statement of bulk+boundary conservation of energy-momentum within the gravitational description. It can be understood from \eqref{eq:60}, by noting that the source terms in \eqref{eq:60} vanish because $\left\langle \mathsf{j}^{ab\mu}\right\rangle _{\text{gpCS}}=0$, and because we assumed torsion vanishes. The additional source term $\left\langle \mathsf{J}^{y\beta}\right\rangle _{\text{gpCS}}$, absent in \eqref{eq:60}, represents the inflow from the bulk. In Appendix \ref{subsec:Implication-for-the} we translate \eqref{eq:99} to the language of the $p$-wave SC. \subsection{Discussion\label{sec:Conclusion-and-discussion}} \paragraph*{Beyond the relativistic limit} In this section we have shown that there is a topological bulk response of $p$-wave CSF/Cs to a perturbation of their order parameter, which follows from a gCS term, and we described a corresponding gravitational anomaly of the edge states. 
The coefficient of gCS was found to be $\alpha=\frac{c}{96\pi}$ where $c$ is the chiral central charge, as anticipated. We provided arguments, based on symmetry and topology, for the validity of these results beyond the relativistic limit in which they were computed. The appearance of torsion in the emergent geometry brought about a surprise: an additional term, closely related but distinct from gCS, which we referred to as gravitational pseudo Chern-Simons (gpCS), with a dimensionless coefficient $\beta=\frac{c}{96\pi}=\alpha$. The gpCS term is completely invariant under the symmetries we considered, and is therefore unrelated to edge anomalies. Therefore, the quantization of $\beta$ seems to be a property of the relativistic limit\footnote{The quantization of $\beta$ can be understood on dimensional grounds - in the relativistic limit there are simply not enough dimensionful quantities which can be used to construct dimensionless quantities, beyond $\text{sgn}\left(m\right)$ and $o$. Of course, this does not explain why $\beta=\alpha$ in the relativistic limit. }, which will not hold throughout the phase diagram. The above results are based on a careful mapping of $p$-wave CSF/Cs, in the regime where the order parameter is large, to relativistic Majorana spinors in a curved and torsionful space-time. Though the relativistic limit captures a rich geometric physics in CSF/Cs, it is in fact very limited in its ability to describe the full non-relativistic system, as discussed in Appendix \ref{subsec:Relativistic-limit-of}. This raises the question of whether the results obtained in this section truly characterize $p$-wave CSF/Cs, especially in the physically important weak coupling limit where $\Delta$ is small, which is opposite to the relativistic limit. Moreover, it is natural to ask whether the results obtained here apply to $\ell$-wave CSF/Cs with $\left|\ell\right|>1$, which do not admit a relativistic low energy description. 
These questions will be answered, in part, in Sec.\ref{sec:Main-section-2:}, where we perform a general non-relativistic analysis of continuum models for $\ell$-wave CSF/Cs. In order to address these questions in the context of lattice models, numerical studies seem to be required. Recently, such studies were initiated in the context of emergent gravity in Kitaev's honeycomb model \citep{PhysRevB.101.245116,PhysRevB.102.125152}, which is closely related to the $p$-wave CSF/Cs discussed in this section. For non-topological physical properties, the results show that the lattice model is well described by relativistic predictions, but only in the relativistic regime, where the masses of low energy relativistic fermions are small and the correlation length is large, as might be expected. An interesting question is whether the topological physics associated with the gCS term is more robustly captured by the relativistic predictions made in this section. \paragraph*{Real background geometry and manipulation of the order parameter} In this section we considered $p$-wave CSF/Cs in flat space, and focused on the emergent geometry described by a general $p$-wave order parameter. It is also natural to consider the effect of a real background geometry, obtained by deforming, or straining, the 2-dimensional sample in 3-dimensional space, possibly in a time dependent manner. Treating this at the level of the lattice model is beyond the scope of this thesis, but we can take the non-relativistic action \eqref{eq:13} as a starting point. 
On a general deformed sample it generalizes to \begin{align} & S\left[\psi;\Delta,A,G\right]=\int\mbox{d}^{2+1}x\sqrt{G}\left[\psi^{\dagger}\frac{i}{2}\overleftrightarrow{D_{t}}\psi-\frac{1}{2m^{*}}G^{ij}D_{i}\psi^{\dagger}D_{j}\psi+\left(\frac{1}{2}\Delta^{j}\psi^{\dagger}\partial_{j}\psi^{\dagger}+h.c\right)\right],\label{eq:120} \end{align} which now depends on three background fields: the order parameter $\Delta^{j}$, the $U\left(1\right)$ connection $A_{\mu}$, which enters $D_{\mu}=\partial_{\mu}-iA_{\mu}$, and includes a chemical potential, and the real background metric $G$, coming from the embedding of the 2-dimensional sample in 3-dimensional space, which corresponds to the strain tensor $u^{ij}=\left(G^{ij}-\delta^{ij}\right)/2$. This action is written for the fermion $\psi$, which satisfies $\left\{ \psi^{\dagger}\left(x\right),\psi\left(y\right)\right\} =\delta^{\left(2\right)}\left(x-y\right)/\sqrt{G\left(x\right)}$ as an operator. In this problem there are two (inverse) metrics, the real $G^{ij}$ and emergent $g^{ij}=\Delta^{(i}\Delta^{j)*}$, and it is interesting to study their interplay. In our analysis we focused on the relativistic limit, where $m^{*}\rightarrow\infty$. In this limit the metric $G$ completely decouples from the action, when written in terms of the fundamental fermion density $\tilde{\psi}=G^{1/4}\psi$ \citep{hawking1977zeta,fujikawa1980comment,abanov2014electromagnetic} . Thus, results obtained within the relativistic limit, are essentially unaffected by the background metric $G$. This conclusion is appropriate as long as the order parameter is treated as an independent background field, which is always suitable for the purpose of integrating out the gapped fermion density $\psi$. One then obtains the bulk currents and densities that we have described, which depend on the configuration of $\Delta$, and the question that remains is what this configuration physically is. Two scenarios are of importance. 
The first scenario is that of a proximity induced CSC, where the order parameter is induced by proximity to a conventional $s$-wave SC. In this case both $G$ and $g$ are background metrics, a scenario similar to a bi-metric description of anisotropic quantum Hall states \citep{gromov2017investigating}. In this case the magnitude of the order parameter depends on the distance between the sample and the $s$-wave SC, so if the position of the $s$-wave SC is fixed but the sample is deformed, a space-time dependent order parameter is obtained. One can also obtain the same effect by considering a flat sample, where $G$ is Euclidean, and an $s$-wave SC with a curved surface, leading to non-Euclidean $g$. This provides one route to a manipulation of the order parameter that will result in the bulk effects we have described. The second scenario is that of intrinsic CSCs, and CSFs, where the order parameter is dynamical. The order parameter splits into a massive Higgs field, which is the emergent metric $g^{ij}$, and a massless Goldstone field which is the overall phase $\theta$. The quantum theory of the emergent metric $g^{ij}$ is on its own an interesting problem, which will be discussed in Sec.\ref{sec:Discussion-and-outlook}. Nevertheless, as long as the probes $A,G$ are slow compared to the Higgs mass, $g^{ij}$ can be treated as fixed to its instantaneous ground state configuration, which in general will depend on the details of the attractive fermionic interaction. For an interaction that depends only on the geodesic distance, the ground state configuration is expected to be the curved space $p_{x}\pm ip_{y}$ configuration, where the pairing term is $\Delta_{0}e^{i\theta}\psi^{\dagger}\left(E_{1}^{\;j}\partial_{j}\pm iE_{2}^{\;j}\partial_{j}\right)\psi^{\dagger}$, see Appendix \ref{sec:microscopic model} and Refs.\citep{read2000paired,hoyos2014effective,moroz2015effective,quelle2016edge,moroz2016chiral}. 
Here $\Delta_{0}$ is a constant, $\theta$ is the Goldstone phase, and $E$ is a vielbein for the real metric $G$, such that $G^{ij}=E_{A}^{\;i}\delta^{AB}E_{B}^{\;j}$, which is a fixed background field\footnote{The $SO\left(2\right)_{L}$ ambiguity in choosing $E$ can be incorporated into $\theta$, which has $SO\left(2\right)_{L}$ charge 1.}. What this means, in the language of this section, is that the emergent metric is proportional to the real metric, $g^{ij}=\Delta_{0}^{2}G^{ij}$. It follows that the responses to the emergent metric $g$ that we have described are, in this case, responses to the real metric $G$. This suggests a second route to the manipulation of the order parameter that will result in the bulk effects we have described. Of course, in the intrinsic case one cannot ignore the dynamics of the Goldstone phase $\theta$, which will be gapless as long as $A$ is treated as a background field, as appropriate in CSFs. It is then interesting to see how the results obtained in this section are modified by the dynamics of $\theta$. The interplay between the topological gCS term and the $\theta$-dependent gpCS term suggests a non-trivial modification, which will be explored in Sec.\ref{sec:Main-section-2:}. \paragraph*{Towards experimental observation } There are a few basic questions that arise when trying to make contact between the phenomena described in this section and a possible experimental observation. Here we take as granted that one has at one's disposal either a $p$-wave CSF/C, or a candidate material. The first question is how to manipulate the Higgs part of the order parameter, which is the emergent metric, and was discussed above. The second natural question is how to measure energy currents and momentum densities. Also relevant, though not accentuated in this section, is a measurement of the stress tensor, comprised of the spatial components of the energy-momentum tensor. 
One possible approach, which provides both a means to manipulate the order parameter and a measurement of energy-momentum-stress, is a measurement of the phonon spectrum à la \citep{barkeshli2012dissipationless,schmeltzer2014propagation,schmeltzer2017detecting}. For the gpCS term, apart from energy-momentum-stress, there is also the density response \eqref{eq:9-1}, which is a simpler quantity to measure, though not a \textit{topological} bulk response. The need to measure energy-momentum-stress may be avoided altogether in a Galilean invariant system, where electric current and momentum density are closely related. The simplest scenario is that of a $p$-wave CSF on a curved sample \eqref{eq:120}, where one assumes that the emergent metric follows the real metric, $g^{ij}=\Delta_{0}^{2}G^{ij}$. Here the electric current is related to the momentum density by \begin{align} & J^{i}=-\frac{G^{ij}}{m^{*}}P_{j}. \end{align} Our result \eqref{eq:4} then implies that the expectation value $\left\langle J^{i}\right\rangle $ has a contribution related to the gCS term, \begin{align*} & \left\langle J^{i}\right\rangle _{\text{gCS}}=-\frac{G^{ij}}{m^{*}}\left\langle P_{j}\right\rangle _{\text{gCS}}=\frac{1}{m^{*}}\frac{c}{96\pi}\hbar\varepsilon^{ij}\partial_{j}\tilde{\mathcal{R}}. \end{align*} This is not a topological response per se, due to the appearance of $m^{*}$. But, if $m^{*}$ is known, then the central charge $c$ can be extracted from a measurement of the electric current, which may be simpler to measure than energy-momentum-stress. This motivates the study of Galilean invariant CSFs, which will be pursued in Sec.\ref{sec:Main-section-2:}. 
\paragraph*{Hall viscosity and torsional Hall viscosity} In the effective action \eqref{eq:72} for a Majorana spinor in Riemann-Cartan space-time, there is a term $\left(\zeta_{H}/2\right)\int e_{a}De^{a}$, which describes an energy-momentum-stress response termed \textit{torsional Hall viscosity} \citep{hughes2011torsional,hughes2013torsional,parrikar2014torsion}, due to its similarities with the odd (or Hall) viscosity $\eta_{\text{o}}$ that occurs in quantum Hall states and CSF/Cs \citep{avron1995viscosity,read2009non,abanov2014electromagnetic,hoyos2014effective,moroz2015effective}. As opposed to the well understood $\eta_{\text{o}}$, the torsional Hall viscosity $\zeta_{H}$ remains somewhat controversial \citep{geracie2014hall,hoyos2014hall,bradlyn2015low}, because it only appears if a non-symmetric stress tensor is used, and because it is UV-sensitive. The latter is also the reason that, in this section, we refrained from interpreting $\zeta_{H}$ in the context of $p$-wave CSF/Cs. However, the mapping between $p$-wave CSF/Cs and relativistic Majorana spinors in Riemann-Cartan space-time developed in this section, along with the existence of a Hall viscosity on one side of the mapping, and of a torsional Hall viscosity on the other, strongly suggests that the two types of Hall viscosity should map to one another, in this context. Moreover, it is known that the gCS term implies universal corrections to $\eta_{\text{o}}$ at non-zero wave-vector or in curved background \citep{abanov2014electromagnetic,bradlyn2015topological,klevtsov2015geometric}, and that the gpCS term implies curvature corrections to $\zeta_{H}$ \citep{hughes2013torsional}, which we expect to be non-universal. These observations provide us with a strong motivation to study the interplay between torsional Hall viscosity, Hall viscosity, and chiral central charge in CSFs, which we do in Sec.\ref{sec:Main-section-2:}. 
\pagebreak{} \section{Boundary central charge from bulk odd viscosity: chiral superfluids\label{sec:Main-section-2:}} As discussed in Sec.\ref{sec:Conclusion-and-discussion}, our analysis of CSF/Cs within the relativistic limit raises many questions which require a fully non-relativistic treatment. These questions revolve around the quantization of coefficients, application of strain (or a background metric), odd viscosity, dynamics of the Goldstone mode, and Galilean symmetry, and will be addressed in this section. In particular, we will answer the questions posed in Sec.\ref{subsec:Odd-viscosity}, by deriving a low energy effective field theory that consistently captures both the chiral Goldstone mode implied by the symmetry breaking pattern \eqref{eq:2-1-1}, and the gCS term discussed in Sec.\ref{sec:Main-section-1:}. This theory unifies and extends the seemingly unrelated analyses of Refs.\citep{son2006general,hoyos2014effective,moroz2015effective} and Sec.\ref{sec:Main-section-1:}. \subsection{Building blocks for the effective field theory\label{sec: building blocks}} In order to probe a CSF, we minimally couple it to two background fields - a time-dependent spatial metric $G_{ij}$, which we use to apply strain $u_{ij}=\left(G_{ij}-\delta_{ij}\right)/2$ and strain-rate $\partial_{t}u_{ij}$, and a $U\left(1\right)_{N}$-gauge field $A_{\mu}=\left(A_{t},A_{i}\right)$, where we absorb a chemical potential $A_{t}=-\mu+\cdots$. The microscopic action $S$ is then invariant under $U\left(1\right)_{N}$ gauge transformations, implying the number conservation $\partial_{\mu}(\sqrt{G}J^{\mu})=0$, where $\sqrt{G}J^{\mu}=-\delta S/\delta A_{\mu}$. It is also clear that $S$ is invariant under \textit{spatial} diffeomorphisms generated by $\delta x^{i}=\xi^{i}\left(\mathbf{x}\right)$, if $G_{ij}$ transforms as a tensor and $A_{\mu}$ as a 1-form. 
Less obvious is the fact that a Galilean invariant fluid is additionally symmetric under $\delta x^{i}=\xi^{i}\left(t,\mathbf{x}\right)$, provided one adds to the transformation rule of $A_{i}$ a non-standard piece that depends on the non-relativistic mass $m$ \citep{son2006general,hoyos2012hall,hoyos2014effective,gromov2014density,Andreev:2014aa,geracie2015spacetime,Andreev:2015aa,Geracie:2017aa}, \begin{align} & \delta A_{i}=-\xi^{k}\partial_{k}A_{i}-A_{k}\partial_{i}\xi^{k}+mG_{ij}\partial_{t}\xi^{j}.\label{eq:4-3-1-1} \end{align} We refer to $\delta x^{i}=\xi^{i}\left(t,\mathbf{x}\right)$ as \textit{local Galilean symmetry} (LGS), as it can be viewed as a local version of the Galilean transformation $\delta x^{i}=v^{i}t$. The LGS implies the momentum conservation law\footnote{We use the notation $\varepsilon^{\mu\nu\rho}$ for the totally anti-symmetric (pseudo) tensor, normalized such that $\varepsilon^{xyt}=1/\sqrt{G}$, as well as $\varepsilon^{ij}=\varepsilon^{ijt}$.} \begin{align} & \frac{1}{\sqrt{G}}\partial_{t}\left(\sqrt{G}mJ^{i}\right)+\nabla_{j}T^{ji}=nE_{i}+\varepsilon^{ij}J_{j}B,\label{eq:5-3-1} \end{align} where $\sqrt{G}T^{ij}=2\delta S/\delta G^{ij}$ is the stress tensor and the right hand side is the Lorentz force. This identifies the momentum density $P^{i}=mJ^{i}$ - a familiar Galilean relation. Since CSFs spontaneously break the rotation symmetry in flat space, in order to describe them in curved, or strained, space, it is necessary to introduce a background vielbein. This is a field $E_{\;j}^{A}$ valued in $GL\left(2\right)$, such that $G_{ij}=E_{\;i}^{A}\delta_{AB}E_{\;j}^{B}$, where $A,B\in\left\{ 1,2\right\} $. For a given metric $G$ the vielbein $E$ is not unique - there is an internal $O\left(2\right)_{P,L}=\mathbb{Z}_{2,P}\ltimes SO\left(2\right)_{L}$ ambiguity, or symmetry, acting by $E_{\;j}^{A}\mapsto O_{\;B}^{A}E_{\;j}^{B}$, $O\in O\left(2\right)_{P,L}$. 
The generators $L,P$ correspond to \textit{internal} spatial rotations and reflections, and are analogs of angular momentum and spatial reflection (parity), acting on the tangent space rather than on space itself. The inverse vielbein $E_{B}^{\;j}$ is defined by $E_{\;j}^{A}E_{B}^{\;j}=\delta_{B}^{A}$. The charge $N+\left(\ell/2\right)L$ of the Goldstone field $\theta$ implies the covariant derivative \begin{align} & \nabla_{\mu}\theta=\partial_{\mu}\theta-A_{\mu}-s_{\theta}\omega_{\mu},\label{eq:6} \end{align} with a \textit{geometric spin} $s_{\theta}=\ell/2$. Here $\omega_{\mu}$ is the non-relativistic spin connection, an $SO\left(2\right)_{L}$-gauge field which is $\ensuremath{E_{\;j}^{A}}$-compatible, see Appendix \ref{sec:Geometric quantities}. So far we assumed that the microscopic fermion $\psi$ does not carry a geometric spin, $s_{\psi}=0$, which defines the physical system of interest, see Fig.\ref{fig:Comparison-of-the-1-1}(a). It will be useful, however, to generalize to $s_{\psi}\in\left(1/2\right)\mathbb{Z}$. A non-zero $s_{\psi}$ changes the SSB pattern \eqref{eq:2-1-1}, by modifying the geometric spin of the Goldstone field to $s_{\theta}=s_{\psi}+\ell/2$ and the unbroken generator to $L-s_{\theta}N$. In the special case $s_{\psi}=-\ell/2$ the Cooper pair is geometrically spin-less and $L$ is unbroken, as in an $s$-wave SF, see Fig.\ref{fig:Comparison-of-the-1-1}(b). This $s_{\theta}=0$ CSF is, however, distinct from a conventional $s$-wave SF, because $P$ and $T$ are still broken down to $PT$, and we therefore refer to it as a \textit{geometric} $s$-wave (g$s$-wave) CSF, to distinguish the two. In particular, a central charge $c\neq0$, which is $P,T$-odd, is not forbidden by symmetry, and is in fact independent of $s_{\psi}$. This makes the g$s$-wave CSF particularly useful for our purposes. 
We note that $\omega_{\mu}$ transforms as a 1-form under LGS only if $B/2m$ is added to $\omega_{t}$ \citep{hoyos2014effective,moroz2015effective}, which we do implicitly throughout. For $\psi$, this is equivalent to adding a g-factor $g_{\psi}=2s_{\psi}$ \citep{geracie2015spacetime}. \begin{figure}[!th] \begin{centering} \includegraphics[width=0.6\linewidth]{Fig9.pdf} \par\end{centering} \caption{(a) A CSF is comprised of fermions $\psi$ which carry no \textit{geometric }spin, $s_{\psi}=0$, which form Cooper pairs with relative angular momentum $\ell\in\mathbb{Z}$ (red arrows). The geometric spin $s_{\theta}=\ell/2$ of the Cooper pair gives rise to the $\mathbf{q}=\mathbf{0}$ odd viscosity \eqref{eq:1-1}, with $s=s_{\theta}$. The CSF supports boundary degrees of freedom (dashed orange) with a chiral central charge $c\in\left(\ell/2\right)\mathbb{Z}$, which cannot be extracted from the odd viscosity $\eta_{\text{o}}\left(\mathbf{q}\right)$ alone \eqref{eq:14-1}. (b) In an auxiliary CSF the fermion $\tilde{\psi}$ is assigned a geometric spin $\tilde{s}_{\psi}=-\ell/2$ (blue arrows). The geometric spin of the Cooper pair therefore vanishes, $\tilde{s}_{\theta}=\ell/2+\tilde{s}_{\psi}=0$, as in an $s$-wave superfluid, but the central charge is unchanged, $\tilde{c}=c$. As a result, the small $\mathbf{q}$ behavior of the odd viscosity $\tilde{\eta}_{\text{o}}\left(\mathbf{q}\right)$ depends only on $c$ \eqref{eq:18-1-1}. The improved odd viscosity of the CSF is defined as the odd viscosity of the auxiliary CSF, and is given explicitly by \eqref{eq:16-2-1}. 
\label{fig:Comparison-of-the-1-1}} \end{figure} \subsection{Effective field theory\label{sec: effective field theory}} Based on the above characterization of CSFs, the low-energy, long-wavelength behavior of the system can be captured by an effective action $S_{\text{eff}}\left[\theta;A,G;\ell,c\right]$, formally obtained by integrating out all massive degrees of freedom - the single fermion excitations and the Higgs fields. In this Section we describe a general expression for $S_{\text{eff}}$, compatible with the symmetries, SSB pattern, and ground state topology of CSFs. The effective action can be written order by order in a derivative expansion, with the power counting scheme $\partial_{\mu}=O\left(p\right),\;A_{\mu},G_{ij}=O\left(1\right),\;\theta=O\left(p^{-1}\right)$ \citep{son2006general,hoyos2014effective}. The spin connection is a functional of $G_{ij}$ that involves a single derivative (see Appendix \ref{sec:Geometric quantities}), so $\omega_{\mu}=O\left(p\right)$. Denoting by $\mathcal{L}_{n}$ the term in the Lagrangian which is $O\left(p^{n}\right)$ and invariant under all symmetries, we have $S_{\text{eff}}=\sum_{n=0}^{\infty}\int\text{d}^{2}x\text{d}t\sqrt{G}\mathcal{L}_{n}$. The desired $q^{2}$ corrections to $\eta_{\text{o}}$ are $O\left(p^{3}\right)$, which poses the main technical difficulty. The leading order Lagrangian \begin{align} & \mathcal{L}_{0}=P\left(X\right),\;X=\nabla_{t}\theta-\frac{1}{2m}G^{ij}\nabla_{i}\theta\nabla_{j}\theta,\label{eq:9-0} \end{align} was studied in detail in \citep{hoyos2014effective}, and contains the earlier results of \citep{volovik1988quantized,goryo1998abelian,goryo1999observation,furusaki2001spontaneous,stone2004edge,roy2008collective,lutchyn2008gauge}.
Here $X$ is the unique $O\left(1\right)$ scalar, which reduces to the chemical potential $\mu$ in the ground state(s) $\partial_{\mu}\theta=0$, and $P$ is an arbitrary function of $X$ that physically corresponds to the ground state pressure $P_{0}=P\left(\mu\right)$. The function $P$ also determines the ground state density $n_{0}=P'\left(\mu\right)$, and the leading dispersion of the Goldstone mode $\omega^{2}=c_{s}^{2}q^{2}$, where $c_{s}^{2}=\partial_{n_{0}}P_{0}/m=P'\left(\mu\right)/\left[mP''\left(\mu\right)\right]$ is the speed of sound. For $\ell\neq0$, the spin connection appears in each $\nabla\theta$, see Eq.\eqref{eq:6}, and so $\mathcal{L}_{0}$ includes $O\left(p\right)$ contributions, which produce the leading odd viscosity and conductivity, discussed below. There are no additional terms at $O\left(p\right)$, so that $\mathcal{L}_{1}=0$ \citep{hoyos2014effective}. At $O\left(p^{2}\right)$ one has \begin{align} \mathcal{L}_{2}= & F_{1}\left(X\right)R+F_{2}\left(X\right)\left[mK_{\;i}^{i}-\nabla^{2}\theta\right]^{2}+F_{3}\left(X\right)\left[2m\left(\nabla_{i}K_{\;j}^{j}-\nabla^{j}K_{ji}\right)\nabla^{i}\theta\right]+\cdots,\label{eq:10-1-0} \end{align} where $K_{ij}=\partial_{t}G_{ij}/2$ and $R$ are the extrinsic curvature and Ricci scalar of the spatial slice at time $t$ \citep{carroll2004spacetime}, the $F$s are arbitrary functions of $X$, and dots indicate additional terms which do not contribute to $\eta_{\text{o}}$ up to $O\left(p^{2}\right)$, see Appendix \ref{subsec: Second order effective action} for the full expression. The Lagrangian $\mathcal{L}_{2}$ was obtained in \citep{son2006general} for $s$-wave SFs. For $\ell\neq0$ the spin connection in $\nabla\theta$ produces $O\left(p^{3}\right)$ contributions to $\mathcal{L}_{2}$, and, in turn, non-universal $q^{2}$ corrections to $\eta_{\text{o}}$. The term $\mathcal{L}_{3}$ is the last ingredient required for reliable results at $O\left(p^{3}\right)$.
Most importantly, it includes the non-relativistic gCS term \citep{bradlyn2015topological,gromov2016boundary}, $\mathcal{L}_{3}=\mathcal{L}_{\text{gCS}}+\cdots$, where \begin{align} \mathcal{L}_{\text{gCS}} & =-\frac{c}{48\pi}\omega\text{d}\omega.\label{eq:10-0} \end{align} Here $\omega\text{d}\omega=\varepsilon^{\mu\nu\rho}\omega_{\mu}\partial_{\nu}\omega_{\rho}$, and $c$ is the chiral central charge of the boundary degrees of freedom, as required to match the boundary gravitational anomaly \citep{kraus2006holographic,stone2012gravitational,PhysRevB.98.064503}. Unlike the lower order terms, $\mathcal{L}_{\text{gCS}}$ is independent of $\theta$, and encodes only the response of the fermionic topological phase to the background metric. The direct confirmation of \eqref{eq:10-0} within a non-relativistic microscopic model has been anticipated for some time \citep{volovik1990gravitational,read2000paired}, and is a main result of Appendix \ref{sec:microscopic model}. In Appendices \ref{subsec:Odd-viscosity-from} and \ref{subsec:Additional terms at third order} we argue that additional terms in $\mathcal{L}_{3}$ do not produce $q^{2}$ corrections to $\eta_{\text{o}}$. There are three symmetry-allowed topological terms that can be added to $S_{\text{eff}}$ \citep{ferrari2014fqhe,can2014fractional,abanov2014electromagnetic,gromov2014density,gromov2015framing,can2015geometry,klevtsov2015geometric,bradlyn2015topological,Klevtsov_2016,klevtsov2017laughlin,Cappelli_2018}. These are the $U\left(1\right)$ Chern-Simons (CS) and first and second Wen-Zee (WZ1, WZ2) terms, which can be added to $\mathcal{L}_{1}$, $\mathcal{L}_{2}$, $\mathcal{L}_{3}$, respectively \footnote{These are not LGS invariant, and would need to be modified along the lines of \citep{hoyos2012hall,Andreev:2014aa,Andreev:2015aa}.
}, \begin{align} & \frac{\nu}{4\pi}\left(A\text{d}A-2\overline{s}\omega\text{d}A+\overline{s^{2}}\omega\text{d}\omega\right).\label{eq:11} \end{align} As our notation suggests, WZ2 and gCS are identical for the purpose of local bulk responses, of interest here, but the two are globally distinct \citep{bradlyn2015topological,gromov2016boundary,Cappelli_2018}. Based on symmetry, and ignoring boundary physics, the independent coefficients $\nu$, $\nu\overline{s}$, and $\nu\overline{s^{2}}$ obey certain quantization conditions \citep{witten2007three}, but are otherwise unconstrained. The absence of a boundary $U\left(1\right)_{N}$-anomaly then fixes $\nu=0$ \citep{PhysRevB.98.064503}, but leaves $\nu\overline{s},\nu\overline{s^{2}}$ undetermined \citep{bradlyn2015topological,gromov2016boundary,Cappelli_2018}. One can argue that a Chern-Simons term can only appear for the unbroken generator $L-s_{\theta}N$, so that $\nu=0$ implies $\nu\overline{s}=\nu\overline{s^{2}}=0$. Moreover, in the following section we will see that a perturbative computation within a canonical model for $\ell=\pm1$ gives $\nu\overline{s}=\nu\overline{s^{2}}=0$; owing to the quantization of $\nu\overline{s},\nu\overline{s^{2}}$, this conclusion extends to any deformation of the model that preserves the symmetries, SSB pattern, and single fermion gap. Accordingly, we set $\nu\overline{s}=\nu\overline{s^{2}}=0$ in the following. \subsection{Benchmarking the effective theory against a microscopic model\label{subsec:Benchmarking-the-effective}} In this section we take a complementary approach and compute $S_{\text{eff}}$ perturbatively, starting from a canonical microscopic model for a $p$-wave CSF. The perturbative computation verifies the general expression in a particular example, and determines the coefficients of topological terms which are not completely fixed by symmetry.
It also gives one a sense of the behavior of the coefficients of non-topological terms as a function of microscopic parameters such as the chemical potential $\mu$ and mass $m$. In particular, the results of Sec.\ref{sec:Main-section-1:} are reproduced in the relativistic limit $m\rightarrow \infty$. Here we will outline the computation and describe its results, omitting many technical details which can be found in Appendix \ref{sec:microscopic model}. The microscopic model is given by \begin{align} S_{\text{m}}=\int\mbox{d}^{2}x & \text{d}t\sqrt{G}\left[\frac{i}{2}\psi^{\dagger}\overleftrightarrow{\nabla_{t}}\psi-\frac{1}{2m}G^{ij}\nabla_{i}\psi^{\dagger}\nabla_{j}\psi+\left(\frac{1}{2}\Delta^{j}\psi^{\dagger}\nabla_{j}\psi^{\dagger}+h.c\right)-\frac{1}{2\lambda}G_{ij}\Delta^{i*}\Delta^{j}\right],\label{eq:3-1-1} \end{align} where the covariant derivative of the spin-less (or single component) fermion $\psi$ is $\nabla_{\mu}=\partial_{\mu}+iA_{\mu}+is_{\psi}\omega_{\mu}$. Note that we allow for a \textit{geometric} spin $s_{\psi}$, as discussed in Sec.\ref{sec: building blocks}. Apart from the standard non-relativistic kinetic term, the action includes the simplest attractive two-body interaction \citep{Volovik:1988aa,quelle2016edge}, mediated by the complex vector $\Delta^{i}$, the order parameter, with coupling constant $\lambda>0$. A simplified version of this action was already discussed in Sec.\ref{sec:Conclusion-and-discussion}. 
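For a constant order parameter, the fermionic part of \eqref{eq:3-1-1} is a quadratic Bogoliubov-de Gennes (BdG) problem, and its band topology can be checked numerically. The following sketch computes the Chern number $C$ of a lattice-regularized $p_{x}+ip_{y}$ BdG Hamiltonian by the standard discretized Berry-curvature (Fukui-Hatsuda-Suzuki) method; the lattice regularization, hopping $t$, pairing scale $D$, and grid size are illustrative assumptions, not part of the continuum model:

```python
import numpy as np

def chern_bdg(mu, t=1.0, D=0.5, N=40):
    """Chern number of a lattice-regularized p_x + i p_y BdG Hamiltonian,
    via the discretized Berry curvature (Fukui-Hatsuda-Suzuki) method."""
    ks = 2*np.pi*np.arange(N)/N
    u = np.empty((N, N, 2), dtype=complex)
    for a, kx in enumerate(ks):
        for b, ky in enumerate(ks):
            xi = -2*t*(np.cos(kx) + np.cos(ky)) - mu   # lattice dispersion
            d = D*(np.sin(kx) - 1j*np.sin(ky))          # p_x + i p_y pairing
            h = np.array([[xi, d], [np.conj(d), -xi]])
            u[a, b] = np.linalg.eigh(h)[1][:, 0]         # occupied BdG band
    F = 0.0
    for a in range(N):          # Berry flux through each plaquette of the BZ grid
        for b in range(N):
            u1, u2 = u[a, b], u[(a + 1) % N, b]
            u3, u4 = u[(a + 1) % N, (b + 1) % N], u[a, (b + 1) % N]
            F += np.angle(np.vdot(u1, u2)*np.vdot(u2, u3)
                          *np.vdot(u3, u4)*np.vdot(u4, u1))
    return round(F/(2*np.pi))

# Chemical potential above the lattice band bottom (-4t): topological, |C| = 1;
# below the band bottom: trivial, C = 0
assert abs(chern_bdg(mu=-2.0)) == 1
assert chern_bdg(mu=-5.0) == 0
```

With the chemical potential measured from the lattice band bottom at $-4t$, the result is consistent with the standard Majorana counting $c=C/2$ for a BdG band.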
For a given $\Delta^{j}$, the fermion $\psi$ is gapped, unless the chemical potential $\mu$ or chirality $\ell=\text{sgn}\left(\text{Im}\left(\Delta^{x}\Delta^{y*}\right)\right)$ is tuned to 0, and forms a fermionic topological phase characterized by the boundary chiral central charge \citep{read2000paired,volovik2009universe,ryu2010topological} \begin{align} & c=-\left(\ell/2\right)\Theta\left(\mu\right)\in\left\{ 0,\pm1/2\right\} .\label{eq:12-1-2} \end{align} An effective action $S_{\text{eff},\text{m}}\left[\Delta;A,G\right]$ for $\Delta^{j}$ in the background $A_{\mu},G_{ij}$ is then obtained by integrating over the fermion. The subscript 'm' indicates that this is obtained from the particular microscopic model $S_{\text{m}}$. Since Eq.\eqref{eq:3-1-1} is quadratic in $\psi,\psi^{\dagger}$, obtaining $S_{\text{eff},\text{m}}$ is formally straightforward, and leads to a functional Pfaffian. To zeroth order in derivatives $S_{\text{eff},\text{m}}\left[\Delta;A,G\right]=-V\left[\Delta;G\right]$, where the potential $V$ is minimized by the $p_{x}\pm ip_{y}$ order parameter. In flat space this is given by the familiar $\Delta^{j}\partial_{j}=\Delta_{0}e^{-2i\theta}\left(\partial_{x}\pm i\partial_{y}\right)$. Here $\Delta_{0}$ is a fixed function of $m,\mu$ and $\lambda$, determined by the minimization, while the phase $\theta$ and chirality $\ell=\pm1$ are undetermined. In order to write down the $p_{x}\pm ip_{y}$ configuration in curved space it is necessary to use the background vielbein \citep{hoyos2014effective,quelle2016edge} \begin{align} \Delta^{j} & =\Delta_{0}e^{-2i\theta}\left(E_{1}^{\;j}\pm iE_{2}^{\;j}\right).\label{eq:12-3} \end{align} Fluctuations of $\Delta$ away from these configurations correspond to massive Higgs modes, which should in principle be integrated out to obtain a low energy action $S_{\text{eff},\text{m}}\left[\theta;A,G\right]$ that can be compared with the general $S_{\text{eff}}$ of the previous section.
We will simply ignore these fluctuations, and obtain $S_{\text{eff},\text{m}}\left[\theta;A,G\right]$ by plugging Eq.\eqref{eq:12-3} into $S_{\text{eff},\text{m}}\left[\Delta;A,G\right]$. This will suffice as a derivation of $S_{\text{eff}}$ from a microscopic model. A proper treatment of the massive Higgs modes will only further renormalize the coefficients we find, apart from the central charge $c$. To practically compare the actions $S_{\text{eff}}$ and $S_{\text{eff},\text{m}}$ we expand them in fields, to second order around $\theta=0,A_{\nu}=-\mu\delta_{\nu}^{t},G_{ij}=\delta_{ij}$, and in derivatives, to third order, see appendices \ref{subsec:Effective-action-and} and \ref{subsec:Perturbative-expansion}. Equating these two double expansions leads to an overdetermined system of equations for the phenomenological parameters in $S_{\text{eff}}$ in terms of the microscopic parameters in $S_{\text{m}}$, with a unique solution. In particular, we find the dimensionless parameters \begin{align} \frac{P''}{m}= & \frac{1}{2\pi}\begin{cases} 1\\ \frac{1}{1+2\kappa} \end{cases},\;\;\;\;\;F_{1}'=\frac{1}{96\pi}\begin{cases} 1\\ \frac{3}{1+2\kappa} \end{cases},\;\;\;\;\;mF_{2}=-\frac{1}{128\pi}\begin{cases} 1+2\kappa\\ \frac{1}{1+2\kappa} \end{cases},\label{eq:c2}\\ mF_{3}= & \frac{1}{48\pi}\begin{cases} 1+\kappa\\ \frac{1}{1+2\kappa} \end{cases},\;\;\;\;\;c=\begin{cases} -\ell/2\\ 0 \end{cases},\nonumber \end{align} where $\kappa=\left|\mu\right|/m\Delta_{0}^{2}>0$, primes denote derivatives with respect to $\mu$, and the cases refer to $\mu>0$ and $\mu<0$. We note that for $\mu>0$ there is a single particle Fermi surface, with energy $\varepsilon_{F}=\mu$ and wave-vector $k_{F}=\sqrt{2m\mu}$, which for small $\lambda$ will acquire an energy gap $\varepsilon_{\Delta}=\Delta_{0}k_{F}\ll\varepsilon_{F}$. 
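These scales, and the thermodynamic relations of Sec.\ref{sec: effective field theory}, can be checked symbolically. A sympy sketch (the quadratic pressure below is an illustrative assumption, obtained by integrating the constant $\mu>0$ value of $P''$ in \eqref{eq:c2} with $n_{0}\left(0\right)=0$):

```python
import sympy as sp

m, mu, Delta0 = sp.symbols('m mu Delta_0', positive=True)

# Weak-coupling scales for mu > 0
kappa = mu/(m*Delta0**2)
eps_F, k_F = mu, sp.sqrt(2*m*mu)
eps_Delta = Delta0*k_F
# the small parameter of the weak-coupling regime
assert sp.simplify(eps_Delta/eps_F - sp.sqrt(2/kappa)) == 0

# Illustrative pressure with P''/m = 1/(2*pi), the mu > 0 value of Eq. (c2)
P = m*mu**2/(4*sp.pi)
n0 = sp.diff(P, mu)                           # ground state density P'(mu)
cs2 = sp.diff(P, mu)/(m*sp.diff(P, mu, 2))    # c_s^2 = P'/(m P'')
assert n0.equals(m*mu/(2*sp.pi)) and cs2.equals(mu/m)
```

The last two lines recover the free-fermion density and a speed of sound $c_{s}^{2}=\mu/m$ under the stated assumption.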
In this weak-coupling regime, it is natural to parametrize the coefficients in \eqref{eq:c2} using the small parameter $\varepsilon_{\Delta}/\varepsilon_{F}=\sqrt{2/\kappa}$. The coefficient $P''$ determines the leading odd (or Hall) conductivity and has been computed previously in the literature \citep{volovik1988quantized,goryo1998abelian,goryo1999observation,furusaki2001spontaneous,stone2004edge,roy2008collective,lutchyn2008gauge,hoyos2014effective,ariad2015effective}, while $F_{1},F_{2}$ and $F_{3}$, to the best of our knowledge, have not been computed previously, even for an $s$-wave SF. Crucially, Eq.\eqref{eq:c2} shows that the coefficient $c$ of the bulk gCS term matches the known boundary central charge \eqref{eq:12-1-2}, which is a main result of the perturbative computation. It follows that there is no WZ2 term in $S_{\text{eff},\text{m}}$, so $\nu\overline{s^{2}}=0$, in accordance with the general discussion of the previous section. We additionally confirm that $\nu=\nu\overline{s}=0$. A few additional comments regarding Eq.\eqref{eq:c2} are in order: \begin{enumerate} \item The seeming quantization of $P''/m$ and $F_{1}'$ for $\mu>0$ is a non-generic result, as was shown explicitly for $P''/m$ \citep{ariad2015effective}. \item In the free fermion limit $\kappa\rightarrow\infty$, or $\Delta_{0}\rightarrow0$, certain coefficients in \eqref{eq:c2} diverge for $\mu>0$ but not for $\mu<0$. This signals the breakdown of the gradient expansion for a gapless Fermi surface, but not for gapped free fermions. \item The opposite limit, $\kappa\rightarrow0$, or $m\rightarrow\infty$, is the relativistic limit studied in Sec.\ref{sec:Main-section-1:}, in which the fermionic part of the model reduces to a 2+1 dimensional Majorana spinor with mass $\mu$ and speed of light $\Delta_{0}$, coupled to a Riemann-Cartan geometry described by $\Delta^{i},\;A_{\mu}$.
Accordingly, there is a sense in which the limit $\kappa\rightarrow0$ of $S_{\text{eff,m}}$ reproduces the effective action of a massive Majorana spinor in Riemann-Cartan space-time (see Sec.\ref{sec:Bulk-response}). In particular, in the limit $\kappa\rightarrow0$ the dimensionless coefficients \eqref{eq:c2} are all quantized, as expected based on dimensional analysis. Apart from $c$, only the coefficient $F_{1}'$ is discontinuous at $\mu=0$ within this limit, with a quantized discontinuity $-\left(\ell/4\right)\left[F'_{1}\left(0^{+}\right)-F'_{1}\left(0^{-}\right)\right]=\left(\ell/2\right)/96\pi$ that matches the coefficient $\beta$ of the \textit{gravitational pseudo Chern-Simons} term of Sec.\ref{sec:Main-section-1:}. As anticipated in Sec.\ref{sec:Main-section-1:}, the coefficient $c$ remains quantized away from the relativistic limit, while $\beta$, or $F_{1}'$, does not. Finally, we note that our perturbative computation of the gCS term generalizes the computations of Refs.\cite{goni1986massless,van1986topological,vuorio1986parity,vuorio1986parityErr,kurkov2018gravitational} and Appendix \ref{subsec:Perturbative-calculation-of} for relativistic fermions, and reduces to these as $\kappa\rightarrow0$. \end{enumerate} \subsection{Induced action and linear response\label{sec: induced action and linear response}} By expanding $S_{\text{eff}}$ to second order in the fields $\theta,A_{t}-\mu,A_{i},u_{ij}$, and performing Gaussian integration over $\theta$, we obtain an induced action $S_{\text{ind}}\left[A_{\mu},u_{ij}\right]$ that captures the linear response of CSFs to the background fields, see Appendix \ref{subsec:Obtaining--from} for explicit expressions.
Taking functional derivatives one obtains the expectation values $J^{\mu}=G^{-1/2}\delta S_{\text{ind}}/\delta A_{\mu}$, $T^{ij}=G^{-1/2}\delta S_{\text{ind}}/\delta u_{ij}$ of the current and stress, and from them the conductivity $\sigma^{ij}=\delta J^{i}/\delta E_{j}$, the viscosity $\eta^{ij,kl}=\delta T^{ij}/\delta\partial_{t}u_{kl}$, and the mixed response function $\kappa^{ij,k}=\delta T^{ij}/\delta E_{k}=\delta J^{k}/\delta\partial_{t}u_{ij}$. We will also need the static susceptibilities $\chi_{JJ}^{\mu,\nu},\;\chi_{TJ}^{ij,\nu}$, defined by restricting to time independent $A_{\mu},u_{ij}$, and computing $\delta J^{\mu}/\delta A_{\nu}$ and $\delta J^{\nu}/\delta u_{ij}$, respectively. Before we compute $\eta_{\text{o}}$, it is useful to restrict its form based on dimensionality and symmetries: space-time translations and spatial rotations, as well as $PT$. The analysis is performed in Appendices \ref{subsec: B.1}-\ref{subsec: B.4}, and results in the expression \begin{align} \eta_{\text{o}}\left(\omega,\mathbf{q}\right)= & \eta_{\text{o}}^{\left(1\right)}\sigma^{xz}+\eta_{\text{o}}^{\left(2\right)}\left[\left(q_{x}^{2}-q_{y}^{2}\right)\sigma^{0x}-2q_{x}q_{y}\sigma^{0z}\right],\label{eq:3-3-2-1-1} \end{align} which is written in a basis of anti-symmetrized tensor products of the symmetric Pauli matrices, $\sigma^{ab}=2\sigma^{[a}\otimes\sigma^{b]}$ \citep{Avron1998}. As components of the strain tensor, the matrices $\sigma^{x},\sigma^{z}$ correspond to shears, while the identity matrix $\sigma^{0}$ corresponds to a dilatation. The details of the system are encoded in two independent coefficients $\eta_{\text{o}}^{\left(1\right)},\eta_{\text{o}}^{\left(2\right)}\in\mathbb{C}$, which are themselves arbitrary functions of $\omega,q^{2}$. 
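The tensor structure \eqref{eq:3-3-2-1-1} is easy to realize concretely; the following numpy sketch builds the basis $\sigma^{ab}$ and verifies the defining symmetries of the odd viscosity (the numerical coefficient values are arbitrary placeholders):

```python
import numpy as np

# Symmetric Pauli matrices in the strain basis: sigma^0 (dilatation),
# sigma^x and sigma^z (the two shears)
s0 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def asym(a, b):
    """sigma^{ab} = 2 sigma^{[a} (x) sigma^{b]} as a rank-4 tensor eta^{ij,kl}."""
    return np.einsum('ij,kl->ijkl', a, b) - np.einsum('ij,kl->ijkl', b, a)

def eta_odd(e1, e2, qx, qy):
    """Odd viscosity tensor of Eq. (3-3-2-1-1); e1, e2 play eta_o^(1), eta_o^(2)."""
    return e1*asym(sx, sz) + e2*((qx**2 - qy**2)*asym(s0, sx)
                                 - 2*qx*qy*asym(s0, sz))

eta = eta_odd(e1=0.7, e2=0.3, qx=1.1, qy=0.4)        # placeholder values
assert np.allclose(eta, -eta.transpose(2, 3, 0, 1))  # odd: antisymmetric (ij)<->(kl)
assert np.allclose(eta, eta.transpose(1, 0, 2, 3))   # symmetric within each pair
assert np.allclose(eta_odd(1., 0.5, 0., 0.), asym(sx, sz))  # q = 0: only sigma^{xz}
```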
In the case of uniform strain ($\mathbf{q}=\mathbf{0}$), the odd viscosity tensor reduces to a single component, $\eta_{\text{o}}\left(\omega,\mathbf{0}\right)=\eta_{\text{o}}^{\left(1\right)}\left(\omega\right)\sigma^{[x}\otimes\sigma^{z]}$, as is well known \citep{PhysRevLett.75.697,Avron1998,PhysRevB.86.245309,hoyos2014hall,PhysRevE.89.043019}. The additional component $\eta_{\text{o}}^{\left(2\right)}$ has not been discussed much in the literature \citep{abanov2014electromagnetic,hoeller2018second}, and also appears in the presence of vector, or pseudo-vector, anisotropy \citep{PhysRevB.99.045141,PhysRevB.99.035427}, in which case $\mathbf{q}$ should be replaced by a background vector $\mathbf{b}$. The expression \eqref{eq:3-3-2-1-1} applies at finite temperature, out of equilibrium, and in the presence of disorder that preserves the symmetries on average. For clean systems at zero temperature, $\eta_{\text{o}}^{\left(1\right)},\eta_{\text{o}}^{\left(2\right)}$ are both real, even functions of $\omega$. In gapped systems $\eta_{\text{o}}^{\left(1\right)},\eta_{\text{o}}^{\left(2\right)}$ will usually be regular at $\omega=0=q^{2}$, though an exception to this rule was recently found in Ref.\citep{10.21468/SciPostPhys.9.1.006}. 
For the CSF, we find the $\omega=0$ coefficients \begin{align} \eta_{\text{o}}^{\left(1\right)}\left(q^{2}\right)= & -\frac{1}{2}s_{\theta}n_{0}-\left(\frac{c}{24}\frac{1}{4\pi}+s_{\theta}C^{\left(1\right)}\right)q^{2}+O\left(q^{4}\right),\nonumber \\ \eta_{\text{o}}^{\left(2\right)}\left(q^{2}\right)= & \frac{1}{2}s_{\theta}n_{0}q^{-2}+\left(\frac{c}{24}\frac{1}{4\pi}+s_{\theta}C^{\left(2\right)}\right)+O\left(q^{2}\right),\label{eq:14-1} \end{align} where $C^{\left(1\right)},C^{\left(2\right)}\in\mathbb{R}$ are generically non-zero, and are given by particular linear combinations of the dimensionless coefficients $F_{1}'\left(\mu\right),mF_{2}\left(\mu\right),mF_{3}\left(\mu\right)$, defined in \eqref{eq:10-1-0}, see Appendix \ref{subsec:Obtaining--from} for more details. The leading term in $\eta_{\text{o}}^{\left(1\right)}$ is the familiar \eqref{eq:1-1}, which also appears in gapped states, while the non-analytic leading term in $\eta_{\text{o}}^{\left(2\right)}$ is possible because the superfluid is gapless, and does not contribute to the viscosity tensor when $q\rightarrow0$ at $\omega\neq0$ \citep{hoyos2014effective}. Both leading terms obey the same quantization condition due to SSB, can be used to extract $s_{\theta}$, and are independent of $c$. The sub-leading corrections to both $\eta_{\text{o}}^{\left(1\right)},\eta_{\text{o}}^{\left(2\right)}$ contain the quantized gCS contributions proportional to $c$, but also the non-universal coefficients $C^{\left(1\right)},C^{\left(2\right)}$. Thus the central charge cannot be extracted from a measurement of $\eta_{\text{o}}$ alone. 
Noting that the non-universal sub-leading corrections to $\eta_{\text{o}}$ originate from the geometric spin $s_{\theta}=\ell/2$ of the Goldstone field, one is naturally led to consider the g$s$-wave CSF, where $s_{\theta}=0$, and the odd viscosity is, to leading order in $q$, purely due to the gCS term \begin{align} \tilde{\eta}_{\text{o}}^{\left(1\right)}\left(q^{2}\right)= & -\frac{c}{24}\frac{1}{4\pi}q^{2}+O\left(q^{4}\right),\label{eq:18-1-1}\\ \tilde{\eta}_{\text{o}}^{\left(2\right)}\left(q^{2}\right)= & \frac{c}{24}\frac{1}{4\pi}+O\left(q^{2}\right).\nonumber \end{align} Here and below we use $O$ and $\tilde{O}$ for a quantity $O$ evaluated in the CSF and in the corresponding g$s$-wave CSF, respectively. Equation \eqref{eq:18-1-1} follows from \eqref{eq:14-1} by setting $s_{\theta}=0$, but can be understood directly from $S_{\text{eff}}$. Indeed, for the g$s$-wave CSF, $S_{\text{eff}}$ is identical to that of the conventional $s$-wave SF up to $O\left(p^{2}\right)$, but contains the additional $\mathcal{L}_{\text{gCS}}$ at $O\left(\omega q^{2}\right)$, which is the leading $P,T$-odd term, and produces the leading odd viscosity \eqref{eq:18-1-1}. Due to the LGS \eqref{eq:4-3-1-1}-\eqref{eq:5-3-1}, the viscosity \eqref{eq:18-1-1} implies also \begin{align} & \tilde{\chi}_{TJ,\text{o}}^{ij,k}=-\frac{i}{m}\frac{c}{48\pi}q_{\perp}^{i\vphantom{j}}q_{\perp}^{j}q_{\bot}^{k\vphantom{j}}+O\left(q^{4}\right),\label{eq:17-1} \end{align} where $q_{\perp}^{i}=\varepsilon^{ij}q_{j}$, and the subscript ``o'' (``e'') refers to the $P,T$-odd (even) part of an object, which is odd (even) in $\ell$. In particular, a steady $P,T$-odd current $\tilde{J}_{\text{o}}^{k}=-\frac{1}{m}\frac{c}{96\pi}\partial_{\perp}^{k}R+O\left(q^{4}\right)$ flows perpendicularly to gradients of curvature, which has the linearized form $R=-2\partial_{\perp}^{i}\partial_{\perp}^{j}u_{ij}$.
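The current-curvature relation quoted above follows from \eqref{eq:17-1} by a short Fourier-space computation, which can be checked symbolically (a sketch assuming the linear response convention $\tilde{J}_{\text{o}}^{k}=\tilde{\chi}_{TJ,\text{o}}^{ij,k}u_{ij}$):

```python
import sympy as sp

qx, qy, m, c = sp.symbols('q_x q_y m c')
I = sp.I
qp = [qy, -qx]                     # q_perp^i = eps^{ij} q_j
uxx, uxy, uyy = sp.symbols('u_xx u_xy u_yy')
u = [[uxx, uxy], [uxy, uyy]]

# Linearized curvature in Fourier space: R = -2 d_perp^i d_perp^j u_ij, d -> i q
R = -2*sum((I*qp[i])*(I*qp[j])*u[i][j] for i in range(2) for j in range(2))

# Current from the susceptibility (17-1): J^k = chi^{ij,k} u_ij
J = [sum(-(I/m)*(c/(48*sp.pi))*qp[i]*qp[j]*qp[k]*u[i][j]
         for i in range(2) for j in range(2)) for k in range(2)]

# Check J^k = -(1/m)(c/(96*pi)) d_perp^k R in Fourier space, d_perp^k -> i q_perp^k
for k in range(2):
    assert sp.expand(J[k] + (c/(96*sp.pi*m))*(I*qp[k])*R) == 0
```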
We stress that the fundamental relation is the momentum density $\tilde{P}_{\text{o}}^{k}=-\frac{c}{96\pi}\partial_{\perp}^{k}R+O\left(q^{4}\right)$, which follows from the odd viscosity \eqref{eq:18-1-1} along with momentum conservation, irrespective of Galilean symmetry. This relation was predicted for $p$-wave CSFs based on the relativistic limit, see Sec.\ref{subsubsec:Topological bulk responses from a gravitational Chern-Simons term} and \ref{sec:Conclusion-and-discussion}. We conclude that, in the g$s$-wave CSF, $c$ can be extracted from a measurement of $\tilde{\eta}_{\text{o}}$, and in the Galilean invariant case, also from a measurement of the current $\tilde{J}$ in response to (time-independent) strain. Though the simple results above do not apply to the physical system of interest, the CSF, there is a relation between the observables of the CSF and the corresponding g$s$-wave CSF, which we can utilize. At the level of induced actions it is given by the simple relation \begin{align} \tilde{S}_{\text{ind}}\left[A_{\mu},u_{ij}\right]=S_{\text{ind}}\left[A_{\mu}-\left(\ell/2\right)\omega_{\mu},u_{ij}\right],\label{eq:induced} \end{align} where $\omega_{\mu}$ is expressed through $u_{ij}$ as in Appendix \ref{sec:Geometric quantities}, and by taking functional derivatives one finds relations between response functions \citep{geracie2015spacetime}. In particular, \begin{align} \tilde{\eta}_{\text{o}}^{ij,kl}= & \eta_{\text{o}}^{ij,kl}-\frac{\ell}{4}n_{0}\left(\sigma^{xz}\right)^{ij,kl}+\frac{i\ell}{4}\left(\kappa_{\text{e}\vphantom{\bot}}^{ij,(k}q_{\perp}^{l)}-\kappa_{\text{e}\vphantom{\bot}}^{kl,(i}q_{\perp}^{j)}\right)+\frac{\ell^{2}}{16}\sigma_{\text{o}}q_{\perp}^{(i}\varepsilon_{\vphantom{\bot}}^{j)(k}q_{\perp}^{l)},\label{eq:16-2-1} \end{align} where the response functions $\eta_{\text{o}},\sigma_{\text{o}},\kappa_{\text{e}}$ depend on $\omega,\mathbf{q}$.
In a Galilean invariant system one further has \begin{align} \tilde{\chi}_{TJ,\text{o}}^{ij,k}= & \chi_{TJ,\text{o}}^{ij,k}-\frac{\ell}{4m}\chi_{TJ,\text{e}}^{ij,t}iq_{\bot}^{k}+\frac{\ell}{2}iq_{\bot}^{(i}\chi_{JJ,\text{e}}^{j),k}+\frac{\ell^{2}}{8m}q_{\bot}^{(i}\chi_{JJ,\text{o}}^{j),t}q_{\bot}^{k},\label{eq:21-1} \end{align} and we note the relations $\chi_{TJ,\text{e}}^{ij,t}=\kappa_{\text{e}}^{ij,k}iq_{k},\;\chi_{JJ,\text{o}}^{j,t}=\sigma_{\text{o}}q_{\bot}^{j},\;\chi_{JJ,\text{e}}^{j,k}=\rho_{\text{e}}q_{\bot}^{j}q_{\bot}^{k}$, between the above susceptibilities, the response functions $\kappa_{\text{e}},\sigma_{\text{o}}$, and the London diamagnetic response $\rho_{\text{e}}$. Though the above expressions are a mouthful, they correspond to the simple subtraction of an angular momentum $\ell/2$ per fermion, as expressed by Eq.\eqref{eq:induced}. \subsection{Discussion\label{sec:discussion}} Equations \eqref{eq:18-1-1} and \eqref{eq:16-2-1} are the main results of this Section. They rely on the SSB pattern \eqref{eq:2-1-1}, but not on Galilean symmetry. Equation \eqref{eq:16-2-1} expresses $\tilde{\eta}_{\text{o}}$ as a bulk observable of CSFs, which we refer to as the \textit{improved odd viscosity}. According to \eqref{eq:18-1-1}, the leading term in the expansion of $\tilde{\eta}_{\text{o}}\left(0,\mathbf{q}\right)$ around $\mathbf{q}=\mathbf{0}$ is fixed by $c$. Since this leading term occurs at second order in $\mathbf{q}$, in order to extract $c$ one needs to measure $\sigma_{\text{o}},\kappa_{\text{e}}$, and $\eta_{\text{o}}$, at zeroth, first, and second order, respectively. In a Galilean invariant system, \eqref{eq:18-1-1} and \eqref{eq:16-2-1} imply \eqref{eq:17-1} and \eqref{eq:21-1} respectively, which, in turn, show that $c$ can be extracted in an experiment where $U\left(1\right)_{N}$ fields and strain are applied, and the resulting number current and density are measured. In particular, a measurement of the stress tensor is not required.
Since $U\left(1\right)_{N}$ fields can be applied in Galilean invariant fluids by tilting and rotating the sample \citep{Viefers_2008}, we believe that a bulk measurement of the boundary central charge, through \eqref{eq:17-1} and \eqref{eq:21-1}, is within reach of existing experimental techniques \citep{PhysRevLett.109.215301,Levitin841,Ikegami59,Zhelev_2017}. The problem of obtaining $c$ from a bulk observable has been previously studied in QH states, described by \eqref{eq:10-0}-\eqref{eq:11} \citep{ferrari2014fqhe,can2014fractional,abanov2014electromagnetic,gromov2014density,gromov2015framing,can2015geometry,klevtsov2015geometric,bradlyn2015topological,Klevtsov_2016,gromov2016boundary,klevtsov2017laughlin,Cappelli_2018}. It was found that $c$ can only be extracted if $\text{var}s=\overline{s^{2}}-\overline{s}^{2}=0$, as in Laughlin states, or in a single filled Landau level. Under this condition, the response to strain, at fixed $A_{\mu}-\overline{s}\omega_{\mu}$, depends purely on $c$ \citep{bradlyn2015topological,gromov2016boundary} - a useful theoretical characterization, which seems challenging experimentally. However, the improved odd viscosity \eqref{eq:16-2-1}, constructed here, applies also to $\text{var}s=0$ QH states, with $\ell$ replaced by $-2\overline{s}$, and defines a concrete bulk observable which is precisely quantized, and determined by $c$. \pagebreak{} \section{Intrinsic sign problem in chiral topological matter\label{sec:Main-section-3:}} The question of intrinsic Monte Carlo sign problems was motivated in Sec.\ref{subsec:Complexity-of-simulating}, where we also stated and discussed the criterion $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, which we obtain for intrinsic sign problems in chiral topological phases. Here we precisely state and derive the results that hold under this criterion.
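Given the TFT data $\left(c,\left\{ \theta_{a}\right\} \right)$ of a phase, the criterion is straightforward to evaluate; as an assumed example we use the standard Ising TQFT data, $c=1/2$ with topological spins $\left\{ 1,e^{i\pi/8},-1\right\} $:

```python
import cmath

def intrinsic_sign_problem(c, topological_spins, tol=1e-12):
    """Criterion of this section: an intrinsic sign problem is implied when
    exp(2*pi*i*c/24) is NOT among the topological spins {theta_a}."""
    phase = cmath.exp(2j*cmath.pi*c/24)
    return all(abs(phase - theta) > tol for theta in topological_spins)

# Assumed example data: the Ising TQFT, anyons {1, sigma, psi} with
# c = 1/2 and topological spins {1, e^{i pi/8}, -1}
ising_spins = [1, cmath.exp(1j*cmath.pi/8), -1]
assert intrinsic_sign_problem(1/2, ising_spins)

# A phase with c = 0 and only the trivial spin does not satisfy the criterion
assert not intrinsic_sign_problem(0, [1])
```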
In Sec.\ref{sec:Signs-from-geometric} we discuss the universal finite-size correction to the boundary momentum density in chiral topological phases, arriving at the 'momentum polarization' Eq.\eqref{eq:12-3-1}. Relying on the above, Sec.\ref{sec:No-stoquastic-Hamiltonians} then obtains Result \hyperref[Result 1]{1} - an intrinsic sign problem in bosonic chiral topological matter. In Sec.\ref{sec:Spontaneous-chirality} we perform a similar analysis for the case where chirality (or time reversal symmetry breaking), appears spontaneously rather than explicitly, arriving at Result \hyperref[Result 2]{2}. We then turn to fermionic systems. In Sec.\ref{sec:Determinantal-quantum-Monte} we develop a formalism which unifies and generalizes the currently used DQMC algorithms, and the corresponding design principles. In Sec.\ref{sec:No-sign-free-DQMC}, we obtain within this formalism Result \hyperref[Result 1F]{1F} and Result \hyperref[Result 2F]{2F}, the fermionic analogs of Results \hyperref[Result 1]{1} and \hyperref[Result 2]{2}. Section \ref{sec:Generalization-and-extension} describes a conjectured extension of our results that applies beyond chiral phases, and unifies them with the intrinsic sign problems found in our parallel work \citep{PhysRevResearch.2.033515}. In Sec.\ref{sec:Discussion-and-outlook} we discuss our results and provide an outlook for future work. 
\subsection{Signs from geometric manipulations \label{sec:Signs-from-geometric}} \subsubsection{Momentum finite-size correction\label{subsec:Boundary-finite-size}} In analogy with the $T>0$ correction to the energy current in Eq.\eqref{eq:1-2}, the boundary of a chiral topological phase, described by a chiral CFT, also supports a non-vanishing ground state (or $T=0$) momentum density $p$, which receives a universal correction on a cylinder with finite circumference $L<\infty$, \begin{align} & p\left(L\right)=p\left(\infty\right)+\frac{2\pi}{L^{2}}\left(h_{0}-\frac{c}{24}\right).\label{eq:2-2} \end{align} Equation \eqref{eq:2-2} is the main property of chiral topological matter that we use below, so we discuss it in detail. The appearance of the chiral central charge is a manifestation of the global gravitational anomaly, as explained in Sec.\ref{subsec:Geometric-physics-in}. The rational number $h_{0}$ is a \textit{chiral} conformal weight from the boundary CFT. Like the chiral central charge, $h_{0}$ takes opposite values on the two boundary components of the cylinder, see Fig.\ref{fig:Chiral-topological-phases}. From the bulk TFT perspective, $h_{0}$ corresponds to the topological spin of an anyon quasi-particle, defined by the phase $\theta_{0}=e^{2\pi ih_{0}}$ accumulated as the anyon undergoes a $2\pi$ rotation \citep{kitaev2006anyons}. The set $\left\{ \theta_{a}\right\} _{a=1}^{N}$ of topological spins of anyons is associated with the $N$-dimensional ground state subspace on the torus, and the unique $\theta_{0}=e^{2\pi ih_{0}}$ defined by \eqref{eq:2-2} corresponds to the generically unique ground state on the cylinder, with a finite-size energy separation $\sim1/L$ from the low lying excited states, see Appendix \ref{subsec:Further-details-regarding}. As the equation $\theta_{0}=e^{2\pi ih_{0}}$ suggests, only $h_{0}\mod1$ is universal for a topological phase.
The integer part of $h_{0}$ can change as the Hamiltonian is deformed on the cylinder, while maintaining the bulk gap, and even as a function of $L$ for a fixed Hamiltonian. Additionally, the choice of $\theta_{0}$ from the set $\left\{ \theta_{a}\right\} $ is non-universal, and can change due to bulk gap preserving deformations, or as a function of $L$. Both types of discontinuous jumps in $h_{0}$ may be accompanied by an accidental degeneracy of the ground state on the cylinder. Therefore, the universal and $L$-independent statement regarding $h_{0}$ is that, apart from accidental degeneracies, $e^{2\pi ih_{0}}=\theta_{0}\in\left\{ \theta_{a}\right\} $ - a fact that will be important in our analysis. The non-trivial behavior of $h_{0}$ described above appears when the boundary corresponds to a non-conformal deformation of a CFT, by, e.g., a chemical potential. As demonstrated analytically and numerically in Appendix \ref{subsec:Beyond-the-assumption}, such behavior appears already in the simple context of Chern insulators with non-zero Fermi momenta, as would be the case in Fig.\ref{fig:Chiral-topological-phases}(b) if the chemical potential $\mu$ is either raised or lowered. \subsubsection{Momentum polarization\label{subsec:Momentum-polarization}} In this section we describe a procedure for the extraction of $h_{0}-c/24$ in Eq.\eqref{eq:2-2}, given a lattice Hamiltonian on the cylinder. Since the two boundary components carry opposite momentum densities, the ground state on the cylinder does not carry a total momentum, only a 'momentum polarization'. It is therefore clear that some sort of one-sided translation will be required. \begin{figure}[!th] \begin{centering} \includegraphics[width=0.6\columnwidth]{Defect.pdf} \par\end{centering} \caption{Momentum polarization. (a) Hamiltonian, or spatial, point of view. The operator $T_{R}$ translates the right half of the cylinder by one unit cell, a distance $a$, in the $x$ direction.
It acts as the identity on the left boundary component, and as a translation on the right boundary component. The object $\tilde{Z}/Z$ is the thermal expectation value of $T_{R}$. (b) Field theory, or space-time, point of view. The object $\tilde{Z}$ is the partition function on a space-time carrying a screw dislocation. The space-time region occupied by the boundary components of the spatial cylinder is colored in orange. The screw dislocation can be described as an additional boundary component, on which $T_{R}$ acts as a translation, with a high effective temperature $1/\beta_{*}$. \label{fig:Defect} } \end{figure} Following Ref.\citep{PhysRevB.88.195412}, we define $\tilde{Z}:=\text{Tr}\left(T_{R}e^{-\beta H}\right)$, which is related to the usual partition function $Z=\text{Tr}\left(e^{-\beta H}\right)$ ($\beta=1/T$), by the insertion of the operator $T_{R}$, which translates the right half of the cylinder by one unit cell in the periodic $x$ direction, see Fig.\ref{fig:Defect}(a). The object $\tilde{Z}$ satisfies \begin{align} \tilde{Z} & =Z\exp\left[\alpha N_{x}+\frac{2\pi i}{N_{x}}\left(h_{0}-\frac{c}{24}\right)+o\left(N_{x}^{-1}\right)\right],\label{eq:12-3-1} \end{align} where $N_{x}$ is the number of sites in the $x$ direction, $\alpha\in\mathbb{C}$ is non-universal and has a negative real part, and $o\left(N_{x}^{-1}\right)$ indicates corrections that decay faster than $N_{x}^{-1}$ as $N_{x}\rightarrow\infty$. The above expression is valid at temperatures low compared to the finite-size energy differences on the boundary, $\beta^{-1}=o\left(N_{x}^{-1}\right)$, see Appendix \ref{subsec:Further-details-regarding}. 
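In practice, given numerically computed values of $\tilde{Z}/Z$ at a sequence of circumferences $N_{x}$, the universal coefficient $h_{0}-c/24$ can be extracted by a least-squares fit of $\log(\tilde{Z}/Z)$ to the form \eqref{eq:12-3-1}. The following is a minimal sketch of such a fit; the numerical values of $\alpha$ and of the universal coefficient are hypothetical, used only to generate synthetic data, and are not taken from any particular model.

```python
import numpy as np

# Hypothetical inputs used only to generate synthetic data: a
# non-universal alpha with Re(alpha) < 0, and a universal coefficient
# standing in for h0 - c/24.
alpha = -0.30 + 0.05j
universal = 0.21

Nx = np.arange(8, 40, 2, dtype=float)
log_ratio = alpha * Nx + 2j * np.pi * universal / Nx   # log(Ztilde/Z)

# Least-squares fit of log(Ztilde/Z) against the basis {Nx, 1/Nx}.
A = np.stack([Nx, 1.0 / Nx], axis=1).astype(complex)
coeffs, *_ = np.linalg.lstsq(A, log_ratio, rcond=None)

alpha_fit = coeffs[0]
universal_fit = coeffs[1].imag / (2 * np.pi)   # recovers h0 - c/24
```

In a real DQMC or exact-diagonalization setting the input would be measured values of $\tilde{Z}/Z$ rather than synthetic ones, and the quality of the fit would itself test the $o(N_{x}^{-1})$ form of the corrections.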
Equation \eqref{eq:12-3-1} follows analytically from the low energy description of chiral topological matter in terms of chiral TFT and CFT \citep{PhysRevB.88.195412}, and was numerically scrutinized in a large number of examples in Refs.\citep{PhysRevB.88.195412,PhysRevLett.110.236801,PhysRevB.90.045123,PhysRevB.90.115133,PhysRevB.92.165127}, as well as in Appendix \ref{subsec:Beyond-the-assumption}. Nevertheless, we are not aware of a rigorous proof of Eq.\eqref{eq:12-3-1} for gapped lattice Hamiltonians. Therefore, in stating our results we will use the assumption 'the Hamiltonian $H$ is in a chiral topological phase of matter', the content of which is that $H$ admits a low energy description in terms of a chiral TFT with chiral central charge $c$ and topological spins $\left\{ \theta_{a}\right\} $, and in particular, Eq.\eqref{eq:12-3-1} holds for any bulk-gap preserving deformation of $H$, with $e^{2\pi ih_{0}}\in\left\{ \theta_{a}\right\} $ (apart from accidental degeneracies on the cylinder, as explained below Eq.\eqref{eq:2-2}). In the remainder of this section we further discuss the content of Eq.\eqref{eq:12-3-1} and its expected range of validity, in light of the Hamiltonian and space-time interpretations of $\tilde{Z}$. From a Hamiltonian perspective, $\tilde{Z}/Z$ is the thermal expectation value of $T_{R}$, evaluated at a temperature $\beta^{-1}$ low enough to isolate the ground state. The exponential decay expressed in Eq.\eqref{eq:12-3-1} appears because $T_{R}$ is not a symmetry of $H$, and $-\text{Re}\left(\alpha\right)$ can be understood as the energy density of the line defect where $T_{R}$ is discontinuous, see Fig.\ref{fig:Defect}(a). In fact, we expect Eq.\eqref{eq:12-3-1} to hold irrespective of whether the \textit{uniform} translation is a symmetry of $H$, or of the underlying 'lattice' on which $H$ is defined, which may be any polygonalization of the cylinder (see Ref.\citep{PhysRevLett.115.036802} for a similar scenario). 
The only expected requirement is that the low energy description of $H$ is homogeneous. Furthermore, if Eq.\eqref{eq:12-3-1} only holds after disorder averaging of $\tilde{Z}/Z$, our results and derivations in the following sections remain unchanged. There is also a simple space-time interpretation of $\tilde{Z}$, which will be useful in the context of DQMC. The usual partition function $Z=\text{Tr}\left(e^{-\beta H}\right)$ has a functional integral representation in terms of bosonic fields $\phi$ (fermionic fields $\psi$) defined on space, the cylinder $C$ in our case, and the imaginary time circle $S_{\beta}^{1}=\mathbb{R}/\beta\mathbb{Z}$, with periodic (anti-periodic) boundary conditions \citep{altland2010condensed}. In $\tilde{Z}=\text{Tr}\left(T_{R}e^{-\beta H}\right)$, the insertion of $T_{R}$ produces a twisting of the boundary conditions of $\phi,\psi$ in the time direction, such that $\tilde{Z}$ is the partition function on a space-time carrying a screw dislocation, see Fig.\ref{fig:Defect}(b). The above interpretation of $\tilde{Z}$, supplemented by Eq.\eqref{eq:2-2}, allows for an intuitive explanation of Eq.\eqref{eq:12-3-1}, which loosely follows its analytic derivation \citep{PhysRevB.88.195412}. As seen in Fig.\ref{fig:Defect}(b), the line where $T_{R}$ is discontinuous can be interpreted as an additional boundary component at a high effective temperature, $\beta_{*}\ll L/v$. Since the effective temperature is much larger than the finite-size energy differences $2\pi v/L$ on the boundary CFT, the screw dislocation contributes no finite-size corrections to $\tilde{Z}$. This leaves only the contribution of the boundary component on the right side of the cylinder, where $T_{R}$ produces the phase $e^{iaLp\left(L\right)}$, assuming $\beta_{*}\ll L/v\ll\beta$. Equation \eqref{eq:2-2} then leads to the universal finite-size correction $\left(2\pi i/N_{x}\right)\left(h_{0}-c/24\right)$. 
\subsection{Excluding stoquastic Hamiltonians for chiral topological matter\label{sec:No-stoquastic-Hamiltonians}} In this section we consider bosonic (or 'qudit', or spin) systems, and a single design principle - existence of a local basis in which the many-body Hamiltonian is stoquastic. A sketch of the derivation of Result \hyperref[Result 1]{1} is that the momentum polarization $\tilde{Z}$ is positive for Hamiltonians $H'$ which are stoquastic in an on-site and homogeneous basis, and this implies that $\theta_{0}=e^{2\pi ic/24}$ for any Hamiltonian $H$ obtained from $H'$ by conjugation with a local unitary. \subsubsection{Setup\label{subsec:Setup}} The many body Hilbert space is given by $\mathcal{H}=\otimes_{\mathbf{x}\in X}\mathcal{H}_{\mathbf{x}}$, where the tensor product runs over the sites $\mathbf{x}=\left(x,y\right)$ of a 2-dimensional lattice $X$, and $\mathcal{H}_{\mathbf{x}}$ are on-site 'qudit' Hilbert spaces of finite dimension $\mathsf{d}\in\mathbb{N}$. With finite-size QMC simulations in mind, we consider a square lattice with spacing 1, $N_{x}\times N_{y}$ sites, and periodic boundary conditions, so that $X=\mathbb{Z}_{N_{x}}\times\mathbb{Z}_{N_{y}}$ is a discretization of the flat torus $\left(\mathbb{R}/N_{x}\mathbb{Z}\right)\times\left(\mathbb{R}/N_{y}\mathbb{Z}\right)$. Generalization to other 2-dimensional lattices is straightforward. On this Hilbert space a gapped $r$-local Hamiltonian $H=\sum_{\mathbf{x}}H_{\mathbf{x}}$ is assumed to be given. Here the terms $H_{\mathbf{x}}$ are supported within a range $r$ of $\mathbf{x}$ - they are defined on $\otimes_{\left|\mathbf{y}-\mathbf{x}\right|\leq r}\mathcal{H}_{\mathbf{y}}$ and act as the identity on all other qudits. 
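The statement that each term $H_{\mathbf{x}}$ acts as the identity outside a range $r$ of $\mathbf{x}$ can be made concrete in code. The following Python sketch embeds a two-site term into the full many-qudit Hilbert space on a small one-dimensional slice of the lattice; the sizes, the qubit dimension $\mathsf{d}=2$, and the particular Ising-like term are all illustrative choices, not taken from the text.

```python
import numpy as np

d, N = 2, 6                      # toy qudit dimension and number of sites
Z = np.diag([1.0, -1.0])
ZZ = -np.kron(Z, Z)              # a sample two-site term H_x

def embed(term, i):
    """Embed `term`, acting on sites i and i+1, into the d**N-dimensional
    space, acting as the identity on all other qudits."""
    return np.kron(np.kron(np.eye(d**i), term), np.eye(d**(N - i - 2)))

# An r-local Hamiltonian as a sum of embedded nearest-neighbor terms.
H = sum(embed(ZZ, i) for i in range(N - 1))
```

Each summand is a $d^{N}\times d^{N}$ matrix, but it acts non-trivially only on the two sites it is attached to; this is the finite-dimensional analog of the locality structure assumed for $H=\sum_{\mathbf{x}}H_{\mathbf{x}}$.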
Fix a tensor product basis $\ket s=\otimes_{\mathbf{x}\in X}\ket{s_{\mathbf{x}}}$, labeled by strings $s=\left(s_{\mathbf{x}}\right)_{\mathbf{x}\in X}$, where $s_{\mathbf{x}}\in\left\{ 1,\cdots,\mathsf{d}\right\} $ labels a basis $\ket{s_{\mathbf{x}}}$ for $\mathcal{H}_{\mathbf{x}}$. For any vector $\mathbf{d}\in X$, the corresponding translation operator $T^{\mathbf{d}}$ is defined in this basis, $T^{\mathbf{d}}\ket s=\ket{t^{\mathbf{d}}s}$, with $\left(t^{\mathbf{d}}s\right)_{\mathbf{x}}=s_{\mathbf{x}+\mathbf{d}}$. These statements assert that $\ket s$ is both an on-site and a homogeneous basis, or \textit{on-site homogeneous} for short. Note that $T^{\mathbf{d}}$ acts as a permutation matrix on the $\ket s$s, and in particular, has non-negative matrix elements in this basis. In accordance with Sec.\ref{subsec:Momentum-polarization}, we assume that the low energy description of $H$ is invariant under $T^{\mathbf{d}}$, as defined above. In doing so, we exclude the possibility of generic background gauge fields for any on-site symmetry that $H$ may possess, which is beyond the scope of this thesis. Nevertheless, commonly used background gauge fields, such as those corresponding to uniform magnetic fields with rational flux per plaquette, can easily be incorporated into our analysis, by restricting to translation vectors $\mathbf{d}$ in a sub-lattice of $X$. A restriction to sub-lattice translations can also be used to guarantee that $T^{\mathbf{d}}$ acts purely as a translation in the low energy TQFT description. In particular, a lattice translation may permute the anyon types $a$ \footnote{We thank Michael Levin for pointing out this phenomenon.}. Since the number of anyons is finite, restricting to large enough translations will eliminate this effect. 
An example is given by Wen's plaquette model, where different anyons are localized on the even/odd sites of a bipartite lattice \citep{PhysRevB.87.184402}, and a restriction to translations that map the even (odd) sites to themselves will be made. Finally, we assume that $H$ is \textit{locally stoquastic}: it is term-wise stoquastic in a local basis. This means that a local unitary operator $U$ exists, such that the conjugated Hamiltonian $H'=UHU^{\dagger}$ is a sum of local terms $H_{\mathbf{x}}'=UH_{\mathbf{x}}U^{\dagger}$, which have non-positive matrix elements in the on-site homogeneous basis, $\bra sH_{\mathbf{x}}'\ket{\tilde{s}}\leq0$ for all basis states $\ket s,\ket{\tilde{s}}$. Note that we include the diagonal matrix elements in the definition, without loss of generality. The term \textit{local unitary} used above refers to a depth-$D$ quantum circuit, a product $U=U_{D}\cdots U_{1}$ where each $U_{i}$ is itself a product of unitary operators with non-overlapping supports of diameter $w$. It follows that $H'$ has a range $r'=r+2r_{U}$, where $r_{U}=Dw$ is the range of $U$. Equivalently, we may take $U$ to be a finite-time evolution with respect to an $\tilde{r}$-local, smoothly time-dependent, Hamiltonian $\tilde{H}\left(t\right)$, given by the time-ordered exponential $U=\text{TO}e^{-i\int_{0}^{1}\tilde{H}\left(t\right)dt}$. The two types of locality requirements are equivalent, as finite-time evolutions can be efficiently approximated by finite-depth circuits, while finite-depth circuits can be written as finite-time evolutions over time $D$ with piecewise constant $w$-local Hamiltonians \citep{Lloyd1073,zeng2019quantum}. \subsubsection{Constraining $c$ and $\left\{ \theta_{a}\right\} $} In order to discuss the momentum polarization, we need to map the stoquastic Hamiltonian $H'$ from the torus $X$ to a cylinder $C$. 
This is done by choosing a translation vector $\mathbf{d}\in X$, and then cutting the torus $X$ along a line $l$ parallel to $\mathbf{d}$. To simplify the presentation we restrict attention to the case $\mathbf{d}=\left(1,0\right)$, where $T^{\mathbf{d}}=T$ (and in the following $T_{R}^{\mathbf{d}}=T_{R}$). All other cases amount to a lattice-spacing redefinition, see Appendix \ref{subsec:Cutting-the-torus-2}. The cylinder $C=\mathbb{Z}_{N_{x}}\times\left\{ 1,\dots,N_{y}\right\} $ is then obtained from the torus $X=\mathbb{Z}_{N_{x}}\times\mathbb{Z}_{N_{y}}$ by cutting along the line $l=\left\{ \left(i,1/2\right):\;i\in\mathbb{Z}_{N_{x}}\right\} $. A stoquastic Hamiltonian on the cylinder can be obtained from that on the torus by removing all local terms $H'_{\mathbf{x}}$ whose support overlaps $l$, see Fig.\ref{fig:cutting}. Note that this procedure may render $H'$ acting as $0$ on certain qudits $\mathcal{H}_{\mathbf{x}}$ with $\mathbf{x}$ within a range $r'$ of $l$, but this poses no difficulty. Since all terms $H_{\mathbf{x}}'$ are individually stoquastic, this procedure leaves $H'$, now defined on the cylinder, stoquastic. One can similarly map $H$ and $U$ to the cylinder $C$ such that the relation $H'=UHU^{\dagger}$ remains valid on $C$. \begin{figure}[!th] \begin{centering} \includegraphics[width=0.3\columnwidth]{Cutting.pdf} \par\end{centering} \caption{Cutting the torus to a cylinder along the line $l$. Orange areas mark the supports of Hamiltonian terms $H_{\mathbf{x}}'$ which are removed from $H'$, while blue areas mark the supports of terms which are kept. \label{fig:cutting} } \end{figure} Let us now make contact with the momentum polarization Eq.\eqref{eq:12-3-1}. 
Having mapped $H'$ to the cylinder, we consider the 'partition function' \begin{align} \tilde{Z}' & :=\text{Tr}\left(e^{-\beta H'}T_{R}\right),\label{eq:4-1} \end{align} where $T_{R}$ is defined by $T_{R}\ket s=\ket{T_{R}s}$, $\left(T_{R}s\right)_{x,y}=s_{x+\Theta\left(y\right),y}$, and $\Theta$ is a Heaviside step function supported on the right half of the cylinder. Though $\tilde{Z}'$ is generally different from $\tilde{Z}=\text{Tr}\left(e^{-\beta H}T_{R}\right)$ appearing in Eq.\eqref{eq:12-3-1}, it satisfies two useful properties: \begin{enumerate} \item $\tilde{Z}'>0$. Both $-H'$ and $T_{R}$ have non-negative entries in the on-site basis $\ket s$, and therefore so does $e^{-\beta H'}T_{R}$. \item $H'=UHU^{\dagger}$ is in the same phase of matter as $H$, so $c'=c$ and $\left\{ \theta_{a}'\right\} =\left\{ \theta_{a}\right\} $. Moreover, $h_{0}'=h_{0}$ for all $N_{x}$. Treating $U$ as a finite time evolution, we have $H\left(\lambda\right)=U\left(\lambda\right)HU\left(\lambda\right)^{\dagger}$, where $U\left(\lambda\right):=\text{TO}e^{-i\int_{0}^{\lambda}\tilde{H}\left(t\right)dt}$, as a deformation from $H$ to $H'$ which maintains locality and preserves the bulk-gap. Moreover, the full spectrum on the cylinder is $\lambda$-independent, and therefore so is $h_{0}$. \end{enumerate} Combining Eq.\eqref{eq:12-3-1}, for $H'$ instead of $H$, with the two above properties leads to \begin{align} 1= & \tilde{Z}'/\left|\tilde{Z}'\right|\label{eq:6-1-0}\\ = & \exp2\pi i\left[\epsilon'N_{x}+\frac{1}{N_{x}}\left(h_{0}-\frac{c}{24}\right)+o\left(N_{x}^{-1}\right)\right],\nonumber \end{align} where $\epsilon':=\text{Im}\left(\alpha'\right)$ is generally different from $\epsilon=\text{Im}\left(\alpha\right)$ of Eq.\eqref{eq:12-3-1} since $H'\neq H$. 
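Property 1 is a purely linear-algebraic statement, and can be checked directly in a toy example. In the sketch below (Python), a random entrywise non-positive symmetric matrix is merely a stand-in for a stoquastic $H'$, and an arbitrary permutation matrix a stand-in for $T_{R}$; the sizes and the value of $\beta$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16

# A stand-in stoquastic Hamiltonian: real symmetric, entrywise <= 0
# (diagonal included, as in the definition in the text).
Hp = -rng.random((D, D))
Hp = (Hp + Hp.T) / 2

# A stand-in T_R: an arbitrary permutation matrix (non-negative entries).
TR = np.eye(D)[rng.permutation(D)]

# e^{-beta H'} = e^{beta(-H')} has non-negative entries, since every
# term of the exponential series of an entrywise non-negative matrix
# is entrywise non-negative; hence Tr(e^{-beta H'} T_R) > 0.
beta = 1.7
w, V = np.linalg.eigh(Hp)
exp_mbH = V @ np.diag(np.exp(-beta * w)) @ V.T
Ztilde = np.trace(exp_mbH @ TR)
```

The positivity of $\tilde{Z}'$ holds for any $\beta$ and any permutation, which is what makes it a robust constraint rather than a fine-tuned identity.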
The non-universal integer part of $h_{0}$ can then be eliminated by raising Eq.\eqref{eq:6-1-0} to the $N_{x}$th power, \begin{align} 1 & =e^{2\pi i\epsilon'N_{x}^{2}}\theta_{0}\left(N_{x}\right)e^{-2\pi ic/24}+o\left(1\right),\label{eq:6-1} \end{align} where we used $\theta_{0}=e^{2\pi ih_{0}}$, and $o\left(1\right)\rightarrow0$ as $N_{x}\rightarrow\infty$. We also indicated explicitly the possible $N_{x}$-dependence of $\theta_{0}$, as described in Sec.\ref{subsec:Boundary-finite-size}. We proceed under the assumption that no accidental degeneracies occur on the cylinder, so that $\theta_{0}\left(N_{x}\right)\in\left\{ \theta_{a}\right\} $ for all $N_{x}$, deferring the degenerate case to Appendix \ref{subsec:Dealing-with-accidental}. Now, for rational $\epsilon'=n/m$, the series $e^{2\pi i\epsilon'N_{x}^{2}}$ ($N_{x}\in\mathbb{N}$) covers periodically a subset $S$ of the $m$th roots of unity, including $1\in S$. On the other hand, for irrational $\epsilon'$ the series $e^{2\pi i\epsilon'N_{x}^{2}}$ is dense in the unit circle. Combined with the fact that $\theta_{0}\left(N_{x}\right)$ is valued in the finite set $\left\{ \theta_{a}\right\} $, while $c$ is $N_{x}$-independent, Eq.\eqref{eq:6-1} implies that $\epsilon'$ must be rational, and that the values attained by $\theta_{0}\left(N_{x}\right)e^{-2\pi ic/24}$ cover the set $S$ periodically, for large enough $N_{x}$. It follows that $1\in S\subset\left\{ \theta_{a}e^{-2\pi ic/24}\right\} $. We therefore have \begin{description} \item [{Result$\;$1\label{Result 1}}] If a local bosonic Hamiltonian $H$ is both locally stoquastic and in a chiral topological phase of matter, then one of the corresponding topological spins satisfies $\theta_{a}=e^{2\pi ic/24}$. Equivalently, a bosonic chiral topological phase of matter where $e^{2\pi ic/24}$ is not the topological spin of some anyon, i.e $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, admits no local Hamiltonians which are locally stoquastic. 
\end{description} The above result can be stated in terms of the topological $\mathbf{T}$-matrix, which is the representation of a Dehn twist on the torus ground state subspace, and has the spectrum $\text{Spec}\left(\mathbf{T}\right)=\left\{ \theta_{a}e^{-2\pi ic/24}\right\} _{a}$ \citep{doi:10.1142/S0129055X90000107,kitaev2006anyons,PhysRevLett.110.236801,PhysRevB.88.195412,PhysRevLett.110.067208,PhysRevB.91.125123}. \begin{description} \item [{Result$\;$1'\label{Result 1'}}] If a local bosonic Hamiltonian $H$ is both locally stoquastic and in a chiral topological phase of matter, then the corresponding $\mathbf{T}$-matrix satisfies $1\in\text{Spec}\left(\mathbf{T}\right)$. Equivalently, a bosonic chiral topological phase of matter where $1\notin\text{Spec}\left(\mathbf{T}\right)$, admits no local Hamiltonians which are locally stoquastic. \end{description} The above result is our main statement for bosonic phases of matter. The logic used in its derivation is extended in Sec.\ref{sec:Spontaneous-chirality}-\ref{sec:No-sign-free-DQMC}, where we generalize Result \hyperref[Result 1]{1} to systems which are fermionic, spontaneously-chiral, or both. \subsection{Spontaneous chirality\label{sec:Spontaneous-chirality}} The invariants $h_{0}$ and $c$ change sign under both time reversal $\mathcal{T}$ and parity (spatial reflection) $\mathcal{P}$, and therefore require a breaking of $\mathcal{T}$ and $\mathcal{P}$ down to $\mathcal{PT}$ to be non-vanishing. The momentum polarization Eq.\eqref{eq:12-3-1} is valid if this symmetry breaking is explicit, i.e $H$ does not commute with $\mathcal{P}$ and $\mathcal{T}$ separately. Here we consider the case where $H$ is $\mathcal{P},\mathcal{T}$-symmetric, but these are broken down to $\mathcal{PT}$ spontaneously, as in e.g intrinsic topological superfluids and superconductors \citep{volovik2009universe,PhysRevB.100.104512,rose2020hall}. 
We first generalize Eq.\eqref{eq:12-3-1} to this setting, and then use this generalization to obtain a spontaneously-chiral analog of Result \hyperref[Result 1]{1}. Note that the physical time-reversal $\mathcal{T}$ is an \textit{on-site} anti-unitary operator acting \textit{identically} on all qudits, which implies $\left[\mathcal{T},T_{R}\right]=0$, while $\mathcal{P}$ is a unitary operator that maps the qudit at $\mathbf{x}$ to that at $P\mathbf{x}$, where $P$ is the nontrivial element in $O\left(2\right)/SO\left(2\right)$, e.g $\left(x,y\right)\mapsto\left(-x,y\right)$. \subsubsection{Momentum polarization for spontaneously-chiral Hamiltonians \label{subsec:Momentum-polarization-for}} For simplicity, we begin by assuming that $H$ is 'classically symmetry breaking' - it has two exact ground states on the cylinder, already at finite system sizes. We therefore have two ground states $\ket{\pm}$, such that $\ket -$ is obtained from $\ket +$ by acting with either $\mathcal{T}$ or $\mathcal{P}$. In particular, $\ket{\pm}$ have opposite values of $h_{0}$ and $c$. The $\beta\rightarrow\infty$ density matrix is then $e^{-\beta H}/Z=\left(\rho_{+}+\rho_{-}\right)/2$, where $\rho_{\pm}=\ket{\pm}\bra{\pm}$, and this modifies the right hand side of Eq.\eqref{eq:12-3-1} to its real part, \begin{align} \tilde{Z}:= & \text{Tr}\left(T_{R}e^{-\beta H}\right)\label{eq:12-3-1-1}\\ = & Ze^{-\delta N_{x}}\cos2\pi\left[\epsilon N_{x}+\frac{1}{N_{x}}\left(h_{0}-\frac{c}{24}\right)+o\left(N_{x}^{-1}\right)\right],\nonumber \end{align} where $-\delta\pm2\pi i\epsilon$ are the values of the non-universal $\alpha$ obtained from Eq.\eqref{eq:12-3-1}, by replacing $e^{-\beta H}$ by $\rho_{\pm}$. Indeed, it follows from $\left[\mathcal{T},T_{R}\right]=0$ that if two density matrices are related by $\rho_{-}=\mathcal{T}\rho_{+}\mathcal{T}^{-1}$, then $\tilde{Z}_{\pm}:=Z\,\text{Tr}\left(T_{R}\rho_{\pm}\right)$ are complex conjugates, $\tilde{Z}_{-}=\tilde{Z}_{+}^{*}$. 
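The complex-conjugation property just used can be made concrete in a toy example: when $\mathcal{T}$ is represented by complex conjugation in the on-site basis, $\rho_{-}=\mathcal{T}\rho_{+}\mathcal{T}^{-1}=\overline{\rho_{+}}$, while $T_{R}$ is a real permutation matrix, so the two traces are conjugate. A minimal Python sketch, with a random pure state and a random permutation standing in for the actual ground states and half translation:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8

# A random pure state rho_+ and its time-reversed partner rho_-.
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)
rho_p = np.outer(psi, psi.conj())
rho_m = rho_p.conj()        # T rho_+ T^{-1}, with T = complex conjugation

# T_R is a permutation in the on-site basis, hence real: [T, T_R] = 0.
TR = np.eye(D)[rng.permutation(D)]

z_p = np.trace(TR @ rho_p)
z_m = np.trace(TR @ rho_m)  # equals conj(z_p), since TR is real
```

Averaging the two contributions therefore yields the real part, which is the origin of the cosine in Eq.\eqref{eq:12-3-1-1}.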
Now, for a generic symmetry breaking Hamiltonian $H$, exact ground state degeneracy happens only in the infinite volume limit \citep{sachdev_2011}. At finite size, the two lowest lying eigenvalues of $H$ would be separated by an exponentially small energy difference $\Delta E=O\left(e^{-fN_{x}^{\lambda}}\right)$, with some $f>0,\lambda>0$. The two corresponding eigenstates would be $\mathcal{T},\mathcal{P}$-even/odd, of the form $W\left[\ket +\pm\ket -\right]$, where $W$ is a $\mathcal{T},\mathcal{P}$-invariant local unitary \citep{zeng2019quantum}. One can think of these statements as resulting from the existence of a bulk-gap preserving and $\mathcal{T},\mathcal{P}$-symmetric deformation of $H$ to a 'classically symmetry breaking' Hamiltonian\footnote{The canonical example is the transverse field Ising model $H\left(g\right)=-\sum_{i=1}^{N_{x}}\left(Z_{i}Z_{i+1}+gX_{i}\right)$ in 1+1d. Exact ground state degeneracy appears at finite $N_{x}$ only for $g=0$, though spontaneous symmetry breaking occurs for all $\left|g\right|<1$, where a splitting $\sim\left|g\right|^{N_{x}}$ appears.}. In the generic setting, we have \begin{align} e^{-\beta H}/Z & =W\left(\rho_{+}+\rho_{-}\right)W^{\dagger}/2+O\left(\beta\Delta E\right), \end{align} and, following our treatment of the local unitary $U$ in the previous section, Equation \eqref{eq:12-3-1-1} remains valid, with modified $\delta,\epsilon$, but unchanged $h_{0}-c/24$. This statement holds for temperatures much higher than $\Delta E$ and much smaller than the CFT energy spacing, $\Delta E\ll\beta^{-1}\ll N_{x}^{-1}$, or more accurately $\beta^{-1}=o\left(N_{x}^{-1}\right)$ and $\beta\Delta E=o\left(N_{x}^{-1}\right)$ (cf. Sec.\ref{subsec:Momentum-polarization}). Note that the universal content of Eq.\eqref{eq:12-3-1-1} is the absolute value $\left|h_{0}-c/24\right|$, since the cosine is even and $\text{sgn}\left(\epsilon\right)$ is non-universal. 
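The exponentially small splitting invoked in the footnote can be verified by exact diagonalization of the transverse field Ising chain. The following Python sketch (system sizes and the value $g=0.4$ are illustrative) shows the splitting between the two lowest states shrinking by roughly a factor $g^{2}$ per added pair of sites, consistent with $\Delta E\sim\left|g\right|^{N_{x}}$:

```python
import numpy as np

def tfim_spectrum(N, g):
    """Dense ED of H(g) = -sum_i (Z_i Z_{i+1} + g X_i), periodic chain."""
    Z = np.diag([1.0, -1.0])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    def op(A, i):
        mats = [np.eye(2)] * N
        mats[i] = A
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= op(Z, i) @ op(Z, (i + 1) % N) + g * op(X, i)
    return np.linalg.eigvalsh(H)

g = 0.4
splittings = []
for N in (6, 8, 10):
    w = tfim_spectrum(N, g)
    splittings.append(w[1] - w[0])   # splitting of the two lowest states
```

The two nearly degenerate states sit well below the bulk excitation gap $\sim2(1-g)$, so the splitting is cleanly resolved even at these small sizes.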
\subsubsection{Constraining $c$ and $\left\{ \theta_{a}\right\} $ } Let us now assume that a gapped and local Hamiltonian $H$ is $\mathcal{T}$,$\mathcal{P}$-symmetric, and is locally stoquastic, due to a unitary $U$. It follows that $\tilde{Z}'=\text{Tr}\left(T_{R}e^{-\beta H'}\right)>0$, where $H'=UHU^{\dagger}$. If $U$ happens to be $\mathcal{T}$,$\mathcal{P}$-symmetric, then so is $H'$, and Eq.\eqref{eq:12-3-1-1} holds for $\tilde{Z}'$, with $\delta',\epsilon'$ in place of $\delta,\epsilon$. For a general $U$, we have \begin{align} e^{-\beta H'}/Z & '=UW\left(\rho_{+}+\rho_{-}\right)W^{\dagger}U^{\dagger}/2+O\left(\beta\Delta E\right), \end{align} where $UW$ need not be $\mathcal{T}$,$\mathcal{P}$-symmetric. As a result, $\tilde{Z}'$ satisfies a weaker form of Eq.\eqref{eq:12-3-1-1}, \begin{align} 0<\tilde{Z}'=\left(Z'/2\right) & \sum_{\sigma=\pm}e^{-\delta_{\sigma}'N_{x}}e^{2\pi i\sigma\left[\epsilon_{\sigma}'N_{x}+\frac{1}{N_{x}}\left(h_{0}-\frac{c}{24}\right)+o\left(N_{x}^{-1}\right)\right]},\label{eq:10-00} \end{align} where $\delta_{+}',\epsilon_{+}'$ may differ from $\delta_{-}',\epsilon_{-}'$, and we also indicated the positivity of $\tilde{Z}'$. Now, if $\delta_{+}'\neq\delta_{-}'$, one of the chiral contributions is exponentially suppressed relative to the other as $N_{x}\rightarrow\infty$, and we can apply the analysis of Sec.\ref{sec:No-stoquastic-Hamiltonians}. If $\delta_{+}'=\delta_{-}'$, we obtain \begin{align} 0< & \sum_{\sigma=\pm}\exp2\pi i\sigma\left[\epsilon_{\sigma}'N_{x}+\frac{1}{N_{x}}\left(h_{0}-\frac{c}{24}\right)+o\left(N_{x}^{-1}\right)\right],\label{eq:11-1-0} \end{align} in analogy with Eq.\eqref{eq:6-1-0}. Unlike Eq.\eqref{eq:6-1-0}, taking the $N_{x}$th power of this equation does not eliminate the mod 1 ambiguity in $h_{0}$. This corresponds to the fact that, as opposed to explicitly chiral systems, stacking copies of a spontaneously chiral system does not increase its net chirality. 
One can replace $T_{R}$ in $\tilde{Z}'$ with a larger half-translation $T_{R}^{m}$, which would multiply the argument of the cosine by $m$. However, since the largest translation on the cylinder is obtained for $m\approx N_{x}/2$, this does not eliminate the mod 1 ambiguity in $h_{0}$. Moreover, even if it so happens that $\epsilon'_{+}=\epsilon'_{-}=0$, Equation \eqref{eq:11-1-0} does not imply $h_{0}-c/24=0$ (mod 1), since the universal term in the argument of the cosine is suppressed by $1/N_{x}$. In order to make progress, we make use of the bagpipes construction illustrated in Fig.\ref{fig:bagpipes}. We attach $M$ identical cylinders, or 'pipes', to the given lattice, and act with $T_{R}$ on these cylinders. The global topology of the given lattice is unimportant - all that is needed is a large enough disk in which the construction can be applied. The construction does require some form of homogeneity in order to have a unique extension of the Hamiltonian $H'$ to the pipes, which will be identical for all pipes. We will assume a strict translation symmetry with respect to a sub-lattice, but we believe that this assumption can be relaxed. \begin{figure}[th] \begin{centering} \includegraphics[width=0.3\columnwidth]{Bagpipes2} \par\end{centering} \caption{Bagpipes construction. We attach $M$ identical cylinders, or 'pipes', to the given lattice, and define the half translation $T_{R}$ to act on their top halves, as indicated by blue arrows. The contributions of the pipes to the momentum polarization add, producing the factor $M$ in Eq.\eqref{eq:12-4}. \label{fig:bagpipes}} \end{figure} The resulting surface, shown in Fig.\ref{fig:bagpipes}, has negative curvature at the base of each pipe, which requires a finite number of lattice disclinations in this region. In order to avoid any possible ambiguity in the definition of $H'$ at a disclination, one can simply remove any local term $H_{\mathbf{x}}'$ whose support contains a disclination, which amounts to puncturing a hole around each disclination. 
The resulting boundary components do not contribute to the momentum polarization since $T_{R}$ acts on these as the identity. With the construction at hand, the identical contributions of all cylinders to $\tilde{Z}'$ add, which implies \begin{align} 0< & \sum_{\sigma=\pm}\exp2\pi i\sigma M\left[\epsilon_{\sigma}'N_{x}+\frac{1}{N_{x}}\left(h_{0}-\frac{c}{24}\right)+o\left(N_{x}^{-1}\right)\right].\label{eq:12-4} \end{align} Setting $M=N_{x}$ gives \begin{align} 0< & e^{2\pi i\epsilon_{+}'N_{x}^{2}}\theta_{0}\left(N_{x}\right)e^{-2\pi ic/24}\\ & +e^{-2\pi i\epsilon_{-}'N_{x}^{2}}\theta_{0}^{*}\left(N_{x}\right)e^{2\pi ic/24}+o\left(1\right),\nonumber \end{align} where we indicate explicitly the possible $N_{x}$-dependence of $\theta_{0}$. This is the spontaneously chiral analog of Eq.\eqref{eq:6-1}, and can be analyzed similarly. Since $\theta_{0}\left(N_{x}\right)$ is valued in the finite set $\left\{ \theta_{a}\right\} $, both $\epsilon'_{\pm}$ must be rational, $\epsilon'_{\pm}=n_{\pm}/m_{\pm}$. Restricting then to $N_{x}=n_{x}m_{+}m_{-}$, such that $e^{2\pi i\epsilon'_{\pm}N_{x}^{2}}=1$, and $\theta_{0}$ attains a constant value $\theta_{a}$ for large enough $n_{x}$, we have \begin{align} 0<\text{Re}\left(\theta_{a}e^{-2\pi ic/24}\right),\label{eq:14} \end{align} for some anyon $a$. Repeating the analysis with $k$ times more pipes $M=kN_{x}$, replaces $\theta_{a}e^{-2\pi ic/24}$ in Eq.\eqref{eq:14} with its $k$th power, for all $k\in\mathbb{N}$. This infinite set of equations then implies $\theta_{a}e^{-2\pi ic/24}=1$. To summarize, \begin{description} \item [{Result$\;$2\label{Result 2}}] If a local bosonic Hamiltonian $H$ is both locally stoquastic and in a spontaneously-chiral topological phase of matter, then one of the corresponding topological spins satisfies $\theta_{a}=e^{2\pi ic/24}$. 
Equivalently, a bosonic spontaneously-chiral topological phase of matter where $e^{2\pi ic/24}$ is not the topological spin of some anyon, i.e $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, admits no local Hamiltonians which are locally stoquastic. \end{description} This extends Result \hyperref[Result 1]{1} beyond explicitly-chiral Hamiltonians, and clarifies that the essence of the intrinsic sign problem we find is the macroscopic, physically observable, condition $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, as opposed to the microscopic absence (or presence) of time reversal and reflection symmetries. \subsection{DQMC: locality, homogeneity, and geometric manipulations\label{sec:Determinantal-quantum-Monte}} In order to obtain fermionic analogs of the bosonic results of the previous sections, we first need to establish a framework in which such results can be obtained. In this section we develop a formalism that unifies and generalizes the currently used DQMC algorithms and design principles, and implement within it the geometric manipulations used in previous sections, in a sign-free manner. Since we wish to treat the wide range of currently known DQMC algorithms and design principles on equal footing, the discussion will be more abstract than the simple setting of locally stoquastic Hamiltonians used above. In particular, Sections \ref{subsec:Local-determinantal-QMC}-\ref{subsec:Local-and--homogeneous} lead up to the definition of \textit{locally sign-free DQMC}, which is our fermionic analog of a locally stoquastic Hamiltonian. This definition is used later on in Sec.\ref{sec:No-sign-free-DQMC} to formulate Result \hyperref[Result 1F]{1F} and Result \hyperref[Result 2F]{2F}, the fermionic analogs of Results \hyperref[Result 1]{1} and \hyperref[Result 2]{2}. The new tools needed to establish these results are the sign-free geometric manipulations described in Sec.\ref{sec:Sign-free-geometric-manipulation}. 
\subsubsection{Local DQMC\label{subsec:Local-determinantal-QMC}} In the presence of bosons and fermions, the many-body Hilbert space is given by $\mathcal{H}=\mathcal{H}_{\text{F}}\otimes\mathcal{H}_{\text{B}}$, where $\mathcal{H}_{\text{F}}$ is a fermionic Fock space, equipped with an on-site occupation basis $\ket{\nu}_{\text{F}}=\prod_{\mathbf{x},\alpha}\left(f_{\mathbf{x},\alpha}^{\dagger}\right)^{\nu_{\mathbf{x},\alpha}}\ket 0_{\text{F}}$, $\nu_{\mathbf{x},\alpha}\in\left\{ 0,1\right\} $, generated by acting with fermionic (anti-commuting) creation operators $f_{\mathbf{x},\alpha}^{\dagger}$ on the Fock vacuum $\ket 0_{\text{F}}$. The product is taken with respect to a fixed ordering of fermion species $\alpha\in\left\{ 1,\cdots,\mathsf{d}_{\text{F}}\right\} $ and lattice sites $\mathbf{x}\in X$. We will also make use of the single-fermion space $\mathcal{H}_{1\text{F}}\cong\mathbb{C}^{\left|X\right|}\otimes\mathbb{C}^{\mathsf{d}_{\text{F}}}$, spanned by $\ket{\mathbf{x},\alpha}_{\text{F}}=f_{\mathbf{x},\alpha}^{\dagger}\ket 0_{\text{F}}$, where $\left|X\right|=N_{x}N_{y}$ is the system size. As in Sec.\ref{sec:No-stoquastic-Hamiltonians}, $\mathcal{H}_{\text{B}}$ is a many-qudit Hilbert space with local dimension $\mathsf{d}$. It can also be a bosonic Fock space where $\mathsf{d}=\infty$. We consider local fermion-boson Hamiltonians $H$, of the form \begin{align} H=\sum_{\mathbf{x},\mathbf{y}}f_{\mathbf{x}}^{\dagger}h_{0}^{\mathbf{x},\mathbf{y}}f_{\mathbf{y}}+H_{I},\label{eq:11-0} \end{align} where the free-fermion Hermitian matrix $h_{0}^{\mathbf{x},\mathbf{y}}$ is $r_{0}$-local: it vanishes unless $\left|\mathbf{x}-\mathbf{y}\right|\leq r_{0}$, and we suppress, here and in the following, the fermion species indices. 
The Hamiltonian $H_{I}$ describes all possible $r_{0}$-local interactions which preserve the fermion parity $\left(-1\right)^{N_{f}}$, where $N_{f}=\sum_{\mathbf{x}}f_{\mathbf{x}}^{\dagger}f_{\mathbf{x}}$, including fermion-independent terms $H_{\text{B}}$ as in Sec.\ref{sec:No-stoquastic-Hamiltonians}. Thus $H_{I}$ is of the form \begin{align} H_{I}= & H_{\text{B}}+\sum_{\mathbf{x},\mathbf{y}}f_{\mathbf{x}}^{\dagger}K_{\text{B}}^{\mathbf{x},\mathbf{y}}f_{\mathbf{y}}\label{eq:dots}\\ & +\sum_{\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{w}}f_{\mathbf{x}}^{\dagger}f_{\mathbf{y}}^{\dagger}V_{\text{B}}^{\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{w}}f_{\mathbf{z}}f_{\mathbf{w}}+\cdots,\nonumber \end{align} where $K_{\text{B}}^{\mathbf{x},\mathbf{y}}$ (for all $\mathbf{x},\mathbf{y}\in X$) is a local bosonic operator with range $r_{0}$, and vanishes unless $\left|\mathbf{x}-\mathbf{y}\right|\leq r_{0}$, and similarly for $V_{\text{B}}^{\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{w}}$, which vanishes unless $\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{w}$ are contained in a disk of radius $r_{0}$. In Eq.\eqref{eq:dots} the dots represent additional pairing terms of the form $ff$, $f^{\dagger}f^{\dagger}$, or $ffff$, $f^{\dagger}f^{\dagger}f^{\dagger}f^{\dagger}$, as well as terms with a higher number of fermions, all of which are $r_{0}$-local and preserve the fermion parity. Since locality is defined in terms of anti-commuting Fermi operators, a local stoquastic basis is not expected to exist, and accordingly, the sign problem appears in any QMC method in which the Boltzmann weights are given in terms of Hamiltonian matrix elements in a local basis \citep{troyer2005computational,li2019sign}. For this reason, the methods used to perform QMC in the presence of fermions are distinct from the ones used in their absence. 
These are collectively referred to as DQMC \citep{PhysRevD.24.2278,Assaad,Santos_2003,li2019sign,berg2019monte}, and lead to the imaginary time path integral representation of the partition function $Z=\text{Tr}\left(e^{-\beta H}\right)$, \begin{align} Z & =\int D\phi D\psi e^{-S_{\phi}-S_{\psi,\phi}}\label{eq:2}\\ & =\int D\phi e^{-S_{\phi}}\text{Det}\left(D_{\phi}\right)\nonumber \\ & =\int D\phi e^{-S_{\phi}}\text{Det}\left(I+U_{\phi}\right),\nonumber \end{align} involving a bosonic field $\phi$ with an action $S_{\phi}$, and a fermionic (Grassmann-valued) field $\psi$, with a quadratic action $S_{\psi,\phi}=\sum_{\mathbf{x},\mathbf{y}}\int\text{d}\tau\overline{\psi}_{\mathbf{x},\tau}\left[D_{\phi}\right]_{\mathbf{x},\mathbf{y}}\psi_{\mathbf{y},\tau}$ defined by the $\phi$-dependent single-fermion operator $D_{\phi}$. In the third line of Eq.\eqref{eq:2} we assumed the Hamiltonian form $D_{\phi}=\partial_{\tau}+h_{\phi\left(\tau\right)}$, and used a standard identity for the determinant in terms of the single-fermion imaginary-time evolution operator $U_{\phi}=\text{TO}e^{-\int_{0}^{\beta}h_{\phi\left(\tau\right)}\text{d}\tau}$ \citep{PhysRevD.24.2278}, where $\text{TO}$ denotes the time ordering. The field $\phi$ ($\psi$) is defined on a continuous imaginary-time circle $\tau\in\mathbb{R}/\beta\mathbb{Z}$, with periodic (anti-periodic) boundary conditions, and on the spatial lattice $X$. The second and third lines of Eq.\eqref{eq:2} define the Monte Carlo phase space $\left\{ \phi\right\} $ and Boltzmann weight \begin{align} p\left(\phi\right) & =e^{-S_{\phi}}\text{Det}\left(D_{\phi}\right)\label{eq:18-2}\\ & =e^{-S_{\phi}}\text{Det}\left(I+U_{\phi}\right).\nonumber \end{align} In applications, the DQMC representation \eqref{eq:2} may be obtained from the Hamiltonian $H$ in a number of ways.
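The determinant identity invoked in the third line of Eq.\eqref{eq:2} can be checked directly for a small, time-independent quadratic Hamiltonian, where it reads $\text{Tr}\left(e^{-\beta H}\right)=\text{Det}\left(I+e^{-\beta h}\right)$. The following Python sketch is a minimal illustration (a random Hermitian matrix stands in for the single-particle matrix; the variable names are ours): it builds the many-body Hamiltonian in the $2^{n}$-dimensional occupation basis, with Jordan-Wigner signs, and compares the Fock-space trace to the single-fermion determinant.

```python
import numpy as np

def many_body_hamiltonian(h):
    """Build H = sum_ij h_ij f_i^dag f_j in the 2^n occupation basis,
    with Jordan-Wigner signs implementing the fermionic anti-commutation."""
    n = h.shape[0]
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for s in range(dim):
        occ = [(s >> k) & 1 for k in range(n)]
        for j_mode in range(n):
            if not occ[j_mode]:
                continue                          # f_j annihilates |s>
            sign_j = (-1) ** sum(occ[:j_mode])
            occ1 = occ.copy()
            occ1[j_mode] = 0
            for i_mode in range(n):
                if occ1[i_mode]:
                    continue                      # f_i^dag annihilates
                sign_i = (-1) ** sum(occ1[:i_mode])
                occ2 = occ1.copy()
                occ2[i_mode] = 1
                t = sum(b << k for k, b in enumerate(occ2))
                H[t, s] += sign_i * sign_j * h[i_mode, j_mode]
    return H

rng = np.random.default_rng(0)
n, beta = 3, 1.3
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (a + a.conj().T) / 2      # random Hermitian single-particle matrix

# Many-body partition function: trace over the 2^n-dimensional Fock space
E = np.linalg.eigvalsh(many_body_hamiltonian(h))
Z_fock = np.sum(np.exp(-beta * E))

# Single-fermion determinant: Det(I + U) with U = e^{-beta h},
# evaluated via the eigenvalues of h
eps = np.linalg.eigvalsh(h)
Z_det = np.prod(1 + np.exp(-beta * eps))

assert np.isclose(Z_fock, Z_det)
```

For a time-dependent $h_{\phi\left(\tau\right)}$, $e^{-\beta h}$ is replaced by the time-ordered product $U_{\phi}$, and the same identity yields the fermionic weight $\text{Det}\left(I+U_{\phi}\right)$.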
If a Yukawa-type model is assumed as a starting point \citep{berg2019monte}, i.e.\ $H_{I}=H_{\text{B}}+\sum_{\mathbf{x},\mathbf{y}}f_{\mathbf{x}}^{\dagger}K_{\text{B}}^{\mathbf{x},\mathbf{y}}f_{\mathbf{y}}$, then the action $S_{\phi}$ is obtained from the Hamiltonian $H_{\text{B}}$, and $h_{\phi\left(\tau\right)}=h_{0}+K_{\text{B}}$. Alternatively, the representation \eqref{eq:2} may be obtained through a Hubbard-Stratonovich decoupling and/or a series expansion of fermionic self-interactions \citep{PhysRevLett.82.4155,chandrasekharan2013fermion,wang2015split}. Such is the case e.g.\ when there are no bosons, $\mathcal{H}=\mathcal{H}_{\text{F}}$, and $H_{I}=\sum_{\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{w}}f_{\mathbf{x}}^{\dagger}f_{\mathbf{y}}^{\dagger}V^{\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{w}}f_{\mathbf{z}}f_{\mathbf{w}}$. Note that for a given fermionic self-interaction, there are various possible DQMC representations, obtained e.g.\ via a Hubbard-Stratonovich decoupling in different channels. To take into account and generalize the above relations between $H$ and the corresponding DQMC representation, we will only assume (i) that the effective single-fermion Hamiltonian $h_{\phi\left(\tau\right)}$ reduces to the free-fermion matrix $h_{0}$ in the absence of $\phi$, i.e.\ $h_{\phi\left(\tau\right)=0}=h_{0}$, (ii) that the boson field $\phi$ is itself an $r_{0}$-local object\footnote{Thus $\phi$ is a map from sets of lattice sites with diameter less than $r_{0}$, such as links, plaquettes etc., to a fixed vector space $\mathbb{C}^{k}$. Additionally, the $\phi$ integration in \eqref{eq:2} runs over all such functions. As an example, restricting to constant functions $\phi$ leads to non-local all-to-all interactions between fermions.}, and (iii) that the $r_{0}$-locality of $h_{0}$ and $H_{I}$ implies the $r$-locality of $S_{\phi}$ and $h_{\phi\left(\tau\right)}$, where $r$ is some function of $r_{0}$, independent of system size.
The physical content of these assumptions is that the fields $\psi$ and operators $f$ correspond to the same physical fermion\footnote{Technically, via the fermionic coherent state construction of the functional integral \citep{altland2010condensed}.}, and that the boson $\phi$ mediates \textit{all} fermionic interactions $H_{I}$, and therefore corresponds to both the physical bosons in $\mathcal{H}_{\text{B}}$ and to composite objects made of an even number of fermions within a range $r_{0}$ (e.g.\ a Cooper pair $\phi\sim ff$). We can therefore write \begin{align} S_{\phi} & =\sum_{\tau,\mathbf{x}}S_{\phi;\tau,\mathbf{x}},\label{eq:16}\\ h_{\phi\left(\tau\right)} & =\sum_{\mathbf{x}}h_{\phi\left(\tau\right);\mathbf{x}},\nonumber \end{align} where each term $S_{\phi;\tau,\mathbf{x}}$ depends only on the values of $\phi$ at points $\left(\mathbf{x}',\tau'\right)$ with $\left|\tau-\tau'\right|,\left|\mathbf{x}-\mathbf{x}'\right|\leq r$, and similarly, each term $h_{\phi\left(\tau\right);\mathbf{x}}$ is supported on a disk of radius $r$ around $\mathbf{x}$, and depends on the values of $\phi\left(\tau\right)$ at points within this disk. Note that even though $H$ is Hermitian, we do not assume the same for $h_{\phi\left(\tau\right)}$. Non-Hermitian $h_{\phi\left(\tau\right)}$s naturally arise in Hubbard\textendash Stratonovich decouplings, see e.g.\ \citep{PhysRevB.71.155115,wang2015split}. Even when $h_{\phi\left(\tau\right)}$ is Hermitian for all $\phi$, its time-dependence implies that $U_{\phi}$ is non-Hermitian, and therefore $\text{Det}\left(I+U_{\phi}\right)$ in Eq.\eqref{eq:18-2} is generically complex valued \citep{PhysRevD.24.2278}. This is the generic origin of the sign problem in DQMC.
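This last point is easy to exhibit numerically. In the following minimal sketch (random Hermitian matrices stand in for $h_{\phi\left(\tau\right)}$ at three Trotter slices; the setup is illustrative), each factor $e^{-\Delta\tau h}$ is Hermitian and positive-definite, yet the time-ordered product is non-Hermitian, and $\text{Det}\left(I+U_{\phi}\right)$ acquires a complex phase.

```python
import numpy as np

rng = np.random.default_rng(1)

def expmh(h, dt):
    """exp(-dt*h) for Hermitian h, via its eigendecomposition."""
    e, v = np.linalg.eigh(h)
    return (v * np.exp(-dt * e)) @ v.conj().T

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n, dt = 4, 0.7
# Trotterized time-ordered product over three time slices,
# U = e^{-dt h_3} e^{-dt h_2} e^{-dt h_1}, each h_k Hermitian
U = expmh(rand_herm(n), dt) @ expmh(rand_herm(n), dt) @ expmh(rand_herm(n), dt)

# Each factor is Hermitian and positive-definite; with only two slices the
# product would still have positive spectrum (A B ~ A^{1/2} B A^{1/2}), but
# with three non-commuting slices U is non-Hermitian, with complex eigenvalues
assert not np.allclose(U, U.conj().T)

d = np.linalg.det(np.eye(n) + U)      # generically complex valued
```

Note that two slices would not suffice for this demonstration: the product of two positive-definite factors is similar to a positive-definite matrix, so its determinant weight would still be real and positive.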
Section \ref{subsec:Local-and--homogeneous} below describes the notion of \textit{fermionic design principles}, algebraic conditions on $U_{\phi}$ implying $\text{Det}\left(I+U_{\phi}\right)\geq0$, and defines what it means for such design principles to be local and homogeneous. In the following analysis, we exclude the case of 'classically-interacting fermions', where $\phi$ is time-independent and $h_{\phi}$ is Hermitian. In this case the fermionic weight $\text{Det}\left(I+e^{-\beta h_{\phi}}\right)$ is trivially non-negative, and sign-free DQMC is always possible, provided $S_{\phi}\in\mathbb{R}$. We view such models as 'exactly solvable', on equal footing with free-fermion and commuting projector models. Given a phase of matter, the possible existence of exactly solvable models is independent of the possible existence of sign-free models. Even when an exactly solvable model exists, QMC simulations are of interest for generic questions, such as phase transitions due to deformations of the model \citep{hofmann2019search}. In particular, Ref.\citep{PhysRevLett.119.127204} utilized a classically-free description of Kitaev's honeycomb model to obtain the thermal Hall conductance and chiral central charge, which should be contrasted with the intrinsic sign problem we find in the corresponding phase of matter, see Table \ref{tab:1} and Sec.\ref{sec:No-sign-free-DQMC}. \subsubsection{Local and homogeneous fermionic design principles\label{subsec:Local-and--homogeneous}} The representation \eqref{eq:2} is sign-free if $p\left(\phi\right)=e^{-S_{\phi}}\text{Det}\left(I+U_{\phi}\right)\geq0$ for all $\phi$. A design principle then amounts to a set of polynomially verifiable properties \footnote{That is, properties which can be verified in a polynomial-in-$\beta\left|X\right|$ time. As an example, given a local Hamiltonian, deciding whether there exists a local basis in which it is stoquastic is NP-complete \citep{marvian2018computational,klassen2019hardness}.
In particular, one does not need to perform the exponential operation of evaluating $p$ on every configuration $\phi$ to ensure that $p\left(\phi\right)\geq0$. Had this been possible, there would be no need for a Monte Carlo sampling of the phase space $\left\{ \phi\right\} $.} of $S_{\phi}$ and $h_{\phi\left(\tau\right)}$ that guarantee that the complex phase of $\text{Det}\left(I+U_{\phi}\right)$ is opposite to that of $e^{-S_{\phi}}$. For the sake of presentation, we restrict attention to the case where $S_{\phi}$ is manifestly real valued, and $\text{Det}\left(I+U_{\phi}\right)\geq0$ due to an algebraic condition on the operator $U_{\phi}$, which we write as $U_{\phi}\in\mathcal{C}_{U}$. This is assumed to follow from an algebraic condition on $h_{\phi\left(\tau\right)}$, written as $h_{\phi\left(\tau\right)}\in\mathcal{C}_{h}$, manifestly satisfied for all $\phi\left(\tau\right)$. The set $\mathcal{C}_{h}$ is assumed to be closed under addition, while $\mathcal{C}_{U}$ is closed under multiplication: $h_{1}+h_{2}\in\mathcal{C}_{h}$ for all $h_{1},h_{2}\in\mathcal{C}_{h}$, and $U_{1}U_{2}\in\mathcal{C}_{U}$ for all $U_{1},U_{2}\in\mathcal{C}_{U}$. The simplest example, where $\mathcal{C}_{U}=\mathcal{C}_{h}$ is the set of matrices obeying a fixed time reversal symmetry, is discussed in Sec.\ref{subsec:Example:-time-reversal}. In Appendix \ref{subsec:Locality-of-known} we review all other design principles known to us, demonstrate that most of them are of the simplified form above, and generalize our arguments to those that are not. Comparing with the bosonic Hamiltonians treated in Sec.\ref{sec:No-stoquastic-Hamiltonians}, we note that $\mathcal{C}_{h}$ is analogous to the set of stoquastic Hamiltonians $H$ in a fixed basis, while $\mathcal{C}_{U}$ is analogous to the resulting set of matrices $e^{-\beta H}$ with non-negative entries.
Design principles, as defined above (and in the literature), are purely algebraic conditions, which carry no information about the underlying geometry of space-time. However, as demonstrated in Sec.\ref{subsec:Example:-time-reversal}, in order to allow for local interactions, mediated by an $r_{0}$-local boson $\phi$, a design principle must also be local in some sense. We will adopt the following definitions, which are shown to be satisfied by all physical applications of design principles that we are aware of, in Sec.\ref{subsec:Example:-time-reversal} and Appendix \ref{subsec:Locality-of-known}. \paragraph*{Definition (term-wise sign-free):} We say that a DQMC representation is term-wise sign-free due to a design principle $\mathcal{C}_{h}$, if each of the local terms $S_{\phi;\tau,\mathbf{x}},h_{\phi\left(\tau\right);\mathbf{x}}$ obeys the design principle separately, rather than just their sums $S_{\phi},h_{\phi\left(\tau\right)}$. Thus $S_{\phi;\tau,\mathbf{x}}$ is real valued, and $h_{\phi\left(\tau\right);\mathbf{x}}\in\mathcal{C}_{h}$, for all $\tau,\mathbf{x}$. \medskip{} This is analogous to the requirement in Sec.\ref{subsec:Setup} that $H'$ be term-wise stoquastic. Note that even when a DQMC representation is term-wise sign-free, the resulting Boltzmann weights $p\left(\phi\right)$ are sign-free in a non-local manner: $\text{Det}\left(I+U_{\phi}\right)$ involves the values of $\phi$ at all space-time points, and splitting the determinant into a product of local terms by the Leibniz formula reintroduces signs, which capture the fermionic statistics. In this respect, the ``classical'' Boltzmann weights $p\left(\phi\right)$ are always non-local in DQMC. \paragraph*{Definition (on-site homogeneous design principle):\label{par:on-site-homogeneous}} A design principle is said to be on-site homogeneous if any permutation of the lattice sites $\sigma\in S_{X}$ obeys it.
That is, the operator \begin{align} & O_{\left(\mathbf{x},\alpha\right),\left(\mathbf{x}',\alpha'\right)}^{\left(\sigma\right)}=\delta_{\mathbf{x},\sigma\left(\mathbf{x}'\right)}\delta_{\alpha,\alpha'},\label{eq:20-0} \end{align} viewed as a single-fermion imaginary-time evolution operator, obeys the design principle: $O^{\left(\sigma\right)}\in\mathcal{C}_{U}$, for all $\sigma\in S_{X}$. \medskip{} This amounts to the statement that the design principle treats all lattice sites on equal footing, since it follows that $U_{\phi}\in\mathcal{C}_{U}$ if and only if $O^{\left(\sigma\right)}U_{\phi}O^{\left(\tilde{\sigma}\right)}\in\mathcal{C}_{U}$, for all permutations $\sigma,\tilde{\sigma}$. It may be that a design principle is on-site homogeneous only with respect to a sub-lattice $X'\subset X$. In this case we simply treat $X'$ as the spatial lattice, and add the finite set $X/X'$ to the $\mathsf{d}_{\text{F}}$ internal degrees of freedom. Comparing with Sec.\ref{subsec:Setup}, on-site homogeneous design principles are analogous to the set of Hamiltonians $H'$ which are stoquastic in an on-site homogeneous basis: any qudit permutation operator has non-negative entries in this basis, like the imaginary-time evolution $e^{-\beta H'}$. \medskip{} With these two notions of locality and homogeneity in design principles, we now define the DQMC analog of locally stoquastic Hamiltonians (see Sec.\ref{sec:No-stoquastic-Hamiltonians}). \medskip{} \paragraph*{Definition (locally sign-free DQMC):} Given a local fermion-boson Hamiltonian $H$, we say that $H$ allows for a locally sign-free DQMC simulation, if there exists a local unitary $U$, such that $H'=UHU^{\dagger}$ has a local DQMC representation \eqref{eq:2}, which is term-wise sign-free due to an on-site homogeneous design principle.
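For an antiunitary design principle of the on-site form discussed in Sec.\ref{subsec:Example:-time-reversal}, this definition can be verified in a few lines: writing $\mathsf{T}_{0}=J_{0}\mathcal{K}$ with $J_{0}$ unitary and $\mathcal{K}$ complex conjugation, commutation with the real permutation matrix $O^{\left(\sigma\right)}=P\otimes I$ reduces to $J_{0}O^{\left(\sigma\right)}=O^{\left(\sigma\right)}J_{0}$. A minimal numerical sketch (illustrative, not tied to a particular model):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, d_F = 5, 2                 # lattice sites, internal components

# On-site antiunitary T0 = J0 · K, with K complex conjugation and
# J0 = I ⊗ j, j = i*sigma_y (a real matrix), so that T0^2 = -I
j = np.array([[0, 1], [-1, 0]])
J0 = np.kron(np.eye(n_sites), j)
assert np.allclose(J0 @ J0.conj(), -np.eye(n_sites * d_F))

# A site permutation acts as O = P ⊗ I on C^{|X|} ⊗ C^{d_F}
P = np.eye(n_sites)[rng.permutation(n_sites)]
O = np.kron(P, np.eye(d_F))

# [T0, O] = 0 reduces to J0 · conj(O) = O · J0; O is real, and P ⊗ I
# commutes with I ⊗ j, so the design principle holds for every sigma
assert np.allclose(J0 @ O.conj(), O @ J0)
```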
\medskip{} Note that the DQMC representation \eqref{eq:2} is not of the Hamiltonian but of the partition function, and clearly $Z'=\text{Tr}\left(e^{-\beta H'}\right)=\text{Tr}\left(e^{-\beta H}\right)=Z$. What the above definition entails is that it is $H'$, rather than $H$, from which the DQMC data $S_{\phi},h_{\phi\left(\tau\right)}$ is obtained, as described in Sec.\ref{subsec:Local-determinantal-QMC}. This data is then assumed to be term-wise sign-free due to an on-site homogeneous design principle. The local unitary $U$ appearing in the above definition is generally fermionic \citep{PhysRevB.91.125149}: it can be written as a finite time evolution $U=\text{TO}e^{-i\int_{0}^{1}\tilde{H}\left(t\right)dt}$, where $\tilde{H}$ is a local fermion-boson Hamiltonian, which is either piecewise-constant or smooth as a function of $t$, cf.\ Sec.\ref{subsec:Setup}. \medskip{} \subsubsection{Example: time-reversal design principle\label{subsec:Example:-time-reversal}} To demonstrate the above definitions in a concrete setting, consider the time-reversal design principle, defined by an anti-unitary operator $\mathsf{T}$ acting on the single-fermion Hilbert space $\mathcal{H}_{1\text{F}}\cong\mathbb{C}^{\left|X\right|}\otimes\mathbb{C}^{\mathsf{d}_{\text{F}}}$, such that $\mathsf{T}^{2}=-I$. The set $\mathcal{C}_{h}$ contains all $\mathsf{T}$-invariant matrices, $\left[\mathsf{T},h_{\phi\left(\tau\right)}\right]=0$. It follows that $\left[\mathsf{T},U_{\phi}\right]=0$, so that $\mathcal{C}_{U}=\mathcal{C}_{h}$ in this case, and this implies $\text{Det}\left(I+U_{\phi}\right)\geq0$ \citep{Hands_2000,PhysRevB.71.155115}.
A sufficient condition on $\mathsf{T}$ that guarantees that the design principle it defines is on-site homogeneous is that it is of the form $\mathsf{T}_{0}=I_{\left|X\right|}\otimes\mathsf{t}$, where $I_{\left|X\right|}$ is the identity matrix on $\mathbb{C}^{\left|X\right|}$, and $\mathsf{t}$ is an anti-unitary on $\mathbb{C}^{\mathsf{d}_{\text{F}}}$ that squares to $-I_{\mathsf{d}_{\text{F}}}$. Equivalently, $\mathsf{T}$ is block diagonal, with identical blocks $\mathsf{t}$ corresponding to the lattice sites $\mathbf{x}\in X$. It is then clear that the permutation matrices $O^{\left(\sigma\right)}$ defined in Eq.\eqref{eq:20-0} commute with $\mathsf{T}$, so $O^{\left(\sigma\right)}\in\mathcal{C}_{U}$ for all $\sigma\in S_{X}$. Note that the design principle $\mathsf{T}$ may correspond to a \textit{physical} time-reversal $\mathcal{T}$, discussed in Sec.\ref{sec:Spontaneous-chirality}, only if it is on-site homogeneous, which is why we distinguish the two in our notation. Additionally, if the operator $\mathsf{T}$ is $r_{\mathsf{T}}$-local with some range $r_{\mathsf{T}}\geq0$, then any local $h_{\phi\left(\tau\right)}$ which is sign-free due to $\mathsf{T}$ can be made term-wise sign-free. Indeed, if $\left[\mathsf{T},h_{\phi\left(\tau\right)}\right]=0$ then \begin{align} h_{\phi\left(\tau\right)} & =\frac{1}{2}\left(h_{\phi\left(\tau\right)}+\mathsf{T}h_{\phi\left(\tau\right)}\mathsf{T}^{-1}\right)\label{eq:21-0}\\ & =\sum_{\mathbf{x}}\frac{1}{2}\left(h_{\phi\left(\tau\right);\mathbf{x}}+\mathsf{T}h_{\phi\left(\tau\right);\mathbf{x}}\mathsf{T}^{-1}\right)\nonumber \\ & =\sum_{\mathbf{x}}\tilde{h}_{\phi\left(\tau\right);\mathbf{x}},\nonumber \end{align} where $\tilde{h}_{\phi\left(\tau\right);\mathbf{x}}$ is now supported on a disk of radius $r+2r_{\mathsf{T}}$ and commutes with $\mathsf{T}$, for all $\mathbf{x}$. We see that the specific notion of $r_{\mathsf{T}}$-locality coincides with the general notion of 'term-wise sign-free'.
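The mechanism behind $\text{Det}\left(I+U_{\phi}\right)\geq0$ is Kramers pairing: if $\left[\mathsf{T},U_{\phi}\right]=0$ and $U_{\phi}v=\lambda v$, then $U_{\phi}\left(\mathsf{T}v\right)=\bar{\lambda}\,\mathsf{T}v$, so the eigenvalues come in conjugate pairs and the determinant is a product of factors $\left|1+\lambda\right|^{2}$. Together with the symmetrization \eqref{eq:21-0}, this can be checked numerically; the sketch below is illustrative (random Trotter factors are projected onto $\mathcal{C}_{h}$, with $\mathsf{T}=J\mathcal{K}$ and $J=I\otimes i\sigma_{y}$).

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, d_F = 4, 2
N = n_sites * d_F

j = np.array([[0, 1], [-1, 0]])     # i*sigma_y, real representation
J = np.kron(np.eye(n_sites), j)     # T = J·K on C^{|X|} ⊗ C^2, T^2 = -I

def t_symmetrize(h):
    """Project a Hermitian h onto [T, h] = 0: h -> (h + T h T^{-1}) / 2."""
    return (h + J @ h.conj() @ np.linalg.inv(J)) / 2

def expmh(h, dt):
    e, v = np.linalg.eigh(h)
    return (v * np.exp(-dt * e)) @ v.conj().T

# Time-ordered product of T-invariant, mutually non-commuting factors
U = np.eye(N)
for _ in range(3):
    a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    U = expmh(t_symmetrize((a + a.conj().T) / 2), 0.5) @ U

assert np.allclose(J @ U.conj(), U @ J)     # [T, U] = 0

# Kramers pairing: Det(I + U) = prod |1 + lambda|^2 >= 0,
# even though U itself is non-Hermitian
d = np.linalg.det(np.eye(N) + U)
assert abs(d.imag) < 1e-8 * abs(d) and d.real > 0
```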
In particular, $\mathsf{T}=\mathsf{T}_{0}$ has a range $r_{\mathsf{T}}=0$, and can therefore be applied term-wise. The above statements imply that if $\mathsf{T}=u\mathsf{T}_{0}u^{\dagger}$, where $u$ is a single-fermion local unitary, and $H$ has a local DQMC representation which is sign-free due to $\mathsf{T}$, then $H$ allows for a locally sign-free DQMC simulation. Indeed, extending $u$ to a many-body local unitary $U$, we see that $H'=UHU^{\dagger}$ admits a local DQMC representation where $\left[\mathsf{T}_{0},h_{\phi\left(\tau\right)}'\right]=0$. Since $\mathsf{T}_{0}$ is on-site homogeneous, and $h_{\phi\left(\tau\right)}'$ can be assumed term-wise sign-free (see Eq.\eqref{eq:21-0}), we have the desired result. As demonstrated in Appendix \ref{subsec:Locality-of-known}, much of the above analysis carries over to other known design principles. All realizations of $\mathsf{T}$ presented in Ref.\citep{PhysRevB.71.155115} in the context of generalized Hubbard models, and in Ref.\citep{berg2019monte} in the context of quantum critical metals, have the on-site homogeneous form $\mathsf{T}_{0}$, and therefore correspond to locally sign-free DQMC simulations. We now consider a few specific time-reversal design principles $\mathsf{T}$. The physical spin-1/2 time reversal $\mathsf{T}=\mathcal{T}^{\left(1/2\right)}$, where $\mathcal{T}_{\left(\mathbf{x},\alpha\right),\left(\mathbf{x}',\alpha'\right)}^{\left(1/2\right)}=\delta_{\mathbf{x},\mathbf{x}'}\varepsilon_{\alpha\alpha'}\mathcal{K}$, and $\alpha,\alpha'\in\left\{ \uparrow,\downarrow\right\} $ correspond to up and down spin components, is an on-site homogeneous design principle, which accounts for the absence of signs in the attractive Hubbard model \citep{PhysRevB.71.155115}.
The composition $\mathsf{T}=\mathcal{M}\mathcal{T}^{\left(1/2\right)}$ of $\mathcal{T}^{\left(1/2\right)}$ with a modulo 2 translation, $\mathcal{M}_{\left(\mathbf{x},\alpha\right),\left(\mathbf{x}',\alpha'\right)}=\delta_{\left(-1\right)^{x},\left(-1\right)^{x'+1}}\delta_{x_{e},x_{e}'}\delta_{y,y'}\delta_{\alpha,\alpha'}$, where $x_{e}=2\left\lfloor x/2\right\rfloor $ is the even part of $x$, is an on-site homogeneous design principle with respect to the sub-lattice $X'=\left\{ \left(2x,y\right):\;\left(x,y\right)\in X\right\} $, but not with respect to $X$. On the other hand, the composition $\mathsf{T}=\mathcal{P}^{\left(0\right)}\mathcal{T}^{\left(1/2\right)}$ of $\mathcal{T}^{\left(1/2\right)}$ with a spin-less reflection (or parity) $\mathcal{P}_{\left(\mathbf{x},\alpha\right),\left(\mathbf{x}',\alpha'\right)}^{\left(0\right)}=\delta_{x,-x'}\delta_{y,y'}\delta_{\alpha\alpha'}$, is not on-site homogeneous with respect to any sub-lattice. \begin{figure}[t] \begin{centering} \includegraphics[width=0.6\columnwidth]{PTcylinders} \par\end{centering} \caption{$\mathcal{P}\mathcal{T}$ symmetry as a 'non-local design principle' for chiral topological matter. (a), (c): $\mathcal{P}\mathcal{T}$ symmetry, where $\mathcal{P}$ is a reflection (with respect to the orange lines) and $\mathcal{P}\mathcal{T}$ is an on-site time-reversal, is a natural symmetry in chiral topological phases. If $\left(\mathcal{P}\mathcal{T}\right)^{2}=-I$, as is the case when $\mathcal{P}=\mathcal{P}^{\left(0\right)}$ is spin-less and $\mathcal{T}=\mathcal{T}^{\left(1/2\right)}$ is spin-full, it implies the non-negativity of fermionic determinants. Nevertheless, as $\mathcal{P}\mathcal{T}$ is non-local, it only allows for QMC simulations with $\mathcal{P}\mathcal{T}$ invariant bosonic fields, which mediate non-local interactions (blue lines) between fermions. Arrows indicate the chirality of boundary degrees of freedom.
(b), (d): Such non-local interactions effectively fold the system into a non-chiral locally-interacting system supported on half the cylinder, where $\mathcal{P}\mathcal{T}$ acts as an on-site time reversal. In particular, the boundary degrees of freedom are now non-chiral. Thus, $\mathcal{P}\mathcal{T}$ does not allow for sign-free QMC simulations of chiral topological matter. More generally, fermionic design principles must be local in order to allow for sign-free DQMC simulations of local Hamiltonians. \label{fig:-symmetry-as}} \end{figure} The latter example is clearly non-local, and we use it to demonstrate the necessity of locality in design principles. As discussed in Sec.\ref{sec:Spontaneous-chirality}, the breaking of $\mathcal{P}$ and $\mathcal{T}$ down to $\mathcal{P}\mathcal{T}$ actually defines the notion of chirality, and therefore $\mathcal{P}\mathcal{T}$ is a natural symmetry in chiral topological matter. Accordingly, the design principle $\mathsf{T}=\mathcal{P}^{\left(0\right)}\mathcal{T}^{\left(1/2\right)}$ applies to a class of models for chiral topological phases, see Appendix \ref{subsec:A-non-local-design}. This seems to allow, from the naive algebraic perspective, for a sign-free DQMC simulation of certain chiral topological phases. However, the weights $p\left(\phi\right)$ will only be non-negative for bosonic configurations $\phi$ which are invariant under $\mathsf{T}=\mathcal{P}^{\left(0\right)}\mathcal{T}^{\left(1/2\right)}$. Restricting the $\phi$ integration in Eq.\eqref{eq:2} to such configurations leads to non-local interactions between fermions $\psi$, coupling the points $\left(x,y\right)$ and $\left(-x,y\right)$. These interactions effectively fold the non-local chiral system into a local non-chiral system on half of space, see Fig.\ref{fig:-symmetry-as}. Thus, $\mathsf{T}=\mathcal{P}^{\left(0\right)}\mathcal{T}^{\left(1/2\right)}$ does \textit{not} allow for sign-free DQMC simulations of chiral topological matter.
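Both the algebraic validity and the spatial non-locality of this design principle are visible already at the single-particle level; the sketch below (illustrative) checks that $\mathsf{T}=\mathcal{P}^{\left(0\right)}\mathcal{T}^{\left(1/2\right)}$ squares to $-I$, as needed for non-negative determinants, while relating sites separated by a distance comparable to the system size.

```python
import numpy as np

Nx, d_F = 6, 2
eps = np.array([[0, 1], [-1, 0]])      # epsilon_{alpha alpha'}

# Spin-less reflection P0: x -> -x (mod Nx), acting trivially on spin
P0 = np.zeros((Nx, Nx))
for x in range(Nx):
    P0[(-x) % Nx, x] = 1

# T^(1/2) = (I ⊗ eps)·K, so the composition is PT = M·K with M = P0 ⊗ eps
M = np.kron(P0, eps)
assert np.allclose(M @ M.conj(), -np.eye(Nx * d_F))    # (PT)^2 = -I

# ...but PT is non-local: it relates sites x and -x, a distance
# comparable to the system size, so it is not on-site homogeneous
moved = [x for x in range(Nx) if (-x) % Nx != x]
assert max(abs(((-x) % Nx) - x) for x in moved) >= Nx - 2
```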
\subsubsection{Sign-free geometric manipulations in DQMC\label{sec:Sign-free-geometric-manipulation}} Let $Z$ be a partition function in a local DQMC form \eqref{eq:2}, on the discrete torus $X=\mathbb{Z}_{N_{x}}\times\mathbb{Z}_{N_{y}}$ and imaginary time circle $S_{\beta}^{1}=\mathbb{R}/\beta\mathbb{Z}$, which is term-wise sign-free due to an on-site homogeneous design principle. In this section we show that it is possible to cut $X$ to the cylinder $C$, and subsequently introduce a screw dislocation in the space-time $C\times S_{\beta}^{1}$, which corresponds to the momentum polarization \eqref{eq:12-3-1}, while keeping the DQMC weights $p\left(\phi\right)$ non-negative. \paragraph*{Introducing spatial boundaries} Given a translation $T^{\mathbf{d}}$ ($\mathbf{d}\in X$), we can cut the torus $X$ along a line $l$ parallel to $\mathbf{d}$, and obtain a cylinder $C$ where $T^{\mathbf{d}}$ acts as a translation within each boundary component, as in Sec.\ref{sec:No-stoquastic-Hamiltonians}. Given the DQMC representation \eqref{eq:2} on $X$, the corresponding representation on $C$ is obtained by eliminating all local terms $S_{\phi;\tau,\mathbf{x}},h_{\phi\left(\tau\right);\mathbf{x}}$ whose support overlaps $l$, as in Fig.\ref{fig:cutting}. This procedure may render $S_{\phi},h_{\phi\left(\tau\right)}$ independent of certain degrees of freedom $\phi\left(\mathbf{x},\tau\right),\psi\left(\mathbf{x},\tau\right)$, with $\mathbf{x}$ within a range $r$ of $l$, in which case we simply remove such degrees of freedom from the functional integral \eqref{eq:2}\footnote{For $r_{0}$-local $\phi$, which is defined on links, plaquettes, etc., we also remove from the functional $\phi$ integration those links, plaquettes, etc.\ which overlap $l$.}.
Since $S_{\phi;\tau,\mathbf{x}},h_{\phi\left(\tau\right);\mathbf{x}}$ obey the design principle for every $\mathbf{x},\tau$, the resulting $S_{\phi},h_{\phi\left(\tau\right)}$ still obey the design principle and the weights $p\left(\phi\right)$ remain real and non-negative. \paragraph*{Introducing a screw dislocation in space-time} Let us now restrict attention to $\mathbf{d}=\left(1,0\right)$, and make contact with the momentum polarization \eqref{eq:12-3-1}. Given a partition function on the space-time $C\times S_{\beta}^{1}$, consider twisting the boundary conditions in the time direction, \begin{align} & \phi_{\tau+\beta,x,y}=\phi_{\tau,x-\lambda\Theta\left(y\right),y},\label{eq:5-1-0}\\ & \psi_{\tau+\beta,x,y}=-\psi_{\tau,x-\lambda\Theta\left(y\right),y}.\nonumber \end{align} Note that $\lambda\in\mathbb{Z}_{N_{x}}$, since $x\in\mathbb{Z}_{N_{x}}$. In particular, the full twist $\lambda=N_{x}$ is equivalent to the untwisted case $\lambda=0$, which reflects the fact that the modular parameter of the torus is defined mod 1 (see e.g.\ example 8.2 of \citep{nakahara2003geometry}). The case $\lambda=0$ gives the standard boundary conditions, where the partition function is, in Hamiltonian terms, just $Z=\text{Tr}\left(e^{-\beta H}\right)$. In this case $Z>0$ since $H$ is Hermitian, though its QMC representation $Z=\sum_{\phi}p\left(\phi\right)$ will generically involve complex valued weights $p$. The twisted case $\lambda=1$ corresponds to the insertion of the half-translation operator, \begin{align} \tilde{Z} & =\text{Tr}\left(T_{R}e^{-\beta H}\right),\label{eq:7-2} \end{align} which appears in the momentum polarization \eqref{eq:12-3-1}. Since $T_{R}$ is unitary rather than Hermitian, $\tilde{Z}$ itself will generically be complex.
\paragraph*{Claim:} If $Z$ has a local DQMC representation \eqref{eq:2}, which is term-wise sign-free due to an on-site homogeneous design principle, then $\tilde{Z}$ also has a sign-free QMC representation: $\tilde{Z}=\sum_{\phi}\tilde{p}\left(\phi\right)$, with $\tilde{p}\left(\phi\right)\geq0$. In particular, $\tilde{Z}\geq0$. Proof of the claim is provided below. It revolves around two physical points: (i) For the boson $\phi$, we only use the fact that all boundary conditions, and those in Eq.\eqref{eq:5-1-0} in particular, are locally invisible. (ii) For the fermion $\psi$, the local invisibility of boundary conditions does not suffice, and the important point is that translations do not act on internal degrees of freedom, and therefore correspond to permutations of the lattice sites. The same holds for the half translation $T_{R}$. This distinguishes translations from internal symmetries, as well as from all other spatial symmetries, which involve point group elements, and generically act non-trivially on internal degrees of freedom. For example, a $C_{4}$ rotation will act non-trivially on spin-full fermions. \paragraph*{Proof:} We first consider the fermionic part of the Boltzmann weight, $\text{Det}\left(I+U_{\phi}\right)$. The Hamiltonian $h_{\phi\left(\tau\right)}$ depends on the values of $\phi$ at a single time slice $\tau$, and is therefore unaffected by the twist in bosonic boundary conditions. It follows that $U_{\phi}$ is independent of the twist in bosonic boundary conditions. On the other hand, the fermionic boundary conditions in \eqref{eq:5-1-0} correspond to a change of the time evolution operator $U_{\phi}\mapsto T_{R}U_{\phi}$, in analogy with \eqref{eq:7-2}.
Since the design principle $\mathcal{C}_{U}$ is assumed to be on-site homogeneous, and $T_{R}=O^{\left(\sigma\right)}$ is a permutation operator, with $\sigma:\left(x,y\right)\mapsto\left(x+\Theta\left(y\right),y\right)$, we have $T_{R}U_{\phi}\in\mathcal{C}_{U}$, and $\text{Det}\left(I+T_{R}U_{\phi}\right)\geq0$. Let us now consider the bosonic part of the Boltzmann weight $e^{-S_{\phi}}$, where each of the local terms $S_{\phi;\tau,\mathbf{x}}$ is manifestly real valued for all $\phi$. We assume that the imaginary time circle $S_{\beta}^{1}$ is discretized, such that the total number of space-time points $\left(\tau,\mathbf{x}\right)=u\in U$ is finite. Such a discretization is common in DQMC algorithms \citep{PhysRevD.24.2278,chandrasekharan2013fermion}, and the continuum case can be obtained by taking the appropriate limit. The term $S_{\phi;\tau,\mathbf{x}}$ can then be written as a composition $f\circ g_{V}$, where $f$ is a real valued function, and $g_{V}:\left(\phi_{u}\right)_{u\in U}\mapsto\left(\phi_{u}\right)_{u\in V}$ chooses the values of $\phi$ on which $S_{\phi;\tau,\mathbf{x}}$ depends, where $V\subset U$ is the support of $S_{\phi;\tau,\mathbf{x}}$. The bosonic boundary conditions \eqref{eq:5-1-0} then amount to a modification of the support $V\mapsto V_{\lambda}$, as depicted in Fig.\ref{fig:BoundaryConditions}, but not of the function $f$, which remains real valued. In particular, for $\lambda=1$ we have $S_{\phi;\tau,\mathbf{x}}\mapsto\tilde{S}_{\phi;\tau,\mathbf{x}}=f\circ g_{V_{1}}$, and $S_{\phi}\mapsto\tilde{S}_{\phi}=\sum_{\tau,\mathbf{x}}\tilde{S}_{\phi;\tau,\mathbf{x}}\in\mathbb{R}$. Combining the above conclusions for the bosonic and fermionic parts of $\tilde{p}\left(\phi\right)=e^{-\tilde{S}_{\phi}}\text{Det}\left(I+T_{R}U_{\phi}\right)$, we find that $\tilde{p}\left(\phi\right)\geq0$ for all $\phi$.
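The two halves of the proof combine into a simple numerical consistency check; in the sketch below (illustrative, with random $\mathsf{T}_{0}$-invariant Trotter factors standing in for $U_{\phi}$), multiplying by the site permutation $T_{R}=O^{\left(\sigma\right)}$, $\sigma:\left(x,y\right)\mapsto\left(x+\Theta\left(y\right),y\right)$, preserves the design principle and yields a non-negative twisted weight.

```python
import numpy as np

rng = np.random.default_rng(4)
Nx, Ny, d_F = 4, 2, 2
n_sites = Nx * Ny
N = n_sites * d_F

j = np.array([[0, 1], [-1, 0]])
J = np.kron(np.eye(n_sites), j)        # on-site T0 = J·K, T0^2 = -I

def t_symmetrize(h):
    return (h + J @ h.conj() @ np.linalg.inv(J)) / 2

def expmh(h, dt):
    e, v = np.linalg.eigh(h)
    return (v * np.exp(-dt * e)) @ v.conj().T

# U_phi: time-ordered product of T0-invariant Trotter factors
U = np.eye(N)
for _ in range(3):
    a = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    U = expmh(t_symmetrize((a + a.conj().T) / 2), 0.5) @ U

# Half translation sigma: (x, y) -> (x + Theta(y), y), with Theta = 1
# on the upper half of the cylinder (here y >= Ny // 2)
def site(x, y):
    return x + Nx * y

P = np.zeros((n_sites, n_sites))
for x in range(Nx):
    for y in range(Ny):
        xs = (x + 1) % Nx if y >= Ny // 2 else x
        P[site(xs, y), site(x, y)] = 1
T_R = np.kron(P, np.eye(d_F))

# T_R is a pure site permutation, so T_R U still obeys the design
# principle, and the twisted fermionic weight is non-negative
assert np.allclose(J @ (T_R @ U).conj(), (T_R @ U) @ J)
d = np.linalg.det(np.eye(N) + T_R @ U)
assert abs(d.imag) < 1e-8 * abs(d) and d.real > 0
```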
\begin{figure}[t] \begin{centering} \includegraphics[width=0.3\columnwidth]{BoundaryConditions} \par\end{centering} \caption{\label{fig:BoundaryConditions}Implementing the bosonic boundary conditions \eqref{eq:5-1-0}. The lattice lies in the $x-\tau$ plane, at $y>0$ where the boundary conditions are non-trivial. The orange area marks the support, of diameter $r$, of a local term $S_{\phi;\tau,\mathbf{x}}$ which is unaffected by the boundary conditions. Blue areas correspond to the support of a local term which is affected by the boundary conditions, with pale blue indicating the untwisted case $\lambda=0$.} \end{figure} \subsection{Excluding sign-free DQMC for chiral topological matter\label{sec:No-sign-free-DQMC}} We are now ready to demonstrate the existence of an intrinsic sign problem in chiral topological matter comprised of bosons \textit{and} fermions, using the machinery of Sections \ref{sec:No-stoquastic-Hamiltonians}-\ref{sec:Determinantal-quantum-Monte}. Let $H$ be a gapped local fermion-boson Hamiltonian on the discrete torus, which allows for a locally sign-free DQMC simulation. Unpacking the definition, this means that $H'=UHU^{\dagger}$ has a local DQMC representation which is term-wise sign-free due to an on-site homogeneous design principle. As shown in Sec.\ref{sec:Sign-free-geometric-manipulation}, this implies that $\tilde{Z}':=\text{Tr}\left(T_{R}e^{-\beta H'}\right)$, written on the cylinder, also has a local DQMC representation, obeying a local and on-site homogeneous design principle, and as a result, $\tilde{Z}'>0$. Now, as shown in Sec.\ref{sec:No-stoquastic-Hamiltonians}, the positivity of $\tilde{Z}'$ implies $\theta_{a}=e^{2\pi ic/24}$ for some anyon $a$.
We therefore have the fermionic version of Result \hyperref[Result 1]{1}, \begin{description} \item [{Result$\;$1F\label{Result 1F}}] If a local fermion-boson Hamiltonian $H$, which is in a chiral topological phase of matter, allows for a locally sign-free DQMC simulation, then one of the corresponding topological spins satisfies $\theta_{a}=e^{2\pi ic/24}$. Equivalently, a chiral topological phase of matter where $e^{2\pi ic/24}$ is not the topological spin of some anyon, i.e.\ $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, admits no local fermion-boson Hamiltonians for which locally sign-free DQMC simulation is possible. \end{description} As shown in Sec.\ref{sec:Spontaneous-chirality}, the positivity of $\tilde{Z}'$ implies $\theta_{a}=e^{2\pi ic/24}$ for some anyon $a$, even if chirality appears only spontaneously. We therefore obtain the fermionic version of Result \hyperref[Result 2]{2}, \begin{description} \item [{Result$\;$2F\label{Result 2F}}] If a local fermion-boson Hamiltonian $H$, which is in a spontaneously-chiral topological phase of matter, allows for a locally sign-free DQMC simulation, then one of the corresponding topological spins satisfies $\theta_{a}=e^{2\pi ic/24}$. Equivalently, a spontaneously-chiral topological phase of matter where $e^{2\pi ic/24}$ is not the topological spin of some anyon, i.e.\ $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, admits no local fermion-boson Hamiltonians which allow for a locally sign-free DQMC simulation. \end{description} In stating these results, we do not restrict to fermionic phases, because bosonic phases may admit a fermionic description, for which DQMC is of interest. When a bosonic phase admits a fermionic description, the bosonic field $\phi$ in Eq.\eqref{eq:2} will contain a $\mathbb{Z}_{2}$ gauge field that couples to the fermion parity $\left(-1\right)^{N_{f}}$ of $\psi$.
An important series of examples is given by the non-abelian Kitaev spin liquids, which admit a description in terms of gapped Majorana fermions with an odd Chern number $\nu$, coupled to a $\mathbb{Z}_{2}$ gauge field \citep{kitaev2006anyons}. As described in Table \ref{tab:1}, the criterion $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $ applies to the Kitaev spin liquid, for all $\nu\in2\mathbb{Z}-1$. Result \hyperref[Result 1]{1} then excludes the possibility of locally stoquastic Hamiltonians for the microscopic description in terms of spins, while Result \hyperref[Result 1F]{1F} excludes the possibility of locally sign-free DQMC simulations in the emergent fermionic description. \subsection{Conjectures: beyond chiral matter\label{sec:Generalization-and-extension} } In Sec.\ref{sec:No-stoquastic-Hamiltonians} and Appendices \ref{sec:Spontaneous-chirality}-\ref{sec:No-sign-free-DQMC} we established a criterion for the existence of intrinsic sign problems in chiral topological matter: if $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $, or equivalently $1\notin\text{Spec}\left(\mathbf{T}\right)$ (see Result \hyperref[Result 1']{1'}), then an intrinsic sign problem exists. Even if taken at face value, this criterion never applies to non-chiral bosonic topological phases, where $c=0$, due to the vacuum topological spin $1\in\left\{ \theta_{a}\right\} $. The same statement applies to all bosonic phases with $c\in24\mathbb{Z}$. In this section we propose a refined criterion for intrinsic sign problems in topological matter, which non-trivially applies to both chiral \textit{and} non-chiral cases, and also unifies the results of this section with those obtained by other means in a parallel work \citep{PhysRevResearch.2.033515}. 
Reference \citep{PhysRevLett.115.036802} proposed the 'universal wave-function overlap' method for characterizing topological order from any basis $\left\{ \ket i\right\} $ for the ground state subspace of a local gapped Hamiltonian $H$ on the torus $X$. The method is based on the conjecture \begin{align} \bra i\mathbf{T}_{\text{m}}\ket j= & e^{-\alpha_{\mathbf{T}}A+o\left(A^{-1}\right)}\mathbf{T}_{ij},\label{eq:1-3} \end{align} where $A$ is the area of the torus, $\alpha_{\mathbf{T}}$ is a non-universal complex number with non-negative real part, the microscopic Dehn-twist operator $\mathbf{T}_{\text{m}}$ implements the Dehn twist $\left(x,y\right)\mapsto\left(x+y,y\right)$ on the Hilbert space, and $\mathbf{T}_{ij}$ are the entries of the topological $\mathbf{T}$-matrix that characterizes the phase of $H$, in the basis $\left\{ \ket i\right\} $. The same statement applies to any element $\mathbf{M}$ of the mapping class group of the torus, isomorphic to $SL\left(2,\mathbb{Z}\right)$, with $\mathbf{M}$ in place of $\mathbf{T}$ in Eq.\eqref{eq:1-3}. The non-universal exponential suppression of the overlap is expected because $\mathbf{M}_{\text{m}}$ will not generically map the ground-state subspace to itself, but if $\mathbf{M}_{\text{m}}$ happens to be a symmetry of $H$, then $\alpha_{\mathbf{M}}=0$ \citep{PhysRevB.85.235151,PhysRevLett.110.067208}. Though we are not aware of a general analytic derivation of Eq.\eqref{eq:1-3}, it was verified analytically and numerically in a large number of examples in Refs.\citep{PhysRevLett.115.036802,PhysRevB.91.125123,PhysRevB.91.075114,PhysRevB.90.205114,PhysRevB.95.235107}, for Hamiltonians in both chiral and non-chiral phases. 
Note the close analogy between Eq.\eqref{eq:1-3} and the momentum polarization \eqref{eq:12-3-1}, where the microscopic Dehn-twist $\mathbf{T}_{\text{m}}$ on the torus and the half translation $T_{R}$ on the cylinder play a similar role, and non-universal extensive contributions are followed by sub-extensive universal data. To make this analogy clearer, and make contact with the analysis of Sections \ref{sec:No-stoquastic-Hamiltonians} and \ref{sec:No-sign-free-DQMC}, we consider the object $Z_{\mathbf{T}}=\text{Tr}\left(\mathbf{T}_{\text{m}}e^{-\beta H}\right)$, which satisfies \begin{align} Z_{\mathbf{T}}= & Ze^{-\alpha_{\mathbf{T}}A+o\left(A^{-1}\right)}\text{Tr}\left(\mathbf{T}\right),\label{eq:25-0} \end{align} and can be interpreted as either the (unnormalized) thermal expectation value of $\mathbf{T}_{\text{m}}$, or the partition function on a space-time twisted by $\mathbf{T}$, in analogy with Sec.\ref{subsec:Momentum-polarization}. Equation \eqref{eq:25-0} is valid for temperatures $\Delta E\ll1/\beta\ll E_{\text{g}}$, much lower than the bulk gap $E_{\text{g}}$ and much higher than any finite-size splitting in the ground-state subspace, $\Delta E=o\left(A^{-1}\right)$. Just like $T_{R}$, the operator $\mathbf{T}_{\text{m}}$ acts as a permutation of the lattice sites. Therefore, following Section \ref{sec:No-stoquastic-Hamiltonians} and Appendix \ref{sec:No-sign-free-DQMC}, if $H$ is either locally stoquastic, or admits a locally sign-free DQMC simulation, then $\text{Tr}\left(\mathbf{T}\right)\ge0$. In terms of $c$ and $\left\{ \theta_{a}\right\} $, this implies $e^{-2\pi ic/24}\sum_{a}\theta_{a}=\text{Tr}\left(\mathbf{T}\right)\geq0$, where the sum runs over all topological spins. The last statement applies to both bosonic and fermionic Hamiltonians. For bosonic Hamiltonians, it can be strengthened by means of the Frobenius-Perron theorem.
If $H'=UHU^{\dagger}$ is stoquastic in the on-site basis $\ket s$, Hermitian, and has a degenerate ground state subspace, then this subspace can be spanned by an orthonormal basis $\ket{i'}$ with positive entries in the on-site basis, $\braket s{i'}\geq0$, see e.g Ref.\citep{PhysRevResearch.2.033515}. This implies that \begin{align} 0\leq\bra{i'}\mathbf{T}_{\text{m}}\ket{j'}= & e^{-\alpha_{\mathbf{T}}'A+o\left(A^{-1}\right)}\mathbf{T}_{i'j'},\label{eq:2-3-1} \end{align} where $\alpha_{\mathbf{T}}'$ is generally different from $\alpha_{\mathbf{T}}$, but the matrix $\mathbf{T}_{i'j'}$ has the same spectrum as $\mathbf{T}_{ij}$ in Eq.\eqref{eq:1-3}. This is a stronger form of \eqref{eq:25}, which implies $\mathbf{T}_{i'j'}\geq0$. Since $\mathbf{T}_{i'j'}$ is also unitary, it is a permutation matrix, $\mathbf{T}_{i'j'}=\delta_{i',\sigma\left(j'\right)}$ for some $\sigma\in S_{N}$, where $N$ is the number of ground states. In turn, this implies that the spectrum of $\mathbf{T}$ is a disjoint union of complete sets of roots of unity, \begin{align} \left\{ \theta_{a}e^{-2\pi ic/24}\right\} _{a=1}^{N} & =\text{Spec}\left(\mathbf{T}\right)=\bigcup_{k=1}^{K}R_{n_{k}},\label{eq:27-1} \end{align} where $R_{n_{k}}$ is the set of $n_{k}$th roots of unity, $n_{k},K\in\mathbb{N}$, and $\sum_{k=1}^{K}n_{k}=N$. Therefore, \begin{description} \item [{Conjecture$\;$1}] A bosonic topological phase of matter where $\left\{ \theta_{a}e^{-2\pi ic/24}\right\} $ is not a disjoint union of complete sets of roots of unity, admits no local Hamiltonians which are locally stoquastic. \end{description} In particular, this implies an intrinsic sign problem whenever $1\notin\left\{ \theta_{a}e^{-2\pi ic/24}\right\} $, thus generalizing Result \hyperref[Result 1]{1}. Moreover, the above statement applies non-trivially to phases with $c\in24\mathbb{Z}$. 
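The cycle-structure argument above can be made concrete numerically. The following sketch is ours and purely illustrative: it diagonalizes the permutation matrix of $\sigma=(0)(1\,2\,3)\in S_{4}$, whose cycle type $(1,3)$ gives the spectrum $R_{1}\cup R_{3}$, a disjoint union of complete sets of roots of unity.

```python
import numpy as np

# sigma = (0)(1 2 3) in S_4: one fixed point and one 3-cycle, so the
# spectrum of its permutation matrix is R_1 u R_3, i.e. the cube roots
# of unity together with an extra eigenvalue 1.
sigma = [0, 2, 3, 1]                     # sigma(j) for j = 0, ..., 3
P = np.zeros((4, 4))
for j, i in enumerate(sigma):
    P[i, j] = 1.0                        # matrix entries P_{sigma(j), j} = 1

eigs = np.linalg.eigvals(P)              # {1, 1, e^{2 pi i/3}, e^{-2 pi i/3}}
```

Every eigenvalue cubes to $1$, and the trace counts the fixed points of $\sigma$, in accordance with Eq.\eqref{eq:27-1} for $N=4$, $K=2$, $(n_{1},n_{2})=(1,3)$.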
In particular, for non-chiral phases, where $c=0$, it reduces to the result established in Ref.\citep{PhysRevResearch.2.033515}, thus generalizing it as well. The simplest example for a non-chiral phase with an intrinsic sign problem is the doubled semion phase, where $\left\{ \theta_{a}\right\} =\left\{ 1,i,-i,1\right\} $ \citep{PhysRevB.71.045110}. Though we are currently unaware of an analog of the Frobenius-Perron theorem that applies to DQMC, we expect that an analogous result can be established for fermionic Hamiltonians. \begin{description} \item [{Conjecture$\;$1F}] A topological phase of matter where $\left\{ \theta_{a}e^{-2\pi ic/24}\right\} $ is not a complete set of roots of unity, admits no local fermion-boson Hamiltonians for which locally sign-free DQMC simulation is possible. \end{description} The above conjectures suggest a substantial improvement over the criterion $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\} $. To demonstrate this, we go back to the $1/q$ Laughlin phases and $SU\left(2\right)_{k}$ Chern-Simons theories considered in Table \ref{tab:1}. We find a conjectured intrinsic sign problem in \textit{all} of the first one-thousand bosonic Laughlin phases ($q$ even), fermionic Laughlin phases ($q$ odd), and $SU\left(2\right)_{k}$ Chern-Simons theories. In particular, we note that the prototypical $1/3$ Laughlin phase is not captured by the criterion $e^{2\pi ic/24}\notin\left\{ \theta_{a}\right\}$, but is conjectured to be intrinsically sign-problematic. \subsection{Discussion\label{subsec:Discussion}} In this section we established the existence of intrinsic sign problems in a broad class of chiral topological phases, namely those where $e^{2\pi ic/24}$ does not happen to be the topological spin of an anyon. 
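Both criteria can be evaluated with exact rational arithmetic once each spin is written as $\theta_{a}=e^{2\pi ih_{a}}$ with $h_{a}\in\mathbb{Q}$. The sketch below is ours, for illustration only; the spin data in the examples are the standard values for the toric code ($\left\{ \theta_{a}\right\} =\left\{ 1,1,1,-1\right\} $, $c=0$), the doubled semion phase quoted above, and the bosonic $1/2$ Laughlin (semion) phase ($\left\{ \theta_{a}\right\} =\left\{ 1,i\right\} $, $c=1$).

```python
from collections import Counter
from fractions import Fraction

def spins_mod_c(hs, c):
    """Angles of theta_a * e^{-2 pi i c/24}, i.e. h_a - c/24 (mod 1)."""
    return [(Fraction(h) - Fraction(c, 24)) % 1 for h in hs]

def result1_sign_problem(hs, c):
    """Result 1': intrinsic sign problem iff 1 is not in Spec(T),
    i.e. no spin theta_a equals e^{2 pi i c/24}."""
    return Fraction(0) not in spins_mod_c(hs, c)

def conjecture1_sign_problem(hs, c):
    """Conjecture 1: sign problem unless Spec(T) is a disjoint union of
    complete sets R_n of n-th roots of unity."""
    cnt = Counter(spins_mod_c(hs, c))
    while cnt:
        # A remaining angle of maximal denominator d can only belong to R_d:
        # any R_m with m > d would contribute an angle of denominator m.
        d = max(a.denominator for a in cnt)
        for k in range(d):
            r = Fraction(k, d)
            if cnt[r] == 0:
                return True      # R_d cannot be completed -> sign problem
            cnt[r] -= 1
            if cnt[r] == 0:
                del cnt[r]
    return False                 # clean decomposition -> no obstruction

# Spin exponents h_a (theta_a = e^{2 pi i h_a}) for the examples:
toric = [Fraction(0), Fraction(0), Fraction(0), Fraction(1, 2)]      # c = 0
dsemion = [Fraction(0), Fraction(1, 4), Fraction(3, 4), Fraction(0)]  # c = 0
semion = [Fraction(0), Fraction(1, 4)]                                # c = 1
```

For the doubled semion phase, Result 1' does not apply (the vacuum spin is $1$ and $c=0$), but Conjecture 1 does, since $\left\{ 1,i,-i,1\right\} $ is not a disjoint union of complete sets of roots of unity; for the toric code neither criterion signals an obstruction, consistent with its sign-free microscopic models.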
Since these intrinsic sign problems persist even when chirality appears spontaneously, they are rooted in the macroscopic and observable data $c$, $\left\{ \theta_{a}\right\} $, rather than the microscopic absence (or presence) of time-reversal symmetry. Going beyond the simple setting of stoquastic Hamiltonians, we provided the first treatment of intrinsic sign problems in fermionic systems. In particular, we constructed a general framework which describes all DQMC algorithms and fermionic design principles that we are aware of, including the state-of-the-art design principles \citep{wang2015split,wei2016majorana,li2016majorana,wei2017semigroup} which are only beginning to be used by practitioners. Owing to its generality, it is likely that our framework will apply to additional design principles which have not yet been discovered, insofar as they are applied locally. We also presented conjectures that strengthen our results, and unify them with those obtained in Refs.\citep{hastings2016quantum,PhysRevResearch.2.033515}, under a single criterion in terms of $c$ and $\left\{ \theta_{a}\right\} $. These conjectures also imply intrinsic sign problems in many topological phases not covered by existing results. Conceptually, our results show that the sign problem is not \textit{only} a statement of computational complexity: it is, in fact, intimately connected with the physically observable properties of quantum matter. Such a connection has long been heuristically appreciated by QMC practitioners, and is placed on a firm and quantitative footing by the discovery of intrinsic sign problems.
Despite the progress made here, our understanding of intrinsic sign problems is still in its infancy, and many open questions remain: \paragraph*{Quantum computation and intrinsic sign problems} Intrinsic sign problems relate the physics of topological phases to their computational complexity, in analogy with the classification of topological phases which enable universal quantum computation \citep{Freedman:2002aa,nayak2008non}. As we have seen, many phases of matter that are known to be universal for quantum computation are also intrinsically sign-problematic, supporting the paradigm of 'quantum advantage' or 'quantum supremacy' \citep{Arute:2019aa}. Determining whether intrinsic sign problems appear in \textit{all} phases of matter which are universal for quantum computation is an interesting open problem. Additionally, we identified intrinsic sign problems in many topological phases which are not universal for quantum computation. The intermediate complexity of such phases between classical and quantum computation is another interesting direction for future work. \paragraph*{Unconventional superconductivity and intrinsic sign problems} As described in the introduction, a major motivation for the study of intrinsic sign problems comes from long-standing open problems in fermionic many-body systems, the nature of high temperature superconductivity in particular. It is currently believed that many high temperature superconductors, and the associated repulsive Hubbard models, are \textit{non-chiral} $d$-wave superconductors \citep{kantian2019understanding,berg2019monte}, in which we did not identify an intrinsic sign problem. The optimistic possibility that the sign problem can in fact be cured in repulsive Hubbard models is therefore left open, though this has not yet been accomplished in the relevant regime of parameters, away from half filling, despite intense research efforts \citep{PhysRevX.5.041041}.
Nevertheless, the state-of-the-art DMRG results of Ref.\citep{kantian2019understanding} do not exclude the possibility of a \textit{chiral} $d$-wave superconductor ($\ell=\pm 2$ in Table \ref{tab:1}). In this case we do find an intrinsic sign problem, which would account for the notorious sign problems observed in repulsive Hubbard models. More speculatively, it is possible that the mere proximity of repulsive Hubbard models to a chiral $d$-wave phase stands behind their notorious sign problems. The possible effect of an intrinsic sign problem in a given phase on the larger phase diagram was recently studied in Ref.\citep{zhang2020non}. There is also evidence for chiral $d$-wave superconductivity in doped graphene and related materials \citep{PhysRevB.84.121410,Black_Schaffer_2014}, and our results therefore suggest the impossibility of sign-free QMC simulations of these materials. We believe that the study of intrinsic sign problems in the context of unconventional superconductivity is a promising direction for future work. \paragraph*{Non-locality as a possible route to sign-free QMC } The intrinsic sign problems identified in this thesis add to existing evidence for the complexity of chiral topological phases: these do not admit local commuting projector Hamiltonians \citep{PhysRevB.89.195130,potter2015protection,PhysRevB.98.165104,kapustin2019thermal}, nor do they admit local Hamiltonians with a PEPS state as an exact ground state \citep{PhysRevLett.111.236805,PhysRevB.90.115133,PhysRevB.92.205307,PhysRevB.98.184409}. Nevertheless, relaxing the locality requirement does lead to positive results for the simulation of chiral topological matter using commuting projectors or PEPS. First, commuting projector Hamiltonians can be obtained if the local bosonic or fermionic degrees of freedom are replaced by anyonic (and therefore non-local) excitations of an underlying chiral topological phase \citep{PhysRevB.97.245144}.
Second, chiral topological Hamiltonians can have a PEPS ground state if they include interactions (or hopping amplitudes) that decay slowly as a power law with distance. One may therefore hope that sign-free QMC simulations of chiral topological matter can also be performed if the locality requirements made in Sec.\ref{sec:No-stoquastic-Hamiltonians} are similarly relaxed. Do such 'weakly-local' sign-free models exist? \paragraph*{Easing intrinsic sign problems} In this section we proved the existence of an intrinsic sign problem in chiral topological phases of matter, but we did not quantify the \textit{severity} of this sign problem, which is an important concept in both practical applications and the theory of QMC. The severity of a sign problem is quantified by the smallness of the average sign of the QMC weights $p$ with respect to the distribution $\left|p\right|$, i.e $\left\langle \text{sgn}\right\rangle :=\sum p/\sum\left|p\right|$. Since $\left\langle \text{sgn}\right\rangle $ can be viewed as the ratio of two partition functions, it obeys the generic scaling $\left\langle \text{sgn}\right\rangle \sim e^{-\Delta\beta N}$, with $\Delta\geq0$, as $\beta N\rightarrow\infty$ \citep{troyer2005computational,hangleiter2019easing}. A sign problem exists when $\Delta>0$, in which case QMC simulations require exponential computational resources, and this is what the intrinsic sign problem we identified implies for 'most' chiral topological phases of matter. From the point of view of computational complexity, all that matters is whether $\Delta=0$ or $\Delta>0$, but for practical applications the value of $\Delta$ is very important, see e.g \citep{PhysRevB.84.121410}. One may hope for a possible refinement of our results that provides a lower bound $\Delta_{0}>0$ for $\Delta$, but since we have studied \textit{topological} phases of matter, we view this as unlikely.
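To illustrate the definitions above, $\left\langle \text{sgn}\right\rangle $ can be computed for toy weights engineered to follow the generic scaling. The sketch below is ours; the severity $\Delta=0.05$ and the weight distribution are hypothetical and not derived from any model.

```python
import math
import random

def average_sign(weights):
    """<sgn> = sum(p) / sum(|p|) over sampled QMC weights p."""
    return sum(weights) / sum(abs(w) for w in weights)

random.seed(0)
delta, beta = 0.05, 10.0   # hypothetical severity and inverse temperature

def toy_weights(n, samples=200_000):
    """Weights p = +-1 with P(p = +1) tuned so E[sgn] = exp(-delta*beta*n)."""
    p_plus = (1.0 + math.exp(-delta * beta * n)) / 2.0
    return [1.0 if random.random() < p_plus else -1.0 for _ in range(samples)]

# <sgn> decays exponentially with the system size n, and the decay rate
# divided by beta*n recovers the hypothetical delta.
sgn = {n: average_sign(toy_weights(n)) for n in (2, 4, 6)}
delta_est = -math.log(sgn[6]) / (beta * 6)
```

Here $\Delta$ is recovered as $-\ln\left\langle \text{sgn}\right\rangle /\beta N$; in an actual simulation the weights would of course come from the DQMC measure rather than being engineered, and estimating $\left\langle \text{sgn}\right\rangle $ itself requires exponentially many samples once $\Delta\beta N\gg1$.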
It may therefore be possible to obtain fine-tuned models and QMC methods that lead to a $\Delta$ small enough to be \textit{practically} useful. More generally, it may be possible to search for such models and methods algorithmically, thus \textit{easing} the intrinsic sign problem \citep{hangleiter2019easing,torlai2019wavefunction,PhysRevD.97.094510,alex2020complex}. We also note that the results presented in this thesis do not exclude approaches to the sign-problem based on a modified or constrained Monte Carlo sampling \citep{PhysRevLett.83.3116,fixed-node,constrained-path}, as well as machine-learning aided QMC \citep{ML+QMC}, and infinite-volume diagrammatic QMC \citep{PhysRevLett.119.045701}. \paragraph*{Possible extensions } The chiral central charge only appears modulo 24 in our results. Nevertheless, the full value of $c$ is physically meaningful, as reviewed in the introduction. Does an intrinsic sign problem exist in all phases with $c\neq0$? The results of Ref.\citep{ringel2017quantized} strongly suggest this. The arguments of Appendix \ref{sec:Generalization-and-extension} apply equally well to any element of the modular group, rather than just the topological $\mathbf{T}$-matrix, implying that the spectrum of all elements decomposes into complete sets of roots of unity. Does this imply a tighter constraint on the TFT data than conjectured in Appendix \ref{sec:Generalization-and-extension}? Moreover, the 'universal wave-function overlap' conjecture, on which Appendix \ref{sec:Generalization-and-extension} relies, applies also to space-time dimensions $D>2+1$, which suggests intrinsic sign problems in these space-time dimensions, including the beloved $D=3+1$. Another promising direction involves symmetry protected or enriched topological phases.
In particular, the sign problem was cured in a number of bosonic SPT Hamiltonians \citep{PhysRevB.95.174418,PhysRevB.85.045114, PhysRevB.86.045106,gazit2016bosonic}, and all SPT ground states can, by definition, be made non-negative in a local basis. Nevertheless, a `symmetry protected' intrinsic sign problem, where \textit{symmetric} local bases are excluded, was recently discovered \citep{ellison2020symmetryprotected}. Such constraints may be more useful for designing sign-free models than the stronger intrinsic sign problems discussed in this thesis. \pagebreak{} \section{Outlook \label{sec:Discussion-and-outlook}} In this thesis we studied the geometric physics of chiral topological phases, and related this physics to the complexity of simulating such phases using quantum Monte Carlo algorithms. The analysis in Sec.\ref{sec:Main-section-1:}-\ref{sec:Main-section-2:} was focused on chiral superfluids and superconductors, and revealed an intricate interplay of symmetry breaking, topology, and geometry. Though we stressed the Higgs mode as an emergent geometry in the relativistic analysis of Sec.\ref{sec:Main-section-1:}, the fuller non-relativistic treatment in Sec.\ref{sec:Main-section-2:} was carried out at energy scales below the Higgs mass, where the emergent geometry simply follows the background geometry (or strain). We believe that there is beautiful physics to be revealed at higher energy scales, where the dynamics of the Higgs mode will be described by a non-relativistic, parity odd, and in part topological, quantum geometry similar to the Girvin-MacDonald-Platzman (GMP) mode in fractional quantum Hall states \citep{girvin1986magneto,haldane2009hall,haldane2011geometrical,you2014theory,gromov2017bimetric}. In particular, this physics should be important near a nematic phase transition where the spin-2 Higgs mode becomes light, which can be tuned by attractive Landau interactions \citep{hsiao2018universal}. 
We also believe that, through `composite fermions' \citep{read2000paired,PhysRevX.5.031027,Son_2018}, this will lead to a new description of the GMP mode in paired quantum Hall states, including the elusive particle-hole invariant Pfaffian, and in analogy with the treatment of the GMP mode in Jain states close to half filling \citep{PhysRevB.48.17368,PhysRevLett.117.216403,PhysRevB.97.195103,PhysRevB.97.195314}. Regarding intrinsic sign problems, we believe that the results obtained in Sec.\ref{sec:Main-section-3:}, as well as in Refs.\citep{hastings2016quantum,ringel2017quantized,PhysRevResearch.2.033515,ellison2020symmetryprotected}, represent the tip of an iceberg. Concrete extensions of these results were proposed in Sec.\ref{subsec:Discussion}. Beyond these, are there intrinsically sign-problematic phases which are not gapped, not topological, or both? Does the physics of high temperature superconductivity, or of dense nuclear matter, imply an intrinsic sign problem, thus accounting for the persistent sign problems observed by QMC practitioners in relevant models? Taking a broader perspective, intrinsic sign problems form a bridge between the notions of phases of matter and computational complexity. This should be contrasted with most results in quantum complexity \citep{TCS-066,RevModPhys.90.015002}, which are established for classes of Hamiltonians defined by microscopic conditions such as locality, non-frustration, and stoquasticity, as well as energy gap assumptions, or for specific canonical models, but usually not for phases of matter. In fact, refining statements in quantum complexity to the level of phases was the original motivation for the study of intrinsic sign problems \citep{hastings2016quantum}.
Additional statements regarding the complexity of phases are given by the classification of topological phases which enable universal quantum computation by anyon braiding \citep{Freedman:2002aa,nayak2008non}, and the proof that whenever the area law for entanglement entropy holds for a gapped Hamiltonian, it holds in its entire phase \citep{PhysRevLett.111.170501}. We find this theme to be promising for both the study of many-body quantum systems, and their use as computational devices. \newpage
\section{Introduction} An aerosol or a spray is a fluid consisting of a \textit{dispersed phase}, usually liquid droplets, sometimes solid particles, immersed in a gas referred to as the \textit{propellant}. An important class of models for the dynamics of aerosol/spray flows consists of (a) a kinetic equation for the dispersed phase, and (b) a fluid equation for the propellant. The kinetic equation for the dispersed phase and the fluid equation for the propellant are coupled through the drag force exerted by the gas on the droplets/particles. This class of models applies to the case of \textit{thin} sprays, i.e. those for which the volume fraction of the dispersed phase is typically $\ll 1$. Perhaps the simplest example of this class of models is the Vlasov-Stokes system: $$ \left\{ \begin{aligned} {}&{\partial}_tF+v\cdot{\nabla}_xF-\frac{{\kappa}}{m_p}\operatorname{div}_v((v-u)F)=0\,, \\ &-\rho_g\nu{\Delta}_xu=-{\nabla}_xp+{\kappa}\int_{\mathbf{R}^3}(v-u)F\,\mathrm{d} v\,, \\ &\operatorname{div}_xu=0\,. \end{aligned} \right. $$ The unknowns in this system are $F\equiv F(t,x,v)\ge 0$, the distribution function of the dispersed phase, i.e. the number density of particles or droplets with velocity $v$ located at the position $x$ at time $t$, and $u\equiv u(t,x)\in\mathbf{R}^3$, the velocity field in the gas. The parameters ${\kappa}$, $m_p$, $\rho_g$ and $\nu$ are positive constants. Specifically, ${\kappa}$ is the friction coefficient of the gas on the dispersed phase, $m_p$ is the mass of a particle or droplet, and $\rho_g$ is the gas density, while $\nu$ is the kinematic viscosity of the gas. The aerosol considered here is assumed for simplicity to be \textit{monodisperse} --- i.e. all the particles in the dispersed phase are of the same size and of the same mass. In practice, the particles in the dispersed phase of an aerosol are in general distributed in size (and in mass).
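Let us recall the elementary structure of the first equation above: it can be recast as $$ {\partial}_tF+v\cdot{\nabla}_xF+\operatorname{div}_v\Big(\frac{{\kappa}}{m_p}(u-v)F\Big)=0\,, $$ so that $F$ is transported along the characteristics of the Stokes drag dynamics $$ \dot x=v\,,\qquad\dot v=\frac{{\kappa}}{m_p}(u(t,x)-v)\,, $$ along which each particle or droplet relaxes to the local gas velocity $u$ on the friction time scale $m_p/{\kappa}$. Since the friction field has velocity divergence $-3{\kappa}/m_p$, the distribution function is not constant along these curves, but satisfies $\tfrac{\mathrm{d}}{\mathrm{d} t}F(t,x(t),v(t))=\tfrac{3{\kappa}}{m_p}F(t,x(t),v(t))$.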
The last equation in the system above indicates that the gas flow is considered as incompressible\footnote{It is well known that the motion of a gas at a very low Mach number is governed by the equations of incompressible fluid mechanics, even though a gas is a compressible fluid. A formal justification for this fact can be found on pp. 11--12 in \cite{PLLFluidMech1}.}. The scalar pressure field $p\equiv p(t,x)\in\mathbf{R}$ is instantaneously coupled to the unknowns $F$ and $u$ by the Poisson equation $$ {\Delta}_xp={\kappa}\operatorname{div}_x\int_{\mathbf{R}^3}(v-u)F\,\mathrm{d} v\,. $$ The mathematical theory of the Vlasov-Stokes system has been discussed in \cite{Hamdache} --- see in particular section 6 there, which treats the case of a steady Stokes equation as above. Our purpose in the present work is to provide a rigorous derivation of this system from a more microscopic system. Derivations of the Stokes equation with a force term including the drag force exerted by the particles on the fluid (known as the Brinkman force) from a system consisting of a large number of particles immersed in a viscous fluid can be found in \cite{Allaire,DesvFGRicci08}. Both results are based on the method of homogenization of elliptic operators on domains with holes of finite capacity, pioneered by Khruslov and his school --- see for instance \cite{Khruslov,CioraMurat}. Unfortunately, this method assumes that the minimal distance between particles remains uniformly much larger than the particle radius $r\ll 1$. Specifically, this minimal distance is assumed in \cite{DesvFGRicci08} to be of the order of $r^{1/3}$ in space dimension $3$; this condition has been recently improved in \cite{Hillairet}. Such particle configurations are of vanishing probability as the particle number $N\to\infty$: see for instance Proposition 4 in \cite{Hauray}. 
Moreover, the question of propagating this separation condition by the particle dynamics seems open so far --- see however \cite{JabinOtto} for interesting ideas on a similar problem for a first order dynamics. For that reason, we have laid out in \cite{BDGR} a program for deriving dynamical equations for aerosol flows from a system of Boltzmann equations for the dispersed phase and the propellant viewed as a binary gas mixture. In \cite{BDGR}, we have given a complete formal derivation of the Vlasov-(incompressible) Navier-Stokes system from a scaled system of Boltzmann equations. We have identified the scaling leading to this system, which involves two small parameters. One is the mass ratio $\eta$ of the gas molecule to the particle in the dispersed phase. The other small parameter is the ratio ${\epsilon}$ of the thermal speed of the dispersed phase to the speed of sound in the gas. The assumption $\eta\ll 1$ implies that the impingement of gas molecules on the particles in the dispersed phase results in a slight deviation of these particles, and this accounts for the replacement of one of the collision integrals in the Boltzmann system by a Vlasov type term. The assumption ${\epsilon}\ll 1$ explains why a low Mach number approximation is adequate for the motion equation in the propellant. In particular, the velocity field in the propellant is approximately divergence free, and the motion equation in the gas is the same as in an incompressible fluid with constant density. However, a more intricate scaling is needed to derive the Vlasov-Stokes system above from the system of Boltzmann equations for a binary mixture.
If the ratio $\mu$ of the mass density of the propellant to the mass density of the dispersed phase is very small, and the thermal speed in the dispersed phase is much smaller than that of the propellant, one can hope that the friction force exerted by the dispersed phase on the propellant will slow down the gas, so that the Navier-Stokes motion equation can be replaced with a Stokes equation. Although this scenario sounds highly plausible, the asymptotic limit of the system of Boltzmann equations for a binary gas mixture leading to the Vlasov-Stokes system rests on a rather delicate tuning of the three small parameters ${\epsilon},\eta,\mu$, defined in the statement of our main result, Theorem \ref{theor}. The outline of this paper is as follows: section \ref{S-S2} introduces the system of Boltzmann equations for binary gas mixtures, identifies the scaling parameters involved in the problem, and presents two classes of Boltzmann type collision integrals describing the interaction between the dispersed phase and the propellant. Section \ref{S-S3} formulates a few (specifically, five) key abstract assumptions on the interaction between the dispersed phase and the propellant, which are verified by the models introduced in section \ref{S-S2}. The main result of the present paper, i.e. the derivation of the Vlasov-Stokes system from the system of Boltzmann equations for a binary gas mixture, is Theorem \ref{theor}, stated at the beginning of section \ref{S-S4}. The remaining part of section \ref{S-S4} is devoted to the proof of Theorem \ref{theor}. Obviously, the present paper shares many features with its companion \cite{BDGR} --- we have systematically used the same notation in both papers. However, the derivation of the Vlasov-Stokes system differs in places from that of the Vlasov-Navier-Stokes system in \cite{BDGR}.
For instance, some assumptions on the interaction between the propellant and the dispersed phase used in the present paper are slightly different from their analogues in \cite{BDGR}. We have therefore kept the repetitions between \cite{BDGR} and the present paper to a strict minimum. Only the part of the proof of Theorem \ref{theor} that is special to the derivation of the Vlasov-Stokes system is given in full detail. The reader is referred to \cite{BDGR} for all the arguments which have been already used in the derivation of the Vlasov-Navier-Stokes system. \section{Boltzmann Equations for Multicomponent Gases}\label{S-S2} Consider a binary mixture consisting of microscopic gas molecules and much bigger solid dust particles or liquid droplets. For simplicity, we henceforth assume that the dust particles or droplets are identical (in particular, the spray is monodisperse: all particles have the same mass), and that the gas is monatomic. We denote from now on by $F\equiv F(t,x,v)\ge 0$ the distribution function of dust particles or droplets, and by $f\equiv f(t,x,w)\ge 0$ the distribution function of gas molecules. These distribution functions satisfy the system of Boltzmann equations \begin{equation}\label{BoltzSys0} \begin{aligned} ({\partial}_t+v\cdot{\nabla}_x)F&=\mathcal{D}(F,f)+\mathcal{B}(F)\,, \\ ({\partial}_t+w\cdot{\nabla}_x)f&=\mathcal{R}(f,F)+\mathcal{C}(f)\,. \end{aligned} \end{equation} The terms $\mathcal{B}(F)$ and $\mathcal{C}(f)$ are the Boltzmann collision integrals for pairs of dust particles or liquid droplets and for gas molecules respectively. The terms $\mathcal{D}(F,f)$ and $\mathcal{R}(f,F)$ are Boltzmann type collision integrals describing the deflection of dust particles or liquid droplets subject to the impingement of gas molecules, and the slowing down of gas molecules by collisions with dust particles or liquid droplets respectively. 
Collisions between molecules are assumed to be elastic, and therefore satisfy the usual local conservation laws of mass, momentum and energy, while collisions between dust particles may not be perfectly elastic, so that $\mathcal{B}(F)$ satisfies only the local conservation of mass and momentum. Since collisions between gas molecules and particles preserve the nature of the colliding objects, the collision integrals $\mathcal{D}$ and $\mathcal{R}$ satisfy the local conservation laws of particle number per species and local balance of momentum. The local balance of energy is satisfied if the collisions between gas molecules and particles are elastic. The system (\ref{BoltzSys0}) is the starting point in our derivation of the Vlasov-Navier-Stokes system in \cite{BDGR}. We shall mostly follow the derivation in \cite{BDGR}, and shall insist only on the differences between the limit considered there and the derivation of the Vlasov-Stokes system studied in the present paper. \subsection{Dimensionless Boltzmann systems} We assume for simplicity that the aerosol is enclosed in a periodic box of size $L>0$, i.e. $x\in\mathbf{R}^3/L\mathbf{Z}^3$. The system of Boltzmann equations (\ref{BoltzSys0}) involves a large number of physical parameters, which are listed in the table below.
\bigskip \begin{center} \begin{tabular}{|c|c|} \hline \hspace{.2cm} Parameter \hspace{.2cm} & \hspace{.2cm} Definition \hspace{.2cm}\\ \hline \hline \hspace{.2cm} $L$ \hspace{.2cm} & \hspace{.2cm} size of the container (periodic box) \hspace{.2cm}\\ \hline \hspace{.2cm} $\mathcal{N}_p$ \hspace{.2cm} & \hspace{.2cm} number of particles$/L^3$ \hspace{.2cm}\\ \hline \hspace{.2cm} $\mathcal{N}_g$ \hspace{.2cm} & \hspace{.2cm} number of gas molecules$/L^3$ \hspace{.2cm}\\ \hline \hspace{.2cm} $V_p$ \hspace{.2cm} & \hspace{.2cm} thermal speed of particles \hspace{.2cm}\\ \hline \hspace{.2cm} $V_g$ \hspace{.2cm} & \hspace{.2cm} thermal speed of gas molecules \hspace{.2cm}\\ \hline \hspace{.2cm} $S_{pp}$ \hspace{.2cm} & \hspace{.2cm} average particle/particle cross-section \hspace{.2cm}\\ \hline \hspace{.2cm} $S_{pg}$ \hspace{.2cm} & \hspace{.2cm} average particle/gas cross-section \hspace{.2cm}\\ \hline \hspace{.2cm} $S_{gg}$ \hspace{.2cm} & \hspace{.2cm} average molecular cross-section \hspace{.2cm}\\ \hline \hspace{.2cm} $\eta=m_g/m_p$ \hspace{.2cm} & \hspace{.2cm} mass ratio (molecules/particles) \hspace{.2cm}\\ \hline \hspace{.2cm} $\mu=(m_g \mathcal{N}_g)/(m_p \mathcal{N}_p)$ \hspace{.2cm} & \hspace{.2cm} mass fraction (gas/dust or droplets) \hspace{.2cm}\\ \hline \hspace{.2cm} ${\epsilon}=V_p/V_g$ \hspace{.2cm} & \hspace{.2cm} thermal speed ratio (particles/molecules) \hspace{.2cm}\\ \hline \end{tabular} \end{center} \smallskip This table of parameters is the same as in \cite{BDGR}, except for the mass fraction $\mu$ which does not appear in \cite{BDGR}. \bigskip We first define a dimensionless position variable: $$ \hat x:=x/L\,, $$ together with dimensionless velocity variables for each species: $$ \hat v:=v/V_p\,,\quad \hat w:=w/V_g\,. $$ In other words, the velocity of each species is measured in terms of the thermal speed of the particles in the species under consideration. Next, we define a time variable, which is adapted to the slowest species, i.e. 
the dust particles or droplets: $$ \hat t:=tV_p/L\,. $$ Finally, we define dimensionless distribution functions for each particle species: $$ \hat F(\hat t,\hat x,\hat v):=V^3_pF(t,x,v)/\mathcal{N}_p\,,\qquad\hat f(\hat t,\hat x,\hat w):=V^3_gf(t,x,w)/\mathcal{N}_g\,. $$ The definition of dimensionless collision integrals is more complex and involves the average collision cross sections $S_{pp},S_{pg},S_{gg}$, whose definition is recalled below. The collision integrals $\mathcal{B}(F)$, $\mathcal{C}(f)$, $\mathcal{D}(F,f)$ and $\mathcal{R}(f,F)$ are given by expressions of the form \begin{equation} \label{Colli} \begin{aligned} \mathcal{B}(F)(v)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3}F(v')F(v'_*)\Pi_{pp}(v,\mathrm{d} v'\,\mathrm{d} v'_*) \\ &-F(v)\int_{\mathbf{R}^3}F(v_*)|v-v_*|{\Sigma}_{pp}(|v-v_*|)\,\mathrm{d} v_*\,, \\ \mathcal{C}(f)(w)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3}f(w')f(w'_*)\Pi_{gg}(w,\mathrm{d} w'\,\mathrm{d} w'_*) \\ &-f(w)\int_{\mathbf{R}^3}f(w_*)|w-w_*|{\Sigma}_{gg}(|w-w_*|)\,\mathrm{d} w_*\,, \\ \mathcal{D}(F,f)(v)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3}F(v')f(w')\Pi_{pg}(v,\mathrm{d} v'\,\mathrm{d} w') \\ &-F(v)\int_{\mathbf{R}^3}f(w)|v-w|{\Sigma}_{pg}(|v-w|)\,\mathrm{d} w\,, \\ \mathcal{R}(f,F)(w)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3}F(v')f(w')\Pi_{gp}(w,\mathrm{d} v'\,\mathrm{d} w') \\ &-f(w)\int_{\mathbf{R}^3}F(v)|v-w|{\Sigma}_{pg}(|v-w|)\,\mathrm{d} v\,. \end{aligned} \end{equation} In these expressions, $\Pi_{pp},\Pi_{gg},\Pi_{pg},\Pi_{gp}$ are nonnegative, measure-valued measurable functions defined a.e. on $\mathbf{R}^3$, while ${\Sigma}_{pp},{\Sigma}_{gg},{\Sigma}_{pg}$ are nonnegative measurable functions defined a.e. on $\mathbf{R}_+$. This setting is the same as in \cite{BDGR}, and is taken from chapter 1 in \cite{Landau10} (see in particular formula (3.6) there). 
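As a side remark, the rescaling of the distribution functions above can be sanity-checked numerically: if $F$ is a Maxwellian with number density $\mathcal{N}_p$ and thermal speed $V_p$, then $\hat F$ is the reduced centered Gaussian, independently of $\mathcal{N}_p$ and $V_p$. A minimal sketch in Python (illustrative only; the sample values are arbitrary and not from the paper):

```python
import math

# Check (illustrative): F_hat(v_hat) = V_p^3 F(V_p v_hat) / N_p maps a
# Maxwellian with number density N_p and thermal speed V_p to the unit
# Maxwellian (2 pi)^{-3/2} exp(-|v_hat|^2 / 2), for any N_p, V_p.
N_p, V_p = 2.5e8, 0.3   # arbitrary sample values (assumptions)

def F(v):
    # dimensional isotropic Maxwellian, v = |velocity|
    return N_p * (2 * math.pi * V_p**2) ** -1.5 * math.exp(-v**2 / (2 * V_p**2))

def F_hat(v_hat):
    # dimensionless distribution, as defined in the text
    return V_p**3 * F(V_p * v_hat) / N_p

def M(v_hat):
    # unit Maxwellian
    return (2 * math.pi) ** -1.5 * math.exp(-v_hat**2 / 2)

for s in (0.0, 0.7, 1.9):
    assert abs(F_hat(s) - M(s)) < 1e-12
```

In particular, the hatted distribution carries unit mass, consistently with the normalization by $\mathcal{N}_p$.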
The relation between the quantities $\Pi$ and ${\Sigma}$ is the following: \begin{equation}\label{Colli2} \begin{aligned} \int_{\mathbf{R}^3}\Pi_{pp}(v,\mathrm{d} v'\,\mathrm{d} v'_*) \, \mathrm{d} v = |v'-v'_*|{\Sigma}_{pp}(|v'-v'_*|)\mathrm{d} v'\,\mathrm{d} v'_*, \\ \int_{\mathbf{R}^3} \Pi_{gg}(w,\mathrm{d} w'\,\mathrm{d} w'_*) \, \mathrm{d} w = |w'-w'_*|{\Sigma}_{gg}(|w'-w'_*|)\mathrm{d} w'\,\mathrm{d} w'_*, \\ \int_{\mathbf{R}^3} \Pi_{pg}(v,\mathrm{d} v'\,\mathrm{d} w') \, \mathrm{d} v = |v'-w'|{\Sigma}_{pg}(|v'-w'|)\mathrm{d} v'\,\mathrm{d} w', \\ \int_{\mathbf{R}^3} \Pi_{gp}(w,\mathrm{d} v'\,\mathrm{d} w') \, \mathrm{d} w = |v'-w'|{\Sigma}_{pg}(|v'-w'|)\mathrm{d} v'\,\mathrm{d} w'. \end{aligned} \end{equation} The dimensionless quantities associated to ${\Sigma}_{pp},{\Sigma}_{gg}$ and ${\Sigma}_{pg}$ are ($i,j=p,g$) $$ \begin{aligned} \hat{\Sigma}_{ii}(|\hat z|)&={\Sigma}_{ii}(V_i|\hat z|)/S_{ii}\,, \\ \hat{\Sigma}_{ij}(|\hat z|)&={\Sigma}_{ij}(V_j|\hat z|)/S_{ij}\,. \end{aligned} $$ Likewise $$ \begin{aligned} \hat\Pi_{pp}(\hat v,\mathrm{d}\hat v'\,\mathrm{d}\hat v'_*)&=\Pi_{pp}(v,\mathrm{d} v'\,\mathrm{d} v'_*)/S_{pp}V_p^4\,, \\ \hat\Pi_{gg}(\hat w,\mathrm{d}\hat w'\,\mathrm{d}\hat w'_*)&=\Pi_{gg}(w,\mathrm{d} w'\,\mathrm{d} w'_*)/S_{gg}V_g^4\,, \\ \hat\Pi_{pg}(\hat v,\mathrm{d}\hat v'\,\mathrm{d}\hat w')&=\Pi_{pg}(v,\mathrm{d} v'\,\mathrm{d} w')/S_{pg}V_g^4\,, \\ \hat\Pi_{gp}(\hat w,\mathrm{d}\hat v'\,\mathrm{d}\hat w')&=\Pi_{gp}(w,\mathrm{d} v'\,\mathrm{d} w')/S_{pg}V_gV_p^3\,. 
\end{aligned} $$ With the dimensionless quantities so defined, we arrive at the following dimensionless form of the multicomponent Boltzmann system: \begin{equation}\label{BoltzSys} \left\{ \begin{aligned} {}&{\partial}_{\hat t}\hat F\,+\,\hat v\cdot{\nabla}_{\hat x}\hat F\,=\mathcal{N}_gS_{pg}L\frac{V_g}{V_p}\hat\mathcal{D}(\hat F,\hat f)+\mathcal{N}_pS_{pp}L\hat\mathcal{B}(\hat F)\,, \\ &{\partial}_{\hat t}\hat f\!+\!\frac{V_g}{V_p}\hat w\!\cdot\!{\nabla}_{\hat x}\hat f=\mathcal{N}_pS_{pg}L\frac{V_g}{V_p}\hat\mathcal{R}(\hat f,\hat F)+\mathcal{N}_gS_{gg}L\frac{V_g}{V_p}\hat\mathcal{C}(\hat f)\,. \end{aligned} \right. \end{equation} Throughout the present study, we shall always assume that \begin{equation}\label{NoppColl} \mathcal{N}_pS_{pp}L\ll 1\,, \end{equation} so that the collision integral for dust particles or droplets $\mathcal{N}_pS_{pp}L\hat\mathcal{B}(\hat F)$ is formally negligible (and will no longer appear in the equations). Besides, the thermal speed $V_p$ of dust particles or droplets is in general smaller than the thermal speed $V_g$ of gas molecules; thus we denote their ratio by \begin{equation}\label{Def-eps} {\epsilon}=\frac{V_p}{V_g}\in[0,1]\,. \end{equation} Recalling that the mass ratio $\eta=m_g/m_p\in[0,1]$ is assumed to be extremely small, since the particles are usually much heavier than the molecules, we also assume \begin{equation}\label{Def-eta} \frac{\eta}{\mu}=\frac{\mathcal{N}_p}{\mathcal{N}_g}\in[0,1]\,, \end{equation} where $\mu$ is the mass fraction of the gas with respect to the droplets, which is also assumed to be extremely small. This hypothesis on the mass ratio gives a scaling such that the mass density of the gas is very small with respect to the mass density of the dispersed phase. Finally, in the sequel (cf. eq.
(\ref{kifu})), we shall assume that $$ \mathcal{N}_p\,S_{pg}\, L =\frac{{\epsilon}}{\mu}\quad\hbox{ and }\quad\mathcal{N}_g\,S_{gg}\, L = \frac{\mu}{{\epsilon}}\,,\qquad\quad\hbox{ where }{\epsilon}\ll\mu\ll 1\,. $$ With these assumptions, one has $$ \begin{aligned} (\mathcal{N}_gS_{pg}L)\frac{V_g}{V_p}=(\frac{\mathcal{N}_g}{\mathcal{N}_p})(\mathcal{N}_pS_{pg}L)(\frac{V_g}{V_p})=\frac1\eta\,, \\ \\ (\mathcal{N}_pS_{pg}L)(\frac{V_g}{V_p})=\frac1\mu\,, \quad (\mathcal{N}_gS_{gg}L)(\frac{V_g}{V_p})=\frac{\mu}{{\epsilon}^2}\,, \\ \\ \mathcal{N}_pS_{pp}L\ll 1\,, \end{aligned} $$ so that we arrive at the scaled system: \begin{equation}\label{BoltzSysSc} \left\{ \begin{aligned} {}&{\partial}_{\hat t}\hat F\,+\,\hat v\cdot{\nabla}_{\hat x}\hat F\,=\frac{1}{\eta}\hat\mathcal{D}(\hat F,\hat f)\,, \\ &{\partial}_{\hat t}\hat f+\frac{1}{{\epsilon}}\hat w\cdot{\nabla}_{\hat x}\hat f=\frac1\mu\hat\mathcal{R}(\hat f,\hat F)+\frac{\mu}{{\epsilon}^2}\hat\mathcal{C}(\hat f)\,. \end{aligned} \right. \end{equation} Henceforth, we drop hats on all dimensionless quantities and variables introduced in this section. Only dimensionless variables, distribution functions and collision integrals will be considered from now on. We also use $V,W$ as variables in the positive part of the collision operators $\mathcal{D}$ and $\mathcal{R}$, in order to avoid confusions. 
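The algebra behind these three coefficients can be checked with exact rational arithmetic; the following Python sketch (illustrative only, with arbitrary admissible sample values for ${\epsilon},\mu,\eta$) verifies the identities $1/\eta$, $1/\mu$ and $\mu/{\epsilon}^2$:

```python
from fractions import Fraction as Fr

# Arbitrary sample values with eps << mu << 1 and eta small (assumptions).
eps, mu, eta = Fr(1, 1000), Fr(1, 10), Fr(1, 50)

Np_Spg_L = eps / mu      # assumption: N_p S_pg L = eps / mu
Ng_Sgg_L = mu / eps      # assumption: N_g S_gg L = mu / eps
Vg_over_Vp = 1 / eps     # since eps = V_p / V_g
Ng_over_Np = mu / eta    # since eta / mu = N_p / N_g

# coefficient of D(F, f) in the particle equation
assert Ng_over_Np * Np_Spg_L * Vg_over_Vp == 1 / eta
# coefficient of R(f, F) in the gas equation
assert Np_Spg_L * Vg_over_Vp == 1 / mu
# coefficient of C(f) in the gas equation
assert Ng_Sgg_L * Vg_over_Vp == mu / eps**2
```

Since the computation is exact, the identities hold for every admissible choice of the three small parameters, not just the sampled ones.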
We define therefore the (${\epsilon}$- and $\eta$-dependent) dimensionless collision integrals \begin{equation} \label{newc} \begin{aligned} \mathcal{C}(f)( w)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3}f(w') f(w'_*) \Pi_{gg}(w,\mathrm{d} w'\,\mathrm{d} w'_*) \\ &- f(w)\int_{\mathbf{R}^3} f(w_*)| w- w_*| {\Sigma}_{gg}(|w- w_*|)\,\mathrm{d} w_*\,, \end{aligned} \end{equation} \begin{equation} \label{newd} \begin{aligned} \mathcal{D}( F, f)( v)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3} F(V)f(W)\Pi_{pg}( v,\mathrm{d} V\,\mathrm{d} W) \\ &- F(v)\int_{\mathbf{R}^3} f(w)\left|{\epsilon} v- w\right|{\Sigma}_{pg}\left(\left|{\epsilon} v- w\right|\right)\,\mathrm{d} w\,, \end{aligned} \end{equation} \begin{equation} \label{newr} \begin{aligned} \mathcal{R}( f, F)( w)=&\iint_{\mathbf{R}^3\times\mathbf{R}^3} F(V) f(W) \Pi_{gp}( w,\mathrm{d} V\,\mathrm{d} W) \\ &- f( w)\int_{\mathbf{R}^3} F(v)\left|{\epsilon} v- w\right| {\Sigma}_{pg}\left(\left|{\epsilon} v-w\right|\right)\,\mathrm{d} v\,, \end{aligned} \end{equation} with ${\Sigma}_{gg}$, ${\Sigma}_{pg}$ satisfying (\ref{Colli2}). Notice that $\Pi_{pg}$ and $\Pi_{gp}$ depend in fact on ${\epsilon}$ and $\eta$, and will sometimes be denoted by $\Pi_{pg}^{{\epsilon},\eta}$ and $\Pi_{gp}^{{\epsilon},\eta}$, whenever needed. With the notation defined above, the scaled Boltzmann system (\ref{BoltzSysSc}) is then recast as: \begin{equation}\label{BoltzSysSc2} \left\{ \begin{aligned} {}&{\partial}_t F\,+\,v\cdot{\nabla}_x F\,=\frac{1}{\eta}\mathcal{D}(F,f)\,, \\ &{\partial}_t f+\frac{1}{{\epsilon}}w\cdot{\nabla}_x f=\frac1\mu\mathcal{R}(f, F)+\frac{\mu}{{\epsilon}^2}\mathcal{C}(f)\,. \end{aligned} \right. \end{equation} At this point, it may be worthwhile explaining the difference between the scalings considered in the present paper and in \cite{BDGR}. In \cite{BDGR}, we implicitly assumed that $\mu=1$, while we have assumed in the present paper that $\mu\ll 1$. 
When $\mu\ll 1$, the density of the dispersed phase is much higher than that of the propellant, and since ${\epsilon}\ll 1$, the thermal speed of the dispersed phase is much smaller than that of the gas. Therefore the dispersed phase slows down the motion of gas molecules, so that the Reynolds number in the gas becomes small and the material derivative in the Navier-Stokes equation becomes negligible. This qualitative argument explains why the scaling considered in the present paper leads to a steady Stokes equation in the gas, while the assumption $\mu=1$ as in \cite{BDGR} leads to a Navier-Stokes equation. \subsection{The explicit form of collision integrals in two physical situations} \subsubsection{The Boltzmann collision integral for gas molecules} The dimensionless collision integral $\mathcal{C}(f)$ is given by the formula \begin{equation}\label{cc1} \mathcal{C}(f)(w)=\iint_{\mathbf{R}^3\times\mathbf{S}^2}(f(w')f(w'_*)-f(w)f(w_*))c(w-w_*,{\omega})\,\mathrm{d} w_*\mathrm{d} {\omega} , \end{equation} for each measurable $f$ defined a.e. on $\mathbf{R}^3$ and rapidly decaying at infinity, where \begin{equation}\label{cc2} \begin{aligned} w'\equiv\,w'(w,w_*,{\omega}):=w\,-(w-w_*)\cdot{\omega}\om\,, \\ w'_*\equiv\!w'_*(w,w_*,{\omega}):=w_*\!+(w-w_*)\cdot{\omega}\om\,, \end{aligned} \end{equation} (see formulas (3.11) and (4.16) in chapter II of \cite{Cerci75}). The collision kernel $c$ is of the form \begin{equation}\label{cc3} c(w-w_*,{\omega})=|w-w_*|{\sigma}_{gg}(|w-w_*|,|\cos(\widehat{w-w_*,{\omega}})|), \end{equation} where ${\sigma}_{gg}$ is the dimensionless differential cross-section of gas molecules. In other words, $$ {\Sigma}_{gg}(|z|)=4\pi\int_0^1{\sigma}_{gg}(|z|,\mu)\,\mathrm{d}\mu\,, $$ while \begin{equation} \label{Pigg} \Pi_{gg}(w,\mathrm{d} W\,\mathrm{d} W_*)=\iint_{\mathbf{R}^3\times\mathbf{S}^2}{\delta}_{w'(w,w_*,{\omega})}\otimes{\delta}_{w'_*(w,w_*,{\omega})}c(w-w_*,{\omega})\,\mathrm{d} w_*\mathrm{d}{\omega}\,. 
\end{equation} This last formula is to be understood as explained in section 2.3.1 of \cite{BDGR}. Specifically, for each test function $\phi\equiv\phi(W,W_*)\in C_b(\mathbf{R}^3\times\mathbf{R}^3)$, $$ \begin{aligned} \iint_{\mathbf{R}^3\times\mathbf{R}^3}\phi(W,W_*)\Pi_{gg}(w,\mathrm{d} W\,\mathrm{d} W_*)& \\ =\iint_{\mathbf{R}^3\times\mathbf{S}^2}\phi(w'(w,w_*,{\omega}),w'_*(w,w_*,{\omega}))c(w-w_*,{\omega})\,\mathrm{d} w_*\mathrm{d}{\omega}&\,. \end{aligned} $$ \smallskip We also assume that the molecular interaction is defined in terms of a hard potential satisfying Grad's cutoff assumption. In other words, we assume: \medskip \noindent \textbf{Assumption A1}: There exist $c_*>1$ and ${\gamma}\in[0,1]$ such that \begin{equation}\label{cc4} \begin{aligned} {}&0\,<\,c(z,{\omega})\,\le\, c_*(1+|z|)^{\gamma}\,,&&\quad\hbox{ for a.e. }(z,{\omega})\in\mathbf{R}^3\times\mathbf{S}^2\,, \\ &\int_{\mathbf{S}^2}c(z,{\omega})\,\mathrm{d}{\omega}\ge\frac1{c_*}\frac{|z|}{1+|z|}\,,&&\quad\hbox{ for a.e. }z\in\mathbf{R}^3\,. \end{aligned} \end{equation} \medskip We next review in detail the properties of the linearized collision integral $\mathcal{L}$, obtained by linearizing $\mathcal{C}$ about a uniform Maxwellian $M$ and defined by the formula \begin{equation}\label{defL} \mathcal{L}\phi:=-M^{-1}D\mathcal{C}(M)\cdot(M\phi)\,, \end{equation} where $D$ designates the (formal) Fr\'echet derivative. Without loss of generality, one can choose the uniform Maxwellian to be given by the expression \begin{equation}\label{maxw} M(w):=\tfrac1{(2\pi)^{3/2}}e^{-|w|^2/2}\,, \end{equation} after some Galilean transformation eliminating the mean velocity of $M$, and some appropriate choice of units so that the temperature and pressure associated to this Maxwellian state are both equal to $1$. We recall the following theorem, due to Hilbert (in the case of hard spheres) and Grad (in the case of hard cutoff potentials): see Theorem I on p.186 and Theorem II on p.187 in \cite{Cerci75}.
\begin{theorem} The linearized collision integral $\mathcal{L}$ is an unbounded operator on $L^2(M\mathrm{d} w)$ with domain $$ \operatorname{Dom}\mathcal{L}=L^2((\bar c\star M)^2 M\mathrm{d} w)\,,\quad\hbox{ where }\bar c(z):=\int_{\mathbf{S}^2}c(z,{\omega})d{\omega}\,. $$ Moreover, $\mathcal{L}=\mathcal{L}^*\ge 0$, with nullspace \begin{equation}\label{null} \operatorname{Ker}\mathcal{L}=\operatorname{span}\{1,w_1,w_2,w_3,|w|^2\}. \end{equation} Finally, $\mathcal{L}$ is a Fredholm operator, so that $$ \operatorname{Im}\mathcal{L}=\operatorname{Ker}\mathcal{L}^\bot\,. $$ \end{theorem} \medskip Defining \begin{equation}\label{defA} A(w):=w\otimes w-\tfrac13|w|^2I \end{equation} the traceless part of the quadratic tensor field $w\otimes w$, elementary computations show that $$ A\bot\operatorname{Ker}\mathcal{L}\quad\hbox{ in }L^2(M\mathrm{d} w)\,. $$ Hence, by the Fredholm alternative applied to $\mathcal{L}$, there exists a unique $\tilde A\in\operatorname{Dom}\mathcal{L}$ such that \begin{equation}\label{defAtilde} \mathcal{L}\tilde A=A\,,\qquad\tilde A\bot\operatorname{Ker}\mathcal{L}\,. \end{equation} Using that the linearized collision integral and the tensor field $A$ are equivariant under the action of the orthogonal group, one finds that the matrix field $\tilde A$ is in fact of the form \begin{equation}\label{defalpha} \tilde A(w)={\alpha}(|w|)A(w)\,. \end{equation} See \cite{dego} or Appendix 2 of \cite{GoB}. In the sequel, we shall present results which are specific to the case when both ${\alpha}$ and its derivative ${\alpha}'$ are bounded. More precisely, we make the following assumption. \medskip \noindent \textbf{Assumption A2}: There exists a positive constant $C$ such that $$ |\tilde A(w)| \le C (1 + |w|^2)\,,\quad\hbox{ and }\quad|\nabla \tilde A(w)| \le C (1 + |w|^2)\,.
$$ \medskip We recall that, in the case of Maxwell molecules, that is, for a collision kernel $c$ of the form $$ c(z,{\omega})=C(|\cos(\widehat{z,{\omega}})|) $$ the scalar ${\alpha}$ is a positive constant. \subsubsection{The collision integrals $\mathcal{D}$ and $\mathcal{R}$ for elastic collisions}\label{sec232} For each measurable $F$ and $f$ defined a.e. on $\mathbf{R}^3$ and rapidly decaying at infinity, the dimensionless collision integrals $\mathcal{D}(F,f)$ and $\mathcal{R}(f,F)$ are given by the formulas $$ \begin{aligned} \mathcal{D}(F,f)(v)&=\iint_{\mathbf{R}^3\times\mathbf{S}^2}(F(v'')f(w'')\!-\!F(v)f(w))b({\epsilon} v-w,{\omega})\,\mathrm{d} w\mathrm{d}{\omega}\,, \\ \mathcal{R}(f,F)(w)&=\iint_{\mathbf{R}^3\times\mathbf{S}^2}(f(w'')F(v'')\!-\!f(w)F(v))b({\epsilon} v-w,{\omega})\,\mathrm{d} v\mathrm{d}{\omega}\,, \end{aligned} $$ where \begin{equation}\label{ela1} \begin{aligned} {}&v''\equiv v''(v,w,{\omega})\,:=v\,-\frac{2\eta}{1+\eta}\!\left(v-\!\frac1{\epsilon} w\!\right)\!\cdot{\omega}\om\,, \\ &w''\!\equiv w''(v,w,{\omega})\!:=w-\frac{2}{1+\eta}\,\,(\,w-{\epsilon} v)\cdot{\omega}\om\,, \end{aligned} \end{equation} (see formula (5.10) in chapter II of \cite{Cerci75}). The collision kernel $b$ is of the form \begin{equation}\label{ela2} b({\epsilon} v-w,{\omega})=|{\epsilon} v-w|{\sigma}_{pg}(|{\epsilon} v-w|,|\cos(\widehat{{\epsilon} v-w,{\omega}})|), \end{equation} where ${\sigma}_{pg}$ is the dimensionless differential cross-section for collisions between gas molecules and the particles of the dispersed phase.
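Note that the pre-to-post collision map (\ref{ela1}) conserves the scaled momentum $v+(\eta/{\epsilon})\,w$ and the scaled kinetic energy $|v|^2+(\eta/{\epsilon}^2)|w|^2$, which correspond to the physical momentum $m_pV_pv+m_gV_gw$ and kinetic energy. This can be checked numerically; a short Python sketch (illustrative only, with arbitrary sample values of ${\epsilon}$ and $\eta$):

```python
import random

# Illustrative check: the elastic collision rules (ela1) conserve the
# scaled momentum v + (eta/eps) w and energy |v|^2 + (eta/eps^2) |w|^2.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def post_collision(v, w, omega, eps, eta):
    # s = (eps v - w) . omega, the relative-velocity component along omega
    s = dot([eps * vi - wi for vi, wi in zip(v, w)], omega)
    v2 = [vi - 2 * eta / (1 + eta) * (s / eps) * oi for vi, oi in zip(v, omega)]
    w2 = [wi + 2 / (1 + eta) * s * oi for wi, oi in zip(w, omega)]
    return v2, w2

random.seed(0)
eps, eta = 0.01, 1e-4   # arbitrary sample values (assumptions)
for _ in range(100):
    v = [random.gauss(0, 1) for _ in range(3)]
    w = [random.gauss(0, 1) for _ in range(3)]
    n = [random.gauss(0, 1) for _ in range(3)]
    r = dot(n, n) ** 0.5
    omega = [x / r for x in n]   # random unit vector on S^2
    v2, w2 = post_collision(v, w, omega, eps, eta)
    mom = [a + eta / eps * b for a, b in zip(v, w)]
    mom2 = [a + eta / eps * b for a, b in zip(v2, w2)]
    assert all(abs(p - q) < 1e-8 * (1 + abs(p)) for p, q in zip(mom, mom2))
    en = dot(v, v) + eta / eps**2 * dot(w, w)
    en2 = dot(v2, v2) + eta / eps**2 * dot(w2, w2)
    assert abs(en - en2) < 1e-8 * (1 + abs(en))
```

Both invariants are preserved to rounding error for every sampled collision configuration.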
The corresponding total cross-section is \begin{equation}\label{ela3} {\Sigma}_{pg}(|z|)=4\pi\int_0^1{\sigma}_{pg}(|z|,\mu)\,\mathrm{d}\mu\,, \end{equation} while \begin{equation}\label{ela4} \begin{aligned} \Pi_{pg}(v,\mathrm{d} V\mathrm{d} W)&=\iint_{\mathbf{R}^3\times\mathbf{S}^2}{\delta}_{v''(v,w,{\omega})}\otimes{\delta}_{w''(v,w,{\omega})}b({\epsilon} v-w,{\omega})\mathrm{d} w\mathrm{d}{\omega}\,, \\ \Pi_{gp}(w,\mathrm{d} V\mathrm{d} W)\!&=\iint_{\mathbf{R}^3\times\mathbf{S}^2}{\delta}_{v''(v,w,{\omega})}\otimes{\delta}_{w''(v,w,{\omega})}b({\epsilon} v-w,{\omega})\mathrm{d} v\mathrm{d}{\omega}\,, \end{aligned} \end{equation} where the equalities (\ref{ela4}) are to be understood in the same way as (\ref{Pigg}). As in the case of the molecular collision kernel $c$, we assume that $b$ is a cutoff kernel associated with a hard potential, i.e. we assume that there exist $b_*>1$ and ${\beta}^*\in[0,1]$ such that \begin{equation}\label{ela5} \begin{aligned} {}&0<b(z,{\omega})\le b_*(1+|z|)^{{\beta}^*}\,,&&\quad\hbox{ for a.e. }(z,{\omega})\in\mathbf{R}^3\times\mathbf{S}^2\,, \\ &\int_{\mathbf{S}^2}b(z,{\omega})\,\mathrm{d}{\omega}\ge\frac1{b_*}\frac{|z|}{1+|z|}\,,&&\quad\hbox{ for a.e. }z\in\mathbf{R}^3\,. \end{aligned} \end{equation} We also assume that, for any $p>3$, \begin{equation} \label{colliel} \iiint |b({\epsilon} v - w,\omega) - b(w, \omega)| (1+ |v|^2 + |w|^2) M(w) (1+|v|^2)^{-p} d\omega dvdw = O({\epsilon}). \end{equation} This assumption is satisfied as soon as the angular cutoff in the hard potential is smooth, or in the case of hard sphere collisions. \subsubsection{An inelastic model of collision integrals $\mathcal{D}$ and $\mathcal{R}$}\label{sec233} Dust particles or droplets are macroscopic objects when compared to gas molecules. This suggests using the classical models of gas-surface interactions to describe the impingement of gas molecules on dust particles or droplets. The simplest such model of collisions has been introduced by F.
Charles in \cite{FCharlesRGD08}, with a detailed discussion in section 1.3 of \cite{FCharlesPhD} and in \cite{FCharlesSDelJSeg}. We briefly recall this model below. First, the (dimensional) particle-molecule cross-section is $$ S_{pg}=\pi(r_g+r_p)^2, $$ where $r_g$ is the molecular radius and $r_p$ the radius of dust particles or droplets. In other words, we assume that the collisions between gas molecules and the particles in the dispersed phase are hard sphere collisions. Then, the dimensionless particle-molecule cross-section is $$ {\Sigma}_{pg}(|{\epsilon} v-w|)=1\,. $$ The formulas for $S_{pg}$ and ${\Sigma}_{pg}$ correspond to a binary collision between two balls of radii $r_p$ and $r_g$. Next, the measure-valued functions $\Pi_{pg}$ and $\Pi_{gp}$ are defined by the following formulas: \begin{equation}\label{ine1} \begin{aligned} \Pi_{pg}(v,\mathrm{d} V\,\mathrm{d} W)&:=K_{pg}(v,V,W)\,\mathrm{d} V\mathrm{d} W\,, \\ \Pi_{gp}(w,\mathrm{d} V\,\mathrm{d} W)&:=K_{gp}(w,V,W)\,\mathrm{d} V\mathrm{d} W\,, \end{aligned} \end{equation} where \begin{equation}\label{ine2} \begin{aligned} K_{pg}(v,V,W):&=\tfrac1{2\pi^2}\left(\tfrac{1+\eta}\eta\right)^4{\beta}^4{\epsilon}^3\exp\left(-\tfrac12{\beta}^2\left(\tfrac{1+\eta}\eta\right)^2\left|{\epsilon} v-\frac{{\epsilon} V+\eta W}{1+\eta}\right|^2\right) \\ &\qquad\times\int_{\mathbf{S}^2}(n\cdot({\epsilon} V-W))_+\left(n\cdot\left(\frac{{\epsilon} V+\eta W}{1+\eta}-{\epsilon} v\right)\right)_+dn\,, \end{aligned} \end{equation} \begin{equation}\label{ine3} \begin{aligned} K_{gp}(w,V,W)&:=\tfrac1{2\pi^2}(1+\eta)^4{\beta}^4\exp\left(-\tfrac12{\beta}^2(1+\eta)^2\left|w-\frac{{\epsilon} V+\eta W}{1+\eta}\right|^2\right) \\ &\qquad\times\int_{\mathbf{S}^2}(n\cdot({\epsilon} V-W))_+\left(n\cdot\left(w-\frac{{\epsilon} V+\eta W}{1+\eta}\right)\right)_+dn\,.
\end{aligned} \end{equation} In the formulas above, $$ \beta=\sqrt{\frac{m_g}{2 k_B T_{surf}}}\,, $$ where $k_B$ is the Boltzmann constant and $T_{surf}$ the surface temperature of the particles. \section{Hypotheses on $\Pi_{pg}$ and $\Pi_{gp}$}\label{S-S3} In the sequel, we shall provide a theorem which holds for all diffusion and friction operators satisfying a few assumptions described below. We recall that $\Pi_{pg}$ and $\Pi_{gp}$ are nonnegative measure-valued functions of the variable $v\in\mathbf{R}^3$ and of the variable $w\in\mathbf{R}^3$ respectively, which depend in general on ${\epsilon}$ and $\eta$ (cf. formulas (\ref{ela1}), (\ref{ela4}), and (\ref{ine1}) -- (\ref{ine3})). We do not systematically mention this dependence, unless absolutely necessary, as in Assumptions (H4)-(H5) below. In that case, we write $\Pi_{pg}^{{\epsilon}, \eta}$ and $\Pi_{gp}^{{\epsilon}, \eta}$ instead of $\Pi_{pg}$ and $\Pi_{gp}$, as already explained. \medskip \noindent \textbf{Assumption (H1)}: There exists a nonnegative function $q\equiv q(r)$, such that $q(r)\le C(1+r)$ for some constant $C>0$, and such that the measure-valued functions $\Pi_{pg}$ and $\Pi_{gp}$ satisfy $$ \int_{\mathbf{R}^3}\Pi_{pg}(v,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} v=\int_{\mathbf{R}^3}\Pi_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w=q(|{\epsilon} V-W|)\,\mathrm{d} V\mathrm{d} W\,. $$ Observe that Assumption (H1) is consistent with the fact that the same cross-section ${\Sigma}_{pg}$ appears in the last two lines of (\ref{Colli2}). In particular, (H1) implies the local conservation law of mass.
\medskip \noindent \textbf{Assumption (H2)}: There exists a nonnegative function $Q\equiv Q(r)$ in $C^1(\mathbf{R}_+^*)$ such that $Q(r) + |Q'(r)|\le C(1+r)$ for some constant $C>0$, and such that the measure-valued functions $\Pi_{pg}$ and $\Pi_{gp}$ satisfy $$ \begin{aligned} {\epsilon}\int_{\mathbf{R}^3}(v-V)\Pi_{pg}(v,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} v&=-\eta\int_{\mathbf{R}^3}(w-W)\Pi_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w \\ &=-\frac{\eta}{1+\eta}({\epsilon} V-W)Q(|{\epsilon} V-W|)\mathrm{d} V\mathrm{d} W\,. \end{aligned} $$ This assumption implies the conservation of momentum between molecules and particles. \medskip \noindent \textbf{Assumption (H3)}: There exists a constant $C>0$ such that the measure-valued function $\Pi_{pg}$ satisfies $$ \int_{\mathbf{R}^3}\left|{\epsilon} v-\frac{{\epsilon} V+\eta W}{1+\eta}\right|^2\Pi_{pg}(v,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} v\le C\,\eta^2\,(1+|{\epsilon} V-W|^2)q(|{\epsilon} V-W|)\,\mathrm{d} V\mathrm{d} W\,, $$ where $q$ is the function in Assumption (H1). \medskip \noindent \textbf{Assumption (H4)}: In the limit as $({\epsilon},\eta)\to(0,0)$, one has $\Pi^{{\epsilon},\eta}_{gp}(w,\cdot)\to\Pi^{0,0}_{gp}(w,\cdot)$ weakly in the sense of probability measures for a.e. $w\in\mathbf{R}^3$, and the limiting measure $\Pi^{0,0}_{gp}$ satisfies the following invariance condition: \begin{equation}\label{InvPi00} \mathcal{T}_R\#\Pi^{0,0}_{gp}=\Pi^{0,0}_{gp}\quad\hbox{ for each }R\in O_3(\mathbf{R})\,, \end{equation} where \begin{equation}\label{defTR} \mathcal{T}_R:\,(w,V,W)\mapsto(Rw,V,RW)\,.
\end{equation} Besides, for each $p>3$ and each $\Phi := \Phi(w,W)$ in $C^1(\mathbf{R}^3\times\mathbf{R}^3)$ such that $$ |\Phi(w,W)|+|\nabla_w\Phi(w,W)|\le C(1+|w|^2+|W|^2)M(W) $$ for some $C>0$, one has $$ \begin{aligned} \int_{\mathbf{R}^3}(1+ |V|^2)^{-p}\left|\iint_{\mathbf{R}^3\times\mathbf{R}^3}\Phi(w,W)(\Pi^{{\epsilon},\eta}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w-\Pi^{0,0}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w)\right| \\ =O({\epsilon}+\eta) \end{aligned} $$ as $({\epsilon},\eta)\to 0$. \medskip \noindent \textbf{Assumption (H5)}: There exists a positive constant $C>0$ (independent of ${\epsilon},\eta$) such that, for all $({\epsilon},\eta)$ close to $(0,0)$ and for all $h \in L^2(M(w)\mathrm{d} w)$, $$ \begin{aligned} \iiint_{\mathbf{R}^3\times\mathbf{R}^3\times\mathbf{R}^3}\frac{(1+|W|^2)}{(1+|V|^2)^p}M(W)|h(W)|(1+|w|^2)\Pi^{{\epsilon},\eta}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w& \\ \le C\|h\|_{L^2(M(w)\mathrm{d} w)}&\,. \end{aligned} $$ \medskip Assumptions (H1)-(H3) and (H5) are the same as the assumptions introduced in section 3 of \cite{BDGR}. Assumption (H4) differs from its counterpart in \cite{BDGR}: while the asymptotic invariance condition (\ref{InvPi00}) is the same as in assumption (H4) in section 3 of \cite{BDGR}, the second part of (H4) in the present paper postulates a convergence rate $O({\epsilon}+\eta)$, whereas the second part of assumption (H4) in \cite{BDGR} only requires that the same quantity should vanish in the limit as $({\epsilon},\eta)\to 0$. Besides, assumption (H4) in \cite{BDGR} involved some additional decay condition on $\Pi_{gp}^{0,0}$ which is useless here. \bigskip We recall that the elastic and inelastic models previously introduced (in subsections \ref{sec232} and \ref{sec233} resp.) satisfy the assumptions (H1)-(H3) and (H5): see section 3 of \cite{BDGR} for a detailed verification. It remains to verify (H4), with the modified asymptotic condition used in the present work. 
These verifications are summarized in the following propositions. We begin with the elastic collision model. \medskip \begin{proposition}\label{hypelast} We consider a cross-section $b$ of the form (\ref{ela2}) satisfying (\ref{ela5})-(\ref{colliel}), and the quantities $\Sigma_{pg}$, $\Pi_{pg}$ and $\Pi_{gp}$ defined by (\ref{ela1}), (\ref{ela3}) and (\ref{ela4}). Then, assumptions (H1) -- (H5) are satisfied, with \begin{equation}\label{H11} q(|{\epsilon} v-w|)=4\pi\int_0^1 |{\epsilon} v-w|\,\sigma_{pg}(|{\epsilon} v-w|,\mu)d\mu , \end{equation} \begin{equation}\label{H12} Q(|{\epsilon} v-w|)=8\pi\int_0^1 |{\epsilon} v-w|\,\sigma_{pg}(|{\epsilon} v-w|,\mu)\mu^2d\mu. \end{equation} \end{proposition} \begin{proof} The reader is referred to the proof of Proposition 1 in section 3 of \cite{BDGR}. We present only the part of the argument concerning the second condition in Assumption (H4), which is new. For each $p>3$ and each $\Phi:=\Phi(w,W)$ in $C^1(\mathbf{R}^3\times\mathbf{R}^3)$ such that $$ |\Phi(w,W)|+|\nabla_w\Phi(w,W)|\le C(1+|w|^2+|W|^2)M(W)\,, $$ we deduce from the mean value theorem that $$ \begin{aligned} \int_{\mathbf{R}^3}(1+ |V|^2)^{-p}\left|\iint_{\mathbf{R}^3\times\mathbf{R}^3}\Phi(w,W)(\Pi^{{\epsilon},\eta}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w-\Pi^{0,0}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w)\right| \\ =\int_{\mathbf{R}^3}(1+ |v|^2)^{-p}\left|\iint_{\mathbf{R}^3\times\mathbf{S}^2}(\Phi(w'',w)b({\epsilon} v-w,{\omega})-\Phi(S_{\omega} w,w)b(w,{\omega}))\,\mathrm{d} w\mathrm{d}{\omega}\right|\,\mathrm{d} v \\ \le C\iiint_{\mathbf{R}^3\times\mathbf{R}^3\times\mathbf{S}^2}\frac{(1+|w|^2+|v|^2)}{(1+ |v|^2)^p}M(w)|b({\epsilon} v-w,{\omega})-b(w,{\omega})|\,\mathrm{d} v\mathrm{d} w\mathrm{d}{\omega} \\ +C\iiint_{\mathbf{R}^3\times\mathbf{R}^3\times\mathbf{S}^2}\!\!\frac{(\eta|w|\!+\!{\epsilon}|v|)}{(1\!+\!|v|^2)^p}\phi_{{\epsilon},\eta}(v,w,{\omega})b(w,\omega)\,\mathrm{d} v\mathrm{d} w\mathrm{d}{\omega} \\ \le
C(\eta+{\epsilon})+C'(\eta+{\epsilon})\iint_{\mathbf{R}^3\times\mathbf{R}^3}\frac{(1\!+\!|w|)(|w|\!+\!|v|)(1\!+\!|w|^2\!+\!|v|^2)}{(1\!+\!|v|^2)^p}M(w)\,\mathrm{d} v\mathrm{d} w \\ \le C''(\eta+{\epsilon})\,. \end{aligned} $$ Here $$ S_{\omega} w:=w-2 (w\cdot{\omega}){\omega}\,, $$ and $$ \begin{aligned} \phi_{{\epsilon},\eta}(v,w,{\omega}):=\sup_{0<{\theta}<1}|\nabla_w\Phi(S_{\omega} w\!+\!\tfrac{2\theta}{1+\eta}(\eta w\!+\!{\epsilon} v)\!\cdot\!{\omega}\om,w)| \\ \le C(1+2|w|^2+4|\eta w+{\epsilon} v|^2+|w|^2)M(w) \\ \le C(1+11|w|^2+8|v|^2)M(w) \end{aligned} $$ for all $0<{\epsilon},\eta<1$. \end{proof} \medskip Next we check assumptions (H1)-(H5) on the inelastic collision model. \begin{proposition}\label{hypinelast} Consider the measure-valued functions $\Pi_{pg}$ and $\Pi_{gp}$ defined in (\ref{ine1})-(\ref{ine3}). Then, assumptions (H1)-(H5) are satisfied, with $$ q(|{\epsilon} v-w|)=|{\epsilon} v-w| $$ and $$ Q(|{\epsilon} v-w|)=\frac{\sqrt{2\pi}}{3{\beta}}+|{\epsilon} v-w|\,. $$ \end{proposition} \begin{proof} Here again, we present only the argument justifying the second part of (H4), which is new, and refer to Proposition 2 in section 3 of \cite{BDGR} for a complete proof of the remaining statements. With the substitution $w \mapsto z= (1+\eta) w - {\epsilon} V - \eta W$, $$ \begin{aligned} \iint_{\mathbf{R}^3\times\mathbf{R}^3}\Phi(w,W)(1+\eta)^4\exp\left(-\frac12\beta^2(1+\eta)^2\left|w-\frac{{\epsilon} V+\eta W}{1+\eta}\right|^2\right) \\ \times\int_{\mathbf{S}^2}(({\epsilon} V - W)\cdot n)_+\left(\left(\frac{{\epsilon} V+\eta W}{1+\eta}-w\right)\cdot n\right)_+\,\mathrm{d} n\mathrm{d} w\mathrm{d} W \\ =\iint_{\mathbf{R}^3\times\mathbf{R}^3}\Phi\left(\tfrac{z+{\epsilon} V+\eta W}{1+\eta},W\right)e^{-\tfrac12\beta^2|z|^2}J_{\epsilon}(V,W,z)\,\mathrm{d} z\mathrm{d} W\,, \end{aligned} $$ where $$ J_{\epsilon}(V,W,z):=\int_{\mathbf{S}^2}(({\epsilon} V-W)\cdot n)_+(n \cdot z)_+\,\mathrm{d} n\,.
$$ We shall also denote $$ J(W,z):=J_{\epsilon}(0,W,z)\,,\qquad\hat J(W,z)=J(W,z)+J(-W,z)\,. $$ Then, for each $p>3$ and each $\Phi:=\Phi(w,W)$ in $C^1(\mathbf{R}^3\times\mathbf{R}^3)$ satisfying $$ |\Phi(w,W)|+|\nabla_w\Phi(w,W)|\le C(1+|w|^2+|W|^2)M(W)\,, $$ one has $$ \begin{aligned} \int_{\mathbf{R}^3}(1+ |V|^2)^{-p} \left|\iint_{\mathbf{R}^3\times\mathbf{R}^3}\Phi(w,W)(\Pi^{{\epsilon},\eta}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w-\Pi^{0,0}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w)\right|& \\ \le C\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}|\Phi(\tfrac{z+{\epsilon} V+\eta W}{1+\eta},W)J_{\epsilon}(V,W,z)-\Phi(z,W)J(W,z)|\,\mathrm{d} z\mathrm{d} V\mathrm{d} W& \\ \le C{\epsilon}\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}|\Phi(\tfrac{z+{\epsilon} V+\eta W}{1+\eta},W)|\hat J(V,z)\,\mathrm{d} z\mathrm{d} V\mathrm{d} W& \\ +C\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}|\Phi(\tfrac{z+{\epsilon} V+\eta W}{1+\eta},W)-\Phi(z,W)|J(W,z)\,\mathrm{d} z\mathrm{d} V\mathrm{d} W&\,. \end{aligned} $$ By the mean value theorem, $$ \begin{aligned} |\Phi(\tfrac{z+{\epsilon} V+\eta W}{1+\eta},W)-\Phi(z,W)|\le|{\epsilon} V+\eta(W-z)|\sup_{0<{\theta}<1}|{\nabla}_w\Phi(z+{\theta}\tfrac{{\epsilon} V+\eta(W-z)}{1+\eta},W)|& \\ \le|{\epsilon} V+\eta(W-z)|(1+(|z|+|{\epsilon} V|+\eta|W-z|)^2+|W|^2)M(W)& \\ \le(\eta|z|+\eta|W|+{\epsilon}|V|)(1+6|z|^2+3|V|^2+7|W|^2)M(W)&\,, \end{aligned} $$ while $$ \begin{aligned} |\Phi(\tfrac{z+{\epsilon} V+\eta W}{1+\eta},W)|\le C(1+|z+{\epsilon} V+\eta W|^2+|W|^2)M(W)& \\ \le C(1+3|z|^2+3|V|^2+4|W|^2)M(W)&\,, \end{aligned} $$ if $0<{\epsilon},\eta<1$.
Thus $$ \begin{aligned} \int_{\mathbf{R}^3}(1+ |V|^2)^{-p} \left|\iint_{\mathbf{R}^3\times\mathbf{R}^3}\Phi(w,W)(\Pi^{{\epsilon},\eta}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w-\Pi^{0,0}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w)\right|& \\ \le 4C{\epsilon}\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}[[z,V,W]]M(W)\hat J(V,z)\,\mathrm{d} z\mathrm{d} V\mathrm{d} W& \\ +7C\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}(\eta|z|+\eta|W|+{\epsilon}|V|)[[z,V,W]]M(W)J(W,z)\,\mathrm{d} z\mathrm{d} V\mathrm{d} W& \\ \le 4C{\epsilon}\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}[[z,V,W]]M(W)|V||z|\,\mathrm{d} z\mathrm{d} V\mathrm{d} W& \\ +7\sqrt{3}C\max({\epsilon},\eta)\iiint\frac{e^{-\frac12\beta^2|z|^2}}{(1+|V|^2)^p}[[z,V,W]]^{3/2}M(W)J(W,z)\,\mathrm{d} z\mathrm{d} V\mathrm{d} W& \\ \le C'({\epsilon}+\eta)&\,. \end{aligned} $$ We have denoted $$ [[z,V,W]]:=1+|z|^2+|V|^2+|W|^2 $$ and used the Cauchy-Schwarz inequality $$ \eta|z|+\eta|W|+{\epsilon}|V|\le\sqrt{3}\max({\epsilon},\eta)(|z|^2+|W|^2+|V|^2)^{1/2}\,. $$ \end{proof} \section{Passing to the limit}\label{S-S4} \subsection{Statement of the main result} We now consider a sequence of solutions $f_n\equiv f_n(t,x,w)\ge 0$, and $F_n\equiv F_n(t,x,v)\ge 0$ to the system of kinetic-fluid equations (\ref{BoltzSysSc2}), with ${\epsilon},\eta,\mu$ replaced with sequences ${\epsilon}_n,\eta_n,\mu_n\to 0$ respectively. Thus \begin{equation}\label{kifu} \begin{aligned} {}&{\partial}_tF_n+v\cdot{\nabla}_xF_n=\frac1{\eta_n}\mathcal{D}(F_n,f_n), \\ &{\partial}_tf_n+\frac1{{\epsilon}_n}w\cdot{\nabla}_xf_n=\frac{1}{\mu_n}\mathcal{R}(f_n,F_n)+\frac{\mu_n}{{\epsilon}_n^2}\mathcal{C}(f_n), \end{aligned} \end{equation} where $\mathcal{C},\mathcal{D}$ and $\mathcal{R}$ are defined by (\ref{cc1}), (\ref{cc3}), (\ref{newd}) and (\ref{newr}). Our main result is stated below. \begin{theorem} \label{theor} Let $g_n\equiv g_n(t,x,w)$ and $F_n\equiv F_n(t,x,v)\ge 0$ be sequences of smooth (at least $C^1$) functions. 
Assume that $F_n$ and $f_n$ defined by \begin{equation}\label{kifu2} f_n(t,x,w)=M(w)(1+{\epsilon}_n g_n(t,x,w)), \end{equation} where $M$ is given by (\ref{maxw}), are solutions to the system (\ref{kifu}), where $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{R}$ are defined by (\ref{cc1})-(\ref{cc3}), (\ref{newd}) and (\ref{newr}). We assume moreover that $\Pi_{pg}^{{\epsilon}_n,\eta_n}$ and $\Pi_{gp}^{{\epsilon}_n,\eta_n}$ in (\ref{newd})-(\ref{newr}) satisfy assumptions (H1)-(H5) (with ${\Sigma}_{pg}$ defined by (\ref{Colli2}) in accordance with Assumption (H1)). We also assume that the molecular interaction satisfies assumptions (A1) and (A2). \smallskip Assume that $$ {\epsilon}_n\to 0\,,\quad\eta_n/{\epsilon}_n^2\to 0\,,\quad{\epsilon}_n/\mu_n^2\to 0\,,\quad\mu_n\to 0\,, $$ that $$ F_n{\rightharpoonup} F\quad\hbox{ in }L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\hbox{ weak-*}\,, $$ and that $$ g_n{\rightharpoonup} g\hbox{ in }L^2_{loc}(\mathbf{R}_+^* \times \mathbf{R}^3 \times \mathbf{R}^3)\hbox{ weak.} $$ (a) Assume that $F_n$ decays sufficiently fast, uniformly in $n$, in the velocity variable; in other words, assume that, for some $p>3$, $$ \sup_{n\ge 1}\sup_{t,|x|\le R}\sup_{v\in\mathbf{R}^3}(1+|v|^2)^p\,F_n(t,x,v)<\infty $$ for all $R>0$. \noindent (b) Assume that, for some $q>1$, $$ \sup_{t,|x|<R}\int_{\mathbf{R}^3}(1+|w|^2)^q\,M(w)\,g_n^2(t,x,w)\,\mathrm{d} w<\infty $$ for all $R>0$. \smallskip Then there exist $L^{\infty}$ functions $\rho\equiv\rho(t,x)\in\mathbf{R}$, ${\theta}\equiv{\theta}(t,x)\in\mathbf{R}$ and a velocity field $u\equiv u(t,x)\in\mathbf{R}^3$ such that
for a.e. $(t,x) \in \mathbf{R}_+^* \times \mathbf{R}^3$, \begin{equation} \label{eqr} g(t,x,w)=\rho(t,x)+u(t,x)\cdot w+{\theta}(t,x)\tfrac12(|w|^2-3)\,, \end{equation} while $u,F$ satisfy the Vlasov-Stokes system \begin{equation} \label{VNS} \left\{ \begin{aligned} {}&{\partial}_tF+v\cdot{\nabla}_xF={\kappa}\operatorname{div}_v((v-u)F), \\ \\ &\operatorname{div}_xu=0, \\ \\ &-\nu{\Delta}_xu+{\nabla}_xp={\kappa}\int_{\mathbf{R}^3}(v-u)F\,\mathrm{d} v, \end{aligned} \right. \end{equation} in the sense of distributions, with \begin{equation} \label{nuka} \nu:=\tfrac1{10}\int\tilde A:\mathcal{L}\tilde A M\,\mathrm{d} w>0\,,\quad{\kappa}:=\tfrac13\int Q(|w|)|w|^2M\,\mathrm{d} w>0, \end{equation} where $Q$ is defined in assumption (H2), and $\tilde A$, $\mathcal{L}$ are defined by (\ref{defAtilde}), (\ref{defL}). \end{theorem} \subsection{Proof of Theorem \ref{theor}} We split this proof into several steps, summarized in Propositions \ref{rhout} to \ref{last}, and a final part in which the convergence of all the terms in eq. (\ref{kifu}) is established. \subsubsection{Step 1: Asymptotic form of the molecular distribution function} We first identify the asymptotic structure of the fluctuations of the molecular distribution function about the Maxwellian state $M$. \medskip \begin{proposition}\label{rhout} Under the same assumptions as in Theorem \ref{theor}, there exist two functions $\rho\equiv\rho(t,x)\in\mathbf{R}$, ${\theta}\equiv{\theta}(t,x)\in\mathbf{R}$ and a vector field $u\equiv u(t,x)\in\mathbf{R}^3$ satisfying $$ \rho,{\theta}\in L^\infty(\mathbf{R}_+\times\mathbf{R}^3)\,,\qquad u\in L^\infty(\mathbf{R}_+\times\mathbf{R}^3;\mathbf{R}^3) $$ such that the limiting fluctuation $g$ of the molecular distribution function about $M$ is of the form (\ref{eqr}) for a.e. $(t,x) \in \mathbf{R}_+^* \times \mathbf{R}^3$. Moreover, $u$ satisfies the divergence-free condition $$ \operatorname{div}_xu=0\,.
$$ Finally $$ \int_{\mathbf{R}^3}\tilde A(w)(w\cdot{\nabla}_xg)M\,\mathrm{d} w=\nu({\nabla}_xu+({\nabla}_xu)^T) $$ where $\tilde A$ is defined in (\ref{defAtilde}), and $\nu$ is defined in (\ref{nuka}). \end{proposition} \begin{proof} Since $\mathcal{C}$ is a quadratic operator, its Taylor expansion terminates at order $2$, so that $$ \begin{aligned} \mathcal{C}(M(1+{\epsilon}_ng_n))=&\mathcal{C}(M)+{\epsilon}_nD\mathcal{C}(M)\cdot(Mg_n)+{\epsilon}_n^2\mathcal{C}(Mg_n) \\ =&-{\epsilon}_nM\mathcal{L} g_n+{\epsilon}_n^2M\mathcal{Q}(g_n)\,, \end{aligned} $$ where $\mathcal{L}$ is defined by (\ref{defL}) and \begin{equation} \label{defQ} \mathcal{Q}(\phi):=M^{-1}\mathcal{C}(M\phi)\,. \end{equation} Then the kinetic equation for the propellant (second line of eq. (\ref{kifu})) can be recast in terms of the fluctuation of the distribution function $g_n$ as follows: \begin{equation} \label{eqg} {\partial}_tg_n+\frac1{{\epsilon}_n}w\cdot{\nabla}_xg_n+\frac{\mu_n}{{\epsilon}_n^2}\mathcal{L} g_n=\frac1{\mu_n{\epsilon}_n}M^{-1}\mathcal{R}(M(1+{\epsilon}_n g_n),F_n)+\frac{\mu_n}{{\epsilon}_n}\mathcal{Q}(g_n)\,. \end{equation} Multiplying each side of this equation by ${\epsilon}_n^2/\mu_n$ shows that \begin{equation} \label{eqgpr} \mathcal{L} g_n=\frac{{\epsilon}_n}{\mu_n^2}M^{-1}\mathcal{R}(M(1+{\epsilon}_n g_n),F_n)+{\epsilon}_n\mathcal{Q}(g_n)-\frac{{\epsilon}_n^2}{\mu_n}{\partial}_tg_n-\frac{{\epsilon}_n}{\mu_n}w\cdot{\nabla}_xg_n\,. \end{equation} The last two terms of this identity clearly converge to $0$ in the sense of distributions, since $g_n{\rightharpoonup} g$ in $L^2_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)$ weak. 
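Notice that the vanishing of these coefficients follows from the scaling assumptions in Theorem \ref{theor}: $$ \frac{{\epsilon}_n}{\mu_n}=\frac{{\epsilon}_n}{\mu_n^2}\,\mu_n\to 0\,,\qquad \frac{{\epsilon}_n^2}{\mu_n}={\epsilon}_n\,\frac{{\epsilon}_n}{\mu_n}\to 0\,. $$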
Next, we observe that, for $w',w'_*$ defined by (\ref{cc2}) and $\phi \in C_c(\mathbf{R}^3)$, one has $$ \begin{aligned} \int_{\mathbf{R}^3}\mathcal{Q}(g_n)(w)\phi(w)\,\mathrm{d} w& \\ =\iiint\left(\frac{\phi(w')}{M(w')}-\frac{\phi(w)}{M(w)}\right)M(w_*)g_n(w_*)M(w)g_n(w)c(w-w_*,{\omega})\,\mathrm{d}{\omega}\mathrm{d} w_*\mathrm{d} w&\,. \end{aligned} $$ By the Cauchy-Schwarz inequality, $$ \begin{aligned} \left|\int_{\mathbf{R}^3}\mathcal{Q}(g_n)(w)\phi(w)\,\mathrm{d} w\right| \\ \le C\iint_{\mathbf{R}^3\times\mathbf{R}^3}M(w_*)g_n(w_*)M(w)g_n(w)(1+|w|+|w_*|)\,\mathrm{d} w_*\mathrm{d} w \\ \le C\int_{\mathbf{R}^3}M(w)g_n(w)^2\,\mathrm{d} w\,\int_{\mathbf{R}^3}M(w)(1+|w|)^2\,\mathrm{d} w\,. \end{aligned} $$ Therefore $$ \int_{\mathbf{R}^3}\mathcal{Q}(g_n)(w)\phi(w)\,\mathrm{d} w\hbox{ is bounded in }L^1_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3) $$ for each $\phi \in C_c(\mathbf{R}^3)$, so that $$ {\epsilon}_n \mathcal{Q}(g_n)\to 0\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\,. $$ Similarly $$ \begin{aligned} \int_{\mathbf{R}^3}\mathcal{R}(f_n, F_n)M^{-1}(w)\phi(w)\,\mathrm{d} w \\ =\iiint_{\mathbf{R}^3\times\mathbf{R}^3\times \mathbf{R}^3}\left(\frac{\phi(w)}{M(w)}-\frac{\phi(W)}{M(W)}\right)f_n(W)F_n(V)\Pi_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w \end{aligned} $$ so that $$ \begin{aligned} \left|\int_{\mathbf{R}^3}\mathcal{R}(f_n, F_n)M^{-1}(w)\phi(w)\,\mathrm{d} w\right|\le C\iint_{\mathbf{R}^3\times\mathbf{R}^3}F_n(V)f_n(W)q(|{\epsilon}_n V - W|)\,\mathrm{d} V\mathrm{d} W& \\ \le C\int_{ \mathbf{R}^3}M(W)(1+|g_n(W)|)(1+|W|)\,\mathrm{d} W&\,, \end{aligned} $$ because of assumptions (a)-(b). Therefore, the sequence $$ \int_{\mathbf{R}^3}\mathcal{R}(f_n, F_n)M^{-1}(w)\phi(w)\,\mathrm{d} w $$ is bounded in $L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3)$, and hence $$ \frac{{\epsilon}_n}{\mu_n^2}M^{-1}\mathcal{R}(f_n, F_n)\to 0\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\,.
$$ Hence, we deduce from (\ref{eqgpr}) that $$ \mathcal{L} g_n\to 0\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\,. $$ On the other hand, assumption (b) and the fact that $g_n{\rightharpoonup} g$ in $L^2_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)$ imply that $$ \mathcal{L} g_n{\rightharpoonup}\mathcal{L} g\quad\hbox{ in }L^2_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\hbox{ weak.} $$ Therefore $\mathcal{L} g=0$. Since $\operatorname{Ker}\mathcal{L}$ is the linear span of $\{1,w_1,w_2,w_3,|w|^2\}$, this implies the existence of $\rho,{\theta}\in L^2_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3)$ and of $u\in L^2_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3;\mathbf{R}^3)$ such that (\ref{eqr}) holds. That $\rho,{\theta}\in L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3)$ and $u\in L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3;\mathbf{R}^3)$ follows from assumption (b) and the formulas $$ \rho=\int_{\mathbf{R}^3}g M\,\mathrm{d} w\,,\quad u=\int_{\mathbf{R}^3}wg M\,\mathrm{d} w\,,\quad{\theta}=\int_{\mathbf{R}^3}(\tfrac13|w|^2-1)g M\,\mathrm{d} w\,. $$ The remaining statements, i.e. the divergence-free condition satisfied by $u$ and the computation of $$ \int_{\mathbf{R}^3}\tilde A(w)(w\cdot{\nabla}_xg)M\,\mathrm{d} w $$ are obtained as in \cite{BDGR} --- specifically, as in Propositions 6 and 7 of \cite{BDGR} respectively. \end{proof} \smallskip \begin{remark} In the case of elastic collisions between the gas molecules and the particles in the dispersed phase, a more careful use of the symmetries in the collision integrals leads to an estimate of the form $$ \left|\int_{\mathbf{R}^3}\mathcal{R}(f_n, F_n)M^{-1}(w)\phi(w)\,\mathrm{d} w\right|\leq C({\epsilon}_n+\eta_n)\,, $$ so that the assumption ${\epsilon}_n/\mu_n\to 0$ (instead of ${\epsilon}_n/\mu_n^2\to 0$) is enough to guarantee that $$ \frac{{\epsilon}_n}{\mu_n^2}M^{-1}\mathcal{R}(f_n, F_n)\to 0\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\,.
$$ The same is true for the inelastic collision model if $\beta=1$, i.e. if the surface temperature of the particles or droplets is equal to the temperature of the Maxwellian around which the distribution function of the gas is linearized. \end{remark} \subsubsection{Step 2: Asymptotic deflection and friction terms} The following result can then be proved exactly as in \cite{BDGR} (more precisely, see Propositions 4 and 5 in \cite{BDGR}). \begin{proposition}\label{defl} Under the same assumptions as in Theorem \ref{theor}, $$ \begin{aligned} {}&\frac1{\eta_n}\mathcal{D}(F_n,f_n)\to{\kappa}\operatorname{div}_v((v-u)F)&&\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3), \\ &\frac1{{\epsilon}_n}\int w\mathcal{R}(f_n,F_n)\,\mathrm{d} w\to{\kappa}\int(v-u)F\,\mathrm{d} v&&\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3), \end{aligned} $$ with ${\kappa}$ defined by eq. (\ref{nuka}). \end{proposition} \subsubsection{Step 3: Asymptotic friction flux} The asymptotic friction flux is handled as in Proposition 9 in \cite{BDGR}, with some modifications due to the differences between the scalings used here and in \cite{BDGR}. \begin{proposition}\label{last} Under the same assumptions as in Theorem \ref{theor}, $$ \frac1{\mu_n}\int_{\mathbf{R}^3}\tilde A(w)\mathcal{R}(f_n,F_n)(w)\,\mathrm{d} w\to 0\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)\,. $$ \end{proposition} \begin{proof} First, we compute $$ \begin{aligned} \int_{\mathbf{R}^3}\tilde A(w)\mathcal{R}(M,F_n)(w)\,\mathrm{d} w \\ =\iiint_{\mathbf{R}^3\times\mathbf{R}^3\times\mathbf{R}^3}F_n(V)M(W)(\tilde A(w)-\tilde A(W))\Pi^{{\epsilon}_n,\eta_n}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w\,.
\end{aligned} $$ We see that $$ \begin{aligned} \left|\int\tilde A(w)\mathcal{R}(M,F_n)(w)\,\mathrm{d} w\!-\!\iiint F_n(V)M(W)(\tilde A(w)\!-\!\tilde A(W))\Pi^{0,0}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w\right|& \\ \le C\int(1+|V|^2)^{-p}\left|\iint \Phi(w,W)(\Pi^{{\epsilon}_n,\eta_n}_{gp}(w,dVdW) - \Pi^{0,0}_{gp}(w,dVdW)) dw\right|&\,, \end{aligned} $$ with $$ \Phi(w,W)=M(W)(\tilde A(w)-\tilde A(W))\,. $$ By assumption (A2), $\Phi\in C^1(\mathbf{R}^3\times\mathbf{R}^3)$ and $$ |\Phi(w,W)|+|\nabla_w\Phi(w,W)|\le C(1+|w|^2+|W|^2)M(W)\,. $$ Therefore, the second part of Assumption (H4) implies that $$ \begin{aligned} \int_{\mathbf{R}^3}\tilde A(w)\mathcal{R}(M,F_n)(w)\,\mathrm{d} w& \\ = \iiint F_n(V)M(W)(\tilde A(w)-\tilde A(W))\Pi^{0,0}_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w+O({\epsilon}_n+\eta_n)&\,. \end{aligned} $$ We conclude by using the symmetry assumption (first part of Assumption (H4)) as in \cite{BDGR}, and arrive at the bound \begin{equation}\label{FricFlux1} \sup_{t+|x|<R}\int_{\mathbf{R}^3}\tilde A(w)\mathcal{R}(M,F_n)(w)\,\mathrm{d} w=O({\epsilon}_n+\eta_n) \end{equation} for all $R>0$. Finally, observe that, for some $p>3$, one has $$ \begin{aligned} \left|\int_{\mathbf{R}^3}\mathcal{R}(Mg_n, F_n)\tilde A(w)\,\mathrm{d} w\right|& \\ \le C_p\iiint_{\mathbf{R}^3\times\mathbf{R}^3\times\mathbf{R}^3}\frac{|w|^2+|W|^2}{(1+|V|^2)^p}M(W)|g_n(W)|\Pi_{gp}(w,\mathrm{d} V\mathrm{d} W)\,\mathrm{d} w&\,, \end{aligned} $$ where $C_p\equiv C_p(t,x)\in L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3)$. The integral on the right hand side of this last inequality is bounded in $L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3)$ by (H5) and assumption (b) in Theorem \ref{theor}. Since $\frac{{\epsilon}_n}{\mu_n}\to 0$, $$ \frac{{\epsilon}_n}{\mu_n}\int_{\mathbf{R}^3}\tilde A(w)\mathcal{R}(Mg_n,F_n)\,\mathrm{d} w\to 0\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)\,. 
$$ With (\ref{FricFlux1}), this concludes the proof, since $f_n= M(1+{\epsilon}_ng_n)$. \end{proof} \subsubsection{Step 4: End of the proof of Theorem \ref{theor}} For simplicity, we henceforth use the notation $$ \langle\phi\rangle:=\int_{\mathbf{R}^3}\phi(w)M(w)\,\mathrm{d} w\,. $$ Since $\mathcal{L} = \mathcal{L}^*$, $$ \frac{\mu_n}{{\epsilon}_n}\langle A(w)g_n\rangle=\frac{\mu_n}{{\epsilon}_n}\langle(\mathcal{L}\tilde A)(w)g_n\rangle=\bigg\langle\tilde A(w)\frac{\mu_n}{{\epsilon}_n}\mathcal{L} g_n\bigg\rangle\,. $$ Then, we use the Boltzmann equation for $g_n$ written in the form (\ref{eqgpr}) to express the term $\frac{\mu_n}{{\epsilon}_n}\mathcal{L} g_n$: $$ \begin{aligned} \frac{\mu_n}{{\epsilon}_n}\langle A(w)g_n\rangle=&\mu_n\langle\tilde A(w)\mathcal{Q}(g_n)\rangle-\langle\tilde A(w)({\epsilon}_n{\partial}_t+w\cdot{\nabla}_x)g_n\rangle \\ &+\frac1{\mu_n}\langle\tilde A(w)M^{-1}\mathcal{R}(f_n,F_n)\rangle\,. \end{aligned} $$ We first pass to the limit in $\langle\tilde A(w)({\epsilon}_n{\partial}_t+w\cdot{\nabla}_x)g_n\rangle$ in the sense of distributions, using assumption (A2) and assumption (b) in Theorem \ref{theor}: $$ \langle\tilde A(w)({\epsilon}_n{\partial}_t+w\cdot{\nabla}_x)g_n\rangle\to\langle\tilde A(w)w\cdot{\nabla}_xg\rangle\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)\,. $$ Let $$ P(w,w_*):=\int_{\mathbf{S}^2}(\tilde A(w') - \tilde A(w))c(w-w_*,{\omega})\,\mathrm{d}{\omega}\,. $$ Then $$ \langle\tilde A\mathcal{Q}(g_n)\rangle=\iint_{\mathbf{R}^3\times\mathbf{R}^3}P(w,w_*)M(w_*)g_n(w_*)M(w)g_n(w)\,\mathrm{d} w_*\mathrm{d} w\,. $$ Clearly $|P(w,w_*)|\le C(1+|w|^3+|w_*|^3)$ by assumption (A2). Then, $$ \begin{aligned} |\langle\tilde A(w)\mathcal{Q}(g_n)\rangle|\le\left(\iint_{\mathbf{R}^3\times\mathbf{R}^3}\!\!\! M(w_*)g^2_n(w_*) M(w)g^2_n(w)\,\mathrm{d} w_*\mathrm{d} w\right)^{1/2}& \\ \times\left(\iint_{\mathbf{R}^3\times\mathbf{R}^3}\!\!\!
M(w_*) M(w)(1+|w|^3+|w_*|^3)^2\,\mathrm{d} w_*\mathrm{d} w\right)^{1/2}&\,, \end{aligned} $$ so that $$ \mu_n\langle\tilde A(w)\mathcal{Q}(g_n)\rangle\to 0\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)\,. $$ Combining these limits with Proposition \ref{last}, we conclude that $$ \frac{\mu_n}{{\epsilon}_n}\langle A(w)g_n\rangle\to -\nu\left(({\nabla}_xu)+({\nabla}_xu)^T\right)\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3), $$ so that $$ \operatorname{div}_x\frac{\mu_n}{{\epsilon}_n}\langle A(w)g_n\rangle\to -\nu{\Delta}_xu-\nu{\nabla}_x\operatorname{div}_xu=-\nu{\Delta}_xu $$ in $\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)$ since $u$ is divergence-free by Proposition \ref{rhout}. Hence, for each divergence-free test vector field $\xi\equiv\xi(x)\in\mathbf{R}^3$, $$ \begin{aligned} \int_{\mathbf{R}^3}\frac{\mu_n}{{\epsilon}_n}\langle w\otimes wg_n\rangle:{\nabla}\xi\,\mathrm{d} x=\int_{\mathbf{R}^3}\frac{\mu_n}{{\epsilon}_n}\langle A(w)g_n\rangle:{\nabla}\xi\,\mathrm{d} x \to-\nu\int_{\mathbf{R}^3}{\nabla}_xu:{\nabla}\xi\,\mathrm{d} x \end{aligned} $$ in $\mathcal{D}'(\mathbf{R}_+^*)$. Multiplying both sides of (\ref{eqg}) by $wM(w)$ and integrating over $\mathbf{R}^3$, we see that \begin{equation}\label{newlin} {\partial}_t\langle wg_n\rangle+\frac1{{\epsilon}_n}\operatorname{div}_x\langle w\otimes wg_n\rangle=\frac1{\mu_n{\epsilon}_n}\langle wM^{-1}\mathcal{R}(f_n,F_n)\rangle\,. \end{equation} By Proposition \ref{rhout}, $$ \langle wg_n\rangle\to\langle wg\rangle=u $$ in $\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)$, while, by Proposition \ref{defl}, $$ \frac1{{\epsilon}_n}\langle wM^{-1}\mathcal{R}(f_n,F_n)\rangle\to{\kappa}\int_{\mathbf{R}^3}(v-u)F\,\mathrm{d} v $$ in $\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3)$. Thus, for each divergence-free test vector field $\xi\equiv\xi(x)\in\mathbf{R}^3$, passing to the limit in the weak formulation (in $x$) of the momentum balance law (\ref{newlin}), i.e.
$$ \mu_n{\partial}_t\int_{\mathbf{R}^3}\xi\cdot\langle wg_n\rangle\,\mathrm{d} x-\frac{\mu_n}{{\epsilon}_n}\int_{\mathbf{R}^3}\langle A(w)g_n\rangle:{\nabla}\xi\,\mathrm{d} x=\frac1{{\epsilon}_n}\int_{\mathbf{R}^3}\xi\cdot\langle wM^{-1}\mathcal{R}(f_n,F_n)\rangle\,\mathrm{d} x\,, $$ results in \begin{equation}\label{WFormStokes} 0=-\nu\int_{\mathbf{R}^3}{\nabla}_xu:{\nabla}\xi\,\mathrm{d} x+{\kappa}\iint_{\mathbf{R}^3\times\mathbf{R}^3}\xi\cdot(v-u)F\,\mathrm{d} v\mathrm{d} x\,. \end{equation} In other words, let $T=(T_1,T_2,T_3)\in\mathcal{D}'(\mathbf{R}^3;\mathbf{R}^3)$ be defined as follows: $$ T:=\nu{\Delta}_xu+{\kappa}\int_{\mathbf{R}^3}(v-u)F\,\mathrm{d} v\,. $$ Then (\ref{WFormStokes}) is equivalent to the fact that $$ \sum_{k=1}^3\ll T_k,\xi_k\gg_{\mathcal{D}',C^\infty_c}=0 $$ for each test vector field $\xi\in C^\infty_c(\mathbf{R}^3)$ such that $\operatorname{div}\xi=0$. (In the identity above, we have denoted by $\ll ,\gg_{\mathcal{D}',C^\infty_c}$ the pairing between distributions and compactly supported $C^\infty$ functions.) By de Rham's characterization of currents homologous to $0$ (see Thm. 17' in \cite{deRham}), this implies the existence of $p\in\mathcal{D}'(\mathbf{R}^3)$ such that $$ T={\nabla}_xp\,. $$ Thus, the fact that the identity (\ref{WFormStokes}) holds for each test vector field $\xi\in C^\infty_c(\mathbf{R}^3)$ such that $\operatorname{div}\xi=0$ is the weak formulation (in the sense of distributions) of the last equation in (\ref{VNS}). Finally, the equation for the distribution function $F$ of the dispersed phase (i.e. the first line of (\ref{kifu})) is $$ {\partial}_tF_n+v\cdot{\nabla}_xF_n=\frac1{\eta_n}\mathcal{D}(F_n,f_n)\,. $$ Since $F_n{\rightharpoonup} F$ in $L^\infty_{loc}(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)$ weak-*, one has $$ {\partial}_tF_n+v\cdot{\nabla}_xF_n\to{\partial}_tF+v\cdot{\nabla}_xF\quad\hbox{ in }\mathcal{D}'(\mathbf{R}_+^*\times\mathbf{R}^3\times\mathbf{R}^3)\,.
$$ By Proposition \ref{defl}, $$ {\partial}_tF+v\cdot{\nabla}_xF={\kappa}\operatorname{div}_v((v-u)F)\,, $$ which is the first equation in (\ref{VNS}). This concludes the proof of Theorem \ref{theor}. \bigskip \noindent {\bf{Acknowledgment}}: The research leading to this paper was funded by the French ``ANR blanche'' project Kibord: ANR-13-BS01-0004, and by Universit\'e Sorbonne Paris Cit\'e, in the framework of the ``Investissements d'Avenir'', convention ANR-11-IDEX-0005. V.Ricci acknowledges the support by the GNFM (research project 2015: ``Studio asintotico rigoroso di sistemi a una o pi\`u componenti'').
\section{Introduction} Braneworld scenarios are an interesting development in what concerns gravity models and their cosmological implications \cite{Maartens:2003tw}. Most often these scenarios assume, given observational constraints as well as theoretical assumptions, that the bulk space is empty except for a cosmological constant. However, more recently the implications of having vector and scalar fields in the bulk were studied in connection with Lorentz symmetry \cite{Bertolami:2006bf} and gauge symmetry breaking \cite{Bertolami:2007dt}. In this paper, we consider the presence of a perfect fluid in the bulk in the context of a five-dimensional braneworld model where the scalar curvature couples non-minimally to the Lagrangian density of the perfect fluid. In (3+1) dimensions, this class of models with Lagrangian density of the form \cite{Bertolami:2007gv} \begin{eqnarray} \mathcal L = \alpha f_1 (R) +\left( 1 + \lambda f_2 (R) \right)\mathcal L_M ~ , \end{eqnarray} where $f_1 (R)$ and $f_2 (R)$ are generic functions of the scalar curvature, was shown to exhibit interesting features which allow one to address problems such as the rotation curves of galaxies without the need for dark matter (see Ref.~\cite{Bertolami:2007gv} and references therein) and the Pioneer anomaly (see Refs.~\cite{Anderson:2001sg, Bertolami:2003ui} and references therein). The stability of these models has been examined in Ref.~\cite{Faraoni2007}. Other studies of their implications included their impact on stellar equilibrium and the analysis of their corresponding PPN parameters, which were studied in Refs.~\cite{Bertolami-Paramos2008a,Bertolami-Paramos2008b}, respectively. Recently there has also been interest in the conformal equivalence between $f(R)$ theories and Einstein gravity non-minimally coupled to a scalar field in the context of braneworlds \cite{Deruelle07}.
As the power of $R$ in $f(R)$ increases, the geometric quantities are expected to exhibit discontinuities of increasingly higher order across the brane; this is usually dealt with by enforcing continuity of the metric up to correspondingly higher-order derivatives. Here, however, we will not impose further continuity conditions on the intervening fields, allowing for the discontinuity of the second derivative of the metric across the brane and orthogonal to its surface, despite the increase in the power of $R.$ Crucial in the setting of our problem is a suitable implementation of the Israel matching conditions in the presence of bulk fields, in order to extract the boundary conditions, both for gravity and the matter fields, which the induced equations of motion on the brane must satisfy. The method to be employed here was first introduced in Ref.~\cite{Bucher:2004we} and further developed in Refs.~\cite{Bertolami:2006bf,Bertolami:2007dt}. For completeness and clarity, the more involved technical details of our method are presented in the Appendix. As we shall see, and rather remarkably, the projection of the bulk perfect fluid induces on the brane a new cosmological constant term. This new source for a brane cosmological constant opens quite interesting perspectives for inflation in the early universe and for the late-time accelerated expansion of the universe. This result suggests that a perfect fluid in the bulk space may have a bearing on the cosmological constant problem on the brane. This paper is organized as follows. In section \ref{sec:model} we present our model and work out a suitable Lagrangian density for a perfect fluid. This development extends the approach of Hawking and Ellis \cite{Hawking:1973uf} to the bulk space. In section \ref{sec:matchingconditions}, we work out the matching conditions across the brane and derive the equations of motion therein induced. A derivation of the Gauss-Codacci relations is also presented in the Appendix for completeness.
Section \ref{sec:results} contains our results and section \ref{sec:conclusions} our conclusions. \section{A Modified Gravity Model in the Bulk \label{sec:model}} \subsection{The Einstein Equations} We consider the particular case of the action discussed in Ref. \cite{Bertolami:2007gv}. We set $f_1 (R) = f_2 (R) = R$ and introduce a cosmological constant as follows \begin{eqnarray} S = \int d^5 x \sqrt{-g} \left[\mpf^3 R +(1 +\lambda R )\mathcal L_M -2\mathcal L_\Lambda \right] ~. \end{eqnarray} Here $M_{P(5)}$ is the five-dimensional Planck mass, ${\cal L}_{M}$ and ${\cal L}_{\Lambda}$ are respectively the matter and the cosmological constant Lagrangian densities. We define the stress-energy tensor as usual \begin{eqnarray} T_{\mu\nu} = -{2\over\sqrt{-g}} {\delta\left(\sqrt{-g}\mathcal L_M \right)\over\delta g^{\mu\nu}} ~ \label{eq:Tmunu}. \end{eqnarray} For convenience, we define also the vacuum energy tensor as \begin{eqnarray} \Lambda_{\mu\nu} = -{2\over\sqrt{-g}} {\delta\left(\sqrt{-g}\mathcal L_\Lambda \right)\over\delta g^{\mu\nu}} ~, \end{eqnarray} assumed to be of the form \begin{eqnarray} \Lambda_{\mu\nu} =\Lambda_{(5)} g_{\mu\nu} +\sigma\delta(N)( g_{\mu\nu} -N_\nu N_\mu )~, \end{eqnarray} so as to include both the bulk vacuum value $\Lambda_{(5)}$ and the brane tension $\sigma.$ Here $N_\mu$ are the components of the unit five-vector orthogonal to the brane $\versor N = N^\mu \versor \mu.$~ Thus, in Gaussian coordinates, the cosmological constant tensor is given by: \begin{eqnarray} \mathbf\Lambda = \left(\begin{array}{ccc|c} & & & \\ & \left( \Lambda_{(5)} + \sigma\right){}^{(4)}\mathbf{g} & & \\ & & & \\\hline & & & \Lambda_{(5)} \end{array}\right) ~, \end{eqnarray} where ${}^{(4)}\mathbf{g}$ denotes the four-dimensional metric induced on the brane. The five-dimensional Einstein equation is obtained by varying the action with respect to the metric, finding that \begin{eqnarray} \mpf^3 G_{\mu\nu} -{1\over 2}\left( 1 +\lambda R\right) T_{\mu\nu} +\Lambda_{\mu\nu} -\lambda \left( \nabla_\mu \nabla_\nu -g_{\mu\nu}
\Box -R_{\mu\nu} \right)\mathcal L_M =0 ~. \label{eq:Einstein5dim} \end{eqnarray} \subsection{A Perfect Fluid in the Bulk} Since the Einstein equation in Eq.~(\ref{eq:Einstein5dim}) contains terms in $\mathcal L_M,$ we must construct a Lagrangian density associated with a perfect fluid so that by Eq.~(\ref{eq:Tmunu}) it yields \begin{equation} T_{\mu\nu} = (\rho + p)u_\mu u_\nu + p g_{\mu\nu} ~, \end{equation} where $\rho$ is the energy density, $p$ the pressure and $u_\mu $ the unit five-velocity of the fluid (tangent to the flow lines and thus time-like, $u_\mu u^\mu = -1$). For this purpose, we follow closely the procedure described in Ref.~\cite{Hawking:1973uf}. Let the perfect fluid Lagrangian density $\mathcal L_M$ be given by \begin{eqnarray} \mathcal L_M = -\tau (1+\epsilon ) ~, \end{eqnarray} where $\tau$ is an auxiliary variable and $\epsilon$ is the internal energy of the fluid, assumed to be a function of $\tau.$ Assuming that the fluid current vector $j^\mu = \tau u^\mu$ is conserved, $\nabla_\mu j^\mu = 0$, then $\delta (\sqrt{-g}j^\mu ) = 0$ when the metric is varied. Then, from \begin{eqnarray} \tau^2 =-j^\mu j^\nu g_{\mu\nu} ={1\over g} \left( \sqrt{-g}j^\mu\right) \left( \sqrt{-g}j^\nu\right)g_{\mu\nu}~, \end{eqnarray} it follows that \begin{eqnarray} 2 \tau \delta \tau &=&-{\delta g\over g^2}\left( \sqrt{-g}j^\mu\right) \left( \sqrt{-g}j^\nu\right)g_{\mu\nu}\cr &&+{1\over g}\left( \sqrt{-g}j^\mu\right)\left( \sqrt{-g}j^\nu\right) \delta g_{\mu\nu}\cr &=&\left( j_\mu j_\nu -j^\beta j_\beta g_{\mu\nu}\right)\delta g^{\mu\nu} ~, \end{eqnarray} and consequently that the variation of $\tau$ with respect to the metric is given by \begin{eqnarray} \delta \tau ={1\over 2}\left( \tau g_{\mu\nu} +\tau u_\mu u_\nu\right) \delta g^{\mu\nu} ~.
\end{eqnarray} Using the definition of the stress-energy tensor, Eq.~(\ref{eq:Tmunu}), we find that \begin{eqnarray} T_{\mu\nu} &=&\mathcal L_M g_{\mu\nu} -2{\delta \mathcal L_M \over \delta g^{\mu\nu}}\cr &=&\left[ \tau( 1 +\epsilon) +\tau^2{d \epsilon\over d \tau}\right] u_\mu u_\nu +\tau^2 {d \epsilon\over d \tau}g_{\mu\nu} \cr &=& (\rho + p)u_\mu u_\nu + p g_{\mu\nu} ~, \end{eqnarray} where we made the following identifications \begin{eqnarray} \rho =\tau (1+\epsilon)~, \qquad p =\tau^2 {d \epsilon\over d \tau} \label{eq:ident2} ~. \label{eq:rho+p} \end{eqnarray} We have thus obtained the stress-energy tensor for a perfect fluid from the Lagrangian density $\mathcal L_M = - \rho$ and from the continuity equation $\nabla_{\mu}(\tau u^{\mu})=0.$ For an alternative formulation where the Lagrangian density is identified with the pressure see, for instance, Refs. \cite{Schutz1970,Brown1993}. To obtain the equation of motion for the perfect fluid, one could compute the divergence of the Einstein equation, Eq.~(\ref{eq:Einstein5dim}). Alternatively, we will proceed directly from the Lagrangian density. For this purpose, we consider the action of the Lie derivatives on the fluid flow lines. Let $\gamma :[a,b]\times \mathcal N \rightarrow \mathcal D \subset \mathcal M$ be a congruence of flow lines, one through each point of $\mathcal M,$ where $[a,b]$ is the interval of the parameter ascribed to the flow lines, $\mathcal N$ is some four-dimensional manifold and $\mathcal D$ is a small region of the five-dimensional spacetime manifold $\mathcal M$. The tangent vector to the flow lines is given by $\textbf U =(\partial / \partial t)_\gamma,$ with $t \in [a,b],$ which once normalized we identify with the fluid velocity \begin{eqnarray} u^\mu ={U^{\mu}\over \sqrt{-g_{\alpha\beta}U^\alpha U^\beta}} ={U^\mu\over |U|} ~. \end{eqnarray} The action $S$ is required to be stationary when the flow lines are varied. 
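Before proceeding, we remark that the identifications in Eq.~(\ref{eq:rho+p}) imply the relation \begin{eqnarray} \tau {d \rho\over d \tau} =\tau \left[ 1 +\epsilon +\tau {d \epsilon\over d \tau}\right] =\rho + p ~, \end{eqnarray} which is the standard thermodynamic relation for an isentropic fluid, with $\tau$ playing the role of a conserved particle number density.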
Variations with respect to the flow lines, which we here represent by $\Delta,$ amount to variations along the corresponding tangent vectors. Considering $\gamma(r,[a,b],\mathbb R^4)$, where $r$ is the parameter that selects different congruences of flow lines, then $\Delta \textbf U =\mathcal L_{\textbf V} \textbf U$, where $\textbf V = (\partial /\partial r)_\gamma$ and $\mathcal L_{\textbf V}$ is the Lie derivative along $\textbf V$. Since $\Delta ( u^\mu |U|) =\mathcal L_{\textbf V} U^\mu,$ then \begin{eqnarray} \Delta u^\mu ={1\over |U|}\left( \mathcal L_{\textbf V} U^\mu -u^\mu\Delta |U| \right) ~. \end{eqnarray} Moreover, \begin{eqnarray} \Delta |U| =-{1\over|U|}g_{\alpha\beta} U^\alpha \Delta U^\beta =-g_{\alpha\beta} u^\alpha \mathcal L_{\textbf V} U^\beta \end{eqnarray} and \begin{eqnarray} \mathcal L_{\textbf V} U^\beta =V^\sigma {U^\beta}_{;\sigma} - U^\sigma {V^\beta}_{;\sigma} ~, \end{eqnarray} and consequently \begin{eqnarray} \Delta u^\mu =V^\sigma {u^\mu}_{;\sigma} -u^\sigma {V^\mu}_{;\sigma} -u^\mu u^\beta V_{\beta ;\sigma} u^\sigma ~. \label{eq:deltau} \end{eqnarray} From the conservation of the fluid current vector ${j^\mu}_{;\mu}= 0,$ it follows that $\Delta ({j^\mu}_{;\mu}) =0 =(\Delta j^\mu)_{;\mu}$ which, using Eq.~(\ref{eq:deltau}) and integrating along the flow lines, yields \begin{eqnarray} \Delta \tau =(\tau V^\beta)_{;\beta} +\tau V_{\beta;\alpha}u^\beta u^\alpha ~. \end{eqnarray} Thus, in the Lagrangian density, $\tau$ varies so that the associated current vector $j^\mu$ is conserved.
Finally, the condition of stationarity of the action with respect to variations of the flow lines yields \begin{eqnarray} \Delta S &=&\int \sqrt{-g}d^5x \left\{ {d\mathcal L\over d\tau}\Delta \tau\right\}\cr &=&\int \sqrt{-g}d^5x \left\{ -(1 +\lambda R)\left[ 1 +{d(\tau \epsilon)\over d\tau} \right] \Delta \tau \right\}\cr &=&\int \sqrt{-g}d^5x \left\{ -(1+\lambda R)\left[ 1 +{d(\tau \epsilon)\over d\tau} \right] \left[ \nabla_\mu \left( \tau V^\mu \right) +\tau \left( \nabla_\beta V_\mu \right) u^\mu u^\beta \right] \right\}=0 ~. \end{eqnarray} Integrating by parts and using the Stokes theorem to discard the surface terms, we find that \begin{eqnarray} &&\int \sqrt{-g}d^5x \left\{ \tau \nabla_\mu \left[ (1 +\lambda R)\left( 1 +{d(\tau \epsilon)\over d\tau}\right)\right]\right.\cr && \qquad + \left. \nabla_\beta \left[ (1 +\lambda R)\left( 1 +{d(\tau \epsilon)\over d\tau} \right)\tau u_\mu u^\beta\right]\right\} V^\mu =0 ~, \end{eqnarray} which holds for any vector $\textbf V.$ Therefore, the expression within curly brackets must vanish \begin{eqnarray} &&\left\{ \lambda \nabla_\beta R \left[ 1 +{d(\tau \epsilon)\over d\tau}\right] +( 1 +\lambda R) \left[ {d(\tau \epsilon)\over d\tau}\right]_{;\beta}\right\} \left( g^{\beta\mu} + u^\beta u^\mu\right) \tau \\ &+&\left( 1 +\lambda R\right) \left[ 1 +{d(\tau \epsilon)\over d\tau}\right]u^\beta (\nabla_\beta u^\mu)\tau =0~. \end{eqnarray} Using the equations in Eq.~(\ref{eq:rho+p}) and noticing that \begin{eqnarray} \tau \left[ {d(\tau\epsilon)\over d\tau}\right]_{;\beta} =\left[\tau^2 {d\epsilon\over d\tau} \right]_{;\beta} =\nabla_\beta p ~, \end{eqnarray} we obtain the equation of motion for the perfect fluid in the bulk \begin{eqnarray} \left[ \lambda\left( \rho +p\right)\nabla_\beta R +( 1 +\lambda R)\nabla_\beta p\right]( g^{\beta\mu} +u^\beta u^\mu ) +( 1 +\lambda R)( \rho +p)u^\beta \nabla_\beta u^\mu =0~. 
\label{eq:fluid} \end{eqnarray} For $\rho +p \neq 0,$ this equation can be rewritten as \begin{eqnarray} u^\beta \nabla_\beta u^\mu ={D u^\mu\over ds} ={du^\mu\over ds} +\Gamma^{\mu}_{\alpha\beta}u^\alpha u^\beta =f^\mu~, \end{eqnarray} where $f^\mu$ can be regarded as an external force given by \begin{eqnarray} f^{\mu} =-\left[ {\lambda\over 1 +\lambda R}\nabla_\beta R +{1\over \rho +p}\nabla_\beta p\right]\left( g^{\beta\mu} +u^\beta u^\mu\right)~. \label{eq:eqmotion} \end{eqnarray} Setting $\lambda = 0,$ one recovers the known equation of motion for a perfect fluid in General Relativity. Moreover, Eq.~(\ref{eq:eqmotion}) agrees with the result obtained in Ref.~\cite{Bertolami:2007gv} by taking the divergence of Einstein's field equation. This indicates that the assumed conservation of the fluid current vector $j^\mu =\tau u^\mu$ is a consistent description of our physical system. The continuity equation, which follows from the conservation of $j^{\mu},$ provides the last equation and ensures that our problem is well defined. From Eq.~(\ref{eq:rho+p}) we find that \begin{eqnarray} p =\tau^2 {\epsilon_{,\mu}\over \tau_{,\mu}}, \qquad \rho {\epsilon_{,\mu}\over 1 +\epsilon}( p +\rho) =\rho_{,\mu}p ~. \end{eqnarray} The continuity equation \begin{eqnarray} { j^\mu}_{;\mu} =\tau_{,\mu}u^\mu +\tau {u^\mu}_{;\mu} =0 \end{eqnarray} is thus equivalent to \begin{eqnarray} \rho_{,\mu} u^\mu + (p+\rho){u^\mu}_{; \mu} = 0 ~. \label{eq:continuity} \end{eqnarray} For the ideal fluid, the gravitational field enters only in Eq.~(\ref{eq:fluid}). Gravity does not appear in the velocity equation, Eq.~(\ref{eq:continuity}), as the velocity is measured relative to freely moving observers \cite{Peebles80}. \section{The Induced Equations on the Brane \label{sec:matchingconditions}} In this section, we derive the equations of motion induced on the brane. First, we rewrite the components of the equations of motion derived in the previous section in Gaussian normal coordinates. 
In our notation, the directions parallel to the brane are denoted by $A, B,\dots$, while the normal direction is denoted by $N$ so that the brane is localized at $N=0.$ Using the results derived in Appendix \ref{sec:decomposition}, we obtain the relevant components of the Einstein equation: \begin{eqnarray} &&\mpf^3 \dim 5 G_{AB} -{1\over 2}\left( 1 +\lambda \dim 5 R\right) \left[ (\rho +p)u_A u_B +pg_{AB}\right] +g_{AB}\Lambda_{(5)} \cr &+&\lambda \left[ \nabla_A\nabla_B +K_{AB}\nabla_N -g_{AB}\left( \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 +K\nabla_N +\nabla_N\nabla_N \right) -\dim 5 R_{AB} \right]\rho = 0 ~, \label{eq:einstein:ab} \\ \cr &&\mpf^3 \dim 5 G_{AN} -{1\over 2}\left( 1 +\lambda \dim 5 R\right) \left[ (\rho +p)u_A u_N\right] \cr &+&\lambda \left[ \nabla_A \nabla_N -K_{A}^B\nabla_B -\dim 5 R_{AN}\right] \rho=0 ~, \label{eq:einstein:an} \\ \cr &&\mpf^3 \dim 5 G_{NN} -{1\over 2}\left( 1 +\lambda \dim 5 R\right) \left[ (\rho +p)u_N^2 +p g_{NN}\right] +\Lambda_{(5)} \cr &+&\lambda \left[ \nabla_N \nabla_N -g_{NN}\left( \mathchoice\sqr66\sqr66\sqr{2.1}3\sqr{1.5}3 +K\nabla_N +\nabla_N\nabla_N \right) -\dim 5 R_{NN}\right]\rho = 0 ~. \label{eq:einstein:nn} \end{eqnarray} Note that we keep the index indicating the corresponding dimension only on five-dimensional terms and wherever confusion might otherwise arise. We then proceed to derive the matching conditions, which follow from the presence of the brane dividing the bulk spacetime into two regions and from the symmetries of the bulk fields across these two regions. Here we regard the brane as a $\mathbb{Z}_{2}$--symmetric surface of infinitesimal thickness $2\delta$, thus separating the bulk into two mirroring regions about $N=0.$ The symmetry about the brane establishes how bulk quantities relate on the two sides of the brane. Hence, vector components parallel to the brane are even in $N,$ whereas normal components are odd. 
For tensor quantities this generalizes by considering each additional normal component to reverse the parity of the component with one less normal component. Accordingly, it follows that $u_{A}(N=-\delta) =u_{A}(N=+\delta),$ whereas $u_{N}(N=-\delta)=-u_{N}(N=+\delta).$ Likewise, $g_{AB}(N=-\delta)=g_{AB}(N=+\delta)$ and $K_{AB}(N=-\delta)=-K_{AB}(N=+\delta).$ Consequently, there will be quantities that are discontinuous across the brane and whose derivatives in $N$ generate singular distributions at the position of the brane. Integration of these contributions in the coordinate normal to the brane allows us to relate the induced geometry of the brane with the induced stress-energy localized therein. Hence, by extracting the singular contributions from the projected bulk equations and establishing the matching conditions, we obtain the equations of motion induced on the brane. We expand the $(AB)$ component of the Einstein equation using the Gauss-Codacci conditions in Eqs.~(\ref{eq:gc1}--\ref{eq:gc3}) as well as Eq.~(\ref{eq:rab}). Integrating along the $N$ direction across the position of the brane, we find that \begin{eqnarray} -\sigma g_{AB} &=&\lim_{\delta \rightarrow 0} \int^{+\delta}_{-\delta} dN \nabla_{N}\biggl\{ \mpf^3 \left( -K_{AB} +g_{AB}K\right)\cr &&+\lambda K\left[ ( \rho +p)u_{A}u_{B} +pg_{AB}\right] -\lambda \left( g_{AB}\nabla_{N} -K_{AB}\right)\rho \biggr\}, \end{eqnarray} where upon integration by parts non-singular terms arise which vanish over the infinitesimal integration but which contribute to the effective equation of motion. For this we assume the energy density, the pressure and the fluid velocity to be continuous. This implies that only derivatives of second and higher order in the $N$ direction can be singular on the brane and consequently survive the infinitesimal integration. 
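The parity rules just stated can be summarized in the jump they imply for the extrinsic curvature upon the infinitesimal integration (an added recap of the rules stated in the text, used repeatedly in what follows):

```latex
% Z_2 parity about the brane, as stated above:
%   even in N: g_{AB}, u_{A};  odd in N: u_{N}, K_{AB}, K.
\begin{eqnarray}
\lim_{\delta \rightarrow 0} \int^{+\delta}_{-\delta} dN\, \nabla_{N}K_{AB}
 =K_{AB}(N=+\delta) -K_{AB}(N=-\delta)
 =2K_{AB}\big|_{N=+\delta} ~.
\end{eqnarray}
```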
From these considerations there follows the Israel matching condition \begin{eqnarray} \left(\mpf^3 -\lambda \rho\right)\left( -K_{AB} +g_{AB}K\right) +\lambda K(\rho +p)\left( u_{A}u_{B} +g_{AB}\right) -\lambda g_{AB}\nabla_{N}\rho =-{1\over 2}\sigma g_{AB} \label{eq:israel:ab} \end{eqnarray} which, upon taking the trace, yields \begin{eqnarray} \left(\mpf^3 -\lambda \rho\right)K( -1+d) +\lambda K( \rho +p)\left( u_{C}^2 +d\right) -\lambda d\nabla_{N}\rho =-{d\over 2}\sigma~. \label{eqn:K} \end{eqnarray} Another useful result is \begin{eqnarray} \left(\mpf^3 -\lambda \rho\right)K_{AB} =K{1\over d}\left[ g_{AB}\left(\mpf^3 -\lambda \rho\right) +\lambda(\rho +p)\left( d u_{A}u_{B} -g_{AB}u_{C}^2\right)\right]. \label{eq:useful} \end{eqnarray} Substituting Eq.~(\ref{eq:israel:ab}) back in the $(AB)$ component of the Einstein equation, we obtain \begin{eqnarray} && \left( M_{P(5)}^{3} -\lambda \rho\right) \left[ G_{AB} -KK_{AB} +{1\over 2}g_{AB}\left( K^2 +K_{CD}K^{CD}\right)\right] +g_{AB}\Lambda_{(5)} \cr &-&\left[ {1\over 2}\left(1 +\lambda\left\{ R -K^2 -K_{CD}K^{CD}\right\}\right) +\lambda K\nabla_{N}\right] \left[ ( \rho +p)\left( u_{A}u_{B} +g_{AB}\right)\right]\cr &+&\left[ {1\over 2}g_{AB} +\lambda\left( \nabla_{A}\nabla_{B} -g_{AB}\nabla_{C}^2\right)\right]\rho =0~. \label{eq:ab} \end{eqnarray} From the $(AN)$ component of the Einstein equation we notice that \begin{eqnarray} G_{AN}= K_{AB|B} -K_{|A}= -\nabla_{B}\int dN G_{AB} =-\nabla_{B}{\cal T}_{AB}~, \label{eq:an} \end{eqnarray} where ${\cal T}_{AB}$ stands for the stress-energy tensor induced on the brane as given by the Israel matching condition in Eq.~(\ref{eq:israel:ab}). If we impose conservation of energy on the brane, it follows that $G_{AN}=0,$ which implies the condition \begin{eqnarray} \nabla_{A}\left[ K(\rho +p)(u_{A}u_{B} +g_{AB})\right] -\left[ g_{AB}\nabla_{N} -\left( K_{AB} -g_{AB}K\right)\right]\nabla_{A}\rho =0~. 
\label{eq:conservation} \end{eqnarray} Furthermore, equating the $(NN)$ component of the Einstein equation and integrating along the $N$ direction, we find that \begin{eqnarray} 0 &=&\lim_{\delta \rightarrow 0} \int^{+\delta}_{-\delta} dN \nabla_{N}\left\{ \rho K +K\left[ (\rho +p) u_N^2+p \right] \right\} =-2K (\rho + p)u_{C}^2 ~. \label{eq:israel:nn} \end{eqnarray} Substituting back in the $(NN)$ component of the Einstein equation, we find that \begin{eqnarray} &&\left( M_{P(5)}^{3} -\lambda \rho\right) {1\over 2}\left( -R +K^2 -K_{CD}K^{CD}\right) +\Lambda_{(5)}\cr &-&\left[ {1\over 2}\left(1 +\lambda\left\{ R -K^2 -K_{CD}K^{CD}\right\}\right) +\lambda K\nabla_{N}\right] \left[ ( \rho +p)\left( u_{N}^2 +1\right)\right]\cr &+&\left[ {1\over 2} -\lambda\left( \nabla_{C}^2 +K\nabla_{N}\right)\right]\rho =0~. \end{eqnarray} Moreover, substituting $\nabla_{N}\rho$ from the Israel matching condition for a time-like fluid velocity normalized so that $u_{A}^2+u_{N}^2=-1,$ it follows that \begin{eqnarray} &&\left( M_{P(5)}^3 -\lambda \rho\right) {1\over 2} \left[ -R +K^2\left( {2\over d} -1\right)-K_{CD}K^{CD}\right] +\Lambda _{(5)}\cr &+&\left[ {1\over 2}\left(1 +\lambda \left\{R -K^2 -K_{CD}K^{CD}\right\}\right) +\lambda K\nabla_N\right] \left[ (\rho +p)u_{C}^2\right]\cr &+&\left[ {1\over 2} -\lambda\nabla^2_C\right]\rho -{1\over d}\lambda K^2\left(\rho +p\right)\left( u_{C}^2 +d\right) -{1\over 2}K\sigma =0~. \label{eq:nn} \end{eqnarray} We treat analogously the equations of the perfect fluid in the bulk. The equation of motion, Eq.~(\ref{eq:fluid}), can be combined with the continuity equation, Eq.~(\ref{eq:continuity}), to yield \begin{eqnarray} \nabla _{\nu}\left[ \left( 1 +\lambda \dim 5 R\right)(\rho +p) \left( g_{\mu\nu} +u_{\mu}u_{\nu}\right)\right] -\left( 1 +\lambda \dim 5 R\right)g_{\mu\nu}\nabla_{\nu}\rho =0. 
\label{eq:fluid+continuity} \end{eqnarray} Substituting the expression for $\dim 5 R$ in Eq.~(\ref{eq:R^5}), we integrate both the parallel and the orthogonal components along the $N$ direction to obtain the corresponding boundary conditions on the brane. From the parallel component we find that \begin{eqnarray} 0 &=&\int ^{+\delta}_{-\delta}dN \nabla_{N}\biggl\{ -2\lambda \nabla_{A}\left[ K(\rho +p)\left( g_{AB} +u_{A}u_{B}\right)\right]\cr &&+\lambda K^2( \rho +p)u_{A}u_{N}\left[ g_{AB}\left( 1+{1\over d}\right) +{ {\lambda(\rho +p)}\over {d\left( M_{(5)}^3 -\lambda \rho\right)}} \left( du_{A}u_{B} -g_{AB}u_{C}^2\right)\right]\cr &&+\left[ 1 +\lambda\left( R -K^2 -K_{CD}K^{CD} -2K_{,N}\right)\right] (\rho +p)u_{N}u_{B} +2\lambda Kg_{AB}\nabla_{A}\rho\biggr\}~. \end{eqnarray} Here we encounter a third derivative along the $N$ direction of the induced metric $g_{AB}.$ For a continuous metric, the first derivative can be discontinuous, the second derivative can be a delta-like singularity and consequently the third derivative can be a double-peaked delta. When we integrate in $N$ we are left with a term in $K_{,N},$ which is proportional to the second derivative of the metric and thus potentially singular, evaluated at the end points along the normal direction which define the thickness of the brane. Due to the $\mathbb{Z}_{2}$--symmetry, however, whereas the delta singularity is even about the brane, with $K(N =-\delta) =-K(N =+\delta)$ and thus $\int dN \nabla_{N}K =2K,$ the double-peaked delta is odd, with $K_{,N}(N =-\delta) =+K_{,N}(N =+\delta)$ and thus $\int dN \nabla_{N}\nabla_{N}K =0.$ Consequently, when integrated along $N,$ only odd order derivatives of the metric along $N$ jump across the brane, thereby relating to the singular matter distribution at the location of the brane, while even order derivatives cancel at the end points. 
However, the term in question also contains the factor $u_{A}u_{N}$ which is odd about the brane, thus causing the integral to survive and yield $\int dN \nabla_{N}\left(K_{,N}u_{A}u_{N}\right) =2K_{,N}u_{A}u_{N}$ at $N=+\delta.$ On the other hand, since the boundary condition in Eq.~(\ref{eq:israel:nn}) imposes that either $u_{A}=0$ or $\rho +p =0$ or $K=0,$ then regardless of the case this term vanishes on the brane. Then, the parallel component of the boundary condition becomes \begin{eqnarray} &-&2\lambda \nabla_{A}\left[ K(\rho +p)\left( g_{AB} +u_{A}u_{B}\right)\right] +2\lambda Kg_{AB}\nabla_{A}\rho \cr &+&( \rho +p)u_{B}u_{N} \left[ 1 +\lambda\left\{ R +{1\over d}K^2\left( 1+{{\lambda(\rho +p)}\over {\mpf^3 -\lambda \rho}}u_{C}^2(d-1)\right) -K_{CD}K^{CD}\right\}\right] =0~.\qquad \label{eq:israel:a} \end{eqnarray} Substituting the boundary condition in Eq.~(\ref{eq:israel:a}) back in the parallel projection of Eq.~(\ref{eq:fluid+continuity}), and using also the energy conservation condition in Eq.~(\ref{eq:conservation}), we find for the induced equation for the fluid on the brane \begin{eqnarray} &&\nabla_{A}\left[ \left( 1 +\lambda\left\{ R -2K^2 -K_{CD}K^{CD} +2K\nabla_{N}\right\}\right) (\rho +p)\left( g_{AB} +u_{A}u_{B}\right)\right]\cr &+&2\lambda K(\rho +p)\left[ u_{A}u_{B}\left( K_{AB|C} -K_{AC|B}\right) +u_{A}u_{N}\left( K_{CD}K_{DC}g_{AB} +K_{AD}K_{DB}\right)\right] \cr &-&\lambda K^2\nabla_{A}\left[ (\rho +p)\left( u_{A}u_{B} +g_{AB}\right)\right]\cr &-&\lambda K^2\left[ g_{AB}\left( 1 +{1\over d}\right) +{1\over d}{ {\lambda(\rho +p)}\over {\mpf^3 -\lambda\rho}} \left( du_{A}u_{B} -g_{AB}u_{C}^2\right) \right] \nabla_{N}\left[(\rho +p)u_{A}u_{N}\right]\cr &-&\left[ 2\lambda K\left( K_{AB} -g_{AB}K\right) +g_{AB}\left(1 +\lambda\left\{ R -K^2 -K_{CD}K^{CD}\right\} \right)\right]\nabla_{A}\rho =0~. 
\label{eq:a} \end{eqnarray} The orthogonal component yields a trivial matching condition because of the continuity conditions across the brane of the quantities involved. We conclude that this component will only be important for the propagation of the fluid off the brane and across the bulk. The propagation on the brane is solely described by Eq.~(\ref{eq:a}), which already contains the continuity condition in Eq.~(\ref{eq:continuity}), with the possibility of propagation off into the bulk being constrained by the conservation condition in Eq.~(\ref{eq:conservation}). These are the induced equations on the brane and can be solved in the following way. The effective equations of motion in Eqs.~(\ref{eq:ab}) and (\ref{eq:a}) consist of a coupled system which must be solved together for the induced metric and for $\nabla_{N}\left[ (\rho +p)\left(u_{A}u_{B} +g_{AB}\right)\right].$ With these results, we can then solve the stress-energy conservation condition in Eq.~(\ref{eq:conservation}), derived from the $(AN)$ component of the Einstein equation, and the $(NN)$ component of the Einstein equation in Eq.~(\ref{eq:nn}) together for $\rho$ and the extrinsic curvature, constrained by the matching conditions in Eqs.~(\ref{eq:israel:ab}) and (\ref{eq:israel:a}), which then allow us to find the functional form of $p$ in terms of $\rho.$ From these equations we observe that the coupling of the curvature to the matter Lagrangian density yields a contribution to the effective Newton's constant on the brane. Moreover, it is only through the non-minimal coupling that matter in the bulk interacts with that localized on the brane, in this case the tension $\sigma$ only. Notice that if $\lambda=0$, i.e. 
in the absence of a non-minimal coupling of the curvature to the matter Lagrangian density, Eqs.~(\ref{eq:israel:a}) and (\ref{eq:a}) read respectively \begin{eqnarray} \rho +p = 0 \label{c.c.} \end{eqnarray} and \begin{eqnarray} \nabla_{A}(\rho +p)\left( g_{AB} +u_{A}u_{B}\right)-g_{AB}\nabla_{A}\rho =0~. \label{eq.all} \end{eqnarray} This means that the presence of a perfect fluid in the bulk space induces on the brane an equation of state characteristic of a cosmological constant, without, however, the quantities $\rho$ and $p$ characterizing the fluid being necessarily constant. \section{Results \label{sec:results}} In this section we proceed to study the set of equations derived above for the three particular cases encompassed by the boundary condition derived from the $(NN)$ component of the Einstein equation, Eq.~(\ref{eq:israel:nn}). \subsection{Reflecting Boundary Condition: $u_{A}=0, \nabla_{N}u_{N}=0$} One of the cases that we can consider is that of the brane consisting of a reflecting surface for the incoming fluid flux from the bulk. This translates into setting Dirichlet boundary conditions to the parallel component of the fluid velocity $u_{A}=0$ and Neumann boundary conditions to the normal component $\nabla_{N}u_{N}=0$ at the position of the brane. Moreover, the $\mathbb{Z}_{2}$--symmetry implies that $\nabla_{N}u_{A}=0,$ whereas boost and translation invariance on the brane implies that $\nabla_{B}u_{A}=0.$ The equations derived in the previous section become as follows. 
The $(AB)$ and the $(NN)$ components of the Einstein equation are given respectively by \begin{eqnarray} &&\left( \mpf^3 -\lambda \rho\right) \left[ G_{AB} -KK_{AB} +{1\over 2}g_{AB}\left( K_{CD}K_{CD} +K^2\right)\right] +g_{AB}\Lambda _{(5)}\cr &-&\left[{1\over 2}\left( 1 +\lambda\left\{ R -K^2 -K_{CD}K_{CD}\right\}\right) +\lambda K\nabla_{N}\right](\rho +p)g_{AB} \cr &+&\left[ {1\over 2}g_{AB} +\lambda\left( \nabla_{A}\nabla_{B} -g_{AB}\nabla_{C}^2\right)\right]\rho =0~, \label{eq:ab:caseA} \\ \cr &&\left( \mpf^3 -\lambda \rho\right) {1\over 2}\left[ -R +K^2\left( {2\over d} -1\right) -K_{CD}K_{CD}\right] +\Lambda _{(5)} \cr &+&\left[ {1\over 2} -\lambda\nabla_{C}^2\right]\rho -\lambda K^2\left(\rho +p\right) -{1\over 2}K\sigma =0~; \label{eq:nn:caseA} \end{eqnarray} the parallel component of the induced fluid equation and the condition of stress-energy conservation on the brane respectively by \begin{eqnarray} &&\nabla_{A}\left[ \left( 1 +\lambda\left\{ R -2K^2 -K_{CD}K^{CD} +2K\nabla_{N}\right\}\right) (\rho +p)g_{AB}\right] -\lambda K^2g_{AB}\nabla_{A}(\rho +p)\cr &-&\left[2\lambda K\left( K_{AB} -g_{AB}K\right) +g_{AB}\left( 1 +\lambda\left\{ R -K^2 -K_{CD}K^{CD}\right\} \right)\right]\nabla_{A}\rho =0~, \label{eq:a:caseA} \\ \cr &&{1\over d}(d -1)\left( \mpf^3 -\lambda \rho\right)\nabla_{B}K -\lambda\left( K_{AB} -{1\over d}g_{AB}K\right)\nabla_{A}\rho =0~. \label{eq:conservation:caseA} \end{eqnarray} We present also the matching conditions from both the $(AB)$ component of the Einstein equation, Eq.~(\ref{eq:israel:ab}), and the parallel component of the fluid equation, Eq.~(\ref{eq:israel:a}), respectively \begin{eqnarray} &&\left( \mpf^3 -\lambda \rho\right)\left( -K_{AB} +g_{AB}K\right) +\lambda K(\rho +p)g_{AB} -\lambda g_{AB}\nabla_{N}\rho =-{1\over 2}\sigma g_{AB}~ \label{eq:israel:ab:caseA} \end{eqnarray} and \begin{eqnarray} &&(\rho +p)\nabla_{A}K +K\nabla_{A}p=0~. 
\label{eq:israel:a:caseA} \end{eqnarray} In this case, the extrinsic curvature is intertwined with both the energy density and the pressure of the fluid via the non-minimal coupling $\lambda,$ being in addition sourced by the brane tension $\sigma.$ As in the case described next, the role of the brane tension seems superfluous for generating the discontinuity of the bulk geometry at the position of the brane and thus accounting for the singular presence of the brane in the bulk space. Although the system is intricately entangled, we have just enough equations and constraints to be able to solve it unambiguously given initial conditions. The procedure follows that suggested above for the general system. For further insight into the possible nature of such a bulk field, we draw attention to the next case. \subsection{Cosmological Constant Fluid: $\rho +p = 0$} Another case contemplated by Eq.~(\ref{eq:israel:nn}) is that of the fluid equation of state induced on the brane being $\rho +p=0,$ which corresponds to the bulk fluid inducing a cosmological constant on the brane. 
The $(AB)$ and $(NN)$ components of the induced Einstein equations become respectively \begin{eqnarray} &&\left( \mpf^3 -\lambda \rho\right) \left[ G_{AB} -KK_{AB} +{1\over 2}g_{AB}\left( K_{CD}K_{CD} +K^2\right)\right] +g_{AB}\Lambda _{(5)}\cr &-&\lambda \left( u_{A}u_{B} +g_{AB}\right)K\nabla_{N}(\rho +p)\cr &+&\left[ {1\over 2} g_{AB} +\lambda\left( \nabla_{A}\nabla_{B} -g_{AB}\nabla_{C}^2\right)\right]\rho =0~, \label{eq:ab:caseB} \\ \cr &&\left( \mpf^3 -\lambda \rho\right) {1\over 2}\left[ -R +K^2 \left( {2\over d} -1\right) -K_{CD}K_{CD}\right] +\Lambda _{(5)} \cr &+&\lambda u_{C}^2K\nabla_{N}(\rho +p) +\left[ {1\over 2} -\lambda\nabla_{C}^2\right]\rho -{1\over 2}K\sigma =0~, \label{eq:nn:caseB} \end{eqnarray} whereas the parallel component of the fluid equation and the stress-energy conservation condition become respectively \begin{eqnarray} &&\left( 1 +\lambda\left\{ R -2K^2 -K_{CD}K^{CD} +2K\nabla_{N}\right\}\right) \left[ \left( u_{A}u_{B} +g_{AB}\right)\nabla_{A}(\rho +p)\right]\cr &-&\lambda K^2\left(u_{A}u_{B} +g_{AB}\right)\nabla_{A}(\rho +p)\cr &+&\lambda\left[ 2\left(u_{A}u_{B} +g_{AB}\right)\nabla_{A}K +2K\nabla_{A}\left( u_{A}u_{B}\right) -K^2\left( 1 +{1\over d}\right)u_{B}u_{N}\right]\nabla_{N}(\rho +p) \cr &-&\left[2\lambda K\left( K_{AB} -g_{AB}K\right) +g_{AB}\left( 1 +\lambda\left\{ R -K^2 -K_{CD}K^{CD}\right\} \right)\right]\nabla_{A}\rho =0~,\qquad \label{eq:a:caseB} \\ \cr &&{1\over d}(d -1)\left( \mpf^3 -\lambda \rho\right)\nabla_{B}K -\lambda\left( K_{AB} -{1\over d}g_{AB}K\right)\nabla_{A}\rho \cr &-&\lambda K\left(u_{A}u_{B} -u_{C}^2g_{AB}\right)\nabla_{A}(\rho +p) =0~. \label{eq:conservation:caseB} \end{eqnarray} The corresponding matching conditions are \begin{eqnarray} &&\left( \mpf^3 -\lambda \rho\right)\left( -K_{AB} +g_{AB}K\right) -\lambda g_{AB}\nabla_{N}\rho =-{1\over 2}\sigma g_{AB}~ \label{eq:israel:ab:caseB} \end{eqnarray} and \begin{eqnarray} && \nabla_{A}p +u_{A}u_{B}\nabla_{B}(\rho +p) =0~. 
\label{eq:israel:a:caseB} \end{eqnarray} We now proceed to investigate this case in more detail. Since $\rho +p=0$ everywhere on the brane, boost and translation invariance imply that $\nabla_{A}(\rho +p)=0$ everywhere on the brane. Since both $\rho +p$ and its first derivative along the parallel directions to the brane vanish on the brane, its second derivative must vanish as well. However, terms in $\nabla_{N}(\rho +p)$ or $\nabla_{N}\nabla_{A}(\rho+p)$ do not necessarily vanish. From Eq.~(\ref{eq:israel:a:caseB}) it follows that $\nabla_{A}p=\nabla_{A}\rho=0,$ and from Eq.~(\ref{eq:conservation:caseB}) that $\nabla_{B}K=0.$ Moreover, since $\nabla ^2(\rho +p)=0$ implies that $\nabla ^2\rho =-\nabla ^2p,$ whereas $\nabla \rho =\nabla p$ implies that $\nabla ^2\rho =\nabla ^2p,$ we must have that $\nabla ^2\rho =\nabla ^2p=0.$ We can then solve the Einstein equation, Eq.~(\ref{eq:ab:caseB}), and the fluid equation, Eq.~(\ref{eq:a:caseB}), iteratively for the evolution of the induced metric and $\nabla_{N}(\rho +p)$ on the brane, with the sole ambiguity residing in the fluid velocity in the bulk. The value of the extrinsic curvature will then be given by Eq.~(\ref{eq:nn:caseB}). We also note that with the bulk field behaving on the brane as an effective cosmological constant the role of the brane tension becomes superfluous. Furthermore, should we allow for $\nabla_{A}(\rho +p)\not=0,$ then the discontinuity of the extrinsic curvature across the brane would be further sourced by the evolution of $\rho+p$ on the brane and thus generated dynamically according to the equation of motion for the fluid induced on the brane. This would be an interesting idea to pursue further. Despite the increased complexity of the problem, the coupled system would still be solvable. 
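For completeness, an intermediate step added here: taking the trace of the matching condition in Eq.~(\ref{eq:israel:ab:caseB}) fixes the normal derivative of the energy density at the brane in terms of the trace of the extrinsic curvature and the brane tension,

```latex
% Added intermediate step: trace of Eq. (\ref{eq:israel:ab:caseB}),
% where d is the trace of g_{AB}, i.e. the brane dimension.
\begin{eqnarray}
(d -1)\left( \mpf^3 -\lambda \rho\right)K -\lambda d\,\nabla_{N}\rho
 =-{d\over 2}\sigma
\quad\Longrightarrow\quad
\nabla_{N}\rho ={1\over \lambda d}\left[ (d -1)\left( \mpf^3 -\lambda \rho\right)K
 +{d\over 2}\sigma\right] ~,
\end{eqnarray}
```

consistent with Eq.~(\ref{eqn:K}) upon setting $\rho +p=0.$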
\subsection{Vanishing Extrinsic Curvature: $K=0$} The remaining case is that of a vanishing extrinsic curvature, where the $(AB)$ and $(NN)$ components of the effective Einstein equations become \begin{eqnarray} &&\left( \mpf^3 -\lambda \rho\right)G_{AB} -{1\over 2}\left(1 +\lambda R\right) \left( u_{A}u_{B} +g_{AB}\right)(\rho +p) +g_{AB}\Lambda _{(5)}\cr &+&\left[ {1\over 2} g_{AB} +\lambda\left( \nabla_{A}\nabla_{B} -g_{AB}\nabla_{C}^2\right)\right]\rho =0~, \label{eq:ab:caseC} \\ \cr &&-\left( \mpf^3 -\lambda \rho\right) {1\over 2}R +{1\over 2}\left( 1 +\lambda R\right)(\rho +p)u_{C}^2 +\Lambda _{(5)} +\left[ {1\over 2} -\lambda\nabla_{C}^2\right]\rho =0~, \label{eq:nn:caseC} \end{eqnarray} and the effective equations for the fluid \begin{eqnarray} && \nabla_{A}\left[ (\rho +p)\left( u_{A}u_{B} +g_{AB}\right)\right] -g_{AB}\nabla_{A}\rho =0~, \label{eq:a:caseC} \\ \cr &&(\rho +p)\left( u_{A}u_{B} +g_{AB}\right)\nabla_{A}K =0~, \label{eq:conservation:caseC} \end{eqnarray} where we used the Israel matching condition \begin{eqnarray} \lambda\nabla_{N}\rho ={1\over 2}\sigma~. \label{eq:israel:ab:caseC} \end{eqnarray} For $K=0,$ the brane tension is supported by the discontinuity in the energy density of the bulk fluid across the two sides about the brane. The matching condition for the fluid equation then yields \begin{eqnarray} (\rho +p)u_{B}u_{N}\left( 1+\lambda R\right)=0~ \label{eq:israel:a:caseC} \end{eqnarray} and consequently $R=-1/\lambda.$ \footnote{The other possibility, $\rho +p=0,$ would be but a particular case of the bulk fluid behaving as a cosmological constant on the brane, as discussed previously.} Then Eq.~(\ref{eq:nn:caseC}) reduces to \begin{eqnarray} \nabla_{C}^2\rho -{1\over \lambda}\left( {M^3_{P(5)}\over \lambda} +\Lambda _{(5)}\right)=0~. 
\label{eq:nn:caseC2} \end{eqnarray} This equation can be solved for $\rho$ given initial conditions at the intersection of the brane with the bulk past infinity, obtaining the evolution on the brane of the energy density of the bulk perfect fluid in terms of the parameters of the bulk space. The solution must also reproduce a consistent bulk cosmological constant, with $\Lambda _{(5)}<0$ in the case of an anti-de Sitter bulk. Upon substitution of Eq.~(\ref{eq:nn:caseC}), Eq.~(\ref{eq:ab:caseC}) becomes \begin{eqnarray} \left( M^3_{P(5)} -\lambda\rho\right)\left( G_{AB} -{1\over {2\lambda}}g_{AB}\right) +\lambda\nabla_{A}\nabla_{B}\rho =0~. \label{eq:ab:caseC2} \end{eqnarray} From Eq.~(\ref{eq:conservation:caseC}), we must have that $\nabla_{A}K=0.$ We can then use the solution for $\rho$ to solve Eq.~(\ref{eq:ab:caseC2}) for $g_{AB}$ and Eq.~(\ref{eq:a:caseC}) for $p$ given $u_{A}.$ Hence, for $K=0$ the system decouples and can be solved straightforwardly. This case is, however, too limited since it does not constrain the evolution of $\nabla_{N}\rho$, which generates the brane tension $\sigma$ in Eq.~(\ref{eq:israel:ab:caseC}). \section{Conclusions\label{sec:conclusions}} In this paper we have considered a modified gravity model where the curvature scalar couples non-minimally to the matter Lagrangian density, which here we realize for the case of a perfect fluid. As discussed in Ref. \cite{Bertolami:2007gv}, in four dimensions this model can potentially account for the observed rotation curves of galaxies without recourse to dark matter and suggests a solution to the Pioneer anomaly. In addition to this coupling, we have also considered an extra spatial dimension, so that our spacetime is embedded in a five-dimensional bulk space where gravity is described by a five-dimensional Einstein equation. We find that the resulting model is well defined for the considered physical variables and can be solved for given initial conditions. 
The new terms that arise from the bulk-brane decomposition yield quite interesting consequences. We found three particular cases which conform to the matching conditions upon the assumption of $\mathbb Z_2$--symmetry about the position of the brane and investigated their contribution to the intricately entangled system of equations of motion therein induced. These cases are the following: a) reflecting boundary conditions on the brane, $u_{A}=\nabla_{N}u_{N}=0;$ b) an induced fluid equation of state characteristic of a cosmological constant, $\rho +p=0;$ and c) a vanishing extrinsic curvature, $K=0.$ Both cases a) and b) seem to render the presence of a brane tension $\sigma$ superfluous for generating the discontinuity of the bulk geometry at the position of the brane, where the energy density $\rho$ and the pressure $p$ of the fluid can source the discontinuity of the extrinsic curvature in a dynamical manner. Furthermore, case b) can be regarded as a generalization of a cosmological constant scenario, where the bulk fluid induces on the brane a cosmological constant capable of supporting the presence of the brane. This implies that the evolution on the brane of the energy density of the bulk perfect fluid determines the behaviour of the cosmological constant term on the brane. It is well known that an evolution in terms of cosmic time as $\rho \propto t^{-2}$ is consistent with the value of the vacuum energy density at present \cite{Bertolami86,Sahni}. Furthermore, this positive contribution for the brane cosmological constant can have implications for inflation and for the late time acceleration of the universe. This contribution can also have a bearing on the cosmological constant problem since the natural background for fundamental theories, such as supergravity and superstring/M-theory, is the anti-de Sitter space, which on the brane requires a compensating de Sitter contribution. The feasibility of a scenario along these lines will be considered elsewhere. 
Finally, case c) allows for the decoupling of the system of equations, with the energy density relating to the bulk cosmological constant $\Lambda_{(5)}.$ The presence of the brane is supported by the interaction of the brane tension with the discontinuity across the brane of the fluid energy density, which on the brane is governed dynamically by a completely defined equation for $\rho.$
\section{Introduction}\label{s1} The notion of conjugacy in semigroups can be generalized from the corresponding notion for groups in several ways. Perhaps the two most natural and commonly used notions are the relations $\sim_G$ and $\sim$, whose definitions below are taken from~\cite{La}. Let $S$ be a monoid and $G$ the group of units of $S$. The relation $\sim_G$, called {\em $G$-conjugacy}, is defined as $a\sim_G b$ if and only if $a=g^{-1}bg$ for some $g\in G$. Let now $S$ be a semigroup. We call the elements $a,b\in S$ {\em primarily $S$-conjugate} if there exist $x,y\in S$ such that $a=xy,$ $b=yx$. This will be denoted by $\sim_{pS}$ or just by $\sim_p$ when this does not lead to ambiguity. The relation $\sim_p$ is reflexive and symmetric but not transitive in general. Denote by $\sim$ the transitive closure of the relation $\sim_p$. If $a\sim b$ then $a$ and $b$ are said to be {\em $S$-conjugate} or just {\em conjugate}. It is easy to see that in the case of a group both $\sim_G$ and $\sim$ coincide with the usual group conjugacy. Besides, for a monoid $S$ there is an inclusion $\sim_G\subset \sim$. The structure of conjugacy and $G$-conjugacy classes for some specific regular semigroups has been studied in a number of papers (\cite{GK}, \cite{KM}, \cite{KM1}, \cite{Ch}, \cite{OS}), see also the monograph \cite{Li}. For the relation of conjugacy the structure of the classes usually turns out to be more complicated than for the relation of $G$-conjugacy. The present paper is devoted to the systematic study of the conjugacy relation in regular epigroups (an {\em epigroup}, or a {\em group-bound semigroup}, is a semigroup such that some power of each of its elements lies in a subgroup) and is organized as follows. In the Preliminaries we collect some notation used throughout the paper and cite some well-known facts about the structure of ${\mathcal D}$-classes. 
In Section~\ref{s3} we establish a criterion of conjugacy of two group elements of a given semigroup. In Section~\ref{s4} we establish Theorem~\ref{main}, which gives a criterion of conjugacy of two group-bound elements of a regular semigroup, and Theorem~\ref{p3}, which provides a criterion of conjugacy in terms of $G$-conjugacy for factorizable inverse epigroups. In Section~\ref{appl} we show that conjugacy criteria for many important specific examples of regular epigroups can be derived in a unified way from our main results. In particular, we give short and very transparent proofs of some known conjugacy criteria. Besides, for the first time we formulate and prove conjugacy criteria for the semigroups $\mathrm{PAut}(V)$, $\mathrm{PEnd}(V)$, $Fin{\mathcal {ISA}}(X)$, $Fin{\mathcal A}(X)$ and $Fin{\mathcal {PA}}(X)$ (the notation is explained at the appropriate places in the paper). In the case when a regular semigroup $S$ is not group-bound, the problem of describing the conjugacy classes of $S$ seems to be much more complicated; in particular, even for such a classical semigroup as ${\mathcal T}({\mathbb N})$ the conjugacy classes have not been classified yet. At the same time, for the semigroup ${\mathcal {IS}}({\mathbb N})$, which is also not an epigroup, the conjugacy classes are described (see~\cite{KM}). In Appendix~A we show that despite the finiteness of $X$ the semigroup ${\mathcal {ISA}}(X)$ of partial automatic permutations over a finite alphabet $X$ ($|X|\geq 2$) is not an epigroup. This implies, in particular, that in ${\mathcal {ISA}}(X)$ there are conjugacy classes without group elements, showing that the conjugacy criterion for the semigroup ${\mathcal {ISA}}(X)$ announced in Theorems 3 and 4 of~\cite{OS} fails to give a sufficient condition of conjugacy. At the same time we have substantial evidence that the description of conjugacy classes in ${\mathcal {ISA}}(X)$ can be obtained using the methods from~\cite{GNS, KM}.
A paper devoted to this question is now in preparation. \section{Preliminaries} Let $S$ be a semigroup and $a\in S$. The class containing $a$ with respect to Green's relation $\H$ ($\L$, ${\mathcal R}$, ${\mathcal D}$, ${\mathcal J}$) will be denoted by $H_a$ ($L_a$, $R_a$, $D_a$, $J_a$). The following well-known facts about the structure of ${\mathcal D}$-classes will be used and referred to in the sequel. Their proofs can be found, for example, in~\cite{Hig}. \begin{proposition}[see \cite{Hig}, Theorem 1.2.5, p.~18]\label{d1} Let $a,b\in S$. Then $ab\in R_a\cap L_b$ if and only if $R_b\cap L_a$ contains an idempotent. In particular, the triple $a, b, ab$ belongs to the same $\H$-class if and only if this $\H$-class is a group. \end{proposition} \begin{proposition}[see \cite{Hig}, Theorems 1.2.7 and 1.2.8, pp.~18, 19]\label{d3} Let $e,f \in S$ be idempotents and $e{\mathcal D} f$. Then for any $t\in R_e\cap L_f$ there is an inverse $t'$ of $t$ such that $t'\in R_f\cap L_e$. Furthermore, the maps $\rho_t\circ \lambda_{t'}:$ $H_e\to H_f$ and $\rho_{t'}\circ \lambda_t:$ $H_f\to H_e$ defined via $x\mapsto t'xt$ and $x\mapsto txt'$, respectively, are mutually inverse isomorphisms. \end{proposition} Recall that an element $a\in S$ is said to be a {\em group element} provided $a$ belongs to some subgroup of $S$. It is well known (and easily seen) that for a group element $a\in S$ its $\H$-class $H_a$ is a group (in fact, $H_a$ is a maximal subgroup of $S$). Denote by $a^{-1}$ the (group) inverse of $a$ in $H_a$. If there exists $t\in{\mathbb N}$ such that $a^t$ is a group element, then $a$ is called a {\em group-bound element}. $S$ is called an {\em epigroup} (or a {\em group-bound semigroup}) provided that each element of $S$ is group-bound. The following fact is known and is easily proved. \begin{lemma}\label{ll1} Let $a\in S$ and $t\in{\mathbb N}$. The following statements are equivalent. \begin{enumerate} \item\label{one} $a^k\H a^t$ for some $k>t$.
\item\label{two} $a^i\H a^t$ for all $i\geq t$. \item\label{three} $H_{a^t}$ is a group. \end{enumerate} \end{lemma} Let $a\in S$ be a group-bound element and let $t\in {\mathbb N}$ be such that $H_{a^t}$ is a group. It follows from Lemma~\ref{ll1} that we can correctly define $e_a$ (the notation comes from \cite{Sh1}) to be the identity element of the group $H_{a^t}$. Using Lemma~\ref{ll1} one can easily obtain the following (known) useful statement. \begin{corollary}\label{c1} Suppose $a$ is a group-bound element of $S$. Then $e_aa=ae_a$ and $ae_a\H e_a$. In particular, $ae_a$ is a group element. \end{corollary} \section{Conjugacy criterion for group elements}\label{s3} We start with a conjugacy criterion for group elements of an arbitrary semigroup $S$, generalizing a similar result obtained earlier in~\cite{KM} for the case when the semigroup $S$ is finite. Recall that elements $a,b\in S$ are said to be {\em mutually inverse} provided that $a=aba$ and $b=bab$. \begin{theorem}\label{pp1} Let $S$ be a semigroup and $a,b\in S$ group elements. Then \begin{enumerate} \item $a\sim_p b$ if and only if there exists a pair of mutually inverse elements $u,v\in S$ such that $b=uav$ and $a=vbu$. \item $a\sim b$ if and only if $a\sim_p b$. \end{enumerate} \end{theorem} To prove this theorem we will need the following six lemmas. \begin{lemma}\label{aux} $a\sim_p b$ implies $a^n\sim_p b^n$ for each $n\geq 1$. \end{lemma} \begin{proof} It is enough to note that if $a=xy$ and $b=yx$ then $a^n=x(yx)^{n-1}\cdot y$ and $b^n=y\cdot x(yx)^{n-1}$, $n\geq 2$. \end{proof} \begin{lemma}\label{lem:new} Suppose $a, b\in S$ and $a\sim_p b$. If $b$ belongs to a group, then so does $a^2$. \end{lemma} \begin{proof} Assume $a=ts$, $b=st$ for certain $t,s\in S$. That $b=e_{b}be_{b}=e_{b}ste_{b}$ implies $b\in e_{b}sS^1$. Besides, $e_{b}s=bb^{-1}e_{b}s\in bS^1$, whence $b{\mathcal R} e_{b}s$. Analogously one shows that $b\L te_{b}$.
Therefore, $e_{b}s\cdot te_{b} \in R_{e_{b}s}\cap L_{te_{b}}$, and thus in view of Proposition~\ref{d1} the class $L_{e_{b}s}\cap R_{te_{b}}$ contains an idempotent. Now, since $L_{te_{b}}\cap R_{e_{b}s}=H_{b}$ is a group, it follows that $te_{b}\cdot e_{b}s = te_{b}s\in R_{te_{b}}\cap L_{e_{b}s}$, so that $te_{b}s$ is a group element. This implies $(te_{b}s)^2\H te_{b}s$. Therefore, since $$a^{2}=tsts=tbs=te_{b}be_{b}s=(te_{b}s)^2,$$ we have that $a^{2}$ is a group element. \end{proof} Say that $c, d\in S$ are {\em conjugate in at most $k$ steps}, $k\geq 1$, provided that there are $c=c_0, c_1,\dots, c_k=d$ such that $c_i\sim_p c_{i+1}$, $0\leq i\leq k-1$. \begin{lemma}\label{lem:new1} Suppose $a, b\in S$ and $a\sim b$. If $b$ belongs to a group, then so does some power of $a$. \end{lemma} \begin{proof} Let $k\geq 1$ be such that $a$ and $b$ are conjugate in at most $k$ steps. We show by induction on $k$ that $a^{2^k}$ is a group element. For $k=1$ the statement follows from Lemma~\ref{lem:new}. Assume now that $m\geq 1$ and the statement is proved for $k=m$. Let $k=m+1$. Fix $a=c_0, c_1,\dots, c_k=b$ such that $c_i\sim_p c_{i+1}$, $0\leq i\leq k-1$. Since $c_{k-1}\sim_p b$, the element $c_{k-1}^2$ is a group element by Lemma~\ref{lem:new}. Besides, $a^{2}\sim_p c_1^2\sim_p\dots \sim_p c_{k-1}^2$ due to Lemma~\ref{aux}, which means that $a^{2}$ and $c_{k-1}^2$ are conjugate in at most $k-1=m$ steps. Applying the inductive hypothesis we obtain that $(a^{2})^{2^{k-1}}=a^{2^k}$ is a group element, as required. \end{proof} \begin{corollary}\label{epi} Suppose $a, b\in S$ and $a\sim b$. If $b$ is group-bound then so is $a$. \end{corollary} \begin{proof} The statement follows from Lemma~\ref{aux} and Lemma~\ref{lem:new1}. \end{proof} \begin{lemma}\label{l1} Suppose that $a,b\in S$ are group elements and $a\sim_p b$. Then there exist $x,y\in S$ such that $a=xy$, $b=yx$ and $x\in R_a\cap L_b$, $y\in R_b\cap L_a$.
\end{lemma} \begin{proof} Since $a\sim_p b$, we have $a=st$ and $b=ts$ for certain $s,t\in S$. It follows that $te_a\L a{\mathcal R} e_as$. Then $e_as\cdot te_a \in R_{e_as}\cap L_{te_a}$, which implies that $L_{e_as}\cap R_{te_a}$ contains an idempotent by Proposition~\ref{d1}. Since $L_{te_a}\cap R_{e_as}=H_{a}$ is a group, it follows that $te_a\cdot e_as = te_as\in R_{te_a}\cap L_{e_as}$, so that $te_as$ is a group element. This implies $(te_as)^2\H te_as$. Therefore, in view of $$(te_as)^2=t\cdot e_aste_a\cdot s=tsts=b^2\H b,$$ we get $te_as\H b$. Hence $te_a{\mathcal R} te_as\H e_b$, whence $e_bte_a=te_a$. Analogously, $se_bt\H a$ and $e_ase_b=e_as$. But then $$a=e_as\cdot te_a=e_ase_b\cdot e_bte_a=e_a\cdot se_bt\cdot e_a=se_bt$$ and analogously $b=te_as$. Set $x=e_ase_b$, $y=e_bte_a$. We obtain $a=xy$, $b=yx$ and $x\in R_a\cap L_b$, $y\in R_b\cap L_a$, as required. \end{proof} \begin{lemma}\label{la1} Let $a,b\in S$ be two group-bound elements. Then $a\sim_p b$ implies $ae_a\sim_p be_b$. \end{lemma} \begin{proof} Let $n\in {\mathbb N}$ be chosen such that $a^n$ and $b^n$ are group elements. Fix $x,y\in S$ such that $a=xy$ and $b=yx$. Then $$ a^{n+1}=x(yx)^ny=xb^ny=xb^ne_by=a^nxe_by=a^ne_axe_by. $$ Multiplying both sides of this equality by $(a^n)^{-1}$ from the left we obtain $$e_aa=(a^n)^{-1}a^{n+1}=e_axe_by.$$ Since $e_aa=ae_a$ by Corollary~\ref{c1}, it follows that $ae_a=e_axe_by$. Similarly, $be_b=e_bye_ax$. These two equalities yield $ae_a=(e_axe_b)(e_bye_a)$ and $be_b=(e_bye_a)(e_axe_b)$. Therefore, $ae_a\sim_p be_b$. \end{proof} \begin{lemma}\label{l2} Let $a,b \in S$ be group elements satisfying $a\H b$ and $a\sim_p b$. Then there exists $h\in H_a$ such that $a=h^{-1}bh$. \end{lemma} \begin{proof} By Lemma~\ref{l1} $a=hg$, $b=gh$ for some $h,g\in H_a$. Thus $h^{-1}a=g=bh^{-1}$, which implies $a=hbh^{-1}=(h^{-1})^{-1}b\,h^{-1}$, as required. \end{proof} Now we are ready to prove Theorem~\ref{pp1}. \begin{proof}[Proof of Theorem~\ref{pp1}] {\em (1).} {\em Necessity.} Let $a\sim_p b$.
Fix a pair of mutually inverse elements $t\in R_a\cap L_b$ and $t'\in R_b\cap L_a$ (this is possible by Proposition~\ref{d3}). In particular, $e_a=tt'$ and $e_b=t't$. Fix also some $x\in R_a\cap L_b$ and $y\in R_b\cap L_a$ such that $a=xy$ and $b=yx$ (such elements exist by Lemma~\ref{l1}). Then $b\H t'at=t'x\cdot yt$ and $yt\cdot t'x=ye_bx=yx=b$. It follows that $t'at\sim_p b$ and $t'at\H b$. Now Lemma~\ref{l2} ensures that there is $g\in H_b$ such that $b=g^{-1}t'atg$. Set $u=g^{-1}t'$, $v=tg$. Since $L_g\cap R_{t'}=H_g$ contains an idempotent, we have $u=g^{-1}t'\in R_g\cap L_{t'} =H_{t'}$. Similarly, $v\in H_{t}$. Furthermore, $uvu=g^{-1}t'tgg^{-1}t'= u$ and $vuv=tgg^{-1}t'tg=v$. Thereby, using Proposition~\ref{d3}, we have that $\rho_u\circ\lambda_{v}$ is an isomorphism from $H_a$ to $H_b$. It remains to note that $b=uav$ and $a=vbu$. {\em Sufficiency.} Suppose $b=uav$ and $a=vbu$, where $u,v$ are mutually inverse. Then $b=uvbuv$, which implies that $uvb=uvbuv$. The two previous equalities imply $b=uvbuv=uvb$. Denote $s=vb$, $t=u$. Then $st=a$ and $ts=uvb=b$. Hence, $a\sim_p b$. {\em (2).} Clearly, we only have to show that $a\sim b$ implies $a\sim_p b$. Suppose that $a$ and $b$ are conjugate in at most $n$ steps, and fix $a=a_0, a_1,\dots, a_n=b$ such that $a_i\sim_p a_{i+1}$, $0\leq i\leq n-1$. Corollary~\ref{epi} implies that all $a_i$ are group-bound. Apply induction on $n$. If $n=1$ there is nothing to prove. Let $n=2$. Suppose $a\sim_p a_1$, $a_1\sim_p b$. By the first statement of this theorem $a=ta_1s$, where $ts=e_a$, $st=e_{a_1}$, and $a_1=ubv$, where $uv=e_{a_1}$, $vu=e_b$. Then $a=tubvs$, $b=va_1u=vsatu$ and $tuvs=te_{a_1}s=ts=e_a$, $vstu=ve_{a_1}u=vu=e_b$. Let $n\geq 3$ and assume that any two group elements which are conjugate in at most $k$ steps with $k\leq n-1$ are primarily conjugate.
It follows from Lemma~\ref{la1} that $$a=a_0e_{a_0}\sim_p a_1e_{a_1}\sim_p\dots\sim_p a_ne_{a_n}=b.$$ Note that all $a_ie_{a_i}$ are group elements by Corollary~\ref{c1}. Then $a\sim_p a_{n-1}e_{a_{n-1}}$ by the inductive assumption. It follows that $a\sim_p a_{n-1}e_{a_{n-1}}\sim_p b$, and the inductive assumption now implies that $a\sim_p b$. \end{proof} The following statements are direct consequences of Theorem~\ref{pp1}. \begin{corollary}\label{c2} Let $S$ be a semigroup. Suppose $a,b\in S$ are group elements. Then $a\sim b$ implies $a{\mathcal D} b$. \end{corollary} \begin{corollary} Let $S$ be a completely regular semigroup. Then the relations $\sim_p$ and $\sim$ on $S$ coincide. In particular, $\sim_p$ is an equivalence relation. \end{corollary} \begin{corollary} Let $S$ be a band. Then $a\sim b$ if and only if $a{\mathcal D} b$. \end{corollary} \section{The general case}\label{s4} To prove the results of this section we will use the results of the previous one, together with one important observation, with which we start. \begin{proposition}\label{key} Let $S$ be a regular semigroup and $a\in S$ a group-bound element. Then $a\sim ae_a$. \end{proposition} \begin{proof} Let $t$ be the least positive integer such that $a^t$ is a group element. For the case $t=1$ the statement is obvious, as $ae_a=a$. Suppose $t\geq 2$. For each $i$, $1\leq i\leq t-1$, denote by $\alpha_i$ any element which is inverse of $a^i$ (such an element exists since $S$ is regular), and by $\alpha_t$ the element $(a^t)^{-1}$, which is inverse of $a^t$ in the group $H_{a^t}$. Put $c_0=a$, $c_i=a^{i+1}\alpha_i$, $1\leq i\leq t$. Note that for $1\leq i\leq t-1$ we have \begin{equation}\label{a1} a^i\alpha_i\cdot a^{i+1}\alpha_{i+1}=a^i\alpha_ia^i\cdot a\alpha_{i+1}=a^{i+1}\alpha_{i+1}. \end{equation} Fix $i$, $0\leq i\leq t-1$, and let $x=c_i$, $y=a^{i+1}\alpha_{i+1}$. For $1\leq i\leq t-1$, using~(\ref{a1}) we obtain $$xy=a\cdot a^i\alpha_i\cdot a^{i+1}\alpha_{i+1}=a\cdot a^{i+1}\alpha_{i+1}=c_{i+1};$$ $$yx=a^{i+1}\alpha_{i+1}\cdot a^{i+1}\alpha_i=a^{i+1}\alpha_i=c_i,$$ while for $i=0$ one checks directly that $xy=a\cdot a\alpha_1=c_1$ and $yx=a\alpha_1\cdot a=a=c_0$. It follows that $c_i\sim_p c_{i+1}$, $0\leq i\leq t-1$. Therefore, $a=c_0\sim c_t=a^{t+1}(a^t)^{-1}=ae_a$. \end{proof} \begin{corollary}\label{c5} Let the semigroup $S$ be regular and $a,b\in S$ be group-bound elements. Then $a\sim b$ if and only if $ae_a\sim be_b$. \end{corollary} \begin{theorem}\label{main} Let $S$ be a regular epigroup and $a,b\in S$. Then $a\sim b$ if and only if there exists a pair of mutually inverse elements $u,v\in S$ such that $ae_a=u\cdot be_b\cdot v$ and $be_b=v\cdot ae_a\cdot u$. \end{theorem} \begin{proof} The statement follows from Corollary~\ref{c1}, Theorem~\ref{pp1} and Corollary~\ref{c5}. \end{proof} \begin{corollary} Let $S$ be a regular semigroup with the zero element $0$. Then any two nilpotent elements are conjugate, and if $a\sim b$ and $a$ is nilpotent then $b$ is also nilpotent. \end{corollary} \begin{proof} Suppose that $a,b$ are nilpotent. Then $ae_a=be_b=0$, so that $a\sim b$ by Theorem~\ref{main}. Suppose now that $a$ is nilpotent and $a\sim b$. This and Corollary~\ref{c5} imply that $be_b\sim ae_a=0$. It follows now from Corollary~\ref{c2} that $be_b{\mathcal D} 0$. Thus $be_b=0$, and hence $e_b\H be_b=0$, so that $e_b=0$. Therefore, $b$ is nilpotent. \end{proof} Recall (see~\cite{How}, p.~199) that an inverse semigroup $S$ with the group of units $G$ is called {\em factorizable} provided that for each $s\in S$ there is $g\in G$ such that $s\leq g$ with respect to the natural partial order on $S$, i.e., $ss^{-1}=sg^{-1}$. The following theorem provides a characterization of conjugacy in terms of $G$-conjugacy for the class of factorizable inverse epigroups. \begin{theorem}\label{p3} Let $S$ be a factorizable inverse epigroup with the identity element $e$ and the group of units $G$. Let $a,b\in S$. Then $a\sim b$ if and only if $ae_a\sim_G be_b$.
\end{theorem} \begin{proof} Since $\sim_G\subset \sim$, in view of Corollary~\ref{c5} it is enough to prove only that $a\sim b$ implies $ae_a\sim_G be_b$. Suppose $a\sim b$. It follows from Theorem~\ref{main} (and the proof of Theorem~\ref{pp1}) that $ae_a=s\cdot be_b\cdot t$ and $be_b=t\cdot ae_a\cdot s$ for some mutually inverse elements $s\in R_{ae_a}\cap L_{be_b}$ and $t\in R_{be_b}\cap L_{ae_a}$. Since $S$ is an inverse semigroup, it follows that $t$ coincides with $s^{-1}$, the (unique) inverse of $s$. That $s^{-1}s\H be_b$ yields $s^{-1}s\cdot be_b\cdot s^{-1}s=be_b$. Let $g\in G$ be such that $s\leq g$. Then $gs^{-1}=(sg^{-1})^{-1}=(ss^{-1})^{-1}=ss^{-1}$, whence $s=gs^{-1}s$ and $s^{-1}=s^{-1}sg^{-1}$. Therefore $$ae_a=s\cdot be_b\cdot s^{-1}=gs^{-1}s\cdot be_b\cdot s^{-1}sg^{-1}=g\cdot be_b\cdot g^{-1},$$ and the proof is complete. \end{proof} \section{Some examples}\label{appl} \subsection{Finite transformation semigroups ${\mathcal {IS}}_n$, ${\mathcal T}_n$ and ${\mathcal {PT}}_n$} Let ${\mathcal {IS}}_n$ be the {\em full finite inverse symmetric semigroup}, i.e. the semigroup of all partial permutations of an $n$-element set $X=\{1,\dots, n\}$. The group of units of ${\mathcal {IS}}_n$ is the full symmetric group $\S_n$ of all everywhere defined permutations. The idempotents of ${\mathcal {IS}}_n$ are precisely the identity maps on subsets of $X$. Let $\pi\in {\mathcal {IS}}_n$. Set $G_{\pi}$ to be the directed graph whose set of vertices $V(G_{\pi})$ coincides with $X$, and $(x,y)\in E(G_{\pi})$ if and only if $\pi(x)=y$. The graph $G_{\pi}$ is called the {\em graph of action} of $\pi$. There are two types of connected components of $G_{\pi}$: {\em cycles and chains} (see~\cite{Li, GK, KM}). The {\em cyclic type} and the {\em chain type} of $\pi$ are respectively the (unordered) tuples $(n_1,\dots, n_k)$ and $(l_1, \dots, l_t)$, where $n_1,\dots, n_k$ are the lengths of the cycles of $\pi$, and $l_1,\dots, l_t$ are the lengths of the chains of $\pi$ (by the length of a chain $a_1\to a_2\to\dots \to a_n\to \varnothing$ we mean here the number $n$ of its vertices). The following lemma is straightforward.
\begin{lemma}\label{lll} \begin{enumerate} \item $\pi\in {\mathcal {IS}}_n$ is a group element if and only if all its chains are trivial, i.e. of length $1$. \item If the cyclic and chain types of $\pi$ are respectively $(n_1,\dots, n_k)$ and $(l_1, \dots, l_t)$ then the cyclic and chain types of $\pi e_{\pi}$ are respectively $(n_1,$ $\dots, n_k)$ and $(1, \dots, 1)$. \item $\pi\sim_{S_n} \tau$ if and only if the graphs $G_{\pi}$ and $G_{\tau}$ are isomorphic as directed graphs, which is the case if and only if the cyclic and chain types of $\pi$ and $\tau$ coincide. \end{enumerate} \end{lemma} Since ${\mathcal {IS}}_n$ is factorizable, Theorem~\ref{p3} is applicable. Together with Lemma~\ref{lll} it gives the following criterion of conjugacy for the semigroup ${\mathcal {IS}}_n$. \begin{theorem}[~\cite{Li, GK}]\label{gk} Let $\pi$, $\tau\in {\mathcal {IS}}_n$. Then $\pi\sim \tau$ if and only if the cyclic types of $\pi$ and $\tau$ coincide. \end{theorem} Let ${\mathcal T}_n$ and ${\mathcal {PT}}_n$ be the semigroups of respectively all transformations and all partial transformations (in both cases not necessarily injective) of the set $X=\{1,\dots, n\}$. Both of these semigroups are regular but not inverse. In the same vein as for ${\mathcal {IS}}_n$, we define the graph of action $G_{\pi}$ for $\pi\in {\mathcal T}_n$ or $\pi\in{\mathcal {PT}}_n$. Let $\pi\in {\mathcal T}_n$ or $\pi\in {\mathcal {PT}}_n$. One can easily check that each connected component of $G_{\pi}$ contains at most one cycle. By the cyclic type of $\pi$ we will mean the (unordered) tuple $(n_1,\dots, n_k)$, where $n_1,\dots, n_k$ are the lengths of the cycles of $\pi$. Denote the range of $\pi$ by $\mathrm{ran} \pi$, and the kernel of $\pi$ by $\ker \pi$. Recall that the kernel of $\pi$ is the partition of the domain of $\pi$ such that $a,b\in X$ belong to the same block if and only if $a\pi =b\pi$.
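The cyclic type just defined is easy to extract mechanically. The following Python sketch (our illustration, not part of the paper; the function name is ours) computes the cycle lengths of a partial transformation given as a dictionary $i\mapsto i\pi$.

```python
def cyclic_type(f):
    """Cyclic type of a partial transformation f, given as a dict i -> f(i).

    A point lies on a cycle iff iterating f from it eventually returns to it;
    the cyclic type is the sorted tuple of the lengths of these cycles."""
    lengths = []
    seen = set()
    for start in f:
        if start in seen:
            continue
        # Walk the orbit of `start` until it leaves the domain or repeats.
        path, x = [], start
        while x in f and x not in path:
            path.append(x)
            x = f[x]
        if x in path:                       # the walk closed up: a cycle
            cycle = path[path.index(x):]
            if not seen.intersection(cycle):  # count each cycle only once
                lengths.append(len(cycle))
            seen.update(cycle)
        seen.update(path)
    return tuple(sorted(lengths))

# Chain 3 -> 0 feeding the cycle (0 1 2), plus the fixed point (4):
example = {0: 1, 1: 2, 2: 0, 3: 0, 4: 4}
assert cyclic_type(example) == (1, 3)
```

By the theorems of this subsection, this tuple is a complete invariant of conjugacy in ${\mathcal {IS}}_n$, ${\mathcal T}_n$ and ${\mathcal {PT}}_n$.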
\begin{lemma}\label{lema} Let $\pi \in {\mathcal T}_n$ or $\pi\in {\mathcal {PT}}_n$. Then \begin{enumerate} \item The cyclic types of $\pi$ and $\pi e_{\pi}$ coincide. \item $\pi$ is a group element if and only if $\mathrm{ran} \pi$ is a transversal of $\ker \pi$. In the latter case the restriction ${\overline \pi}$ of $\pi$ to $\mathrm{ran} \pi$ is a permutation of the set $\mathrm{ran} \pi$ and is a group element of ${\mathcal {IS}}_n$. \item If $\pi$ is a group element then the cyclic types of $\pi$ and ${\overline \pi}$ coincide. \item Two group elements $\pi$, $\tau$ $\in {\mathcal T}_n$ (or ${\mathcal {PT}}_n$) are conjugate if and only if ${\overline \pi}$ and ${\overline \tau}$ are conjugate in ${\mathcal {IS}}_n$. \end{enumerate} \end{lemma} \begin{proof} {\it 1}. Let ${\mathrm {stran}}\pi=\cap_{k\geq 1} \mathrm{ran} \pi^k$ be the stable range of $\pi$. For $a\in X$ we have that $a\in {\mathrm {stran}}\pi$ if and only if $a$ belongs to a cycle in $G_{\pi}$. This, together with the fact that $e_{\pi}$ acts identically on ${\mathrm {stran}}\pi$, implies that $\pi$ and $\pi e_{\pi}$ have the same cycles (see also~\cite{KM}). {\it 2}. Follows from the description of Green's relations in ${\mathcal T}_n$ and ${\mathcal {PT}}_n$ (see, for example,~\cite{Hig}). {\it 3}. Since $\mathrm{ran} \pi={\mathrm {stran}}\pi$ in the case of the group element $\pi$, it follows that the graph of action $G_{{\overline \pi}}$ of ${\overline \pi}$ is the union of the cycles of $G_{\pi}$, whence the cyclic types of $\pi$ and ${\overline \pi}$ coincide. {\it 4}. Suppose first that $\pi\sim \tau$. It follows from Theorem~\ref{main} and its proof that there are mutually inverse elements $t\in R_{\pi}\cap L_{\tau}$ and $t'\in R_{\tau}\cap L_{\pi}$ such that $\pi=t\tau t'$ and $\tau=t'\pi t$.
It follows from the description of Green's relations on ${\mathcal T}_n$ and ${\mathcal {PT}}_n$ that $$\ker\,t=\ker \tau,\,\,\, \ker \,t'=\ker \pi, \,\,\, \mathrm{ran} \,t=\mathrm{ran} \pi, \,\,\, \mathrm{ran}\, t' = \mathrm{ran} \tau.$$ Let ${\overline t}$ be the restriction of $t$ to $\mathrm{ran}\tau$ and ${\overline t'}$ the restriction of $t'$ to $\mathrm{ran} \pi$. It follows that ${\overline t}{\overline t'}$ is the identity map on $\mathrm{ran}\pi$ and ${\overline t'}{\overline t}$ is the identity map on $\mathrm{ran}\tau$. Hence, ${\overline t}$ and ${\overline t'}$ form a pair of mutually inverse elements of ${\mathcal {IS}}_n$. This, together with ${\overline \pi}={\overline t}{\overline \tau}{\overline t'}$ and ${\overline \tau}={\overline t'}{\overline \pi}{\overline t}$, implies by Theorem~\ref{main} that ${\overline \pi}$ and ${\overline \tau}$ are ${\mathcal {IS}}_n$-conjugate. Now let the partial permutations ${\overline \pi}$ and ${\overline \tau}$ be ${\mathcal {IS}}_n$-conjugate. Then there exist ${\overline t}\in R_{{\overline \pi}}\cap L_{{\overline \tau}}$ and ${\overline t'}\in R_{{\overline \tau}}\cap L_{{\overline \pi}}$ such that ${\overline \pi}={\overline t} {\overline \tau} {\overline t'}$ and ${\overline \tau}={\overline t'} {\overline \pi} {\overline t}$. Define the elements $t,t'\in {\mathcal T}_n$ (${\mathcal {PT}}_n$) as follows. Set $t$ to be such that $\ker t=\ker \tau$, $\mathrm{ran} t= \mathrm{ran} \pi =\mathrm{ran} {\overline t}$ and the restriction of $t$ to $\mathrm{ran} \tau$ coincides with ${\overline t}$. Similarly, set $t'$ to be such that $\ker t'=\ker \pi$, $\mathrm{ran} t'= \mathrm{ran} \tau =\mathrm{ran} {\overline t'}$ and the restriction of $t'$ to $\mathrm{ran} \pi$ coincides with ${\overline t'}$. Note that it follows from the definitions of ${\overline t}$ and ${\overline t'}$ that $t$ and $t'$ can be constructed uniquely and happen to be mutually inverse.
Moreover, the construction of $t$ and $t'$ implies $\pi=t\tau t'$ and $\tau =t'\pi t$. Then by Theorem~\ref{main} $\pi$ and $\tau$ are conjugate. \end{proof} As a corollary we obtain the criterion of ${\mathcal T}_n$- (or ${\mathcal {PT}}_n$-) conjugacy in terms of the cyclic types of elements. \begin{theorem}[~\cite{KM}] Let $\pi, \tau \in {\mathcal T}_n$ (${\mathcal {PT}}_n$). Then $\pi$ and $\tau$ are ${\mathcal T}_n$- (${\mathcal {PT}}_n$-) conjugate if and only if their cyclic types coincide. \end{theorem} \subsection{Full semigroups of linear transformations of a finite-dimensional vector space} Let $F$ be a field and $V_n$ be an $n$-dimensional vector space over $F$. An isomorphism $\varphi:U\to W$, where $U, W$ are subspaces of $V_n$, is called a {\em partial automorphism} of $V_n$ with the domain $\mathrm{dom} \varphi=U$ and the range $\mathrm{ran}\varphi=W$. The set of partial automorphisms of $V_n$ with respect to the composition of partial automorphisms is an inverse semigroup, denoted by $\mathrm{PAut}(V_n)$. Let $\varphi\in \mathrm{PAut}(V_n)$. For each positive integer $k$ we have the inclusions $$ \mathrm{dom}\varphi\supset \mathrm{dom}\varphi^k\supset \mathrm{dom} \varphi^{k+1}\supset \{0\}, $$ implying that $$ n\geq \dim U \geq \dim(\mathrm{dom}\varphi^2)\geq\dots \geq\dim(\mathrm{dom}\varphi^{k})\geq \dots\geq 0. $$ Since at most $n$ of these inequalities are strict, we can assert that starting from some power $t$ we have $\mathrm{dom}\varphi^{t}= \mathrm{dom}\varphi^{t+i}$ for each $i\geq 0$. It follows that $\mathrm{dom}\varphi^{t}=\mathrm{ran}\varphi^t$, so that $\varphi^t\in {\mathrm {GL}}(\mathrm{dom}\varphi^t)$ is a group element of $\mathrm{PAut}(V_n)$, which shows that $\mathrm{PAut}(V_n)$ is an epigroup (the notation ${\mathrm {GL}}(W)$ stands for the full linear group of the subspace $W$). It is easily proved that $\mathrm{PAut}(V_n)$ is factorizable.
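The stabilization of the chain of domains is easy to watch in a toy case. The sketch below (our illustration, not the paper's; all names are ours) takes the partial automorphism of $V=\mathbb{F}_2^3$ sending $e_1\mapsto e_2$, $e_2\mapsto e_3$ on $U=\langle e_1,e_2\rangle$, represents it as a dictionary on the four vectors of $U$, and records $\dim(\mathrm{dom}\,\varphi^k)$ until the chain stabilizes.

```python
from itertools import product

# phi: partial automorphism of V = F_2^3 defined on U = span{e1, e2}
# by e1 |-> e2, e2 |-> e3 (an isomorphism of U onto span{e2, e3}).
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
phi = {}
for a, b in product((0, 1), repeat=2):
    src = tuple((a * u + b * v) % 2 for u, v in zip(e1, e2))  # a*e1 + b*e2
    img = tuple((a * u + b * v) % 2 for u, v in zip(e2, e3))  # a*e2 + b*e3
    phi[src] = img

def domain_dims(phi):
    """Dimensions of dom phi, dom phi^2, ... until the chain stabilizes,
    using dom phi^{k+1} = { v in dom phi^k : phi^k(v) in dom phi }."""
    dims, dom, power = [], set(phi), dict(phi)   # power = phi^k on its domain
    while True:
        dims.append(len(dom).bit_length() - 1)   # a subspace of size 2^d has dim d
        new_dom = {v for v in dom if power[v] in phi}
        if new_dom == dom:
            return dims
        power = {v: phi[power[v]] for v in new_dom}
        dom = new_dom

print(domain_dims(phi))   # -> [2, 1, 0]: the dimensions drop, then stabilize
```

Here the chain stabilizes at the zero subspace, so $\varphi^t$ is the (group) element of the trivial group ${\mathrm{GL}}(\{0\})$, in line with the epigroup argument above.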
From Theorem~\ref{p3} we derive the following criterion of $\mathrm{PAut}(V_n)$-conjugacy. \begin{theorem}\label{paut} Let $\varphi,\psi\in\mathrm{PAut}(V_n)$. Then $\varphi$ and $\psi$ are $\mathrm{PAut}(V_n)$-conjugate if and only if $\varphi e_{\varphi}$ and $\psi e_{\psi}$ are ${\mathrm {GL}}(V_n)$-conjugate. \end{theorem} We now turn to the regular semigroups $\mathrm{End}(V_n)$ and $\mathrm{PEnd}(V_n)$ of respectively all endomorphisms and all partial endomorphisms of $V_n$. The same arguments as in the case of $\mathrm{PAut}(V_n)$ show that both $\mathrm{End}(V_n)$ and $\mathrm{PEnd}(V_n)$ are epigroups. \begin{lemma} Let $S$ denote one of the semigroups $\mathrm{End}(V_n)$ or $\mathrm{PEnd}(V_n)$. \begin{enumerate} \item $\pi\in S$ is a group element if and only if $\mathrm{dom}\pi$ decomposes into the direct sum $\mathrm{dom}\pi=\mathrm{ran}\pi\oplus \ker\pi$. In the latter case the restriction ${\overline\pi}$ of $\pi$ to $\mathrm{ran}\pi$ is an automorphism of $\mathrm{ran}\pi$ which is a group element of $\mathrm{PAut}(V_n)$. \item Two group elements $\pi$, $\tau$ $\in S$ are $S$-conjugate if and only if ${\overline \pi}$ and ${\overline \tau}$ are $\mathrm{PAut}(V_n)$-conjugate. \end{enumerate} \end{lemma} \begin{proof} {\em 1.} Recall that $\pi\in S$ is a group element if and only if $H_{\pi}$ contains an idempotent, i.e. some projection map $e=e(V_1,V_2)$ such that $\mathrm{dom} \,e=V_1\oplus V_2$ and $e$ is the projection of $\mathrm{dom} \,e$ onto $V_1$ parallel to $V_2$. The statement now follows from the fact that $\pi\H e$ if and only if $\ker\pi=\ker e$ and $\mathrm{ran}\pi =\mathrm{ran} e$. {\em 2.} The proof is similar to the proof of the fourth statement of Lemma~\ref{lema}. \end{proof} As a corollary we obtain the following criterion of conjugacy (where $S=\mathrm{End}(V_n)$ or $S=\mathrm{PEnd}(V_n)$) in terms of $G$-conjugacy.
\begin{theorem}[~\cite{KM1} for the case of $\mathrm{End}(V_n)$] Let $S$ denote one of the semigroups $\mathrm{End}(V_n)$ or $\mathrm{PEnd}(V_n)$ and $\varphi,\psi\in S$. Then $\varphi$ and $\psi$ are $S$-conjugate if and only if ${\overline{\varphi e_{\varphi}}}$ and ${\overline{\psi e_{\psi}}}$ are ${\mathrm {GL}}(V_n)$-conjugate. \end{theorem} \subsection{Partial automatic permutations over a finite alphabet} Recall that a {\em Mealy automaton over a finite alphabet $X$} is a triple ${\mathcal A}=(Q, \varphi, \psi)$, where $Q$ is the set of {\em internal states} of the automaton, $\varphi: Q\times X\to Q$ is its {\em transition function} and $\psi: Q\times X\to X$ is its {\em output function}. In the case when the functions $\varphi$ and $\psi$ are everywhere defined the automaton ${\mathcal A}$ is called {\em full}, otherwise it is called {\em partial}. An automaton ${\mathcal A}$ is called {\em initial} if a state $q_0\in Q$ is marked as the {\em initial state}. Each initial automaton $({\mathcal A}, q_0)$ over $X$ defines a (partial) transformation of the set $X^*$ of all words over $X$ by extending the functions $\varphi$ and $\psi$ to the set $Q\times X^*$ as follows: $$ \varphi(q,e)=q,\,\,\,\,\,\varphi(q, wx)=\varphi(\varphi(q,w),x); $$ $$ \psi(q,e)=e,\,\,\,\,\,\psi(q, wx)=\psi(\varphi(q,w),x), $$ where $e$ denotes the empty word. Now define the transformation $f_{{\mathcal A},q_0}:X^*\to X^*$ via \begin{equation}\label{aa} f_{{\mathcal A},q_0}(u)=\psi(q_0,x_1)\psi(\varphi(q_0,x_1),x_2)\psi(\varphi(q_0,x_1x_2),x_3)..., \end{equation} where $u=x_1x_2x_3\dots\in X^*$. The expression in the right-hand side of~(\ref{aa}) is undefined if and only if at least one of the values of $\varphi$ or $\psi$ in it is undefined. A partial injective transformation which is defined by some partial initial automaton is called a {\em partial automatic permutation} (or, in other terminology, a {\em letter-to-letter transduction}).
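The definition of $f_{{\mathcal A},q_0}$ translates directly into code. The sketch below (our illustration; the names are ours) runs a partial initial Mealy automaton on a word, where the $i$-th output letter is produced by the output function in the state reached after the first $i-1$ input letters, as in the displayed formula; as an example it uses a one-state full automaton over $X=\{0,1\}$ that swaps the two letters.

```python
def transduce(phi, psi, q0, word):
    """The (partial) map f_{A,q0}: the i-th output letter is psi applied in
    the state reached after reading the first i-1 input letters.
    Returns None whenever a needed value of phi or psi is undefined."""
    q, out = q0, []
    for i, x in enumerate(word):
        if (q, x) not in psi:
            return None
        out.append(psi[(q, x)])
        if i < len(word) - 1:        # the last letter needs no transition
            if (q, x) not in phi:
                return None
            q = phi[(q, x)]
    return ''.join(out)

# A one-state full automaton over X = {0, 1} swapping the two letters:
phi = {('q', '0'): 'q', ('q', '1'): 'q'}
psi = {('q', '0'): '1', ('q', '1'): '0'}
assert transduce(phi, psi, 'q', '0110') == '1001'
```

This letter-swapping example is an everywhere defined automatic permutation, hence a unit of the semigroup considered next.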
The set of all partial automatic permutations over $X$ with respect to the composition of maps is an inverse semigroup, which will be denoted by ${\mathcal {ISA}}(X)$. Note that a partial automatic permutation is a group element of ${\mathcal {ISA}}(X)$ if and only if its graph of action has no chains of length greater than one. The following lemma is straightforward. \begin{lemma}\label{gb} An element $f\in{\mathcal {ISA}}(X)$ is group-bound if and only if the lengths of its chains are uniformly bounded. \end{lemma} A partial automatic permutation $g\in {\mathcal {ISA}}(X)$ is said to be {\em finitary} if there exists $l\geq 0$ such that for every word $x_1x_2\dots \in X^*$ belonging to the domain of $g$ and its image $y_1y_2\dots =(x_1x_2\dots)^g$ one has $x_i=y_i$ for all $i\geq l$. The set $Fin{\mathcal {ISA}}(X)$ of all finitary partial automatic permutations is an inverse subsemigroup of ${\mathcal {ISA}}(X)$, and by Lemma~\ref{gb} it is an epigroup. The group of units of $Fin{\mathcal {ISA}}(X)$ coincides with the group $Fin{\mathcal{SA}}(X)$ consisting of all everywhere defined elements of $Fin{\mathcal {ISA}}(X)$. It is easily seen that the semigroup $Fin{\mathcal {ISA}}(X)$ is factorizable. From Theorem~\ref{p3} we obtain the following conjugacy criterion for this semigroup. \begin{theorem}\label{last} Two elements $f,g\in Fin{\mathcal {ISA}}(X)$ are conjugate with respect to $\sim$ if and only if $fe_f$ and $ge_g$ are $Fin{\mathcal{SA}}(X)$-conjugate. \end{theorem} Theorem~\ref{main} also gives us criteria of conjugacy for the regular epigroups $Fin{\mathcal A}(X)$ of all (not necessarily injective) {\em finitary automatic transformations of $X^*$} and $Fin{\mathcal{PA}}(X)$ of all {\em partial finitary automatic transformations of $X^*$}. It is easily seen that a statement similar to Lemma~\ref{lema} holds for these semigroups. Therefore we obtain the following conjugacy criterion.
\begin{theorem}\label{last1} Let $S$ denote one of the semigroups $Fin{\mathcal A}(X)$ or $Fin{\mathcal{PA}}(X)$. Two elements $f,g\in S$ are $S$-conjugate if and only if ${\overline{fe_f}}$ and ${\overline{ge_g}}$ are $Fin{\mathcal{SA}}(X)$-conjugate. \end{theorem} \section*{Appendix A}\label{ab} Here we are going to show that for $|X|\geq 2$ the semigroup ${\mathcal {ISA}}(X)$ is not an epigroup. For this we give an example of an automaton $({\mathcal A}, q_0)$ with four states over the two-letter alphabet $X=\{0,1\}$ such that $f_{{\mathcal A},q_0}$ is not a group-bound element of ${\mathcal {ISA}}(X)$. That finite automata $({\mathcal A}, q_0)$ such that $f_{{\mathcal A},q_0}$ is not group-bound exist is rather evident. However, as far as we know, this fact has never been indicated in the literature. Besides, from Corollary~\ref{epi} it follows that a non-group-bound element cannot be conjugate to a group element. This, together with the existence of non-group-bound elements, shows that the conjugacy criterion for the semigroup ${\mathcal {ISA}}(X)$ announced in~\cite{OS} is incorrect. Construct $({\mathcal A}, q_0)$ $=(Q,\varphi,\psi, q_0)$ as follows. Let $Q=\{A,B,C,D\}$, $q_0=A$ and $$ \varphi(A,0)=D,\,\,\,\,\, \varphi(A,1)=B,\,\,\,\,\, \varphi(B,0) \text{ undefined},\,\,\,\,\, \varphi(B,1)=C, $$ $$ \varphi(C,0)=C,\,\,\,\,\,\varphi(C,1)=C,\,\,\,\,\,\varphi(D,0)=A,\,\,\,\,\,\varphi(D,1) \text{ undefined}; $$ $$ \psi(A,0)=1,\,\,\,\,\, \psi(A,1)=0,\,\,\,\,\, \psi(B,0) \text{ undefined},\,\,\,\,\, \psi(B,1)=0, $$ $$ \psi(C,0)=0,\,\,\,\,\,\psi(C,1)=1,\,\,\,\,\,\psi(D,0)=1,\,\,\,\,\,\psi(D,1) \text{ undefined}. $$ The Moore diagram of the constructed automaton is given in Figure~\ref{fig}, where the initial state is marked by a double circle, and there is no arrow with the first label $x\in X$ beginning in a state $q\in Q$ if and only if $\varphi(q,x)$ and $\psi(q,x)$ are undefined.
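The claims of the lemma below can be checked mechanically for small $k$. The following self-contained Python sketch (ours, not part of the paper) encodes the automaton just constructed and tests both the cycle lengths and the undefined values.

```python
# The automaton (A, q0) above: transition and output functions as dictionaries;
# absent keys correspond to undefined values of phi and psi.
PHI = {('A', '0'): 'D', ('A', '1'): 'B', ('B', '1'): 'C',
       ('C', '0'): 'C', ('C', '1'): 'C', ('D', '0'): 'A'}
PSI = {('A', '0'): '1', ('A', '1'): '0', ('B', '1'): '0',
       ('C', '0'): '0', ('C', '1'): '1', ('D', '0'): '1'}

def f(word):
    """f_{A,q0} with initial state A; returns None where the map is undefined."""
    q, out = 'A', []
    for i, x in enumerate(word):
        if (q, x) not in PSI:
            return None
        out.append(PSI[(q, x)])
        if i < len(word) - 1:
            if (q, x) not in PHI:
                return None
            q = PHI[(q, x)]
    return ''.join(out)

def cycle_length(word):
    """Number of applications of f needed to return to `word`
    (assumes the orbit of `word` is a cycle)."""
    w, steps = f(word), 1
    while w != word:
        w, steps = f(w), steps + 1
    return steps

# The orbit of 1^{2k} is a cycle of length 2^k:
for k in range(1, 6):
    assert cycle_length('1' * (2 * k)) == 2 ** k

# Iterating f from 1^{2k}01 reaches 0^{2k}01, on which f is undefined:
assert f('0' * 2 + '01') is None and f('0' * 4 + '01') is None
```

Since the chain through $1^{2k}01$ thus has length at least $2^k$ for every $k$, the chain lengths of $f_{{\mathcal A},q_0}$ are unbounded, which is exactly what the lemma asserts.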
\begin{lemma}\label{isa} $f_{{\mathcal A},q_0}$ is not a group-bound element of ${\mathcal {ISA}}(\{0,1\})$. \end{lemma} \begin{figure} \centering \includegraphics[width=4.5in, height=5.5in]{anja1.eps} \vspace*{8pt} \vspace{-7cm} \caption{}\label{fig} \end{figure} \begin{proof} We first show that for any $k\geq 1$ the orbit (with respect to the action of $f_{{\mathcal A},q_0}$) of the word ${\underbrace{1\dots 1}_{2k}}$ is a cycle of length $2^k$. We proceed by induction on $k$. For $k=1$ we have $11 \mapsto 00 \mapsto 11$. Suppose that $k\geq 1$ and $$ {\underbrace{1\dots 1}_{2k}}=u_1\mapsto u_2\mapsto \dots \mapsto u_{2^k}={\underbrace{0\dots 0}_{2k}}\mapsto u_1. $$ It follows from the definition of $({\mathcal A}, q_0)$ that \begin{multline*} {\underbrace{1\dots 1}_{2k+2}}=u_111\mapsto u_211\mapsto \dots \mapsto u_{2^k}11={\underbrace{0\dots 0}_{2k}}11\mapsto\\ {\underbrace{1\dots 1}_{2k}}00=u_100\mapsto u_200 \mapsto\dots \mapsto u_{2^k}00={\underbrace{0\dots 0}_{2k+2}}\mapsto u_111, \end{multline*} as required. Now let $k\geq 1$. Then $$ {\underbrace{1\dots 1}_{2k}}01=u_101\mapsto u_201\mapsto \dots \mapsto u_{2^k}01={\underbrace{0\dots 0}_{2k}}01, $$ and $f_{{\mathcal A},q_0}({\underbrace{0\dots 0}_{2k}}01)$ is undefined. Therefore, for every $k\geq 1$ the word ${\underbrace{1\dots 1}_{2k}}01$ belongs to a chain of length at least $2^k$. The statement now follows from Lemma~\ref{gb}. \end{proof} \begin{corollary} If $|X|\geq 2$ then in ${\mathcal {ISA}}(X)$ there are conjugacy classes without group elements. \end{corollary} \begin{proof} This follows from Corollary~\ref{epi} and Lemma~\ref{isa}. \end{proof}
\section{Introduction} Let $\mathcal{B}(X)$ be the Banach algebra of all bounded linear operators acting on a Banach space $X$, and let $\mathcal{K}(X)$ be the ideal of all compact operators in $\mathcal{B}(X)$. An operator $A\in\mathcal{B}(X)$ is called \textit{Fredholm} if its image is closed and the spaces $\ker A$ and $\ker A^*$ are finite-dimensional. In that case the number \[ \operatorname{Ind} A:=\dim\ker A-\dim\ker A^* \] is referred to as the {\it index} of $A$ (see, e.g., \cite[Sections~1.11--1.12]{BS06}, \cite[Chap.~4]{GK92}). For bounded linear operators $A$ and $B$, we will write $A\simeq B$ if $A-B\in\mathcal{K}(X)$. Recall that an operator $B_r\in\mathcal{B}(X)$ (resp. $B_l\in\mathcal{B}(X)$) is said to be a right (resp. left) regularizer for $A$ if \[ AB_r\simeq I \quad(\mbox{resp.}\quad B_lA\simeq I). \] It is well known that the operator $A$ is Fredholm on $X$ if and only if it admits simultaneously a right and a left regularizer. Moreover, each right regularizer differs from each left regularizer by a compact operator (see, e.g., \cite[Chap.~4, Section 7]{GK92}). Therefore we may speak of a regularizer $B=B_r=B_l$ of $A$; any two regularizers of $A$ differ from each other by a compact operator. A bounded continuous function $f$ on $\mathbb{R}_+=(0,\infty)$ is called \textit{slowly oscillating} (at $0$ and $\infty$) if for each (equivalently, for some) $\lambda\in(0,1)$, \[ \lim_{r\to s}\sup_{t,\tau\in[\lambda r,r]}|f(t)-f(\tau)|=0, \quad s\in\{0,\infty\}. \] The set $SO(\mathbb{R}_+)$ of all slowly oscillating functions forms a $C^*$-algebra. This algebra properly contains $C(\overline{\mathbb{R}}_+)$, the $C^*$-algebra of all continuous functions on $\overline{\mathbb{R}}_+ :=[0,+\infty]$. Suppose $\alpha$ is an orientation-preserving diffeomorphism of $\mathbb{R}_+$ onto itself whose only fixed points are $0$ and $\infty$. We say that $\alpha$ is a \textit{slowly oscillating shift} if $\log\alpha'$ is bounded and $\alpha'\in SO(\mathbb{R}_+)$.
The set of all slowly oscillating shifts is denoted by $SOS(\mathbb{R}_+)$. Throughout the paper we suppose that $1<p<\infty$. It is easily seen that if $\alpha\in SOS(\mathbb{R}_+)$, then the shift operator $W_\alpha$ defined by $W_\alpha f=f\circ\alpha$ is bounded and invertible on all spaces $L^p(\mathbb{R}_+)$ and its inverse is given by $W_\alpha^{-1}=W_{\alpha_{-1}}$, where $\alpha_{-1}$ is the inverse function to $\alpha$. Along with $W_\alpha$ we consider the weighted shift operator \[ U_\alpha:=(\alpha')^{1/p}W_\alpha, \] which is an isometric isomorphism of the Lebesgue space $L^p(\mathbb{R}_+)$ onto itself. Let $S$ be the Cauchy singular integral operator given by \[ (Sf)(t):=\frac{1}{\pi i}\int\limits_0^\infty \frac{f(\tau)}{\tau-t}\,d\tau,\quad t\in\mathbb{R}_+, \] where the integral is understood in the principal value sense. It is well known that $S$ is bounded on $L^p(\mathbb{R}_+)$ for every $p\in(1,\infty)$. Let $\mathcal{A}$ be the smallest closed subalgebra of $\mathcal{B}(L^p(\mathbb{R}_+))$ containing the identity operator $I$ and the operator $S$. It is known (see, e.g., \cite{D87}, \cite[Section~2.1.2]{HRS94}, \cite[Sections~4.2.2--4.2.3]{RSS11}, and \cite{SM86}) that $\mathcal{A}$ is commutative and for every $y\in(1,\infty)$ it contains the weighted singular integral operator \[ (S_y f)(t):=\frac{1}{\pi i}\int\limits_0^\infty \left(\frac{t}{\tau}\right)^{1/y-1/p}\frac{f(\tau)}{\tau-t}\,d\tau, \quad t\in\mathbb{R}_+, \] and the operator with fixed singularities \[ (R_y f)(t):=\frac{1}{\pi i}\int\limits_0^\infty \left(\frac{t}{\tau}\right)^{1/y-1/p}\frac{f(\tau)}{\tau+t}\,d\tau, \quad t\in\mathbb{R}_+, \] which are understood in the principal value sense. For $y\in(1,\infty)$, put \[ P_y^\pm:=(I\pm S_y)/2.
\] This paper is in some sense a continuation of our papers \cite{KKL11a,KKL11b,KKL14a}, where singular integral operators with shifts were studied under the mild assumptions that the coefficients belong to $SO(\mathbb{R}_+)$ and the shifts belong to $SOS(\mathbb{R}_+)$. In \cite{KKL11a,KKL11b} we found a Fredholm criterion for the singular integral operator \[ N=(aI-bW_\alpha) P_p^++(cI-dW_\alpha)P_p^- \] with coefficients $a,b,c,d\in SO(\mathbb{R}_+)$ and a shift $\alpha\in SOS(\mathbb{R}_+)$. However, a formula for the calculation of the index of the operator $N$ is still missing. Further, in \cite{KKL14a} we proved that the operators \[ A_{ij}=U_\alpha^i P_p^++U_\beta^j P_p^-,\quad i,j\in\mathbb{Z}, \] with $\alpha,\beta\in SOS(\mathbb{R}_+)$ are all Fredholm and their indices are equal to zero. This result was the first step in the calculation of the index of $N$. Here we take the next step towards the calculation of the index of the operator $N$. For $a\in SO(\mathbb{R}_+)$, we will write $1\gg a$ if \[ \limsup_{t\to s}|a(t)|<1,\quad s\in\{0,\infty\}. \] \begin{thm}[Main result] \label{th:main} Let $1<p<\infty$ and $\alpha,\beta\in SOS(\mathbb{R}_+)$. Suppose $c,d\in SO(\mathbb{R}_+)$ are such that $1\gg c$ and $1\gg d$. Then the operator \[ V:=(I-cU_\alpha)P_2^++(I-dU_\beta)P_2^- \] is Fredholm on the space $L^p(\mathbb{R}_+)$ and $\operatorname{Ind} V=0$. \end{thm} The paper is organized as follows. In Section~\ref{sec:Preliminaries} we collect necessary facts about slowly oscillating functions and slowly oscillating shifts, as well as about the invertibility of binomial functional operators $I-cU_\alpha$ with $c\in SO(\mathbb{R}_+)$ and $\alpha\in SOS(\mathbb{R}_+)$ under the assumption that $1\gg c$. Further we prove that the operators in the algebra $\mathcal{A}$ commute modulo compact operators with the operators in the algebra $\mathcal{FO}_{\alpha,\beta}$ of functional operators with shifts and slowly oscillating data.
Finally, we show that the ranges of two important continuous functions on $\mathbb{R}$ do not contain the origin. In Section~\ref{sec:Mellin-convolutions} we recall that the operators $P_y^\pm$ and $R_y$, belonging to the algebra $\mathcal{A}$ for every $y\in(1,\infty)$, can be viewed as Mellin convolution operators and formulate two relations between $P_y^+$, $P_y^-$, and $R_y$. Section~\ref{sec:Mellin-PDO} contains results on the boundedness, compactness of semi-commutators, and the Fredholmness of Mellin pseudodifferential operators with slowly oscillating symbols of limited smoothness (symbols in the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$). Results of this section are reformulations/modifications of corresponding results on Fourier pseudodifferential operators obtained by the second author in \cite{K06} (see also \cite{K06-IWOTA,K08,K09}). Notice that those results are further generalizations of earlier results by Rabinovich (see \cite[Chap.~4]{RRS04} and the references therein) obtained for Mellin pseudodifferential operators with $C^\infty$ slowly oscillating symbols. In \cite[Lemma~4.4]{KKL14a} we proved that the operator $U_\gamma R_y$ with $\gamma\in SOS(\mathbb{R}_+)$ and $y\in(1,\infty)$ can be viewed as a Mellin pseudodifferential operator with a symbol in the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. In Section~\ref{sec:applications-Mellin-PDO} we generalize that result and prove that $(I-vU_\gamma)R_y$ and $(I-vU_\gamma)^{-1}R_y$ with $y\in(1,\infty)$, $\gamma\in SOS(\mathbb{R}_+)$, and $v\in SO(\mathbb{R}_+)$ satisfying $1\gg v$, can be viewed as Mellin pseudodifferential operators with symbols in the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. This is a key result in our analysis. Section~\ref{sec:proof} is devoted to the proof of Theorem~\ref{th:main}. Here we follow the idea already used in \cite{KKL14a} in the simpler situation of the operators $A_{ij}$.
With the aid of results of Section~\ref{sec:Preliminaries} and Section~\ref{sec:applications-Mellin-PDO}, we will show that for every $\mu\in[0,1]$ and $y\in(1,\infty)$, the operators \begin{align*} & [(I-\mu cU_\alpha)P_y^++(I-\mu dU_\beta)P_y^-] \cdot [(I-\mu cU_\alpha)^{-1}P_y^++(I-\mu dU_\beta)^{-1}P_y^-], \\ & [(I-\mu cU_\alpha)^{-1}P_y^++(I-\mu dU_\beta)^{-1}P_y^-] \cdot [(I-\mu cU_\alpha)P_y^++(I-\mu dU_\beta)P_y^-] \end{align*} are equal up to compact summands to the same operator similar to a Mellin pseudodifferential operator with a symbol in the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Moreover, the latter pseudodifferential operator is Fredholm for $y=2$ in view of results of Section~\ref{sec:Mellin-PDO}. This will show that each operator \[ V_{\mu,2}=(I-\mu cU_\alpha)P_2^++(I-\mu dU_\beta)P_2^- \] is Fredholm on $L^p(\mathbb{R}_+)$. Considering the homotopy $\mu\mapsto V_{\mu,2}$ for $\mu\in[0,1]$, we see that the operator $V=V_{1,2}$ is homotopic to the identity operator $V_{0,2}=I$ within the set of Fredholm operators. Therefore, the index of $V$ is equal to zero. This will complete the proof of Theorem~\ref{th:main}. As a by-product of the proof of the main result, in Section~\ref{sec:Regularization} we describe all regularizers of a slightly more general operator \[ W=(I-cU_\alpha^{\varepsilon_1})P_2^++(I-dU_\beta^{\varepsilon_2})P_2^-, \] where $\varepsilon_1,\varepsilon_2\in\{-1,1\}$, and show that \[ G_y W\simeq R_y \] for every $y\in(1,\infty)$, where $G_y$ is an operator similar to a Mellin pseudodifferential operator whose symbol belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ and has some additional properties. The latter relation for $y=2$ will play an important role in the proof of an index formula for the operator $N$ in our forthcoming work \cite{KKL15-progress}.
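The geometric fact driving the Fredholmness part of the proof, namely that the ranges of the functions $f$ and $g$ introduced in Section~\ref{sec:Preliminaries} lie in disks avoiding the origin (Lemmas~\ref{le:range1} and~\ref{le:range2}), can be sanity-checked numerically. The sketch below is ours: the random parameters, the sampling grid, and the tolerance are illustrative choices, not part of the proof.

```python
# Numerical check of the disk inclusions f(R) in D(1, r) and
# g(R) in D((1-r^2)^{-1}, (1-r^2)^{-1} r), where r = max(|v|, |w|) < 1.
import cmath
import math
import random

def p2(x):
    """The functions p_2^+(x) and p_2^-(x) from (eq:p2)."""
    e = math.exp(2 * math.pi * x)
    return e / (e + 1.0), 1.0 / (e + 1.0)

random.seed(0)
for _ in range(100):
    v = random.uniform(0.0, 0.95) * cmath.exp(1j * random.uniform(0, 2 * math.pi))
    w = random.uniform(0.0, 0.95) * cmath.exp(1j * random.uniform(0, 2 * math.pi))
    psi, zeta = random.uniform(-5, 5), random.uniform(-5, 5)
    r = max(abs(v), abs(w))
    c = 1.0 / (1.0 - r * r)
    for x in (k / 10.0 for k in range(-50, 51)):
        pp, pm = p2(x)
        fx = (1 - v * cmath.exp(1j * psi * x)) * pp \
            + (1 - w * cmath.exp(1j * zeta * x)) * pm
        gx = pp / (1 - v * cmath.exp(1j * psi * x)) \
            + pm / (1 - w * cmath.exp(1j * zeta * x))
        assert abs(fx - 1) <= r + 1e-12             # disk bound for f
        assert abs(gx - c) <= c * r + 1e-12         # disk bound for g
        assert abs(fx) >= 1 - r - 1e-12             # hence 0 is not in f(R)
        assert abs(gx) >= 1.0 / (1.0 + r) - 1e-12   # and 0 is not in g(R)
```

The two last assertions reflect the conclusion drawn after Lemma~\ref{le:range2}: for $|v|<1$ and $|w|<1$ the ranges $f(\mathbb{R})$ and $g(\mathbb{R})$ stay away from the origin.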
\section{Preliminaries}\label{sec:Preliminaries} \subsection{Fundamental Property of Slowly Oscillating Functions} For a unital commutative Banach algebra $\mathfrak{A}$, let $M(\mathfrak{A})$ denote its maximal ideal space. Identifying the points $t\in\overline{\mathbb{R}}_+$ with the evaluation functionals $t(f)=f(t)$ for $f\in C(\overline{\mathbb{R}}_+)$, we get $M(C(\overline{\mathbb{R}}_+))=\overline{\mathbb{R}}_+$. Consider the fibers \[ M_s(SO(\mathbb{R}_+)) := \big\{\xi\in M(SO(\mathbb{R}_+)):\xi|_{C(\overline{\mathbb{R}}_+)}=s\big\} \] of the maximal ideal space $M(SO(\mathbb{R}_+))$ over the points $s\in\{0,\infty\}$. By \cite[Proposition~2.1]{K08}, the set \[ \Delta:=M_0(SO(\mathbb{R}_+))\cup M_\infty(SO(\mathbb{R}_+)) \] coincides with $(\operatorname{clos}_{SO^*}\mathbb{R}_+)\setminus\mathbb{R}_+$ where $\operatorname{clos}_{SO^*}\mathbb{R}_+$ is the weak-star closure of $\mathbb{R}_+$ in the dual space of $SO(\mathbb{R}_+)$. Then $M(SO(\mathbb{R}_+)) =\Delta\cup\mathbb{R}_+$. By \cite[Lemma~2.2]{KKL11b}, the fibers $M_s(SO(\mathbb{R}_+))$ for $s\in\{0,\infty\}$ are connected compact Hausdorff spaces. In what follows we write \[ a(\xi):=\xi(a) \] for every $a\in SO(\mathbb{R}_+)$ and every $\xi\in\Delta$. \begin{lem}[{\cite[Proposition~2.2]{K08}}] \label{le:SO-fundamental-property} Let $\{a_k\}_{k=1}^\infty$ be a countable subset of $SO(\mathbb{R}_+)$ and $s\in\{0,\infty\}$. For each $\xi\in M_s(SO(\mathbb{R}_+))$ there exists a sequence $\{t_n\}_{n\in\mathbb{N}}\subset\mathbb{R}_+$ such that $t_n\to s$ as $n\to\infty$ and \begin{equation}\label{eq:SO-fundamental-property} a_k(\xi)=\lim_{n\to\infty}a_k(t_n)\quad\mbox{for all}\quad k\in\mathbb{N}. \end{equation} Conversely, if $\{t_n\}_{n\in\mathbb{N}}\subset\mathbb{R}_+$ is a sequence such that $t_n\to s$ as $n\to\infty$, then there exists a functional $\xi\in M_s(SO(\mathbb{R}_+))$ such that \eqref{eq:SO-fundamental-property} holds. 
\end{lem} \subsection{Slowly Oscillating Functions and Shifts} Repeating literally the proof of \cite[Proposition~3.3]{KKL03}, we obtain the following statement. \begin{lem}\label{le:SO-nec} Suppose $\varphi\in C^1(\mathbb{R}_+)$ and put $\psi(t):=t\varphi'(t)$ for $t\in\mathbb{R}_+$. If $\varphi,\,\psi\in SO(\mathbb{R}_+)$, then \[ \lim_{t\to s}\psi(t)=0 \quad\mbox{for}\quad s\in\{0,\infty\}. \] \end{lem} \begin{lem}[{\cite[Lemma~2.2]{KKL11a}}] \label{le:exponent-shift} An orientation-preserving shift $\alpha:\mathbb{R}_+\to\mathbb{R}_+$ belongs to $SOS(\mathbb{R}_+)$ if and only if \[ \alpha(t)=te^{\omega (t)},\quad t\in \mathbb{R}_+, \] for some real-valued function $\omega\in SO(\mathbb{R}_+)\cap C^1(\mathbb{R}_+)$ such that the function $t\mapsto t\omega^\prime(t)$ also belongs to $SO(\mathbb{R}_+)$ and $\inf_{t\in\mathbb{R}_+}\big(1+t\omega'(t)\big)>0$. \end{lem} \begin{lem}[{\cite[Lemma~2.3]{KKL11a}}] \label{le:composition} If $c\in SO(\mathbb{R}_+)$ and $\alpha\in SOS(\mathbb{R}_+)$, then $c\circ\alpha$ belongs to $SO(\mathbb{R}_+)$ and \[ \lim_{t\to s}(c(t)-c[\alpha(t)])=0 \quad\mbox{for}\quad s\in\{0,\infty\}. \] \end{lem} For an orientation-preserving diffeomorphism $\alpha:\mathbb{R}_+\to\mathbb{R}_+$, put \[ \alpha_0(t):=t, \quad \alpha_i(t):=\alpha[\alpha_{i-1}(t)], \quad i\in\mathbb{Z}, \quad t\in\mathbb{R}_+. \] \begin{lem}[{\cite[Corollary~2.5]{KKL14a}}] \label{le:iterations} If $\alpha,\beta\in SOS(\mathbb{R}_+)$, then $\alpha_i\circ\beta_j\in SOS(\mathbb{R}_+)$ for all $i,j\in\mathbb{Z}$. \end{lem} \begin{lem}\label{le:inverse-shift-fibers} If $\alpha\in SOS(\mathbb{R}_+)$, then \[ \omega(t):=\log[\alpha(t)/t], \quad \widetilde{\omega}(t):=\log[\alpha_{-1}(t)/t], \quad t\in\mathbb{R}_+, \] are slowly oscillating functions such that $\omega(\xi)=-\widetilde{\omega}(\xi)$ for all $\xi\in\Delta$. \end{lem} \begin{proof} From Lemma~\ref{le:iterations} with $i=-1$ and $j=0$ it follows that $\alpha_{-1}$ belongs to $SOS(\mathbb{R}_+)$. 
Then, by Lemma~\ref{le:exponent-shift}, $\omega,\widetilde{\omega}\in SO(\mathbb{R}_+)$. It is easy to see that \[ \widetilde{\omega}(t) = \log\frac{\alpha_{-1}(t)}{t} = -\log\frac{t}{\alpha_{-1}(t)} = -\log\frac{\alpha[\alpha_{-1}(t)]}{\alpha_{-1}(t)} = -\omega[\alpha_{-1}(t)] \] for all $t\in\mathbb{R}_+$. Hence, from Lemma~\ref{le:composition} it follows that $\omega\circ\alpha_{-1}\in SO(\mathbb{R}_+)$ and \begin{equation}\label{eq:inverse-shift-fibers-1} \lim_{t\to s}(\omega(t)+\widetilde{\omega}(t))=\lim_{t\to s}(\omega(t)-\omega[\alpha_{-1}(t)])=0, \quad s\in\{0,\infty\}. \end{equation} Fix $s\in\{0,\infty\}$ and $\xi\in M_s(SO(\mathbb{R}_+))$. By Lemma~\ref{le:SO-fundamental-property}, there is a sequence $\{t_j\}_{j\in\mathbb{N}}\subset\mathbb{R}_+$ such that $t_j\to s$ and \begin{equation}\label{eq:inverse-shift-fibers-2} \omega(\xi)=\lim_{j\to\infty}\omega(t_j), \quad \widetilde{\omega}(\xi)=\lim_{j\to\infty}\widetilde{\omega}(t_j). \end{equation} From \eqref{eq:inverse-shift-fibers-1}--\eqref{eq:inverse-shift-fibers-2} we obtain \[ \omega(\xi)=\lim_{j\to\infty}\omega(t_j)-\lim_{j\to\infty}(\omega(t_j)+\widetilde{\omega}(t_j)) = -\lim_{j\to\infty}\widetilde{\omega}(t_j)=-\widetilde{\omega}(\xi), \] which completes the proof. \end{proof} \subsection{Invertibility of Binomial Functional Operators} From \cite[Theorem~1.1]{KKL11a} we immediately get the following. \begin{lem}\label{le:FO} Suppose $c\in SO(\mathbb{R}_+)$ and $\alpha\in SOS(\mathbb{R}_+)$. If $1\gg c$, then the functional operator $I-cU_\alpha$ is invertible on the space $L^p(\mathbb{R}_+)$ and \[ (I-cU_\alpha)^{-1}=\sum_{n=0}^\infty (cU_\alpha)^n. \] \end{lem} \subsection{Compactness of Commutators of SIO's and FO's} Let $\mathfrak{B}$ be a Banach algebra and $\mathfrak{S}$ be a subset of $\mathfrak{B}$. We denote by $\operatorname{alg}_\mathfrak{B}\mathfrak{S}$ the smallest closed subalgebra of $\mathfrak{B}$ containing $\mathfrak{S}$. 
Then \[ \mathcal{A}=\operatorname{alg}_{\mathcal{B}(L^p(\mathbb{R}_+))}\{I,S\} \] is the algebra of singular integral operators (SIO's). Fix $\alpha,\beta\in SOS(\mathbb{R}_+)$ and consider the Banach algebra of functional operators (FO's) with shifts and slowly oscillating data defined by \[ \mathcal{FO}_{\alpha,\beta}:= \operatorname{alg}_{\mathcal{B}(L^p(\mathbb{R}_+))}\{U_\alpha,U_\alpha^{-1},U_\beta,U_\beta^{-1},aI:a\in SO(\mathbb{R}_+)\}. \] \begin{lem}\label{le:compactness-commutators} Let $\alpha,\beta\in SOS(\mathbb{R}_+)$. If $A\in\mathcal{FO}_{\alpha,\beta}$ and $B\in\mathcal{A}$, then \[ AB-BA\in\mathcal{K}(L^p(\mathbb{R}_+)). \] \end{lem} \begin{proof} In view of \cite[Corollary~6.4]{KKL11a}, we have $aB-BaI\in\mathcal{K}(L^p(\mathbb{R}_+))$ for all $a\in SO(\mathbb{R}_+)$ and all $B\in\mathcal{A}$. On the other hand, from \cite[Lemma~2.7]{KKL14a} it follows that $U_\gamma^{\pm 1}B-BU_\gamma^{\pm 1}\in\mathcal{K}(L^p(\mathbb{R}_+))$ for all $\gamma\in\{\alpha,\beta\}$ and $B\in\mathcal{A}$. Hence, $AB-BA\in\mathcal{K}(L^p(\mathbb{R}_+))$ for each generator $A$ of $\mathcal{FO}_{\alpha,\beta}$ and each $B\in\mathcal{A}$. Thus, the same is true for all $A\in\mathcal{FO}_{\alpha,\beta}$ by a standard argument. \end{proof} \subsection{Ranges of Two Continuous Functions on \boldmath{$\mathbb{R}$}} Given $a\in\mathbb{C}$ and $r>0$, let $\mathbb{D}(a,r):=\{z\in\mathbb{C}:|z-a|\le r\}$. For $x\in\mathbb{R}$, put \begin{equation}\label{eq:p2} p_2^+(x):=\frac{e^{2\pi x}}{e^{2\pi x}+1}, \quad p_2^-(x):=\frac{1}{e^{2\pi x}+1}. \end{equation} \begin{lem}\label{le:range1} Let $\psi,\zeta\in\mathbb{R}$ and $v,w\in\mathbb{C}$. If \begin{equation}\label{eq:fx} f(x):=(1-ve^{i\psi x})p_2^+(x)+(1-we^{i\zeta x})p_2^-(x), \end{equation} then $f(\mathbb{R})\subset\mathbb{D}(1,r)$, where $r:=\max(|v|,|w|)$. 
\end{lem} \begin{proof} From \eqref{eq:p2} and \eqref{eq:fx} we see that for every $x\in\mathbb{R}$ the point $f(x)$ lies on the line segment connecting the points $1-ve^{i\psi x}$ and $1-we^{i\zeta x}$. In turn, these points lie on the concentric circles \begin{equation}\label{eq:range1-1} \{z\in\mathbb{C}:|z-1|=|v|\}, \quad \{z\in\mathbb{C}:|z-1|=|w|\}, \end{equation} respectively. Thus, each line segment mentioned above is contained in the disk $\mathbb{D}(1,r)=\{z\in\mathbb{C}:|z-1|\le\max(|v|,|w|)\}$. \end{proof} \begin{lem}\label{le:range2} Let $\psi,\zeta\in\mathbb{R}$ and $v,w\in\mathbb{C}$ with $|v|<1$, $|w|<1$. If \begin{equation}\label{eq:gx} g(x):=(1-ve^{i\psi x})^{-1}p_2^+(x)+(1-we^{i\zeta x})^{-1}p_2^-(x), \quad x\in\mathbb{R}, \end{equation} then $g(\mathbb{R})\subset\mathbb{D}((1-r^2)^{-1},(1-r^2)^{-1}r)$, where $r=\max(|v|,|w|)<1$. \end{lem} \begin{proof} From \eqref{eq:p2} and \eqref{eq:gx} we see that for every $x\in\mathbb{R}$ the point $g(x)$ lies on the line segment connecting the points $(1-ve^{i\psi x})^{-1}$ and $(1-we^{i\zeta x})^{-1}$. In turn, these points lie on the images of the circles given by \eqref{eq:range1-1} under the inversion mapping $z\mapsto 1/z$. The image of the first circle in \eqref{eq:range1-1} is the circle $\mathbb{T}_v:=\{z\in\mathbb{C}:|z-b|=\rho\}$ with center and radius given by \begin{align*} b &=[(1-|v|)^{-1}+(1+|v|)^{-1}]/2=(1-|v|^2)^{-1},\\ \rho &=[(1-|v|)^{-1}-(1+|v|)^{-1}]/2=(1-|v|^2)^{-1}|v|. \end{align*} Analogously, the image of the second circle in \eqref{eq:range1-1} is the circle \[ \mathbb{T}_w:= \left\{z\in\mathbb{C}:\big|z-(1-|w|^2)^{-1}\big|=(1-|w|^2)^{-1}|w|\right\}. \] Let $\mathbb{D}_v$ and $\mathbb{D}_w$ be the closed disks whose boundaries are $\mathbb{T}_v$ and $\mathbb{T}_w$, respectively. Obviously, one of these disks is contained in another one, namely, $\mathbb{D}_v\subset\mathbb{D}_w$ if $|v|\le|w|$ and $\mathbb{D}_w\subset\mathbb{D}_v$ otherwise. 
Then each point $g(x)$, lying on the segment connecting the points $(1-ve^{i\psi x})^{-1}\in\mathbb{T}_v$ and $(1-we^{i\zeta x})^{-1}\in\mathbb{T}_w$, belongs to the larger of the disks $\mathbb{D}_v$ and $\mathbb{D}_w$, that is, to the disk with center $(1-r^2)^{-1}$ and radius $(1-r^2)^{-1}r$, where $r=\max(|v|,|w|)<1$. \end{proof} From Lemmas~\ref{le:range1} and~\ref{le:range2} it follows that the ranges $f(\mathbb{R})$ and $g(\mathbb{R})$ do not contain the origin if $|v|<1$ and $|w|<1$. \section{Weighted Singular Integral Operators Are Similar to Mellin Convolution Operators} \label{sec:Mellin-convolutions} \subsection{Mellin Convolution Operators} Let $\mathcal{F}:L^2(\mathbb{R})\to L^2(\mathbb{R})$ denote the Fourier transform, \[ (\mathcal{F} f)(x):=\int\limits_\mathbb{R} f(y)e^{-ixy}dy,\quad x\in\mathbb{R}, \] and let $\mathcal{F}^{-1}:L^2(\mathbb{R})\to L^2(\mathbb{R})$ be the inverse of $\mathcal{F}$. A function $a\in L^\infty(\mathbb{R})$ is called a Fourier multiplier on $L^p(\mathbb{R})$ if the mapping $f\mapsto \mathcal{F}^{-1}a\mathcal{F} f$ maps $L^2(\mathbb{R})\cap L^p(\mathbb{R})$ into itself and extends to a bounded operator on $L^p(\mathbb{R})$. The latter operator is then denoted by $W^0(a)$. We let $\mathcal{M}_p(\mathbb{R})$ stand for the set of all Fourier multipliers on $L^p(\mathbb{R})$. One can show that $\mathcal{M}_p(\mathbb{R})$ is a Banach algebra under the norm \[ \|a\|_{\mathcal{M}_p(\mathbb{R})}:=\|W^0(a)\|_{\mathcal{B}(L^p(\mathbb{R}))}. \] Let $d\mu(t)=dt/t$ be the (normalized) invariant measure on $\mathbb{R}_+$. Consider the Fourier transform on $L^2(\mathbb{R}_+,d\mu)$, which is usually referred to as the Mellin transform and is defined by \[ \mathcal{M}:L^2(\mathbb{R}_+,d\mu)\to L^2(\mathbb{R}), \quad (\mathcal{M} f)(x):=\int\limits_{\mathbb{R}_+} f(t) t^{-ix}\,\frac{dt}{t}.
\] It is an invertible operator, with inverse given by \[ {\mathcal{M}^{-1}}:L^2(\mathbb{R})\to L^2(\mathbb{R}_{+},d\mu), \quad ({\mathcal{M}^{-1}}g)(t)= \frac{1}{2\pi}\int\limits_{\mathbb{R}} g(x)t^{ix}\,dx. \] Let $E$ be the isometric isomorphism \begin{equation}\label{eq:def-E} E:L^p(\mathbb{R}_+,d\mu)\to L^p(\mathbb{R}), \quad (Ef)(x):=f(e^x),\quad x\in\mathbb{R}. \end{equation} Then the map $A\mapsto E^{-1}AE$ transforms the Fourier convolution operator $W^0(a)=\mathcal{F}^{-1}a\mathcal{F}$ to the Mellin convolution operator \[ \operatorname{Co}(a):=\mathcal{M}^{-1}a\mathcal{M} \] with the same symbol $a$. Hence the class of Fourier multipliers on $L^p(\mathbb{R})$ coincides with the class of Mellin multipliers on $L^p(\mathbb{R}_+,d\mu)$. \subsection{Algebra \boldmath{$\mathcal{A}$} of Singular Integral Operators} \label{subsec:algebra-A} Consider the isometric isomorphism \begin{equation}\label{eq:def-Phi} \Phi:L^p(\mathbb{R}_+)\to L^p(\mathbb{R}_+,d\mu), \quad (\Phi f)(t):=t^{1/p}f(t),\quad t\in\mathbb{R}_+. \end{equation} The following statement is well known (see, e.g., \cite{D87}, \cite[Section~2.1.2]{HRS94}, and \cite[Sections~4.2.2--4.2.3]{RSS11}). \begin{lem}\label{le:alg-A} For every $y\in(1,\infty)$, the functions $s_y$ and $r_y$ given by \[ s_y(x):=\coth[\pi(x+i/y)], \quad r_y(x):=1/\sinh[\pi(x+i/y)], \quad x\in\mathbb{R}, \] belong to $\mathcal{M}_p(\mathbb{R})$, the operators $S_y$ and $R_y$ belong to the algebra $\mathcal{A}$, and \[ S_y=\Phi^{-1}\operatorname{Co}(s_y)\Phi, \quad R_y=\Phi^{-1}\operatorname{Co}(r_y)\Phi. \] \end{lem} For $y\in(1,\infty)$ and $x\in\mathbb{R}$, put \[ p_y^\pm(x):=(1\pm s_y(x))/2. \] This definition is consistent with \eqref{eq:p2} because $s_2(x)=\tanh(\pi x)$ for $x\in\mathbb{R}$. In view of Lemma~\ref{le:alg-A} we have \[ P_y^\pm=(I\pm S_y)/2=\Phi^{-1}\operatorname{Co}(p_y^\pm)\Phi.
\] \begin{lem}\label{le:PR-relations} \begin{enumerate} \item[{\rm(a)}] For $y\in(1,\infty)$ and $x\in\mathbb{R}$, we have \[ p_y^+(x)p_y^-(x)=-\frac{(r_y(x))^2}{4}, \quad (p_y^\pm(x))^2=p_y^\pm(x)+\frac{(r_y(x))^2}{4}. \] \item[{\rm(b)}] For every $y\in(1,\infty)$, we have \[ P_y^+P_y^-=P_y^-P_y^+=-\frac{R_y^2}{4}, \quad (P_y^\pm)^2=P_y^\pm+\frac{R_y^2}{4}. \] \end{enumerate} \end{lem} \begin{proof} Part (a) follows from the identity $s_y^2(x)-r_y^2(x)=1$: indeed, \[ 4p_y^+(x)p_y^-(x)=1-s_y^2(x)=-r_y^2(x), \quad 4(p_y^\pm(x))^2=(1\pm s_y(x))^2=2(1\pm s_y(x))+s_y^2(x)-1=4p_y^\pm(x)+r_y^2(x). \] Part (b) follows from part (a) and Lemma~\ref{le:alg-A}. \end{proof} \section{Mellin Pseudodifferential Operators and Their Symbols} \label{sec:Mellin-PDO} \subsection{Boundedness of Mellin Pseudodifferential Operators} In 1991 Rabinovich \cite{R92} proposed to use Mellin pseudodifferential operators with $C^\infty$ slowly oscillating symbols to study singular integral operators with slowly oscillating coefficients on $L^p$ spaces. This idea was exploited in a series of papers by Rabinovich and coauthors. A detailed history and a complete bibliography up to 2004 can be found in \cite[Sections~4.6--4.7]{RRS04}. Further, the second author developed in \cite{K06} a theory of Fourier pseudodifferential operators with slowly oscillating symbols of limited smoothness (much less restrictive than in the works mentioned in \cite{RRS04}), which is convenient for our purposes. In this section we translate necessary results from \cite{K06} to the Mellin setting with the aid of the transformation \[ A\mapsto E^{-1}AE, \] where $A\in\mathcal{B}(L^p(\mathbb{R}))$ and the isometric isomorphism $E:L^p(\mathbb{R}_+,d\mu)\to L^p(\mathbb{R})$ is defined by \eqref{eq:def-E}. Let $a$ be an absolutely continuous function of finite total variation \[ V(a):=\int\limits_\mathbb{R}|a'(x)|dx \] on $\mathbb{R}$.
The set $V(\mathbb{R})$ of all absolutely continuous functions of finite total variation on $\mathbb{R}$ becomes a Banach algebra equipped with the norm \begin{equation}\label{eq:norm-V} \|a\|_V:=\|a\|_{L^\infty(\mathbb{R})}+V(a). \end{equation} Following \cite{K06,K06-IWOTA}, let $C_b(\mathbb{R}_+,V(\mathbb{R}))$ denote the Banach algebra of all bounded continuous $V(\mathbb{R})$-valued functions on $\mathbb{R}_+$ with the norm \[ \|\mathfrak{a}(\cdot,\cdot)\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))} = \sup_{t\in\mathbb{R}_+}\|\mathfrak{a}(t,\cdot)\|_V. \] As usual, let $C_0^\infty(\mathbb{R}_+)$ be the set of all infinitely differentiable functions of compact support on $\mathbb{R}_+$. The following boundedness result for Mellin pseudodifferential operators follows from \cite[Theorem~6.1]{K06-IWOTA} (see also \cite[Theorem~3.1]{K06}). \begin{thm}\label{th:boundedness-PDO} If $\mathfrak{a}\in C_b(\mathbb{R}_+,V(\mathbb{R}))$, then the Mellin pseudodifferential operator $\operatorname{Op}(\mathfrak{a})$, defined for functions $f\in C_0^\infty(\mathbb{R}_+)$ by the iterated integral \[ [\operatorname{Op}(\mathfrak{a})f](t) = \frac{1}{2\pi}\int\limits_\mathbb{R} dx \int\limits_{\mathbb{R}_+} \mathfrak{a}(t,x)\left(\frac{t}{\tau}\right)^{ix}f(\tau) \frac{d\tau}{\tau} \quad\mbox{for}\quad t\in\mathbb{R}_+, \] extends to a bounded linear operator on the space $L^p(\mathbb{R}_+,d\mu)$ and there is a number $C_p\in(0,\infty)$ depending only on $p$ such that \[ \|\operatorname{Op}(\mathfrak{a})\|_{\mathcal{B}(L^p(\mathbb{R}_+,d\mu))} \le C_p\|\mathfrak{a}\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))}. \] \end{thm} Obviously, if $\mathfrak{a}(t,x)=a(x)$ for all $(t,x)\in\mathbb{R}_+\times\mathbb{R}$, then the Mellin pseudodifferential operator $\operatorname{Op}(\mathfrak{a})$ becomes the Mellin convolution operator \[ \operatorname{Op}(\mathfrak{a})=\operatorname{Co}(a). 
\] \subsection{Algebra \boldmath{$\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$}} Let $SO(\mathbb{R}_+,V(\mathbb{R}))$ denote the Banach subalgebra of $C_b(\mathbb{R}_+,V(\mathbb{R}))$ consisting of all $V(\mathbb{R})$-valued functions $\mathfrak{a}$ on $\mathbb{R}_+$ that slowly oscillate at $0$ and $\infty$, that is, \[ \lim_{r\to 0} \operatorname{cm}_r^C(\mathfrak{a}) = \lim_{r\to \infty} \operatorname{cm}_r^C(\mathfrak{a})=0, \] where \[ \operatorname{cm}_r^C(\mathfrak{a}) := \max\big\{ \big\|\mathfrak{a}(t,\cdot)-\mathfrak{a}(\tau,\cdot)\big\|_{L^\infty(\mathbb{R})}:t,\tau\in[r,2r] \big\}. \] Let $\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$ be the Banach algebra of all $V(\mathbb{R})$-valued functions $\mathfrak{a}\in SO(\mathbb{R}_+,V(\mathbb{R}))$ such that \[ \lim_{|h|\to 0}\sup_{t\in\mathbb{R}_+}\big\|\mathfrak{a}(t,\cdot)-\mathfrak{a}^h(t,\cdot)\big\|_V=0 \] where $\mathfrak{a}^h(t,x):=\mathfrak{a}(t,x+h)$ for all $(t,x)\in\mathbb{R}_+\times \mathbb{R}$. Let $\mathfrak{a}\in\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$. For every $t\in\mathbb{R}_+$, the function $\mathfrak{a}(t,\cdot)$ belongs to $V(\mathbb{R})$ and, therefore, has finite limits at $\pm\infty$, which will be denoted by $\mathfrak{a}(t,\pm\infty)$. Now we explain how to extend the function $\mathfrak{a}$ to $\Delta\times\overline{\mathbb{R}}$. By analogy with \cite[Lemma~2.7]{K06} with the aid of Lemma~\ref{le:SO-fundamental-property} one can prove the following. \begin{lem}\label{le:values} Let $s\in\{0,\infty\}$ and $\{\mathfrak{a}_k\}_{k=1}^\infty$ be a countable subset of the algebra $\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$. For each $\xi\in M_s(SO(\mathbb{R}_+))$ there is a sequence $\{t_j\}_{j\in\mathbb{N}}\subset\mathbb{R}_+$ and functions $\mathfrak{a}_k(\xi,\cdot)\in V(\mathbb{R})$ such that $t_j\to s$ as $j\to\infty$ and \[ \mathfrak{a}_k(\xi,x)=\lim_{j\to\infty}\mathfrak{a}_k(t_j,x) \] for every $x\in\overline{\mathbb{R}}$ and every $k\in\mathbb{N}$. 
\end{lem} A straightforward application of Lemma~\ref{le:values} leads to the following. \begin{lem}\label{le:values-sum-product} Let $\mathfrak{b}\in\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$, $m,n\in\mathbb{N}$, and $\mathfrak{a}_{ij}\in\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$ for $i\in\{1,\dots,m\}$ and $j\in\{1,\dots,n\}$. If \[ \mathfrak{b}(t,x)=\sum_{i=1}^m\prod_{j=1}^n\mathfrak{a}_{ij}(t,x),\quad (t,x)\in\mathbb{R}_+\times\mathbb{R}, \] then \[ \mathfrak{b}(\xi,x)=\sum_{i=1}^m\prod_{j=1}^n\mathfrak{a}_{ij}(\xi,x),\quad (\xi,x)\in\Delta\times\overline{\mathbb{R}}. \] \end{lem} \begin{lem}[{\cite[Lemma 3.2]{KKL14b}}] \label{le:values-series} Let $\{\mathfrak{a}_n\}_{n\in\mathbb{N}}$ be a sequence of functions in $\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$ such that the series $\sum_{n=1}^\infty\mathfrak{a}_n$ converges in the norm of the algebra $C_b(\mathbb{R}_+,V(\mathbb{R}))$ to a function $\mathfrak{a}\in\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$. Then \begin{align} & \mathfrak{a}(t,\pm\infty) =\sum_{n=1}^\infty \mathfrak{a}_n(t,\pm\infty) && \mbox{for all}\quad t\in\mathbb{R}_+, \label{eq:values-series-1} \\ & \mathfrak{a}(\xi,x) =\sum_{n=1}^\infty \mathfrak{a}_n(\xi,x) && \mbox{for all}\quad (\xi,x)\in\Delta\times\mathbb{R}. \label{eq:values-series-2} \end{align} \end{lem} \subsection{Products of Mellin Pseudodifferential Operators} Applying the relation \begin{equation}\label{eq:translation-PDO} \operatorname{Op}(\mathfrak{a})=E^{-1}a(x,D)E \end{equation} between the Mellin pseudodifferential operator $\operatorname{Op}(\mathfrak{a})$ and the Fourier pseudodifferential operator $a(x,D)$ considered in \cite{K06}, where \begin{equation}\label{eq:translation-symbols} \mathfrak{a}(t,x)=a(\ln t,x),\quad(t,x)\in\mathbb{R}_+\times\mathbb{R}, \end{equation} and $E$ is given by \eqref{eq:def-E}, we infer from \cite[Theorem~8.3]{K06} the following compactness result. 
\begin{thm}\label{th:comp-semi-commutators-PDO} If $\mathfrak{a},\mathfrak{b}\in\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$, then \[ \operatorname{Op}(\mathfrak{a})\operatorname{Op}(\mathfrak{b})\simeq \operatorname{Op}(\mathfrak{a}\mathfrak{b}). \] \end{thm} From \eqref{eq:def-E}, \eqref{eq:translation-PDO}--\eqref{eq:translation-symbols}, \cite[Lemmas~7.1,~7.2]{K06}, and the proof of \cite[Lemma~8.1]{K06} we can extract the following. \begin{lem}\label{le:PDO-3-operators} If $\mathfrak{a},\mathfrak{b},\mathfrak{c}\in\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$ are such that $\mathfrak{a}$ depends only on the first variable and $\mathfrak{c}$ depends only on the second variable, then \[ \operatorname{Op}(\mathfrak{a})\operatorname{Op}(\mathfrak{b})\operatorname{Op}(\mathfrak{c}) = \operatorname{Op}(\mathfrak{a}\mathfrak{b}\mathfrak{c}). \] \end{lem} \subsection{Fredholmness of Mellin Pseudodifferential Operators} To study the Fredholmness of Mellin pseudodifferential operators, we need the Banach algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ consisting of all functions $\mathfrak{a}$ belonging to $\mathcal{E}(\mathbb{R}_+,V(\mathbb{R}))$ and such that \[ \lim_{m\to\infty}\sup_{t\in\mathbb{R}_+}\int\limits_{\mathbb{R}\setminus[-m,m]} \left|\frac{\partial \mathfrak{a}(t,x)}{\partial x}\right|\,dx=0. \] Now we are in a position to formulate the main result of this section. \begin{thm}[{\cite[Theorem~5.8]{KKL14b}}] \label{th:Fredholmness-PDO} Suppose $\mathfrak{a}\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. \begin{enumerate} \item[{\rm(a)}] If the Mellin pseudodifferential operator $\operatorname{Op}(\mathfrak{a})$ is Fredholm on the space $L^p(\mathbb{R}_+,d\mu)$, then \begin{equation}\label{eq:Fredholmness-PDO-1} \mathfrak{a}(t,\pm\infty)\ne 0 \ \text{ for all }\ t\in\mathbb{R}_+, \quad \mathfrak{a}(\xi,x)\ne 0 \ \text{ for all }\ (\xi,x)\in\Delta\times\overline{\mathbb{R}}. 
\end{equation} \item[{\rm(b)}] If \eqref{eq:Fredholmness-PDO-1} holds, then the Mellin pseudodifferential operator $\operatorname{Op}(\mathfrak{a})$ is Fredholm on the space $L^p(\mathbb{R}_+,d\mu)$ and each of its regularizers has the form $\operatorname{Op}(\mathfrak{b})+K$, where $K$ is a compact operator on the space $L^p(\mathbb{R}_+,d\mu)$ and $\mathfrak{b}\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ is such that \begin{align*} &\mathfrak{b}(t,\pm\infty)=1/\mathfrak{a}(t,\pm\infty) \mbox{ for all }\ t\in\mathbb{R}_+, \\ &\mathfrak{b}(\xi,x)=1/\mathfrak{a}(\xi,x) \mbox{ for all }\ (\xi,x)\in\Delta\times\overline{\mathbb{R}}. \end{align*} \end{enumerate} \end{thm} Note that part (a) follows from \cite[Theorem~4.3]{K09} and part (b) is the main result of \cite{KKL14b}. \section{Applications of Mellin Pseudodifferential Operators} \label{sec:applications-Mellin-PDO} \subsection{Some Important Functions in the Algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$} \begin{lem}[{\cite[Lemma~4.2]{KKL14a}}] \label{le:g-sp-rp} Let $g\in SO(\mathbb{R}_+)$. Then for every $y\in(1,\infty)$ the functions \[ \mathfrak{g}(t,x):=g(t), \quad \mathfrak{s}_y(t,x):=s_y(x), \quad \mathfrak{r}_y(t,x):=r_y(x), \quad (t,x)\in\mathbb{R}_+\times\mathbb{R}, \] belong to the Banach algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. \end{lem} \begin{lem}[{\cite[Lemma~4.3]{KKL14a}}] \label{le:fb} Suppose $\omega\in SO(\mathbb{R}_+)$ is a real-valued function. Then for every $y\in(1,\infty)$ the function \[ \mathfrak{b}(t,x):=e^{i\omega(t)x}r_y(x), \quad (t,x)\in\mathbb{R}_+\times\mathbb{R}, \] belongs to the Banach algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ and there is a positive constant $C(y)$ depending only on $y$ such that \[ \|\mathfrak{b}\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))} \le C(y)\left(1+\sup_{t\in\mathbb{R}_+}|\omega(t)|\right).
\] \end{lem} \subsection{Operator $U_\gamma R_y$} \begin{lem}[{\cite[Lemma~4.4]{KKL14a}}] \label{le:shift-R-exact} Let $\gamma\in SOS(\mathbb{R}_+)$ and $U_\gamma$ be the associated isometric shift operator on $L^p(\mathbb{R}_+)$. For every $y\in(1,\infty)$, the operator $U_\gamma R_y$ can be realized as the Mellin pseudodifferential operator: \[ U_\gamma R_y = \Phi^{-1}\operatorname{Op} (\mathfrak{d}) \Phi, \] where the function $\mathfrak{d}$, given for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ by \[ \mathfrak{d}(t,x):=(1+t\psi'(t))^{1/p} e^{i\psi(t)x}r_y(x) \quad\mbox{with}\quad \psi(t):=\log[\gamma(t)/t], \] belongs to the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. \end{lem} \subsection{Operator $(I-vU_\gamma)R_y$} The previous lemma can be easily generalized to the case of operators containing slowly oscillating coefficients. \begin{lem}\label{le:A-R} Let $y\in(1,\infty)$, $v\in SO(\mathbb{R}_+)$, and $\gamma\in SOS(\mathbb{R}_+)$. Then \begin{enumerate} \item[{\rm(a)}] the operator $(I-vU_\gamma) R_y$ can be realized as the Mellin pseudodifferential operator: \[ (I-vU_\gamma) R_y = \Phi^{-1}\operatorname{Op} (\mathfrak{a}) \Phi, \] where the function $\mathfrak{a}$, given for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ by \[ \mathfrak{a}(t,x):= \big(1-v(t)\big(\Psi(t)\big)^{1/p}e^{i\psi(t)x}\big)r_y(x) \] with $\psi(t):=\log[\gamma(t)/t]$ and $\Psi(t):=1+t\psi'(t)$, belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$; \item[{\rm(b)}] we have \[ \mathfrak{a}(\xi,x)= \left\{\begin{array}{lll} (1-v(\xi)e^{i\psi(\xi)x})r_y(x), &\mbox{if}& (\xi,x)\in\Delta\times\mathbb{R}, \\ 0, &\mbox{if}& (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}. \end{array}\right. \] \end{enumerate} \end{lem} \begin{proof} (a) This statement follows straightforwardly from Lemmas~\ref{le:g-sp-rp}, \ref{le:shift-R-exact}, and \ref{le:PDO-3-operators}. 
(b) If $t\in\mathbb{R}_+$, then obviously \begin{equation}\label{eq:A-R-1} \mathfrak{a}(t,x)=0 \quad\mbox{for}\quad x\in\{\pm\infty\}. \end{equation} By Lemma~\ref{le:exponent-shift}, $\psi\in SO(\mathbb{R}_+)$. Since $v,\psi\in SO(\mathbb{R}_+)$, from Lemma~\ref{le:g-sp-rp} it follows that the functions \begin{equation}\label{eq:A-R-2} \mathfrak{v}(t,x):=v(t), \quad \widetilde{\psi}(t,x):=\psi(t), \quad(t,x)\in\mathbb{R}_+\times\mathbb{R}, \end{equation} belong to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Consider the finite family $\{\mathfrak{a},\mathfrak{v},\widetilde{\psi}\}\subset\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Fix $s\in\{0,\infty\}$ and $\xi\in M_s(SO(\mathbb{R}_+))$. By Lemma~\ref{le:values} and \eqref{eq:A-R-2}, there is a sequence $\{t_j\}_{j\in\mathbb{N}}\subset\mathbb{R}_+$ and a function $\mathfrak{a}(\xi,\cdot)\in V(\mathbb{R})$ such that \begin{equation}\label{eq:A-R-3} \lim_{j\to\infty}t_j=s, \quad v(\xi)=\lim_{j\to\infty}v(t_j), \quad \psi(\xi)=\lim_{j\to\infty}\psi(t_j), \end{equation} \begin{equation}\label{eq:A-R-4} \mathfrak{a}(\xi,x)=\lim_{j\to\infty}\mathfrak{a}(t_j,x), \quad x\in\overline{\mathbb{R}}. \end{equation} From Lemmas~\ref{le:SO-nec} and~\ref{le:exponent-shift} we obtain \begin{equation}\label{eq:A-R-5} \lim_{j\to\infty}(\Psi(t_j))^{1/p}=1. \end{equation} From \eqref{eq:A-R-1} and \eqref{eq:A-R-4} we get \[ \mathfrak{a}(\xi,x)=0 \quad\mbox{for}\quad (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}. \] Finally, from \eqref{eq:A-R-3}--\eqref{eq:A-R-5} we obtain for $(\xi,x)\in\Delta\times\mathbb{R}$, \begin{align*} \mathfrak{a}(\xi,x) &=\lim_{j\to\infty}\mathfrak{a}(t_j,x) \\ &=\left( 1-\left(\lim_{j\to\infty}v(t_j)\right) \left(\lim_{j\to\infty}(\Psi(t_j))^{1/p}\right) \exp\left(ix\lim_{j\to\infty}\psi(t_j)\right) \right)r_y(x) \\ &= (1-v(\xi)e^{i\psi(\xi)x})r_y(x), \end{align*} which completes the proof.
\end{proof} \subsection{Operator $(I-vU_\gamma)^{-1}R_y$} The following statement is crucial for our analysis. It says that the operators $(I-vU_\gamma)R_y$ and $(I-vU_\gamma)^{-1}R_y$ are of a similar nature. \begin{lem}\label{le:A-inverse-R} Let $y\in(1,\infty)$, $v\in SO(\mathbb{R}_+)$, and $\gamma\in SOS(\mathbb{R}_+)$. If $1\gg v$, then \begin{enumerate} \item[{\rm(a)}] the operator $A:=I-vU_\gamma$ is invertible on $L^p(\mathbb{R}_+)$; \item[{\rm(b)}] the operator $A^{-1}R_y$ can be realized as the Mellin pseudodifferential operator: \begin{equation}\label{eq:A-inverse-R-1} A^{-1}R_y=\Phi^{-1}\operatorname{Op}(\mathfrak{c})\Phi, \end{equation} where the function $\mathfrak{c}$, given for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ by \begin{align}\label{eq:A-inverse-R-2} &&\quad \mathfrak{c}(t,x):=r_y(x)+\sum_{n=1}^\infty \left(\prod_{k=0}^{n-1} v[\gamma_k(t)]\big(\Psi[\gamma_k(t)]\big)^{1/p}e^{i\psi[\gamma_k(t)]x}\right) r_y(x) \end{align} with $\psi(t):=\log[\gamma(t)/t]$ and $\Psi(t):=1+t\psi'(t)$, belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$; \item[{\rm(c)}] we have \begin{align*} &&\ \mathfrak{c}(\xi,x)=\left\{\begin{array}{lll} (1-v(\xi)e^{i\psi(\xi)x})^{-1}r_y(x), &\mbox{if}& (\xi,x)\in\Delta\times\mathbb{R}, \\ 0, &\mbox{if}& (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}. \end{array}\right. \end{align*} \end{enumerate} \end{lem} \begin{proof} (a) Since $1\gg v$, from Lemma~\ref{le:FO} we conclude that $A$ is invertible on the space $L^p(\mathbb{R}_+)$ and \begin{equation}\label{eq:A-inverse-R-3} A^{-1}=\sum_{n=0}^\infty (vU_\gamma)^n. \end{equation} Part (a) is proved.
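For orientation, in the simplest model case of a constant coefficient $v(t)\equiv v_0$ with $|v_0|<1$ (so that $1\gg v$), the convergence of the series \eqref{eq:A-inverse-R-3} can also be seen directly: since $U_\gamma$ is an isometry, $\|(v_0U_\gamma)^n\|_{\mathcal{B}(L^p(\mathbb{R}_+))}\le|v_0|^n$, whence
\[
\Big\|\sum_{n=0}^\infty (v_0U_\gamma)^n\Big\|_{\mathcal{B}(L^p(\mathbb{R}_+))}
\le\sum_{n=0}^\infty|v_0|^n=\frac{1}{1-|v_0|}<\infty.
\]
In the general case the convergence of the series \eqref{eq:A-inverse-R-3} is guaranteed by Lemma~\ref{le:FO}.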
(b) By Lemmas~\ref{le:alg-A} and~\ref{le:g-sp-rp}, \begin{equation}\label{eq:A-inverse-R-4} R_y=\Phi^{-1}\operatorname{Op}(\mathfrak{c}_0)\Phi, \end{equation} where the function $\mathfrak{c}_0$, given by \begin{equation}\label{eq:A-inverse-R-5} \mathfrak{c}_0(t,x):=r_y(x), \quad (t,x)\in\mathbb{R}_+\times\mathbb{R}, \end{equation} belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. If $\gamma\in SOS(\mathbb{R}_+)$, then from Lemma~\ref{le:iterations} it follows that $\gamma_n\in SOS(\mathbb{R}_+)$ for every $n\in\mathbb{Z}$. By Lemma~\ref{le:exponent-shift}, the functions \begin{equation}\label{eq:A-inverse-R-6} \psi_n(t):=\log\frac{\gamma_n(t)}{t}, \quad \Psi_n(t):=1+t\psi_n'(t), \quad t\in\mathbb{R}_+,\quad n\in\mathbb{Z}, \end{equation} are real-valued functions in $SO(\mathbb{R}_+)\cap C^1(\mathbb{R}_+)$. For every $n\in\mathbb{N}$, \begin{equation}\label{eq:A-inverse-R-7} (vU_\gamma)^nR_y=\left(\prod_{k=0}^{n-1}v\circ\gamma_k\right)U_{\gamma_n}R_y. \end{equation} By Lemma~\ref{le:shift-R-exact}, \begin{equation}\label{eq:A-inverse-R-8} U_{\gamma_n}R_y=\Phi^{-1}\operatorname{Op}(\mathfrak{d}_n)\Phi, \end{equation} where the function $\mathfrak{d}_n$, given by \begin{equation}\label{eq:A-inverse-R-9} \mathfrak{d}_n(t,x):=\big(\Psi_n(t)\big)^{1/p}e^{i\psi_n(t)x}r_y(x), \quad (t,x)\in\mathbb{R}_+\times\mathbb{R}, \end{equation} belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. From \eqref{eq:A-inverse-R-6} it follows that \[ \psi_n(t) = \log\frac{\gamma_{n-1}[\gamma(t)]}{t} = \log\frac{\gamma_{n-1}[\gamma(t)]}{\gamma(t)}+\log\frac{\gamma(t)}{t} = \psi_{n-1}[\gamma(t)]+\psi(t). \] Therefore \begin{equation}\label{eq:A-inverse-R-10} \psi_n'(t)=\psi_{n-1}'[\gamma(t)]\gamma'(t)+\psi'(t).
\end{equation} By using $\gamma(t)=te^{\psi(t)}$ and $\gamma'(t)=\Psi(t)e^{\psi(t)}$, from \eqref{eq:A-inverse-R-6} and \eqref{eq:A-inverse-R-10} we get \begin{align*} \Psi_n(t) &= t\psi_{n-1}'[\gamma(t)]\Psi(t)e^{\psi(t)}+(1+t\psi'(t)) \\ &= \Psi(t)\big(1+\gamma(t)\psi_{n-1}'[\gamma(t)]\big) = \Psi(t)\Psi_{n-1}[\gamma(t)]. \end{align*} From this identity by induction we get \begin{equation}\label{eq:A-inverse-R-11} \Psi_n(t)=\prod_{k=0}^{n-1}\Psi[\gamma_k(t)], \quad t\in\mathbb{R}_+, \quad n\in\mathbb{N}. \end{equation} From \eqref{eq:A-inverse-R-7}--\eqref{eq:A-inverse-R-9} and \eqref{eq:A-inverse-R-11} we get \begin{equation}\label{eq:A-inverse-R-12} (vU_\gamma)^nR_y=\Phi^{-1}\operatorname{Op}(\mathfrak{c}_n)\Phi, \quad n\in\mathbb{N}, \end{equation} where the function $\mathfrak{c}_n$ is given for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ by \begin{equation}\label{eq:A-inverse-R-13} \mathfrak{c}_n(t,x):=a_n(t)\mathfrak{b}_n(t,x) \end{equation} with \begin{equation}\label{eq:A-inverse-R-14} a_n(t):=\prod_{k=0}^{n-1}v[\gamma_k(t)]\big(\Psi[\gamma_k(t)]\big)^{1/p}, \quad \mathfrak{b}_n(t,x):=e^{i\psi_n(t)x}r_y(x). \end{equation} By the hypothesis, $v\in SO(\mathbb{R}_+)$. On the other hand, $\Psi\in SO(\mathbb{R}_+)$ in view of Lemma~\ref{le:exponent-shift}. Hence $\Psi^{1/p}\in SO(\mathbb{R}_+)$. Then, due to Lemmas~\ref{le:composition} and \ref{le:iterations}, $a_n\in SO(\mathbb{R}_+)$ for all $n\in\mathbb{N}$. Therefore, from Lemma~\ref{le:g-sp-rp} we obtain that $\mathfrak{a}_n(t,x):=a_n(t)$, $(t,x)\in\mathbb{R}_+\times\mathbb{R}$, belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. On the other hand, by Lemma~\ref{le:fb}, $\mathfrak{b}_n\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Thus, $\mathfrak{c}_n=a_n\mathfrak{b}_n$ belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ for every $n\in\mathbb{N}$. 
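To illustrate formulas \eqref{eq:A-inverse-R-6} and \eqref{eq:A-inverse-R-11}, consider the model shift $\gamma(t)=e^{c}t$ with a constant $c\in\mathbb{R}\setminus\{0\}$ (a pure dilation, which lies in $SOS(\mathbb{R}_+)$). Then $\gamma_k(t)=e^{kc}t$ for $k\in\mathbb{Z}$ and
\[
\psi(t)\equiv c,
\quad
\Psi(t)\equiv 1,
\quad
\psi_n(t)\equiv nc=\sum_{k=0}^{n-1}\psi[\gamma_k(t)],
\quad
\Psi_n(t)\equiv 1=\prod_{k=0}^{n-1}\Psi[\gamma_k(t)],
\]
in agreement with \eqref{eq:A-inverse-R-11}.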
Following the proof of \cite[Lemma~2.1]{KKL03} (see also \cite[Theorem~2.2]{A96}), let us show that \begin{equation}\label{eq:A-inverse-R-15} \limsup_{n\to\infty}\|a_n\|_{C_b(\mathbb{R}_+)}^{1/n}<1. \end{equation} By Lemmas~\ref{le:SO-nec} and \ref{le:exponent-shift}, \begin{equation}\label{eq:A-inverse-R-16} \lim_{t\to s}\Psi(t)=1+\lim_{t\to s}t\psi'(t)=1, \quad s\in\{0,\infty\}. \end{equation} If $1\gg v$, then \begin{equation}\label{eq:A-inverse-R-17} \limsup_{t\to s}|v(t)|<1, \quad s\in\{0,\infty\}. \end{equation} From \eqref{eq:A-inverse-R-16}--\eqref{eq:A-inverse-R-17} it follows that \[ L^*(s):=\limsup_{t\to s}\big|v(t)\big(\Psi(t)\big)^{1/p}\big|<1, \quad s\in\{0,\infty\}. \] Fix $\varepsilon>0$ such that $L^*(s)+\varepsilon<1$ for $s\in\{0,\infty\}$. By the definition of $L^*(s)$, there exist points $t_1,t_2\in\mathbb{R}_+$ such that \begin{equation}\label{eq:A-inverse-R-18} \begin{array}{lll} \big|v(t)\big(\Psi(t)\big)^{1/p}\big|<L^*(0)+\varepsilon &\quad\mbox{for}\quad &t\in(0,t_1), \\ \big|v(t)\big(\Psi(t)\big)^{1/p}\big|<L^*(\infty)+\varepsilon &\quad\mbox{for}\quad &t\in(t_2,\infty). \end{array} \end{equation} The mapping $\gamma$ has no fixed points other than $0$ and $\infty$. Hence, either $\gamma(t)>t$ or $\gamma(t)<t$ for all $t\in\mathbb{R}_+$. For definiteness, suppose that $\gamma(t)>t$ for all $t\in\mathbb{R}_+$. Then there exists a number $k_0\in\mathbb{N}$ such that $\gamma_{k_0}(t_1)\in (t_2,\infty)$. Put \[ M_1:=\sup_{t\in\mathbb{R}_+}\big|v(t)\big(\Psi(t)\big)^{1/p}\big|, \quad M_2:=\sup_{t\in\mathbb{R}_+\setminus[t_1,\gamma_{k_0}(t_1)]}\big|v(t)\big(\Psi(t)\big)^{1/p}\big|. \] Since $v\Psi^{1/p}\in SO(\mathbb{R}_+)$, we have $M_1<\infty$. Moreover, from \eqref{eq:A-inverse-R-18} we obtain \[ M_2\le\max(L^*(0),L^*(\infty))+\varepsilon<1. 
\] Then, for every $t\in\mathbb{R}_+$ and $n\in\mathbb{N}$, \begin{align*} |a_n(t)| &= \prod_{k=0}^{n-1}\big|v[\gamma_k(t)]\big(\Psi[\gamma_k(t)]\big)^{1/p}\big| \\ &\le M_1^{k_0}M_2^{n-k_0} \le M_1^{k_0}(\max(L^*(0),L^*(\infty))+\varepsilon)^{n-k_0}. \end{align*} From this we immediately get \eqref{eq:A-inverse-R-15}. Now let us show that \begin{equation}\label{eq:A-inverse-R-19} \limsup_{n\to\infty}\|\mathfrak{b}_n\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))}^{1/n}\le 1. \end{equation} By Lemma~\ref{le:fb}, there exists a constant $C(y)\in(0,\infty)$ depending only on $y$ such that for all $n\in\mathbb{N}$, \begin{equation}\label{eq:A-inverse-R-20} \|\mathfrak{b}_n\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))} \le C(y)\left(1+\sup_{t\in\mathbb{R}_+}|\psi_n(t)|\right). \end{equation} From \eqref{eq:A-inverse-R-6} we obtain \begin{equation}\label{eq:A-inverse-R-21} \psi_n(t) =\log\left( \prod_{k=0}^{n-1}\frac{\gamma[\gamma_k(t)]}{\gamma_k(t)}\right) = \sum_{k=0}^{n-1}\log\frac{\gamma[\gamma_k(t)]}{\gamma_k(t)} = \sum_{k=0}^{n-1}\psi[\gamma_k(t)]. \end{equation} Let \[ M_3:=\sup_{t\in\mathbb{R}_+}|\psi(t)|. \] Since $\gamma_k$ is a diffeomorphism of $\mathbb{R}_+$ onto itself for every $k\in\mathbb{Z}$, we have \begin{equation}\label{eq:A-inverse-R-22} M_3 = \sup_{t\in\mathbb{R}_+}|\psi(t)|=\sup_{t\in\mathbb{R}_+}|\psi[\gamma(t)]|=\dots=\sup_{t\in\mathbb{R}_+}|\psi[\gamma_{n-1}(t)]|. \end{equation} From \eqref{eq:A-inverse-R-20}--\eqref{eq:A-inverse-R-22} we obtain \[ \|\mathfrak{b}_n\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))} \le C(y)(1+M_3n), \quad n\in\mathbb{N}, \] which implies \eqref{eq:A-inverse-R-19}. Combining \eqref{eq:A-inverse-R-13}, \eqref{eq:A-inverse-R-15}, and \eqref{eq:A-inverse-R-19}, we arrive at \begin{align*} \limsup_{n\to\infty}\|\mathfrak{c}_n\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))}^{1/n} \le& \left(\limsup_{n\to\infty}\|a_n\|_{C_b(\mathbb{R}_+)}^{1/n}\right) \\ &\times \left(\limsup_{n\to\infty}\|\mathfrak{b}_n\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))}^{1/n}\right)<1.
\end{align*} This shows that the series $\sum_{n=0}^\infty \mathfrak{c}_n$ is absolutely convergent in the norm of $C_b(\mathbb{R}_+,V(\mathbb{R}))$. From \eqref{eq:A-inverse-R-13}--\eqref{eq:A-inverse-R-14} and \eqref{eq:A-inverse-R-21} we get for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ and $n\in\mathbb{N}$, \begin{equation}\label{eq:A-inverse-R-23} \mathfrak{c}_n(t,x)= \left(\prod_{k=0}^{n-1}v[\gamma_k(t)]\big(\Psi[\gamma_k(t)]\big)^{1/p} e^{i\psi[\gamma_k(t)]x}\right)r_y(x). \end{equation} We have already shown that $\mathfrak{c}_0$ given by \eqref{eq:A-inverse-R-5} and $\mathfrak{c}_n$, $n\in\mathbb{N}$, given by \eqref{eq:A-inverse-R-23} belong to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Thus $\mathfrak{c}:=\sum_{n=0}^\infty\mathfrak{c}_n$ is given by \eqref{eq:A-inverse-R-2} and belongs to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. From \eqref{eq:A-inverse-R-4}, \eqref{eq:A-inverse-R-12} and Theorem~\ref{th:boundedness-PDO} we get \begin{align*} \left\|\Phi^{-1}\operatorname{Op}(\mathfrak{c})\Phi-\sum_{n=0}^N(vU_\gamma)^nR_y\right\|_{\mathcal{B}(L^p(\mathbb{R}_+))} &=\left\|\Phi^{-1}\operatorname{Op}\left(\mathfrak{c}-\sum_{n=0}^N\mathfrak{c}_n\right)\Phi\right\|_{\mathcal{B}(L^p(\mathbb{R}_+))} \\ &\le C_p\left\|\mathfrak{c}-\sum_{n=0}^N\mathfrak{c}_n\right\|_{C_b(\mathbb{R}_+,V(\mathbb{R}))} \\ &= o(1) \quad \mbox{as}\quad N\to\infty. \end{align*} Hence \[ \sum_{n=0}^\infty (vU_\gamma)^nR_y=\Phi^{-1}\operatorname{Op}(\mathfrak{c})\Phi. \] Combining this identity with \eqref{eq:A-inverse-R-3}, we arrive at \eqref{eq:A-inverse-R-1}. Part (b) is proved. (c) From \eqref{eq:A-inverse-R-5} and \eqref{eq:A-inverse-R-23} it follows that $\mathfrak{c}_n(t,\pm\infty)=0$ for $n\in\mathbb{N}\cup\{0\}$ and $t\in\mathbb{R}_+$. Then, in view of Lemma~\ref{le:values-series}, \begin{equation}\label{eq:A-inverse-R-24} \mathfrak{c}(t,\pm\infty)=0, \quad t\in\mathbb{R}_+.
\end{equation} Since $v,\psi\in SO(\mathbb{R}_+)$, from Lemma~\ref{le:g-sp-rp} it follows that the functions \begin{equation}\label{eq:A-inverse-R-25} \mathfrak{v}(t,x):=v(t), \quad \widetilde{\psi}(t,x):=\psi(t), \quad (t,x)\in\mathbb{R}_+\times\mathbb{R}, \end{equation} belong to the Banach algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Consider the countable family $\{\mathfrak{v},\widetilde{\psi},\mathfrak{c}\}\cup\{\mathfrak{c}_n\}_{n=0}^\infty$ of functions in $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. Fix $s\in\{0,\infty\}$ and $\xi\in M_s(SO(\mathbb{R}_+))$. By Lemma~\ref{le:values} and \eqref{eq:A-inverse-R-25}, there is a sequence $\{t_j\}_{j\in\mathbb{N}}\subset\mathbb{R}_+$ and functions $\mathfrak{c}(\xi,\cdot)\in V(\mathbb{R})$, $\mathfrak{c}_n(\xi,\cdot)\in V(\mathbb{R})$, $n\in\mathbb{N}\cup\{0\}$, such that \begin{equation}\label{eq:A-inverse-R-26} \lim_{j\to\infty}t_j=s, \quad v(\xi)=\lim_{j\to\infty}v(t_j), \quad \psi(\xi)=\lim_{j\to\infty}\psi(t_j), \end{equation} and for $n\in\mathbb{N}\cup\{0\}$ and $x\in\overline{\mathbb{R}}$, \begin{equation}\label{eq:A-inverse-R-27} \mathfrak{c}_n(\xi,x)=\lim_{j\to\infty}\mathfrak{c}_n(t_j,x), \quad \mathfrak{c}(\xi,x)=\lim_{j\to\infty}\mathfrak{c}(t_j,x). \end{equation} From \eqref{eq:A-inverse-R-24} and the second limit in \eqref{eq:A-inverse-R-27} we get \begin{equation}\label{eq:A-inverse-R-28} \mathfrak{c}(\xi,\pm\infty)=\lim_{j\to\infty}\mathfrak{c}(t_j,\pm\infty)=0. \end{equation} Trivially, \begin{equation}\label{eq:A-inverse-R-29} \mathfrak{c}_0(\xi,x)=r_y(x), \quad (\xi,x)\in(\Delta\cup\mathbb{R}_+)\times\overline{\mathbb{R}}. \end{equation} From Lemmas~\ref{le:SO-nec} and \ref{le:exponent-shift} we obtain \begin{equation}\label{eq:A-inverse-R-30} \lim_{t\to s}\big(\Psi(t)\big)^{1/p}=1, \quad s\in\{0,\infty\}.
\end{equation} From Lemma~\ref{le:iterations} it follows that for $k\in\mathbb{N}$, \begin{equation}\label{eq:A-inverse-R-31} \lim_{j\to\infty}v(t_j)=\lim_{j\to\infty}v[\gamma_k(t_j)], \quad \lim_{j\to\infty}\psi(t_j)=\lim_{j\to\infty}\psi[\gamma_k(t_j)]. \end{equation} Combining \eqref{eq:A-inverse-R-23}, \eqref{eq:A-inverse-R-26}, the first limit in \eqref{eq:A-inverse-R-27}, and \eqref{eq:A-inverse-R-30}--\eqref{eq:A-inverse-R-31}, we get for $x\in\mathbb{R}$ and $n\in\mathbb{N}$, \begin{align} \mathfrak{c}_n(\xi,x) &= \lim_{j\to\infty} \left( \prod_{k=0}^{n-1}v[\gamma_k(t_j)]\big(\Psi[\gamma_k(t_j)]\big)^{1/p}e^{i\psi[\gamma_k(t_j)]x} \right)r_y(x) \nonumber \\ &= \big(v(\xi)e^{i\psi(\xi)x}\big)^nr_y(x). \label{eq:A-inverse-R-32} \end{align} From \eqref{eq:A-inverse-R-29}, \eqref{eq:A-inverse-R-32}, and Lemma~\ref{le:values-series} we obtain \begin{equation}\label{eq:A-inverse-R-33} \mathfrak{c}(\xi,x) = \sum_{n=0}^\infty \big(v(\xi)e^{i\psi(\xi)x}\big)^nr_y(x). \end{equation} Since $1\gg v$, we have \[ \limsup_{t\to s}|v(t)|<1, \quad s\in\{0,\infty\}, \] whence, in view of Lemma~\ref{le:SO-fundamental-property}, we obtain \[ |v(\xi) e^{i\psi(\xi)x}|\le\max_{s\in\{0,\infty\}}\left(\limsup_{t\to s}|v(t)|\right)<1. \] Therefore, \begin{equation}\label{eq:A-inverse-R-34} \sum_{n=0}^\infty\big(v(\xi)e^{i\psi(\xi)x}\big)^n=\big(1-v(\xi)e^{i\psi(\xi)x}\big)^{-1}. \end{equation} From \eqref{eq:A-inverse-R-33} and \eqref{eq:A-inverse-R-34} we get \begin{equation}\label{eq:A-inverse-R-35} \mathfrak{c}(\xi,x)=\big(1-v(\xi)e^{i\psi(\xi)x}\big)^{-1}r_y(x), \quad (\xi,x)\in\Delta\times\mathbb{R}. \end{equation} Combining \eqref{eq:A-inverse-R-24}, \eqref{eq:A-inverse-R-28}, and \eqref{eq:A-inverse-R-35}, we arrive at the assertion of part (c). 
\end{proof} \section{Fredholmness and Index of the Operator $V$}\label{sec:proof} \subsection{First Step of Regularization} \begin{lem}\label{le:reg1} Let $\alpha,\beta\in SOS(\mathbb{R}_+)$ and let $c,d\in SO(\mathbb{R}_+)$ be such that $1\gg c$ and $1\gg d$. Then for every $\mu\in[0,1]$ and $y\in(1,\infty)$ the following statements hold: \begin{enumerate} \item[{\rm (a)}] the operators $I-\mu cU_\alpha$ and $I-\mu dU_\beta$ are invertible on $L^p(\mathbb{R}_+)$; \item[{\rm(b)}] for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$, the functions \begin{align} \mathfrak{a}_{\mu,y}^{c,\alpha}(t,x) &:=(1-\mu c(t)(\Omega(t))^{1/p}e^{i\omega(t)x})r_y(x), \label{eq:reg1-1} \\ \mathfrak{a}_{\mu,y}^{d,\beta}(t,x) &:=(1-\mu d(t)(H(t))^{1/p}e^{i\eta(t)x})r_y(x) \label{eq:reg1-2} \end{align} and \begin{align} \mathfrak{c}_{\mu,y}^{c,\alpha}(t,x) :=& r_y(x) \nonumber\\ &+\sum_{n=1}^\infty\mu^n \left(\prod_{k=0}^{n-1}c[\alpha_k(t)]\big(\Omega[\alpha_k(t)]\big)^{1/p} e^{i\omega[\alpha_k(t)]x}\right)r_y(x), \label{eq:reg1-3} \\ \mathfrak{c}_{\mu,y}^{d,\beta}(t,x) :=& r_y(x) \nonumber\\ &+\sum_{n=1}^\infty\mu^n \left(\prod_{k=0}^{n-1}d[\beta_k(t)]\big(H[\beta_k(t)]\big)^{1/p} e^{i\eta[\beta_k(t)]x}\right)r_y(x), \label{eq:reg1-4} \end{align} with \begin{align} &\omega(t)=\log[\alpha(t)/t], \quad \Omega(t)=1+t\omega'(t), \label{eq:reg1-psi} \\ &\eta(t)=\log[\beta(t)/t], \quad H(t)=1+t\eta'(t), \label{eq:reg1-zeta} \end{align} belong to the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$; \item[{\rm(c)}] the operators \begin{align} V_{\mu,y} &:= (I-\mu cU_\alpha)P_y^++(I-\mu dU_\beta)P_y^-, \label{eq:reg1-5} \\ L_{\mu,y} &:= (I-\mu cU_\alpha)^{-1}P_y^++(I-\mu dU_\beta)^{-1}P_y^- \label{eq:reg1-6} \end{align} are related by \begin{equation}\label{eq:reg1-7} V_{\mu,y}L_{\mu,y}\simeq L_{\mu,y}V_{\mu,y}\simeq H_{\mu,y}, \end{equation} where \begin{equation}\label{eq:reg1-8} H_{\mu,y}:=\Phi^{-1}\operatorname{Op}(\mathfrak{h}_{\mu,y})\Phi \end{equation} and the function 
$\mathfrak{h}_{\mu,y}$, given for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ by \begin{align} \mathfrak{h}_{\mu,y}(t,x) :=1+ \frac{1}{4}\big[& 2(r_y(x))^2 \nonumber\\ &- \mathfrak{a}_{\mu,y}^{d,\beta}(t,x)\mathfrak{c}_{\mu,y}^{c,\alpha}(t,x) - \mathfrak{a}_{\mu,y}^{c,\alpha}(t,x)\mathfrak{c}_{\mu,y}^{d,\beta}(t,x) \big], \label{eq:reg1-9} \end{align} belongs to the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. \end{enumerate} \end{lem} \begin{proof} (a) From Lemma~\ref{le:FO} it follows that the operators \begin{equation}\label{eq:reg1-10} I-\mu cU_\alpha, I-\mu dU_\beta\in\mathcal{FO}_{\alpha,\beta} \end{equation} are invertible and \begin{equation}\label{eq:reg1-11} (I-\mu cU_\alpha)^{-1}, (I-\mu dU_\beta)^{-1}\in\mathcal{FO}_{\alpha,\beta}. \end{equation} This completes the proof of part (a). (b) From Lemma~\ref{le:alg-A} it follows that \begin{equation}\label{eq:reg1-12} R_y^2 =\Phi^{-1}\operatorname{Co}(r_y^2)\Phi=\Phi^{-1}\operatorname{Op}(\mathfrak{r}_y^2)\Phi, \end{equation} where $\mathfrak{r}_y(t,x)=r_y(x)$ for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$. From Lemma~\ref{le:g-sp-rp} we deduce that $\mathfrak{r}_y^2\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. By Lemma~\ref{le:A-R}(a), \begin{align} (I-\mu cU_\alpha)R_y &=\Phi^{-1}\operatorname{Op}(\mathfrak{a}_{\mu,y}^{c,\alpha})\Phi, \label{eq:reg1-13} \\ (I-\mu dU_\beta)R_y &=\Phi^{-1}\operatorname{Op}(\mathfrak{a}_{\mu,y}^{d,\beta})\Phi, \label{eq:reg1-14} \end{align} where the functions $\mathfrak{a}_{\mu,y}^{c,\alpha}$ and $\mathfrak{a}_{\mu,y}^{d,\beta}$, given by \eqref{eq:reg1-1} and \eqref{eq:reg1-2}, respectively, belong to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. 
Lemma~\ref{le:A-inverse-R}(b) implies that \begin{align} (I-\mu cU_\alpha)^{-1}R_y &=\Phi^{-1}\operatorname{Op}(\mathfrak{c}_{\mu,y}^{c,\alpha})\Phi, \label{eq:reg1-15} \\ (I-\mu dU_\beta)^{-1}R_y &=\Phi^{-1}\operatorname{Op}(\mathfrak{c}_{\mu,y}^{d,\beta})\Phi, \label{eq:reg1-16} \end{align} where the functions $\mathfrak{c}_{\mu,y}^{c,\alpha}$ and $\mathfrak{c}_{\mu,y}^{d,\beta}$, given by \eqref{eq:reg1-3} and \eqref{eq:reg1-4}, respectively, belong to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. In particular, this completes the proof of part (b). (c) From \eqref{eq:reg1-10}--\eqref{eq:reg1-11} and Lemmas~\ref{le:compactness-commutators} and \ref{le:alg-A} it follows that \begin{align} & (I-\mu cU_\alpha)^jT\simeq T(I-\mu cU_\alpha)^j, \label{eq:reg1-17} \\ & (I-\mu dU_\beta)^jT\simeq T(I-\mu dU_\beta)^j \label{eq:reg1-18} \end{align} for every $j\in\{-1,1\}$ and $T\in\{P_y^+,P_y^-,R_y\}$. Applying consecutively relations \eqref{eq:reg1-17}--\eqref{eq:reg1-18} with $T\in\{P_y^+,P_y^-\}$, Lemma~\ref{le:PR-relations}(b), and relations \eqref{eq:reg1-17}--\eqref{eq:reg1-18} with $T=R_y$, we get \begin{align} V_{\mu,y}L_{\mu,y} \simeq & (P_y^+)^2+(I-\mu dU_\beta)(I-\mu cU_\alpha)^{-1}P_y^-P_y^+ \nonumber\\ &+(P_y^-)^2+(I-\mu cU_\alpha)(I-\mu dU_\beta)^{-1}P_y^+P_y^- \nonumber\\ =& \left(P_y^++\frac{R_y^2}{4}\right)-(I-\mu dU_\beta)(I-\mu cU_\alpha)^{-1}\frac{R_y^2}{4} \nonumber\\ &+ \left(P_y^-+\frac{R_y^2}{4}\right)-(I-\mu cU_\alpha)(I-\mu dU_\beta)^{-1}\frac{R_y^2}{4} \nonumber\\ \simeq & I+\frac{R_y^2}{2}-\frac{1}{4}(I-\mu dU_\beta)R_y(I-\mu cU_\alpha)^{-1}R_y \nonumber\\ &\quad\quad\quad\ - \frac{1}{4}(I-\mu cU_\alpha)R_y(I-\mu dU_\beta)^{-1}R_y.
\label{eq:reg1-19} \end{align} Applying equalities \eqref{eq:reg1-12}--\eqref{eq:reg1-16} to \eqref{eq:reg1-19}, we obtain \begin{align} V_{\mu,y}L_{\mu,y} \simeq & I+\frac{1}{2}\Phi^{-1}\operatorname{Op}(\mathfrak{r}_y^2)\Phi-\frac{1}{4}\Phi^{-1}\operatorname{Op}(\mathfrak{a}_{\mu,y}^{d,\beta})\operatorname{Op}(\mathfrak{c}_{\mu,y}^{c,\alpha})\Phi \nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad\ -\frac{1}{4}\Phi^{-1}\operatorname{Op}(\mathfrak{a}_{\mu,y}^{c,\alpha})\operatorname{Op}(\mathfrak{c}_{\mu,y}^{d,\beta})\Phi. \label{eq:reg1-20} \end{align} From Theorem~\ref{th:comp-semi-commutators-PDO} we get \begin{align} \operatorname{Op}(\mathfrak{a}_{\mu,y}^{d,\beta})\operatorname{Op}(\mathfrak{c}_{\mu,y}^{c,\alpha}) &\simeq \operatorname{Op}(\mathfrak{a}_{\mu,y}^{d,\beta}\mathfrak{c}_{\mu,y}^{c,\alpha}), \label{eq:reg1-21} \\ \operatorname{Op}(\mathfrak{a}_{\mu,y}^{c,\alpha})\operatorname{Op}(\mathfrak{c}_{\mu,y}^{d,\beta}) &\simeq \operatorname{Op}(\mathfrak{a}_{\mu,y}^{c,\alpha}\mathfrak{c}_{\mu,y}^{d,\beta}). \label{eq:reg1-22} \end{align} Combining \eqref{eq:reg1-20}--\eqref{eq:reg1-22}, we arrive at \[ V_{\mu,y}L_{\mu,y}\simeq \Phi^{-1}\operatorname{Op}(\mathfrak{h}_{\mu,y})\Phi, \] where the function $\mathfrak{h}_{\mu,y}$, given by \eqref{eq:reg1-9}, belongs to the algebra $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ because the functions \eqref{eq:reg1-1}--\eqref{eq:reg1-4} lie in this algebra in view of part (b). Analogously, it can be shown that \[ L_{\mu,y}V_{\mu,y}\simeq \Phi^{-1}\operatorname{Op}(\mathfrak{h}_{\mu,y})\Phi, \] which completes the proof. \end{proof} \subsection{Fredholmness of the Operator $H_{\mu,2}$} In this subsection we will prove that the operators $H_{\mu,2}$ given by \eqref{eq:reg1-8} are Fredholm for every $\mu\in[0,1]$. To this end, we will use Theorem~\ref{th:Fredholmness-PDO}. First we represent boundary values of $\mathfrak{h}_{\mu,y}$ in a way that is convenient for further analysis.
\begin{lem}\label{le:reg-factorization} Let $\alpha,\beta\in SOS(\mathbb{R}_+)$ and let $c,d\in SO(\mathbb{R}_+)$ be such that $1\gg c$ and $1\gg d$. If $\mathfrak{h}_{\mu,y}$ is given by \eqref{eq:reg1-9} and \eqref{eq:reg1-1}--\eqref{eq:reg1-zeta}, then for every $\mu\in[0,1]$ and $y\in(1,\infty)$, we have \[ \mathfrak{h}_{\mu,y}(\xi,x)=\left\{\begin{array}{lll} v_{\mu,y}(\xi,x)\ell_{\mu,y}(\xi,x), &\mbox{if}& (\xi,x)\in\Delta\times\mathbb{R}, \\ 1, &\mbox{if}& (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}, \end{array}\right. \] where \begin{align} v_{\mu,y}(\xi,x) &:=(1-\mu c(\xi)e^{i\omega(\xi)x})p_y^+(x)+(1-\mu d(\xi)e^{i\eta(\xi)x})p_y^-(x), \label{eq:reg-fact-2} \\ \ell_{\mu,y}(\xi,x) &:=(1-\mu c(\xi)e^{i\omega(\xi)x})^{-1}p_y^+(x)+(1-\mu d(\xi)e^{i\eta(\xi)x})^{-1}p_y^-(x) \label{eq:reg-fact-3} \end{align} for $(\xi,x)\in\Delta\times\mathbb{R}$. \end{lem} \begin{proof} From \eqref{eq:reg1-9}, Lemmas~\ref{le:values-sum-product}, \ref{le:A-R}(b), and \ref{le:A-inverse-R}(c) it follows that \[ \mathfrak{h}_{\mu,y}(\xi,x)=1 \quad\mbox{for}\quad (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\} \] and \begin{align*} \mathfrak{h}_{\mu,y}(\xi,x) =& 1+\frac{1}{4}\big[2(r_y(x))^2 \\ &- (1-\mu d(\xi)e^{i\eta(\xi)x})r_y(x) (1-\mu c(\xi)e^{i\omega(\xi)x})^{-1}r_y(x) \\ &- (1-\mu c(\xi)e^{i\omega(\xi)x})r_y(x) (1-\mu d(\xi)e^{i\eta(\xi)x})^{-1}r_y(x)\big] \end{align*} for $(\xi,x)\in\Delta\times\mathbb{R}$. 
By Lemma~\ref{le:PR-relations}(a), \begin{align*} &\mathfrak{h}_{\mu,y}(\xi,x) = \\ =& \left(p_y^+(x)+\frac{(r_y(x))^2}{4}\right) - (1-\mu d(\xi)e^{i\eta(\xi)x}) (1-\mu c(\xi)e^{i\omega(\xi)x})^{-1}\frac{(r_y(x))^2}{4} \\ &+ \left(p_y^-(x)+\frac{(r_y(x))^2}{4}\right) - (1-\mu c(\xi)e^{i\omega(\xi)x}) (1-\mu d(\xi)e^{i\eta(\xi)x})^{-1}\frac{(r_y(x))^2}{4} \\ =& (p_y^+(x))^2 + (1-\mu d(\xi)e^{i\eta(\xi)x}) (1-\mu c(\xi)e^{i\omega(\xi)x})^{-1}p_y^-(x)p_y^+(x) \\ &+ (p_y^-(x))^2 + (1-\mu c(\xi)e^{i\omega(\xi)x}) (1-\mu d(\xi)e^{i\eta(\xi)x})^{-1}p_y^+(x)p_y^-(x) \\ =& v_{\mu,y}(\xi,x)\ell_{\mu,y}(\xi,x) \end{align*} for $(\xi,x)\in\Delta\times\mathbb{R}$, which completes the proof. \end{proof} We were unable to prove that $\mathfrak{h}_{\mu,y}$ satisfies the hypotheses of Theorem~\ref{th:Fredholmness-PDO} for every $y\in(1,\infty)$ or at least for $y=p$. However, the very special form of the ranges of $v_{\mu,2}$ and $\ell_{\mu,2}$ given by \eqref{eq:reg-fact-2} and \eqref{eq:reg-fact-3}, respectively, allows us to prove that $v_{\mu,2}$ and $\ell_{\mu,2}$ are separated from zero for all $\mu\in[0,1]$, and thus $\mathfrak{h}_{\mu,2}$ satisfies the assumptions of Theorem~\ref{th:Fredholmness-PDO}. \begin{lem}\label{le:Fredholmness-H} Let $\alpha,\beta\in SOS(\mathbb{R}_+)$ and let $c,d\in SO(\mathbb{R}_+)$ be such that $1\gg c$ and $1\gg d$. Then for every $\mu\in[0,1]$ the operator $H_{\mu,2}$ given by \eqref{eq:reg1-8} is Fredholm on $L^p(\mathbb{R}_+)$. 
\end{lem} \begin{proof} By Lemma~\ref{le:reg-factorization}, for the function $\mathfrak{h}_{\mu,2}$ defined by \eqref{eq:reg1-9} and \eqref{eq:reg1-1}--\eqref{eq:reg1-zeta} we have \begin{equation}\label{eq:Fredholmness-H-1} \mathfrak{h}_{\mu,2}(\xi,x)=1\ne 0 \quad\mbox{for}\quad (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\} \end{equation} and \[ \mathfrak{h}_{\mu,2}(\xi,x)=v_{\mu,2}(\xi,x)\ell_{\mu,2}(\xi,x) \quad\mbox{for}\quad (\xi,x)\in\Delta\times\mathbb{R}, \] where $v_{\mu,2}$ and $\ell_{\mu,2}$ are defined by \eqref{eq:reg-fact-2} and \eqref{eq:reg-fact-3}, respectively. From Lemmas~\ref{le:range1} and \ref{le:range2} it follows that for each $\xi\in\Delta$ the ranges of the continuous functions $v_{\mu,2}(\xi,\cdot)$ and $\ell_{\mu,2}(\xi,\cdot)$ defined on $\mathbb{R}$ lie in the half-plane \[ \mathcal{H}^{\mu,\xi}:=\left\{ z\in\mathbb{C}:\operatorname{Re}z> 1-\mu\max(|c(\xi)|,|d(\xi)|) \right\}. \] From Lemma~\ref{le:SO-fundamental-property} we get \begin{align*} C(\Delta)&:=\sup_{\xi\in\Delta}|c(\xi)|=\max_{s\in\{0,\infty\}}\left(\limsup_{t\to s}|c(t)|\right), \\ D(\Delta)&:=\sup_{\xi\in\Delta}|d(\xi)|=\max_{s\in\{0,\infty\}}\left(\limsup_{t\to s}|d(t)|\right). \end{align*} Since $1\gg c$ and $1\gg d$, we see that $C(\Delta)<1$ and $D(\Delta)<1$. Therefore, for every $\xi\in\Delta$ and $\mu\in[0,1]$, the half-plane $\mathcal{H}^{\mu,\xi}$ is contained in the half-plane \[ \left\{z\in\mathbb{C}:\operatorname{Re}z> 1-\max(C(\Delta),D(\Delta)) \right\}, \] which does not contain the origin. Thus \begin{equation}\label{eq:Fredholmness-H-2} \mathfrak{h}_{\mu,2}(\xi,x)=v_{\mu,2}(\xi,x)\ell_{\mu,2}(\xi,x)\ne 0 \quad\mbox{for all}\quad (\xi,x)\in\Delta\times\mathbb{R}. \end{equation} From \eqref{eq:Fredholmness-H-1}--\eqref{eq:Fredholmness-H-2} and Theorem~\ref{th:Fredholmness-PDO} we obtain that the operator $H_{\mu,2}$ is Fredholm on $L^p(\mathbb{R}_+)$.
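The geometric fact exploited here, namely that for $y=2$ both $v_{\mu,2}(\xi,\cdot)$ and $\ell_{\mu,2}(\xi,\cdot)$ take values in a half-plane separated from the origin, can be illustrated numerically. The sketch below assumes the concrete choice $p_2^{\pm}(x)=(1\pm\tanh(\pi x))/2$, so that $p_2^{\pm}$ are real, positive and sum to $1$; this normalization is an assumption of the sketch, not taken from the text above.

```python
import numpy as np

def p2(x):
    # assumed y = 2 kernel: p_2^+(x) = (1 + tanh(pi x))/2 and p_2^-(x) = p2(-x);
    # real, in (0, 1), with p2(x) + p2(-x) = 1
    return 0.5 * (1.0 + np.tanh(np.pi * x))

rng = np.random.default_rng(1)
worst_margin = np.inf
for _ in range(2000):
    mu = rng.uniform(0.0, 1.0)
    c = rng.uniform(0.0, 0.99) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    d = rng.uniform(0.0, 0.99) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    x = rng.uniform(-5.0, 5.0)
    om, et = rng.uniform(-3.0, 3.0), rng.uniform(-3.0, 3.0)  # omega(xi), eta(xi)
    pp, pm = p2(x), p2(-x)
    fa = 1 - mu * c * np.exp(1j * om * x)
    fb = 1 - mu * d * np.exp(1j * et * x)
    v = fa * pp + fb * pm        # v_{mu,2}(xi, x): convex combination of fa, fb
    ell = pp / fa + pm / fb      # ell_{mu,2}(xi, x): convex combination of 1/fa, 1/fb
    bound = 1 - mu * max(abs(c), abs(d))
    worst_margin = min(worst_margin, v.real - bound, ell.real - bound)
assert worst_margin > -1e-12   # both ranges stay in Re z > 1 - mu*max(|c|,|d|)
```

For $v$ the bound is immediate from $\operatorname{Re}(1-\mu c e^{i\theta})\ge 1-\mu|c|$; for $\ell$ it follows because inversion maps the disk $|z-1|\le\rho<1$ into a disk with $\operatorname{Re}\ge 1/(1+\rho)\ge 1-\rho$.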
\end{proof} \subsection{Proof of the Main Result} For $\mu\in[0,1]$, consider the operators $V_{\mu,2}$ and $L_{\mu,2}$ defined by \eqref{eq:reg1-5} and \eqref{eq:reg1-6}, respectively. It is obvious that $V_{0,2}=P_2^++P_2^-=I$ and $V_{1,2}=V$. By Lemma~\ref{le:reg1}(c), \begin{equation}\label{eq:proof-main-1} V_{\mu,2}L_{\mu,2}\simeq L_{\mu,2}V_{\mu,2}\simeq H_{\mu,2}, \quad\mu\in[0,1], \end{equation} where the operator $H_{\mu,2}$ given by \eqref{eq:reg1-8} is Fredholm in view of Lemma~\ref{le:Fredholmness-H}. Let $H_{\mu,2}^{(-1)}$ be a regularizer for $H_{\mu,2}$. From \eqref{eq:proof-main-1} it follows that \begin{equation}\label{eq:proof-main-2} V_{\mu,2}(L_{\mu,2}H_{\mu,2}^{(-1)})\simeq I, \quad (H_{\mu,2}^{(-1)}L_{\mu,2})V_{\mu,2}\simeq I, \quad \mu\in[0,1]. \end{equation} Thus, $L_{\mu,2}H_{\mu,2}^{(-1)}$ is a right regularizer for $V_{\mu,2}$ and $H_{\mu,2}^{(-1)}L_{\mu,2}$ is a left regularizer for $V_{\mu,2}$. Therefore, $V_{\mu,2}$ is Fredholm for every $\mu\in[0,1]$. It is obvious that the operator-valued function $\mu\mapsto V_{\mu,2}\in\mathcal{B}(L^p(\mathbb{R}_+))$ is continuous on $[0,1]$. Hence the operators $V_{\mu,2}$ belong to the same connected component of the set of all Fredholm operators and therefore all have the same index (see, e.g., \cite[Section~4.10]{GK92}). Since $V_{0,2}=I$, we conclude that \[ \operatorname{Ind} V=\operatorname{Ind} V_{1,2}=\operatorname{Ind} V_{0,2}=\operatorname{Ind} I=0, \] which completes the proof of Theorem~\ref{th:main}. \qed \section{Regularization of the Operator $W$} \label{sec:Regularization} \subsection{Regularizers of the Operator $W$} As a by-product of the proof given in Section~\ref{sec:proof}, we can describe all regularizers of a slightly more general operator $W$. \begin{thm}\label{th:regularization-W} Let $1<p<\infty$, $\varepsilon_1,\varepsilon_2\in\{-1,1\}$, and $\alpha,\beta\in SOS(\mathbb{R}_+)$. Suppose $c,d\in SO(\mathbb{R}_+)$ are such that $1\gg c$ and $1\gg d$.
Then the operator $W$ given by \[ W:=(I-cU_\alpha^{\varepsilon_1})P_2^++(I-dU_\beta^{\varepsilon_2})P_2^- \] is Fredholm on the space $L^p(\mathbb{R}_+)$ and $\operatorname{Ind} W=0$. Moreover, each regularizer $W^{(-1)}$ of the operator $W$ is of the form \begin{equation}\label{eq:regularization-W-1} W^{(-1)}=[\Phi^{-1}\operatorname{Op}(\mathfrak{f})\Phi]\cdot [(I- cU_\alpha^{\varepsilon_1})^{-1}P_2^++(I-dU_\beta^{\varepsilon_2})^{-1}P_2^-]+K, \end{equation} where $K\in\mathcal{K}(L^p(\mathbb{R}_+))$ and $\mathfrak{f}\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ is such that \begin{equation}\label{eq:regularization-W-2} \mathfrak{f}(\xi,x)=\left\{\begin{array}{lll} \displaystyle\frac{1}{w(\xi,x)\ell(\xi,x)}, &\mbox{if}& (\xi,x)\in\Delta\times\mathbb{R}, \\[3mm] 1, &\mbox{if}& (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}, \end{array}\right. \end{equation} where \begin{align} w(\xi,x) &:=(1-c(\xi)e^{i\varepsilon_1\omega(\xi)x})p_2^+(x)+(1-d(\xi)e^{i\varepsilon_2\eta(\xi)x})p_2^-(x)\ne 0, \label{eq:regularization-W-3} \\ \ell(\xi,x) &:=\frac{p_2^+(x)}{1-c(\xi)e^{i\varepsilon_1\omega(\xi)x}}+\frac{p_2^-(x)}{1-d(\xi)e^{i\varepsilon_2\eta(\xi)x}}\ne 0 \label{eq:regularization-W-4} \end{align} for $(\xi,x)\in\Delta\times\mathbb{R}$ with $\omega(t):=\log[\alpha(t)/t]$ and $\eta(t):=\log[\beta(t)/t]$ for $t\in\mathbb{R}_+$. \end{thm} \begin{proof} Since $\alpha,\beta\in SOS(\mathbb{R}_+)$, from Lemma~\ref{le:iterations} it follows that $\alpha_{-1}$ and $\beta_{-1}$ also belong to $SOS(\mathbb{R}_+)$. Taking into account that $U_\alpha^{\varepsilon_1}=U_{\alpha_{\varepsilon_1}}$ and $U_\beta^{\varepsilon_2}=U_{\beta_{\varepsilon_2}}$, from Theorem~\ref{th:main} we deduce that the operator $W$ is Fredholm and $\operatorname{Ind} W=0$. 
Further, from \eqref{eq:proof-main-2} and Lemma~\ref{le:Fredholmness-H} it follows that each regularizer $W^{(-1)}$ of $W$ is of the form \begin{equation}\label{eq:regularization-W-5} W^{(-1)}=H^{(-1)}L+K_1, \end{equation} where $K_1\in\mathcal{K}(L^p(\mathbb{R}_+))$, \begin{equation}\label{eq:regularization-W-6} L:=(I- cU_\alpha^{\varepsilon_1})^{-1}P_2^++(I-dU_\beta^{\varepsilon_2})^{-1}P_2^-, \end{equation} and $H^{(-1)}$ is a regularizer of the Fredholm operator $H$ given by \[ H:=\Phi^{-1}\operatorname{Op}(\mathfrak{h})\Phi, \] where $\mathfrak{h}\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ is given for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$ by \[ \mathfrak{h}(t,x):=1+ \frac{1}{4}\big[ 2(r_2(x))^2 - \mathfrak{a}_{1,2}^{d,\beta_{\varepsilon_2}}(t,x)\mathfrak{c}_{1,2}^{c,\alpha_{\varepsilon_1}}(t,x) - \mathfrak{a}_{1,2}^{c,\alpha_{\varepsilon_1}}(t,x)\mathfrak{c}_{1,2}^{d,\beta_{\varepsilon_2}}(t,x) \big], \] and the functions $\mathfrak{a}_{1,2}^{c,\alpha_{\varepsilon_1}}$, $\mathfrak{c}_{1,2}^{c,\alpha_{\varepsilon_1}}$ and $\mathfrak{a}_{1,2}^{d,\beta_{\varepsilon_2}}$, $\mathfrak{c}_{1,2}^{d,\beta_{\varepsilon_2}}$ are given by \eqref{eq:reg1-1}--\eqref{eq:reg1-2} and \eqref{eq:reg1-3}--\eqref{eq:reg1-4} with $\alpha$ and $\beta$ replaced by $\alpha_{\varepsilon_1}$ and $\beta_{\varepsilon_2}$, respectively. Taking into account Lemma~\ref{le:inverse-shift-fibers}, by analogy with Lemma~\ref{le:reg-factorization} we get \begin{equation}\label{eq:regularization-W-7} \mathfrak{h}(\xi,x)=\left\{\begin{array}{lll} w(\xi,x)\ell(\xi,x), &\mbox{if}& (\xi,x)\in\Delta\times\mathbb{R}, \\ 1, &\mbox{if}& (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}. \end{array}\right. 
\end{equation} By Theorem~\ref{th:Fredholmness-PDO}(b), each regularizer $H^{(-1)}$ of the Fredholm operator $H$ is of the form \begin{equation}\label{eq:regularization-W-8} H^{(-1)}=\Phi^{-1}\operatorname{Op}(\mathfrak{f})\Phi+K_2, \end{equation} where $K_2\in\mathcal{K}(L^p(\mathbb{R}_+))$ and $\mathfrak{f}\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ is such that \begin{equation}\label{eq:regularization-W-9} \begin{array}{lll} \mathfrak{f}(t,\pm\infty)=1/\mathfrak{h}(t,\pm\infty) &\mbox{for all}& t\in\mathbb{R}_+, \\[3mm] \mathfrak{f}(\xi,x)=1/\mathfrak{h}(\xi,x) &\mbox{for all}& (\xi,x)\in\Delta\times\overline{\mathbb{R}}. \end{array} \end{equation} From \eqref{eq:regularization-W-5}--\eqref{eq:regularization-W-6} and \eqref{eq:regularization-W-8} we get \eqref{eq:regularization-W-1}. Combining \eqref{eq:regularization-W-7} and \eqref{eq:regularization-W-9}, we arrive at \eqref{eq:regularization-W-2}. \end{proof} \subsection{One Useful Consequence of Regularization of $W$} \begin{thm}\label{th:for-index} Under the assumptions of Theorem~{\rm\ref{th:regularization-W}}, for every $y\in(1,\infty)$ there exists a function $\mathfrak{g}_y\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ such that \begin{equation}\label{eq:for-index-1} (\Phi^{-1}\operatorname{Op}(\mathfrak{g}_y)\Phi)W\simeq R_y \end{equation} and \[ \mathfrak{g}_y(\xi,x)=\left\{\begin{array}{lll} \displaystyle\frac{r_y(x)}{w(\xi,x)}, &\mbox{if}& (\xi,x)\in\Delta\times\mathbb{R}, \\[3mm] 0, &\mbox{if}& (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}, \end{array}\right. \] where the function $w$ is defined for $(\xi,x)\in\Delta\times\mathbb{R}$ by \eqref{eq:regularization-W-3}. 
\end{thm} \begin{proof} From Theorem~\ref{th:regularization-W} it follows that \begin{equation}\label{eq:for-index-2} (\Phi^{-1}\operatorname{Op}(\mathfrak{f})\Phi)LWR_y\simeq R_y, \end{equation} where $L$ is given by \eqref{eq:regularization-W-6} and $\mathfrak{f}\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$ satisfies \eqref{eq:regularization-W-2}. From Lemmas~\ref{le:compactness-commutators} and~\ref{le:alg-A} we get \begin{equation}\label{eq:for-index-3} WR_y\simeq R_y W. \end{equation} Lemmas~\ref{le:alg-A} and~\ref{le:A-inverse-R}(a)--(b) imply that \begin{align} LR_y &=(I-cU_\alpha^{\varepsilon_1})^{-1}R_yP_2^++(I-dU_\beta^{\varepsilon_2})^{-1}R_yP_2^- \nonumber\\ &= \Phi^{-1}\operatorname{Op}(\mathfrak{c}_{1,y}^{c,\alpha_{\varepsilon_1}})\operatorname{Co}(p_2^+)\Phi + \Phi^{-1}\operatorname{Op}(\mathfrak{c}_{1,y}^{d,\beta_{\varepsilon_2}})\operatorname{Co}(p_2^-)\Phi, \label{eq:for-index-4} \end{align} where the functions $\mathfrak{c}_{1,y}^{c,\alpha_{\varepsilon_1}}$ and $\mathfrak{c}_{1,y}^{d,\beta_{\varepsilon_2}}$, given by \eqref{eq:reg1-3} and \eqref{eq:reg1-4} with $\alpha$ and $\beta$ replaced by $\alpha_{\varepsilon_1}$ and $\beta_{\varepsilon_2}$, respectively, belong to $\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. From \eqref{eq:for-index-4} and Lemmas~\ref{le:g-sp-rp} and \ref{le:PDO-3-operators} we obtain \begin{equation}\label{eq:for-index-5} LR_y=\Phi^{-1}\operatorname{Op}(\mathfrak{c}_{1,y}^{c,\alpha_{\varepsilon_1}}p_2^++\mathfrak{c}_{1,y}^{d,\beta_{\varepsilon_2}}p_2^-)\Phi, \end{equation} where $\mathfrak{c}_{1,y}^{c,\alpha_{\varepsilon_1}}p_2^++\mathfrak{c}_{1,y}^{d,\beta_{\varepsilon_2}}p_2^-\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R}))$. 
From \eqref{eq:for-index-2}--\eqref{eq:for-index-3}, \eqref{eq:for-index-5}, and Theorem~\ref{th:comp-semi-commutators-PDO} we get \eqref{eq:for-index-1} with \begin{equation}\label{eq:for-index-6} \mathfrak{g}_y:=\mathfrak{f}\,(\mathfrak{c}_{1,y}^{c,\alpha_{\varepsilon_1}}p_2^++\mathfrak{c}_{1,y}^{d,\beta_{\varepsilon_2}}p_2^-)\in\widetilde{\mathcal{E}}(\mathbb{R}_+,V(\mathbb{R})). \end{equation} Obviously, \begin{equation}\label{eq:for-index-7} p_2^\pm(\pm\infty)=1, \quad p_2^\pm(\mp\infty)=0. \end{equation} By Lemmas~\ref{le:inverse-shift-fibers} and \ref{le:A-inverse-R}(c), \begin{align} \mathfrak{c}_{1,y}^{c,\alpha_{\varepsilon_1}}(\xi,x) &= \left\{\begin{array}{ll} \displaystyle\frac{r_y(x)}{1-c(\xi)e^{i\varepsilon_1\omega(\xi)x}}, &\mbox{if } (\xi,x)\in\Delta\times\mathbb{R}, \\[3mm] 0, &\mbox{if } (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}, \end{array}\right. \label{eq:for-index-8} \\ \mathfrak{c}_{1,y}^{d,\beta_{\varepsilon_2}}(\xi,x) &= \left\{\begin{array}{ll} \displaystyle\frac{r_y(x)}{1-d(\xi)e^{i\varepsilon_2\eta(\xi)x}}, &\mbox{if } (\xi,x)\in\Delta\times\mathbb{R}, \\[3mm] 0, &\mbox{if } (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\}. \end{array}\right. \label{eq:for-index-9} \end{align} From \eqref{eq:for-index-6}--\eqref{eq:for-index-9}, \eqref{eq:regularization-W-2}--\eqref{eq:regularization-W-4}, and Lemma~\ref{le:values-sum-product} we get \[ \mathfrak{g}_y(\xi,x)=0 \quad\mbox{for}\quad (\xi,x)\in(\mathbb{R}_+\cup\Delta)\times\{\pm\infty\} \] and \begin{align*} \mathfrak{g}_y(\xi,x) &= \mathfrak{f}(\xi,x)\left( \frac{r_y(x)p_2^+(x)}{1-c(\xi)e^{i\varepsilon_1\omega(\xi)x}} + \frac{r_y(x)p_2^-(x)}{1-d(\xi)e^{i\varepsilon_2\eta(\xi)x}} \right) \\ &= \frac{\ell(\xi,x)r_y(x)}{w(\xi,x)\ell(\xi,x)} = \frac{r_y(x)}{w(\xi,x)} \end{align*} for $(\xi,x)\in\Delta\times\mathbb{R}$. \end{proof} Relation \eqref{eq:for-index-1} will play an important role in the proof of an index formula for the operator $N$ in \cite{KKL15-progress}.
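The final simplification is again elementary: since the sum in parentheses equals $r_y(x)\ell(\xi,x)$ and $\mathfrak{f}=1/(w\ell)$ on $\Delta\times\mathbb{R}$, the product collapses to $r_y/w$. A numerical sanity check (a sketch; random complex numbers replace the actual symbol values, and the guards mirror the non-vanishing of $w$ and $\ell$ in Theorem~\ref{th:regularization-W}):

```python
import numpy as np

def gy_gap(p_plus, a, b, r):
    """|f * (r p^+ / a + r p^- / b) - r / w| with w = a p^+ + b p^-,
    ell = p^+ / a + p^- / b and f = 1 / (w * ell), as in the last display."""
    p_minus = 1 - p_plus
    w = a * p_plus + b * p_minus
    ell = p_plus / a + p_minus / b
    f = 1.0 / (w * ell)
    return abs(f * (r * p_plus / a + r * p_minus / b) - r / w)

rng = np.random.default_rng(2)
worst = 0.0
for _ in range(1000):
    p_plus = complex(rng.normal(), rng.normal())
    a = 1 + 0.5 * complex(rng.normal(), rng.normal())
    b = 1 + 0.5 * complex(rng.normal(), rng.normal())
    r = complex(rng.normal(), rng.normal())
    p_minus = 1 - p_plus
    w = a * p_plus + b * p_minus
    ell = p_plus / a + p_minus / b
    if min(abs(a), abs(b), abs(w), abs(ell)) < 0.1:
        continue  # stay away from the excluded zeros of the denominators
    worst = max(worst, gy_gap(p_plus, a, b, r))
assert worst < 1e-9
```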
\section{Introduction} At sufficiently high temperatures strongly interacting matter undergoes a transition to a new state, often called quark-gluon plasma, that is characterized by chiral symmetry restoration and color screening (see e.g. Ref. \cite{Petreczky:2012rq} for a current review). Experimentally such a state of matter can be studied in relativistic heavy ion collisions. There has been considerable interest in the properties and the fate of heavy quarkonium states at finite temperature since the famous conjecture by Matsui and Satz \cite{Matsui:1986dk}. It has been argued that color screening in the medium will lead to quarkonium dissociation above deconfinement, which in turn can signal quark-gluon plasma formation in heavy ion collisions. The basic assumption behind the conjecture by Matsui and Satz was that medium effects can be understood in terms of a temperature-dependent heavy-quark potential. Color screening makes the potential exponentially suppressed at distances larger than the Debye radius, and therefore it cannot bind the heavy quark and anti-quark once the temperature is sufficiently high. Based on this idea, potential models with different temperature-dependent potentials have been used over the last two decades to study quarkonium properties at finite temperature (see e.g. Refs. \cite{Mocsy:2008eg,Bazavov:2009us} for recent reviews). It is not clear, however, if and to what extent medium effects on quarkonium binding can be encoded in a temperature-dependent potential. The effective field theory approach, namely the so-called thermal pNRQCD, can provide an answer to this question \cite{Brambilla:2008cx}. Thermal pNRQCD, which will be discussed in the next section, is based on weak-coupling techniques.
To understand the non-perturbative aspects of color screening, as well as to test the reliability of the weak-coupling approach, lattice calculations of the correlation functions of static quarks are needed. The correlation functions of static quarks that propagate around the periodic time direction of extent $1/T$ are related to the free energy of a static quark anti-quark pair. We will see that pNRQCD is a useful tool in understanding the temperature dependence of the static correlators. We also consider Wilson loops evaluated at time extent $\tau<1/T$. They are naturally related to the static energy at non-zero temperature. In principle, it is possible to study the problem of quarkonium dissolution without any use of potential models. In-medium properties of different quarkonium states and/or their dissolution are encoded in spectral functions. Spectral functions are related to Euclidean meson correlation functions, which can be calculated on the lattice. Reconstruction of the spectral functions from the lattice meson correlators turns out to be very difficult, and the corresponding results remain inconclusive. We will discuss the calculation of the spectral functions using potential models in the light of lattice calculations of Wilson loops. \section{pNRQCD at finite temperature} \label{sec_pnrqcd} There are different scales in the heavy quark bound state problem related to the heavy quark mass $m$, the inverse size $\sim m v \sim 1/r$ and the binding energy $\sim m v^2 \sim \alpha_s/r$. Here $v$ is the typical heavy quark velocity in the bound state and is considered to be a small parameter. Therefore it is possible to derive a sequence of effective field theories using this separation of scales (see Refs. \cite{Brambilla:2004jw,Brambilla:2010cs} for recent reviews).
Integrating out modes at the highest energy scale $\sim m$ leads to an effective field theory called non-relativistic QCD or NRQCD, where the pair creation of heavy quarks is suppressed by powers of the inverse mass and the heavy quarks are described by non-relativistic Pauli spinors \cite{Caswell:1985ui}. At the next step, when the large scale related to the inverse size is integrated out, the potential NRQCD or pNRQCD appears. In this effective theory the dynamical fields include the singlet $\rm S(r,R)$ and octet $\rm O(r,R)$ fields, corresponding to the heavy quark anti-quark pair in singlet and octet states respectively, as well as light quarks and gluon fields at the lowest scale $\sim mv^2$. The Lagrangian of this effective field theory has the form \begin{eqnarray} {\cal L} = && - \frac{1}{4} F^a_{\mu \nu} F^{a\,\mu \nu} + \sum_{i=1}^{n_f}\bar{q}_i\,iD\!\!\!\!/\,q_i + \int d^3r \; {\rm Tr} \, \Biggl\{ {\rm S}^\dagger \left[ i\partial_0 + \frac{\nabla_r^2}{m}-V_s(r) \right] {\rm S}\nonumber\\ && + {\rm O}^\dagger \left[ iD_0 + \frac{\nabla_r^2}{m}- V_o(r) \right] {\rm O} \Biggr\} + V_A\, {\rm Tr} \left\{ {\rm O}^\dagger {\vec r} \cdot g{\vec E} \,{\rm S} + {\rm S}^\dagger {\vec r} \cdot g{\vec E} \,{\rm O} \right\} \nonumber\\ && + \frac{V_B}{2} {\rm Tr} \left\{ {\rm O}^\dagger {\vec r} \cdot g{\vec E} \, {\rm O} + {\rm O}^\dagger {\rm O} {\vec r} \cdot g{\vec E} \right\} + \dots\;. \label{pNRQCD} \end{eqnarray} Here the dots correspond to terms which are higher order in the multipole expansion \cite{Brambilla:2004jw}. The relative distance $r$ between the heavy quark and anti-quark plays the role of a label; the light quark and gluon fields depend only on the center-of-mass coordinate $R$. The singlet $V_s(r)$ and octet $V_o(r)$ heavy quark potentials appear as matching coefficients in the Lagrangian of the effective field theory and therefore can be rigorously defined in QCD at any order of the perturbative expansion.
At leading order \be V_s(r)=-\frac{4}{3} \frac{\alpha_s}{r},~V_o(r)=\frac{1}{6}\frac{\alpha_s}{r} \ee and $V_A=V_B=1$. One can generalize this approach to finite temperature. However, the presence of additional scales makes the analysis more complicated \cite{Brambilla:2008cx}. The effective Lagrangian will have the same form as above, but the matching coefficients may be temperature-dependent. In the weak coupling regime there are three different thermal scales: $T$, $g T$ and $g^2 T$. The calculations of the matching coefficients depend on the relation of these thermal scales to the heavy quark bound-state scales \cite{Brambilla:2008cx}. To simplify the analysis the static approximation has been used, in which case the scale $m v$ is replaced by the inverse distance $1/r$ between the static quark and anti-quark. The binding energy in the static limit becomes $V_o-V_s \simeq N \alpha_s/(2 r)$. When the binding energy is larger than the temperature, the derivation of pNRQCD proceeds in the same way as at zero temperature and there are no medium modifications of the heavy quark potential \cite{Brambilla:2008cx}. But bound state properties will be affected by the medium through interactions with ultra-soft gluons; in particular, the binding energy will be reduced and a finite thermal width will appear due to medium induced singlet-octet transitions arising from the dipole interactions in the pNRQCD Lagrangian \cite{Brambilla:2008cx} (cf. Eq. (\ref{pNRQCD})). When the binding energy is smaller than one of the thermal scales, the singlet and octet potentials become temperature-dependent and acquire an imaginary part \cite{Brambilla:2008cx}. The imaginary part of the potential arises because of the singlet-octet transitions induced by the dipole vertex, as well as due to the Landau damping in the plasma, i.e. scattering of the gluons with space-like momentum off the thermal excitations in the plasma.
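Two quantitative statements made above can be checked directly: the leading order color factors give $V_o-V_s=\left(\tfrac{1}{6}+\tfrac{4}{3}\right)\alpha_s/r=N\alpha_s/(2r)$ for $N=3$, and the Landau-damping part of the imaginary part of the screened singlet potential, Eq.~(\ref{Vs}) below, is controlled by $\phi(z)=\frac{2}{z}\int_0^\infty dx\,\frac{\sin(zx)}{(x^2+1)^2}$ with $z=m_D r$, which tends to $1$ as $z\to 0$ (the imaginary part vanishes at short distances) and to $0$ as $z\to\infty$ (the imaginary part saturates at $-\tfrac{4}{3}\alpha_s T$). A small numerical sketch:

```python
import numpy as np
from scipy.integrate import quad

# LO Casimir bookkeeping: V_o - V_s = (1/6 + 4/3) alpha_s/r = N alpha_s/(2r), N = 3
assert abs((1.0 / 6.0 + 4.0 / 3.0) - 3.0 / 2.0) < 1e-15

def phi(z):
    # (2/z) * int_0^inf sin(z x) / (x^2 + 1)^2 dx, with z = m_D * r
    val, _ = quad(lambda x: 1.0 / (x * x + 1.0) ** 2, 0.0, np.inf,
                  weight='sin', wvar=z)
    return 2.0 * val / z

assert abs(phi(1e-3) - 1.0) < 1e-2      # Im V_s vanishes for m_D r -> 0
assert phi(50.0) < 1e-2                 # Im V_s -> -(4/3) alpha_s T for m_D r -> infinity
assert phi(0.1) > phi(1.0) > phi(10.0)  # monotone interpolation between the limits
```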
In general, the thermal corrections to the potential go like $(r T)^n$ and $(m_D r)^n$ \cite{Brambilla:2008cx}, where $m_D$ denotes the Debye mass. Only for distances $r>1/m_D$ is there exponential screening. In this region the singlet potential has the simple form \begin{equation} V_s(r)= -\frac{4}{3} \,\frac{\als}{r}\,e^{-m_Dr} + i \frac{4}{3}\,\als\, T\,\frac{2}{rm_D}\int_0^\infty dx \,\frac{\sin(m_Dr\,x)}{(x^2+1)^2}-\frac{4}{3}\, \als \left( m_D + i T \right). \label{Vs} \end{equation} The real part of the singlet potential coincides with the leading order result for the so-called singlet free energy \cite{Petreczky:2005bd}. The imaginary part of the singlet potential in this limit was first calculated in \cite{Laine:2006ns}. For small distances the imaginary part vanishes, while at large distances it is twice the damping rate of a heavy quark \cite{Pisarski:1993rf}. This fact was first noted in Ref. \cite{Beraudo:2007ky} for thermal QED. The effective field theory at finite temperature has been derived in the weak-coupling regime assuming the separation of the different thermal scales as well as of $\Lambda_{QCD}$. In practice, the separation of these scales is not evident, and one needs lattice techniques to test the approach. Lattice QCD is formulated in Euclidean time. Therefore the next section will be dedicated to the study of static quarks at finite temperature in the Euclidean time formalism. \section{Static meson correlators in Euclidean time formalism} \label{sec_meson_cor} Consider static (infinitely heavy) quarks. The position of a heavy quark anti-quark pair is fixed in space and propagation happens only along the time direction. With respect to color, a static quark anti-quark ($Q\bar Q$) pair can be in a singlet or in an octet state.
Therefore we can define the following $Q \bar Q$ (meson) operators \begin{eqnarray} \label{Jmesstat} J(\vec{x},\vec{y};\tau)&=& \bar\psi(\vec{x},\tau)U(\vec{x},\vec{y};\tau)\psi(\vec{y},\tau),\\ \label{Jamesstat} J^a(\vec{x},\vec{y};\tau)&=& \bar\psi(\vec{x},\tau)U(\vec{x},\vec{x}_0;\tau) T^a U(\vec{x}_0,\vec{y};\tau)\psi(\vec{y},\tau), \end{eqnarray} for the singlet and octet states, respectively. Here $U(\vec{x},\vec{y};\tau)$ are the spatial gauge transporters connecting $\vec{x}$ and $\vec{y}$, $\vec{x}_0$ is the coordinate of the center of mass of the meson, and $T^a$ are the $SU(3)$ group generators. We consider the correlation functions of the singlet and octet meson operators at maximal Euclidean time $\tau=1/T$: \begin{eqnarray} & G_1(r,T,\tau=1/T)=\langle J(x,y,\tau=1/T) J^{\dagger}(x,y;0) \rangle,\\ & G_8(r,T,\tau=1/T)=\frac{1}{8} \langle J^a(x,y,\tau=1/T) J^{a \dagger}(x,y;0) \rangle. \end{eqnarray} Integrating out the static quark fields $\psi$ and replacing the quark propagators by temporal Wilson lines $L(\vec{x})=\prod_{\tau=0}^{N_\tau-1} U_0(\vec{x},\tau)$, with $U_0(\vec{x},t)$ being the temporal links, we get the following expressions for the above correlators: \begin{eqnarray} \displaystyle G_1(r,T) &=&\frac{1}{3} \langle {\rm Tr}\left[ L^{\dagger}(\vec{x}) U(\vec{x},\vec{y};0) L(\vec{y}) U^{\dagger}(\vec{x},\vec{y},1/T)\right]\rangle, \label{defg1}\\ \displaystyle G_8(r,T)&=&\frac{1}{8} \langle {\rm Tr} L^{\dagger}(x) {\rm Tr} L(y) \rangle-\frac{1}{24} \langle {\rm Tr}\left[ L^{\dagger}(x) U(x,y;0) L(y) U^{\dagger}(x,y,1/T)\right] \rangle, \label{defg3}\\ &&r=|\vec{x}-\vec{y}|. \nonumber \end{eqnarray} The correlators depend on the choice of the spatial transporters $U(\vec{x},\vec{y};\tau)$. Typically, a straight line connecting the points $\vec{x}$ and $\vec{y}$ is used as the path in the gauge transporters, i.e. one deals with time-like rectangular cyclic Wilson loops.
In the special gauge where $U(\vec{x},\vec{y};\tau)=1$, the above correlators give the standard definition of the singlet and octet free energies of a static $Q\bar Q$ pair \begin{eqnarray} \displaystyle \exp(-F_1(r,T)/T)&=& \frac{1}{3} \langle {\rm Tr}[L^{\dagger}(x) L(y)]\rangle,\label{F1def}\\ \displaystyle \exp(-F_8(r,T)/T)&=& \frac{1}{8} \langle {\rm Tr} L^{\dagger}(x) {\rm Tr} L(y) \rangle -\frac{1}{24} \langle {\rm Tr} \left[L^{\dagger}(x) L(y)\right]\rangle.\label{Fadef} \end{eqnarray} One can also fix the Coulomb gauge, define the interpolating meson operators without the spatial transporters, and use the above expressions to define the singlet and octet correlators. \begin{figure} \includegraphics[width=7cm]{F1_phys.eps} \includegraphics[width=8cm]{S1_high.eps} \vspace*{-0.3cm} \caption{The singlet free energy (left) and the screening function (right) as a function of the distance $r$ at different temperatures calculated with the HISQ action.} \label{fig:f1} \vspace*{-0.3cm} \end{figure} The correlator $G(r,T)=\frac{1}{9} \langle {\rm Tr} L^{\dagger}(x) {\rm Tr} L(y) \rangle$ gives the free energy $F(r,T)=-T \ln G(r,T)$ of a static quark anti-quark pair separated by the distance $r$ \cite{McLerran:1981pb}. It can be expressed in terms of the energy levels $E_n(r)$ of a static quark anti-quark pair at $T=0$ \cite{Jahn:2004qr} \be G(r,T)=\sum_{n=1}^{\infty} e^{-E_n(r)/T}. \label{g} \ee It is tempting to rewrite Eq. (\ref{defg3}) or Eq. (\ref{Fadef}) as \be \exp(-F(r,T)/T)=\frac{1}{9} \exp(-F_1(r,T)/T)+\frac{8}{9} \exp(-F_8(r,T)/T) \label{decomp} \ee and interpret this expression as the decomposition of the free energy of a static $Q\bar Q$ pair into singlet and octet contributions \cite{McLerran:1981pb,Gross:1980br,Nadkarni:1986cz,Nadkarni:1986as}. This decomposition is intuitively very appealing and should be valid in perturbation theory. However, it is problematic because $G_1(r,T)$ is path- or gauge-dependent.
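Note that the algebraic content of Eq.~(\ref{decomp}) is exact: with the definitions (\ref{F1def})--(\ref{Fadef}), the combination $\tfrac19\cdot\tfrac13{\rm Tr}[L^{\dagger}L]+\tfrac89\left(\tfrac18{\rm Tr}L^{\dagger}\,{\rm Tr}L-\tfrac{1}{24}{\rm Tr}[L^{\dagger}L]\right)=\tfrac19{\rm Tr}L^{\dagger}\,{\rm Tr}L$ holds configuration by configuration, so the difficulty lies entirely in the path or gauge dependence of $G_1$. The toy script below builds random SU(3) ``Wilson lines'' (a sketch, not a lattice simulation) and checks the trace identity on a single configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_su3():
    """Haar-like random SU(3) matrix: QR of a complex Gaussian matrix, with the
    phases fixed and the determinant divided out (a toy stand-in for a link)."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

def wilson_line(n_tau):
    """L(x) = prod_tau U_0(x, tau): ordered product of temporal links."""
    L = np.eye(3, dtype=complex)
    for _ in range(n_tau):
        L = L @ random_su3()
    return L

Lx, Ly = wilson_line(8), wilson_line(8)
t = np.trace(Lx.conj().T @ Ly)
TT = np.trace(Lx.conj().T) * np.trace(Ly)
singlet = t / 3.0                 # the quantity averaged in Eq. (F1def)
octet = TT / 8.0 - t / 24.0       # the quantity averaged in Eq. (Fadef)
full = TT / 9.0                   # the quantity averaged to give exp(-F/T)

assert abs(singlet / 9.0 + 8.0 * octet / 9.0 - full) < 1e-12  # Eq. (decomp), exactly
assert np.allclose(Lx @ Lx.conj().T, np.eye(3))               # L is unitary
assert abs(np.linalg.det(Lx) - 1.0) < 1e-10                   # det L = 1
```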
The problem is also evident if one writes the spectral decomposition of $G_1(r,T)$ \cite{Jahn:2004qr}: \be G_1(r,T)=\sum_{n=1}^{\infty} c_n(r) e^{-E_n(r,T)/T} \label{g1}. \ee The coefficients $c_n(r)$ are different from unity and are path- or gauge-dependent. The EFT approach can help to resolve this puzzle. One can use pNRQCD also in the Euclidean time formulation \cite{Brambilla:2010xn} and study the Polyakov loop correlator in this framework. The Polyakov loop correlator can be written in terms of correlation functions of singlet and octet fields \cite{Brambilla:2010xn} \be G(r,T)=Z_s(r) \langle S(r,\tau=1/T) S^{\dagger}(r,0)\rangle +Z_o(r) \langle O^a(r,\tau=1/T) O^{a \dagger}(r,0)\rangle. \ee For $r T \ll 1$ one can use the zero temperature version of pNRQCD, where the singlet and octet potentials are known up to 2-loop order. One can then show that $Z_s=Z_o=1/9$, and thus in this limit the conjectured decomposition of the Polyakov loop correlator in terms of singlet and octet contributions is justified \cite{Brambilla:2010xn} \be G(r,T)=\frac{1}{9} \exp(-V_s(r)/T)+ \frac{8}{9} \exp(-V_o(r)/T). \label{decomp0} \ee The singlet and octet contributions are gauge-independent in this framework. When the binding energy $E_{bin} \sim\als/r$ is the largest scale in the problem, the free energy of the $Q\bar Q$ pair is dominated by the singlet contribution and is equal to the zero temperature potential \cite{Brambilla:2010xn} up to the term $T \ln 9$ coming from the normalization constant. When the temperature is much larger than the binding energy, i.e. $\als/(rT) \ll 1$, the exponentials in Eq. (\ref{decomp0}) can be expanded and we get \be F(r,T)=-\frac{1}{9}\,\frac{\als^2}{r^2 T}. \ee We see that despite there being no $T$-dependence of the potential in this limit, the free energy is strongly temperature dependent and very different from the potential. The complete next-to-leading order result can be found in Ref. \cite{Brambilla:2010xn}.
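The strong temperature dependence just described rests on a cancellation that can be verified symbolically: expanding Eq.~(\ref{decomp0}) in $\als$ at fixed $r$ and $T$, the $O(\als)$ singlet and octet terms cancel, and the free energy starts at $O(\als^2)$ with coefficient $-1/(9r^2T)$. A sympy sketch:

```python
import sympy as sp

alpha, r, T = sp.symbols('alpha_s r T', positive=True)
Vs = -sp.Rational(4, 3) * alpha / r   # LO singlet potential
Vo = sp.Rational(1, 6) * alpha / r    # LO octet potential

# Eq. (decomp0): G = (1/9) exp(-Vs/T) + (8/9) exp(-Vo/T)
G = sp.Rational(1, 9) * sp.exp(-Vs / T) + sp.Rational(8, 9) * sp.exp(-Vo / T)
F = sp.expand(sp.series(-T * sp.log(G), alpha, 0, 3).removeO())

assert F.coeff(alpha, 1) == 0   # O(alpha_s) terms cancel between singlet and octet
assert sp.simplify(F + alpha**2 / (9 * r**2 * T)) == 0   # F = -alpha_s^2/(9 r^2 T)
```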
Similarly, for the singlet correlator one can write \be G_1(r,T)=\tilde Z_s(r) \langle S(r,\tau=1/T) S^{\dagger}(r,0)\rangle. \ee At leading order $\tilde Z_s(r)=1$ and $F_1(r,T) \simeq V_s(r)$. \begin{figure} \includegraphics[width=7cm]{Fa_6_phys.eps} \includegraphics[width=8cm]{Fa_4.eps} \vspace*{-0.3cm} \caption{The free energy of a static $Q\bar Q$ pair (left) and the difference $F(r,T)-F_{\infty}(T)$ (right) calculated with the HISQ action as a function of the distance $r$ at different temperatures. In the right panel the filled symbols correspond to the lattice data, while the open symbols correspond to the values reconstructed from the singlet free energy. The legend in the left panel is the same as in Fig. 1 (left). } \vspace*{-0.3cm} \label{fig:f} \end{figure} At very high temperatures, for $r \sim 1/m_D$ with $m_D=g T \sqrt{3/2}$ being the leading order Debye mass, the singlet and octet correlators can be calculated in the hard thermal loop (HTL) approximation \cite{Petreczky:2005bd} \begin{equation} F_1(r,T)=-\frac{4}{3} \frac{\alpha_s}{r} \exp(-m_D r)-\frac{4 \alpha_s m_D }{3},~~~ F_8(r,T)=\frac{1}{6} \frac{\alpha_s}{r} \exp(-m_D r)-\frac{4 \alpha_s m_D}{3}. \label{f18p} \end{equation} The singlet and octet free energies are gauge-independent at this order. At large distances the singlet and octet free energies approach a constant value $-\frac{4 \alpha_s m_D}{3}$. This constant corresponds to the leading order result for the free energy of two isolated static quarks $F_{\infty}$, which has also been calculated to next-to-leading order \cite{Burnier:2009bk,Brambilla:2010xn}. The next-to-leading order corrections are small and do not change the qualitative behavior of $F_{\infty}(T)$, which decreases with increasing temperature. At leading order we have $(F_1(r,T)-F_{\infty}(T))/(F_8(r,T)-F_{\infty}(T))=-8$.
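Both the ratio $-8$ and the partial cancellation of the screened singlet and octet contributions can be checked symbolically from Eqs.~(\ref{f18p}); the sketch below treats $m_D$ as a parameter independent of $\alpha_s$, which is a simplification:

```python
import sympy as sp

alpha, r, T, mD = sp.symbols('alpha_s r T m_D', positive=True)
scr = sp.exp(-mD * r)
F1 = -sp.Rational(4, 3) * alpha * scr / r - sp.Rational(4, 3) * alpha * mD   # Eq. (f18p)
F8 = sp.Rational(1, 6) * alpha * scr / r - sp.Rational(4, 3) * alpha * mD
Finf = -sp.Rational(4, 3) * alpha * mD

# leading order ratio of the subtracted free energies
assert sp.simplify((F1 - Finf) / (F8 - Finf)) == -8

# insert Eqs. (f18p) into Eq. (decomp) and expand to O(alpha_s^2):
# the O(alpha_s) r-dependence cancels, leaving F = Finf - alpha_s^2 e^{-2 m_D r}/(9 r^2 T)
G = sp.Rational(1, 9) * sp.exp(-F1 / T) + sp.Rational(8, 9) * sp.exp(-F8 / T)
F = sp.expand(sp.series(-T * sp.log(G), alpha, 0, 3).removeO())
assert sp.simplify(F - (Finf - alpha**2 * scr**2 / (9 * r**2 * T))) == 0
```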
The free energy of a static $Q\bar Q$ pair was calculated at leading order a long time ago \cite{McLerran:1981pb,Gross:1980br,Nadkarni:1986cz} \be F(r,T)=-\frac{1}{9}\, \frac{\alpha_s^2}{r^2 T}\, \exp(-2 m_D r)-\frac{4 \alpha_s m_D }{3}. \ee The above expression can also be obtained by inserting Eqs. (\ref{f18p}) into Eq. (\ref{decomp}) and expanding the exponentials to order $\alpha_s^2$, thus confirming the validity of the decomposition and the partial cancellation of the singlet and octet contributions at leading order. The free energy was calculated at next-to-leading order for $r \simeq 1/m_D$ \cite{Nadkarni:1986cz}, but the decomposition into singlet and octet contributions was not shown. Because of the partial cancellation of the singlet and octet contributions for $r \sim 1/m_D$, we expect that $F(r,T)-F_{\infty}(T) \ll F_1(r,T)-F_{\infty}(T)$ at sufficiently high temperatures. In summary, the partial cancellation of the singlet and octet contributions to the free energy happens both at short and long distances and leads to its strong temperature dependence. \section{Lattice results on the free energy and the singlet free energy} We calculated Polyakov loop correlators as well as singlet correlators on the lattice in 2+1 flavor QCD using the Highly Improved Staggered Quark (HISQ) action \cite{Bazavov:2011nk} on $24^3 \times 6$ and $16^3 \times 4$ lattices. The strange quark mass $m_s$ was fixed to its physical value, while for the light quark masses we used $m_l=m_s/20$, which corresponds to a pion mass of about $160$ MeV. The detailed choice of the lattice parameters is discussed in Ref. \cite{Bazavov:2011nk}. To calculate the singlet free energy we used the Coulomb gauge. The free energy and the singlet free energy have an additive divergent part that has to be removed by adding a normalization constant determined from the zero temperature potential. We used the normalization constants from Ref. \cite{Bazavov:2011nk}.
The numerical results for the singlet free energy are shown in Fig. \ref{fig:f1}. At short distances the singlet free energy agrees with the zero temperature potential calculated in Ref. \cite{Bazavov:2011nk}, while at large distances it approaches a constant value $F_{\infty}(T)$ equal to the excess free energy of two isolated static quarks. As the temperature increases, the deviation from the zero-temperature potential shows up at shorter and shorter distances as a consequence of color screening. To explore the screening behavior, in Fig. \ref{fig:f1} we also show the combination $S(r,T)=r \cdot (F_1(r,T)-F_{\infty}(T))$, which we call the screening function. The screening function should decay exponentially. We indeed observe the exponential decay of this quantity at distances larger than $1/T$. Thus at high temperatures the behavior of the singlet free energy expected from the weak-coupling calculations seems to be confirmed by lattice QCD, at least qualitatively. Let us also mention that at high temperatures the behavior of the singlet free energy is similar to that observed in pure gauge theory \cite{Digal:2003jc,Kaczmarek:2002mc}. In Fig. \ref{fig:f} we show our results for the free energy of a static $Q \bar Q$ pair as a function of the distance at different temperatures. At short distances and low temperatures the free energy is expected to be dominated by the singlet contribution and thus to be equal to the zero temperature potential up to the term $T \ln 9$ coming from the normalization; see the discussion in the previous section. Therefore in the figure the numerical results have been shifted by $-T \ln 9$. Indeed, for the smallest temperature and the shortest distances $F(r,T)-T \ln 9$ is equal to the zero temperature potential shown as the dashed black line. At higher temperatures $F(r,T)$ is very different from the zero-temperature potential.
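At leading order, Eqs.~(\ref{f18p}) give $S(r,T)=r\,(F_1(r,T)-F_{\infty}(T))=-\tfrac{4}{3}\als\,e^{-m_D r}$, so the slope of $\ln|S|$ directly measures the Debye mass. The sketch below evaluates this leading order form with illustrative parameter values (not lattice data) and recovers $m_D$ from a log-linear fit:

```python
import numpy as np

alpha_s, mD = 0.3, 2.0                 # illustrative values, not fit to lattice data
r = np.linspace(0.5, 3.0, 60)          # distances with m_D r = O(1)
S = -(4.0 / 3.0) * alpha_s * np.exp(-mD * r)   # LO screening function r*(F_1 - F_inf)

slope, intercept = np.polyfit(r, np.log(np.abs(S)), 1)
assert abs(-slope - mD) < 1e-10                          # log-slope = Debye mass
assert abs(np.exp(intercept) - 4.0 * alpha_s / 3.0) < 1e-10  # amplitude = (4/3) alpha_s
```

The same log-linear fit applied to lattice data for $S(r,T)$ at $r>1/T$ is what underlies the screening analysis of Fig.~\ref{fig:f1} (right).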
At large distances the free energy approaches a constant value $F_{\infty}(T)$ that decreases with increasing temperature, as expected (see the discussion above). The temperature dependence of $F(r,T)$ is much larger than that of the singlet free energy. This is presumably due to the partial cancellation of the singlet and octet contributions discussed above. To verify this assertion we calculated $F(r,T)-F_{\infty}(T)$ using the numerical data for $F_1(r,T)-F_{\infty}(T)$ and the leading-order relation $(F_1(r,T)-F_{\infty}(T))/(F_8(r,T)-F_{\infty}(T))=-8$. The corresponding results are shown in the right panel of Fig. \ref{fig:f}. As one can see from the figure, the numerical data for $F(r,T)$ are in reasonable agreement with the ones reconstructed from this procedure, and the reconstruction works better with increasing temperature. Thus the expected cancellation of the singlet and octet contributions to the free energy of a static $Q\bar Q$ pair seems to be confirmed by lattice calculations. \begin{figure} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{Veff_7.5_Nt16.eps} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{Veff_7.28_Nt12.eps} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{Veff_7.5_Nt12.eps} \vspace*{-0.3truecm} \caption{The effective potential $V_{eff}(r,\tau)$ as a function of $rT$ calculated on a $48^3 \times 16$ lattice at $\beta=7.5$ (left), a $48^3 \times 12$ lattice at $\beta=7.28$ (middle), and a $48^3 \times 12$ lattice at $\beta=7.5$ (right).
The left, middle and right panels correspond to temperatures of $225$ MeV, $249$ MeV and $300$ MeV, respectively.} \vspace*{-0.3truecm} \label{fig:veff} \end{figure} \begin{figure} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{comp_6.000_9.eps} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{comp_6.195_9.eps} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{comp_6.285_9.eps} \vspace*{-0.4cm} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{comp_6.000_36.eps} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{comp_6.195_36.eps} \hspace*{-0.7cm} \includegraphics[width=5.9cm]{comp_6.285_36.eps} \vspace*{-0.2truecm} \caption{The effective potential $V_{eff}(r,\tau)$ as a function of $\tau T$ calculated on $24^3 \times 6$ lattices for three different temperatures. The upper panels correspond to $rT =1/2$, while the lower panels correspond to $rT =1$. Filled and open diamonds correspond to the Coulomb gauge results at finite and zero temperature, respectively.} \vspace*{-0.2truecm} \label{fig:veff_t} \end{figure} \section{Wilson loops at non-zero temperature} In the previous section we considered correlation functions of a static $Q\bar Q$ pair evaluated at Euclidean time $t=1/T$. These correlators are related to the free energy of a static $Q\bar Q$ pair. One can also consider Wilson loops for Euclidean times $t < 1/T$, which have no obvious relation to the free energy of a static $Q\bar Q$ pair. Wilson loops at non-zero temperature were first studied in Refs. \cite{Rothkopf:2009pk,Rothkopf:2011db} in connection with the heavy quark potential at non-zero temperature, and a spectral decomposition has been conjectured for the Wilson loops \be W(r,\tau)=\int_0^{\infty} d\omega\, \sigma(\omega,r,T) e^{-\omega \tau}. \ee At zero temperature the spectral function is proportional to a sum of delta functions, $\sigma(r,\omega)=\sum_n c_n \delta(E_n(r)-\omega)$, and thus the spectral decomposition is just a generalization of Eq. (\ref{g1}).
At high temperatures the spectral function will be proportional to a sum of smeared delta functions, and the position and the width of the lowest peak are related to the real and imaginary parts of the potential, respectively \cite{Rothkopf:2009pk,Rothkopf:2011db}. Motivated by this we calculated Wilson loops on finite-temperature lattices in 2+1 flavor QCD using the HISQ action with physical strange quark mass and light quark masses $m_l=m_s/20$. We performed calculations using $48^3 \times 16$ and $48^3 \times 12$ lattices at $\beta=7.5$ as well as $48^3 \times 12$ lattices at $\beta=7.28$ ($\beta=10/g^2$). These correspond to temperatures of $225$ MeV, $300$ MeV and $249$ MeV, respectively. In addition we performed calculations using $24^3 \times 6$ lattices for the lattice parameters discussed in the previous section. One of the problems in extracting physical information from Wilson loops on the lattice is the large noise associated with them. To reduce the noise, smeared gauge fields are used in the spatial gauge transporters $U(\vec{x},\vec{y};\tau)$ that enter the Wilson loops. Alternatively one can fix the Coulomb gauge and omit the spatial gauge connections, i.e.\ calculate correlation functions of Wilson lines of extent $t < 1/T$. This method was used by the MILC collaboration \cite{Aubin:2004wf} as well as by the HotQCD collaboration \cite{Bazavov:2011nk} to calculate the static potential at zero temperature. We used both approaches. If the Wilson loop is dominated by the ground state for some value of $\tau$, we may try to extract the static energies at non-zero temperature from single-exponential fits or from the ratio of the Wilson loops at two neighboring time slices separated by a single lattice spacing $a$ \be a V_{eff}(r,\tau)=\ln \left[ W(r,\tau/a)/W(r, \tau/a+1) \right]. \ee At zero temperature, for sufficiently large $\tau$ the effective potential $V_{eff}(r,\tau)$ should reach a plateau.
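The behavior of this ratio is easy to check on synthetic data. The short sketch below (an illustration with an assumed ground-state energy in lattice units, not our actual data) shows that a pure single-exponential correlator yields a flat $V_{eff}$, while adding a small backward-propagating term from the periodic gluon fields (discussed below) spoils the plateau:

```python
import math

def v_eff(W, n):
    """a V_eff(r, tau) = ln[ W(tau/a) / W(tau/a + 1) ] in lattice units."""
    return math.log(W[n] / W[n + 1])

# Assumed, purely illustrative ground-state energy a*V and temporal extent.
aV, Nt = 0.25, 12

# Single forward exponential: V_eff is flat (a plateau at a*V = 0.25).
W_fwd = [math.exp(-aV * n) for n in range(Nt)]
plateau = [v_eff(W_fwd, n) for n in range(Nt - 1)]

# A small backward-propagating piece ~ exp(-a*V*(Nt - n)) destroys the plateau.
W_per = [math.exp(-aV * n) + 0.05 * math.exp(-aV * (Nt - n)) for n in range(Nt)]
bent = [v_eff(W_per, n) for n in range(Nt - 1)]

print(plateau[0], plateau[-1])  # both equal 0.25
print(bent[0], bent[-1])        # drifts away from 0.25 as tau grows
```

This mirrors the procedure in the text: a plateau in $V_{eff}$ signals ground-state dominance, and deviations at large $\tau$ flag the backward contribution that must be removed before fitting.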
At non-zero temperature the situation is more complicated due to the backward-propagating contribution. Lattice calculations of Wilson loops at non-zero temperature in SU(3) gauge theory show exponential decay in $\tau$, but around $\tau T=1$ the Wilson loops increase again \cite{Rothkopf:2009pk,Rothkopf:2011db}. Similar behavior was observed in 2+1 flavor QCD \cite{Bazavov:2012bq}. While no temporal boundary conditions are imposed on the static quarks, the gluon fields are periodic in time, and this may give rise to a contribution that propagates backward in time. Such a backward-propagating contribution was also observed in the study of bottomonium spectral functions in NRQCD at non-zero temperature \cite{Aarts:2010ek}. In Fig. \ref{fig:veff} we show the effective potential calculated on $N_{\tau}=12$ and $16$ lattices as a function of the distance $r$ for different $\tau$. For the $N_{\tau}=16$ lattice, which corresponds to a temperature of $225$ MeV, a plateau seems to be reached for $\tau T \le 1/2$. For these values of $\tau$ the backward-propagating contribution is expected to be small. However, the statistical errors are very large for $rT>1$. For $N_{\tau}=12$ the effective potential does not reach a plateau for $\tau T \le 1/2$. We do not consider larger values of $\tau$ because of the backward-propagating contribution. We attempted to extract the static energy by removing the backward-propagating contribution and fitting the remainder by a single exponential. The results are shown in Fig. \ref{fig:veff} as open symbols and agree quite well with $V_{eff}(r,\tau=5/(12T))$. Thus it is reasonable to assume that the static energy is well approximated by $V_{eff}(r,\tau=5/(12T))$ at these temperatures. For $rT>1$ the static energy is larger than the free energy. These findings are in agreement with earlier findings based on $24^3 \times 6$ lattices \cite{Bazavov:2012bq}. The above analysis as well as the analysis performed in Ref.
\cite{Bazavov:2012bq} is based on using the Coulomb gauge. It is important to check how the results depend on the choice of the static meson operator. In addition it is interesting to study the onset of medium effects as a function of $\tau$. We calculated rectangular Wilson loops using smeared gauge fields in the spatial gauge transporters. To reduce the noise we used several iterations of APE \cite{Albanese:1987ds} and HYP \cite{Hasenfratz:2001hp} smearing. Namely, we used $5$, $10$ and $20$ steps of APE smearing and $1$, $2$ and $5$ steps of HYP smearing. The numerical results are shown in Fig. \ref{fig:veff_t}. As expected, HYP smearing is more efficient than APE smearing, but for the coarse lattices used in our study the difference is not that large. One needs $5$ steps of HYP smearing or $10$ steps of APE smearing to get results comparable to the Coulomb gauge results. Fig. \ref{fig:veff_t} shows that, except for the lowest temperature and distances $r T \le 1/2$, the static potential is affected by the medium. While $V_{eff}$ seems to reach a plateau at zero temperature, no plateau is observed at finite temperature. Overall, the behavior of the Wilson loops with smeared spatial links is similar to the Coulomb gauge result if a sufficient number of smearing steps is used. \section{Quarkonium spectral functions} Heavy meson correlation functions in Euclidean time $G(\tau,T)$ are related to the meson spectral functions $\sigma(\omega,T)$ \begin{equation} G(\tau,T)=\int_0^{\infty} d \omega \sigma(\omega,T) \frac{\cosh(\omega (\tau-1/(2T)))}{\sinh(\omega/(2T))}. \end{equation} Attempts to reconstruct quarkonium spectral functions from lattice QCD using the above equation and the Maximum Entropy Method have been presented in Refs. \cite{Umeda:2002vr,Asakawa:2003re,Datta:2003ww}.
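For orientation, the finite-temperature kernel in this relation can be evaluated directly. The sketch below (illustrative numbers in natural units, everything in GeV; not values from any fit) uses a single $\delta$-function spectral function, for which $G(\tau,T)$ collapses to the kernel itself, and checks the periodicity-induced symmetry of the correlator about $\tau = 1/(2T)$:

```python
import math

def kernel(omega, tau, T):
    """Integration kernel cosh(omega*(tau - 1/(2T))) / sinh(omega/(2T))
    relating sigma(omega, T) to the Euclidean correlator G(tau, T)."""
    return math.cosh(omega * (tau - 1.0 / (2.0 * T))) \
           / math.sinh(omega / (2.0 * T))

# Assumed, purely illustrative values: one peak at omega0 = 3 GeV, T = 0.25 GeV.
# For sigma(omega, T) = delta(omega - omega0) the integral reduces to the kernel.
omega0, T = 3.0, 0.25
beta = 1.0 / T
for f in (0.1, 0.3, 0.5):
    G = kernel(omega0, f * beta, T)
    G_mirror = kernel(omega0, (1.0 - f) * beta, T)
    print(f, G, G_mirror)  # symmetric: G(tau) = G(1/T - tau)
```

The symmetry and the smooth $\tau$-dependence of this kernel are part of why the inversion to $\sigma(\omega,T)$ from a handful of $\tau$ points is so ill-posed, as emphasized below.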
In these studies it was concluded that the charmonium ground state may survive in the deconfined medium up to temperatures $1.6$ times the transition temperature, or maybe even higher, contrary to the expectations based on color screening. However, the reconstruction of meson spectral functions from Euclidean correlators is very difficult \cite{Wetzorke:2001dk}, and the spectral functions are also strongly modified by cutoff effects \cite{Karsch:2003wy}. The suggested survival of quarkonium states in the deconfined medium is closely related to the weak temperature dependence of the Euclidean-time quarkonium correlators \cite{Petreczky:2008px}. Using potential models, quarkonium spectral functions have been calculated, and it was shown that Euclidean correlation functions do not show significant temperature dependence even if bound states are dissolved, due to the limited Euclidean time extent \cite{Mocsy:2007yj}. This seems to be confirmed by the study of spatial charmonium correlators, which indicate the dissolution of the ground state for $T>300$ MeV \cite{Karsch:2012na}, as well as by the study of P-wave bottomonium correlators using lattice NRQCD \cite{Aarts:2010ek}, where larger values of the Euclidean time can be used. Potential models can be related to pNRQCD. As the temperature increases the binding energy becomes smaller and eventually will be the smallest scale in the problem, $E_{bin} \ll \Lambda_{QCD} \ll T,~m_D,~1/r$, and all the other scales in the problem can be integrated out \cite{Petreczky:2010tk}. In this case the potential will be equal to the static energy. The real part of the static energy can be estimated using lattice QCD. In Ref. \cite{Petreczky:2010tk} a phenomenological form based on lattice QCD calculations of the singlet free energy was used for the real part of the potential, while for the imaginary part of the potential the hard thermal loop result \cite{Laine:2006ns} was used.
The spectral functions calculated in this approach show that most quarkonium states melt at temperatures $T>250$ MeV, while the ground-state bottomonium melts at temperatures $T>450$ MeV \cite{Petreczky:2010tk}. The estimates of the static energy obtained from Wilson loops and discussed in the previous section turn out to be very close to the phenomenological potential used in Ref. \cite{Petreczky:2010tk}. Therefore the above estimates of the maximal temperatures that permit the existence of quarkonium states still hold. \section{Conclusion} Color electric screening in high-temperature QCD can be studied using correlation functions of static mesons that wind around the periodic Euclidean time direction. These are related to the free energy of a static quark anti-quark pair. Determination of quarkonium spectral functions from meson correlation functions calculated on the lattice is very difficult. The study of Wilson loops at non-zero temperature offers the possibility to extract the potential that can be used in potential model calculations to obtain the quarkonium spectral functions. \vspace*{-0.3truecm} \section*{Acknowledgements} This work was supported by U.S. Department of Energy under Contract No. DE-AC02-98CH10886. Computations have been performed on BlueGene/L computers of the New York Center for Computational Sciences (NYCCS) at Brookhaven National Laboratory and on clusters of the USQCD collaboration at JLab and FNAL. \vspace*{-0.3truecm} \section*{References} \bibliographystyle{iopart-num}
\section{\label{sec:Introduction}Introduction} Distinguished for their ability to carry high dissipation-less currents below a critical temperature $T_{c}$, superconductors are used in motors, generators, fault-current limiters, and particle accelerator magnets. Their impact spans beyond these examples of large-scale applications, also affecting nanoscale devices. Perhaps most renowned for their key role in the quantum revolution, superconductors constitute building blocks in current and next-generation devices for computing and sensing. For example, superconducting photon detectors feature high resolution due to their high kinetic inductance and a sharp superconductor-to-normal phase transition. Moreover, superconductors can be configured to form anharmonic oscillators that can be exploited in quantum computing. \begin{figure}[ht] \includegraphics[width = 0.4\textwidth]{figures/figure-alph.pdf} \caption{Frontiers in vortex matter research. The black line represents the vortex core. The yellow region shows how the density of superconducting electron pairs decays towards the center of the core (of size $\sim \xi$, the coherence length). The blue plane (with arrows) represents the amplitude of the supercurrent, circulating around the core out to radii of order the penetration depth $\lambda$. \label{fig:summary} } \end{figure} \begin{figure}[htp] \includegraphics[width=1\linewidth]{figures/vortexstructures.pdf} \caption{\label{fig:vortexstructures} Examples of vortex structures (curved blue lines) that are predicted to form in different defect landscapes under the influence of an applied current.
Imaging these structures and defects would allow us to establish the crucial connection between vortex excitations, vortex-defect and vortex-vortex interactions, Lorentz forces, and the resulting vortex phases that is needed for efficacious defect engineering.} \end{figure} Notwithstanding these successes, the performance of superconducting devices is often impaired by the motion of vortices---lines threading a quantum $\Phi_{0} = h / 2e$ of magnetic flux through the material (see Fig.~\ref{fig:summary}). Propelled by electrical currents and thermal/quantum fluctuations, vortex motion is dissipative, such that it limits the current-carrying capacity in wires, causes losses in microwave circuits, contributes to decoherence in qubits, and can also induce phase transitions. Understanding vortex dynamics is a formidable challenge because of the complex interplay between moving vortices, material disorder that can counteract (pin) vortex motion, and thermal energy that causes vortices to escape from these pinning sites. Furthermore, as depicted in Fig.~\ref{fig:vortexstructures}, in three-dimensional samples (bulk crystals or thick films), vortices are elastic objects that form complicated shapes as they wind through the disorder landscape, reshaping and moving under the influence of current-induced Lorentz forces. These complexities encumber predictability: we can neither predict technologically important parameters in superconductors nor prescribe an ideal defect landscape that optimizes these parameters for specific applications. Though modifying the disorder landscape, e.g.\ using particle irradiation or by incorporating non-superconducting inclusions into growth precursors, can engender dramatic enhancements in the current-carrying capacity, these processes are often designed through a trial-and-error approach. Furthermore, the optimal defect landscape is highly material-dependent.
This is because the efficacy of pinning centers depends on the relationship between their geometry and the vortex structure, the latter being determined by parameters of the superconductor such as the coherence length $\xi$, penetration depth $\lambda$, and the anisotropy $\gamma$, see Fig.~\ref{fig:summary}. For example, though particle irradiation has successfully doubled the critical current in cuprates and certain iron-based superconductors, the same ions and energies do not even produce universal effects in materials belonging to the same class of superconductors\cite{Tamegai2012}. Though we can indeed tune the disorder landscape, we certainly do not have full control of it. Defects such as stacking faults, twin boundaries, and dislocations are often intrinsic to materials and their densities are challenging to tune. As a further complication to understanding vortex-defect interactions, superconductors often have mixed pinning landscapes, i.e., containing multiple types of defects. Though these landscapes immobilize vortices over a broader range of conditions (temperatures and fields) than landscapes containing only one type of defect, it is challenging to infer the vortex structures that form within these materials and no techniques currently exist to fully image these structures and vortex-defect interactions on a microscopic level. Generally speaking, achieving a materials-by-design approach first entails garnering a sufficient microscopic understanding of vortex-defect and vortex-vortex interactions, then incorporating these details into simulations. Significant headway has been made along these lines with the implementation of large-scale time-dependent Ginzburg-Landau (TDGL) simulations to study vortex motion through disordered media. 
Spearheaded by Argonne National Laboratory, this effort has accurately modeled critical currents $J_{c}$ in thin films (2D), layered and anisotropic 3D materials, as well as isotropic superconductors~\cite{Sadovskyy2015a,Sadovskyy2016a,Glatz2016,Sadovskyy2017,kimmel2019}. Additionally, it has determined the optimal shape, size, and dimensionality of defects necessary to maximize $J_{c}$, depending on the magnitude and orientation of the magnetic field~\cite{Koshelev2016,Sadovskyy2016b,Kimmel2017,Sadovskyy2019}. Backed by good agreement with experimental and analytic results for simple geometries \cite{Willa2015a, Willa2015b, Willa2016, Willa2018c}, the utility of the numerical routine has successfully been extended to previously unknown territories, optimizing pinning geometries outside the scope of analytic methods \cite{Glatz2016, Kimmel2017, Koshelev2016, Kwok2016,Papari2016, Sadovskyy2015a, Sadovskyy2016a, Sadovskyy2016b, Sadovskyy2019, Willa2018b,ted100}. In fact, these TDGL simulations have unveiled new phenomena---such as a small peak in $J_{c}(B)$ at high fields that is caused by double vortex occupancy of individual pinning sites.\cite{Willa2018a} The Argonne team has even deployed mature optimization processes based on targeted evolution using genetic algorithms.~\cite{Sadovskyy2019} This is a remarkable step towards the goal of \textit{critical-current-by-design}. A critical-current-by-design approach must also consider thermal fluctuations, which dramatically impact the critical current through rapid thermally induced vortex motion (thermal creep). Creep, which manifests as a decay of the persistent current over time, is rarely problematic in low-$T_{c}$ superconductors as it is typically quite slow. Consequently, Nb--Ti solenoids in magnetic resonance imaging systems can operate in \textit{persistent mode}, retaining a fairly constant magnetic field for essentially indefinite time periods.
However, creep is fast in high-$T_{c}$ superconductors, restricting applications and reducing the effective $J_{c}$. For the sake of power and magnet applications, the goals are clear---maximize the critical current and minimize creep. Regarding the former, there is much room for improvement: no superconductor containing vortices has ever achieved a $J_{c}$ higher than 25\% of its theoretical maximum, which is thought to be the depairing current $J_{d}=\Phi_0/(3\sqrt{3}\pi\mu_0 \xi \lambda^2)$. Regarding creep, we are fighting a theoretical lower bound.\cite{Eley2017a} This lower bound positively correlates with a material's Ginzburg number $Gi = (\gamma^2/2)(k_{\mathrm{B}} T_{c}/ \varepsilon_{sc})^2$, which is set by the ratio of the thermal energy to the superconducting condensation energy $\varepsilon_{sc} = (\Phi_{0}^{2} / 2 \pi \mu_{0} \xi^{2} \lambda^{2}) \xi^{3}$. The implications are grim: creep is expected to be so fast in potential, yet-to-be-discovered room-temperature superconductors that it would render them unsuitable for applications. The caveat is that this lower bound is limited to low temperatures and fields (single-vortex dynamics), and collective vortex dynamics could be key to achieving slow creep rates. Though superconducting sensing and computing applications do not require high currents, vortices still pose a nuisance by limiting the lifetime of the quantum state in qubits \cite{Oliver2013}, inducing microwave energy loss in resonators \cite{Song2009a}, and generally introducing noise. It is known that dissipation from vortex motion reduces the quality factor in superconducting microwave resonators, which are integral components in certain platforms for quantum sensors and the leading solid-state architecture for quantum computing (circuit QED)\cite{Wallraff2004, Blais2004, Krantz2019, Muller2019}. They are used to address and read out qubits as well as mediate coupling between multiple qubits.
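Returning to the energy scales above, both $J_d$ and $Gi$ follow directly from the material parameters. A short sketch in SI units makes the orders of magnitude concrete; the YBCO-like inputs ($\lambda = 150$ nm, $\gamma = 7$, $T_c = 92$ K) are typical literature values assumed here for illustration, with $\xi = 1.6$ nm as quoted later in the text:

```python
import math

# Physical constants (SI)
PHI0 = 2.067833848e-15   # flux quantum, Wb
MU0  = 4.0e-7 * math.pi  # vacuum permeability
KB   = 1.380649e-23      # Boltzmann constant

def depairing_current(xi, lam):
    """J_d = Phi0 / (3*sqrt(3)*pi*mu0*xi*lam^2), in A/m^2."""
    return PHI0 / (3.0 * math.sqrt(3.0) * math.pi * MU0 * xi * lam**2)

def ginzburg_number(gamma, Tc, xi, lam):
    """Gi = (gamma^2/2) * (kB*Tc / eps_sc)^2, with condensation energy
    eps_sc = (Phi0^2 / (2*pi*mu0*xi^2*lam^2)) * xi^3."""
    eps_sc = (PHI0**2 / (2.0 * math.pi * MU0 * xi**2 * lam**2)) * xi**3
    return 0.5 * gamma**2 * (KB * Tc / eps_sc)**2

# Assumed YBCO-like parameters (illustrative, not fitted values):
xi, lam, gamma, Tc = 1.6e-9, 150e-9, 7.0, 92.0
print(depairing_current(xi, lam) / 1e4)      # in A/cm^2, of order 3e8
print(ginzburg_number(gamma, Tc, xi, lam))   # of order 1e-2
```

The resulting $J_d$ of order $10^8\,\mathrm{A/cm^2}$ is consistent with the 25\% statement above, and $Gi \sim 10^{-2}$ for a cuprate versus $\sim 10^{-8}$ for a low-$T_c$ material is what makes creep so much faster in high-$T_c$ superconductors.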
Consequently, resonator stability can be essential for qubit circuit stability. Moreover, thermally activated vortex motion can contribute to $1/f$ noise and critical current fluctuations \cite{Trabaldo2019, VanHarlingen2004} in quantum circuits and is a suspected source of the dark count rate in superconducting nanowire single-photon detectors \cite{PhysRevB.83.144526, Yamashita2013}. In these quantum circuits, vortices appear due to pulsed control fields, ambient magnetic fields\cite{Song2009}, and the self-field generated by bias currents \cite{Yamashita2013}. Mitigating the effects of vortices requires heavy shielding to block external fields and careful circuit design to control their motion, the latter of which is quite tricky. The circuit should include structures to trap vortices away from operational currents and readout, as well as narrow conductor linewidths\cite{Stan2004} to make vortex formation less favorable. However, these etched structures may exacerbate another major source of decoherence---parasitic two-level fluctuators---defects in which ions tunnel between two almost energetically equivalent sites, which act as dipoles and thus interact with oscillating electric fields during device operation.\cite{Muller2019} Hence, designing quantum circuits that are robust to environmental noise is not trivial and has become a topic of intense interest.\cite{Muller2019, Oliver2013} Despite all of the aforementioned application-limiting problems caused by vortices, they are not pervasively detrimental to device performance. For example, vortices can trap quasiparticles---unpaired electrons that are a third source of decoherence in superconducting quantum circuits---boosting the quality factor of resonators\cite{PhysRevLett.113.117002} and the relaxation time of qubits\cite{Wang2014}.
Furthermore, vortices can host elusive, exotic modes that are in fact useful for topological qubits, which are predicted to be robust to environmental noise that plagues other quantum device architectures. To exploit these modes in computing, we must control the dynamics of their vortex hosts. Hence, in general, these disparate goals of eliminating or utilizing vortices for applications both require an improved understanding of vortex formation, dynamics, and, ultimately, control. The goal of this Perspective is to present opportunities for transformative advances in vortex physics. In particular, we start by addressing vortex creep in Sec.~\ref{ssec:vortexcreep}, which notes limited knowledge of non-thermal creep processes and how recent increases in computational power will enable full consideration of creep in simulations. Second, in Sec.~\ref{ssec:Jd} we explore the true maximum achievable critical current, and the need to simultaneously exploit multiple pinning mechanisms to surpass current records for $J_{c}$. Next, Sec.~\ref{sec:RFcavities} discusses vortex-induced losses in response to AC magnetic fields and currents, with a focus on the impact on superconducting RF cavities used in accelerators and quantum circuits. We examine how the quantum revolution has handled the vortex problem for computing, while sensing applications necessitate further studies. As solving the aforementioned problems requires advanced computational algorithms, we then proceed to discuss future uses of artificial intelligence to understand the vortex matter genome in Sec.~\ref{ssec:AI}. Finally, in Sec.~\ref{ssec:microscopy}, we recognize that most experimental studies use magnetometry and electrical transport studies to \emph{infer} vortex-defect interactions, and discuss the frontiers of microscopy that could lead to observing these interactions as well as accurately determining defect densities. 
\section{Background} Superconductors have the remarkable ability to expel external magnetic fields up to a critical value $H_{c1}$, a phenomenon known as the Meissner effect. Though surpassing $H_{c1}$ quenches superconductivity in some materials, the state persists up to a higher field $H_{c2}=\Phi_0/(2\pi\mu_0\xi^2)$ in type-II superconductors. In this class of materials, $H_{c1}$ can be quite small (several \SI{}{\milli\tesla}) whereas $H_{c2}$ can be extremely large (from a few tesla up to as high as \SI{120}{\tesla})\cite{NMiura2002}, such that the interposing state between the lower and upper critical fields consumes much of the phase diagram and defines the technologically relevant regime. This \textit{mixed state} hosts a lattice of vortices, whose density scales with the magnetic field, $n_v \propto B$. We should also note that, in addition to globally applied fields, self-fields from currents propagating within a superconductor can also locally induce vortices. Each vortex carries a single flux quantum $\Phi_0$, and the core defines a nanoscale region through which the magnetic field penetrates the material. As such, the vortex core is non-superconducting, of diameter $2\xi(T)$, and surrounded by circulating supercurrents of radii up to $\lambda(T)$, as depicted in Fig.~\ref{fig:summary}. Given the dependence of the vortex size on these material-dependent parameters, vortices effectively look different in different materials---for example, they are significantly smaller in the high-temperature superconductor $\mathrm{Y}\mathrm{Ba}_2\mathrm{Cu}_3\mathrm{O}_y$ (YBCO), where $\xi(0) = \SI{1.6}{\nano\meter}$, than in Nb, in which $\xi(0)= \SI{38}{\nano\meter}$.\cite{Wimbush2015} Vortex motion constitutes a major source of dissipation in superconductors.
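For scale, both the upper critical field and the inter-vortex spacing follow directly from $\Phi_0$ and the coherence lengths quoted above; a minimal sketch (the 1 T field in the last line is an arbitrary illustrative choice):

```python
import math

PHI0 = 2.067833848e-15  # flux quantum, Wb

def Bc2(xi):
    """Upper critical field B_c2 = Phi0 / (2*pi*xi^2), in tesla."""
    return PHI0 / (2.0 * math.pi * xi**2)

def vortex_spacing(B):
    """Mean inter-vortex distance for a vortex density n_v = B/Phi0, in metres."""
    return math.sqrt(PHI0 / B)

# Coherence lengths quoted in the text: YBCO xi(0) = 1.6 nm, Nb xi(0) = 38 nm.
print(Bc2(1.6e-9))          # ~1.3e2 T for YBCO
print(Bc2(38e-9))           # ~0.23 T for Nb
print(vortex_spacing(1.0))  # ~45 nm spacing at an assumed 1 T field
```

The $\sim\!130$ T obtained for YBCO is consistent with the "as high as 120 T" figure above, and the tens-of-nanometre vortex spacing at tesla-scale fields shows why vortex-vortex interactions cannot be neglected there.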
Propelled by currents of density $\vec{J}$ and thermal/quantum energy fluctuations, vortices experience a Lorentz force density $\vec{F}_{L} = (\vec{J} \times \vec{B})/c$ accompanied by Joule heating that weakens superconducting properties. It is this cascading process that is responsible for the undesirable impacts on applications, for which examples were provided in Sec.~\ref{sec:Introduction}. \subsection{\label{sec:vortexpinning}Fundamentals of vortex pinning} \begin{figure*}[t!] \includegraphics[width=1\linewidth]{figures/Jc_cropped.pdf} \caption{Enhancement in $J_{c}$ or $M \propto J_{c}$ in (a) an oxygen-ion-irradiated Dy$_2$O$_3$-doped commercial YBCO film grown by American Superconductor Corporation\cite{Leroux2015, Eley2017a} in a field of $\SI{5}{\tesla}$, (b) a BaZrO$_3$-doped (Y$_{0.77}$Gd$_{0.23}$)Ba$_2$Cu$_3$O$_y$ film grown by Miura \textit{et al.} \cite{Miura2013k}, (c) a BaZrO$_3$-doped BaFe$_2$(As$_{1-x}$P$_x$)$_2$ film grown by Miura \textit{et al.}\cite{Eley2017}, and (d) a heavy-ion-irradiated NbSe$_2$ crystal\cite{Eley2018}. Measurements by S.\ Eley. Insets show transmission electron micrographs of the defect landscapes, from Refs.~[\onlinecite{Eley2017, Miura2013k, Eley2017a, Eley2018}].}\label{fig:Jcenhancement} \end{figure*} Immobilizing vortices constitutes a major research area, in which the most prominent benchmark to assess the strength of pinning is the critical current $J_{c}$. \cite{Bean1964, Zeldov1994, Zeldov1994b, Brandt1999, Willa2014, Gurevich2014, Gurevich2017, Gurevich2018, Kubo2019, Dhakal2020} Once vortices are present in the bulk, crystallographic defects such as point defects, precipitates, twin boundaries, stacking faults, and dislocations provide an energy landscape to trap vortices. Depending on the defect type and density, one of two mechanisms is typically responsible for vortex pinning: weak collective effects from groups of small defects or strong forces exerted by larger defects.
Originally formulated by Larkin and Ovchinnikov~\cite{Larkin1979}, the theory of weak collective pinning describes how atomically small defects alone cannot apply a sufficient force on a vortex line to immobilize it. However, the collective action of many can indeed pin a vortex. In the case of a random arrangement of small, weak, and uncorrelated pinning centers, the average force on a straight flux line vanishes. Then, only fluctuations in the pinning energy (higher-order correlations) are capable of producing a net pinning force. Considering this, the phenomenology of weak collective pinning finds that the resulting critical current should scale \emph{quadratically} with the pin density $n_{p}$, i.e., $J_{c} \propto n_{p}^{2}$, see Ref.~[\onlinecite{Blatter1994}]. On the other hand, strong pinning results when larger defects each plastically deform a vortex line, such that a low density of these defects is sufficient to pin it. Competition between the \emph{bare} pinning force $f_{p}$ and the vortex elasticity $\bar{C}$ generates multi-valued solutions. Because of this, a proper averaging of the effective force $\langle f_{\mathrm{pin}}\rangle$ from individual pins is non-zero and results in a critical current $J_{c} = n_{p} \langle f_{\mathrm{pin}}\rangle$. Here, the critical current reached by strong pins depends \emph{linearly} on the defect density. While strong pinning is conceptually simpler than weak collective pinning, it has taken significantly longer to develop a strong pinning formalism. With its completion in the early 2000s, the formalism enabled computing numerous physical observables, including the critical current~\cite{Blatter2004, Koopmann2004}, the excess-current characteristic~\cite{Thomann2012, Thomann2017,Buchacek2018, Buchacek2019a, Buchacek2019b, Buchacek2020-condmat}, and the $ac$ Campbell response~\cite{Willa2015a, Willa2015b, Willa2016, Willa2018c}. Defects merely trap the vortex state into a metastable minimum.
Thermal and quantum fluctuations release vortices from pinning sites, and this activated motion of vortices from a pinned state to a more stable state is called vortex creep. In the presence of creep, the critical current $J_{c}$ is no longer a distinct boundary separating a dissipation-free regime from a dissipative one. Experimentally, this manifests as a power-law transition $V \propto J^n$ between the superconducting state, in which $V=0$, and Ohmic behavior. The creep rate $S \equiv - d \ln(J) / d\ln(t)$ then becomes $\propto 1/n$ and can be assessed by fitting the transitional regime in the current-voltage characteristic or by measuring the temporal decay of an induced persistent current. Measurements of the vortex creep rate also provide access to microscopic details, such as the effective energy barrier $U^*=T/S$ surmounted and whether single vortices or bundles are creeping. Various methods of tailoring the disorder landscape in superconductors have proven successful in remarkably enhancing the critical current. Figure \ref{fig:Jcenhancement} shows examples of cuprates, iron-based superconductors, and low-$T_c$ materials that have all benefited from incorporating inclusions. Defects can be added post-growth, using techniques such as particle irradiation,\cite{Leroux2015, Eley2017a, Tamegai2012, Averback1997, Fang2011, Gapud2015, Goeckner2003, Haberkorn2015a, Haberkorn2015b, Haberkorn2012a, Iwasa1988, Jia2013, Kihlstrom2013, Kirk1999, Konczykowski1991, Matsui2012, Nakajima2009, Roas1989, Salovich2013, Sun2015, SwiecickiPhysRevB12, Taen2012, Taen2015, Thompson1991, Vlcek1993, Zhu1993, Leonard2013} or during the growth process by incorporating impurities into the source material.\cite{Miura2013k, Haugan2004, Horide2013, Miura2011, PalauSUST2010, WimbushSUST10} Though these processes induce markedly different disorder landscapes, both can effectuate remarkable increases in $J_{c}$.
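The definition $S \equiv -d\ln J/d\ln t$ suggests a direct estimate from the logarithmic decay of the persistent current. A minimal stdlib-only sketch on synthetic data (the true rate $S = 0.02$ and the 77 K temperature are assumed, illustrative values):

```python
import math

def creep_rate(ts, Js):
    """Estimate S = -d ln J / d ln t as the least-squares slope of
    ln J versus ln t."""
    x = [math.log(t) for t in ts]
    y = [math.log(J) for J in Js]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar)**2 for xi in x)
    return -slope

# Synthetic persistent-current decay, J(t) ~ t^(-S), with assumed S = 0.02.
S_true, J0 = 0.02, 1.0e6
ts = [10.0 * 2**k for k in range(10)]            # measurement times, s
Js = [J0 * (t / ts[0])**(-S_true) for t in ts]
S = creep_rate(ts, Js)
print(S)         # recovers 0.02
print(77.0 / S)  # effective barrier U* = T/S at T = 77 K, in kelvin
```

On real magnetization data the same slope is taken over a window of the $\ln J$--$\ln t$ curve, yielding both $S$ and the effective barrier $U^* = T/S$ quoted in the text.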
However, the conditions necessary to improve electromagnetic properties are highly material-dependent---this lack of universality renders defect landscape engineering a process of trial-and-error. Particle irradiation can induce point defects (vacancies, interstitial atoms, and substitutional atoms), larger random defects, or correlated disorder (e.g., amorphous tracks known as columnar defects). Notably, the critical current in commercial YBa$_2$Cu$_3$O$_{7-\delta}$ coated conductors was nearly doubled through irradiation with protons\cite{Jia2013}, oxygen-ions\cite{Leroux2015, Eley2017}, gold-ions\cite{Rupich2016}, and silver-ions. Furthermore, iron-based superconductors have also been shown to benefit from particle irradiation\cite{Tamegai2012}. To incorporate larger defects, such as nanoparticle inclusions, numerous groups\cite{MacManus-Driscoll2004, Feighan2017, Miura2013k, Miura2011, Miura2016, Miura2017} have introduced excess Ba and $M$ (where $M$= Zr, Nb, Sn, or Hf) into growth precursors. This results in the formation of randomly distributed 5-20 nm sized non-superconducting Ba$M$O$_3$ nanoparticles or nanorods. This method has produced critical currents that are up to seven times higher than those in films without inclusions\cite{Miura2013} and has therefore become one of the leading schemes for enhancing $J_{c}$. The enhancement achieved by inclusions and irradiation is often restricted to a narrow temperature and field range, partially because $\xi$ and $\lambda$ are temperature dependent, whereas the defect sizes and densities are fixed. Another reason for the limited range of the enhancement is that, under the right conditions, certain fast moving vortex excitations may form. For example, in materials containing parallel columnar defects, double-kink excitations form at low fields and moderate temperatures that result in fast vortex creep concomitant with reduced $J_{c}$.
Mixed pinning landscapes, composed of different types of defects, can enhance $J_{c}$ over a broader temperature and field range than inclusions of only one type and one size.\cite{Maiorov2009} More work is necessary to optimize such mixed landscapes. \subsection{Thermally activated vortex motion} Vortex creep is a very complex phenomenon due to the interplay between vortex-vortex interactions, vortex-defect interactions, vortex elasticity, and anisotropy.\cite{Blatter1994, Feigelman1989, Willa2020a} These interactions determine $U_{act}(T,H,J)$, a generally unknown regime-dependent function. The simplest creep model, proposed by Anderson and Kim, neglects the microscopic details of pinning centers and considers vortices as non-interacting rigid objects hopping out of potential wells of depth $U_{act}\propto U_{p}|1 - J/J_c|$. However, vortices are elastic objects whose length can increase over time under force from a current, and vortex-vortex interactions are non-negligible at high fields. As such, the Anderson-Kim model's relevance is limited to low temperatures and fields. At high temperatures and fields, collective creep theories, which consider vortex elasticity, predict an inverse power law form for the current-dependent energy barrier $U_{act}(J) = U_{p}[(J_c/J)^{\mu}- 1]$, where $\mu$ is the so-called glassy exponent that is related to the size and dimensionality of the vortex bundle that hops during the creep process\cite{Blatter1991}. To capture behavior across regimes, the interpolation formula $U_{act}(J) = (U_{p}/\mu)\left[(J_c/J)^\mu - 1\right]$ is commonly used, where setting $\mu = -1$ recovers the Anderson-Kim prediction.
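For concreteness, the interpolation barrier is simple to evaluate directly; the minimal sketch below uses $U_{p} = J_{c} = 1$ as illustrative units and checks the two limits just named:

```python
# Activation barrier in the interpolation form U_act(J) = (U_p/mu)[(J_c/J)^mu - 1].
# Units are illustrative: U_p = J_c = 1.  mu = -1 reproduces the linear
# Anderson-Kim barrier; mu = 1 gives a collective-creep barrier that
# diverges as J -> 0.
def U_act(j, mu, Up=1.0):
    """Barrier versus reduced current j = J/J_c for glassy exponent mu."""
    return (Up / mu) * (j ** (-mu) - 1.0)

print(U_act(0.5, -1.0))  # Anderson-Kim: U_p (1 - J/J_c) = 0.5
print(U_act(0.5, 1.0))   # collective creep: U_p (J_c/J - 1) = 1.0
```

Note how, for currents well below $J_c$, the $\mu = 1$ barrier grows without bound while the Anderson-Kim barrier saturates at $U_p$, which is why collective creep freezes out at small currents.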
Combining this interpolation formula with the creep time $t = t_0e^{U_{act}(J)/k_{\mathrm{B}} T}$, we find that the persistent current should decay over time as $J(t) = J_{c0} [1+(\mu k_{\mathrm{B}} T/U_{p})\ln(t/t_0)]^{-1/\mu}$ and that the thermal vortex creep rate is \begin{align} S \equiv \Big| \frac{d \ln J}{d \ln t} \Big| = \frac{k_{\mathrm{B}} T}{U_{p} + \mu k_{\mathrm{B}} T \ln(t/t_0)},\label{eq:STHeqn} \end{align} where $\ln(t/t_0) \sim 25\text{-}30$. Because the magnetization $M \propto J$, creep can easily be measured by capturing the decay in the magnetization over time using a magnetometer. Moreover, as seen from Eq.~\eqref{eq:STHeqn}, knowledge of $S(T,H)$ provides access to both $U_{p}$ and $\mu$. Hence, creep measurements are a vital tool for revealing the size of the energy barrier; its dependence on current, field, and temperature; and whether the dynamics are glassy or plastic. It is important to note that Eq.~\eqref{eq:STHeqn} is typically used to analyze creep data piecewise---it can rarely be fit to the entire temperature range. Creep rates are not predictable, and no analytic expression exists that broadly captures the temperature and field dependence of creep. Both $U_{p}$ and $\mu$ have unknown temperature dependencies, which is a major gap in our ability to predict vortex creep rates. \subsection{Predictive vortex matter simulations} Simulating the behavior of vortex matter~\cite{Blatter1994,Brandt:1995,Crabtree1997,Nattermann2000,BlatterG:2003,ROPP} has a long history. Though the value of such simulations was realized long ago~\cite{BrandtJLTP83-1,BrandtJLTP83-2}, their efficacy in producing accurate results for materials containing complex defect landscapes is a recent success, tied to improvements in computational power.
Specifically, we can now numerically solve more realistic models, ranging from Langevin dynamics to time-dependent Ginzburg-Landau (TDGL) equations to fully microscopic descriptions, including Usadel and Eilenberger, Bogoliubov-de Gennes, and non-equilibrium Keldysh-Eilenberger quantum transport equations. While the phenomenological TDGL equations describe vortex matter realistically on length scales above the superconducting coherence length, full microscopic equations are needed to describe, e.g., the vortex core accurately. This, however, means that the system sizes that can be simulated with microscopic models, though resolved down to the nanoscale, are quite limited, while TDGL can simulate macroscopic behavior including most dynamical features of vortex matter. \begin{figure*} \includegraphics[width=1\linewidth]{figures/fig_sim_exp.pdf} \caption{\textbf{(a)} 3D STEM tomogram of a 0.5 Dy-doped YBCO sample. Image processing is discussed in Ref.~[\onlinecite{Leroux2015}]. \textbf{(b)} Critical current $J_{c}$ as a function of the magnetic field $B$ applied along the c-axis of YBCO. The simulated field dependence (circles, red curve) with only the nanoparticles observed by STEM tomography in the sample with 0.5 Dy doping exhibits almost the same exponent $\alpha$, for $J_c \propto B^{-\alpha}$, as the experiment (triangles, green curve). Adding $2\xi$ diameter inclusions to the simulation makes the dependence less steep (squares, blue curve), which yields an exponent very similar to the experimental one in the sample with 0.75 Dy doping (stars, yellow curve). \textbf{(c)} Snapshot of the TDGL vortex configuration with applied magnetic field and external current for the same defect structure as in the experiment (a). Isosurfaces of the order parameter close to the normal state are shown in red and follow both vortex and defect positions.
The amplitude of the order parameter is represented on the backplane of the volume, where blue corresponds to maximum order parameter amplitude. Arrows indicate the experimental and simulated $J_{c}$ dependencies.}\label{fig:tomogram} \end{figure*} The Langevin approach only considers vortex degrees of freedom, while mostly neglecting elasticity and vortex-vortex interactions, which are nonlocal effects. Hence, its accuracy is limited to cases where inter-vortex separations are significantly larger than $\xi$, vortex pinning sites are dilute, or the superconducting host is sufficiently thin that vortices can be considered 2D particles. Nevertheless, this simple picture reveals remarkably rich, dynamical behavior -- notably realizing a dependence of $J_{c}$ on the strength and density of pinning centers~\cite{BrandtJLTP83-1,BrandtJLTP83-2}, thermal activation of vortices from pinning sites~\cite{KoshelevPhysC92}, a crossover between plastic and elastic behavior~\cite{CaoPhysRevB00,DiScalaNJP12}, and dynamic ordering of vortex lattices at large velocities~\cite{KoshelevPhysRevLett94, MoonPhysRevLett96}. However, vortex elasticity is an influential parameter in bulk systems. It results in vortex phases that are characterized by complex vortex structures, glassy phases that do not exist in 2D systems, as well as other interesting characteristics~\cite{ErtasK:1996, OtterloPRL00, BustingorryCD:2007, LuoHu:2007, LuoHuJSNM10, Koshelev:2011, DobramyslEPJ13}. Herein lies the strength of the TDGL approach, which is a good compromise between complexity and fidelity. It describes the full behavior of the superconducting order parameter~\cite{schmid} and therefore represents a `mesoscopic bridge' between microscopic and macroscopic scales.
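To make this `mesoscopic bridge' concrete, the sketch below relaxes a minimal, dimensionless TDGL equation with no field or current; the grid size, defect positions, and $\epsilon$ values are illustrative assumptions chosen only to show the order parameter being suppressed at $\delta T_{c}$-pinning sites:

```python
import numpy as np

# Minimal dimensionless TDGL relaxation on a 2-D grid (no magnetic field,
# no applied current):  d(psi)/dt = eps(r) psi - |psi|^2 psi + laplacian(psi).
# eps(r) < 0 inside defects models local T_c suppression (delta-T_c pinning).
# Lengths are in units of the coherence length xi; all values illustrative.
N, dt, steps = 64, 0.05, 400

eps = np.ones((N, N))                      # bulk: full condensation, eps = 1
yy, xx = np.mgrid[0:N, 0:N]
for cx, cy in [(16, 16), (48, 40)]:        # two circular normal inclusions
    eps[(xx - cx) ** 2 + (yy - cy) ** 2 < 3 ** 2] = -0.5

def lap(f):                                # periodic 5-point Laplacian
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

psi = np.full((N, N), 0.1 + 0j)            # small uniform seed
for _ in range(steps):                     # forward-Euler relaxation
    psi = psi + dt * (eps * psi - np.abs(psi) ** 2 * psi + lap(psi))

print(abs(psi[16, 16]), abs(psi[32, 32]))  # suppressed at defect, ~1 in bulk
```

Adding the vector potential, thermal noise, and an applied current to such a solver is what turns this toy relaxation into the large-scale TDGL machinery discussed next.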
Notably, it surpasses the Langevin approach by (i) describing all essential properties of vortex dynamics, including inter-vortex interactions with crossing and reconnection events, (ii) possessing a rigorous connection to the microscopic Bardeen-Cooper-Schrieffer theory in the vicinity of the critical temperature~\cite{Gorkov:1959}, and (iii) considering realistic pinning mechanisms. Regarding pinning, it can specifically account for pinning due to modulation of critical temperature ($\delta T_{c}$-pinning) or mean-free path ($\delta \ell$-pinning), strain, magnetic impurities~\cite{DoriaEPL07}, geometric pinning through appropriate boundary conditions, and, generally, weak to strong pinning regimes---all beyond the reach of the Langevin approach. Consequently, the TDGL formulation is arguably one of the most successful physical models, describing the behavior of many different physical systems, even beyond superconductors~\cite{Aranson:2002}. In its early days, the TDGL approach was used to study depinning, plastic, and elastic steady-state vortex motion in systems containing twin and grain boundaries as well as both regular and irregular arrays of point or columnar defects~\cite{kaper,crabtree2000}. Those simulations were predominantly used to illustrate the complex dynamics of individual vortices because computational limitations prohibited the study of large-scale systems with collective vortex dynamics. Only later did simulation of about a hundred vortices in two-dimensional systems become possible, resulting in predictions for, e.g., the field dependence of $J_{c}$ in thin films with columnar defects~\cite{Palonen2012}. A 2002 article by Winiecki and Adams~\cite{Winiecki:2002} deserves credit as one of the first simulation-based studies of vortex matter in three-dimensional superconductors that produced a realistic electromagnetic response.
Later, in 2015, Koshelev et al.~\cite{Koshelev2016} achieved a major technical breakthrough by investigating optimal pinning by monodispersed spherical inclusions. The simulated system size of $100\xi \! \times \! 100\xi \! \times \! 50\xi$ was much larger than any previously studied system, enabling even more realistic simulations of the collective vortex dynamics than previous works. Their computational approach is based on an optimized parallel solver for the TDGL equation~\cite{sadovskyy+jcomp2015}, which allows for simulating vortex motion and determining the resulting electrical transport properties in application-relevant systems. The efficacy of this technique is best demonstrated in a study~\cite{Sadovskyy2016a} that applied the same approach to a `real' pinning landscape by incorporating scanning transmission electron microscopy tomography data of Dy-doped YBa$_2$Cu$_3$O$_{7-\delta}$ films~\cite{Ortalan:2009, Herrera2008}; the results showed almost quantitative agreement of the field- and angular-dependent critical current with experimental transport measurements, see Fig.~\ref{fig:tomogram}. Finally, we discuss applying TDGL calculations to commercial high-temperature superconducting tapes, which typically consist of rare earth (RE) or yttrium barium copper oxide (REBCO) matrices. Specifically, Ref.~[\onlinecite{Sadovskyy2016b}] simulated vortex dynamics in REBCO coated conductors containing self-assembled BaZrO$_3$ nanorods, and reported a good quantitative match to experimental measurements of $J_{c}$ versus the applied magnetic field angle $\theta$. Most notably, the simulations demonstrated the non-additive effect of defects: adding irradiation-induced columnar defects at a 45$^\circ$ angle with the nanorod (c-) axis removes the $J_{c}(\theta=0^\circ)$ peak of the nanorods and generates a peak at $\theta=45^\circ$ instead.
This study then went beyond simply reproducing experimental behavior, and predicted the optimal concentration of BaZrO$_3$ nanorods needed to maximize $J_{c}$, finding a maximum $J_{c}$ of 12-14\% of $J_{d}$ (at specific $\theta$)---far higher than has been experimentally achieved in similar systems. This approach is certainly more efficient than the standard trial-and-error approach of growing and measuring samples with a large variety of defect landscapes. These recent successes in accurately predicting $J_{c}$ in superconductors based on the microstructure highlight how close we are to the ultimate goal of tailoring pinning landscapes for specific applications with well-defined critical current requirements. Constituting the new \textit{critical-current-by-design} paradigm~\cite{ROPP,ted100}, the routine use of TDGL simulations for efficient defect landscape optimization is a transformative opportunity in vortex physics, as is expanding these computational successes to include the use of artificial intelligence algorithms. Furthermore, microscopic and far-from-equilibrium simulations of vortex matter beyond the TDGL approach require significant computational resources and are only now becoming feasible. We will discuss related developments in Sec.~\ref{ssec:AI}. \section{Transformative Opportunities} \subsection{Vortex Creep\label{ssec:vortexcreep}} In this section, we identify major opportunities to accelerate our understanding of thermally-activated vortex hopping (thermal creep) and non-thermal tunneling (quantum creep) between pinning sites. Only limited situations are amenable to an analytic treatment of vortex creep: these include thermal depinning of single vortices and vortex bundles in the regime of weak collective pinning. In the strong pinning regime, e.g., for columnar defects, we must consider complicated excitations that form during the depinning process.
Activation occurs via half-loop formation\cite{PhysRevB.51.6526}, which is depicted in Fig.~\ref{fig:vortexstructures}. During this process, the vortex nucleates outside of its pinned position, and the curved unpinned segment grows over time as a current acts on it, until the entire vortex eventually leaves the pinning site. Because half-loop formation likely occurs in a range of high-current-carrying materials, which may contain amorphous tracks, nanorods, or twin boundaries, numerical treatment of vortex creep within the strong pinning framework is of significant interest. The first task involves studying creep of isolated vortices, pinned by a single strong inclusion or columnar defect. In accordance with analytic predictions, an increase in temperature shifts the characteristic depinning current below $J_{c}$, rounds the $IV$ curves, and affects the excess-current characteristic far beyond $J_{c}$\cite{Buchacek2019a, Buchacek2019b, Buchacek2020-condmat}. The next steps will involve studying multiple vortices, more defects, and mixed defect landscapes, which will increase the complexity of the problem, warranting computational assistance. Recent advances in computational power and high-performance codes will enable tackling these challenges, which involve long simulation times at exponentially slow dynamics. Instead of simulating the thermal relaxation of a metastable configuration in a single `linear' simulation, the same configuration can be simulated in parallel, i.e., experiencing fluctuations along different `world lines'. This accelerates the search for a rare depinning event, after which parallel computations are interrupted and restarted from new depinned configurations. \begin{figure*} \includegraphics[width=1\linewidth]{figures/SvsGi} \caption{Creep at reduced temperature $T/T_c = 1/4$ and a field of $\mu_0H = 1 \textnormal{ T}$ for different superconductors plotted versus $Gi^{1/2}$.
The open symbols indicate materials for which the microstructure has been modified either by irradiation or incorporation of inclusions. The solid grey line represents the limit set by $Gi^{1/2}(T/T_c)$. The result predicts that the creep problem even in yet-to-be-discovered high-$T_c$ superconductors may counteract the benefits of high operating temperatures. Material from S. Eley, et al.\ Nat.\ Mater.\ 16, 409–413 (2017). Copyright 2017, \emph{Nature Publishing Group.} }\label{fig:SvsGi} \end{figure*} In a 2017 paper\cite{Eley2017a}, we found that the minimum achievable thermal creep rate in a material depends on its Ginzburg number $Gi$ as $S \sim Gi^{1/2}(T/T_c)$, shown in Fig.~\ref{fig:SvsGi}. Our result is limited to the Anderson-Kim regime and considers pinning scenarios with analytically determined pinning energies $U_{p}$. It also, somewhat gravely, predicts that there is a limit to how much the creep problem in high-$T_{c}$ superconductors, which tend to have high $Gi$, can be ameliorated, such that we may expect the performance of yet-to-be discovered room temperature superconductors to be irremediably hindered by creep. However, YBCO films containing nanoparticles demonstrate non-monotonic temperature-dependent creep rates $S(T)$, such that $S$ dips to unexpectedly low values at intermediate temperatures outside of the Anderson-Kim regime\cite{Eley2017a}. This dip, thought to be induced by strong pinning from nanoparticles, suggests that collective pinning regimes may hold the key to inducing creep rates slower than our proposed lower limit in the Anderson-Kim regime. A numerical tackling of the vortex creep problem would improve our theoretical understanding of creep and answer a major open question in vortex physics -- what is the slowest achievable creep rate in different superconductors?
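The scaling $S \sim Gi^{1/2}(T/T_c)$ is straightforward to evaluate. In the sketch below, the Ginzburg numbers are rough, order-of-magnitude assumptions for illustration, not measured values:

```python
import math

# Lower bound on the thermal creep rate, S_min ~ Gi^(1/2) * (T/T_c),
# from the Anderson-Kim analysis discussed in the text.
# Ginzburg numbers below are order-of-magnitude assumptions.
Gi = {"Nb-Ti": 1e-8, "Nb3Sn": 1e-6, "MgB2": 1e-5, "YBCO": 1e-2}

t = 0.25                                  # reduced temperature T/T_c
for name, gi in Gi.items():
    print(f"{name:6s} S_min ~ {math.sqrt(gi) * t:.1e}")
```

With these inputs, a YBCO-like $Gi \sim 10^{-2}$ gives $S_{\mathrm{min}} \sim 0.025$ at $T/T_c = 1/4$, consistent with the few-percent creep rates typical of cuprates, while low-$Gi$ materials sit orders of magnitude lower.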
Our finding of the lower limit to the creep rate reduces the guesswork in trial-and-error approaches to optimizing the disorder landscape and improves our ability to select a material for applications requiring slow creep. Yet, ultimately, a material's quantum creep rate sets its minimum achievable creep rate. This is a regime that has received relatively little attention---there have been few theoretical and experimental studies of quantum creep. Theoretical models are limited to tunneling barriers induced by weak collective pinning\cite{Blatter1991, PhysRevB.47.2725} and columnar defects,\cite{PhysRevB.51.1181} though most materials have very complex, mixed pinning landscapes. Most experimental work has focused on cuprates, determining a crossover temperature of $\sim$ 8.5-11 K in YBCO films,\cite{PhysRevB.64.094509, LANDAU2000251, Luo_2002} 1.5-2 K in YBCO crystals,\cite{PhysRevB.59.7222, LANDAU2000251} 5-6 K in Tl$_2$Ba$_2$CaCu$_2$O$_8$ films,\cite{PhysRevB.59.7222, PhysRevB.47.11552} 17 K in TlBa$_2$CaCu$_2$O$_{7-\delta}$,\cite{PhysRevB.64.094509} and 30 K in HgBa$_2$CaCu$_2$O$_{6+\delta}$.\cite{PhysRevB.64.094509} Klein et al.\cite{PhysRevB.89.014514} studied an iron-based superconductor, finding a crossover around 1 K in Fe(Te,Se). No studies have been conducted in materials containing inclusions nor using any systematic tuning of the energy barrier. Furthermore, the crossover between thermal and quantum creep remains unclear. As previously mentioned, the Anderson-Kim model's relevance is limited to low temperatures $k_{\mathrm{B}} T \ll U_{p}$, in which $S$ is expected to increase approximately linearly with temperature. A linear fit to this regime often extrapolates to non-zero $S$ at $T = 0$, suggestive of non-thermal creep. In fact, it is common to perfunctorily attribute this extrapolation to quantum creep without conducting measurements in the quantum creep regime.
However, there are compelling discrepancies between typical experimental results in this context and theory. For example, theory predicts that the tunneling probability should decrease with bundle size, whereas experiments often observe the opposite trend (a positive correlation between low-temperature $S$ and field)\cite{Lykov2013}. Theory also predicts a quadratic, rather than linear, temperature-dependent $S(T \rightarrow 0)$\cite{Lykov2013, PhysRevB.59.7222}. That is, quantum creep may be thermally assisted\cite{Blatter1991}, and not simply present itself as a temperature-independent creep rate at low temperatures. An even more confounding result is that Nicodemi et al.\cite{PhysRevLett.86.4378} predicted non-zero creep rates at $T = 0$ using Monte Carlo simulations based on a purely classical vortex model and reconciled it with non-equilibrium dynamics. It has also been suggested that the overall measured creep rate is simply the sum of the thermal and quantum components.\cite{PhysRevB.64.094509} However, in some iron-based superconductors,\cite{Haberkorn2012a, Haberkorn2014} $S$ is fairly insensitive to $T$ or even decreases with increasing $T$ up to fairly high fractions of $T_{c}$. Hence, either quantum creep is a significant component at surprisingly high temperatures or the creep rate dramatically decreases at temperatures below the measurement base temperature, motivating the need for lower temperature creep measurements. Superconductors with high normal-state resistivity $\rho_n$ and low $\xi$, such as high-$T_{c}$ cuprates, are the best candidates for having measurable quantum creep rates. This is because the effective quantum creep rate is predicted to be \begin{align} \!\!\! S_q = \begin{cases} -(e^2 \rho_n / \hbar \xi) \sqrt{J_{c} / J_{d}},& \!\!\!\!\text{if } L_c<a_0 \\ -(e^2 \rho_n / \hbar \lambda)(a_0/\lambda)^4(a_0/\xi)^9(J_{c}/J_{d})^{9/2}, & \!\!\!\!\text{if } L_c > a_0 \end{cases}\!\!
\end{align} where $L_c$ is the length of the vortex segment (or bundle) that tunnels~\cite{Blatter1994}. Determining the dependence of the quantum creep rate on material parameters in superconductors would fill a major gap in our understanding of vortex physics. This would significantly contribute towards a comprehensive model of vortex dynamics, and reveal whether creep may induce measurable effects in quantum circuits, which typically operate at millikelvin temperatures. \subsection{Pinning at the extreme: Can the critical current reach the depairing current?\label{ssec:Jd}} Cooper pairs constituting the dissipationless current in superconductors will dissociate when their kinetic energy surpasses their binding energy. Theoretically, this could be achieved by a sufficiently high current, termed the \textit{depairing current}, $J_{d}$. Consequently, $J_{d}$ is recognized as the theoretical maximum achievable $J_{c}$, such that $J_{c}/J_{d}$ is often equated with the efficiency $\eta$ of the vortex pinning landscape, though this can be confusing because even a perfect defect would not produce $J_{c}=J_{d}$.\cite{Wimbush2015} The most successful efforts to carefully tune the defect landscape obtain $J_{c}/J_{d}$ of only 20-30\%.\cite{Civale10201, Selvamanickam_2015} As exemplified by a series of samples we have measured, see Fig.~\ref{fig:JcJd}, most samples produce $J_{c} / J_{d} < 5\%$, whereas $J_{c} / J_{d}$ is routinely higher for coated conductors (REBCO films). Though this at first appears to be a far cry from the ultimate goal, some surmise that this is near the maximum that can be achieved by immobilizing vortices by means of \textit{core pinning}, which merely refers to a vortex preferentially sitting in potential wells defined by a defect to minimize the energy of its core.
Wimbush et al.\ \cite{Wimbush2015} present a compelling argument that core pinning can obtain a maximum $J_{c}/J_{d}$ of only $30 \%$: a current equivalent to $J_{d}$ would produce a Lorentz force $f_d = J_{d} \Phi_0 = 4 B_{c} \Phi_0 / 3 \sqrt{6}\mu_0 \lambda$, with $B_{c} =\Phi_0/2\sqrt{2} \pi \lambda \xi$ the thermodynamic critical field. At the same time, the condensation energy $\varepsilon_{sc}$ produces a characteristic pinning force $f_p^{core} \sim \varepsilon_{sc}/\xi \approx \pi \xi^2 B_{c}^2 / 2\mu_0$, such that the ratio of the maximal core pinning force to the depairing Lorentz force is \begin{align} f_p^{core} / f_d &= 3 \sqrt{3} / 16 \approx 32 \%. \end{align} Similarly, Matsushita \cite{matsushita2007} performed a more precise calculation by considering the effects of the geometry of the flux line, and found that $f_p^{core}/f_d \approx 28 \%$. Hence, decades of work in designing the defect landscape to pin vortex cores may have nearly accomplished the maximum efficiency achievable by means of core pinning. \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/JcJd.pdf} \caption{\label{fig:JcJd} Critical current $J_{c}$ normalized to the depairing current for various superconductors at $T=\SI{5}{\kelvin}$ and $\mu_0 H = \SI{0.3}{\tesla}$. The data includes Dy$_2$O$_3$-doped YBa$_2$Cu$_3$O$_{7-\delta}$ commercial coated conductors and Ba$M$O$_3$-doped Y$_{0.77}$Gd$_{0.33}$Ba$_2$Cu$_3$O$_{7-\delta}$ films (where $M =$ Sn, Zr, or Hf), all grown via metal organic deposition.} \end{figure} If the ultimate goal of $J_{c} = J_{d}$ cannot be obtained by core pinning alone, are there other mechanisms to immobilize vortices that could produce $J_{c}/J_{d} > 30 \%$? Magnetic interactions between vortices themselves or between vortices and magnetic inclusions can also restrict the motion of a vortex---referred to as \textit{magnetic pinning}.
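The core-pinning force ratio above can be cross-checked numerically. The sketch below uses illustrative, roughly YBCO-like material parameters, though the ratio itself is independent of them:

```python
import math

# Check f_p^core / f_d = 3*sqrt(3)/16 ~ 32% using the expressions in the text.
# Material parameters are illustrative (roughly YBCO-like); the final ratio
# does not depend on them.
mu0  = 4e-7 * math.pi          # vacuum permeability, T m / A
phi0 = 2.07e-15                # flux quantum, Wb
xi, lam = 1.5e-9, 150e-9       # coherence length, penetration depth (m)

Bc     = phi0 / (2 * math.sqrt(2) * math.pi * lam * xi)   # thermodynamic field
f_d    = 4 * Bc * phi0 / (3 * math.sqrt(6) * mu0 * lam)   # depairing Lorentz force
eps_sc = math.pi * xi**2 * Bc**2 / (2 * mu0)              # condensation energy / length
f_core = eps_sc / xi                                      # maximal core pinning force

print(f_core / f_d, 3 * math.sqrt(3) / 16)                # both ~0.325
```

Substituting $B_c$ into the ratio cancels $\xi$, $\lambda$, and $\Phi_0$, which is why the 32\% bound is universal for core pinning.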
Herein lies a transformative opportunity to make large strides towards approaching $J_{c} = J_{d}$. Magnetic pinning alone or in combination with core pinning may produce unprecedentedly high values of $J_{c}$, though this mechanism has received considerably less attention than core pinning because it is quite complicated to actualize. Vortices at high density tend to arrange themselves into a hexagonal lattice, and a vortex pinned to a defect via core pinning may restrict the motion of its neighbors, subsequently affecting its neighbors' neighbors due to magnetic vortex-vortex interactions, which occur over a length scale of $\lambda$. A magnetic inclusion provides another opportunity for exerting magnetic and core pinning on a vortex. Again, following the arguments of Wimbush\cite{Wimbush2015}, we can compare the pinning force induced by core pinning to that of magnetic pinning. The magnetic Zeeman energy $\varepsilon_{mag} = \frac{1}{2} \int_{A} M \cdot B \,dA$ produced by a strong ferromagnet is much greater than the condensation energy and may be several orders of magnitude greater than the core pinning energy. However, it is unclear whether the resulting pinning force is greater, because it acts over the longer length scale $\lambda$ rather than $\xi$, i.e., $f_p^{mag} \sim \varepsilon_{mag} / \lambda$, such that \begin{equation} f_p^{mag} / f_p^{core} \approx 2 \sqrt{2} (\mu_0 M / B_{c}). \end{equation} Hence, the advantage depends on the ratio of the magnetization of the pinning site to the thermodynamic critical field. Independent of whether $f_p^{mag}$ surpasses $f_p^{core}$, concomitant mechanisms would produce an additive effect that may surpass current record values of $J_{c}$. Yet, ferromagnets in proximity to superconductors can locally degrade superconductivity by inducing pair breaking, such that it is challenging to incorporate ferromagnetic vortex pinning centers without compromising the superconducting state.
This complication, combined with the usual materials science constraints of incorporating inclusions without inducing too much strain on the surrounding superconducting matrix, makes this challenge all but insurmountable. In addition to magnetic pinning, exploiting geometric pinning provides another potentially transformative opportunity to dramatically boost $J_{c}$ in superconductors. In clean, narrow (sub-micron) superconducting strips, geometric restrictions can induce self-arrest of vortices, recovering the dissipation-free state at high fields and temperatures due to surface/edge (Bean-Livingston barrier) \cite{Bean1964} or geometric \cite{Zeldov1994, Zeldov1994b, Brandt1999, Willa2014} pinning. Figure~\ref{fig:doublestrip} depicts an example of geometric vortex pinning around two superconducting strips. Moreover, at a fixed applied current, the magnetoresistance (MR) shows oscillations with increasing magnetic field, indicating the penetration of complete vortex rows into the system~\cite{Papari2016}. Therefore, these MR oscillations are a way to determine the vortex structure in nanoscale superconductors. At very high fields, the vortex lattice in these strips starts to melt. Combining magnetoresistance measurements and numerical simulations can then relate those MR oscillations to the penetration of vortex rows with intermediate geometrical pinning, where the vortex lattice remains unchanged, and uncover the details of geometrical melting. This opens the possibility of controlling vortices in geometrically restricted nanodevices and represents a novel technique of `geometrical spectroscopy': combined use of MR measurements and large-scale simulations would reveal detailed information about the structure of the vortex system.
A similar re-entrant behavior was observed in superconducting strips in a parallel field configuration: here, high fields lead to `vortex crowding', in which a higher density of vortex lines starts to straighten, thereby reducing the Lorentz force on the vortices. The result is an intermediate dissipationless state~\cite{parallel2017}. The situation becomes more complex when one considers nanosized superconducting strips and bridges, in which vortex pinning is dictated by an intricate interplay of surface and bulk pinning. As described above, in the case of a very narrow bridge, $J_{c}$ is mostly defined by its surface barrier, whereas in the opposite case of very wide strips, it is dominated by its bulk pinning properties. However, understanding the intermediate regime, where the critical current is determined both by bulk pinning and by the Bean-Livingston barrier at the edge of a strip, is of great interest for small superconducting structures and wires. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{figures/figure-double-strip} \caption{\label{fig:doublestrip} Field profile in a double-strip geometry before penetration of vortices \cite{Willa2014}. The arrangement and geometry (e.g.\ width $w$ and thickness $d$) of the specimen significantly influence the relative importance of Bean-Livingston, geometric, and bulk pinning. Material from R.\ Willa, \emph{ETH Zurich Research Collection}, see Ref.~[\onlinecite{Willa2016-thesis}]. } \end{figure} Recent studies~\cite{kimmel2019} revealed that while bulk defects arrest vortex motion away from the edges, defects in their close vicinity promote vortex penetration, thus suppressing the critical current. This phenomenon is also quite important in the study of superconducting radio-frequency cavities. Furthermore, the role of defects near the penetrating edge is asymmetric compared to the exit edge of a superconducting strip.
This complex interplay of bulk and edge pinning opens new opportunities for tailoring pinning structures for a given application. In the simple case of a straight strip with similar-type spherical defects, an optimized defect distribution can yield a critical current density more than 30\% higher than that of a homogeneously disordered superconducting film. The need for high-current, low-cost superconductors continues to grow with new applications in lightweight motors and generators, as well as strong magnets for high-energy accelerators, NMR machines, or even Tokamak fusion reactors. Many of these applications require large magnetic fields and therefore large critical currents, which is both a fundamental research and engineering challenge as it requires reliable fabrication of uniform, km-long high-performance superconducting cables having an optimal pinning microstructure. Consequently, there are two main aspects that must be addressed for large-field applications: (i) determining the best possible pinning landscape and geometry for a targeted application and (ii) controlling fabrication of long superconducting cables to incorporate an optimized pinning landscape with the highest possible uniformity. Both of these aspects are part of the critical-current-by-design paradigm~\cite{ROPP}. We will describe these in a more general context in Sec.~\ref{ssec:AI}. Taking advantage of the modern computational approaches described there in combination with experiments opens novel pathways to new materials for large-field applications, in particular the use of high-$T_{c}$ superconducting materials instead of the more traditional choice of elemental Nb or Nb-based compounds. \subsection{Superconducting RF cavities and Quantum Circuits} \label{sec:RFcavities} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/fig_srf.pdf} \caption{\label{fig:srf} Surface disorder and multilayers in SRF cavities.
\textbf{(a)} Sketch showing how vortices (red) parallel to the surface of a cavity wall penetrate the wall (outside the superconductor, the red lines illustrate field lines), \textbf{(b)} intercalating insulating layers (SI[SI]S) will cause vortex pancakes to form and might limit the penetration depth of vortices~\cite{Gurevich2006,Gurevich2017}. \textbf{(c)} Simulation snapshot of surface vortex penetration into a type-II superconductor having spherical defects (yellow) near the surface in an AC magnetic field parallel to the surface. Vortex lines are shown in red. The planar projection shows the superconducting order parameter amplitude.} \end{figure*} \subsubsection{Superconducting RF cavities} Most studies of vortex dynamics in superconductors are conducted using DC currents and static magnetic fields. Yet, the need to understand vortex dynamics under AC magnetic fields or AC currents is rapidly increasing, as these are the operating conditions e.g. for superconducting radio frequency (SRF) cavities and quantum circuits for sensing and computing. Superconductors are desirable for RF devices because the minimal resistance enables very high quality factors $Q$, a metric that indicates high energy storage with low losses and narrow bandwidths. SRF cavities are used in current and next-generation designs for particle accelerators used in, e.g., high energy physics. In addition to $Q$, the maximum accelerating field, $E_a$, is another important metric for SRF cavity performance. The goal is often to maximize $Q$ at low drive powers, which is essential for large accelerating fields and reduced demands on cryogenic systems that are responsible for cooling the cavities. Similarly, higher $E_a$ is desirable, as it indicates larger reachable particle energies. Elemental niobium (Nb) and Nb-based compounds are the material of choice for all current accelerator applications. 
Advances in the fabrication of Nb-cavities have pushed their performance to extraordinary levels~\cite{rfbook,SRF2017}, with $Q$-values approaching $2\times 10^{11}$ and $E_a$ in excess of $45$ MV/m in Nb~\cite{qf2007}, and $E_a\sim$ 17 MV/m for Nb$_3$Sn resonators. In both cases, the magnetic field reached is above the lower critical field of the material, but below the theoretically predicted superheating field, at which vortices would spontaneously penetrate even a perfect cavity wall, shown in Fig.~\ref{fig:srf}a. Further increases in $E_a$ require a conceptual breakthrough in our understanding of Nb-cavity performance limits or new constituent materials. New material candidates being considered include Nb$_3$Sn, NbTiN, MgB$_2$, Fe-based superconductors, and engineered multilayer or stratified structures (see Fig.~\ref{fig:srf}b). SRF cavities operate at temperatures well below $T_{c}$ and at high enough frequencies to drive the superconductor into a metastable state, near breakdown. The resulting RF period approaches intrinsic time scales, such as vortex nucleation and quasi-particle relaxation times. While the experimental progress in improving the performance and quality factor of SRF cavities has been impressive~\cite{SRF2017}, e.g., the counter-intuitive increase of the quality factor with nitrogen doping, it is mostly based on trial-and-error approaches. A deep fundamental understanding is important to make more systematic progress, requiring new theoretical and computational studies. Because the cavities operate out-of-equilibrium, a phenomenological description based on TDGL theory can only serve as a rough, qualitative guide. Developing a fundamental theory describing the nonlinear and non-equilibrium current response of SRF cavities requires a microscopic description based on quantum transport equations for non-equilibrium superconductors.
A microscopic description, however, is challenging because the RF currents under strong drive conditions (i.e., high field frequencies and amplitudes near the breakdown/superheating field) affect both the superconducting order parameter and the kinetics of quasiparticles, all of which have to be treated self-consistently. This endeavor requires development of numerical approaches to solve the quantum transport equations, based on the Keldysh and Eilenberger formulations of non-equilibrium superconductivity in the strong-coupling limit. The Keldysh-Eilenberger quantum transport equations are, in general, non-local in space-time, non-linear, and in many physical situations involve multiple length and time scales. Solving these equations requires considerable computational resources, which are now becoming available with exa-scale computing facilities. Herein lies a transformative opportunity to dramatically boost the performance of SRF cavities. Namely, researchers are now equipped to develop microscopic theoretical models, and incorporate them into computational codes, to reveal the origin and mechanisms that limit the accelerating field of SRF cavities. The acquired knowledge will then guide materials optimization to maximize the critical currents, superheating fields, and quench fields. Reaching the theoretical limits for these parameters necessitates suppressing vortex nucleation. At high RF magnetic field amplitudes, screening currents near the vacuum-superconductor interface can nucleate Abrikosov vortices that can quench the cavity, see Fig.~\ref{fig:srf}c. This \emph{vortex breakdown} depends on (i) the amplitude and frequency of the surface field, (ii) the cavity's surface roughness, and (iii) the type, distribution, and size of defects near the interface. 
The impact of near-surface defects on vortices is twofold: they may reduce the potential barrier for vortex nucleation~\cite{kimmel2019}, but they may also pin nascent vortices generated by nucleation at the surface, preventing a vortex avalanche and substantial dissipation. Given this, there are various possible optimal microstructures for the SRF constituent materials: (i) a ``clean'' superconductor with a maximum surface barrier, (ii) a superconductor with a thin (few $\xi$) defect-free surface layer and nanoscale defects in its bulk, or (iii) some special spatial gradient in the size and/or density of defects. Large-scale TDGL simulations can be applied to study the conditions under which vortex avalanches form, devise mechanisms that are effective at mitigating these avalanches and, more generally, gain insight into flux penetration under RF field conditions. Furthermore, by coupling the TDGL and heat transport equations, this method can be used to study \emph{hot spots}, providing insight into how to avoid the formation of these hot spots in SRF cavity walls. As previously mentioned, though TDGL cannot produce quantitative results, it can serve as a useful guide to experiments and also provide insight into simulations based on microscopic transport equations. Lastly, though discussed in the context of SRF cavities, these new computational methods can be applied to superconducting cables for AC power applications. \subsubsection{Vortices in Quantum Circuits\label{ssec:quantumcircuits}} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{figures/Resonators2.pdf} \caption{\label{fig:Resonator} \textbf{(a)} An optical image showing multiple $\lambda/4$ resonators multiplexed to a common feed line and surrounded by a ground plane containing holes to pin vortices and suppress vortex formation, \textbf{(b, c)} Scanning electron micrographs of a superconducting CPW resonator without \textbf{(b)} and with \textbf{(c)} holes.
\textbf{(d)} Quality factor $Q_i$ versus $B_{\perp}$ for varying hole density $\rho_h$. The field at which the vortex density matches the hole density (each hole is filled with one vortex) is plotted with a color-matched vertical line. Above this threshold field, additional vortices are not pinned by the holes but instead only weakly pinned by film defects and interstitial pinning effects. \textbf{(e)} $\Delta f_r/f_r$ versus $B_{\perp}$ for varying $\rho_h$. Reprinted with permission from Ref.~[\onlinecite{Kroll2019}]. Copyright 2019, \emph{American Physical Society}. } \end{figure*} \paragraph{Energy loss due to vortices.} Similar to SRF cavities, superconducting circuits for quantum information also operate at RF/microwave frequencies and are affected by vortices. Specifically, along with parasitic two-level fluctuators and quasiparticles, vortices are a considerable source of energy loss in superconducting quantum circuits\cite{Muller2019, Oliver2013, Martinis2009}. These energy loss mechanisms create a noisy environment with which the qubit interacts stochastically and irreversibly, rather than deterministically. Consequently, the evolution of the quantum state is unpredictable, increasingly deviating over time from predictions until the qubit state is eventually lost. This is called decoherence; it limits the time $T_1$ over which information is retained in qubits to the microsecond range, and there is typically a large spread in the $T_1$ times of the individual qubits in multi-qubit systems\cite{Finke2019}. Vortices appear in superconducting quantum circuits due to stray magnetic fields, self-fields generated by bias currents, and pulsed control fields. In addition to limiting $T_1$ in qubits, thermally-activated vortex motion can cause significant noise in superconducting circuits and reduce the quality factor $Q$ of superconducting microwave resonators\cite{Song2009, Song2009a, Kroll2019}.
To mitigate this, techniques have been developed to either prevent vortex formation or trap vortices in regions outside the path of operating currents. Shielding circuits from ambient magnetic fields and narrowing the linewidths constituting the device\cite{DVH2004, Samkharadze2016} significantly reduce the vortex population. For example, for a line of width $w$, flux will not enter until the field surpasses $\Phi_0/w^2$. Because of this, flux qubits typically contain linewidths of $\SI{1}{\micro\meter}$ and therefore exclude vortex formation up to a threshold magnetic field of roughly \SI{2}{\milli\tesla}, which is 20 times larger than the Earth's magnetic field\cite{DVH2004}. Though shielding has enabled remarkable headway in improving the stability of superconducting qubits for computing applications, it is not a complete solution. A reasonable amount of shielding can only suppress the field by a limited amount, which may be insufficient if devices must operate in high-field environments. Moreover, shielding may render devices useless in quantum sensing applications, in which the purpose of the device is to sense the external environment. This has sparked research on further modifications to the device design and on understanding the effects of a magnetic field on different architectures of quantum circuits, including transmon qubits\cite{Schneider2019} and superconducting resonators\cite{Bothner2017, Kroll2019}, which are integral components for readout. In addition to shielding, another common remedy for the vortex problem in superconducting circuits involves micropatterning arrays of holes in the ground plane to serve as vortex traps and reduce the prevalence of vortex formation within the perforated area\cite{Song2009a, Bothner2011, Bothner2012, Chiaro2016}, see Fig.~\ref{fig:Resonator}.
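The linewidth threshold quoted above is easy to verify numerically. The sketch below evaluates $B_{\rm th}\approx\Phi_0/w^2$ for a \SI{1}{\micro\meter} line; as in the estimate above, prefactors of order unity are omitted, and the function name is illustrative:

```python
# Threshold field for flux entry into a thin superconducting line,
# B_th ~ Phi_0 / w^2 (order-unity prefactors omitted).
PHI_0 = 2.067833848e-15  # magnetic flux quantum, Wb

def threshold_field(w_meters: float) -> float:
    """Return the approximate flux-entry field (tesla) for linewidth w."""
    return PHI_0 / w_meters**2

b_th = threshold_field(1e-6)  # 1 micrometer linewidth
print(f"B_th ~ {b_th * 1e3:.2f} mT")  # ~2 mT, as quoted in the text
```

Note the quadratic dependence: halving the linewidth quadruples the field a device can tolerate before vortices enter.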
For example, Bothner et al.\ found that $Q(B=\SI{4.5}{\milli\tesla})$ is a factor of 2.5 higher in Nb resonators containing flux-trapping holes in the ground plane\cite{Bothner2011} than in resonators without holes. However, Chiaro et al.\cite{Chiaro2016} showed that, without careful design, these features can increase the density of, and consequently the losses from, parasitic two-level fluctuators, which are thought to form primarily at surfaces and interfaces. Moreover, coplanar waveguide resonators were recently found to be more robust to external magnetic fields when the superconducting ground plane area is reduced, which lowers the effective magnetic field inside the cavity, and when the resonator is coupled inductively instead of capacitively to the microwave feedline, which shields the feedline\cite{Bothner2017}. The methods we have discussed here have engendered tremendous advances in suppressing the vortex problem in superconducting quantum circuits; however, the details, and likewise the optimal mitigation strategies, are material-dependent. For example, Song et al.\ compared the microwave response of vortices in superconducting Re (rhenium) and Al coplanar waveguide resonators with a lattice of flux-trapping holes in the ground plane. Generally, in both systems, vortices shift the resonance frequency $f_0$, broaden the resonance dip $|S_{21}|(f)$, and reduce the quality factor $Q$. However, vortices in the Al resonators induce greater loss and are more sensitive to flux creep effects than in the Re resonators. The Al resonator experienced a far more substantial fractional frequency shift $df/f_0$ with increasing frequency than the Re resonator. Furthermore, while the loss $1/Q$ due to vortices increased with frequency for Re, it decreased for Al.
Most research on the microwave response of vortices in quantum circuits is limited to Al\cite{Song2009, Song2009a, Chiaro2016, PhysRevLett.113.117002, Wang2014}, Nb\cite{Bothner2012, Bothner2011, Stan2004, Kwon2018, Golosovsky1995}, NbTiN\cite{ Samkharadze2016, Kroll2019}, and Re\cite{Song2009a}. Whereas Al and Nb are used in commercial quantum computers, superconducting nitrides (TiN, NbN, NbTiN)\cite{Sage2011, Ohya2013, Vissers2012a, Leduc2013, Sandberg2012, Chang2013, Kerman2006, Barends2010a, Barends2010b, Bruno2015} and Re have garnered substantial attention because they may suffer less from parasitic two-level fluctuators, which are particularly problematic in oxides and at interfaces\cite{Muller2019}. Nitrides and Re are known to develop thinner oxide layers than Al and Nb, and can be grown epitaxially on common substrates\cite{Dumur2016, WangMartinis2009, Vissers2010}. To develop a generic understanding of how to design quantum circuits that are resilient to ambient magnetic fields and to control vortices in circuits made of next-generation materials, we must study circuits consisting of broader ranges of materials, perform further studies on nitride-based circuits, investigate different designs for flux trapping, and conduct imaging studies that can observe rather than infer the efficacy of vortex pinning sites. There have been a few studies that imaged vortices in superconducting strips, which provided guidance on appropriate line widths to preclude vortex formation\cite{Stan2004, Kuit2008}. To build upon this, imaging studies (using e.g.\ a scanning SQUID or magnetic force microscope) of devices would inform on the efficacy of flux-trapping sites, reveal locations in which vortices form, and track vortex motion. \paragraph{Vortices in topological quantum computing schemes.} Up until now, we have discussed vortices exclusively as a nuisance, which is indeed the case for a broad range of applications.
A notable exception lies in the burgeoning field of topological quantum computing, in which vortices serve as hosts for Majorana modes\cite{Liu2019}. Qubits encoded using Majorana modes are predicted to be relatively robust to noise and thus to have long coherence times. One way to realize this is to couple a superconductor to a topological insulator and induce vortices in the superconductor; Majorana states are then predicted to nucleate in the vortex cores. (Also note that Majorana modes have been theorized to exist in other systems\cite{Grosfeld2011, Nenoff2019, You2014, DasSarma2012, Bjorn2012, Liang2016, Alicea_2012}.) Initially elusive, signatures of Majorana vortex modes have recently been observed in a variety of systems, including the iron-based superconductor $\mathrm{Fe}\mathrm{Te}_{x}\mathrm{Se}_{1-x}$ \cite{Chiueaay2020, Ghaemi2020}, EuS islands overlying gold nanowires \cite{Manna2020}, a superconducting Al layer encasing an InAs nanowire\cite{Vaitiekenaseaav2020}, and Bi$_2$Te$_3$/NbSe$_2$ heterostructures\cite{JFJia2016}. To exploit these modes for computing, we must be able to control their vortex hosts. Consequently, vortex pinning research will be beneficial to vortex-based topological quantum computing applications. \subsection{Vortex matter genome using artificial intelligence: Critical-current-by-design}\label{ssec:AI} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{figures/fig_AIx.pdf} \caption{\label{fig:AI} Critical-current-by-design using: \textbf{(a)} Genetic algorithms to optimize critical currents. Starting with a superconductor having intrinsic defects, genetic algorithms can be used to optimize the defect structure by mutation of defects and targeted selection of landscapes with larger critical currents. A mutation of a defect (or several defects at once) can be done by, e.g., translation, resizing, deletion, or splitting, as sketched in the defect sequence on the right.
Overall, this procedure creates ``generations'' of mutated defect configurations, and only the best is selected and chosen to be the seed for the next generation, as shown in the partial tree on the left (circles/dots represent configurations, where the large numbered one is the best). Using neural networks and machine learning (ML) to predict the best mutations could further improve the targeted selection approach~\cite{Sadovskyy2019}. \textbf{(b)} ML/Artificial intelligence (AI) to improve and tailor defect landscapes in superconductors. \textbf{\protect\circled{1}} illustrates how AI models can be used to predict pinning landscapes from synthesis parameters and, vice versa, to predict synthesis parameters like precursor concentrations, pressures, and temperatures in, e.g., vapor deposition methods, for a targeted pinning landscape. The models need to be trained by experimental or simulation data sets. \textbf{\protect\circled{2}} similarly shows how to directly predict critical current dependencies, like field orientation dependencies, from pinning topologies and vice versa. Again, the underlying model is trained by experimental and simulation data.} \end{figure*} Over the years, research in superconductivity and vortex pinning has produced large amounts of experimental and simulation data on microstructures, synthesis, and critical current behavior. More recently, artificial intelligence (AI) and machine learning (ML) approaches have enabled revolutionary advances in the fields of image and speech recognition as well as automatic translation, and are now finding an increasing number of applications in scientific research areas that deal with massive data sets, like particle physics, structural biology, astronomy, and spectroscopy. Combining these data with sophisticated ML algorithms and AI models will enable novel approaches to predicting pinning landscapes in superconductors for the future design of materials with tailored properties.
This has become a promising approach within the critical-current-by-design paradigm, which refers to designing superconductors with desired properties using sophisticated numerical methods, replacing traditional trial-and-error approaches. These properties include maximizing critical currents, achieving robust critical currents with respect to variations of the pinning landscape (which is important for large-scale commercial applications), or attaining uniform critical currents with respect to the magnetic field orientation. The next step towards advancing the use of AI/ML approaches for critical-current-by-design may be to build upon the genetic algorithms implemented in Ref.~[\onlinecite{Sadovskyy2019}] to optimize pinning landscapes for maximum $J_{c}$. This approach utilizes the idea of targeted selection inspired by biological natural selection. In contrast with conventional optimization techniques, such as coordinate descent, in which one varies only a few parameters characterizing the entire sample, targeted evolution allows variations in each defect individually without any \textit{a-priori} assumptions about the defects' configuration. This essentially means that one solves an optimization problem with, theoretically, infinitely many degrees of freedom. Ref.~[\onlinecite{Sadovskyy2019}] demonstrated the feasibility of this approach for clean samples as well as for ones with preexisting defects, e.g., as found in commercial coated conductors. The latter, therefore, provides a post-synthesis optimization step for existing state-of-the-art wires and a promising path toward the design of tailored functional materials. However, the mutations of the defects required for the genetic algorithm [see Fig.~\ref{fig:AI}(a)] were chosen randomly. These mutations generate ``generations'' of pinning landscapes, of which the best is chosen by targeted selection and then used as a seed for the next generation.
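To make the targeted-selection loop concrete, the sketch below evolves a toy defect landscape. In the actual study of Ref.~[\onlinecite{Sadovskyy2019}], the fitness of each landscape is the critical current obtained from large-scale TDGL simulations; here a stand-in analytic fitness function is used, and all names, parameters, and the fitness model are illustrative assumptions rather than the published implementation:

```python
import random

random.seed(1)

def toy_jc(defects):
    """Stand-in for a J_c evaluation (a real study would run a TDGL
    simulation here): larger defects pin more flux, but oversized and
    overlapping defects suppress the superconducting matrix."""
    jc = sum(r - 3.0 * r**2 for _, _, r in defects)
    for i, (x1, y1, r1) in enumerate(defects):
        for x2, y2, r2 in defects[i + 1:]:
            if (x1 - x2)**2 + (y1 - y2)**2 < (r1 + r2)**2:
                jc -= 0.05  # overlap penalty
    return jc

def mutate(defects):
    """One random mutation: translate, resize, delete, or add a defect."""
    new = [list(d) for d in defects]
    op = random.choice(["move", "resize", "delete", "add"])
    if op == "move" and new:
        d = random.choice(new)
        d[0] = min(1.0, max(0.0, d[0] + random.uniform(-0.05, 0.05)))
        d[1] = min(1.0, max(0.0, d[1] + random.uniform(-0.05, 0.05)))
    elif op == "resize" and new:
        d = random.choice(new)
        d[2] = max(0.01, d[2] * random.uniform(0.8, 1.25))
    elif op == "delete" and len(new) > 1:
        new.pop(random.randrange(len(new)))
    else:
        new.append([random.random(), random.random(), 0.05])
    return [tuple(d) for d in new]

def evolve(seed_landscape, generations=150, mutants=20):
    """Targeted selection: each generation, the fittest mutant (or the
    unchanged parent, if no mutant improves) seeds the next generation."""
    best, best_jc = seed_landscape, toy_jc(seed_landscape)
    for _ in range(generations):
        cand = max((mutate(best) for _ in range(mutants)), key=toy_jc)
        cand_jc = toy_jc(cand)
        if cand_jc > best_jc:
            best, best_jc = cand, cand_jc
    return best, best_jc

# each defect: (x, y, radius) in a unit square
initial = [(random.random(), random.random(), 0.05) for _ in range(5)]
optimized, jc_opt = evolve(initial)
print(f"toy J_c: {toy_jc(initial):.3f} -> {jc_opt:.3f}")
```

Because every defect can mutate individually, the search space has no fixed parametrization, which is the essential difference from coordinate-descent-style optimization over a few global sample parameters.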
Using a simple machine learning approach could further enhance the convergence of this method by performing only mutations that have a higher probability of enhancing the critical current. Besides superconductors, this methodology can be used to improve the intrinsic properties of other materials where defects or the topological structure plays an important role, such as magnets or nanostructured thermoelectrics. Going beyond these ML-improved simulations, one can build quantitative data-driven AI approaches for superconductors that will enable, e.g., predicting the critical current phase diagram and extracting the defect morphology responsible for its performance directly from the existing accumulated experimental and simulation data without actual dynamics simulations. Here, we will discuss two potentially transformative opportunities, summarized in Fig.~\ref{fig:AI}(b). The first application is motivated by the need for reliably producing uniform superconductors on macroscopic commercial scales. This requires a deep understanding of material synthesis processes, e.g., for self-assembled pinning structures in a superconducting matrix (see Fig.~\ref{fig:AI}(b), \textbf{\circled{1}}). Materials at the forefront of this quest are REBCO films with self-assembled oxide inclusions in the form of nanorods and nanoparticles~\cite{Obradors2014, ROPP}. For example, BaZrO$_3$ (BZO) nanorods that nucleate and grow during metal organic chemical vapor deposition (MOCVD) have proven particularly effective for pinning vortices~\cite{majkic2017}. The major difficulties in achieving consistent and uniform critical currents in REBCO tapes containing BZO nanorods are the interplay of the many parameters controlling the deposition process (temperature of the substrate and of the precursor gases, deposition rate, precursor composition, etc.) and the strong sensitivity of the microstructure to small variations in these parameters.
Even for the same nominal level of Zr additives, significant variations in nanorod diameter, size distribution, spacing distribution, and angular splay have been observed. The physical factors controlling these variations remain poorly understood. For example, the nanorods’ diameter may be a mostly equilibrium property resulting from the interplay of strain and surface energies, or it may be caused by kinetic effects controlled by the surface diffusion of adatoms and the deposition rate. These complexities have precluded the development of predictive models. However, making use of the accumulated experimental data sets and possibly synthesis/kinetic growth simulation data (Monte Carlo or molecular dynamics simulations, which are also still in an exploratory phase) allows building ML/AI models to predict pinning landscapes for given synthesis parameters, as described above, or, more relevant for commercial applications, to predict synthesis parameters for a desired, uniform pinning landscape. To constitute a complete \textit{vortex-pinning genome}, a second notable milestone is using AI to predict $J_{c}$ for a given pinning landscape based solely on data recognition (disregarding TDGL simulations) and, conversely, predicting the necessary pinning landscape to produce a desired $J_{c}$. In fact, the latter cannot be achieved by direct simulations. Typical data sets, both experimental and simulation-based, contain information on defect structures, critical currents, and other transport characteristics for a wide range of magnetic fields and temperatures. Creating an organized database of this information would enable (i) quickly accessible critical current values for a wide range of conditions, (ii) an effective mapping of simulation parameters onto experimental measurements, and (iii) using the data as training sets for AI-driven predictions of defect structures for desired applications.
Experimentally, microstructures are routinely probed by transmission electron microscopy (TEM) and, less directly, by x-ray diffraction (XRD). In contrast to simulation data, which contain all information about the pinning landscape, the extracted information is usually rather limited, since TEM only allows imaging of thin slices of the material and only detects relatively large defects. A full 3D tomography of defect landscapes [cf. Fig.~\ref{fig:tomogram}(a)] is very time consuming and expensive, and therefore typically infeasible at present. The resulting AI/ML models will also allow for a cross-validation of the simulation-based data with available experimental data on materials properties in superconductors with different defect microstructures. Overall, this AI/ML approach will directly reduce the cost and development time of commercial superconductors and, in particular, accelerate their design for targeted applications. To estimate the benefit of such an approach, one can consider, for example, a pinning landscape defined by 9 parameters. Using traditional interpolation in this 9-dimensional parameter space, one would need a certain number of data points per parameter. For the modest case of 15 data points per direction, one would need to simulate (measurements are infeasible) $15^9\approx 4\cdot 10^{10}$ pinning landscapes, which -- assuming 15 minutes per simulation on a single GPU -- results in a total simulation time of a million GPU years. This simulation time is beyond current capabilities, even on the largest supercomputers. However, surrogate ML models can reduce this to approximately $10^4$ simulations, while maintaining the same resulting accuracy (see, e.g., Ref.~[\onlinecite{crombecq2011}]). In this section, we mentioned the complications associated with 3D tomographic imaging of a superconductor's microstructure to supply complete information for simulations.
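The brute-force cost quoted above can be reproduced in a few lines (assuming, as in the text, 15 grid points per parameter and 15 GPU-minutes per simulation):

```python
# Brute-force cost of mapping a 9-parameter pinning landscape on a
# 15-point grid per parameter, at 15 GPU-minutes per TDGL simulation.
n_params = 9
points_per_param = 15
minutes_per_sim = 15

n_sims = points_per_param ** n_params        # ~3.8e10 landscapes
gpu_minutes = n_sims * minutes_per_sim
gpu_years = gpu_minutes / (60 * 24 * 365)

print(f"{n_sims:.2e} simulations -> {gpu_years:.2e} GPU years")
# roughly one million GPU years, versus ~1e4 simulations for a surrogate model
```

The exponential growth with the number of parameters is the point: adding a tenth parameter at the same resolution would multiply the cost by another factor of 15.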
In the next section, we detail the limitations of tomographic imaging and other advanced microscopy techniques, many of which will be revolutionized by improvements in computational power and the application of advanced neural networks. This in turn will have a transformative impact on vortex physics. \subsection{Advanced microscopy to better understand vortex-defect interactions\label{ssec:microscopy}} \subsubsection{Quantitative point-defect spectroscopy} We have discussed the role of point defects in suppressing vortex motion via weak collective pinning. Notably, the theory of weak collective pinning\cite{Larkin1979} has attracted significant attention as it can explain the origin of novel vortex phases, e.g.\ vortex glass~\cite{Fisher1989,Fisher1991} and vortex liquid~\cite{Nelson1988} phases, as well as the associated vortex melting phase transition~\cite{Brandt1989, Houghton1989}. It cannot, however, be used to predict $J_{c}$ in single crystals, whose defect landscape is dominated by point defects. This limitation is not necessarily reflective of gaps in weak collective pinning theory itself, but rather of the fact that point defect densities are typically unknown because they are extremely challenging to measure over a broad spatial range. Consequently, point defects are the dark matter of materials. Herein lies yet another transformative opportunity in vortex physics. Developing a technique to accurately measure point defect profiles, and subsequent systematic studies correlating point defects, $J_{c}(B,T)$, and $S(B,T)$, may lead to recipes for predictably tuning the properties of superconductors, most directly impacting crystals and epitaxial materials that lack significant contributions from strong pinning centers. The most promising routes for quantitative point defect microscopy include scanning transmission electron microscopy (STEM), atom probe tomography (APT), atom electron tomography (AET), and positron annihilation lifetime spectroscopy (PALS).
Here, we primarily focus on STEM combined with AET, and then introduce APT and PALS as other techniques with atomic-scale resolution that are relatively untapped opportunities to reveal structure-property relationships in superconductors. In STEM, an imaging electron beam is transmitted through a thin specimen, such that detectors can construct a real-space image of the microstructure and collect other diffraction data. In superconductors, STEM studies have revealed a panoply of defects, including columnar tracks, defect clusters, dislocations, twin boundaries, grain boundaries, and stacking faults. These studies can also provide information on other pertinent microstructural properties, including strained regions that induce variations in the superconducting order parameter and are therefore preferential regions for vortices to sit in an otherwise defect-free landscape. To identify dopants (e.g.\ BaHfO$_3$ nanoparticles), STEM is also often performed in conjunction with analytical techniques, such as energy dispersive x-ray spectroscopy. To understand the ability of STEM to determine point defect densities in superconductors, we must first understand what limits the spatial resolution and throughput. Older STEMs cannot resolve point defects due to imperfections (aberrations) in the objective lenses and other factors that limit the resolution to well above the wavelength of the imaging beam. Atomic resolution was finally achieved upon the advent of transformational aberration correction schemes, which were first successfully demonstrated in the late 1990s and have been increasingly widely adopted over the past decade\cite{Batson2002, ROSE20051, Dahmen2009, Ophus2017}. In fact, the spatial resolution of an aberration-corrected STEM has now fallen below the Bohr radius of $\SI{53}{\pico\meter}$ \cite{kisielowski2008, Erni2009, Alem2009, Naoya2019}.
Though point defects can now be imaged in superconductors, it is not straightforward to determine point defect densities. A single scan captures a small fraction of the sample, which may not be representative of defect distributions throughout the entire specimen. Accordingly, low throughput prevents collecting a sufficiently large dataset to provide a reasonably quantitative picture of defect concentrations. One of the limiting factors for throughput is the detector speed, which has recently improved significantly owing to the development of direct electron detectors such as active pixel sensors (APS) and hybrid pixel array detectors (PAD). These detectors have higher quantum efficiency, operate at faster readout speeds, and have a broader dynamic range than conventional detectors---charge-coupled devices (CCDs) coupled with scintillators \cite{Ophus2017}. Enabled by fast detectors, the advent of 4D-STEM\cite{ophus_2019} is another recent, major milestone that is a significant step towards determining point defect densities. In 4D-STEM, 2D diffraction data are collected at each pixel of a 2D raster scan, generating a 4D dataset containing vast microstructural information. In addition to high-speed direct electron detectors, computational power was a prerequisite for the implementation of 4D-STEM, in which massive datasets can be produced: see Ref.~[\onlinecite{ophus_2019}] for an example in which a single 4D-STEM image recorded in \SI{164}{\second} consumes \SI{420}{\giga\byte}. Hence, over the past few years, this has warranted efforts to develop fast image simulation algorithms \cite{Ophus2017} and schemes to apply deep neural networks to extract information, such as defect species and location\cite{Ziatdinov2017}.
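To put the quoted acquisition figures in perspective, the implied sustained data rate is easily estimated (numbers taken from the example of Ref.~[\onlinecite{ophus_2019}] cited above):

```python
# Sustained data rate implied by a 420 GB 4D-STEM acquisition in 164 s.
dataset_gb = 420.0
acquisition_s = 164.0

rate_gb_per_s = dataset_gb / acquisition_s
hourly_tb = rate_gb_per_s * 3600 / 1000  # TB produced per hour of imaging

print(f"~{rate_gb_per_s:.1f} GB/s sustained, ~{hourly_tb:.0f} TB per hour")
```

A single instrument streaming several gigabytes per second explains why fast simulation algorithms and neural-network-based extraction, rather than manual analysis, have become essential.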
Furthermore, STEMs can be used for electron tomography, in which images collected as the sample is incrementally rotated are combined to create a 3D image of the microstructure.\cite{Miaoaaf2157} Aberration correction, high-speed detectors, and the data revolution are transformative advances that will certainly accelerate progress in understanding structure-property relationships in superconductors. Nevertheless, there are more salient impediments to an atomic-scale understanding of the true sample under study, including artifacts from sample preparation techniques\cite{SCHAFFER2012} and beam scattering within thick samples. To remedy the latter, materials are often deposited onto membranes, though this may not present a representative picture of the defect landscape when the sample is in a different form (e.g. thicker and on a different substrate). Atom probe tomography (APT) is another microscopy technique with atomic-scale resolution, and it also provides 3D compositional imaging of surface and buried features. Over the past decade, it has become increasingly popular due to the development of a commercial local-electrode atom probe (LEAP). For APT, the specimen must be shaped as a needle and an applied electric field successively ejects layers of atoms from the surface of the specimen towards a detector. By means of time-of-flight mass spectrometry, the detector progressively collects information on the position and species of each atom, reconstructing a 3D tomographic image of the specimen that can span \SI[product-units=power]{0.2 x 0.2 x 0.5}{\micro\meter} with resolution of $\SIrange[range-phrase=-, range-units=single]{0.1}{0.5}{\nano\meter}$ \cite{Petersen2011}. As each atom is individually identifiable, it can provide remarkably revealing information on the microstructure. Similar to STEM, sample preparation and data processing are bottlenecks; APT also currently suffers from limited detection efficiency\cite{Kelly2007}.
Furthermore, the analyzed volume (field of view) is currently too small to be sufficiently representative of the sample to provide accurate quantitative details on point defect concentrations. The biggest complication, however, may be that the defect landscape of the APT specimen, shaped as a needle, may dramatically differ from the material in the form in which we typically study its electromagnetic properties. Lastly, positron annihilation lifetime spectroscopy (PALS) is a hitherto untapped opportunity to correlate vacancy concentrations with electrical transport properties in superconductors. This non-destructive technique can determine information about vacancies and larger voids in a material by bombarding it with positrons at \SI{50}{\electronvolt} to \SI{30}{\kilo\electronvolt} acceleration energies,\cite{Gidley2006, Or2019} then measuring the time lapse between the implantation of positrons and emission of annihilation radiation. Upon implantation, positrons thermalize in $\sim$\SI{10}{\pico\second} then either interact with an electron and annihilate or form a positronium atom (electron-positron pair)\cite{STRASKY2018455}. Positronium atoms will then ricochet off the walls of voids and eventually annihilate, releasing a $\gamma$-ray that can be detected with integrated $\gamma$-ray detectors. The lifetime of the positron can provide information on void sizes and concentration of vacancies: longer lifetimes correspond to larger voids and higher vacancy densities. PALS has been used for decades to sensitively detect vacancies and vacancy clusters in metals and semiconductors\cite{Schultz1988, Gidley2006} as well as probe subnanometer, intermolecular voids in polymers\cite{Pethrick1997, Gidley2006}.
Depth profiling is possible on the \si{\nano\meter} to \si{\micro\meter} scale\cite{RevModPhys.60.701, Wagner2018, Peng2005, Gidley2006} by tuning the positron implantation energy and, though some systems have beam scanning capabilities enabling laterally resolved measurements, spatial resolution is generally quite poor due to large beam spot sizes and positron diffusion. In most systems, the spot size is typically $\sim \SI{1}{\milli\meter}$. However, PALS instruments containing \emph{microprobes} are capable of spot sizes that are smaller than $\SI{100}{\micro\meter}$ \cite{PhysRevLett.87.067402, Gigl2017}. For example, in 2017, Gigl et al.\cite{Gigl2017} developed a state-of-the-art system with a minimum lateral resolution of \SI{33}{\micro\meter} and maximum scanning range of \SI[product-units=power]{19 x 19}{\mm}. Regarding speed, the system can scan an area of \SI[product-units=power]{1 x 1}{\mm} with a resolution of \SI{250}{\micro\meter} in less than 2 minutes, which is considered to be an exceptionally short time frame\cite{Gigl2017}. Moreover, David et al.\cite{PhysRevLett.87.067402} reported a remarkably small spot diameter of \SI{2}{\micro\meter} in a setup with a short scanning range of \SI[product-units=power]{0.2 x 0.6}{\mm}. Unfortunately, further improvements to beam focus may be ineffectual and spatial resolution comparable to electron microscopy is unreachable. The spatial resolution is ultimately limited by lateral straggle: the positron diffusion length is roughly several hundred nanometers in a perfect crystal, which limits the spot size even if the beam focus is improved \cite{RevModPhys.60.701}. Ongoing efforts to advance PALS include improving theoretical methods for interpretation of experimental results, advancing theoretical descriptions of positron physics (states, thermalization, and trapping), incorporating sample stages that allow tuning sample environmental conditions (e.g.
temperature, biasing), and improving the efficiency of beam moderators (which convert polychromatic positron beams to monochromatic beams)\cite{RevModPhys.60.701}. \subsubsection{Cryogenic microstructural analysis for accurate determination of structure-property relationships} Accurately correlating the formation of different vortex structures and intricacies of vortex-defect interactions with electromagnetic response is not trivial. Typically, conventional microscopy is performed under conditions that differ from a material's actual working environment: structural characterization of superconductors is routinely conducted at room temperature whereas accessing the superconducting regime requires cryogenic temperatures and is probed using electromagnetic stimuli. Yet we know that temperature changes significantly impact the microstructure, causing strain-induced phase separation and altering defects such as dislocations. Electromagnetic stimuli may similarly impact the defect landscape. Hence, another transformative opportunity in vortex physics is cryogenic structural characterization of superconductors under the influence of electromagnetic stimuli, which requires advances in microscopy. Scanning transmission electron microscopy combined with spectroscopic analysis is one of the most informative methods of gathering structural and chemical analysis at the atomic-scale. Accurate determination of structure-property relationships requires in-situ property measurements conducted concomitantly with microscopy. Recent, rapid advances in in-situ transmission electron microscopy have been fueled by the introduction of a variety of commercial in-situ sample holders that allow for electrical biasing, heating, magnetic response, and mechanical deformation of nanomaterials\cite{McDowell2018}.
These new capabilities have accelerated progress in a variety of fields, including battery electrochemistry, liquid-phase materials growth, bias-induced solid-state transformations in e.g.\ resistive switching devices for memory and neuromorphic applications, gas-phase reactions and catalysis, solid-state chemical transformations e.g.\ at interfaces between semiconductors and metallic contacts, and mechanical behavior \cite{McDowell2018}. For in-situ TEM to be beneficial to superconductors, samples must be cooled to cryogenic temperatures and studied under the influence of magnetic fields. Developed in the 1960s, liquid-helium-cooled stages have been used to study superconductors, solidified gases, and magnetic domains \cite{goodge_2020}, initially without the benefit of aberration-corrected systems with atomic-scale resolution. More recently, cryogenic STEM with atomic-scale resolution has been used to study quantum materials, including low-temperature spin states. However, these studies have been limited to a single temperature that was above the boiling point of the chosen cryogen (liquid helium or liquid nitrogen)\cite{goodge_2020}, due to thermal load, whereas variable-temperature capabilities are requisite for probing phase transitions and the effects of thermal energy. To this end, there has recently been a push to develop advanced sample holders with stable temperature control\cite{goodge_2020, HummingbirdSBIR}. One of the most promising efforts is led by Hummingbird Precision Machine, a company that is developing a double-tilt, cryo-electrical biasing holder for TEMs that allows samples to be concurrently cooled to liquid helium temperatures and electrically biased, while undergoing atomic-scale structural imaging \cite{HummingbirdSBIR}. Because of such industry involvement in the development and commercialization of cryogenic sample holders and, more generally, the rapid pace of in-situ TEM (e.g.
the number of in-situ TEM papers doubled between 2010 and 2012\cite{Taheri2016}) we expect to see large advancements in this identified challenge over the next several years. \subsubsection{Cross-sectional imaging of vortex structures} In Sec.~\ref{sec:Introduction}, we discussed how competition between pinning forces, vortex elasticity, and current-induced forces results in complicated vortex structures, such as double-kinks, half-loops, and staircases. Typically, we may conjecture which structures have formed based on the microstructure and applied field orientation. Subsequent correlations are made between the presumed structures and the magnetization or transport results, which may be suggestive of a specific vortex phase. However, without direct proof of the structures, we cannot unequivocally correlate distinct excitations with specific vortex phases. For example, in a study of a NbSe$_2$ crystal containing columnar defects tilted 30$^\circ$ from the c-axis, magnetization results evinced glassy behavior when the field was aligned with the c-axis\cite{Eley2018}. As these conditions are likely to produce vortex staircases, the question arose whether (and why) vortex staircases would create a vortex glass phase. Direct imaging of vortex-defect interactions, in a way that captures the vortex structure overlaid on the atomic-scale structure, would enable unambiguous determination of the phases produced by specific vortex excitations. Accordingly, development of advanced microscopy techniques that can produce cross-sectional images is another transformative opportunity in vortex physics. In this section, we summarize common techniques for imaging superconducting vortices, detail their limitations, and describe the features of an advanced instrument that could accelerate progress in understanding and designing materials with predetermined vortex phases.
\begin{figure} \includegraphics[width=1\linewidth]{figures/LTEM.pdf} \caption{(a) Lorentz transmission electron microscopy (LTEM) image of vortices in Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$, irradiated to induce columnar defects. Trapped vortices can be distinguished from free ones, based on their shape and contrast (lower contrast for vortices trapped in columnar defects). Plan-view LTEM images have provided useful information on vortex dynamics, though the 3D vortex structure within the bulk is hidden from view. Reprinted with permission from Ref.~[\onlinecite{Kamimura2002}]. Copyright 2002, \emph{The Physical Society of Japan}. TEM images of (b) a heavy-ion-irradiated NbSe$_2$ crystal\cite{Eley2018} and (c) a BaZrO$_3$-doped (Y$_{0.77}$Gd$_{0.23}$)Ba$_2$Cu$_3$O$_y$ film grown by Miura \textit{et al.} \cite{Miura2013k}. Permission to use TEM image in (c) granted by M. Miura. Red line is a cartoon of how a vortex might wind through the disorder landscape. Developing advanced microscopy techniques to capture this structure is a transformative opportunity in vortex physics. \label{fig:vortexstructuresTEM}} \end{figure} Lorentz TEM (LTEM), which exploits the Aharonov-Bohm effect to capture magnetic contrast, was first used by Tonomura to image superconducting vortices~\cite{Tonomura2006} and has played a major role in identifying new materials that host exotic magnetic phases. However, in LTEM, the objective lens serves the dual purpose of applying a field and observing the response of the specimen, and the technique is therefore limited to plan-view imaging. That is, for out-of-plane magnetic fields applied to a thin film, the technique can only image magnetic contrast across the film’s surface---such that the vortex structure itself and interactions with defects within the bulk are out-of-view.
Building upon Tonomura's initial work, Hitachi\cite{Harada2008, Kawasaki2000} developed a unique, specialized system containing a multipole magnet that can apply fields up to \SI{50}{\milli\tesla} at various orientations with respect to the sample. Though this system still produces plan-view images, rather than cross-sectional images (revealing the full vortex structure), variations in the contrast of the imaged vortex section have provided remarkable evidence of vortex pinning and useful information on vortex-defect interactions.\cite{Kamimura2002, HARADA20131, PhysRevLett.88.237001, Tonomura2001} For example, Fig.~\ref{fig:vortexstructuresTEM}(a) shows a LTEM image in which a vortex trapped within a columnar defect can be identified by its shape and contrast, compared to untrapped vortices. The most promising technique for directly imaging vortex-defect interactions may be differential phase contrast (DPC) microscopy\cite{Lubk2015}. Conducted in a TEM, DPC is one of the best tools for quantitatively imaging nanoscale magnetic structures. In a TEM, an illuminating electron beam is deflected by electromagnetic fields within a material. DPC microscopy leverages these deflections to directly image electric and magnetic fields within materials at atomic resolution\cite{Dekkers1974, Chapman1978}. Consequently, scanning the beam (STEM) produces spatial maps of nanoscale magnetic field contrast to complement the atomic-scale structural information resolved by a transmitting beam. Accordingly, STEM-DPC is an invaluable tool in nanomagnetism research, used to image magnetic domains\cite{Lee2017, Chen2018} and canted structures such as skyrmions\cite{McVitie2018, Matsumotoe1501280, Schneider2018}. Most notably, it is one of the few techniques that can unequivocally identify new magnetic phases and exotic magnetic quasiparticles in real-space.
More generally, it can also image nanoscale electric fields\cite{Muller2014, Hachtel2018, Shibata2012, Shibata2017, Yucelen2018} in materials and devices. To image vortex-defect structures in an STEM capable of DPC, the sample stage would need to be cryogenically cooled and the chamber should contain a magnet. Complications will include designing the system such that the magnetic field does not significantly distort the beam. \section{Summary and Outlook} In this Perspective, we have highlighted the pivotal role that vortices play in superconductors and how improving our ability to control vortex dynamics will have an immediate impact on a range of applications. Herein we discussed major open questions in vortex physics, which include the following: \begin{itemize}[noitemsep, leftmargin=*] \item How do thermal and quantum vortex creep depend on material parameters and how can we efficiently consider creep in predictive simulations? \item What is the highest attainable critical current $J_{c}$? \item How do we optimize vortex pinning in quantum circuits and controllably exploit vortices in certain schemes for topological computing? \item Given the multitude of variables that govern $J_{c}$, what computational methods can improve the efficacy of the critical-current-by-design approach? \item What is the relationship between $J_{c}$ and point defect densities as well as vortex structures and vortex phases? \end{itemize} To answer these and other identified questions, we delineated five major categories of near-term transformative opportunities: The first involves applying recent advances in analytical and computational methods to model vortex creep, and performing more extensive experimental investigations into quantum creep. Second, we discussed how critical currents higher than the current record of 30\% $J_d$ may be obtained by implementing a combination of core pinning and magnetic pinning.
This is a promising route for dramatic advancements in large-scale applications---achieving higher current densities enables smaller motors and generators as well as higher field magnets. Third, we noted that vortices do not only hamper large-scale applications, but also induce losses in nanoscale quantum circuits. Though shielding circuits has proven effective in minimizing vortex formation, quantum sensors may require exposure to the environment, necessitating a better understanding of vortex dynamics in circuits. Furthermore, vortices are desirable for use in quantum information applications, in which case we must study how to manipulate single flux lines to implement braiding and entanglement of Majorana bound states. Fourth, the recent advent of high-performance computational tools to study vortex matter numerically has pushed us to the verge of predicting a superconductor's electrical transport properties based on the material and microstructure. However, the quest to automatically tailor a defect landscape for specific applications requires considering a fairly high-dimensional parameter space. To enable an effective mapping between simulations and experiments and manage the multitude of variables, we propose to apply self-adjusting machine learning algorithms that use neural networks. Fifth and finally, to accurately determine structure-property relationships, we need to experimentally measure and routinely consider point defect densities, which are challenging to determine. We therefore highlighted the prevailing microscopy techniques for point defect measurements, which include 4D-STEM and positron annihilation lifetime spectroscopy. \begin{acknowledgments} S.E.\ acknowledges support from the National Science Foundation DMR-1905909. A.G.\ was supported by the U.S.\ Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
R.W.\ acknowledges funding support from the Heidelberger Akademie der Wissenschaften through its WIN initiative (8. Teilprogramm). \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section{\label{sec:Introduction}Introduction} Distinguished for their ability to carry high dissipationless currents below a critical temperature $T_{c}$, superconductors are used in motors, generators, fault-current limiters, and particle accelerator magnets. Their impact spans beyond these examples of large-scale applications, also affecting nanoscale devices. Perhaps most renowned for their key role in the quantum revolution, superconductors constitute building blocks in current and next-generation devices for computing and sensing. For example, superconducting photon detectors feature high resolution due to high kinetic inductance and a sharp superconductor-to-normal phase transition. Moreover, superconductors can be configured to form anharmonic oscillators that can be exploited in quantum computing. \begin{figure}[ht] \includegraphics[width = 0.4\textwidth]{figures/figure-alph.pdf} \caption{Frontiers in vortex matter research. Black line represents the vortex core. Yellow region shows how the density of superconducting electron pairs decays towards the center of the core (of size $\sim \xi$, the coherence length). Blue plane (with arrows) represents the amplitude of the supercurrent, circulating around the core with radius up to the penetration depth $\lambda$. \label{fig:summary} } \end{figure} \begin{figure}[htp] \includegraphics[width=1\linewidth]{figures/vortexstructures.pdf} \caption{\label{fig:vortexstructures} Examples of vortex structures (curved blue lines) that are predicted to form in different defect landscapes under the influence of an applied current.
Imaging these structures and defects would allow us to establish the crucial connection between vortex excitations, vortex-defect and vortex-vortex interactions, Lorentz forces, and resulting vortex phases that is needed for efficacious defect engineering.} \end{figure} Notwithstanding these successes, the performance of superconducting devices is often impaired by the motion of vortices---lines threading a quantum $\Phi_{0} = h / 2e$ of magnetic flux through the material (see Fig.~\ref{fig:summary}). Propelled by electrical currents and thermal/quantum fluctuations, vortex motion is dissipative such that it limits the current-carrying capacity in wires, causes losses in microwave circuits, contributes to decoherence in qubits, and can also induce phase transitions. Understanding vortex dynamics is a formidable challenge because of the complex interplay between moving vortices, material disorder that can counteract (pin) vortex motion, and thermal energy that causes vortices to escape from these pinning sites. Furthermore, as depicted in Fig.~\ref{fig:vortexstructures}, in three-dimensional samples (bulk crystals or thick films), vortices are elastic objects that form complicated shapes as they wind through the disorder landscape, reshaping and moving under the influence of current-induced Lorentz forces. These complexities encumber predictability: we can neither predict technologically important parameters in superconductors nor prescribe an ideal defect landscape that optimizes these parameters for specific applications. Though modifying the disorder landscape, e.g. using particle irradiation or by incorporating non-superconducting inclusions into growth precursors, can engender dramatic enhancements in the current carrying capacity, these processes are often designed through a trial-and-error approach. Furthermore, the optimal defect landscape is highly material-dependent.
This is because the efficacy of pinning centers depends on the relationship between their geometry and the vortex structure, the latter being determined by parameters of the superconductor such as the coherence length $\xi$, penetration depth $\lambda$, and the anisotropy $\gamma$, see Fig.~\ref{fig:summary}. For example, though particle irradiation has successfully doubled the critical current in cuprates and certain iron-based superconductors, the same ions and energies do not even produce universal effects in materials belonging to the same class of superconductors\cite{Tamegai2012}. Though we can indeed tune the disorder landscape, we certainly do not have full control of it. Defects such as stacking faults, twin boundaries, and dislocations are often intrinsic to materials and their densities are challenging to tune. As a further complication to understanding vortex-defect interactions, superconductors often have mixed pinning landscapes, i.e., containing multiple types of defects. Though these landscapes immobilize vortices over a broader range of conditions (temperatures and fields) than landscapes containing only one type of defect, it is challenging to infer the vortex structures that form within these materials and no techniques currently exist to fully image these structures and vortex-defect interactions on a microscopic level. Generally speaking, achieving a materials-by-design approach first entails garnering a sufficient microscopic understanding of vortex-defect and vortex-vortex interactions, then incorporating these details into simulations. Significant headway has been made along these lines with the implementation of large-scale time-dependent Ginzburg-Landau (TDGL) simulations to study vortex motion through disordered media. 
Spearheaded by Argonne National Laboratory, this effort has accurately modeled critical currents $J_{c}$ in thin films (2D), layered and anisotropic 3D materials, as well as isotropic superconductors~\cite{Sadovskyy2015a,Sadovskyy2016a,Glatz2016,Sadovskyy2017,kimmel2019}. Additionally, it has determined the optimal shape, size, and dimensionality of defects necessary to maximize $J_{c}$, depending on the magnitude and orientation of the magnetic field~\cite{Koshelev2016,Sadovskyy2016b,Kimmel2017,Sadovskyy2019}. Backed by good agreement with experimental and analytic results for simple geometries \cite{Willa2015a, Willa2015b, Willa2016, Willa2018c}, the utility of the numerical routine has successfully been extended to previously unknown territories, optimizing pinning geometries outside the scope of analytic methods \cite{Glatz2016, Kimmel2017, Koshelev2016, Kwok2016,Papari2016, Sadovskyy2015a, Sadovskyy2016a, Sadovskyy2016b, Sadovskyy2019, Willa2018b,ted100}. In fact, these TDGL simulations have unveiled new phenomena---such as a small peak in $J_{c}(B)$ at high fields that is caused by double vortex occupancy of individual pinning sites.\cite{Willa2018a} The Argonne team has even deployed mature optimization processes based on targeted evolution using genetic algorithms.~\cite{Sadovskyy2019} This is a remarkable step towards the goal of \textit{critical-current-by-design}. A critical-current-by-design approach must consider thermal fluctuations, which dramatically impact the critical current due to the effects of rapid thermally-induced vortex motion (thermal creep). Creep, which manifests as a decay in the persistent current over time, is rarely problematic in low-$T_{c}$ superconductors as it is typically quite slow. Consequently, Nb--Ti solenoids in magnetic resonance imaging systems can operate in \textit{persistent mode}, retaining a fairly constant magnetic field for essentially indefinite time periods.
However, creep is fast in high-$T_{c}$ superconductors, restricting applications and reducing the effective $J_{c}$. For the sake of power and magnet applications, the goals are clear---maximize the critical current and minimize creep. Regarding the former, there is much room for improvement: no superconductor containing vortices has ever achieved a $J_{c}$ higher than 25\% of its theoretical maximum, which is thought to be the depairing current $J_{d}=\Phi_0/(3\sqrt{3}\pi\mu_0 \xi \lambda^2)$. Regarding creep, we are fighting a theoretical lower bound.\cite{Eley2017a} This lower bound positively correlates with a material's Ginzburg number $Gi = (\gamma^2/2)(k_{\mathrm{B}} T_{c}/ \varepsilon_{sc})^2$, which is the ratio of the thermal energy to the superconducting condensation energy $\varepsilon_{sc} = (\Phi_{0}^{2} / 2 \pi \mu_{0} \xi^{2} \lambda^{2}) \xi^{3}$. The implications are grim: creep is expected to be so fast in potential, yet-to-be-discovered room-temperature superconductors that it would render them unsuitable for applications. The caveat is that this lower bound is limited to low temperatures and fields (single vortex dynamics), and collective vortex dynamics could be key to achieving slow creep rates. Though superconducting sensing and computing applications do not require high currents, vortices still pose a nuisance by limiting the lifetime of the quantum state in qubits \cite{Oliver2013}, inducing microwave energy loss in resonators \cite{Song2009a}, and generally introducing noise. It is known that dissipation from vortex motion reduces the quality factor in superconducting microwave resonators, which are integral components in certain platforms for quantum sensors and the leading solid-state architecture for quantum computing (circuit-QED)\cite{Wallraff2004, Blais2004, Krantz2019, Muller2019}. They are used to address and readout qubits as well as mediate coupling between multiple qubits.
Consequently, resonator stability can be essential for qubit circuit stability. Moreover, thermally activated vortex motion can contribute to $1/f$ noise and critical current fluctuations \cite{Trabaldo2019, VanHarlingen2004} in quantum circuits and is a suspected source of the dark count rate in superconducting nanowire single-photon detectors \cite{PhysRevB.83.144526, Yamashita2013}. In these quantum circuits, vortices appear due to pulsed control fields, ambient magnetic fields\cite{Song2009}, and the self-field generated by bias currents \cite{Yamashita2013}. Mitigating the effects of vortices requires heavy shielding to block external fields and careful circuit design to control their motion, the latter of which is quite tricky. The circuit should include structures to trap vortices away from operational currents and readout as well as narrow conductor linewidths\cite{Stan2004} to make vortex formation less favorable. However, these etched structures may exacerbate another major source of decoherence---parasitic two-level fluctuators---defects in which ions tunnel between two almost energetically equivalent sites, which act as dipoles and thus interact with oscillating electric fields during device operation.\cite{Muller2019} Hence, designing quantum circuits that are robust to environmental noise is not trivial and has become a topic of intense interest.\cite{Muller2019, Oliver2013} Despite all of the aforementioned application-limiting problems caused by vortices, they are not pervasively detrimental to device performance. For example, vortices can trap quasiparticles---unpaired electrons that are a third source of decoherence in superconducting quantum circuits---boosting the quality factor of resonators\cite{PhysRevLett.113.117002} and the relaxation time of qubits\cite{Wang2014}.
Furthermore, vortices can host elusive, exotic modes that are in fact useful for topological qubits, which are predicted to be robust to environmental noise that plagues other quantum device architectures. To exploit these modes in computing, we must control the dynamics of their vortex hosts. Hence, in general, these disparate goals of eliminating or utilizing vortices for applications both require an improved understanding of vortex formation, dynamics, and, ultimately, control. The goal of this Perspective is to present opportunities for transformative advances in vortex physics. In particular, we start by addressing vortex creep in Sec.~\ref{ssec:vortexcreep}, which notes our limited knowledge of non-thermal creep processes and how recent increases in computational power will enable full consideration of creep in simulations. Second, in Sec.~\ref{ssec:Jd} we explore the true maximum achievable critical current, and the need to simultaneously exploit multiple pinning mechanisms to surpass current records for $J_{c}$. Next, Sec.~\ref{sec:RFcavities} discusses vortex-induced losses in response to AC magnetic fields and currents, with a focus on the impact on superconducting RF cavities used in accelerators and quantum circuits. We examine how the quantum revolution has handled the vortex problem for computing, while sensing applications necessitate further studies. As solving the aforementioned problems requires advanced computational algorithms, we then proceed to discuss future uses of artificial intelligence to understand the vortex matter genome in Sec.~\ref{ssec:AI}. Finally, in Sec.~\ref{ssec:microscopy}, we recognize that most experimental studies use magnetometry and electrical transport measurements to \emph{infer} vortex-defect interactions, and discuss the frontiers of microscopy that could lead to observing these interactions as well as accurately determining defect densities.
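To attach orders of magnitude to the depairing current $J_{d}$ and Ginzburg number $Gi$ introduced above, one can evaluate them numerically. A minimal sketch for YBCO-like parameters; the values $\xi = \SI{1.6}{\nano\meter}$, $\lambda = \SI{150}{\nano\meter}$, $\gamma = 7$, and $T_c = \SI{92}{\kelvin}$ are assumed literature-typical inputs for this illustration:

```python
import math

# Evaluate J_d = Phi0/(3*sqrt(3)*pi*mu0*xi*lambda^2) and
# Gi = (gamma^2/2)*(kB*Tc/eps_sc)^2 with
# eps_sc = (Phi0^2/(2*pi*mu0*xi^2*lambda^2))*xi^3,
# matching the expressions given in the text.

phi0 = 2.068e-15      # flux quantum, Wb
mu0 = 4e-7 * math.pi  # vacuum permeability, T m/A
kB = 1.381e-23        # Boltzmann constant, J/K

# Assumed YBCO-like parameters (illustrative, literature-typical)
xi, lam, gamma, Tc = 1.6e-9, 150e-9, 7.0, 92.0

J_d = phi0 / (3 * math.sqrt(3) * math.pi * mu0 * xi * lam**2)   # A/m^2

eps_sc = (phi0**2 / (2 * math.pi * mu0 * xi**2 * lam**2)) * xi**3  # J
Gi = (gamma**2 / 2) * (kB * Tc / eps_sc)**2

print(f"J_d ~ {J_d:.1e} A/m^2, Gi ~ {Gi:.3f}")
```

With these inputs one obtains $J_d$ of order $10^{12}$ A/m$^2$ ($\sim10^{8}$ A/cm$^2$) and $Gi\sim10^{-2}$, consistent with the large creep rates expected for high-$T_c$ cuprates.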
\section{Background} Superconductors have the remarkable ability to expel external magnetic fields up to a critical value $H_{c1}$, a phenomenon known as the Meissner effect. Though surpassing $H_{c1}$ quenches superconductivity in some materials, the state persists up to a higher field $H_{c2}=\Phi_0/(2\pi\mu_0\xi^2)$ in type-II superconductors. In this class of materials, $H_{c1}$ can be quite small (several \si{\milli\tesla}) whereas $H_{c2}$ can be extremely large (from a few tesla up to as high as \SI{120}{\tesla})\cite{NMiura2002}, such that the interposing state between the lower and upper critical fields consumes much of the phase diagram and defines the technologically relevant regime. This \textit{mixed state} hosts a lattice of vortices, whose density scales with the magnetic field, $n_v \propto B$. We should also note that, in addition to globally applied fields, self-fields from currents propagating within a superconductor can also locally induce vortices. Each vortex carries a single flux quantum $\Phi_0$ and the core defines a nanoscale region through which the magnetic field penetrates the material. As such, the vortex core is non-superconducting, of diameter $2\xi(T)$, and surrounded by circulating supercurrents of radii up to $\lambda(T)$, as depicted in Fig.~\ref{fig:summary}. Given the dependence of the vortex size on these material-dependent parameters, vortices effectively look different in different materials---for example, they are significantly smaller in the high-temperature superconductor $\mathrm{Y}\mathrm{Ba}_2\mathrm{Cu}_3\mathrm{O}_y$ (YBCO), where $\xi(0) = \SI{1.6}{\nano\meter}$, than in Nb, in which $\xi(0)= \SI{38}{\nano\meter}$.\cite{Wimbush2015} Vortex motion constitutes a major source of dissipation in superconductors.
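These field and length scales can be checked with a short calculation using the coherence lengths quoted above for YBCO and Nb; the \SI{1}{\tesla} example field for the intervortex spacing is an illustrative assumption:

```python
import math

# Upper critical field mu0*Hc2 = Phi0/(2*pi*xi^2) and the intervortex
# (triangular-lattice, up to a factor of order one) spacing a0 ~ sqrt(Phi0/B).

phi0 = 2.068e-15  # flux quantum, Wb

def Bc2(xi):
    """Upper critical field in tesla for coherence length xi (meters)."""
    return phi0 / (2 * math.pi * xi**2)

def spacing(B):
    """Approximate intervortex spacing in meters at induction B (tesla)."""
    return math.sqrt(phi0 / B)

print(f"YBCO (xi = 1.6 nm): Bc2 ~ {Bc2(1.6e-9):.0f} T")
print(f"Nb   (xi = 38 nm):  Bc2 ~ {Bc2(38e-9):.2f} T")
print(f"vortex spacing at 1 T ~ {spacing(1.0) * 1e9:.0f} nm")
```

The YBCO estimate (of order \SI{100}{\tesla}) is consistent with the "up to \SI{120}{\tesla}" range quoted above, while Nb sits well below \SI{1}{\tesla}, illustrating how strongly $H_{c2}$ depends on $\xi$.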
Propelled by currents of density $\vec{J}$ and thermal/quantum energy fluctuations, vortices experience a Lorentz force density $\vec{F}_{L} = (\vec{J} \times \vec{B})/c$ accompanied by Joule heating that weakens superconducting properties. It is this cascading process that is responsible for the undesirable impacts on applications, examples of which were provided in Sec.~\ref{sec:Introduction}. \subsection{\label{sec:vortexpinning}Fundamentals of vortex pinning} \begin{figure*}[t!] \includegraphics[width=1\linewidth]{figures/Jc_cropped.pdf} \caption{Enhancement in $J_{c}$ or $M \propto J_{c}$ in (a) an oxygen-ion-irradiated Dy$_2$O$_3$-doped commercial YBCO film grown by American Superconductor Corporation\cite{Leroux2015, Eley2017a} in a field of $\SI{5}{\tesla}$, (b) a BaZrO$_3$-doped (Y$_{0.77}$Gd$_{0.23}$)Ba$_2$Cu$_3$O$_y$ film grown by Miura \textit{et al.}\cite{Miura2013k}, (c) a BaZrO$_3$-doped BaFe$_2$(As$_{1-x}$P$_x$)$_2$ film grown by Miura \textit{et al.}\cite{Eley2017}, and (d) a heavy-ion-irradiated NbSe$_2$ crystal\cite{Eley2018}. Measurements by S.\ Eley. Insets show transmission electron micrographs of the defect landscapes, from Refs.~[\onlinecite{Eley2017, Miura2013k, Eley2017a, Eley2018}].}\label{fig:Jcenhancement} \end{figure*} Immobilizing vortices constitutes a major research area, in which the most prominent benchmark for assessing the strength of pinning is the critical current $J_{c}$.\cite{Bean1964, Zeldov1994, Zeldov1994b, Brandt1999, Willa2014, Gurevich2014, Gurevich2017, Gurevich2018, Kubo2019, Dhakal2020} Once vortices are present in the bulk, crystallographic defects such as point defects, precipitates, twin boundaries, stacking faults, and dislocations provide an energy landscape to trap vortices. Depending on the defect type and density, one of two mechanisms is typically responsible for vortex pinning: weak collective effects from groups of small defects or strong forces exerted by larger defects.
Originally formulated by Larkin and Ovchinnikov~\cite{Larkin1979}, the theory of weak collective pinning describes how atomically small defects alone cannot apply a sufficient force on a vortex line to immobilize it; the collective action of many, however, can indeed pin a vortex. In the case of a random arrangement of small, weak, and uncorrelated pinning centers, the average force on a straight flux line vanishes, and only fluctuations in the pinning energy (higher-order correlations) are capable of producing a net pinning force. Considering this, the phenomenology of weak collective pinning finds that the resulting critical current should scale \emph{quadratically} with the pin density $n_{p}$, i.e., $J_{c} \propto n_{p}^{2}$, see Ref.~[\onlinecite{Blatter1994}]. On the other hand, strong pinning results when larger defects each plastically deform a vortex line, such that a low density of these defects is sufficient to pin it. Competition between the \emph{bare} pinning force $f_{p}$ and the vortex elasticity $\bar{C}$ generates multi-valued solutions, and a proper averaging of the effective force $\langle f_{\mathrm{pin}}\rangle$ from individual pins is non-zero, resulting in a critical current $J_{c} = n_{p} \langle f_{\mathrm{pin}}\rangle$. Here, the critical current reached by strong pins depends \emph{linearly} on the defect density. Though conceptually simpler than weak collective pinning, the strong pinning formalism took significantly longer to develop. With its completion in the early 2000s, the formalism enabled computing numerous physical observables, including the critical current~\cite{Blatter2004, Koopmann2004}, the excess-current characteristic~\cite{Thomann2012, Thomann2017, Buchacek2018, Buchacek2019a, Buchacek2019b, Buchacek2020-condmat}, and the $ac$ Campbell response~\cite{Willa2015a, Willa2015b, Willa2016, Willa2018c}. Defects, however, merely trap the vortex state in a metastable minimum.
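The two density scalings just described can be contrasted in a minimal schematic (arbitrary units; the prefactors are illustrative placeholders, not material values):

```python
# Schematic contrast of pinning-regime scalings with defect density n_p:
# weak collective pinning gives Jc ~ n_p^2, strong pinning gives Jc = n_p * <f_pin>.
def jc_weak(n_p, c_weak=1e-3):
    """Weak collective pinning: quadratic in pin density (arbitrary units)."""
    return c_weak * n_p**2

def jc_strong(n_p, f_pin=1.0):
    """Strong pinning: linear in pin density, Jc = n_p * <f_pin> (arbitrary units)."""
    return n_p * f_pin

# Doubling the defect density quadruples Jc for weak pins,
# but only doubles it for strong pins.
assert jc_weak(2.0) / jc_weak(1.0) == 4.0
assert jc_strong(2.0) / jc_strong(1.0) == 2.0
```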
Thermal and quantum fluctuations release vortices from pinning sites, and this activated motion of vortices from a pinned state to a more stable state is called vortex creep. In the presence of creep, the critical current $J_{c}$ is no longer a distinct boundary separating a dissipation-free regime from a dissipative one. Experimentally, this manifests as a power-law transition $V \propto J^n$ between the superconducting state, in which $V=0$, and Ohmic behavior. The creep rate $S \equiv - d \ln(J) / d\ln(t)$ then scales as $1/n$ and can be assessed by fitting the transitional regime in the current-voltage characteristic or by measuring the temporal decay of an induced persistent current. Measurements of the vortex creep rate also provide access to microscopic details, such as the effective energy barrier $U^*=T/S$ that is surmounted and whether single vortices or bundles are creeping. Various methods of tailoring the disorder landscape in superconductors have proven successful in remarkably enhancing the critical current. Figure~\ref{fig:Jcenhancement} shows examples of cuprates, iron-based superconductors, and low-$T_c$ materials that have all benefited from incorporating inclusions. Defects can be added post-growth, using techniques such as particle irradiation,\cite{Leroux2015, Eley2017a, Tamegai2012, Averback1997, Fang2011, Gapud2015, Goeckner2003, Haberkorn2015a, Haberkorn2015b, Haberkorn2012a, Iwasa1988, Jia2013, Kihlstrom2013, Kirk1999, Konczykowski1991, Matsui2012, Nakajima2009, Roas1989, Salovich2013, Sun2015, SwiecickiPhysRevB12, Taen2012, Taen2015, Thompson1991, Vlcek1993, Zhu1993, Leonard2013} or during the growth process by incorporating impurities into the source material.\cite{Miura2013k, Haugan2004, Horide2013, Miura2011, PalauSUST2010, WimbushSUST10} Though these processes induce markedly different disorder landscapes, both can effectuate remarkable increases in $J_{c}$.
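The creep diagnostics described above, the rate $S$ and the effective barrier $U^* = T/S$, can be illustrated with synthetic magnetization-decay data (all parameter values below are illustrative, not measured):

```python
# Extract the creep rate S = -dlnJ/dlnt from a synthetic persistent-current decay,
# then form the effective barrier U* = T/S (in units of kB*T).
import math

def J_decay(t, J0=1.0, S_true=0.02, t0=1.0):
    """Pure logarithmic decay J(t) = J0 * (t/t0)^(-S_true)."""
    return J0 * (t / t0) ** (-S_true)

# Sample a typical magnetometry window, e.g. 60 s to ~1 hour.
times = [60.0 * 1.3**k for k in range(15)]
lnt = [math.log(t) for t in times]
lnJ = [math.log(J_decay(t)) for t in times]

# Least-squares slope of ln J versus ln t gives -S.
n = len(times)
mx, my = sum(lnt) / n, sum(lnJ) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lnt, lnJ)) / \
        sum((x - mx) ** 2 for x in lnt)
S = -slope
U_star = 1.0 / S  # effective barrier U* = T/S, in units of kB*T
print(f"S = {S:.3f}, U*/(kB T) = {U_star:.0f}")  # S = 0.020, U*/(kB T) = 50
```

In practice the decay is only piecewise logarithmic, so the fit window matters; the same slope extraction applies regime by regime.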
However, the conditions necessary to improve electromagnetic properties are highly material-dependent---this lack of universality renders defect landscape engineering a process of trial and error. Particle irradiation can induce point defects (vacancies, interstitial atoms, and substitutional atoms), larger random defects, or correlated disorder (e.g., amorphous tracks known as columnar defects). Notably, the critical current in commercial YBa$_2$Cu$_3$O$_{7-\delta}$ coated conductors was nearly doubled through irradiation with protons\cite{Jia2013}, oxygen ions\cite{Leroux2015, Eley2017}, gold ions\cite{Rupich2016}, and silver ions. Furthermore, iron-based superconductors have also been shown to benefit from particle irradiation\cite{Tamegai2012}. To incorporate larger defects, such as nanoparticle inclusions, numerous groups\cite{MacManus-Driscoll2004, Feighan2017, Miura2013k, Miura2011, Miura2016, Miura2017} have introduced excess Ba and $M$ (where $M$ = Zr, Nb, Sn, or Hf) into growth precursors, resulting in the formation of randomly distributed 5-20 nm sized non-superconducting Ba$M$O$_3$ nanoparticles or nanorods. This method has produced critical currents up to seven times higher than those in films without inclusions\cite{Miura2013} and has therefore become one of the leading schemes for enhancing $J_{c}$. The enhancement achieved by inclusions and irradiation is, however, often restricted to a narrow temperature and field range, partially because $\xi$ and $\lambda$ are temperature dependent, whereas the defect sizes and densities are fixed. Another reason for the limited range of the enhancement is that, under the right conditions, certain fast-moving vortex excitations may form. For example, in materials containing parallel columnar defects, double-kink excitations form at low fields and moderate temperatures, resulting in fast vortex creep concomitant with reduced $J_{c}$.
\cite{Maiorov2009} Mixed pinning landscapes, composed of different types of defects, can enhance $J_{c}$ over a broader temperature and field range than inclusions of only one type and one size, and more work is necessary to optimize such landscapes. \subsection{Thermally activated vortex motion} Vortex creep is a very complex phenomenon due to the interplay between vortex-vortex interactions, vortex-defect interactions, vortex elasticity, and anisotropy.\cite{Blatter1994, Feigelman1989, Willa2020a} These interactions determine $U_{act}(T,H,J)$, a generally unknown regime-dependent function. The simplest creep model, proposed by Anderson and Kim, neglects the microscopic details of pinning centers and considers vortices as non-interacting rigid objects hopping out of potential wells of depth $U_{act}\propto U_{p}|1 - J/J_c|$. However, as elastic objects, vortices can lengthen over time under force from a current, and vortex-vortex interactions are non-negligible at high fields. As such, the Anderson-Kim model's relevance is limited to low temperatures and fields. At high temperatures and fields, collective creep theories, which consider vortex elasticity, predict an inverse power-law form for the current-dependent energy barrier $U_{act}(J) = U_{p}[(J_c/J)^{\mu}- 1]$, where $\mu$ is the so-called glassy exponent that is related to the size and dimensionality of the vortex bundle that hops during the creep process\cite{Blatter1991}. To capture behavior across regimes, the interpolation formula $U_{act}(J) = (U_{p}/\mu)\left[(J_c/J)^\mu - 1\right]$ is commonly used, where $\mu \rightarrow - 1$ recovers the Anderson-Kim prediction.
Combining this interpolation formula with the creep time $t = t_0e^{U_{act}(J)/k_{\mathrm{B}} T}$, we find that the persistent current should decay over time as $J(t) = J_{c0} [1+(\mu k_{\mathrm{B}} T/U_{p})\ln(t/t_0)]^{-1/\mu}$ and that the thermal vortex creep rate is \begin{align} S \equiv \Big| \frac{d \ln J}{d \ln t} \Big| = \frac{k_{\mathrm{B}} T}{U_{p} + \mu k_{\mathrm{B}} T \ln(t/t_0)},\label{eq:STHeqn} \end{align} where $\ln(t/t_0) \sim 25\text{-}30$. Because the magnetization $M \propto J$, creep can easily be measured by capturing the decay of the magnetization over time using a magnetometer. Moreover, as seen from Eq.~\eqref{eq:STHeqn}, knowledge of $S(T,H)$ provides access to both $U_{p}$ and $\mu$. Hence, creep measurements are a vital tool for revealing the size of the energy barrier; its dependence on current, field, and temperature; and whether the dynamics are glassy or plastic. It is important to note that Eq.~\eqref{eq:STHeqn} is typically used to analyze creep data piecewise---it can rarely be fit over the entire temperature range. Creep rates are not predictable, and no analytic expression exists that broadly captures the temperature and field dependence of creep. Both $U_{p}$ and $\mu$ have unknown temperature dependencies, which is a major gap in our ability to predict vortex creep rates. \subsection{Predictive vortex matter simulations} Simulating the behavior of vortex matter~\cite{Blatter1994,Brandt:1995,Crabtree1997,Nattermann2000,BlatterG:2003,ROPP} has a long history. Though the value of such simulations was realized long ago~\cite{BrandtJLTP83-1,BrandtJLTP83-2}, their efficacy in producing accurate results for materials containing complex defect landscapes is a recent success, tied to improvements in computational power.
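As a quick numerical sanity check of the creep-rate expression in Eq.~\eqref{eq:STHeqn} (barrier values are illustrative; temperatures in kelvin with $k_{\mathrm{B}}=1$):

```python
# Evaluate S(T) = T / (Up + mu*T*ln(t/t0)) and check its limits:
# the Anderson-Kim case (mu -> -1, S nearly linear in T) and the glassy
# high-temperature plateau S -> 1/(mu*ln(t/t0)).
def creep_rate(T, Up=500.0, mu=1.0, lnt=27.0):
    """Creep rate from the interpolation formula; Up and T in kelvin (kB = 1)."""
    return T / (Up + mu * T * lnt)

S_AK = creep_rate(5.0, mu=-1.0)        # Anderson-Kim regime
S_glassy = creep_rate(5000.0, mu=1.0)  # glassy regime, near the plateau

print(f"S_AK = {S_AK:.4f}, S_glassy = {S_glassy:.4f} vs plateau {1/27.0:.4f}")
```

The plateau value $1/[\mu\ln(t/t_0)] \approx 0.02\text{-}0.04$ is the often-quoted "universal" creep rate of glassy collective creep.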
Specifically, we can now numerically solve more realistic models, ranging from Langevin dynamics to time-dependent Ginzburg-Landau (TDGL) equations to fully microscopic descriptions, including Usadel and Eilenberger, Bogoliubov-de Gennes, and non-equilibrium Keldysh-Eilenberger quantum transport equations. While the phenomenological TDGL equations describe vortex matter realistically on length scales above the superconducting coherence length, fully microscopic equations are needed to describe, e.g., the vortex core accurately. This, however, means that the system sizes that can be simulated down to the nanoscale using microscopic models are quite limited, while TDGL can capture macroscopic behavior, including most dynamical features of vortex matter. \begin{figure*} \includegraphics[width=1\linewidth]{figures/fig_sim_exp.pdf} \caption{\textbf{(a)} 3D STEM tomogram of a 0.5 Dy-doped YBCO sample. Image processing is discussed in Ref.~[\onlinecite{Leroux2015}]. \textbf{(b)} Critical current $J_{c}$ as a function of the magnetic field $B$ applied along the c-axis of YBCO. The simulated field dependence (circles, red curve) with only the nanoparticles observed by STEM tomography in the sample with 0.5 Dy doping exhibits almost the same exponent $\alpha$, for $J_c \propto B^{-\alpha}$, as the experiment (triangles, green curve). Adding $2\xi$ diameter inclusions to the simulation makes the dependence less steep (squares, blue curve), which yields an exponent very similar to the experimental one in the sample with 0.75 Dy doping (stars, yellow curve). \textbf{(c)} Snapshot of the TDGL vortex configuration with applied magnetic field and external current for the same defect structure as in the experiment (a). Isosurfaces of the order parameter close to the normal state are shown in red and follow both vortex and defect positions.
The amplitude of the order parameter is represented on the backplane of the volume, where blue corresponds to maximum order parameter amplitude. Arrows indicate the experimental and simulated $J_{c}$ dependencies.}\label{fig:tomogram} \end{figure*} The Langevin approach only considers vortex degrees of freedom, while mostly neglecting elasticity and vortex-vortex interactions, which are nonlocal effects. Hence, its accuracy is limited to situations in which inter-vortex separations are significantly larger than $\xi$, vortex pinning sites are dilute, or the superconducting host is sufficiently thin that vortices can be considered 2D particles. Nevertheless, this simple picture reveals remarkably rich dynamical behavior---notably a dependence of $J_{c}$ on the strength and density of pinning centers~\cite{BrandtJLTP83-1,BrandtJLTP83-2}, thermal activation of vortices from pinning sites~\cite{KoshelevPhysC92}, a crossover between plastic and elastic behavior~\cite{CaoPhysRevB00,DiScalaNJP12}, and dynamic ordering of vortex lattices at large velocities~\cite{KoshelevPhysRevLett94, MoonPhysRevLett96}. However, vortex elasticity is an influential parameter in bulk systems: it gives rise to vortex phases characterized by complex vortex structures, glassy phases that do not exist in 2D systems, and other interesting characteristics~\cite{ErtasK:1996, OtterloPRL00, BustingorryCD:2007, LuoHu:2007, LuoHuJSNM10, Koshelev:2011, DobramyslEPJ13}. Herein lies the strength of the TDGL approach, which is a good compromise between complexity and fidelity. It describes the full behavior of the superconducting order parameter~\cite{schmid} and therefore represents a `mesoscopic bridge' between microscopic and macroscopic scales.
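The single-vortex limit of the Langevin approach reduces to an overdamped particle in a washboard pinning potential; the following sketch (dimensionless units, parameters of our own choosing) reproduces the depinning transition:

```python
# Overdamped Langevin dynamics of a point vortex in a periodic pinning potential:
# eta * dx/dt = F_L - f_p * sin(2*pi*x/a) + thermal noise (damping eta = 1).
import math
import random

def mean_velocity(F_L, f_p=1.0, a=1.0, T=0.0, dt=0.01, steps=40000, seed=1):
    """Time-averaged vortex velocity under a Lorentz driving force F_L."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        F_pin = -f_p * math.sin(2.0 * math.pi * x / a)
        noise = math.sqrt(2.0 * T / dt) * rng.gauss(0.0, 1.0) if T > 0 else 0.0
        x += dt * (F_L + F_pin + noise)
    return x / (steps * dt)

# At T = 0 the vortex is pinned below the depinning force (F_L < f_p) and
# slides with mean velocity sqrt(F_L^2 - f_p^2) above it; finite T rounds
# this transition, which is the single-particle caricature of creep.
v_below = mean_velocity(F_L=0.5)
v_above = mean_velocity(F_L=2.0)
print(f"v(F_L=0.5) = {v_below:.3f}, v(F_L=2.0) = {v_above:.3f}")
```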
Notably, the TDGL approach surpasses the Langevin approach by (i) describing all essential properties of vortex dynamics, including inter-vortex interactions with crossing and reconnection events, (ii) possessing a rigorous connection to the microscopic Bardeen-Cooper-Schrieffer theory in the vicinity of the critical temperature~\cite{Gorkov:1959}, and (iii) considering realistic pinning mechanisms. Regarding pinning, it can specifically account for pinning due to modulation of the critical temperature ($\delta T_{c}$-pinning) or of the mean free path ($\delta \ell$-pinning), strain, magnetic impurities~\cite{DoriaEPL07}, geometric pinning through appropriate boundary conditions, and, generally, the weak to strong pinning regimes---all beyond the reach of the Langevin approach. Consequently, the TDGL formulation is arguably one of the most successful physical models, describing the behavior of many different physical systems, even beyond superconductors~\cite{Aranson:2002}. In its early days, the TDGL approach was used to study depinning, plastic, and elastic steady-state vortex motion in systems containing twin and grain boundaries as well as both regular and irregular arrays of point or columnar defects~\cite{kaper,crabtree2000}. Those simulations were predominantly used to illustrate the complex dynamics of individual vortices because computational limitations prohibited the study of large-scale systems with collective vortex dynamics. Only later did simulation of about a hundred vortices in two-dimensional systems become possible, resulting in predictions for, e.g., the field dependence of $J_{c}$ in thin films with columnar defects~\cite{Palonen2012}. A 2002 article by Winiecki and Adams~\cite{Winiecki:2002} deserves credit as one of the first simulation-based studies of vortex matter in three-dimensional superconductors to produce a realistic electromagnetic response.
Later, in 2015, Koshelev \textit{et al.}~\cite{Koshelev2016} achieved a major technical breakthrough by investigating optimal pinning by monodisperse spherical inclusions. The simulated system size of $100\xi \! \times \! 100\xi \! \times \! 50\xi$ was much larger than any previously studied system, enabling even more realistic simulations of collective vortex dynamics than previous works. Their computational approach is based on an optimized parallel solver for the TDGL equation~\cite{sadovskyy+jcomp2015}, which allows for simulating vortex motion and determining the resulting electrical transport properties in application-relevant systems. The efficacy of this technique is best demonstrated by a study~\cite{Sadovskyy2016a} that applied the same approach to a `real' pinning landscape by incorporating scanning transmission electron microscopy tomography data of Dy-doped YBa$_2$Cu$_3$O$_{7-\delta}$ films~\cite{Ortalan:2009, Herrera2008}; the results showed almost quantitative agreement of the field- and angle-dependent critical current with experimental transport measurements, see Fig.~\ref{fig:tomogram}. Finally, we discuss applying TDGL calculations to commercial high-temperature superconducting tapes, which typically consist of rare-earth (RE) or yttrium barium copper oxide (REBCO) matrices. Specifically, Ref.~[\onlinecite{Sadovskyy2016b}] simulated vortex dynamics in REBCO coated conductors containing self-assembled BaZrO$_3$ nanorods and reported good quantitative agreement with experimental measurements of $J_{c}$ versus the applied magnetic field angle $\theta$. Most notably, the simulations demonstrated the non-additive effect of defects: adding irradiated columnar defects at a 45$^\circ$ angle with respect to the nanorod (c-) axis removes the $J_{c}(\theta=0^\circ)$ peak of the nanorods and generates a peak at $\theta=45^\circ$ instead.
This study then went beyond simply reproducing experimental behavior and predicted the optimal concentration of BaZrO$_3$ nanorods necessary to maximize $J_{c}$, finding maximal values of 12-14\% of $J_{d}$ (at specific $\theta$)---far higher than had been experimentally achieved in similar systems. This approach is certainly more efficient than the standard trial-and-error approach of growing and measuring samples with a large variety of defect landscapes. These recent successes in accurately predicting $J_{c}$ in superconductors based on the microstructure highlight how close we are to the ultimate goal of tailoring pinning landscapes for specific applications with well-defined critical current requirements. Constituting the new \textit{critical-current-by-design} paradigm~\cite{ROPP,ted100}, the routine use of TDGL simulations for efficient defect landscape optimization is a transformative opportunity in vortex physics, as is expanding these computational successes to include the use of artificial intelligence algorithms. Furthermore, microscopic and far-from-equilibrium simulations of vortex matter beyond the TDGL approach require significant computational resources and are only now becoming feasible. We will discuss related developments in Sec.~\ref{ssec:AI}. \section{Transformative Opportunities} \subsection{Vortex Creep\label{ssec:vortexcreep}} In this section, we identify major opportunities to accelerate our understanding of thermally activated vortex hopping (thermal creep) and non-thermal tunneling (quantum creep) between pinning sites. Only limited situations are amenable to an analytic treatment of vortex creep: these include thermal depinning of single vortices and vortex bundles in the regime of weak collective pinning. In the strong pinning regime, e.g., for columnar defects, we must consider complicated excitations that form during the depinning process.
Activation occurs via half-loop formation\cite{PhysRevB.51.6526}, which is depicted in Fig.~\ref{fig:vortexstructures}. During this process, the vortex nucleates outside of its pinned position, and the curved unpinned segment grows over time as a current acts on it, until the entire vortex eventually leaves the pinning site. Because half-loop formation likely occurs in a range of high-current-carrying materials, which may contain amorphous tracks, nanorods, or twin boundaries, numerical treatment of vortex creep within the strong pinning framework is of significant interest. The first task involves studying creep of isolated vortices pinned by a single strong inclusion or columnar defect. In accordance with analytic predictions, an increase in temperature shifts the characteristic depinning current below $J_{c}$, rounds the $IV$ curves, and affects the excess-current characteristic far beyond $J_{c}$\cite{Buchacek2019a, Buchacek2019b, Buchacek2020-condmat}. The next steps will involve studying multiple vortices, more defects, and mixed defect landscapes, which will increase the complexity of the problem, warranting computational assistance. Recent advances in computational power and high-performance codes will enable tackling these challenges, which involve long simulation times due to exponentially slow dynamics. Instead of simulating the thermal relaxation of a metastable configuration in a single `linear' simulation, the same configuration can be simulated in parallel, i.e., experiencing fluctuations along different `world lines'. This accelerates the search for a rare depinning event, after which parallel computations are interrupted and restarted from new depinned configurations. \begin{figure*} \includegraphics[width=1\linewidth]{figures/SvsGi} \caption{Creep at reduced temperature $T/T_c$ = 1/4 and a field of $\mu_0H = 1 \textnormal{ T}$ for different superconductors plotted versus $Gi^{1/2}$.
The open symbols indicate materials for which the microstructure has been modified either by irradiation or by incorporation of inclusions. The solid grey line represents the limit set by $Gi^{1/2}(T/T_c)$. The result predicts that the creep problem, even in yet-to-be-discovered high-$T_c$ superconductors, may counteract the benefits of high operating temperatures. Material from S.\ Eley \textit{et al.}, Nat.\ Mater.\ 16, 409--413 (2017). Copyright 2017, \emph{Nature Publishing Group.} }\label{fig:SvsGi} \end{figure*} In a 2017 paper\cite{Eley2017a}, we found that the minimum achievable thermal creep rate in a material depends on its Ginzburg number $Gi$ as $S \sim Gi^{1/2}(T/T_c)$, shown in Fig.~\ref{fig:SvsGi}. Our result is limited to the Anderson-Kim regime and considers pinning scenarios with analytically determined pinning energies $U_{p}$. It also somewhat gravely predicts a limit to how much the creep problem in high-$T_{c}$ superconductors, which tend to have high $Gi$, can be ameliorated, such that we may expect the performance of yet-to-be-discovered room-temperature superconductors to be irremediably hindered by creep. However, YBCO films containing nanoparticles demonstrate non-monotonic temperature-dependent creep rates $S(T)$, such that $S$ dips to unexpectedly low values at intermediate temperatures outside of the Anderson-Kim regime\cite{Eley2017a}. This dip, thought to be induced by strong pinning from nanoparticles, suggests that collective pinning regimes may hold the key to inducing slower creep rates that dip below our proposed lower limit for the Anderson-Kim regime. A numerical tackling of the vortex creep problem would improve our theoretical understanding of creep and answer a major open question in vortex physics---what is the slowest achievable creep rate in different superconductors?
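The practical content of this bound is easily tabulated; the sketch below uses rough, order-of-magnitude Ginzburg numbers (illustrative values of ours, not taken from the figure):

```python
# Lower bound on the thermal creep rate, S_min ~ Gi^(1/2) * (T/Tc),
# evaluated at T/Tc = 1/4 as in the figure. The Ginzburg numbers are
# order-of-magnitude illustrations: ~1e-9 for a conventional metal,
# ~1e-4 for an iron pnictide, ~1e-2 for a cuprate.
import math

def S_min(Gi, t_reduced=0.25):
    """Minimum achievable thermal creep rate at reduced temperature T/Tc."""
    return math.sqrt(Gi) * t_reduced

for name, Gi in [("Nb-like", 1e-9), ("pnictide-like", 1e-4), ("cuprate-like", 1e-2)]:
    print(f"{name:>14}: S_min ~ {S_min(Gi):.1e}")

# A high-Gi cuprate is bound to creep orders of magnitude faster than a
# low-Gi metal at the same reduced temperature.
```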
Our finding of a lower limit to the creep rate reduces the guesswork in trial-and-error approaches to optimizing the disorder landscape and improves our ability to select a material for applications requiring slow creep. Yet, ultimately, a material's quantum creep rate sets its minimum achievable creep rate. This regime has received relatively little attention---there have been few theoretical and experimental studies of quantum creep. Theoretical models are limited to tunneling barriers induced by weak collective pinning\cite{Blatter1991, PhysRevB.47.2725} and columnar defects,\cite{PhysRevB.51.1181} though most materials have very complex, mixed pinning landscapes. Most experimental work has focused on cuprates, determining a crossover temperature of $\sim$8.5-11 K in YBCO films,\cite{PhysRevB.64.094509, LANDAU2000251, Luo_2002} 1.5-2 K in YBCO crystals,\cite{PhysRevB.59.7222, LANDAU2000251} 5-6 K in Tl$_2$Ba$_2$CaCu$_2$O$_8$ films,\cite{PhysRevB.59.7222, PhysRevB.47.11552} 17 K in TlBa$_2$CaCu$_2$O$_{7-\delta}$,\cite{PhysRevB.64.094509} and 30 K in HgBa$_2$CaCu$_2$O$_{6+\delta}$.\cite{PhysRevB.64.094509} Klein \textit{et al.}\cite{PhysRevB.89.014514} studied an iron-based superconductor, finding a crossover around 1 K in Fe(Te,Se). No studies have been conducted in materials containing inclusions, nor with any systematic tuning of the energy barrier. Furthermore, the crossover between thermal and quantum creep is unclear. As previously mentioned, the Anderson-Kim model's relevance is limited to low temperatures $k_{\mathrm{B}} T \ll U_{p}$, in which $S$ is expected to increase approximately linearly with temperature. A linear fit to this regime often extrapolates to non-zero $S$ at $T = 0$, suggestive of non-thermal creep. In fact, it is common to perfunctorily attribute this extrapolation to quantum creep without conducting measurements in the quantum creep regime.
However, there are compelling discrepancies between typical experimental results in this context and theory. For example, theory predicts that the tunneling probability should decrease with bundle size, whereas experiments often observe the opposite trend (a positive correlation between low-temperature $S$ and field)\cite{Lykov2013}. Theory also predicts a quadratic, rather than linear, temperature dependence of $S(T \rightarrow 0)$\cite{Lykov2013, PhysRevB.59.7222}. That is, quantum creep may be thermally assisted\cite{Blatter1991}, and not simply present itself as a temperature-independent creep rate at low temperatures. An even more confounding result is that Nicodemi \textit{et al.}\cite{PhysRevLett.86.4378} predicted non-zero creep rates at $T = 0$ using Monte Carlo simulations based on a purely classical vortex model, attributing them to non-equilibrium dynamics. It has also been suggested that the overall measured creep rate is simply the sum of the thermal and quantum components.\cite{PhysRevB.64.094509} However, in some iron-based superconductors,\cite{Haberkorn2012a, Haberkorn2014} $S$ is fairly insensitive to $T$ or even decreases with increasing $T$ up to fairly high fractions of $T_{c}$. Hence, either quantum creep is a significant component at surprisingly high temperatures or the creep rate dramatically decreases at temperatures below the measurement base temperature, motivating the need for lower-temperature creep measurements. Superconductors with high normal-state resistivity $\rho_n$ and low $\xi$, such as high-$T_{c}$ cuprates, are the best candidates for having measurable quantum creep rates. This is because the effective quantum creep rate is predicted to be \begin{align} \!\!\! S_q = \begin{cases} (e^2 \rho_n / \hbar \xi) \sqrt{J_{c} / J_{d}},& \!\!\!\!\text{if } L_c<a_0 \\ (e^2 \rho_n / \hbar \lambda)(a_0/\lambda)^4(a_0/\xi)^9(J_{c}/J_{d})^{9/2}, & \!\!\!\!\text{if } L_c > a_0 \end{cases}\!\!
\end{align} where $L_c$ is the length of the vortex segment (or bundle) that tunnels~\cite{Blatter1994}. Determining the dependence of the quantum creep rate on material parameters would fill a major gap in our understanding of vortex physics: it would significantly contribute towards a comprehensive model of vortex dynamics and reveal whether creep may induce measurable effects in quantum circuits, which typically operate at millikelvin temperatures. \subsection{Pinning at the extreme: Can the critical current reach the depairing current?\label{ssec:Jd}} Cooper pairs constituting the dissipationless current in superconductors will dissociate when their kinetic energy surpasses their binding energy. Theoretically, this could be achieved by a sufficiently high current, termed the \textit{depairing current}, $J_{d}$. Consequently, $J_{d}$ is recognized as the theoretical maximum achievable $J_{c}$, such that $J_{c}/J_{d}$ is often equated with the efficiency $\eta$ of the vortex pinning landscape, which may be misleading, as even a perfect defect would not produce $J_{c}=J_{d}$.\cite{Wimbush2015} The most successful efforts to carefully tune the defect landscape obtain $J_{c}/J_{d}$ of only 20-30\%.\cite{Civale10201, Selvamanickam_2015} As exemplified by a series of samples we have measured, see Fig.~\ref{fig:JcJd}, most samples produce $J_{c} / J_{d} < 5\%$, whereas $J_{c} / J_{d}$ is routinely higher for coated conductors (REBCO films). Though this at first appears to be a far cry from the ultimate goal, some surmise that it is near the maximum that can be achieved by immobilizing vortices by means of \textit{core pinning}, which refers to a vortex preferentially sitting in potential wells defined by a defect to minimize the energy of its core.
Wimbush \textit{et al.}\cite{Wimbush2015} present a compelling argument that core pinning can obtain a maximum $J_{c}/J_{d}$ of only $30\%$: a current equivalent to $J_{d}$ would produce a Lorentz force $f_d = J_{d} \Phi_0 = 4 B_{c} \Phi_0 / 3 \sqrt{6}\mu_0 \lambda$, with $B_{c} =\Phi_0/2\sqrt{2} \pi \lambda \xi$ the thermodynamic critical field. At the same time, the condensation energy per unit length $\varepsilon_{sc} \approx \pi \xi^2 B_{c}^2 / 2\mu_0$ produces a characteristic pinning force $f_p^{core} \sim \varepsilon_{sc}/\xi$, such that the ratio of the maximal core pinning force to the depairing Lorentz force is \begin{align} f_p^{core} / f_d &= 3 \sqrt{3} / 16 \approx 32 \%. \end{align} Similarly, Matsushita\cite{matsushita2007} performed a more precise calculation by considering the geometry of the flux line and found that $f_p^{core}/f_d \approx 28 \%$. Hence, decades of work in designing the defect landscape to pin vortex cores may have nearly accomplished the maximum efficiency achievable by means of core pinning. \begin{figure} \centering \includegraphics[width=1\linewidth]{figures/JcJd.pdf} \caption{\label{fig:JcJd} Critical current $J_{c}$ normalized to the depairing current for various superconductors at $T=\SI{5}{\kelvin}$ and $\mu_0 H = \SI{0.3}{\tesla}$. The data include Dy$_2$O$_3$-doped YBa$_2$Cu$_3$O$_{7-\delta}$ commercial coated conductors and Ba$M$O$_3$-doped Y$_{0.77}$Gd$_{0.23}$Ba$_2$Cu$_3$O$_{7-\delta}$ films (where $M =$ Sn, Zr, or Hf), all grown via metal organic deposition.} \end{figure} If the ultimate goal of $J_{c} = J_{d}$ cannot be obtained by core pinning alone, are there other mechanisms to immobilize vortices that could produce $J_{c}/J_{d} > 30 \%$? Magnetic interactions between vortices themselves or between vortices and magnetic inclusions can also restrict the motion of a vortex---referred to as \textit{magnetic pinning}.
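The $3\sqrt{3}/16$ ratio can be verified numerically from the expressions above; note that it is independent of the material parameters (the $\lambda$ and $\xi$ values below are arbitrary placeholders):

```python
# Verify f_p^core / f_d = 3*sqrt(3)/16 ~ 32% from the formulas in the text.
import math

PHI_0 = 2.067833848e-15   # flux quantum (Wb)
MU_0 = 4.0e-7 * math.pi   # vacuum permeability (T m/A)
lam, xi = 150e-9, 2.0e-9  # arbitrary placeholder values (m)

Bc = PHI_0 / (2.0 * math.sqrt(2.0) * math.pi * lam * xi)      # thermodynamic field
f_d = 4.0 * Bc * PHI_0 / (3.0 * math.sqrt(6.0) * MU_0 * lam)  # depairing Lorentz force
eps_sc = math.pi * xi**2 * Bc**2 / (2.0 * MU_0)  # condensation energy per unit length
f_p_core = eps_sc / xi                           # maximal core pinning force

ratio = f_p_core / f_d
assert abs(ratio - 3.0 * math.sqrt(3.0) / 16.0) < 1e-12
print(f"f_p^core / f_d = {ratio:.4f}")  # ~0.3248, independent of lam and xi
```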
Herein lies a transformative opportunity to make large strides towards approaching $J_{c} = J_{d}$. Magnetic pinning, alone or in combination with core pinning, may produce unprecedentedly high values of $J_{c}$, though this mechanism has received considerably less attention than core pinning because it is quite complicated to actualize. Vortices at high density tend to arrange themselves into a hexagonal lattice, and one pinned to a defect via core pinning may restrict the motion of its neighbors, subsequently affecting its neighbors' neighbors through magnetic vortex-vortex interactions, which occur over a length scale of $\lambda$. A magnetic inclusion provides another opportunity for exerting magnetic as well as core pinning on a vortex. Again, following the arguments of Wimbush\cite{Wimbush2015}, we can compare the pinning force induced by core pinning to that of magnetic pinning. The magnetic Zeeman energy $\varepsilon_{mag} = \frac{1}{2} \int_{A} M \cdot B \,dA$ produced by a strong ferromagnet is much greater than the condensation energy and may be several orders of magnitude greater than the core pinning energy. However, it is not clear that the resulting pinning force is greater, because the magnetic interaction occurs over the longer length scale $\lambda$ rather than $\xi$, i.e., $f_p^{mag} \sim \varepsilon_{mag} / \lambda$, such that \begin{equation} f_p^{mag} / f_p^{core} \approx 2 \sqrt{2} (\mu_0 M / B_{c}). \end{equation} Hence, the advantage depends on the ratio of the magnetization of the pinning site to the thermodynamic critical field. Independent of whether $f_p^{mag}$ surpasses $f_p^{core}$, concomitant mechanisms would produce an additive effect that may surpass current record values of $J_{c}$. Yet, ferromagnets in proximity to superconductors can locally degrade superconductivity by inducing pair breaking, such that it is challenging to incorporate ferromagnetic vortex pinning centers without compromising the superconducting state.
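Plugging representative numbers into this ratio (both values below are round, illustrative choices of ours: an iron-like saturation magnetization and a cuprate-scale $B_c$ of order \SI{1}{\tesla}) suggests a potential gain of several times over core pinning:

```python
# Estimate f_p^mag / f_p^core ~ 2*sqrt(2) * mu0*M / Bc for illustrative numbers.
import math

MU_0 = 4.0e-7 * math.pi
M_sat = 1.7e6   # A/m, iron-like saturation magnetization (illustrative)
B_c = 1.0       # T, cuprate-scale thermodynamic critical field (illustrative)

gain = 2.0 * math.sqrt(2.0) * MU_0 * M_sat / B_c
print(f"f_p^mag / f_p^core ~ {gain:.1f}")  # a several-fold enhancement
```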
This complication, combined with the usual materials-science requirement that inclusions not induce too much strain in the surrounding superconducting matrix, makes this an exceptionally difficult challenge. In addition to magnetic pinning, exploiting geometric pinning provides another potentially transformative opportunity to dramatically boost $J_{c}$ in superconductors. In clean, narrow (sub-micron) superconducting strips, geometric restrictions can induce self-arrest of vortices, recovering the dissipation-free state at high fields and temperatures, due to surface/edge (Bean-Livingston barrier) \cite{Bean1964} or geometric \cite{Zeldov1994, Zeldov1994b, Brandt1999, Willa2014} pinning. Figure~\ref{fig:doublestrip} depicts an example of geometric vortex pinning around two superconducting strips. Moreover, at a fixed applied current, the magnetoresistance (MR) shows oscillations with increasing magnetic field, indicating the penetration of complete vortex rows into the system~\cite{Papari2016}. These MR oscillations therefore offer a way to determine the vortex structure in nanoscale superconductors. At very high fields, the vortex lattice in these strips starts to melt. Combining magnetoresistance measurements and numerical simulations can then relate those MR oscillations to the penetration of vortex rows in the regime of intermediate geometrical pinning, where the vortex lattice remains unchanged, and uncover the details of geometrical melting. This opens the possibility to control vortices in geometrically restricted nanodevices and represents a novel technique of `geometrical spectroscopy': the combined use of MR measurements and large-scale simulations would reveal detailed information about the structure of the vortex system.
A similar re-entrant behavior was observed in superconducting strips in a parallel-field configuration: here, high fields lead to `vortex crowding', in which the increasingly dense vortex lines begin to straighten, thereby reducing the Lorentz force on the vortices. The result is an intermediate dissipation-free state~\cite{parallel2017}. The situation becomes more complex when one considers nanosized superconducting strips and bridges, in which vortex pinning is dictated by an intricate interplay of surface and bulk pinning. As described above, in the case of a very narrow bridge, $J_{c}$ is mostly defined by its surface barrier, whereas in the opposite case of very wide strips, it is dominated by bulk pinning properties. However, the intermediate regime, where the critical current is determined both by bulk pinning and by the Bean-Livingston barrier at the edge of the strip, is of great interest for small superconducting structures and wires. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{figures/figure-double-strip} \caption{\label{fig:doublestrip} Field profile in a double-strip geometry before penetration of vortices \cite{Willa2014}. The arrangement and geometry (e.g.\ width $w$ and thickness $d$) of the specimen significantly influence the relative importance of Bean-Livingston, geometric and bulk pinning. Material from R.\ Willa, \emph{ETH Zurich Research Collection}, see Ref.~[\onlinecite{Willa2016-thesis}]. } \end{figure} Recent studies~\cite{kimmel2019} revealed that while bulk defects arrest vortex motion away from the edges, defects in the close vicinity of an edge promote vortex penetration, thus suppressing the critical current. This phenomenon is also quite important in the study of superconducting radio-frequency cavities. Furthermore, the role of defects near the penetrating edge is asymmetric compared to the exit edge of a superconducting strip.
This complex interplay of bulk and edge pinning opens new opportunities for tailoring the pinning structure to a given application. Even in the simple case of a straight strip with similar-type spherical defects, an optimized defect distribution can yield a critical current density more than 30\% higher than that of a homogeneously disordered superconducting film. The need for high-current, low-cost superconductors continues to grow with new applications in lightweight motors and generators, as well as strong magnets for high-energy accelerators, NMR machines, or even Tokamak fusion reactors. Many of these applications require large magnetic fields and therefore large critical currents, which poses both a fundamental research and an engineering challenge, as it requires the reliable fabrication of uniform, km-long, high-performance superconducting cables having an optimal pinning microstructure. Consequently, there are two main aspects that must be addressed for large-field applications: (i) determining the best possible pinning landscape and geometry for a targeted application and (ii) controlling the fabrication of long superconducting cables so as to incorporate an optimized pinning landscape with the highest possible uniformity. Both of these aspects are part of the critical-current-by-design paradigm~\cite{ROPP}. We will describe these in a more general context in section~\ref{ssec:AI}. Taking advantage of the modern computational approaches described there, in combination with experiments, opens novel pathways to new materials for large-field applications, in particular the use of high-$T_{c}$ superconducting materials instead of the more traditional choices of elemental Nb or Nb-based compounds. \subsection{Superconducting RF cavities and Quantum Circuits} \label{sec:RFcavities} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/fig_srf.pdf} \caption{\label{fig:srf} Surface disorder and multilayers in SRF cavities.
\textbf{(a)} Sketch showing how vortices (red) parallel to the surface of a cavity wall penetrate the wall (outside the superconductor, the red lines illustrate field lines), \textbf{(b)} intercalating insulating layers (SI[SI]S) will cause vortex pancakes to form and might limit the penetration depth of vortices~\cite{Gurevich2006,Gurevich2017}. \textbf{(c)} Simulation snapshot of surface vortex penetration into a type-II superconductor having spherical defects (yellow) near the surface in an AC magnetic field parallel to the surface. Vortex lines are shown in red. The planar projection shows the superconducting order parameter amplitude.} \end{figure*} \subsubsection{Superconducting RF cavities} Most studies of vortex dynamics in superconductors are conducted using DC currents and static magnetic fields. Yet, the need to understand vortex dynamics under AC magnetic fields or AC currents is rapidly increasing, as these are the operating conditions, e.g., for superconducting radio frequency (SRF) cavities and quantum circuits for sensing and computing. Superconductors are desirable for RF devices because their minimal resistance enables very high quality factors $Q$, a metric that indicates high energy storage with low losses and narrow bandwidths. SRF cavities are central to current and next-generation designs for particle accelerators used in, e.g., high-energy physics. In addition to $Q$, the maximum accelerating field, $E_a$, is another important metric of SRF cavity performance. The goal is often to maximize $Q$ at low drive powers, which is essential for large accelerating fields and reduced demands on the cryogenic systems responsible for cooling the cavities. Similarly, higher $E_a$ is desirable, as it indicates larger reachable particle energies. Elemental niobium (Nb) and Nb-based compounds are the materials of choice for all current accelerator applications.
Advances in the fabrication of Nb-cavities have pushed their performance to extraordinary levels~\cite{rfbook,SRF2017}, with $Q$-values approaching $2\times 10^{11}$ and $E_a$ in excess of $45$ MV/m in Nb~\cite{qf2007}, and $E_a\sim$ 17 MV/m for Nb$_3$Sn resonators. In both cases, the magnetic field reached is above the lower critical field of the material, but below the theoretically predicted superheating field, at which vortices would spontaneously penetrate even a perfect cavity wall, shown in Fig.~\ref{fig:srf}a. Further increases in $E_a$ require a conceptual breakthrough in our understanding of Nb-cavity performance limits or new constituent materials. New material candidates being considered include Nb$_3$Sn, NbTiN, MgB$_2$, Fe-based superconductors, and engineered multilayer or stratified structures (see Fig.~\ref{fig:srf}b). SRF cavities operate at temperatures well below $T_{c}$ and at high enough frequencies to drive the superconductor into a metastable state, near breakdown. The resulting RF period approaches intrinsic time scales, such as vortex nucleation and quasiparticle relaxation times. While the experimental progress in improving the performance and quality factor of SRF cavities has been impressive~\cite{SRF2017}, e.g., the counter-intuitive increase of the quality factor with nitrogen doping, it is mostly based on trial-and-error approaches. A deep fundamental understanding is important to make more systematic progress, requiring new theoretical and computational studies. Because the cavities operate out of equilibrium, a phenomenological description based on TDGL theory can only serve as a rough, qualitative guide. Developing a fundamental theory describing the nonlinear and non-equilibrium current response of SRF cavities requires a microscopic description based on quantum transport equations for non-equilibrium superconductors.
A microscopic description, however, is challenging because the RF currents under strong drive conditions (i.e., high field frequencies and amplitudes near the breakdown/superheating field) affect both the superconducting order parameter and the kinetics of quasiparticles, all of which have to be treated self-consistently. This endeavor requires development of numerical approaches to solve the quantum transport equations, based on the Keldysh and Eilenberger formulations of non-equilibrium superconductivity in the strong-coupling limit. The Keldysh-Eilenberger quantum transport equations are, in general, non-local in space-time, non-linear, and in many physical situations involve multiple length and time scales. Solving these equations requires considerable computational resources, which are now becoming available with exa-scale computing facilities. Herein lies a transformative opportunity to dramatically boost the performance of SRF cavities. Namely, researchers are now equipped to develop microscopic theoretical models, and incorporate them into computational codes, to reveal the origin and mechanisms that limit the accelerating field of SRF cavities. The acquired knowledge will then guide materials optimization to maximize the critical currents, superheating fields, and quench fields. Reaching the theoretical limits for these parameters necessitates suppressing vortex nucleation. At high RF magnetic field amplitudes, screening currents near the vacuum-superconductor interface can nucleate Abrikosov vortices that can quench the cavity, see Fig.~\ref{fig:srf}c. This \emph{vortex breakdown} depends on (i) the amplitude and frequency of the surface field, (ii) the cavity's surface roughness, and (iii) the type, distribution, and size of defects near the interface. 
The impact of near-surface defects on vortices is two-sided: they may reduce the potential barrier for vortex nucleation~\cite{kimmel2019}, but they also pin nascent vortices generated by nucleation at the surface, preventing a vortex avalanche and substantial dissipation. Given this, there are several possible optimal microstructures for the SRF constituent materials: (i) a ``clean'' superconductor with a maximum surface barrier, (ii) a superconductor with a thin (a few $\xi$) defect-free surface layer and nanoscale defects in its bulk, or (iii) some special spatial gradient in the size and/or density of defects. Large-scale TDGL simulations can be applied to study the conditions under which vortex avalanches form, to devise mechanisms that are effective at mitigating these avalanches and, more generally, to gain insight into flux penetration under RF field conditions. Furthermore, by coupling the TDGL and heat transport equations, this method can be used to study \emph{hot spots}, providing insight into how to avoid their formation in SRF cavity walls. As previously mentioned, though TDGL cannot produce quantitative results, it can serve as a useful guide for experiments and also provide insight into simulations based on microscopic transport equations. Lastly, though discussed here in the context of SRF cavities, these new computational methods can also be applied to superconducting cables for AC power applications. \subsubsection{Vortices in Quantum Circuits\label{ssec:quantumcircuits}} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{figures/Resonators2.pdf} \caption{\label{fig:Resonator} \textbf{(a)} An optical image showing multiple $\lambda/4$ resonators multiplexed to a common feed line and surrounded by a ground plane containing holes to pin vortices and suppress vortex formation, \textbf{(b, c)} Scanning electron micrographs of superconducting CPW resonators without \textbf{(b)} and with \textbf{(c)} holes.
\textbf{(d)} Quality factor $Q_i$ versus $B_{\perp}$ for varying hole density $\rho_h$. The field at which the vortex density matches the hole density (each hole is filled with one vortex) is plotted with a color-matched vertical line. Above this threshold field, additional vortices are not pinned by the holes but instead only weakly pinned by film defects and interstitial pinning effects. \textbf{(e)} $\Delta f_r/f_r$ versus $B_{\perp}$ for varying $\rho_h$. Reprinted with permission from Ref.~[\onlinecite{Kroll2019}]. Copyright 2019, \emph{American Physical Society}. } \end{figure*} \paragraph{Energy loss due to vortices.} Similar to SRF cavities, superconducting circuits for quantum information also operate at RF/microwave frequencies and are affected by vortices. Specifically, along with parasitic two-level fluctuators and quasiparticles, vortices are a considerable source of energy loss in superconducting quantum circuits\cite{Muller2019, Oliver2013, Martinis2009}. These energy loss mechanisms create a noisy environment with which the qubit interacts stochastically and irreversibly, rather than deterministically. Consequently, the evolution of the quantum state is unpredictable, increasingly deviating over time from predictions until the qubit state is eventually lost. This process is called decoherence; it limits the time $T_1$ over which information is retained in qubits to the microsecond range, and there is typically a large spread in the $T_1$ times of the individual qubits in multi-qubit systems\cite{Finke2019}. Vortices appear in superconducting quantum circuits due to stray magnetic fields, self-fields generated by bias currents, and pulsed control fields. In addition to limiting $T_1$ in qubits, thermally-activated vortex motion can cause significant noise in superconducting circuits and reduce the quality factor $Q$ of superconducting microwave resonators\cite{Song2009, Song2009a, Kroll2019}.
To mitigate this, techniques have been developed to either prevent vortex formation or trap vortices in regions outside the path of the operating currents. Shielding circuits from ambient magnetic fields and narrowing the linewidths constituting the device\cite{DVH2004, Samkharadze2016} significantly reduce the vortex population. For example, for a line of width $w$, flux will not enter until the field surpasses $\Phi_0/w^2$. Because of this, flux qubits typically contain linewidths of $\SI{1}{\micro\meter}$ and therefore exclude vortex formation up to a threshold magnetic field of roughly \SI{2}{\milli\tesla}, which is 20 times larger than the Earth's magnetic field\cite{DVH2004}. Though shielding has enabled remarkable headway in improving the stability of superconducting qubits for computing applications, it is not a complete solution. A reasonable amount of shielding can only suppress the field by a limited amount, which may be insufficient if devices must operate in high-field environments. Moreover, shielding may render devices useless in quantum sensing applications, in which the purpose of the device is to sense the external environment. This has sparked research on further modifications of the device design and on understanding the effects of a magnetic field on different architectures of quantum circuits, including transmon qubits\cite{Schneider2019} and superconducting resonators\cite{Bothner2017, Kroll2019}, which are integral components of readout. In addition to shielding, another common remedy for the vortex problem in superconducting circuits involves micropatterning arrays of holes in the ground plane to serve as vortex traps and to reduce the prevalence of vortex formation within the perforated area\cite{Song2009a, Bothner2011, Bothner2012, Chiaro2016}, see Fig.~\ref{fig:Resonator}.
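The quoted threshold follows directly from the $\Phi_0/w^2$ criterion. A quick check (Python sketch; the \SI{1}{\micro\meter} linewidth is the value from the text):

```python
PHI0 = 2.067833848e-15  # magnetic flux quantum, Wb

def threshold_field(w):
    """Field (in T) below which a strip of width w (in m) excludes
    vortices, using the Phi0/w^2 criterion quoted in the text."""
    return PHI0 / w**2

B_th = threshold_field(1e-6)  # 1 micrometer linewidth
print(f"B_th = {B_th * 1e3:.2f} mT")  # ~2.07 mT, i.e. roughly 2 mT
```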
For example, Bothner et al.\ found that $Q(B=\SI{4.5}{\milli\tesla})$ is a factor of 2.5 higher in Nb resonators containing flux-trapping holes in the ground plane \cite{Bothner2011} than in resonators without holes. However, Chiaro et al.\cite{Chiaro2016} showed that, without careful design, these features can increase the density of, and subsequently the losses from, parasitic two-level fluctuators, thought to primarily form at surfaces and interfaces. Moreover, coplanar waveguide resonators were recently found to be more robust to external magnetic fields when the superconducting ground plane area is reduced, which lowers the effective magnetic field inside the cavity, and when the resonator is coupled inductively instead of capacitively to the microwave feedline, shielding the feedline\cite{Bothner2017}. The methods we have discussed here have engendered tremendous advances in suppressing the vortex problem in superconducting quantum circuits; however, the details are material-dependent. Likewise, optimal mitigation strategies may be material-dependent. For example, Song et al.\ compared the microwave response of vortices in superconducting Re (rhenium) and Al coplanar waveguide resonators with a lattice of flux-trapping holes in the ground plane. Generally, in both systems, vortices shift the resonance frequency $f_0$, broaden the resonance dip $|S_{21}|(f)$, and reduce the quality factor $Q$. However, vortices in the Al resonators induce greater loss and are more sensitive to flux-creep effects than those in the Re resonators. The Al resonator also experienced a far more substantial fractional frequency shift $df/f_0$ with increasing frequency than the Re resonator. Furthermore, while the loss $1/Q$ due to vortices increased with frequency for Re, it decreased for Al.
Most research on the microwave response of vortices in quantum circuits is limited to Al\cite{Song2009, Song2009a, Chiaro2016, PhysRevLett.113.117002, Wang2014}, Nb\cite{Bothner2012, Bothner2011, Stan2004, Kwon2018, Golosovsky1995}, NbTiN\cite{Samkharadze2016, Kroll2019}, and Re\cite{Song2009a}. Whereas Al and Nb are used in commercial quantum computers, superconducting nitrides (TiN, NbN, NbTiN)\cite{Sage2011, Ohya2013, Vissers2012a, Leduc2013, Sandberg2012, Chang2013, Kerman2006, Barends2010a, Barends2010b, Bruno2015} and Re have garnered substantial attention because they may suffer less from parasitic two-level fluctuators, which are particularly problematic in oxides and at interfaces\cite{Muller2019}. Nitrides and Re are known to develop thinner oxide layers than Al and Nb, and can be grown epitaxially on common substrates\cite{Dumur2016, WangMartinis2009, Vissers2010}. To develop a generic understanding of how to design quantum circuits that are resilient to ambient magnetic fields, and to control vortices in circuits made of next-generation materials, we must study circuits consisting of a broader range of materials, perform further studies on nitride-based circuits, investigate different designs for flux trapping, and conduct imaging studies that can observe, rather than infer, the efficacy of vortex pinning sites. A few studies that imaged vortices in superconducting strips have provided guidance on appropriate line widths to preclude vortex formation\cite{Stan2004, Kuit2008}. Building upon this, imaging studies of devices (using, e.g., a scanning SQUID or magnetic force microscope) would inform on the efficacy of flux-trapping sites, reveal the locations in which vortices form, and track vortex motion. \paragraph{Vortices in topological quantum computing schemes.} Up until now, we have discussed vortices exclusively as a nuisance, which is indeed the case for a broad range of applications.
A notable exception lies in the burgeoning field of topological quantum computing, in which vortices serve as hosts for Majorana modes\cite{Liu2019}. Qubits encoded using Majorana modes are predicted to be relatively robust to noise and thus to have long coherence times. One way to realize this is to couple a superconductor to a topological insulator and induce vortices in the superconductor; Majorana states are then predicted to nucleate in the vortex cores. (Majorana modes have also been theorized to exist in other systems\cite{Grosfeld2011, Nenoff2019, You2014, DasSarma2012, Bjorn2012, Liang2016, Alicea_2012}.) Initially elusive, signatures of Majorana vortex modes have recently been observed in a variety of systems, including the iron-based superconductor $\mathrm{Fe}\mathrm{Te}_{x}\mathrm{Se}_{1-x}$ \cite{Chiueaay2020, Ghaemi2020}, EuS islands overlying gold nanowires \cite{Manna2020}, a superconducting Al layer encasing an InAs nanowire\cite{Vaitiekenaseaav2020}, and Bi$_2$Te$_3$/NbSe$_2$ heterostructures\cite{JFJia2016}. To exploit these modes for computing, we must be able to control their vortex hosts. Consequently, vortex pinning research will be beneficial to vortex-based topological quantum computing applications. \subsection{Vortex matter genome using artificial intelligence: Critical-current-by-design}\label{ssec:AI} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{figures/fig_AIx.pdf} \caption{\label{fig:AI} Critical-current-by-design using: \textbf{(a)} Genetic algorithms to optimize critical currents. Starting with a superconductor having intrinsic defects, genetic algorithms can be used to optimize the defect structure by mutation of defects and targeted selection of landscapes with larger critical currents. A mutation of a defect (or several defects at once) can be done by, e.g., translation, resizing, deletion, or splitting as sketched in the defect sequence on the right.
Overall, this procedure creates ``generations'' of mutated defect configurations, and only the best is selected as the seed for the next generation, as shown in the partial tree on the left (circles/dots represent configurations, where the large numbered one is the best). Using neural networks and machine learning (ML) to predict the best mutations could further improve the targeted selection approach~\cite{Sadovskyy2019}. \textbf{(b)} ML/Artificial intelligence (AI) to improve and tailor defect landscapes in superconductors. \textbf{\protect\circled{1}} illustrates how AI models can be used to predict pinning landscapes from synthesis parameters and, vice versa, to predict synthesis parameters like precursor concentrations, pressures, and temperatures in, e.g., vapor deposition methods, for a targeted pinning landscape. The models need to be trained by experimental or simulation data sets. \textbf{\protect\circled{2}} similarly shows how to directly predict critical current dependencies, like field orientation dependencies, from pinning topologies and vice versa. Again, the underlying model is trained by experimental and simulation data.} \end{figure*} Over the years, research in superconductivity and vortex pinning has produced large amounts of experimental and simulation data on microstructures, synthesis, and critical current behavior. More recently, artificial intelligence (AI) and machine learning (ML) approaches have enabled revolutionary advances in the fields of image and speech recognition as well as automatic translation, and are now finding an increasing number of applications in scientific research areas that deal with massive data sets, like particle physics, structural biology, astronomy, and spectroscopy. Combining these data sets with sophisticated ML algorithms and AI models will enable novel approaches to predicting pinning landscapes in superconductors for the future design of materials with tailored properties.
This has become a promising approach within the critical-current-by-design paradigm, which refers to designing superconductors with desired properties using sophisticated numerical methods, replacing traditional trial-and-error approaches. These properties include maximizing critical currents, achieving critical currents that are robust with respect to variations of the pinning landscape (which is important for large-scale commercial applications), or attaining uniform critical currents with respect to the magnetic field orientation. The next step towards advancing the use of AI/ML approaches for critical-current-by-design may be to build upon the genetic algorithms implemented in Ref.~[\onlinecite{Sadovskyy2019}] to optimize pinning landscapes for maximum $J_{c}$. This approach utilizes the idea of targeted selection inspired by biological natural selection. In contrast with conventional optimization techniques, such as coordinate descent, in which one varies only a few parameters characterizing the entire sample, targeted evolution allows variations of each defect individually, without any \textit{a priori} assumptions about the defects' configuration. This essentially means that one solves an optimization problem with, theoretically, infinitely many degrees of freedom. Ref.~[\onlinecite{Sadovskyy2019}] demonstrated the feasibility of this approach for clean samples as well as for ones with preexisting defects, e.g., as found in commercial coated conductors. The latter, therefore, provides a post-synthesis optimization step for existing state-of-the-art wires and a promising path toward the design of tailored functional materials. However, the mutations of the defects required for the genetic algorithm [see Fig.~\ref{fig:AI}(a)] were chosen randomly. These mutations generate ``generations'' of pinning landscapes, of which the best is chosen by targeted selection and then used as a seed for the next generation.
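The mutate-and-select loop just described can be sketched in a few lines. The following toy example is purely illustrative: the surrogate fitness function \texttt{jc()} is invented here as a stand-in for a full critical-current simulation, and defects are reduced to (position, radius) pairs on a one-dimensional strip; only the mutation and targeted-selection structure mirrors the algorithm described above.

```python
import random

def jc(defects):
    """Hypothetical surrogate for a critical-current simulation:
    reward total defect coverage, penalize overlapping defects."""
    coverage = sum(r for _, r in defects)
    overlap = sum(max(0.0, (r1 + r2) - abs(x1 - x2))
                  for i, (x1, r1) in enumerate(defects)
                  for (x2, r2) in defects[i + 1:])
    return coverage - 5.0 * overlap

def mutate(defects):
    """Apply one random mutation: translate, resize, delete, or split."""
    new = [list(d) for d in defects]
    op = random.choice(["translate", "resize", "delete", "split"])
    i = random.randrange(len(new))
    if op == "translate":
        new[i][0] += random.uniform(-0.05, 0.05)
    elif op == "resize":
        new[i][1] = max(0.01, new[i][1] + random.uniform(-0.02, 0.02))
    elif op == "delete" and len(new) > 1:
        del new[i]
    else:  # split into two half-size defects (also the fallback case)
        x, r = new[i]
        new[i] = [x - r / 2, r / 2]
        new.append([x + r / 2, r / 2])
    return [tuple(d) for d in new]

def evolve(seed, generations=200, children=8):
    """Targeted selection: keep the fittest landscape of each generation."""
    best = seed
    for _ in range(generations):
        brood = [mutate(best) for _ in range(children)] + [best]
        best = max(brood, key=jc)
    return best

random.seed(0)
seed = [(x / 10, 0.03) for x in range(10)]  # initial defect landscape
best = evolve(seed)
print(f"jc improved from {jc(seed):.3f} to {jc(best):.3f}")
```

Since the current best landscape is always retained in each brood, the fitness is non-decreasing by construction, which is the essential property of the targeted-selection scheme.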
Using a simple machine learning approach could further enhance the convergence of this method by performing only mutations that have higher probabilities of enhancing the critical current. Besides superconductors, this methodology can be used to improve the intrinsic properties of other materials in which defects or the topological structure play an important role, such as magnets or nanostructured thermoelectrics. Going beyond these ML-improved simulations, one can build quantitative data-driven AI approaches for superconductors that will enable, e.g., predicting the critical current phase diagram and extracting the defect morphology responsible for its performance directly from the existing accumulated experimental and simulation data, without actual dynamics simulations. Here, we will discuss two potentially transformative opportunities, summarized in Fig.~\ref{fig:AI}(b). The first application is motivated by the need to reliably produce uniform superconductors on macroscopic commercial scales. This requires a deep understanding of materials synthesis processes, e.g., for self-assembled pinning structures in a superconducting matrix (see Fig.~\ref{fig:AI}(b), \textbf{\circled{1}}). Materials at the forefront of this quest are REBCO films with self-assembled oxide inclusions in the form of nanorods and nanoparticles~\cite{Obradors2014, ROPP}. For example, BaZrO$_3$ (BZO) nanorods that nucleate and grow during metal organic chemical vapor deposition (MOCVD) have proven particularly effective for pinning vortices~\cite{majkic2017}. The major difficulties in achieving consistent and uniform critical currents in REBCO tapes containing BZO nanorods are the interplay of the many parameters controlling the deposition process (temperature of the substrate and of the precursor gases, deposition rate, precursor composition, etc.) and the strong sensitivity of the microstructure to small variations in these parameters.
Even for the same nominal level of Zr additives, significant variations in nanorod diameter, size distribution, spacing distribution, and angular splay have been observed. The physical factors controlling these variations remain poorly understood. For example, the nanorods' diameter may be a mostly equilibrium property resulting from the interplay of strain and surface energies, or it may be caused by kinetic effects controlled by the surface diffusion of adatoms and the deposition rate. These complexities have precluded the development of predictive models. However, making use of the accumulated experimental data sets, and possibly of synthesis/kinetic growth simulation data (Monte Carlo or molecular dynamics simulations, which are also still in an exploratory phase), allows building ML/AI models to predict pinning landscapes for given synthesis parameters, as described above, or, more relevant for commercial applications, to predict synthesis parameters for a desired, uniform pinning landscape. To constitute a complete \textit{vortex-pinning genome}, a second notable milestone is using AI to predict $J_{c}$ for a given pinning landscape based solely on data recognition (disregarding TDGL simulations) and, conversely, to predict the pinning landscape necessary to produce a desired $J_{c}$. In fact, the latter cannot be achieved by direct simulations. Typical data sets, both experimental and simulation-based, contain information on defect structures, critical currents, and other transport characteristics for a wide range of magnetic fields and temperatures. Creating an organized database of this information would enable (i) quickly accessible critical current values for a wide range of conditions, (ii) an effective mapping of simulation parameters onto experimental measurements, and (iii) using the data as training sets for AI-driven predictions of defect structures for desired applications.
Experimentally, microstructures are routinely probed by transmission electron microscopy (TEM) and, less directly, by x-ray diffraction (XRD). In contrast to simulation data, which contain complete information about the pinning landscape, the experimentally extracted information is usually rather limited, since TEM only allows imaging of thin slices of the material and only detects relatively large defects. A full 3D tomography of defect landscapes [cf. Fig.~\ref{fig:tomogram}(a)] is very time consuming and expensive, and therefore typically infeasible at present. The resulting AI/ML models will also allow for a cross-validation of the simulation-based data against available experimental data on materials properties in superconductors with different defect microstructures. Overall, this AI/ML approach will directly reduce the cost and development time of commercial superconductors and, in particular, accelerate their design for targeted applications. To estimate the benefit of such an approach, consider, for example, a pinning landscape defined by 9 parameters. Using traditional interpolation in this 9-dimensional parameter space, one would need a certain number of data points per parameter. For the modest case of 15 data points per direction, one would need to simulate (measurements are infeasible) $15^9\approx 4\cdot 10^{10}$ pinning landscapes, which -- assuming 15 minutes per simulation on a single GPU -- results in a total simulation time of about a million GPU years. This is beyond current capabilities, even on the largest supercomputers. However, surrogate ML models can reduce this to approximately $10^4$ simulations while maintaining the same resulting accuracy (see seed studies in, e.g., Ref.~[\onlinecite{crombecq2011}]). In this section, we mentioned the complications associated with 3D tomographic imaging of a superconductor's microstructure to supply complete information for simulations.
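The back-of-the-envelope estimate above is easy to reproduce. A minimal check of the brute-force versus surrogate cost (Python; the 15-minutes-per-simulation figure is the one assumed in the text):

```python
def gpu_years(n_simulations, minutes_each=15):
    """Total simulation time in GPU years at a fixed cost per run."""
    return n_simulations * minutes_each / (60 * 24 * 365)

brute_force = 15 ** 9  # 15 grid points in each of 9 parameters
print(f"{brute_force:.1e} landscapes -> {gpu_years(brute_force):.1e} GPU years")
# 3.8e+10 landscapes -> 1.1e+06 GPU years: about a million GPU years
print(f"surrogate ML (~1e4 runs) -> {gpu_years(1e4):.2f} GPU years")
# ~0.29 GPU years: feasible on a modest cluster
```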
In the next section, we detail the limitations of tomographic imaging and other advanced microscopy techniques, many of which will be revolutionized by improvements in computational power and the application of advanced neural networks. This in turn will have a transformative impact on vortex physics. \subsection{Advanced microscopy to better understand vortex-defect interactions\label{ssec:microscopy}} \subsubsection{Quantitative point-defect spectroscopy} We have discussed the role of point defects in suppressing vortex motion via weak collective pinning. Notably, the theory of weak collective pinning\cite{Larkin1979} has attracted significant attention as it can explain the origin of novel vortex phases, e.g.\ vortex glass~\cite{Fisher1989,Fisher1991} and vortex liquid~\cite{Nelson1988} phases, as well as the associated vortex melting phase transition~\cite{Brandt1989, Houghton1989}. It cannot, however, be used to predict $J_{c}$ in single crystals, whose defect landscape is dominated by point defects. This limitation is not necessarily reflective of gaps in weak collective pinning theory itself, but rather of the fact that point defect densities are typically unknown because they are extremely challenging to measure over a broad spatial range. Consequently, point defects are the dark matter of materials. Herein lies yet another transformative opportunity in vortex physics. Developing a technique to accurately measure point defect profiles and subsequent systematic studies correlating point defects, $J_{c}(B,T)$, and $S(B,T)$ may lead to recipes for predictably tuning the properties of superconductors, most directly impacting crystals and epitaxial materials that lack significant contributions from strong pinning centers. The most promising routes for quantitative point defect microscopy include scanning transmission electron microscopy (STEM), atom probe tomography (APT), atomic electron tomography (AET), and positron annihilation lifetime spectroscopy (PALS).
Here, we primarily focus on STEM combined with AET, then will introduce APT and PALS as other techniques with atomic-scale resolution that are relatively untapped opportunities to reveal structure-property relationships in superconductors. In STEM, an imaging electron beam is transmitted through a thin specimen, such that detectors can construct a real-space image of the microstructure and collect other diffraction data. In superconductors, STEM studies have revealed a panoply of defects, including columnar tracks, defect clusters, dislocations, twin boundaries, grain boundaries, and stacking faults. These studies can also provide information on other pertinent microstructural properties, including strained regions that induce variations in the superconducting order parameter and are, therefore, preferential regions for vortices to sit in an otherwise defect-free landscape. To identify dopants (e.g. BaHfO$_3$ nanoparticles), STEM is also often performed in conjunction with analytical techniques, such as energy dispersive x-ray spectroscopy. To understand the ability of STEM to determine point defect densities in superconductors, we must first understand what limits the spatial resolution and throughput. Older STEMs cannot resolve point defects due to imperfections (aberrations) in the objective lenses and other factors that set the resolution well above the wavelength of the imaging beam. Atomic resolution was finally achieved upon the advent of transformational aberration correction schemes, which were first successfully demonstrated in the late 1990s and have seen increasingly wide adoption over the past decade\cite{Batson2002, ROSE20051, Dahmen2009, Ophus2017}. In fact, the spatial resolution of an aberration-corrected STEM has now fallen below the Bohr radius of $\SI{53}{\pico\meter}$ \cite{kisielowski2008, Erni2009, Alem2009, Naoya2019}.
Though point defects can now be imaged in superconductors, it is not straightforward to determine point defect densities. A single scan captures a small fraction of the sample, which may not be representative of defect distributions throughout the entire specimen. Accordingly, low throughput prevents collecting a sufficiently large dataset to provide a reasonably quantitative picture of defect concentrations. One of the limiting factors for throughput is the detector speed, which has recently improved significantly owing to the development of direct electron detectors such as active pixel sensors (APS) and hybrid pixel array detectors (PAD). These detectors have higher quantum efficiency, operate at faster readout speeds, and have a broader dynamic range than conventional detectors---charge-coupled devices (CCDs) coupled with scintillators \cite{Ophus2017}. Enabled by fast detectors, the advent of 4D-STEM\cite{ophus_2019} is another recent, major milestone that is a significant step towards determining point defect densities. Note that 4D-STEM involves performing a 2D raster scan in which 2D diffraction data are collected at each scanned pixel, generating a 4D dataset containing vast microstructural information. In addition to high-speed direct electron detectors, computational power was a prerequisite for 4D-STEM implementation, in which massive datasets can be produced: see Ref.~[\onlinecite{ophus_2019}] for an example in which a single 4D-STEM image recorded in \SI{164}{\second} consumes \SI{420}{\giga\byte}. Hence, over the past few years, this has warranted efforts to develop fast image simulation algorithms \cite{Ophus2017} and schemes to apply deep neural networks to extract information, such as defect species and location\cite{Ziatdinov2017}.
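To see why such datasets demand both fast detectors and substantial computing, the example above implies a sustained data rate of several gigabytes per second; a trivial check (420 GB and 164 s are the figures from the text, with 1 GB taken as $10^9$ bytes):

```python
# Implied sustained data rate of the 4D-STEM example quoted in the text:
# a 420 GB dataset recorded in 164 s (1 GB taken as 10^9 bytes).
dataset_bytes = 420e9
duration_s = 164
rate_gb_per_s = dataset_bytes / duration_s / 1e9
print(f"sustained rate: {rate_gb_per_s:.2f} GB/s")
```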
Furthermore, STEMs can be used for electron tomography, in which images that are collected as the sample is incrementally rotated are combined to create a 3D image of the microstructure.\cite{Miaoaaf2157} Aberration correction, high-speed detectors, and the data revolution are transformative advances that will certainly accelerate progress in understanding structure-property relationships in superconductors. Nevertheless, there are more salient impediments to an atomic-scale understanding of the true sample under study, including artifacts from sample preparation techniques\cite{SCHAFFER2012} and beam scattering within thick samples. To remedy the latter, materials are often deposited onto membranes, though this may not present a representative picture of the defect landscape when the sample is in a different form (e.g. thicker and on a different substrate). Atom probe tomography (APT) is another microscopy technique with atomic-scale resolution, and it also provides 3D compositional imaging of surface and buried features. Over the past decade, it has become increasingly popular due to the development of a commercial local-electrode atom probe (LEAP). For APT, the specimen must be shaped as a needle and an applied electric field successively ejects layers of atoms from the surface of the specimen towards a detector. By means of time-of-flight mass spectrometry, the detector progressively collects information on the position and species of each atom, reconstructing a 3D tomographic image of the specimen that can span \SI{0.2 x 0.2 x 0.5}{\micro\meter} with resolution of $\SIrange[range-phrase=-, range-units=single]{0.1}{0.5}{\nano\meter}$ \cite{Petersen2011}. As each atom is individually identifiable, it can provide remarkably revealing information on the microstructure. As with STEM, sample preparation and data processing are bottlenecks; APT also currently suffers from limited detection efficiency\cite{Kelly2007}.
Furthermore, the analyzed volume (field of view) is currently too small to be sufficiently representative of the sample to provide accurate quantitative details on point defect concentrations. The biggest complication, however, may be that the defect landscape of the APT specimen, shaped as a needle, may dramatically differ from the material in the form in which we typically study its electromagnetic properties. Lastly, positron annihilation spectroscopy is a hitherto untapped opportunity to correlate vacancy concentrations with electrical transport properties in superconductors. This non-destructive technique can determine information about vacancies and larger voids in a material by bombarding it with positrons at \SI{50}{\electronvolt} to \SI{30}{\kilo\electronvolt} acceleration energies,\cite{Gidley2006, Or2019} then measuring the time lapse between the implantation of positrons and emission of annihilation radiation. Upon implantation, positrons thermalize in $\sim$\SI{10}{\pico\second} then either interact with an electron and annihilate or form a positronium atom (electron-positron pair)\cite{STRASKY2018455}. Positronium atoms will then ricochet off the walls of voids and eventually annihilate, releasing a $\gamma$-ray that can be detected with integrated $\gamma$-ray detectors. The lifetime of the positron can provide information on void sizes and concentration of vacancies: longer lifetimes correspond to larger voids and higher vacancy densities. PALS has been used for decades to sensitively detect vacancies and vacancy clusters in metals and semiconductors\cite{Schultz1988, Gidley2006} as well as probe subnanometer, intermolecular voids in polymers\cite{Pethrick1997, Gidley2006}.
Depth profiling is possible on the nm to the \SI{}{\micro\meter} scale\cite{RevModPhys.60.701, Wagner2018, Peng2005, Gidley2006} by tuning the positron implantation energy and, though some systems have beam scanning capabilities enabling lateral resolution, spatial resolution is generally quite poor due to large beam spot sizes and positron diffusion. In most systems, the spot size is typically $\sim \SI{1}{\milli\meter}$. However, PALS instruments containing \emph{microprobes} are capable of spot sizes that are smaller than $\SI{100}{\micro\meter}$ \cite{PhysRevLett.87.067402, Gigl2017}. For example, in 2017, Gigl et al.\cite{Gigl2017} developed a state-of-the-art system with a minimum lateral resolution of \SI{33}{\micro\meter} and maximum scanning range of \SI[product-units=power]{19 x 19}{\mm}. Regarding speed, the system can scan an area of \SI[product-units=power]{1 x 1}{\mm} with a resolution of \SI{250}{\micro\meter} in less than 2 minutes, which is considered to be an exceptionally short time frame\cite{Gigl2017}. Moreover, David et al.\cite{PhysRevLett.87.067402} reported a remarkably small spot diameter of \SI{2}{\micro\meter} in a setup with a short scanning range of \SI[product-units=power]{0.2 x 0.6}{\mm}. Unfortunately, further improvements to beam focus may be ineffectual and spatial resolution comparable to electron microscopy is unreachable. The spatial resolution is ultimately limited by lateral straggle: the positron diffusion length is roughly several hundreds of nanometers in a perfect crystal, which limits the spot size even if the beam focus is improved \cite{RevModPhys.60.701}. Ongoing efforts to advance PALS include improving theoretical methods for interpretation of experimental results, advancing theoretical descriptions of positron physics (states, thermalization, and trapping), incorporating sample stages that allow tuning sample environmental conditions (e.g. 
temperature, biasing), and improving the efficiency of beam moderators (which convert polychromatic positron beams to monochromatic beams)\cite{RevModPhys.60.701}. \subsubsection{Cryogenic microstructural analysis for accurate determination of structure-property relationships} Accurately correlating the formation of different vortex structures and intricacies of vortex-defect interactions with electromagnetic response is not trivial. Typically, conventional microscopy is performed under conditions that differ from a material's actual working environment: structural characterization of superconductors is routinely conducted at room temperature, whereas accessing the superconducting regime requires cryogenic temperatures and is probed using electromagnetic stimuli. Yet we know that temperature changes significantly impact the microstructure, causing strain-induced phase separation and altering defects such as dislocations. Electromagnetic stimuli may similarly impact the defect landscape. Hence, another transformative opportunity in vortex physics is cryogenic structural characterization of superconductors under the influence of electromagnetic stimuli, which requires advances in microscopy. Scanning transmission electron microscopy combined with spectroscopic analysis is one of the most informative methods of gathering structural and chemical information at the atomic scale. Accurate determination of structure-property relationships requires in-situ property measurements conducted concomitantly with microscopy. Recent, rapid advances in in-situ transmission electron microscopy have been fueled by the introduction of a variety of commercial in-situ sample holders that allow for electrical biasing, heating, magnetic response, and mechanical deformation of nanomaterials\cite{McDowell2018}.
These new capabilities have accelerated progress in a variety of fields, including battery electrochemistry, liquid-phase materials growth, bias-induced solid-state transformations in, e.g., resistive switching devices for memory and neuromorphic applications, gas-phase reactions and catalysis, solid-state chemical transformations, e.g.\ at interfaces between semiconductors and metallic contacts, and mechanical behavior \cite{McDowell2018}. For in-situ TEM to be beneficial to superconductors, samples must be cooled to cryogenic temperatures and studied under the influence of magnetic fields. Developed in the 1960s, liquid helium cooled stages have been used to study superconductors, solidified gases, and magnetic domains \cite{goodge_2020}, initially without the benefit of aberration-corrected systems with atomic-scale resolution. More recently, cryogenic STEM with atomic-scale resolution has been used to study quantum materials, including low-temperature spin states. However, these studies have been limited to a single temperature above the boiling point of the chosen cryogen (liquid helium or liquid nitrogen)\cite{goodge_2020}, due to thermal load, whereas variable temperature capabilities are requisite for probing phase transitions and the effects of thermal energy. To this end, there has recently been a push to develop advanced sample holders with stable temperature control\cite{goodge_2020, HummingbirdSBIR}. One of the most promising efforts is led by Hummingbird Precision Machine, a company that is developing a double-tilt, cryo-electrical biasing holder for TEMs that allows samples to be concurrently cooled to liquid helium temperatures and electrically biased, while undergoing atomic-scale structural imaging \cite{HummingbirdSBIR}. Because of such industry involvement in the development and commercialization of cryogenic sample holders and, more generally, the rapid pace of in-situ TEM (e.g.
the number of in-situ TEM papers doubled between 2010 and 2012\cite{Taheri2016}), we expect to see large advancements in this identified challenge over the next several years. \subsubsection{Cross-sectional imaging of vortex structures} In Sec.~\ref{sec:Introduction}, we discussed how competition between pinning forces, vortex elasticity, and current-induced forces results in complicated vortex structures, such as double-kinks, half-loops, and staircases. Typically, we may conjecture which structures have formed based on the microstructure and applied field orientation. Subsequent correlations are made between the presumed structures and the magnetization or transport results, which may be suggestive of a specific vortex phase. However, without direct proof of the structures, we cannot unequivocally correlate distinct excitations with specific vortex phases. For example, in a study of a NbSe$_2$ crystal containing columnar defects tilted 30$^\circ$ from the c-axis, magnetization results evinced glassy behavior when the field was aligned with the c-axis\cite{Eley2018}. As these conditions are likely to produce vortex staircases, the question arose whether (and why) vortex staircases would create a vortex glass phase. Direct imaging of vortex-defect interactions, in a way that captures the vortex structure overlaid on the atomic-scale structure, would enable unambiguous determination of the phases produced by specific vortex excitations. Accordingly, development of advanced microscopy techniques that can produce cross-sectional images is another transformative opportunity in vortex physics. In this section, we summarize common techniques for imaging superconducting vortices, detail their limitations, and describe the features of an advanced instrument that could accelerate progress in understanding and designing materials with predetermined vortex phases.
\begin{figure} \includegraphics[width=1\linewidth]{figures/LTEM.pdf} \caption{(a) Lorentz transmission electron microscopy (LTEM) image of vortices in Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$, irradiated to induce columnar defects. Trapped vortices can be distinguished from free ones based on their shape and contrast (lower contrast for vortices trapped in columnar defects). Plan-view LTEM images have provided useful information on vortex dynamics, though the 3D vortex structure within the bulk is hidden from view. Reprinted with permission from Ref.~[\onlinecite{Kamimura2002}]. Copyright 2002, \emph{The Physical Society of Japan}. TEM images of (b) a heavy-ion-irradiated NbSe$_2$ crystal\cite{Eley2018} and (c) a BaZrO$_3$-doped (Y$_{0.77}$Gd$_{0.23}$)Ba$_2$Cu$_3$O$_y$ film grown by Miura \textit{et al.}\cite{Miura2013k}. Permission to use TEM image in (c) granted by M. Miura. The red line is a cartoon of how a vortex might wind through the disorder landscape. Developing advanced microscopy techniques to capture this structure is a transformative opportunity in vortex physics. \label{fig:vortexstructuresTEM}} \end{figure} Lorentz TEM (LTEM), which exploits the Aharonov-Bohm effect to capture magnetic contrast, was first used by Tonomura to image superconducting vortices~\cite{Tonomura2006} and has played a major role in identifying new materials that host exotic magnetic phases. However, in LTEM, the objective lens serves the dual purpose of applying a field and observing the response of the specimen, and is therefore limited to plan-view imaging. That is, for out-of-plane magnetic fields applied to a thin film, the technique can only image magnetic contrast across the film’s surface---such that the vortex structure itself and interactions with defects within the bulk are out of view.
Building upon Tonomura's initial work, Hitachi\cite{Harada2008, Kawasaki2000} developed a unique, specialized system containing a multipole magnet that can apply fields up to \SI{50}{\milli\tesla} at various orientations with respect to the sample. Though this system still produces plan-view images, rather than cross-sectional images that would reveal the full vortex structure, variations in the contrast of the imaged vortex section have provided remarkable evidence of vortex pinning and useful information on vortex-defect interactions.\cite{Kamimura2002, HARADA20131, PhysRevLett.88.237001, Tonomura2001} For example, Fig.~\ref{fig:vortexstructuresTEM}(a) shows an LTEM image in which a vortex trapped within a columnar defect can be identified by its shape and contrast, compared to untrapped vortices. The most promising technique for directly imaging vortex-defect interactions may be differential phase contrast microscopy (DPC)\cite{Lubk2015}. Conducted in a TEM, DPC is one of the best tools for quantitatively imaging nanoscale magnetic structures. In a TEM, an illuminating electron beam is deflected by electromagnetic fields within a material. DPC microscopy leverages these deflections to directly image electric and magnetic fields within materials at atomic resolution\cite{Dekkers1974, Chapman1978}. Consequently, scanning the beam (STEM) produces spatial maps of nanoscale magnetic field contrast to complement the atomic-scale structural information resolved by the transmitted beam. Accordingly, STEM-DPC is an invaluable tool in nanomagnetism research, used to image magnetic domains\cite{Lee2017, Chen2018} and canted structures such as skyrmions\cite{McVitie2018, Matsumotoe1501280, Schneider2018}. Most notably, it is one of the few techniques that can unequivocally identify new magnetic phases and exotic magnetic quasiparticles in real space.
More generally, it can also image nanoscale electric fields\cite{Muller2014, Hachtel2018, Shibata2012, Shibata2017, Yucelen2018} in materials and devices. To image vortex-defect structures in an STEM capable of DPC, the sample stage would need to be cryogenically cooled and the chamber should contain a magnet. Complications will include designing the system in such a way that the magnetic field does not significantly distort the beam. \section{Summary and Outlook} In this Perspective, we have highlighted the pivotal role that vortices play in superconductors and how improving our ability to control vortex dynamics will have an immediate impact on a range of applications. Herein we discussed major open questions in vortex physics, which include the following: \begin{itemize}[noitemsep, leftmargin=*] \item How do thermal and quantum vortex creep depend on material parameters and how can we efficiently consider creep in predictive simulations? \item What is the highest attainable critical current $J_{c}$? \item How do we optimize vortex pinning in quantum circuits and controllably exploit vortices in certain schemes for topological computing? \item Given the multitude of variables that govern $J_{c}$, what computational methods can improve the efficacy of the critical-current-by-design approach? \item What is the relationship between $J_{c}$ and point defect densities as well as vortex structures and vortex phases? \end{itemize} To answer these and other identified questions, we delineated five major categories of near-term transformative opportunities: The first involves applying recent advances in analytical and computational methods to model vortex creep, and performing more extensive experimental investigations into quantum creep. Second, we discussed how critical currents higher than the current record of 30\% $J_d$ may be obtained by implementing a combination of core pinning and magnetic pinning.
This is a promising route for dramatic advancements in large-scale applications---achieving higher current densities enables smaller motors and generators as well as higher-field magnets. Third, we noted that vortices do not only hamper large-scale applications, but also induce losses in nanoscale quantum circuits. Though shielding circuits has proven effective in minimizing vortex formation, quantum sensors may require exposure to the environment, necessitating a better understanding of vortex dynamics in circuits. Furthermore, vortices are desirable for use in quantum information applications, in which case we must study how to manipulate single flux lines to implement braiding and entanglement of Majorana bound states. Fourth, the recent advent of high-performance computational tools to study vortex matter numerically has pushed us to the verge of predicting a superconductor's electrical transport properties based on the material and microstructure. However, the quest to automatically tailor a defect landscape for specific applications requires considering a fairly high-dimensional parameter space. To enable an effective mapping between simulations and experiments and manage the multitude of variables, we propose to apply self-adjusting machine learning algorithms that use neural networks. Fifth and finally, to accurately determine structure-property relationships, we need to experimentally measure and routinely consider point defect densities, which are challenging to determine. We therefore highlighted the prevailing microscopy techniques for point defect measurements, which include 4D-STEM and positron annihilation lifetime spectroscopy. \begin{acknowledgments} S.E.\ acknowledges support from the National Science Foundation DMR-1905909. A.G.\ was supported by the U.S.\ Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.
R.W.\ acknowledges funding support from the Heidelberger Akademie der Wissenschaften through its WIN initiative (8. Teilprogramm). \end{acknowledgments} \section*{data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} Three-level lambda ($\Lambda$-)systems are ubiquitous in quantum processes \cite{Gaubatz1990,Kuhn2002,Sorensen2006,Kral2007,Bergmann2015,Vitanov2017}. Many applications, particularly in quantum control, are based on our ability to control the population transfer in this system \cite{Gaubatz1990,Kuhn2002,Sorensen2006,Kral2007,Shore2011}. Fine control compatible with quantum information requirements demands producing a robust transfer of population with ultra-high fidelity (UH-fidelity), i.e.~below the quantum computation infidelity benchmark of $\epsilon<10^{-4}$ \cite{Preskill1998}, between the two ground states of the system, while maintaining a low transient population on the intermediate (and often lossy) excited state during the dynamics. A standard method to perform this task is the stimulated Raman adiabatic passage, commonly known as STIRAP, which is widely used in many physical and chemical problems \cite{Gaubatz1990,Kuhn2002,Sorensen2006,Kral2007,Shore2011,Bergmann2015,Vitanov2017}. STIRAP uses adiabaticity in order to avoid populating the intermediate state of a three-level system and to produce a robust transfer, at the expense of the process duration and pulse energy. It requires two fields: one, coupling the initial state of the system with the excited state, which we refer to as the pump $P$, and another, coupling the excited state with the target state, which we refer to as the Stokes field $S$; both names are kept for historical reasons. In STIRAP, the fields coupling the ground states with the excited one must be counter-intuitively ordered (Stokes before pump) and exhibit high pulse areas and/or long time durations (technically, any combination of factors fulfilling the adiabatic condition).
Pulsed fields with increasingly higher areas and a counter-intuitive order, signatures of STIRAP, jointly with an optimized delay between the pulses, improve the adiabaticity and, in consequence, the robustness of the process, while minimizing the unwanted transient population of the excited state. Even though STIRAP is the `go-to' standard protocol when increased process robustness becomes necessary, it is only at the adiabatic limit that it produces a complete transfer to the desired state and maintains the excited state depopulated. That is to say that the target state $|\psi_T\rangle$ is approached asymptotically by the system state $|\psi(t)\rangle$ while the pulse areas $A_P=\int_{-\infty}^{\infty}P(t)dt$ and $A_S=\int_{-\infty}^{\infty}S(t)dt$ grow without limit. Concretely, the precision of the transfer can be measured with the fidelity $F=|\langle\psi_T|\psi(t_f)\rangle|^2=1-\epsilon$: a quantity equal to 1 when the transfer is perfect (target state achieved exactly by the system state at the process final time $t_f$) and to 0 when the final state is orthogonal to the target. Thus, in STIRAP, the fidelity tends to unity ($F\rightarrow1$) as the pulse areas tend to infinity ($\{A_P,A_S\}\rightarrow\infty$). In this manner, STIRAP provides a robust but inexact way of transferring population between the ground states of a three-level system. Additionally, the use of high-area pulses hinders the application of such a technique. Whether because of the destructive effects that high-intensity fields can produce, such as ionization, or because of the decoherence and experimental instabilities to which slow processes are susceptible, fields of moderate area are most desirable for quantum state manipulation, especially for the UH-fidelity we aim at.
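The asymptotic behavior described above can be made concrete with a minimal numerical sketch (illustrative only, not the paper's protocol; the Gaussian pulse shapes, width, delay, and time grid are arbitrary choices, with $\hbar=1$): integrating the resonant three-level Schr\"odinger equation for counter-intuitively ordered pulses shows $F$ approaching unity as the pulse area grows.

```python
import numpy as np

# Illustrative sketch (arbitrary parameters, hbar = 1): resonant STIRAP with
# Gaussian pump and Stokes pulses in the counter-intuitive order (Stokes before
# pump), integrated with a fixed-step RK4 scheme.
def stirap_fidelity(area, T=1.0, delay=0.12, width=0.12, steps=4000):
    amp = area / (width * np.sqrt(np.pi))     # peak Rabi frequency for this area
    P = lambda t: amp * np.exp(-((t - 0.5 * T - delay) / width) ** 2)  # pump (late)
    S = lambda t: amp * np.exp(-((t - 0.5 * T + delay) / width) ** 2)  # Stokes (early)

    def rhs(t, psi):                          # i d|psi>/dt = H(t)|psi>
        H = 0.5 * np.array([[0.0, P(t), 0.0],
                            [P(t), 0.0, S(t)],
                            [0.0, S(t), 0.0]])
        return -1j * (H @ psi)

    psi = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in ground state |1>
    dt = T / steps
    for n in range(steps):                    # classic RK4 time stepping
        t = n * dt
        k1 = rhs(t, psi)
        k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return abs(psi[2]) ** 2                   # F = |<3|psi(t_f)>|^2

fids = {area: stirap_fidelity(area) for area in (5, 20, 80)}
for area, F in fids.items():
    print(f"pulse area {area:3d}: F = {F:.6f}")
```

The trend, $F$ growing towards 1 with pulse area, is the point here; the specific values depend entirely on the arbitrarily chosen pulse parameters.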
Consequently, in this paper, we propose a scheme for robust UH-fidelity transfers similar to STIRAP, but exact and with moderate areas, to which we refer as stimulated Raman exact passage (STIREP) \cite{Chen2012,Boscain2002,Dorier2017}. Exact, in this context, refers to schemes that provide the dynamics of the system ``exactly'', i.e.~approaches that prescribe a mathematical description of the complete dynamics of the system (the control fields and, consequently, the state are known at all instants of time). Improvements of STIRAP have been proposed by optimizing single properties: fast nonresonant STIRAP \cite{Dridi2009}, albeit with a large transient population in the excited state, and robust but slow STIRAP \cite{Vasilev2009}. However, there are exact methods available that take different approaches in their search to compete with STIRAP's well-established robustness, such as single-shot shaped pulses (SSSP) \cite{Daems2013,Van-Damme2017} and composite pulses \cite{Levitt1986,Torosov2011,Torosov2011a,Genov2011,Genov2014,Torosov2013,Bruns2018}, among others. Techniques such as SSSP and composite pulses deal with error reduction directly, while methods such as shortcuts to adiabaticity \cite{Demirplak2005,Demirplak2008,Dridi2009,Vasilev2009,Berry2009,Chen2010,Chen2011,Bason2012,Chen2012,Ruschhaupt2012, Baksic2016,Li2016,Ban2018} rely on optimizing the adiabaticity of the process as their source of robustness. In a way, the former are bottom-up techniques, starting with energy-economic strategies and remolding them to gain robustness, while the latter are top-down technologies, starting with the adiabatic and infinitely energetically costly paradigm and working their way down towards faster and cheaper processes. Physically speaking, exact methods are all those that offer detailed mathematical solutions for the desired task, i.e.~a description of the process with which to obtain the goal at a finite time.
Meanwhile, adiabatic methods rely on the asymptotic behavior of the system under the adiabatic condition. To use an exact technique instead of an adiabatic one means to sacrifice the freedom that adiabaticity affords in field shapes for the rigidity of prescribed pulses and state dynamics. These prescriptions, provided by means of inverse engineering, are applied in order to gain the advantage of reaching the desired target state with finite pulse areas in a finite time. SSSP is a technique that takes exact transfer reverse-engineering as a first step, and error resistance through the perturbative expansion of the transfer as a second step. Firstly, SSSP applies reverse-engineering from the desired process onto the control fields by means of the prescription of a tracking solution for a certain parameterization of the quantum state of the system. Then, it uses perturbation theory to gradually diminish the susceptibility of the transfer fidelity to deviations from the optimal experimental conditions. Perturbation theory is applied in terms of deviations from the ideal conditions, taking into consideration realistic experimental complications, and is analyzed through the Schr\"odinger equation. The minimization of the deviation terms, which represent the result of non-optimal conditions, is expected to systematically reduce the sensitivity of the dynamics to perturbations, i.e.~to improve the robustness. In order to manipulate the deviation terms, the tracking expression of the reverse-engineered dynamics must contain a suitable parameterization, meaning that the desired system evolution is prescribed with expressions containing free parameters to be chosen afterwards so that they nullify, or at least reduce, the terms of the perturbative expansion. In this paper, we introduce SSSP for the robust UH-fidelity transfer of population between the ground states of a three-level $\Lambda$-system.
We show a scheme similar to STIRAP but exact (thus not actually adiabatic) and highly robust using the Lewis-Riesenfeld (L-R) method driving a single dynamical mode \cite{Chen2011,Lewis1969}. Section \ref{section1} contains the parameterization of the propagator and Hamiltonian in terms of Euler angles. Section \ref{section2} shows the application of perturbation theory on the Hamiltonian, a working tracking solution (based on \cite{Boscain2002,Chen2012,Dorier2017}) and an analysis of the origin of robustness for this chosen tracking solution. We propose the direct study of the robustness of any given process for a range of pulse areas through the usage of a measurement of robustness based on the minimum UH-fidelity confidence range around the unperturbed ideal system. Additionally, definitions of STIRAP, considering Gaussian-shaped fields, and of the adiabatically-optimized pulses with which we compare our SSSP are described. The last section presents the discussion and conclusions. \section{\label{section1}The Hamiltonian and its state angular parameterization} Let us consider a 3-level system driven by two resonant fields, $P(t)$ and $S(t)$, for which the Hamiltonian, in the bare-state basis $\{|1\rangle,|2\rangle,|3\rangle\}$ and under the rotating wave approximation, is \begin{equation} \label{ResonantHamiltonian} H(t) = \frac{\hbar}{2}\left[\begin{matrix}0&P&0\\P&0&S\\0&S&0\end{matrix}\right]. \end{equation} In STIRAP, the state of the system is written in terms of the eigenstates of the Hamiltonian, \begin{equation} \label{darkstateSTIRAP} \Phi_0=\left[\begin{matrix}\cos\vartheta\\0\\-\sin\vartheta\end{matrix}\right],\quad \Phi_\pm=\frac1{\sqrt{2}}\left[\begin{matrix}\sin\vartheta\\\pm1\\\cos\vartheta\end{matrix}\right], \end{equation} where $\vartheta(t)$ is the so-called mixing angle, given by $\sin\vartheta=P/\sqrt{P^2+S^2}$ ($\cos\vartheta=S/\sqrt{P^2+S^2}$).
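As a quick numerical check that $\Phi_0$ is indeed a zero-eigenvalue eigenstate of \eqref{ResonantHamiltonian} (a minimal sketch; the field values below are arbitrary illustrative numbers, not taken from any figure of this paper):

```python
import numpy as np

def hamiltonian(P, S, hbar=1.0):
    """Resonant 3-level Lambda-system Hamiltonian in the bare-state basis."""
    return (hbar / 2) * np.array([[0, P, 0],
                                  [P, 0, S],
                                  [0, S, 0]])

# Illustrative field values:
P, S = 0.6, 0.8
theta = np.arctan2(P, S)  # mixing angle: sin(theta) = P / sqrt(P^2 + S^2)
dark = np.array([np.cos(theta), 0.0, -np.sin(theta)])

# The dark state is annihilated by H (eigenvalue 0):
print(np.allclose(hamiltonian(P, S) @ dark, 0.0))  # True
```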
The idea is to follow the dark state, the Hamiltonian eigenstate $|\Phi_0\rangle$, whose projection on the excited state is always null. This state allows for control of population transfer between the ground states without populating the intermediate state, the desired dynamics, which prescribes the signature counter-intuitive ordering of $P$ and $S$. However, the derivatives of the mixing angle, the non-adiabatic coupling, couple the $|\Phi_n\rangle$'s, the adiabatic states, preventing the system from following them exactly (since population would be uncontrollably exchanged via this coupling). Then, adiabaticity, the condition in which the non-adiabatic coupling is negligible (with $\dot{\vartheta}\rightarrow0$ being the adiabatic limit), is paramount to minimize the deviations of the dynamics from the dark state and produce the desired transfer. Naturally, very slow-evolving pulses would minimize the non-adiabatic coupling and practically uncouple the adiabatic states in consequence. Nevertheless, the adiabatic states can never be followed exactly in real-world implementations. \subsection{Lewis-Riesenfeld invariant} A method that has gained prominence in recent years is the use of dynamical invariants, also referred to as Lewis-Riesenfeld (L-R) invariants \cite{Lewis1969,Lai1996,Lai1996a,Chen2011,Chen2012}. The L-R invariant $I(t)$ is defined by having a time-invariant expectation value, i.e.~a constant $\langle\psi(t)|I|\psi(t)\rangle$, where $|\psi\rangle$ is the state of the system. This condition is equivalent to $\ii\hbar\dot{I}=[H,I]$ when considering the evolution of such a system as described by the Schr\"odinger equation $\ii\hbar|\dot{\psi}(t)\rangle=H|\psi(t)\rangle$, where the dotted function denotes its partial derivative with respect to time. We can use the eigenstates of this invariant, $|\varphi_n(t)\rangle$, to write the state of our system with the advantage that, unlike with the adiabatic states, the coupling between these is always null under any condition.
This can be shown by applying the transformation operator $T_{\textrm{LR}}(t)=\sum_n|\varphi_n\rangle\langle n|$, which writes the system into the basis of the L-R eigenstates, onto the Schr\"odinger equation, and showing that the effective Hamiltonian in the new basis, $H^{\textrm{LR}}(t)=T_{\textrm{LR}}HT_{\textrm{LR}}^\dagger-\ii\hbar T_{\textrm{LR}}\dot{T}_{\textrm{LR}}^\dagger$, has only the diagonal elements $H^\textrm{LR}_n=\langle\varphi_n|H|\varphi_n\rangle$. Thus, we can describe the complete dynamics of our system by a fixed combination of the L-R eigenstates and, with a suitable parameterization and tracking solution, we can follow exactly the system evolution and, consequently, reach exactly the desired target state. A simple picture of the difference between the use of adiabatic states (key of STIRAP) and of the eigenvectors of the dynamical invariant (L-R method) is: while the adiabatic states represent the dynamics of the system under the adiabatic condition, the L-R eigenvectors contain the whole dynamics of the system; the former are a particular case of the latter, as we will show at the end of this section. In order to write the solution of the Schr\"odinger equation in terms of the eigenvectors of the L-R invariant we first need to write the latter explicitly in terms of practical parameters. For this purpose, we can exploit the property that establishes that, for an invariant that is a member of a Lie algebra with (Hermitian) generators $Q_n$, i.e.~$I=\sum_n^N\alpha_n(t)Q_n$, these coefficients must obey the relation $\sum_n^N\alpha_n^2=\alpha_0^2$, where the $\alpha_n$'s are real quantities, $\alpha_0$ is a constant and $N$ is the number of generators of the algebra.
Considering that the propagator of the Hamiltonian \eqref{ResonantHamiltonian} belongs to the $\mathrm{SU(3)}$ symmetry group, we can write said Hamiltonian as a linear combination of the well-known Gell-Mann matrices $\lambda_n$ of the group \cite{GellMann1962,Carroll1988,Chen2012} (generators of the Lie algebra of $\mathrm{SU(3)}$ as the Pauli matrices are the generators of the algebra of $\mathrm{SU(2)}$), i.e.~$H=\hbar/2(P\lambda_1+S\lambda_6)$, with \begin{equation} \lambda_1=\left[\begin{matrix}0&1&0\\1&0&0\\0&0&0\end{matrix}\right]\!,\ \lambda_5=\left[\begin{matrix}0&0&-\ii\\0&0&0\\\ii&0&0\end{matrix}\right]\!,\ \lambda_6=\left[\begin{matrix}0&0&0\\0&0&1\\0&1&0\end{matrix}\right]. \end{equation} Moreover, given that the matrices $\lambda_1$, $\lambda_5$ and $\lambda_6$ form a closed algebra, fulfilling the Lie algebra of $\mathrm{SU(2)}$, i.e.~their commutation relations require no other generator ($[\lambda_i,\lambda_j]=C_{ij}^k\lambda_k$ for $i$, $j$, and $k$ taking any combination of values 1, 5 and 6 without repetitions, $C_{ij}^k=-C_{ji}^k=C_{jk}^i=C_{ki}^j$ and $C_{16}^5=\ii$), we can now write the L-R invariant in terms of only these three matrices and three $\alpha_n$'s: \begin{equation} I(t)=\alpha_1\lambda_1+\alpha_2\lambda_6+\alpha_3\lambda_5. \end{equation} This is a much simpler case than that of a general member of the $\mathrm{SU(3)}$ algebra that contains up to 8 $\alpha_n$'s (7 of which are independent). With this simple expression for our dynamical invariant we can solve the eigenvalue equation. 
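The closure of the $\{\lambda_1,\lambda_5,\lambda_6\}$ subalgebra quoted above can be verified directly (a minimal numerical sketch of the stated commutation relations):

```python
import numpy as np

# Gell-Mann matrices lambda_1, lambda_5, lambda_6 of SU(3)
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

# The three generators close under commutation (an su(2) subalgebra):
print(np.allclose(comm(l1, l6), 1j * l5))  # True: C_16^5 = i
print(np.allclose(comm(l6, l5), 1j * l1))  # True (cyclic)
print(np.allclose(comm(l5, l1), 1j * l6))  # True (cyclic)
```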
Using the eigenvectors $|\varphi_n(t)\rangle$ of this invariant to write the state of the system solution to the Schr\"odinger equation: \begin{equation} |\psi(t)\rangle=\sum_{n=1}^3C_n\e^{\ii\eta_n(t)}|\varphi_n(t)\rangle, \end{equation} with the Lewis-Riesenfeld phase \begin{equation} \eta_n(t)=\frac1{\hbar}\int_{t_i}^t\left\langle\varphi_n(t')\left|\ii\hbar\frac{\partial}{\partial t'}-H(t')\right|\varphi_n(t')\right\rangle dt', \end{equation} we can also write the evolution operator $U(t,t_i)$, to which we refer as the propagator of the system, in terms of the $\alpha_n$'s through $U=\sum_{n=1}^3\exp[\ii\eta_n(t)]|\varphi_n(t)\rangle\langle\varphi_n(t_i)|$. The Lewis-Riesenfeld phase corresponding to the null eigenvalue, e.g., $\eta_1$, is a constant we set to 0. Considering we intend to prescribe the time evolution of the $\alpha_n$'s, we facilitate the search for the boundary conditions by imposing a single-mode driving, i.e.~a dynamics along a single eigenvector of the invariant, setting $C_1=1$ and $C_2=C_3=0$, which makes $|\psi\rangle=|\varphi_1\rangle$. This dynamics can be seen as a generalization of adiabatic passage, occurring along a single eigenstate, to an exact passage. Given the relation between the $\alpha_n$'s, we can propose the following representation in terms of time-dependent Euler angles: \begin{subequations} \label{Euler-representation} \begin{align} \alpha_1&=\alpha_0\cos\phi\sin\theta,\\ \alpha_2&=-\alpha_0\cos\phi\cos\theta,\\ \alpha_3&=\alpha_0\sin\phi, \end{align} \end{subequations} which consequently makes the other two L-R phases $\eta\equiv\eta_2=-\eta_3=-\int_{t_i}^t\dot{\theta}(t')/\sin[\phi(t')]dt'$. 
Defining the desired transfer to be $|\psi(t_i)\rangle=|1\rangle\rightarrow|\psi_T\rangle=|3\rangle$, we can now say that, for a Hamiltonian fulfilling the closed algebra of $\lambda_1$, $\lambda_5$ and $\lambda_6$, with no coupling $|1\rangle$--$|3\rangle$, the propagator of the system can be written as \begin{equation} U=\left[|\varphi_1\rangle\quad|\psi_+\rangle\quad|\psi_-\rangle\right], \end{equation} with the composing column vectors described by \begingroup \allowdisplaybreaks \begin{subequations} \label{propagatorvectors} \begin{align} \label{single-mode-eigenvector} |\varphi_1\rangle&=\left[\begin{matrix}\cos\phi\cos\theta\\\ii\sin\phi\\\cos\phi\sin\theta\end{matrix}\right],\\ |\psi_+\rangle&=\left[\begin{matrix}\ii\cos\eta\sin\phi\cos\theta-\ii\sin\eta\sin\theta\\\cos\eta\cos\phi\\\ii\cos\eta\sin\phi\sin\theta+\ii\sin\eta\cos\theta\end{matrix}\right],\\ |\psi_-\rangle&=\left[\begin{matrix}-\sin\eta\sin\phi\cos\theta-\cos\eta\sin\theta\\\ii\sin\eta\cos\phi\\-\sin\eta\sin\phi\sin\theta+\cos\eta\cos\theta\end{matrix}\right], \end{align} \end{subequations} \endgroup where the first column of the propagator corresponds to a parameterization in Euler angles of the solution of the Schr\"odinger equation. 
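The columns of \eqref{propagatorvectors} form an orthonormal frame for any values of the angles, so the propagator built from them is unitary; this can be checked numerically (a small sketch with arbitrary sample angles):

```python
import numpy as np

def propagator(theta, phi, eta):
    """Columns |phi_1>, |psi_+>, |psi_-> of the Euler-angle propagator."""
    c, s = np.cos, np.sin
    phi1 = np.array([c(phi)*c(theta), 1j*s(phi), c(phi)*s(theta)])
    psip = np.array([1j*c(eta)*s(phi)*c(theta) - 1j*s(eta)*s(theta),
                     c(eta)*c(phi),
                     1j*c(eta)*s(phi)*s(theta) + 1j*s(eta)*c(theta)])
    psim = np.array([-s(eta)*s(phi)*c(theta) - c(eta)*s(theta),
                     1j*s(eta)*c(phi),
                     -s(eta)*s(phi)*s(theta) + c(eta)*c(theta)])
    return np.column_stack([phi1, psip, psim])

# Arbitrary sample angles (illustrative only):
U = propagator(0.7, 0.3, -0.2)
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
```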
With the representation in \eqref{Euler-representation}, the control fields can also be expressed in terms of these so-called Euler angles as \begin{subequations} \label{fields-Eulerangles} \begin{align} P/2&=-\dot{\theta}\cot\phi\sin\theta-\dot{\phi}\cos\theta,\\ S/2&=\dot{\theta}\cot\phi\cos\theta-\dot{\phi}\sin\theta, \end{align} \end{subequations} which provide the remaining boundary conditions when demanding the pulses to have finite area, i.e.~$0\leftarrow P\rightarrow0$ and $0\leftarrow S\rightarrow0$, thus \begin{equation} \label{BoundaryConditons} 0\leftarrow\{\phi,\dot{\phi},\dot{\theta},\dot{\eta}\}\rightarrow0, \text{ and }0\leftarrow\theta\rightarrow\pi/2, \end{equation} where the arrows to the right and left represent the limits when $t\rightarrow t_f$ and $t\rightarrow t_i$, respectively. It can be noted that the transient population of the excited state in this representation is given exactly by \begin{equation} \label{pop2-formula} P_2(t)=|\langle2|\psi(t)\rangle|^2=\sin^2\phi(t). \end{equation} We can interpret the invariant's eigenstate $|\varphi_1\rangle$ as equivalent to the dark state of STIRAP, $|\Phi_0\rangle$, where the latter has been allowed to exhibit a non-zero transient excited state population in order to make the dynamics exact. In fact, the particular case of single-mode driving corresponding to adiabatic following is given by $|\Phi_0\rangle=|\varphi_1(\theta=-\vartheta,\phi=0)\rangle$; for which the excited state population \eqref{pop2-formula} remains exactly null, the fields \eqref{fields-Eulerangles} are infinite and, thus, the adiabatic condition is fulfilled. Equations \eqref{fields-Eulerangles} and \eqref{BoundaryConditons} define a family of exact transfer solutions. Consequently, if such tracking solutions satisfying the previous conditions can be engineered, then we are able to control at will, in principle, the population on the middle state and we would be exposing an exact method for realizing stimulated Raman passage. 
\section{\label{section2}Perturbed Hamiltonian, exact tracking and the measure of robustness} Having set the requirements the angles must fulfill to describe the desired process, we proceed to deal with its robustness. Firstly, we add an unknown deviation $V(\rho)$ to the Hamiltonian \eqref{ResonantHamiltonian}, introducing the possibility of a non-optimal implementation of the control strategy that contains an error $\rho$ in the area of the pulses interacting with the system, i.e.~$H_\rho=H+V(\rho)$, where $V=\rho H$; thus, \begin{equation} \label{perturbedHamiltonian} H_{\rho}=\frac{\hbar}{2}\left[\begin{matrix}0&(1+\rho)P&0\\(1+\rho)P&0&(1+\rho)S\\0&(1+\rho)S&0\end{matrix}\right]. \end{equation} Secondly, we apply standard perturbation theory to the transfer profile around the perfect realization, that is, \begin{subequations} \label{transferprofile} \begin{align} \langle\psi_T|\psi_\rho(t_f)\rangle&=1+O_1+O_2+\cdots,\\ |\langle\psi_T|\psi_\rho(t_f)\rangle|^2&=1+\tilde{O}_1+\tilde{O}_2+\cdots=F. \end{align} \end{subequations} The deviation terms $O_n\equiv O(\rho^n)$ are integral expressions whose complexity increases with the corresponding perturbation order.
Given that the evolution of the state of our system coincides with that of $|\varphi_1\rangle$, and that conjointly with $|\psi_+\rangle$ and $|\psi_-\rangle$ these form a complete basis, the deviation terms are, explicitly, \begin{subequations} \label{InfidelityIntegrals} \begin{align} O_1&=0,\\ O_2&=\int_{t_i}^{t_f}dt\int_{t_i}^tdt'\left[mm'-nn'\right]\in\Re,\\ O_3&=\int_{t_i}^{t_f}dt\int_{t_i}^tdt'\int_{t_i}^{t'}dt''[nr'm''-mr'n'']\in\Im,\\ O_4&=\int_{t_i}^{t_f}dt\int_{t_i}^tdt'\int_{t_i}^{t'}dt''\int_{t_i}^{t''}dt'''[mm'm''m'''\nonumber\\ &\quad-mm'n''n'''+mr'r''m'''-nn'm''m'''\nonumber\\ &\quad+nn'n''n'''-nr'r''n''']\in\Re, \end{align} \end{subequations} and so on, where the non-null elements of the Hamiltonian deviation, for an unknown pulse area scaling error $\rho$, on the basis of the vectors in \eqref{propagatorvectors}, are identified as $m=\langle\varphi_1|V/\hbar|\psi_+\rangle$, $n=\langle\varphi_1|V/\hbar|\psi_-\rangle$ and $r=\langle\psi_+|V/\hbar|\psi_-\rangle$, with the primed function representing the function with its argument primed, e.g., $m'=\langle\varphi_1(t')|V(t')/\hbar|\psi_+(t')\rangle$. To consistently increase the robustness of the process via the nullification of the first orders of infidelity, $\tilde{O}_n\equiv\tilde{O}(\rho^n)$, is the goal of our strategy. These terms are, from \eqref{transferprofile}, given by $\tilde{O}_1=O_1+\bar{O}_1$, $\tilde{O}_2=O_2+O_1\bar{O}_1+\bar{O}_2$ and so on, where the odd orders are automatically null. However, the prescription of adequate tracking solutions with free parameters is the actual core of our recipe and also its sole non-systematic step. Finally, we propose a tracking solution where the maximum transient population on the excited state, $P_2^{\mathrm{max}}=\mathrm{max}\left[|\langle2|\psi(t)\rangle|^2\right]$, is the control parameter. 
\subsection{\label{subsection1}Population cap parameterization} The first found successful parameterization contains a unique free coefficient fixing a cap for the transient population on the excited state. The mixing angle of the levels $|1\rangle$ and $|3\rangle$ with $|2\rangle$, identified as $\phi(t)$, is written in terms of the other one, $\theta(t)$, which describes the state evolution from $|1\rangle$ to $|3\rangle$, and, in this manner, we propose the following suitable (fulfilling the requirements on \eqref{BoundaryConditons}) and convenient tracking solutions (based on \cite{Boscain2002,Chen2012,Dorier2017}): \begin{subequations} \begin{align} \theta(t)&=(\pi/4)\left\{\tanh[(t-t_i-T/2)/v_0]+1\right\},\\ \tilde{\phi}(\theta)&=(4\phi_0/\pi)\sqrt{\theta(\pi/2-\theta)}, \end{align} \end{subequations} where the tilde signals functions of $\theta$. These give, with $\dot{\tilde{\phi}}\equiv\partial\tilde{\phi}/\partial\theta$, \begin{subequations} \begin{align} \tilde{\dot{\theta}}(\theta)&=(4/\pi v_0)\theta(\pi/2-\theta),\\ \dot{\tilde{\phi}}(\theta)&=(4\phi_0/\pi)\frac{\pi/4-\theta}{\sqrt{\theta(\pi/2-\theta)}}, \end{align} \end{subequations} where $T=t_f-t_i$ is the total duration of the process and $v_0$ is a parameter setting the speed of the function change (chosen as $v_0=0.028T$ to provide a numerical error below $10^{-6}$ for the normalized field at the boundaries of the process). The free parameter $\phi_0$ allows us to control simultaneously the maximum population on the excited state, parameterized as $P_2^{\mathrm{max}}=\sin^2\phi_0$, and the robustness of the transfer, by means of the nullification or minimization of the first orders of population infidelity $\tilde{O}_n$'s; the first two non-zero orders are shown in Fig.~\ref{O2O4andAreaVSphi0}. 
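As a consistency check of this tracking solution, one can build the fields from Eqs.~\eqref{fields-Eulerangles} and propagate the Schr\"odinger equation numerically, confirming the complete $|1\rangle\rightarrow|3\rangle$ transfer with the excited-state population capped at $\sin^2\phi_0$. The sketch below uses an illustrative value of $\phi_0$; the fixed-step integrator, grid and boundary clipping are our own numerical choices, not prescribed by the scheme:

```python
import numpy as np

T, v0, phi0, ti = 1.0, 0.028, 0.3, 0.0  # v0 = 0.028*T as in the text; phi0 illustrative

def theta(t):
    th = (np.pi / 4) * (np.tanh((t - ti - T / 2) / v0) + 1.0)
    return np.clip(th, 1e-12, np.pi / 2 - 1e-12)  # avoid 0/0 at the boundaries

def fields(t):
    """P(t) and S(t) from the Euler-angle expressions."""
    th = theta(t)
    th_dot = (4 / (np.pi * v0)) * th * (np.pi / 2 - th)
    ph = (4 * phi0 / np.pi) * np.sqrt(th * (np.pi / 2 - th))
    ph_dot = (4 * phi0 / np.pi) * (np.pi / 4 - th) / np.sqrt(th * (np.pi / 2 - th)) * th_dot
    P = 2 * (-th_dot / np.tan(ph) * np.sin(th) - ph_dot * np.cos(th))
    S = 2 * (th_dot / np.tan(ph) * np.cos(th) - ph_dot * np.sin(th))
    return P, S

def H(t):
    P, S = fields(t)
    return 0.5 * np.array([[0, P, 0], [P, 0, S], [0, S, 0]], dtype=complex)

def rhs(t, y):  # i d(psi)/dt = H psi, with hbar = 1
    return -1j * (H(t) @ y)

# Fixed-step RK4 propagation starting in |1>
psi = np.array([1, 0, 0], dtype=complex)
steps = 20000
h = T / steps
P2max = 0.0
for k in range(steps):
    t = ti + k * h
    k1 = rhs(t, psi)
    k2 = rhs(t + h / 2, psi + h / 2 * k1)
    k3 = rhs(t + h / 2, psi + h / 2 * k2)
    k4 = rhs(t + h, psi + h * k3)
    psi = psi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    P2max = max(P2max, abs(psi[1]) ** 2)

print(f"final P3 = {abs(psi[2])**2:.6f}")  # ~ 1: complete transfer
print(f"max P2 = {P2max:.4f}, sin^2(phi0) = {np.sin(phi0)**2:.4f}")
```

The maximum of $\phi(\theta)$ is reached at $\theta=\pi/4$, where $\phi=\phi_0$, so the printed transient-population cap matches $\sin^2\phi_0$.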
\begin{figure} \includegraphics[width=\columnwidth]{O2O4P2andAreaVSphi0latex2.eps} \caption{\label{O2O4andAreaVSphi0}Second and fourth orders of infidelity, $\tilde{O}_2$ and $\tilde{O}_4$, maximum excited state population $P_2^{\mathrm{max}}$ and the corresponding generalized area $A_G$ vs the free parameter $\phi_0$.} \end{figure} The relationship between $\phi_0$ and the generalized area of the pulses, $A_G=\int_T\sqrt{P^2+S^2}dt$, corresponds to that which is well known from STIRAP: the higher the area $A_G$ of the pulses, the lower the maximum transient population on the excited state $P_2^{\mathrm{max}}$. This can be noted straightforwardly in Fig.~\ref{O2O4andAreaVSphi0}, from which we can also extract the correspondence between $\phi_0$ and $A_G$. It can be highlighted that the additional amount of pulse area $\Delta A_G=A_G\left(P_2^{\mathrm{max}}(\phi_0)+\Delta P_2^{\mathrm{max}}\right)-A_G\left(P_2^{\mathrm{max}}(\phi_0)\right)$ that would be required to decrease the maximum intermediate state population by a certain amount $\Delta P_2^{\mathrm{max}}$ rises rapidly when considering ever lower values of $\phi_0$, i.e.~$\Delta A_G/\Delta P_2^{\mathrm{max}}\xrightarrow{\ \phi_0\rightarrow0\ }\infty$, thus exhibiting the asymptotic behavior of the adiabatic condition (the adiabatic limit). \subsection{Measurement of robustness} \subsubsection{Single-shot shaped pulses} With the purpose of generating simple pulses, we choose to nullify the terms of the perturbative expansion of the infidelity while maintaining a single control parameter. Since we only have one free variable, we cannot, in general, use it to nullify more than one term; this is visible in Fig.~\ref{O2O4andAreaVSphi0}. However, given the particularity of our control, the absolute value of the perturbations, like the maximum population of the excited state, decreases on average as $\phi_0$ is decreased, in contrast to the increase of the required pulse areas.
We use this feature to restrict our focus to the range of $\phi_0$ corresponding to moderate pulse areas, e.g., $A_G\leq15\pi$, and examine the resultant robustness of the fidelity for the desired transfer. Considering the limited character of a single-parameter parameterization, we opt not to nullify individual terms of the perturbative expansion of the fidelity, but to search for particular values of $\phi_0$ for which the robustness of the transfer presents local maxima. Figure \ref{infidelityVSadiabAreaVSrho} allows us to analyze the dependence of the infidelity $\epsilon$ on the generalized pulse area and the area scaling error $\rho$; this figure presents the contours of the regions with very high fidelities (over 99\%), displaying them via the logarithm of the infidelity at the evaluated conditions, with special attention given to the region of so-called ultra-high fidelity (UH-fidelity), for which the infidelity $\epsilon\equiv1-P_3(t_f)\leq10^{-4}$. \begin{figure} \includegraphics[width=\columnwidth]{infVSadiabAreaVSrhoFILLED300.eps} \caption{\label{infidelityVSadiabAreaVSrho}Contour plot of infidelity $\epsilon$ (log base 10) vs generalized area $A_G$ and area perturbation $\rho$.} \end{figure} The desired robustness can be understood as the insensitivity of the transfer fidelity (above a certain limit, set to $10^{-4}$ in infidelity for UH-fidelity) to different values of $\rho$, i.e.~how large $\rho$ needs to be (qualitatively, around the unperturbed $\rho=0$ condition) for the transfer to fall below the UH-fidelity criterion. In Fig.~\ref{infidelityVSadiabAreaVSrho} we can observe how the robustness, in its qualitative sense from the broader UH-fidelity regions, tends to increase when more energy (or generalized pulse area) is invested.
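The width of the $\rho$-window over which the transfer stays within the UH-fidelity criterion can be extracted from a sampled infidelity curve; the sketch below uses a toy infidelity model $\epsilon\sim(5\rho)^4$ for illustration only, not the actual data behind Fig.~\ref{infidelityVSadiabAreaVSrho}:

```python
import numpy as np

def uh_fidelity_radius(rho, infidelity, threshold=1e-4):
    """Minimum of the largest |rho| reached on each side of rho = 0
    while the infidelity stays at or below the threshold."""
    def reach(side):  # side = +1 scans rho > 0, side = -1 scans rho < 0
        r = 0.0
        for x, eps in sorted(zip(rho, infidelity), key=lambda p: side * p[0]):
            if side * x < 0:
                continue
            if eps > threshold:
                break
            r = abs(x)
        return r
    return min(reach(+1), reach(-1))

# Toy infidelity curve (illustrative, NOT the paper's data):
rho = np.linspace(-0.5, 0.5, 20001)
eps = (5 * rho) ** 4
print(f"radius = {uh_fidelity_radius(rho, eps):.4f}")  # ~0.02, since (5*0.02)^4 = 1e-4
```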
The oscillatory behavior of the robustness is obtained from the oscillations of the infidelity orders $\tilde{O}_n$'s, shown in Fig.~\ref{O2O4andAreaVSphi0}, and the global increase of robustness with $A_G$ from the damping of such oscillations (the asymptotic decrease of the average of the absolute value of the infidelity orders). The asymmetry in Fig.~\ref{infidelityVSadiabAreaVSrho} arises naturally from the fact that a positive $\rho$ increases the effective amplitude of the pulses, decreasing the generalized area required to achieve the UH-fidelity transfer, and vice versa. In order to have a quantitative measure of robustness, appropriate for its exhaustive analysis and for establishing grounds of comparison with other techniques, we extract the maximum absolute area deviation, $\max|\rho|$, at which transfers with ultra-high fidelity are achieved, for $\rho<0$ and $\rho>0$ separately. We refer to the minimum of these two quantities as the UH-fidelity radius; it is shown in Fig.~\ref{minDeltarhovsArea} in comparison with the same measure for Gaussian pulses and adiabatically-optimized pulses built from hypergaussians \cite{Vasilev2009}. We can remark that the discontinuous character of its definition, the operation of obtaining the minimum between the left and right values $|\rho|$ where the infidelity goes over $10^{-4}$, produces a UH-fidelity radius function with discontinuous derivatives. \begin{figure} \includegraphics[width=\columnwidth]{fullcomparisonlatex2.eps} \caption{\label{minDeltarhovsArea} UH-fidelity radius vs generalized area $A_G$. A comparison between selected techniques.} \end{figure} \subsubsection{STIRAP with Gaussian pulses} One of the most commonly used pulse shapes, especially for STIRAP, is Gaussian. Gaussian pulses have three free parameters: peak, waist and delay. The pulse areas $A_P$ and $A_S$ depend on the first two, and the generalized area $A_G$ depends on all three of them.
Fixing the waist, we can control the area by tuning the peak, but the efficiency of the process will also depend greatly on the delay. Thus, we optimize the delay and show the UH-fidelity radius in terms of $A_G$ to serve as a base reference for STIREP in Fig.~\ref{minDeltarhovsArea}. For the Gaussian pulses, we use \begin{subequations} \begin{align} P^{(G)}&=-\Upsilon\exp\left[-(\hat{t}-\tau/2)^2/\sigma^2\right],\\ S^{(G)}&=\Upsilon\exp\left[-(\hat{t}+\tau/2)^2/\sigma^2\right], \end{align} \end{subequations} with $\hat{t}=t-t_i-T/2$, where $\Upsilon$, $\tau$ and $\sigma$ are the peak, delay and waist of the Gaussian pulses, respectively. While setting $\sigma=0.04T$, we restrict the parameters to a set of values that produce moderate-area fields with amplitudes (in absolute value) smaller than $10^{-6}\times\Upsilon$ at the boundaries of the process $[t_i,t_f]$, in order to have a proper numerical implementation with high precision. \subsubsection{Adiabatically-optimized pulses} The conditions for adiabatic optimization of pulse shapes, i.e.~for designing adiabatically-optimal pulse shapes, are shown in \cite{Vasilev2009}, which also proposes a combination of hypergaussian and trigonometric shapes as an example of pulses that fulfill these conditions for UH-fidelity STIRAP. The formulas for these pulses are: \begin{subequations} \begin{align} P^{(O)}&=-\Upsilon\exp\left[-\left(\frac{\hat{t}}{m\sigma}\right)^{2n}\right]\sin\left(\frac{\pi/2}{f(\hat{t})}\right),\\ S^{(O)}&=\Upsilon\exp\left[-\left(\frac{\hat{t}}{m\sigma}\right)^{2n}\right]\cos\left(\frac{\pi/2}{f(\hat{t})}\right), \end{align} \end{subequations} with $f=1+\exp(-\lambda\hat{t}/\sigma)$. The dependence of the transfer robustness on area for a fixed waist $\sigma$, chosen so as to be compared with Gaussian pulses of the same waist, has three remaining free parameters: $m$ (waist factor relative to the Gaussian pulses), $n$ (power of the hypergaussian) and $\lambda$ (speed of change of the trigonometric function).
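Both pulse families are straightforward to tabulate; the sketch below evaluates the shapes and their generalized areas $A_G=\int\sqrt{P^2+S^2}\,dt$. Only $\sigma=0.04T$ follows the text; the peak, delay and the parameter set $(m,n,\lambda)=(1,1,4)$ are illustrative choices:

```python
import numpy as np

T = 1.0
sigma = 0.04 * T                      # waist, as set in the text
Upsilon, tau = 10 * np.pi, 3 * sigma  # peak and delay: illustrative choices only
t_hat = np.linspace(-T / 2, T / 2, 4001)

def generalized_area(P, S):
    """A_G = int sqrt(P^2 + S^2) dt by the trapezoidal rule."""
    y = np.sqrt(P ** 2 + S ** 2)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t_hat))

# Gaussian pair (counter-intuitive ordering: S precedes P)
P_G = -Upsilon * np.exp(-((t_hat - tau / 2) / sigma) ** 2)
S_G = Upsilon * np.exp(-((t_hat + tau / 2) / sigma) ** 2)

# Hypergaussian/trigonometric pair with the sample set m = 1, n = 1, lambda = 4
m, n, lam = 1, 1, 4
f = 1 + np.exp(-lam * t_hat / sigma)
env = Upsilon * np.exp(-(t_hat / (m * sigma)) ** (2 * n))
P_O = -env * np.sin((np.pi / 2) / f)
S_O = env * np.cos((np.pi / 2) / f)

# For the optimized pair, P^2 + S^2 = env^2, so A_G = Upsilon * m * sigma * sqrt(pi)
print(f"A_G (Gaussian)  = {generalized_area(P_G, S_G) / np.pi:.2f} pi")
print(f"A_G (optimized) = {generalized_area(P_O, S_O) / np.pi:.2f} pi")
```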
These adiabatically-optimized pulses are shown in \cite{Vasilev2009} to be superior to Gaussian pulses regarding the pulse area they require to achieve UH-fidelity standards when implemented for STIRAP. Moreover, these pulses are more robust, area-wise, than Gaussians when sufficiently high areas (for UH-fidelity) are used. The UH-fidelity radius of a pair of adiabatically-optimized pulses, labeled as OPT\#1 $(m=1,n=1,\lambda=4)$ and OPT\#2 $(m=1,n=2,\lambda=5)$ for two of the parameter sets (from sets with natural numbers as parameters) performing well at low to moderate pulse areas, is shown in Fig.~\ref{minDeltarhovsArea} for the purpose of comparison. \section{Discussions and conclusion} The UH-fidelity radius for the SSSP pulses developed in this paper, for Gaussian pulses and for adiabatically-optimized pulses is shown as a function of generalized area in Fig.~\ref{minDeltarhovsArea}. SSSP is shown to be superior, for most areas under $A_G\leq15\pi$ at the very least, to the two other methods considered. The maximum of the UH-fidelity radius of SSSP is about 13\% over the Gaussian pulses with the highest performance and almost twice the maximum for the pulses OPT\#2, which is the second best performing technique, even though the latter requires over $2\pi$ higher pulse areas and is supposed to be, in that regard, more adiabatic than the presented single-parameter SSSP. Comparing Fig.~\ref{minDeltarhovsArea} with Fig.~\ref{O2O4andAreaVSphi0} we can discuss the locations of the maxima of the UH-fidelity radius for SSSP. From the low and insufficient pulse areas to the first maximum at about $6\pi$ we are observing the first minimum of the first non-null infidelity order $\tilde{O}_2$. The second most notable peak (neglecting the almost imperceptible one at $7.5\pi$) is located at about $10\pi$, an intermediate position between the second minimum of $\tilde{O}_2$ and the fourth of $\tilde{O}_4$.
Finally, the largest, broadest and most relevant maximum to extract from this paper is located beyond the third minimum of $\tilde{O}_2$ and closer to, presumably, higher infidelity orders $\tilde{O}_n$'s. This UH-fidelity radius maximum at $\sim12\pi$ is the consequence of the simultaneous and local minimization of multiple infidelity orders, and it represents the best robustness obtained for $A_G\leq15\pi$ among the comparable implementations of STIRAP shown in this study. The highest UH-fidelity radius reached by our SSSP, of 22.36\% for $A_G=12.23\pi$ or $\phi_0=0.12815$, generates the pulse shapes shown in Fig.~\ref{time-evolution}, with the corresponding temporal population evolution and the state's projection onto the adiabatic eigenvectors; the time axis is limited to 40\% of the full time interval of duration $T$. The projection of the state's dynamics onto the adiabatic states shows that the system does not follow the dark state along the evolution: it departs from it to populate a superposition of bright states and, even though it comes back to it towards the end of the process, this differentiates it from the ideal STIRAP. In practice, this result would be similar for all counter-intuitively ordered control fields and differ only in the degree in which the excited state is populated during the dynamics. The pulse shapes are quite simple and similar to Gaussians but clearly asymmetric. The absolute value of the pump pulse, $|P|$, is shown instead of its direct value $P$ (as done for $S$) because this simplifies observation, providing the figure with the only relevant information about the pulses: their shapes.
For the same pulse shapes, pulses with equal or different relative signs will lead to identical results for the population fidelity; only the actual states involved would vary between $|1\rangle\rightarrow-|3\rangle$ (or $-|1\rangle\rightarrow|3\rangle$) for $P$ and $S$ of the same sign, and $|1\rangle\rightarrow|3\rangle$ (or $-|1\rangle\rightarrow-|3\rangle$) for $P$ and $S$ of different sign. The population of the excited state finds its maximum in the middle between the pulses, at $t-t_i=T/2$, and it has the reduced maximal value of $P_2=0.016$. \begin{figure} \includegraphics[width=\columnwidth]{timeevolutionwithPROJslatex2.eps} \caption{\label{time-evolution} Time evolution of populations and the corresponding shaped fields, at best performing conditions, i.e.~$A_G\approx12\pi$ (regarding the UH-fidelity radius shown in Fig.~\ref{minDeltarhovsArea}).} \end{figure} The UH-fidelity radius has been defined through the implementation of a Hamiltonian perturbation, shown in \eqref{perturbedHamiltonian}, that can be seen as considering a lack of perfect knowledge over the quantum system while having perfect control over the fields. Some practical examples can readily be provided: \begin{itemize} \item Pump and Stokes beams with equal intensity profiles (like Gaussian profiles with the same waist) interacting with atomic systems with an imperfectly known location \cite{Bergmann1998}. \item Certain variations of the dipole moment of the transitions, such as of their orientation, can affect both pump and Stokes fields in an equal manner. \item All those cases in which both controls are produced by the same source, so that any unexpected deviation affecting field amplitudes would be equal for the fields \cite{Vitanov2017}, such as when the considered transition frequencies are so close to each other that a single field can excite them.
Another case would be when the addressed transitions involve Zeeman sublevels, where the coupling fields are only required to differ in polarization (right- and left-handed circular polarization, for example). Having fields that originate from the same source imposes that they have the same temporal shape, or that they be mirror images of each other if we can use counter-propagating fields. \end{itemize} In conclusion, we have optimized robustness from an exact solution derived from the Lewis-Riesenfeld method with one mode, which allowed a full shaping of the fields. This strongly contrasts with most of the previous attempts at optimizing STIRAP (fidelity and robustness), which were based on the optimization of a set of natural parameters, e.g., delay, waist, amplitude, among others. We have derived a parameterization achieving high robustness for moderate pulse areas. Additionally, this solution opens further prospects for designing various exact and robust solutions based on STIRAP, or its extensions, such as N-pod STIRAP \cite{Rousseaux2013} or other multilevel systems \cite{Hioe1985,Hioe1987,Hioe1988,Guengoerdue2012,Shore2013}. \begin{acknowledgments} This work was supported by the ``Investissements d'Avenir'' program, project ISITE-BFC / IQUINS (ANR-15-IDEX-03), and the EUR-EIPHI Graduate School (17-EURE-0002). X.L.~and S.G.~also acknowledge support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreements No.~765075 (LIMQUET) and 641272 (HICONO). Also, X.C.~acknowledges the following funding: NSFC (11474193), the Shuguang Program (14SG35), the program of Shanghai Municipal Science and Technology Commission (18010500400 and 18ZR1415500). \end{acknowledgments} \input{references.bbl} \end{document}
\section{Introduction} \subsection{Fibonacci Sequence} \quad The Fibonacci sequence is defined by the recursion formula \begin{equation} F_{n+1}=F_{n}+F_{n-1} \label{recursion}, \end{equation} where $F_0=0$, $F_1=1$, $n=1,2,3,...$ The first few Fibonacci numbers are: $$ 0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987... $$ The sequence is named after Leonardo Fibonacci (1170--1250) \cite{fibonacci}. Fibonacci numbers appear in Nature so frequently that they can be considered as Nature's perfect numbers. Another important number of Nature, the Golden ratio, which is seen in every area of life and art and is usually associated with aesthetics, is also related to the Fibonacci sequence. \subsection{Binet Formula} \quad A closed formula for the Fibonacci numbers was derived by Binet in 1843. It has the form \begin{equation} F_n=\frac{\varphi^n-\varphi'^n}{\varphi-\varphi'} \label{binetformula} \end{equation} where $\varphi$ and $\varphi'$ are the roots of the equation $x^2-x-1=0$, having the values;\\ $$\varphi=\frac{1+\sqrt{5}}{2}\approx 1,6180339.. \,\,\,\, \mbox{and} \,\,\,\, \varphi'=\frac{1-\sqrt{5}}{2}\approx -0,6180339..$$\\ From this formula, due to the irrational character of the Golden ratio $\varphi$ and of $\varphi'$, it is not at all evident that the $F_n$ are integer numbers, though it is clear from the recursion formula (\ref{recursion}).\\ The number $\varphi$ is called the Golden ratio. The Binet formula allows one to find Fibonacci numbers directly, without using the recursion, e.g.~$F_{98}$, $F_{50}$, ... For example, to find $F_{20}$ by using the Binet formula we have \\ \begin{equation} F_{20}=\frac{\varphi^{20}-\varphi'^{20}}{\varphi-\varphi'}=6765.
\nonumber \\ \end{equation} The Binet formula also allows one to define Fibonacci numbers for negative $n$:\\ \begin{equation} F_{-n}=\frac{\varphi^{-n}-\varphi'^{-n}}{\varphi-\varphi'}=\frac{\frac{1}{\varphi^n}-\frac{1}{\varphi'^n}}{\varphi-\varphi'}=\frac{\varphi'^{n}-\varphi^{n}}{\varphi-\varphi'}\frac{1}{(\varphi\varphi')^n} \:.\nonumber\\ \end{equation}\\ Since $\varphi\varphi'=-1$,\\ \begin{equation} F_{-n}=-\frac{\varphi^{n}-\varphi'^{n}}{\varphi-\varphi'}\frac{1}{(-1)^n}=-F_{n}\frac{1}{(-1)^n}=-F_{n}(-1)^n=(-1)^{n+1}F_{n}. \nonumber \\ \end{equation} So we have\\ \begin{equation} F_{-n}=(-1)^{n+1}F_{n} \label{relationwith negative index} \end{equation} \section{Generalized Fibonacci numbers} \qquad Here we are going to study different generalizations of Fibonacci numbers. As a first generalization, by choosing different initial numbers $G_0$ and $G_1$, but preserving the recursion formula (\ref{recursion}), we can define so-called generalized Fibonacci numbers. For example, if $G_0=0$, $G_1=4$ we have the sequence\\ $$0,4,4,8,12,20,32,52,\ldots$$\\\qquad \subsection{Addition of two Fibonacci sequences} Let us consider generalized Fibonacci numbers with the recursion formula $G_{n+1}=G_{n}+G_{n-1}$ and arbitrary initial numbers $G_0$ and $G_1$:\\ $G_0=G_0$ \\ $G_1=G_1$ \\ $G_2=G_1+G_0=F_2G_1+F_1G_0$ \\ $G_3=G_2+G_1=2G_1+G_0=F_3G_1+F_2G_0$ \\ $G_4=G_3+G_2=3G_1+2G_0=F_4G_1+F_3G_0$ \\ $G_5=G_4+G_3=5G_1+3G_0=F_5G_1+F_4G_0$ \\ \phantom{abc}.\\ \phantom{abc}.\\ Then \begin{equation} G_n=G_1F_n+G_0F_{n-1} \label{2rec.for.} \end{equation} is obtained. This shows that $G_n$ is a linear combination of two Fibonacci sequences. 
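As a quick numerical sketch (not part of the derivation; function names are illustrative), the superposition $G_n=G_1F_n+G_0F_{n-1}$ can be checked for the example $G_0=0$, $G_1=4$:

```python
# Check G_n = G_1*F_n + G_0*F_{n-1} for generalized Fibonacci numbers.
def fib(n):
    a, b = 0, 1  # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

def gen_fib(g0, g1, n):
    """First n+1 terms of the sequence with G_{n+1} = G_n + G_{n-1}."""
    seq = [g0, g1]
    for _ in range(n - 1):
        seq.append(seq[-1] + seq[-2])
    return seq

G0, G1 = 0, 4
G = gen_fib(G0, G1, 10)
for n in range(1, 11):
    assert G[n] == G1 * fib(n) + G0 * fib(n - 1)
print(G[:8])  # [0, 4, 4, 8, 12, 20, 32, 52]
```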
\\ Substituting the Binet formula for $F_n$ and $F_{n-1}$ gives\\ \begin{align} G_n&=G_1F_n+G_0F_{n-1}, \nonumber \\ G_n&=G_1\frac{\varphi^n-\varphi'^n}{\varphi-\varphi'}+G_0\frac{\varphi^{n-1}-\varphi'^{n-1}}{\varphi-\varphi'}, \nonumber \\ G_n&=\frac{1}{\varphi-\varphi'}\left[(G_1\varphi^n+G_0\varphi^{n-1})-(G_1\varphi'^n+G_0\varphi'^{n-1})\right], \nonumber \\ G_n&=\frac{1}{\varphi-\varphi'}\left[(G_1\varphi^n+G_0\varphi^{n}\frac{1}{\varphi})-(G_1\varphi'^n+G_0\varphi'^{n}\frac{1}{\varphi'})\right],\,\, \mbox{and since\,\,} \varphi'=-\frac{1}{\varphi}, \nonumber \\ G_n&=\frac{1}{\varphi-\varphi'}\left[(G_1\varphi^n-G_0\varphi^{n}\varphi')-(G_1\varphi'^n-G_0\varphi'^{n}\varphi)\right] \nonumber \\ G_n&=\frac{(G_1-\varphi'G_0)\varphi^n-(G_1-\varphi G_0)\varphi'^n}{\varphi-\varphi'}. \label{bin.form.for.gen.fib.} \end{align} \\ \phantom{abh}We call equation (\ref{bin.form.for.gen.fib.}) the Binet-type formula for generalized Fibonacci numbers. Note that if $G_0=0$ and $G_1=1$, it reduces to the Binet formula (\ref{binetformula}) for Fibonacci numbers.\\ Now let us verify in another way that the recursion $G_{n+1}=G_{n}+G_{n-1}$ is consistent with equation (\ref{bin.form.for.gen.fib.}). Suppose\\ $$G_{n+1}=AG_{n}+BG_{n-1}.$$\\ \phantom{abh}We must show that $A=B=1$. Writing the corresponding representation (\ref{2rec.for.}) for the generalized Fibonacci numbers,\\ $$G_1F_{n+1}+G_0F_{n}=A(G_1F_n+G_0F_{n-1})+B(G_1F_{n-1}+G_0F_{n-2}),$$\\ $$G_1(F_n+F_{n-1})+G_0(F_{n-1}+F_{n-2})=G_1(AF_{n}+BF_{n-1})+G_0(AF_{n-1}+BF_{n-2}).$$ Comparing the coefficients of $F_n$, $F_{n-1}$ and $F_{n-2}$ on both sides, we conclude that $A=B=1$.\\ \subsection{Arbitrary linear combination} As we have seen, a linear combination (\ref{2rec.for.}) of two Fibonacci sequences, shifted by one in the index $n$, produces the generalized Fibonacci numbers. 
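The Binet-type formula above can also be checked numerically (a floating-point sketch; `phi` and `phip` denote $\varphi$ and $\varphi'$):

```python
# Numerical check of the Binet-type formula for generalized Fibonacci numbers.
phi = (1 + 5 ** 0.5) / 2   # golden ratio
phip = (1 - 5 ** 0.5) / 2  # conjugate root

def gen_fib_binet(g0, g1, n):
    """((G1 - phi'*G0)*phi^n - (G1 - phi*G0)*phi'^n) / (phi - phi')."""
    return ((g1 - phip * g0) * phi ** n - (g1 - phi * g0) * phip ** n) / (phi - phip)

# Compare against the recursion G_{n+1} = G_n + G_{n-1}.
g0, g1 = 3, 7
seq = [g0, g1]
for _ in range(15):
    seq.append(seq[-1] + seq[-2])
for n, g in enumerate(seq):
    assert abs(gen_fib_binet(g0, g1, n) - g) < 1e-6
```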
Now let us consider a more general case, with generalized Fibonacci numbers defined as $$G_n^{(k)}=\alpha_0F_n+\alpha_1F_{n+1}+...+\alpha_kF_{n+k}.$$\\ Here the sequence $G_n^{(k)}$ is determined by the coefficients of a polynomial of degree $k$: \\ $$P_k(x)=\alpha_0+\alpha_1 x+...+\alpha_k x^{k}.$$\\ As is easy to see, the sequence $G_{n}^{(k)}$ satisfies the standard recursion formula \\ \begin{equation} G_{n+1}^{(k)}=G_n^{(k)}+G_{n-1}^{(k)}. \end{equation} \\ By using the Binet formula for Fibonacci numbers we can derive a Binet-type formula for our generalized Fibonacci numbers as follows:\\ \begin{equation} G_n^{(k)}=\frac{\varphi^nP_k(\varphi)-\varphi'^nP_k(\varphi')}{\varphi-\varphi'} \label{implct.form.of.most.gen.case.for.shfted.gen.nbs}.\\ \end{equation} \subsubsection{Binet formula and dual polynomials} The above Binet-type formula (\ref{implct.form.of.most.gen.case.for.shfted.gen.nbs}) can be rewritten in terms of dual polynomials.\\ \textbf{Definition:}\quad For a given polynomial of degree $k$, \begin{equation} P_k(x)=\sum^k_{l=0}a_l x^l \end{equation} we define the dual polynomial of degree $k$ by \begin{equation} \tilde P_k(x)=\sum^k_{l=0}a_{k-l} x^l. \end{equation} It means that if $P_k(x)$ corresponds to the vector of coefficients $(a_0,a_1,...,a_k)$, then the dual $\tilde P_k(x)$ corresponds to the reversed vector $(a_k,a_{k-1},...,a_0)$. These polynomials are related by the formula \begin{equation} P_k(x)=x^k \tilde P_k\left(\frac{1}{x}\right). 
\end{equation} Let us consider the polynomial\\ $$P_k(\varphi)=\alpha_0+\alpha_1 \varphi+...+\alpha_k \varphi^{k},$$ \\ then $$P_k(\varphi)=\varphi^k\left(\alpha_0\frac{1}{\varphi^k}+\alpha_1\frac{1}{\varphi^{k-1}}+...+\alpha_k\right)\;\;\;\mbox{or, by using}\;\;\;\varphi'=-\frac{1}{\varphi},$$ \\ $$P_k(\varphi)=\varphi^k\left(\alpha_0(-\varphi')^{k}+\alpha_1(-\varphi')^{k-1}+...+\alpha_k\right),$$ \\ and $$P_k(\varphi)=\varphi^k\left(\alpha_k+\alpha_{k-1}(-1)\varphi'+\alpha_{k-2}(-1)^2\varphi'^2+...+\alpha_1(-1)^{k-1}\varphi'^{k-1}+ \alpha_0(-1)^{k}\varphi'^k\right).$$ \\ Thus, finally we have\\ $$P_{\alpha_0,\alpha_1,...,\alpha_k}(\varphi)=\varphi^k \tilde P_{\alpha_k,-\alpha_{k-1},(-1)^2\alpha_{k-2},...,(-1)^{k-1}\alpha_1,(-1)^k\alpha_0}(\varphi').$$\\ It can be written as \\ $$P_k(\varphi)=\varphi^k \tilde{P}_k(-\varphi'),$$ where $\tilde{P}_k(-\varphi')$ is the dual polynomial evaluated at $-\varphi'$. \\ \phantom{ab}By following the same procedure, starting from $P_k(\varphi')$ we easily obtain\\ $$P_k(\varphi')=\varphi'^k \tilde{P}_k(-\varphi).$$\\ Therefore, for given polynomials with arguments $\varphi$ or $\varphi'$ we have the dual ones with arguments $-\varphi'$ and $-\varphi$, respectively. This allows us to write the Binet-type formula for generalized Fibonacci numbers in two different forms:\\ $$G_n^{(k)}=\frac{\varphi^nP_k(\varphi)-\varphi'^nP_k(\varphi')}{\varphi-\varphi'}=\frac{\varphi^{n+k}\tilde{P}_k(-\varphi')-\varphi'^{n+k}\tilde{P}_k(-\varphi)}{\varphi-\varphi'}.$$ \section{Fibonacci Polynomials} \qquad In the previous section we studied generalized Fibonacci numbers with the Fibonacci recursion formula. Here we generalize the recursion formula itself by introducing two arbitrary parameters; the corresponding sequence then consists of polynomials in two variables, called Fibonacci polynomials. Let $a$ and $b$ be the (real) roots of the second-order equation\\ $(x-a)(x-b) = x^2-(a+b)x+ab=0 $. 
Let us set $ a+b=p $ and $ ab=-q $. Then\\ $$ x^2-px-q=0 .$$ Since both $a$ and $b$ are roots, they satisfy the equation:\\ $$ a^2-pa-q=0 \qquad \mbox{and} \qquad b^2-pb-q=0. $$\\ Starting from $a^2-pa-q=0$, we get a recursion for the powers of $a$:\\ $a^2=pa+q$ \\ $a^3=(p^2+q)a+qp$ \\ $a^4=p(p^2+2q)a+q(p^2+q)$ \\ $a^5=[p^2(p^2+2q)+q(p^2+q)]a+qp(p^2+2q).$ \\ Introducing the sequence of polynomials of two variables $F_n(p,q)$, \\ $F_0(p,q)=0$\\ $F_1(p,q)=1$\\ $F_2(p,q)=p$\\ $F_3(p,q)=p^2+q$\\ $F_4(p,q)=p(p^2+2q)$\\ $F_5(p,q)=p^2(p^2+2q)+q(p^2+q)$\\ $F_6(p,q)=p^3(p^2+2q)+2qp(p^2+q)+q^2p$ \\ $F_7(p,q)=p^4(p^2+2q)+3p^4q+6p^2q^2+q^3$\\ \phantom{abc}.\\ \phantom{abc}.\\ we get the $n^{th}$ power by recursion and find \begin{equation} a^n=F_n(p,q)a+qF_{n-1}(p,q) \label{nthpower}.\\ \end{equation} Applying the same procedure for $b$, we obtain $b^n=F_n(p,q)b+qF_{n-1}(p,q)$. Both formulas can be proved by induction.\\ Subtracting $b^n$ from $a^n$, we get the Binet formula for these polynomials:\\ \begin{equation} F_n{(p,q)}=\frac{a^n-b^n}{a-b}, \label{generalizedbinet} \end{equation}\\ where $a,b=\frac{p}{2}\pm\sqrt{\frac{p^2}{4}+q}.$\\ This way we get a Binet-type formula for the sequence of Fibonacci polynomials.\\ Now we derive the recursion formula for Fibonacci polynomials:\\ $$F_{n+1}(p,q)=AF_{n}(p,q)+BF_{n-1}(p,q).$$\\ To find the coefficients $A$ and $B$ in the above equation we substitute the Binet formula (\ref{generalizedbinet}) into $F_{n+1},F_{n},F_{n-1}$. Thus we have\\ $$a^{n+1}=Aa^n+Ba^{n-1},$$\\ $$b^{n+1}=Ab^n+Bb^{n-1},$$ or equivalently\\ $$a^2=Aa+B,$$\\ $$b^2=Ab+B.$$\\ Since we found that $a^2=pa+q$ (from the quadratic equation), by substitution\\ $$a^2=Aa+B \; \longrightarrow pa+q=Aa+B \; \longrightarrow \; A=p \; \mbox{and} \; B=q.$$\\ Similarly, from $b^2=Ab+B$ we get the same result. 
Now we have\\ \begin{equation} F_{n+1}(p,q)=pF_{n}(p,q)+qF_{n-1}(p,q). \end{equation}\\ Thus we obtained the recursion relation for the sequence of Fibonacci polynomials. If $p$ and $q$ are arbitrary integer numbers, then we get a sequence of integer numbers,\\ $F_0(p,q)=0$\\ $F_1(p,q)=1$\\ $F_2(p,q)=p$\\ $F_3(p,q)=p^2+q$\\ $F_4(p,q)=p(p^2+2q)$\\ $F_5(p,q)=p^2(p^2+2q)+q(p^2+q)$\\ $F_6(p,q)=p^3(p^2+2q)+2qp(p^2+q)+q^2p$ \\ $F_7(p,q)=p^4(p^2+2q)+3p^4q+6p^2q^2+q^3$\\ \phantom{abc}.\\ \phantom{abc}.\\ which we call Fibonacci polynomial numbers.\\ When we choose $p=q=1$, the recursion becomes the standard one and the Fibonacci numbers appear, so $F_n(1,1)=F_n.$ \subsection{Generalized Fibonacci Polynomials} Fibonacci polynomials with initial values $F_0=0$ and $F_1=1$ can be generalized to arbitrary initial values $G_0$ and $G_1$. So we define generalized Fibonacci polynomials $G_n(p,q)$ by the recursion formula \begin{equation} G_{n+1}(p,q)=pG_{n}(p,q)+qG_{n-1}(p,q), \label{gfp1} \end{equation} with initial values \begin{equation} G_{0}(p,q)=G_{0},\qquad G_{1}(p,q)=G_{1}. \label{gfp2} \end{equation} It is easy to show that generalized Fibonacci polynomials can be represented as a superposition of Fibonacci polynomial sequences:\\ \begin{equation} G_{n}(p,q)=G_{1}F_{n}(p,q)+qG_{0}F_{n-1}(p,q). \label{gfp3} \end{equation} For generalized Fibonacci polynomials we find the following Binet-type formula:\\ $$G_{n}(p,q)=\frac{(G_1-bG_0)a^n-(G_1-aG_0)b^n}{a-b},$$\\ where\;\; $a,b=\frac{p}{2}\pm\sqrt{\frac{p^2}{4}+q}.$\\ The first few generalized Fibonacci polynomials are:\\ $G_0(p,q)=G_0$ \\ $G_1(p,q)=G_1$ \\ $G_2(p,q)=G_1p+qG_0$ \\ $G_3(p,q)=G_1(p^2+q)+qpG_0$ \\ $G_4(p,q)=G_1(p(p^2+q)+qp)+q(p^2+q)G_0$ \\ $G_5(p,q)=G_1(p^2(p^2+2q)+q(p^2+q))+q(p(p^2+q)+qp)G_0$ \\ \phantom{abc}.\\ \phantom{abc}.\\ If the initial values $G_0$ and $G_1$ are integers, as well as the coefficients $p$ and $q$, then we get generalized Fibonacci polynomial numbers. 
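As a numerical sketch (function names are illustrative), the recursion, the superposition formula, and the Binet-type formula for $G_n(p,q)$ can be cross-checked for integer $p$, $q$:

```python
# Cross-check recursion, superposition and Binet-type formula for G_n(p, q).
def fib_poly(p, q, n):
    """Fibonacci polynomial numbers F_n(p, q) via F_{n+1} = p F_n + q F_{n-1}."""
    a, b = 0, 1  # F_0(p,q), F_1(p,q)
    for _ in range(n):
        a, b = b, p * b + q * a
    return a

def gen_fib_poly(p, q, g0, g1, n):
    """Generalized Fibonacci polynomial numbers with the same recursion."""
    a, b = g0, g1
    for _ in range(n):
        a, b = b, p * b + q * a
    return a

p, q, g0, g1 = 2, 3, 1, 5
for n in range(1, 12):
    # superposition: G_n = G_1 F_n + q G_0 F_{n-1}
    assert gen_fib_poly(p, q, g0, g1, n) == g1 * fib_poly(p, q, n) + q * g0 * fib_poly(p, q, n - 1)
    # Binet-type formula with roots a, b of x^2 - p x - q = 0
    ra = p / 2 + (p * p / 4 + q) ** 0.5
    rb = p / 2 - (p * p / 4 + q) ** 0.5
    binet = ((g1 - rb * g0) * ra ** n - (g1 - ra * g0) * rb ** n) / (ra - rb)
    assert abs(binet - gen_fib_poly(p, q, g0, g1, n)) < 1e-6
assert fib_poly(1, 1, 10) == 55  # p = q = 1 recovers the Fibonacci numbers
```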
\\ As a special case, if we choose $G_0=0$ and $G_1=p=q=1$, then we get the sequence of Fibonacci numbers. \\ \section{Applications} \subsection{Fibonacci subsequences} \qquad If we consider a subsequence of the Fibonacci sequence, then this subsequence satisfies some rules which, in general, are different from the Fibonacci addition formula. Here we consider a special type of subsequence, generated by equidistant, or congruent, indices. Let us consider the sequence $G_n$ as the subsequence of Fibonacci numbers $G_n=F_{3n}$, which corresponds to the equidistant integers $3,6,\ldots,3n$, i.e., to indices congruent to $0 \pmod 3$. We would like to know the recursion relation and the corresponding initial conditions for this sequence. For this, we have to find the coefficients $A$ and $B$ in the equation \begin{equation} G_{n+1}=AG_{n}+BG_{n-1} \label{triplecoefcnt.} \end{equation} or equivalently,\quad $F_{3n+3}=AF_{3n}+BF_{3n-3}$. By using the Fibonacci recursion we rewrite $F_{3n+3}$ in terms of the basis $F_{3n}$ and $F_{3n-3}$ as follows: \begin{align} \quad F_{3n+3}&=(F_{3n+2})+F_{3n+1} \nonumber \\ &=(F_{3n+1}+F_{3n})+F_{3n+1} \nonumber \\ &=2(F_{3n+1})+F_{3n} \nonumber\\ &=2(F_{3n}+F_{3n-1})+F_{3n} \nonumber\\ &=3F_{3n}+2(F_{3n-1}) \nonumber \\ &=3F_{3n}+2(F_{3n-2}+F_{3n-3})\nonumber \\ &=3F_{3n}+2F_{3n-3}+(2F_{3n-2})\nonumber \end{align} \begin{align} \qquad \;\;\;\;\;\;\;\;\; &=3F_{3n}+2F_{3n-3}+(F_{3n}-F_{3n-3}) \nonumber \\ &=4F_{3n}+F_{3n-3}. \nonumber \end{align} As a result we find the recursion formula $$G_{n+1}=4G_{n}+G_{n-1}.$$ This recursion formula shows that this sequence is a generalized Fibonacci polynomial sequence with initial values $G_0=0$, $G_1=2$ and $p=4$, $q=1$. \subsubsection{Equi-Fibonacci Sequences} Here we introduce equidistant numbers determined by the formula $kn+\alpha$, where $\alpha=0,1,2,...,k-1$ and $k=1,2,...$ are fixed numbers, and $n=0,1,2,3,...$. The distance between consecutive such numbers is $k$; this is why we call them equidistant numbers. 
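The recursion derived above for the subsequence $F_{3n}$ can be confirmed directly (a quick numerical sketch):

```python
# Verify F_{3(n+1)} = 4*F_{3n} + F_{3(n-1)} for the subsequence G_n = F_{3n}.
fibs = [0, 1]
while len(fibs) < 40:
    fibs.append(fibs[-1] + fibs[-2])

G = [fibs[3 * n] for n in range(13)]  # F_0, F_3, F_6, ...
for n in range(1, 12):
    assert G[n + 1] == 4 * G[n] + G[n - 1]
print(G[:5])  # [0, 2, 8, 34, 144]
```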
These equidistant numbers are the numbers congruent to $\alpha \pmod k$. The Fibonacci numbers $F_{kn+\alpha}$ corresponding to such equidistant indices we call equi-Fibonacci numbers. Now we are going to show that an equi-Fibonacci subsequence of the Fibonacci sequence is a generalized Fibonacci polynomial number sequence.\\ Let us consider the subsequence of Fibonacci numbers determined by the formula ${G^{(k;\alpha)}_n} \equiv F_{kn+\alpha}$. This subsequence satisfies the recursion formula given by the next theorem.\\ \textbf{THEOREM 1:}\quad The subsequence ${G^{(k;\alpha)}_n} \equiv F_{kn+\alpha}$ satisfies the recursion formula \begin{equation} G^{(k;\alpha)}_{n+1}=L_k{G^{(k;\alpha)}_n}+(-1)^{k-1}{G^{(k;\alpha)}_{n-1}}, \label{section:Triple relations1} \end{equation} where the $L_k$ are Lucas numbers. Therefore it is given by generalized Fibonacci polynomials with integer arguments as \begin{equation} G^{(k;\alpha)}_{n}=F_{kn+\alpha}=G_n{(L_k,(-1)^{k-1})}. \end{equation} \\ For the proof see Appendix 7.1.\\ As an example we apply formula (\ref{section:Triple relations1}) to the sequence $G^{(3;0)}_{n}=F_{3n}$: $$G^{(3;0)}_{n+1}=L_3G^{(3;0)}_{n}+(-1)^{3-1}G^{(3;0)}_{n-1}=4G^{(3;0)}_{n}+G^{(3;0)}_{n-1},$$ the same recursion relation as before.\\ The same recursion is also valid for the sequences $G^{(3;1)}_n=F_{3n+1}$ and $G^{(3;2)}_n=F_{3n+2}$.\\ As we can see from these sequences,\\ $G^{(3;0)}_n=F_{3n}=0,2,8,34,..$\\ $G^{(3;1)}_n=F_{3n+1}=1,3,13,55,..$\\ $G^{(3;2)}_n=F_{3n+2}=1,5,21,89,..$\\ each of the three sequences starts with different initial values, and together they cover the whole Fibonacci sequence.\\ In general, the recursion that generates the sequence $F_{kn}$ is also valid for the sequences $F_{kn+1}$,...,$F_{kn+(k-1)}$, and all these sequences begin 
with different initial conditions. Together they cover the whole Fibonacci sequence. \\ As an example, for even indices $n=2m$ the subsequence of Fibonacci numbers $F_{2m}=G_n$ satisfies the recursion formula $$G_{n+1}=3 G_{n}-G_{n-1}.$$\\ For odd indices $n=2m+1$, the subsequence of Fibonacci numbers $F_{2m+1}=G_n$ satisfies the same recurrence relation.\\ For the case $k=3$ we have three subsequences of natural numbers, $n=3m, 3m+1, 3m+2$\,; the corresponding subsequences of Fibonacci numbers $F_{3m},\,F_{3m+1},\,F_{3m+2}$ satisfy the recursion formula $$G_{n+1}=4 G_{n}+G_{n-1}.$$\\ For general $k$, for the subsets of natural numbers $km,km+1,...,km+k-1$ we have subsequences of Fibonacci numbers satisfying the recursion formula $$G_{n+1}=L_k G_{n}+(-1)^{k-1}G_{n-1}.$$\\ The scheme below shows the valid recursion relations for these sequences, with the respective $\alpha$ values:\\ \quad SEQUENCES \hspace{1cm} DIFFERENCE \hspace{1cm} VALID RECURSION RELATION \\ $$\quad \quad \hspace{0.5cm} G^{(1,0)}_n=\{F_{n}\} \quad \quad\hspace{1.5cm} 0 \quad \quad \hspace{1cm} G_{n+1}^{(1;0)}=G_{n}^{(1;0)}+G_{n-1}^{(1;0)}\hspace{0.25cm} \;k=1,\alpha=0$$ \\ $$\qquad G^{(2,\alpha)}_n=\{F_{2n},F_{2n+1}\} \quad \hspace{1.05cm}1 \quad \quad \quad \quad G_{n+1}^{(2;\alpha)}=3G_{n}^{(2;\alpha)}-G_{n-1}^{(2;\alpha)}\hspace{0.5cm} \;k=2,\alpha=0,1$$ \\ $$\,\,G^{(3,\alpha)}_n=\{{F_{3n},F_{3n+1},F_{3n+2}}\} \hspace{1.05cm}2 \quad \quad \quad G_{n+1}^{(3;\alpha)}=4G_{n}^{(3;\alpha)}+G_{n-1}^{(3;\alpha)}\hspace{0.5cm} \;k=3,\alpha=0,1,2$$ \\ \phantom{abc}.\\ \phantom{abc}.\\$$G^{(k,\alpha)}_n=\{{F_{kn},...,F_{kn+(k-1)}}\}\quad \hspace{0.5cm}k-1\qquad G_{n+1}^{(k;\alpha)}=L_kG_{n}^{(k;\alpha)}+(-1)^{k-1}G_{n-1}^{(k;\alpha)}\hspace{0.3cm}\;k,\alpha=0...k-1$$\\ \subsubsection{Equi-Fibonacci superposition} Equi-Fibonacci numbers are determined in terms of generalized Fibonacci polynomials as follows, \begin{equation} G^{(k;\alpha)}_{n}=F_{kn+\alpha}=G_n{(L_k,(-1)^{k+1})} \label{super} \end{equation} 
with initial values $$G_{0}^{(k,\alpha)}=F_\alpha \;\;,\;\; G_{1}^{(k,\alpha)}=F_{\alpha+k}.$$\\ Then, by using the superposition formula for generalized Fibonacci polynomials (\ref{gfp3}), we get the equi-Fibonacci numbers as the superposition $$G^{(k;\alpha)}_{n}=F_{kn+\alpha}=F_{\alpha+k}F_n^{(k)}+F_{\alpha}(-1)^{k+1}F_{n-1}^{(k)}$$ of higher-order Fibonacci numbers $F_n^{(k)}\equiv F_n(L_k,(-1)^{k+1})$. An easy calculation shows that the higher-order Fibonacci numbers can be written as the ratio\\ $$F_n^{(k)}=\frac{(\varphi^k)^n-(\varphi'^k)^n}{\varphi^k-\varphi'^k}=\frac{F_{nk}}{F_k}.$$ These higher-order Fibonacci numbers satisfy the same recursion formula (\ref{section:Triple relations1}). \subsection{Limit of Ratio Sequences} The Binet formula for Fibonacci numbers allows one to find the limit of the ratio sequence $U_n=\frac{F_{n+1}}{F_{n}}$ as the Golden ratio. Here we derive similar limits for generalized Fibonacci numbers and Fibonacci polynomials. \subsubsection{Fibonacci Sequence and Golden Ratio} \quad The Golden ratio appears many times in geometry, art, architecture and other areas, in the analysis of the proportions of natural objects. \\ There is a hidden beauty in the Fibonacci sequence. 
As $n$ goes to infinity, the ratio of any two successive Fibonacci numbers becomes closer and closer to the Golden ratio $\varphi\approx 1.618$.\\ Mathematically we can see it as follows:\\ $$\mbox{since} \quad \varphi'=-\frac{1}{\varphi}\,, \quad |\varphi'|<1 \;\Rightarrow\; \lim_{n{\to \infty}} \varphi'^{n+1}=\lim_{n{\to \infty}} \varphi'^{n}=0,$$\\ so $$\lim_{n{\to \infty}} \frac{F_{n+1}}{F_{n}}=\lim_{n{\to \infty}} \frac{\varphi^{n+1}-\varphi'^{n+1}}{\varphi^{n}-\varphi'^{n}} =\lim_{n{\to \infty}} \frac{\varphi^{n+1}}{\varphi^{n}}=\varphi.$$ \\ \subsubsection{Two Fibonacci subsequences} For $$G_n=\frac{(G_1-\varphi'G_0)\varphi^n-(G_1-\varphi G_0)\varphi'^n}{\varphi-\varphi'},$$ $$\lim_{n{\to \infty}} \frac{G_{n+1}}{G_n}=\varphi.$$ \subsubsection{Arbitrary number of subsequences} For $$G_n^{(k)}=\frac{\varphi^nP_k(\varphi)-\varphi'^nP_k(\varphi')}{\varphi-\varphi'},$$ $$\lim_{n{\to \infty}} \frac{G^{(k)}_{n+1}}{G^{(k)}_n}=\varphi.$$ \subsubsection{Fibonacci polynomials} From the Binet formula for Fibonacci polynomials,\\ $$F_{n}{(p,q)}=\frac{a^n-b^n}{a-b},\,\quad\mbox{where}\,\quad a,b=\frac{p}{2}\pm\sqrt{\frac{p^2}{4}+q},$$\\ we get the limit (for $p,q>0$, so that $a>|b|$)\\ $$\lim_{n{\to \infty}} \frac{F_{n+1}{(p,q)}}{F_{n}{(p,q)}}=\mbox{max}(a,b)=a.$$ \section{Binet-Fibonacci Curve} \subsection{Fibonacci Numbers of Real argument} \quad If in the Binet formula we consider the index $t$ as an integer, the corresponding Fibonacci numbers will be integers,\\ $$ t \in \mathbb{Z} \rightarrow F_{t} \in \mathbb{Z}. $$ \\ When we choose $t$ as a real number, the corresponding $F_t$ will lie in the complex plane \cite{PN}. Thus we have\\ $$t \in \mathbb{R} \rightarrow F_{t} \in \mathbb{C}. 
$$ \\ Let us analyze the second case:\\ $$F_t=\frac{\varphi^t-\varphi'^t}{\varphi-\varphi'}, \quad \mbox{where} \; t \in \mathbb{R}.$$\\ In this formula we now have a multi-valued function, $$(-1)^t=e^{t \log(-1)}=e^{i \pi t(2n+1)},$$ where $n=0,\pm1,\pm2,...$ In the following calculations we choose only one branch of this function, with $n=0$:\\ $$\varphi'^t=\left(\frac{-1}{\varphi}\right)^t=\frac{(-1)^t}{\varphi^t}=\frac{e^{i \pi t}}{\varphi^t}.$$\\ Then \begin{align} F_{t}=&\frac{\varphi^t-(\frac{-1}{\varphi})^t}{\varphi-\varphi'} \nonumber \\ =&\frac{e^{t\ln(\varphi)}-e^{i \pi t}\varphi^{-t}}{\varphi+\frac{1}{\varphi}} \nonumber \\ =&\frac{1}{\varphi+\varphi^{-1}}\left(e^{t\ln(\varphi)}-e^{i \pi t}\,e^{-t\ln(\varphi)}\right) \nonumber \\ =&\frac{1}{\varphi+\varphi^{-1}}\left(e^{t\ln(\varphi)}-(\cos(\pi t)+i\sin(\pi t))e^{-t\ln(\varphi)}\right) \nonumber \\ =&\frac{1}{\varphi+\varphi^{-1}}\left(e^{t\ln(\varphi)}-\cos(\pi t)e^{-t\ln(\varphi)}\right)-\frac{i}{\varphi+\varphi^{-1}}\sin(\pi t)e^{-t\ln(\varphi)}. \nonumber \end{align} \\ This way we have a complex-valued function $F_{t}$ of the real variable $t$. It describes a curve in the complex plane, parameterized by $t$. This curve, for $-\infty<t<\infty$, we call the ``Binet-Fibonacci curve''. Its parametric form is\\ $$F_{t}=\left(x(t),y(t)\right),$$\\ $$Re(F_t)=x(t)=\frac{1}{\varphi+\varphi^{-1}}\left(e^{t\ln(\varphi)}-\cos(\pi t)e^{-t\ln(\varphi)}\right),$$\\ $$Im(F_t)=y(t)=\frac{1}{\varphi+\varphi^{-1}}\left(-\sin(\pi t)e^{-t\ln(\varphi)}\right).$$\\ We plot this curve for $0<t<\infty$ in Figure 1, using Mathematica. 
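A short numerical sketch of this parameterization (using Python's complex arithmetic; the helper name `F` is illustrative) confirms that at integer $t$ the curve passes through the Fibonacci numbers:

```python
import cmath
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio; note PHI + 1/PHI = sqrt(5)

def F(t):
    """Binet formula extended to real t, principal branch: (-1)^t = exp(i*pi*t)."""
    return (PHI ** t - cmath.exp(1j * math.pi * t) * PHI ** (-t)) / (PHI + 1 / PHI)

# At integer t the imaginary part vanishes and we recover Fibonacci numbers.
fibs = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
for n, f in enumerate(fibs):
    z = F(n)
    assert abs(z.imag) < 1e-9
    assert abs(z.real - f) < 1e-9
```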
\begin{center}\vspace{0.25cm} \includegraphics[width=1\linewidth]{Binetforreal} \captionof{figure}{Binet-Fibonacci Oscillating Curve (B.F.O.C.)} \end{center}\vspace{0.25cm} The curve intersects the real axis at the Fibonacci numbers $F_n$, corresponding to integer values of the parameter $t=n$. \\ Below we discuss some properties of this curve. \subsection{Area Sequences for Binet-Fibonacci\\ Oscillating Curve} \quad Here we analyze the sequence of areas under the Binet-Fibonacci Oscillating curve. We use the area formula of Green's theorem to calculate the area of the segments:\\ $$A_{n,n+1}=\frac{1}{2} \int_n^{n+1} (x dy-y dx).$$ \\ Rewriting $dy=\left(\frac{dy}{dt}\right)dt$ and $dx=\left(\frac{dx}{dt}\right)dt$, the formula becomes\\ \begin{equation} A_{n,n+1}=\frac{1}{2}\int_n^{n+1} (x \dot{y}-y \dot{x})dt. \label{greensthmareaformula}\\ \end{equation} Since we know $x(t)$ and $y(t)$ as components of the Binet formula for real argument, we substitute them into equation (\ref{greensthmareaformula}) and, by evaluating the integral, obtain the area formula for the segments of the B.F.O.C. We put the minus sign in front of the area formula so that it agrees with the signs of the calculated area segments:\\ \begin{equation} A_{n,n+1}=-\frac{1}{10} \left[\frac{4(-1)^nln(\varphi)}{\pi}-\frac{\pi}{2ln(\varphi)}(F_{2n}-\frac{1}{\varphi}F_{2n+1})\right]. 
\\ \end{equation} From this formula we obtain an interesting result for the B.F.O.C. when we look at the limiting value of the segment area at infinity:\\ \begin{align} \nonumber \lim_{n{\to \infty}} A_{n,n+1}&=\lim_{n{\to \infty}} -\frac{1}{10}\left[\frac{4(-1)^nln(\varphi)}{\pi}-\frac{\pi}{2ln(\varphi)} \varphi'^{2n+1}\right] \nonumber \\ &=\lim_{n{\to \infty}} -\frac{1}{10}\left[\frac{4(-1)^nln(\varphi)}{\pi}-\frac{\pi}{2ln(\varphi)}\left(-\frac{1}{\varphi}\right)^{2n+1}\right] \nonumber \\ &=\lim_{n{\to \infty}} -\frac{1}{10}\left[\frac{4(-1)^nln(\varphi)}{\pi}+\frac{\pi}{2ln(\varphi)}\frac{1}{\varphi^{2n+1}}\right] \nonumber \\ \nonumber \end{align} Thus, when $n$ goes to infinity we find that the sequence of segment areas has a finite limit:\\ \begin{equation} A_\infty=\lim_{n{\to \infty}} |A_{n,n+1}|=\left|-\frac{1}{10}\frac{4(-1)^nln(\varphi)}{\pi}\right|=\frac{2}{5\pi}\,ln(\varphi). \label{area} \end{equation}\\ Since $A_\infty=\frac{2}{5\pi}\,ln(\varphi)$, other identities can also be written:\\ $$\pi A_\infty=\frac{2}{5}\,ln(\varphi)\quad \mbox{and} \quad \varphi=e^{\frac{5\pi}{2}A_\infty}.$$\\ At infinity the area of the segments approaches the value $\frac{2}{5}\frac{ln(\varphi)}{\pi}\approx 0.06\;.$\\ It is remarkable that result (\ref{area}) involves three fundamental constants of mathematics: $\varphi$, $\pi$ and $e$. As we can see from the B.F.O.C., area segments starting at an even number have negative sign, and area segments starting at an odd number have positive sign. According to this observation we formulate our next theorem.\\ \textbf{THEOREM 2:}\quad$A_{n,n+1}<0$ if $n=2k$ and $A_{n,n+1}>0$ if $n=2k+1$, where $k=1,2,3,...$\\ For the proof see Appendix 7.2.\\ Looking at the sign of the sum of two consecutive area segments under the B.F.O.C., we obtain an interesting observation.\\ \textbf{THEOREM 3:}\quad $A_{n,n+1}+A_{n+1,n+2}<0$ for both $n=2k$ and $n=2k+1$, $n=2,3,4,..$\\ For the proof see Appendix 7.2. 
Recall the segment area formula for the Binet formula of real argument:\\ $$A_{n,n+1}=-\frac{1}{10} \left[\frac{4(-1)^nln(\varphi)}{\pi}-\frac{\pi}{2ln(\varphi)}(F_{2n}-\frac{1}{\varphi}F_{2n+1})\right].$$\\ Similarly we can write\\ $$A_{n-1,n}=-\frac{1}{10} \left[\frac{4(-1)^{n-1}ln(\varphi)}{\pi}-\frac{\pi}{2ln(\varphi)}(F_{2n-2}-\frac{1}{\varphi}F_{2n-1})\right].$$\\ Here we define $\lambda_n$ as the ratio of the areas,\\ $$\lambda_n=\left|\frac{A_{n,n+1}}{A_{n-1,n}}\right|=\left|\frac{8(ln(\varphi))^2(-1)^n-\pi^2(F_{2n}-\frac{1}{\varphi}F_{2n+1})}{-8(ln(\varphi))^2(-1)^n-\pi^2(F_{2n-2}-\frac{1}{\varphi}F_{2n-1})}\right|,$$\\ and after some calculations (see Appendix 7.3) we obtain\\ $$\;\lambda_n=\left|\frac{A_{n,n+1}}{A_{n-1,n}}\right|=\frac{1}{\varphi^2}\frac{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{2n}\varphi+(-1)^n|}{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{2n}\frac{1}{\varphi}-(-1)^n|}.$$ If $n=2k$,\\ $$\lambda_{2k}=\left|\frac{A_{2k,2k+1}}{A_{2k-1,2k}}\right|=\frac{1}{\varphi^2}\frac{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k+1}+1|}{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k-1}-1|}=\frac{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k-1}+\frac{1}{\varphi^2}|}{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k-1}-1|}>1.$$\\ If $n=2k+1$,\\ $$\lambda_{2k+1}=\left|\frac{A_{2k+1,2k+2}}{A_{2k,2k+1}}\right|=\frac{1}{\varphi^2}\frac{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k+3}-1|}{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k+1}+1|}=\frac{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k+1}-\frac{1}{\varphi^2}|}{|\frac{8(ln(\varphi))^2}{\pi^2}\varphi^{4k+1}+1|}<1.$$\\ Thus Theorem 3, about the sign of the sum of two consecutive segments, is proved in another way.\\ Now let us look at the limit:\\ $$\lim_{n{\to \infty}} \lambda_n=\frac{1}{\varphi^2}\frac{|\varphi|}{|\frac{1}{\varphi}|}=\frac{1}{\varphi^2}\varphi^2=1.$$\\ From this calculation we can conclude that at infinity the areas of consecutive segments become equal. 
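These limits can be sketched numerically (a floating-point check; `A(n)` below implements the closed-form segment area stated above):

```python
import math

PHI = (1 + math.sqrt(5)) / 2
LN = math.log(PHI)

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def A(n):
    """Closed-form segment area A_{n,n+1} of the Binet-Fibonacci curve."""
    return -0.1 * (4 * (-1) ** n * LN / math.pi
                   - math.pi / (2 * LN) * (fib(2 * n) - fib(2 * n + 1) / PHI))

A_inf = 2 / (5 * math.pi) * LN  # predicted limit of |A_{n,n+1}|
assert abs(abs(A(20)) - A_inf) < 1e-6
lam = abs(A(20) / A(19))        # ratio of consecutive areas tends to 1
assert abs(lam - 1) < 1e-3
```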
In fact, we found above that at infinity the segment area under the Binet-Fibonacci Oscillating curve equals $\frac{2}{5\pi}\,ln(\varphi)$, so it is natural that the ratio of consecutive segment areas tends to one.\\ Now we calculate the curvature of the Binet-Fibonacci Oscillating curve, using the formula\\ $$\kappa(t)=\frac{|\dot{r}\times\ddot{r}|}{|\dot{r}|^3},$$\\ where $$\vec{r}(t)=F_{t}=(x(t),y(t))=\frac{1}{\varphi+\varphi^{-1}}\left(e^{t\ln(\varphi)}-\cos(\pi t)e^{-t\ln(\varphi)},-\sin(\pi t)e^{-t\ln(\varphi)}\right).$$\\ Then, by substituting the vector $\vec{r}$, we find how much the Binet-Fibonacci Oscillating curve is curved at any given point $t$:\\ \begin{equation} \kappa(t)=\frac{3\pi ln^{2}\varphi \cos(\pi t)+\sin(\pi t)(\pi^{2}ln\varphi-2ln^{3}\varphi)+e^{-2tln\varphi}(\pi(\pi^2+ln^2\varphi))}{5\left[\frac{1}{5}\left(ln^{2}\varphi(e^{2tln\varphi}+e^{-2tln\varphi})+2ln\varphi\; \pi\sin(\pi t)+2ln^{2}\varphi\;\cos(\pi t)+\pi^{2}e^{-2tln\varphi}\right)\right]^{3/2}} \nonumber \\ \end{equation}\\ By using this result we find that\\ \begin{equation} \lim_{n{\to \infty}} \kappa(n)=0. \nonumber \\ \end{equation} This means that for large $n$ the curvature at the Fibonacci number points tends to zero; the curve behaves like a line there.\\ Another remarkable result is found when we look at the ratio of curvatures at consecutive Fibonacci numbers:\\ $$\lim_{n{\to \infty}} \frac{\kappa(n+1)}{\kappa(n)}=-\frac{1}{\varphi^3}.$$\\ Here the ``-'' sign comes from the signed curvature of the curve.\\ By using this result we can also calculate\\ $$\lim_{n{\to \infty}} \frac{\kappa(n+2)}{\kappa(n)}=\lim_{n{\to \infty}} \frac{\kappa(n+2)}{\kappa(n+1)}\cdot 
\frac{\kappa(n+1)}{\kappa(n)}=\left(-\frac{1}{\varphi^3}\right)\left(-\frac{1}{\varphi^3}\right)=\frac{1}{\varphi^6}.$$ We can generalize this result as $$\lim_{n{\to \infty}} \frac{\kappa(n+k)}{\kappa(n)}=\left(-\frac{1}{\varphi^3}\right)^k.$$ \newpage \subsection{Fibonacci Spirals} \quad Up to now we have considered only positive real numbers $t$ as the argument in the Binet formula for $F_t$. Now we look at what happens when $t$ runs over the negative real numbers, $-\infty<t<0$.\\ In equation (\ref{relationwith negative index}) we found the relation between $F_n$ and $F_{-n}$,\\ $$F_{-n}=(-1)^{n+1}F_{n}.$$\\ \phantom{ab}From this equality we can see that for even $n$ the values $F_{-n}$ are negative Fibonacci numbers, while for odd $n$ they are positive Fibonacci numbers with odd index.\\ \phantom{ab}In contrast to positive $t$, where the function $F_t$ oscillates along the positive half-axis with $Re(F_t)>0$, for negative $t$ we get a spiral crossing the real line from both sides. We call this curve the ``Binet-Fibonacci Spiral curve'', since it intersects the real axis at Fibonacci numbers. Its shape in the complex plane is obtained by a parametric plot in Mathematica, shown in Figure 2. 
\begin{center}\vspace{0.25cm} \includegraphics[width=0.7\linewidth]{spiral} \captionof{figure}{Binet-Fibonacci Spiral Curve} \end{center}\vspace{0.25cm} \begin{center}\vspace{0.25cm} \includegraphics[width=0.90\linewidth]{nautilus1} \captionof{figure}{Nautilus 1} \end{center}\vspace{0.25cm} \begin{center}\vspace{0.25cm} \includegraphics[width=0.40\linewidth]{spiral} \hspace{1.5cm} \includegraphics[width=0.40\linewidth]{nautilus1} \captionof{figure}{Comparison of Binet-Fibonacci Spiral Curve with Nautilus 1} \end{center}\vspace{0.25cm} \begin{center}\vspace{0.25cm} \includegraphics[width=0.90\linewidth]{nautilus2} \captionof{figure}{Nautilus 2} \end{center}\vspace{0.25cm} \begin{center}\vspace{0.25cm} \includegraphics[width=0.40\linewidth]{spiral} \hspace{1.5cm} \includegraphics[width=0.40\linewidth]{nautilus2} \captionof{figure}{Comparison of Binet-Fibonacci Spiral Curve with Nautilus 2} \end{center}\vspace{0.25cm} To see that there are no intersection points with the x-axis other than the Fibonacci numbers, let us look at $Im(F_t)$.\\ Setting $Im(F_t)=0$, we get the intersection points with the x-axis:\\ $$Im(F_t)=\frac{1}{\varphi+\varphi^{-1}}\left(-\sin(\pi t)e^{-t\ln(\varphi)}\right)=0.$$\\ Since $e^{-t\ln(\varphi)}\neq0$,\\ $$\sin(\pi t)=0 \Rightarrow \pi t=\pi k \Rightarrow\;\; t=k, \;\;\mbox{where}\;\; k=0,\pm1,\pm2,\pm3,... $$\\ \phantom{abc}This means the curve intersects the real axis exactly at the points $F_k$, $k=0,\pm1,\pm2,\pm3,...$\,; there are no other intersection points of the Fibonacci spiral with the real axis.\\ Using the result on the curvature of the Binet-Fibonacci Oscillating curve, we can expect that $\kappa(t)\rightarrow0$ as $t\rightarrow-\infty$. Intuitively, due to its huge radius, the Binet-Fibonacci Spiral behaves locally like a line when $t\rightarrow-\infty$. 
\\ In addition, we look at the curvature ratio as $t{\to -\infty}$:\\ $$\lim_{t{\to -\infty}} \frac{\kappa(t+1)}{\kappa(t)}=\frac{1}{\varphi}.$$\\ Furthermore, we can generalize this result:\\ $$\lim_{t{\to -\infty}} \frac{\kappa(t+k)}{\kappa(t)}=\lim_{t{\to -\infty}} \frac{\kappa(t+k)}{\kappa(t+(k-1))}\;.\;.\;.\;\frac{\kappa(t+1)}{\kappa(t)}=\left(\frac{1}{\varphi}\right)^k.$$\\ Looking along the Binet-Fibonacci spiral curve, the ratio of successive crossings on the same side of the real axis grows as\\ $$\lim_{n{\to -\infty}} \frac{F_{n}}{F_{n+2}}=\lim_{n{\to -\infty}} \frac{F_{n}}{F_{n+1}}\; \frac{F_{n+1}}{F_{n+2}}=\varphi^2=\varphi+1\approx 2.618\;.$$\\ We also approach this value by measuring along the x and y axes of our own Nautilus necklace (Figures 3 and 5).\\ When we compare our Fibonacci spiral with the Nautilus's natural spiral, we see that they have quite similar behavior (Figures 4 and 6).\\ We also notice that as $t\rightarrow-\infty$ our Binet-Fibonacci spiral curve behaves as a logarithmic spiral. In a logarithmic spiral the angle between the vector $\vec{r}$ and $\frac{d\vec{r}}{dt}$ is constant at every point; we find that for the Binet-Fibonacci Spiral curve this angle becomes constant asymptotically at infinity.\\ Finally, in Figure 7 we show the curve for both positive and negative real values of the argument of the Binet formula, in the range $t\in[-4,4]$.\\ \begin{center}\vspace{0.25cm} \includegraphics[width=0.7\linewidth]{spiralandbfcurve} \captionof{figure}{Binet-Fibonacci Curve} \end{center}\vspace{0.25cm} \section{Conclusions} In the present paper we have studied several generalizations of Fibonacci sequences: first, generalized Fibonacci sequences with arbitrary initial values; second, sums of two or more Fibonacci subsequences and Fibonacci polynomials with arbitrary bases. 
In all these cases we obtain the Binet representation and the asymptotics of the ratio of consecutive terms of the sequence. For Fibonacci numbers with congruent indices we derived a general formula in terms of generalized Fibonacci polynomials and Lucas numbers. By extending the Binet formula to arbitrary real arguments we constructed the Binet-Fibonacci curve in the complex plane. For positive arguments the curve oscillates with decreasing amplitude, while for negative arguments it becomes the Binet-Fibonacci spiral. For $t\ll0$, the Binet-Fibonacci spiral curve becomes a logarithmic spiral. Comparison with the nautilus curve shows quite similar behavior. Areas under the curve and curvature characteristics are calculated, as well as the asymptotics of relative characteristics. \section{Appendix} \subsection{Fibonacci Subsequence Theorem} \textbf{THEOREM 1:}\quad The subsequence ${G^{(k;\alpha)}_n} \equiv F_{kn+\alpha}$ satisfies the recursion formula \begin{equation} G^{(k;\alpha)}_{n+1}=L_k{G^{(k;\alpha)}_n}+(-1)^{k-1}{G^{(k;\alpha)}_{n-1}}, \end{equation} where $L_k$ are Lucas numbers. Therefore, it is given by generalized Fibonacci polynomials with integer arguments as \begin{equation} G^{(k;\alpha)}_{n}=F_{kn+\alpha}=G_n{(L_k,(-1)^{k-1})}.
\end{equation} \\ \textbf{PROOF 1:}\quad\\ To prove the equality ${G^{(k;\alpha)}_{n+1}}=L_k{G^{(k;\alpha)}_n}+(-1)^{k-1}{G^{(k;\alpha)}_{n-1}}$, we start from the left-hand side and transform it into the right-hand side.\\ Since ${G^{(k;\alpha)}_{n+1}}=F_{k(n+1)+\alpha}=F_{kn+k+\alpha}$, we compute\\ \begin{align} F_{kn+k+\alpha}&=\frac{1}{\varphi-\varphi'}\left[\varphi^{kn+k+\alpha}-\varphi'^{kn+k+\alpha}\right] \nonumber \\ &=\frac{1}{\varphi-\varphi'}\left[\varphi^{kn+\alpha}\varphi^k-\varphi'^{kn+\alpha}\varphi'^k \right] \nonumber \\ &=\frac{1}{\varphi-\varphi'}\left[\varphi^{kn+\alpha}\varphi^k+(-\varphi'^{kn+\alpha}\varphi^k+\varphi'^{kn+\alpha}\varphi^k)-\varphi'^{kn+\alpha}\varphi'^k\right]\nonumber\\ &=\frac{1}{\varphi-\varphi'}\left[(\varphi^{kn+\alpha}-\varphi'^{kn+\alpha})\varphi^k+\varphi'^{kn+\alpha}\varphi^k-\varphi'^{kn+\alpha}\varphi'^k\right]\nonumber\\ &=F_{kn+\alpha}\varphi^k+\frac{1}{\varphi-\varphi'}\left[\varphi'^{kn+\alpha}\varphi^k-\varphi'^{kn+\alpha}\varphi'^k\right] \nonumber\\ &=F_{kn+\alpha}(\varphi^k+(-\varphi'^k+\varphi'^k))+\frac{1}{\varphi-\varphi'}\left[\varphi'^{kn+\alpha}\varphi^k-\varphi'^{kn+\alpha}\varphi'^k\right]\nonumber\\ &=F_{kn+\alpha}(\varphi^k+\varphi'^k)-F_{kn+\alpha}\varphi'^k+\frac{1}{\varphi-\varphi'}\left[\varphi'^{kn+\alpha}\varphi^k-\varphi'^{kn+\alpha}\varphi'^k\right]\nonumber\\ &=F_{kn+\alpha}L_k+\frac{1}{\varphi-\varphi'}\left[\varphi'^{kn+\alpha}\varphi'^k-\varphi^{kn+\alpha}\varphi'^k+\varphi'^{kn+\alpha}\varphi^k-\varphi'^{kn+\alpha}\varphi'^k\right]\nonumber\\ &=F_{kn+\alpha}L_k+\frac{1}{\varphi-\varphi'}\left[\varphi'^{kn+\alpha}\varphi^k-\varphi^{kn+\alpha}\varphi'^k\right]\nonumber\\ &=F_{kn+\alpha}L_k+\frac{\varphi^k\varphi'^k}{\varphi-\varphi'}\left[\varphi'^{kn-k+\alpha}-\varphi^{kn-k+\alpha}\right]\nonumber\\ \nonumber \end{align} \begin{align}
&=F_{kn+\alpha}L_k-\frac{\varphi^k\varphi'^k}{\varphi-\varphi'}\left[\varphi^{kn-k+\alpha}-\varphi'^{kn-k+\alpha}\right]\nonumber\\ &=F_{kn+\alpha}L_k-\frac{(\varphi\varphi')^k}{\varphi-\varphi'}\left[\varphi^{kn-k+\alpha}-\varphi'^{kn-k+\alpha}\right] \;\; \mbox{since} \;\; (\varphi\varphi')^k=(-1)^k,\nonumber\\ &=F_{kn+\alpha}L_k-(-1)^k\left[\frac{\varphi^{kn-k+\alpha}-\varphi'^{kn-k+\alpha}}{{\varphi-\varphi'}}\right]\nonumber\\ &=F_{kn+\alpha}L_k+(-1)^{k+1}F_{kn-k+\alpha}\;\; \mbox{and since}\;\; (-1)^{k+1}=(-1)^{k-1}, \nonumber\\ &=F_{kn+\alpha}L_k+(-1)^{k-1}F_{kn-k+\alpha}\nonumber\\ \nonumber \end{align} Therefore, $F_{kn+k+\alpha}=F_{kn+\alpha}L_k+(-1)^{k-1}F_{kn-k+\alpha}$ is obtained. Equivalently, in our notation,\\ $${G^{(k;\alpha)}_{n+1}}=L_k{G^{(k;\alpha)}_n}+(-1)^{k-1}{G^{(k;\alpha)}_{n-1}}.$$\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\blacksquare$\\ \subsection{Theorems on Alternating Sign of Areas} \textbf{THEOREM 2:}\quad $A_{n,n+1}<0$ if $n=2k$ and $A_{n,n+1}>0$ if $n=2k+1$, where $k=1,2,3,\ldots$\\ \textbf{PROOF 2:}\quad Firstly, if $n=2k$, then we have to show that\\ $$A_{2k,2k+1}=-\frac{1}{10}\left[\frac{4\ln(\varphi)}{\pi}-\frac{\pi}{2\ln(\varphi)}(F_{4k}-\frac{1}{\varphi}F_{4k+1})\right]<0.$$ This inequality is valid if\\ $$\frac{4\ln(\varphi)}{\pi}>\frac{\pi}{2\ln(\varphi)}(F_{4k}-\frac{1}{\varphi}F_{4k+1}).$$\\ By induction we have \quad $F_{4k}+\varphi'F_{4k+1}=\left(\varphi'\right)^{4k+1}.$ So we need\\ $$\frac{4\ln(\varphi)}{\pi}>\frac{\pi}{2\ln(\varphi)}(\varphi'^{4k+1}),$$\\ $$\frac{8(\ln(\varphi))^2}{\pi^2}>\left(\varphi'\right)^{4k+1}=\left(\frac{-1}{\varphi}\right)^{4k+1}=\left(\frac{1}{\varphi}\right)^{4k+1}(-1)^{4k+1}=-\left(\frac{1}{\varphi}\right)^{4k+1}.$$\\So, $$\frac{8(\ln(\varphi))^2}{\pi^2}>-\left(\frac{1}{\varphi}\right)^{4k+1},$$\\ which is evident, since the left-hand side is positive.\\ Secondly, if $n=2k+1$, we have to show that\\ $$A_{2k+1,2k+2}=-\frac{1}{10}
\left[-\frac{4\ln(\varphi)}{\pi}-\frac{\pi}{2\ln(\varphi)}(F_{4k+2}-\frac{1}{\varphi}F_{4k+3})\right]>0.$$\\ This holds when\\ $$-\frac{1}{10} \left[-\frac{4\ln(\varphi)}{\pi}-\frac{\pi}{2\ln(\varphi)}\left(\varphi'\right)^{4k+3}\right]>0,$$\\ $$-\frac{1}{10} \left[-\frac{4\ln(\varphi)}{\pi}+\frac{\pi}{2\ln(\varphi)}\left(\frac{1}{\varphi}\right)^{4k+3}\right]>0,$$\\ $$-\frac{1}{10}\left(\frac{\pi}{2\ln(\varphi)}\right)\left[\frac{1}{(\varphi)^{4k+3}}-\frac{8(\ln(\varphi))^2}{\pi^2}\right]>0.$$\\ Our next step is to prove\\ $$\frac{8(\ln(\varphi))^2}{\pi^2}>\frac{1}{\varphi^{4k+3}}.$$\\ In fact, $$\frac{8(\ln(\varphi))^2}{\pi^2}>\frac{1}{\varphi^7}\geq\frac{1}{\varphi^{4k+3}}\quad \mbox{for} \;\;k\geq1,$$\\ so $\frac{8(\ln(\varphi))^2}{\pi^2}>\frac{1}{\varphi^{4k+3}}$ indeed holds.\\ This completes the proof.\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\blacksquare$\\ \textbf{THEOREM 3:}\quad $A_{n,n+1}+A_{n+1,n+2}<0$ for both $n=2k$ and $n=2k+1$, $n=2,3,4,\ldots$\\ \textbf{PROOF 3:}\quad Since \\ $$A_{n,n+1}=-\frac{1}{10} \left[\frac{4(-1)^n\ln(\varphi)}{\pi}-\frac{\pi}{2\ln(\varphi)}(F_{2n}-\frac{1}{\varphi}F_{2n+1})\right],$$\\ we can write the sum of two consecutive area segments:\\ $A_{n,n+1}+A_{n+1,n+2}=$\\ $=-\frac{1}{10}\left[\frac{4(-1)^n\ln(\varphi)}{\pi}+\frac{4(-1)^{n+1}\ln(\varphi)}{\pi}-\frac{\pi}{2\ln(\varphi)}(F_{2n}-\frac{1}{\varphi}F_{2n+1})-\frac{\pi}{2\ln(\varphi)}(F_{2n+2}-\frac{1}{\varphi}F_{2n+3})\right]$\\ $=\frac{1}{10}\left(\frac{\pi}{2\ln(\varphi)}\right)\left[F_{2n}-\frac{1}{\varphi}F_{2n+1}+F_{2n+2}-\frac{1}{\varphi}F_{2n+3}\right]$\\ $=\frac{\pi}{20\ln(\varphi)}\left[F_{2n}+F_{2n+2}-\frac{1}{\varphi}(F_{2n+1}+F_{2n+3})\right]$\\ Since $F_{2n}+F_{2n+2}=L_{2n+1}$ and $F_{2n+1}+F_{2n+3}=L_{2n+2}$, where $L_n$ are Lucas numbers, we get $$A_{n,n+1}+A_{n+1,n+2}=\frac{\pi}{20\ln(\varphi)}\left[L_{2n+1}+\varphi'L_{2n+2}\right].$$\\ Since $L_{n}=\varphi^{n}+\varphi'^{n},$ \begin{align}
L_{2n+1}+\varphi'L_{2n+2}&=\varphi^{2n+1}+\varphi'^{2n+1}+\varphi'(\varphi^{2n+2}+\varphi'^{2n+2})\nonumber \\ &=\varphi^{2n+1}+\varphi'^{2n+1}+\varphi'\varphi\varphi^{2n+1}+\varphi'^{2n+3} \;\;\mbox{since}\,\, \varphi'\varphi=-1 ;\nonumber \\ &=\varphi'^{2n+1}+\varphi'^{2n+3}\nonumber \\ &=\varphi'^{2n+1}(1+\varphi'^2) \;\;\mbox{and since}\,\,\varphi'^2=\varphi'+1\nonumber \\ &=\varphi'^{2n+1}(2+\varphi')\nonumber \\ \nonumber \end{align} Thus, we obtain\\ $L_{2n+1}+\varphi'L_{2n+2}=\varphi'^{2n+1}(2+\varphi')=\left(\frac{-1}{\varphi}\right)^{2n+1}\left(2-\frac{1}{\varphi}\right)=\fbox{$-(\frac{1}{\varphi})^{2n+1}(2-\frac{1}{\varphi})$}$\\ Now, for both\,\, $n=2k$ and $n=2k+1$,\quad $-\left(\frac{1}{\varphi}\right)^{2n+1}\left(2-\frac{1}{\varphi}\right)<0$. Thus,\\ $A_{n,n+1}+A_{n+1,n+2}=\frac{\pi}{20\ln(\varphi)}\left[L_{2n+1}+\varphi'L_{2n+2}\right]<0.$ \\ This completes the proof.\\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\blacksquare$\\ \subsection{Relative Area Sequence} Here, we define $\lambda_n$ as the ratio of areas:\\ $$\lambda_n=\left|\frac{A_{n,n+1}}{A_{n-1,n}}\right|=\left|\frac{8(\ln(\varphi))^2(-1)^n-\pi^2(F_{2n}-\frac{1}{\varphi}F_{2n+1})}{-8(\ln(\varphi))^2(-1)^n-\pi^2(F_{2n-2}-\frac{1}{\varphi}F_{2n-1})}\right|$$\\ $$=\frac{|(F_{2n}-\frac{1}{\varphi}F_{2n+1})-\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}{|(F_{2n-2}-\frac{1}{\varphi}F_{2n-1})+\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}$$\\ $$=\frac{|(\varphi')^{2n+1}-\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}{|(\varphi')^{2n-1}+\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}$$\\ $$=\frac{|(-1)^{2n+1}\frac{1}{\varphi^{2n+1}}-\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}{|(-1)^{2n-1}\frac{1}{\varphi^{2n-1}}+\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}$$\\ $$=\frac{|-\frac{1}{\varphi^{2n+1}}-\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}{|-\frac{1}{\varphi^{2n-1}}+\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}$$\\
$$=\frac{|\frac{1}{\varphi^{2n+1}}+\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}{|\frac{1}{\varphi^{2n-1}}-\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n|}$$\\ $$=\frac{\varphi^{2n-1}}{\varphi^{2n+1}}\frac{|1+\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n\varphi^{2n+1}|}{|1-\frac{8(\ln(\varphi))^2}{\pi^2}(-1)^n\varphi^{2n-1}|}$$\\ $$=\frac{1}{\varphi^2}\frac{|\frac{8(\ln(\varphi))^2}{\pi^2}\varphi^{2n}\varphi+(-1)^n|}{|\frac{8(\ln(\varphi))^2}{\pi^2}\frac{\varphi^{2n}}{\varphi}-(-1)^n|}$$\\ $$\mbox{Finally,}\;\lambda_n=\left|\frac{A_{n,n+1}}{A_{n-1,n}}\right|=\frac{1}{\varphi^2}\frac{|\frac{8(\ln(\varphi))^2}{\pi^2}\varphi^{2n}\varphi+(-1)^n|}{|\frac{8(\ln(\varphi))^2}{\pi^2}\varphi^{2n}\frac{1}{\varphi}-(-1)^n|}\;\;\mbox{is obtained.} $$\\
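Theorem 1 can also be checked numerically. The sketch below (illustrative, not from the paper) verifies the recursion $F_{k(n+1)+\alpha}=L_kF_{kn+\alpha}+(-1)^{k-1}F_{k(n-1)+\alpha}$ for small $k$, $\alpha$, $n$, using the standard identity $L_k=F_{k-1}+F_{k+1}$:

```python
def fib(n: int) -> int:
    # Iterative Fibonacci numbers, F_0 = 0, F_1 = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(k: int) -> int:
    # Lucas numbers via the identity L_k = F_{k-1} + F_{k+1}.
    return fib(k - 1) + fib(k + 1)

# Verify G_{n+1} = L_k G_n + (-1)^{k-1} G_{n-1} with G_n = F_{kn+alpha}.
for k in range(1, 6):
    for alpha in range(4):
        for n in range(1, 8):
            lhs = fib(k * (n + 1) + alpha)
            rhs = lucas(k) * fib(k * n + alpha) + (-1) ** (k - 1) * fib(k * (n - 1) + alpha)
            assert lhs == rhs
```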
\section{Introduction} To address the absence of explicit sentiment information for some aspects in the context, recent studies on ABSC \cite{pontiki2014semeval} have begun to focus on parsing the sentiment dependencies among aspects. For example, in \texttt{The laptop's storage is large, so does the battery capacity.}, the customer praises both the storage and the battery capacity, while no direct sentiment description of the battery capacity is available in the review. The methods capable of dependency learning can be roughly categorized into topological-structure-based dependency parsing methods \cite{zhang2019aspect,huang2019syntax} and syntax-tree-distance-based methods \cite{phan2020modelling}. Meanwhile, some works adopt hybrid dependency modeling strategies to enhance the model's ability to learn sentiment dependencies. However, due to the additional startup time and expensive resource occupation of dependency tree learning, these are not ideal solutions for dependency learning in long texts, especially texts with multiple aspects. Table \ref{tab:resource} shows a brief comparison between the dependency-tree-based models and non-dependency-tree-based models\footnote{The experiments are performed on an RTX 2080 GPU and an AMD R5-3600 CPU with PyTorch $1.9.0$. The original sizes of the Laptop14 and Restaurant14 datasets are $336kb$ and $492kb$, respectively. }. The dependency-tree-based methods generally employ the graph convolutional network (GCN) and attention mechanisms \cite{bahdanau2014neural} to model the sentiment dependency. A variety of attention mechanisms have been proposed in previous research \cite{wang2016attention,ma2017interactive}, e.g., multi-grained attention \cite{zhang2019aspect} and multi-head attention \cite{vaswani2017attention}. These works ignore the efficiency drawback of syntax tree handling. With the development of pre-trained models (PTMs), researchers began to propose methods based on PTMs (e.g., BERT).
Those PTM-based methods achieve promising performance, indicating that PTMs may learn the potential sentiment dependencies. Scholars have recognized that the sentiment polarity of the target aspect is highly related to its local context, and considerable improvements have been obtained by integrated modeling of the local context and the global context \cite{yang2021multi,phan2020modelling}. Moreover, the local context feature can be easily adapted to enhance various models. \begin{table}[htbp] \centering \caption{The average resource occupation of popular ABSC models. ``P.T.'', ``T.T.'' and ``A.S.'' indicate the data processing time, training time in 10 epochs and additional storage requirement. $^*$ indicates the non-dependency-based models, and ``$^\dagger$'' indicates our models.} \resizebox{0.98\columnwidth}{14mm}{ \begin{tabular}{ccccccc} \toprule \multirow{2}[4]{*}{Models} & \multicolumn{3}{c}{Laptop14} & \multicolumn{3}{c}{Restaurant14} \\ \cmidrule{2-7} & P.T. (sec) & T.T. (sec) & A.S. (kb) & P.T. (sec) & T.T. (sec) & A.S. (kb) \\ \midrule BERT-BASE $^*$& 1.62 & 221.24 & 0 & 3.17 & 351.52 & 0 \\ LCF-BERT $^*$& 2.89 & 612.72 & 0 & 3.81 & 1513.62 & 0 \\ ASGCN-BERT & 13.29 & 273.12 & 7054 & 19.52 & 413.86 & 9457 \\ RGAT-BERT & 35421.51 & 212.41 & 157444 & 48594.46 & 335.56 & 188340 \\ LSA-T $^\dagger$ & 3.16 & 233.56 & 0 & 4.32 & 391.83 & 0 \\ LSA-S $^\dagger$ & 20.56 & 259.52 & 0 & 30.23 & 414.25 & 0 \\ \bottomrule \end{tabular}% } \label{tab:resource}% \end{table}% Our study shows that sentiment dependency usually exists within a sentiment cluster, which implies the possibility of fast modeling of sentiment dependency. We exploit this finding by introducing sentiment patterns (SP) to improve ABSC. Meanwhile, we propose a sentiment dependency learning framework based on the local sentiment aggregating (LSA) mechanism. The LSA handles the sentiment dependency within the aggregation window (AW), avoiding the involvement of tree and graph structures.
The AW is composed of aspect-emphasized context features, e.g., the features learned using the BERT-SPC input format. However, we propose the embedding-based local context focus (ELCF) to build the aggregation window due to its effectiveness. There are different implementations of ELCF feature extraction, i.e., the token-distance-based method \cite{zeng2019lcf} and the syntax-distance-based method \cite{phan2020modelling}. Besides, we propose differential weighting for the window components to enhance our models. The experimental results show that LSA is an affordable sentiment dependency learning method according to the performance comparisons. The main contributions\footnote{The code and datasets are available at: \url{https://github.com/yangheng95/PyABSA}} of this paper are as follows: \begin{itemize} \item[1] Novel sentiment patterns for ABSC are introduced in this paper. Thereafter, the efficient LSA mechanism is proposed to learn sentiment dependency within sentiment clusters. \item[2] The embedding-based local context focus is proposed to enhance the LCF mechanism. The ELCF avoids calculating and dumping context weights and embeds the distances between aspect and context words to extract local context features. The ELCF outperforms the LCF and is more stable. \item[3] A differential weighting strategy for AW building is proposed to enhance LSA. The experimental results show that the influence of sentiment information from different directions in the AW is different. \item[4] We study the effectiveness of the simplified AW and conduct ablation experiments to evaluate its performance. Besides, we discuss the pros and cons of LSA and the dependency-tree-based models. \end{itemize} \section{Related Works} Existing popular ABSC methods can be divided into methods based on classic attention mechanisms, dependency-based methods, and methods based on pre-trained models. Some works could be classified into multiple categories.
\subsection{Attention-based Methods} \citet{wang2016attention} propose ATAE-LSTM, which employs the attention mechanism to emphasize the key context containing important sentiment information. In order to explore the relatedness between aspects and context, \citet{ma2017interactive} propose an LSTM-based model built on a novel interactive attention mechanism, which achieves promising performance. Instead of only using classical coarse-grained attention, \citet{fan2018multi} propose fine-grained attention mechanisms paired with coarse-grained attention, and the proposed model significantly outperforms previous models. Other methods based on a variety of attention mechanisms \cite{tang2016effective,chen2017recurrent,tay2018learning,lin2019deep} have also been introduced into the ABSC task. Recently, multi-head self-attention (SA) has been gradually studied in ABSC in place of classic attention and has achieved remarkable performance. \subsection{Dependency-based Methods} Although sentiment dependency and syntactic dependency are not strictly equivalent, researchers have attempted to model sentiment dependency based on the syntactic dependency tree and have made some progress. Early works focus on learning sentiment dependency based on the syntax tree parsed from aspect and context; e.g., \citet{zhang2019aspect} and \citet{sun2019aspect} introduce models based on dependency trees and obtain promising performance. Most ABSC models \cite{huang2019syntax,tang2020dependency,wang2020relational} employ a GCN equipped with an attention mechanism to learn dependency trees, because the GCN is able to model topological relations and obtains promising performance. Meanwhile, some methods \cite{he2018effective,zhang2019syntax} exploit the dependency tree to measure the distance between aspect and context words; these methods avoid modeling the dependency tree directly and are more efficient.
\subsection{PTM-based Methods} Pre-trained models have prompted the development of ABSC. BERT is one of the first pre-trained models to be applied in ABSC, and it achieves exciting performance by fine-tuning without any model modification \cite{xu2019bert}. \citet{rietzler2019adapt} argue that besides fine-tuning, domain adaptation of BERT on the target corpus can bring a great improvement for ABSC. \citet{zhao2020modeling} and \citet{wang2020relational} propose BERT-based models and exploit dependency trees to learn sentiment information. Instead of directly modeling the dependency tree, \citet{phan2020modelling} propose a method that calculates the distance using syntactical information to guide the model in learning the ELCF feature, and it obtains considerable results. \begin{figure*}[ht] \centering \includegraphics[width=1.5\columnwidth]{fig/sample.jpg} \caption{Dependency-based context example parsed from a real restaurant review. The colored words are tokens from aspect terms, and the arrowed lines indicate the dependency relations.} \label{fig:example} \end{figure*} \section{Methodology} Fig. \ref{fig:lsa} shows the main architecture of the LSA framework. The LSA-based model uses BERT to learn the ELCF features of all provided aspects, and the aggregation window travels over the ELCF features of adjacent aspects. We concatenate the global context feature and the aggregation window feature to predict aspect polarity, in case of potential loss of sentiment information outside the aggregation window. \subsection{Preliminaries} Fig. \ref{fig:example} shows an example of aspect-based sentiment classification, where ``atmosphere'', ``food'' and ``service'' carry positive sentiment, while ``dinner'' and ``drink'' carry neutral sentiment. In ABSC, not only may a text contain multiple aspects with different sentiment polarities, but the polarities of the aspects may also be interdependent or even contradictory.
\begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig/SP.png} \caption{Visualization of sentiment cluster and sentiment coherency.} \label{fig:sp} \end{figure} \subsection{Sentiment Pattern} Inspired by existing works \cite{zhang2019aspect,zhao2020modeling}, which showed that the sentiment polarities of aspects are not always independent, we introduce the sentiment pattern (SP), i.e., the underlying empirical principles of the organization of sentiment polarities, to help the model learn sentiment dependency. Although precisely modeling sentiment patterns may be difficult, we can develop our model under the guidance of SP. We propose two sentiment patterns in this paper and support our arguments by experimental analysis. \subsubsection{Sentiment Cluster} \label{SP1} Aspects with similar sentiment polarity tend to cluster, as shown in Fig. \ref{fig:sp}. Since users generally organize their opinions of the aspects before giving the review, it is intuitive that users tend to cluster the aspects according to polarity category, i.e., \textbf{SP1}. Table \ref{tab:cluster} shows the number of sentiment clusters of different sizes in the four public datasets\footnote{Many of the size-$1$ clusters arise because the texts contain only one aspect; it is hard for current dependency learning-based models to extract dependencies from these texts. }.
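Such cluster statistics can be gathered by a simple run-length pass over each review's ordered aspect polarities. The sketch below is an illustrative reading of how clusters may be counted (the function name and labels are ours, not from the paper):

```python
from itertools import groupby

def sentiment_cluster_sizes(polarities):
    """Sizes of the maximal runs of identical polarity labels in the
    ordered aspect list of one review (illustrative helper)."""
    return [len(list(run)) for _, run in groupby(polarities)]

# A review whose ordered aspects are three positive followed by two neutral
# splits into one cluster of size 3 and one of size 2:
print(sentiment_cluster_sizes(["pos", "pos", "pos", "neu", "neu"]))  # [3, 2]
```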
\begin{table}[htbp] \centering \caption{The number of sentiment clusters with different sizes.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}[4]{*}{Datasets} & \multicolumn{6}{c|}{Cluster Size} \\ \cline{2-7} & $1$ & $2$ & $3$ & $4$ & $\geq5$ & $All$ \\ \hline Laptop14 & 1616 & 400 & 112 & 34 & 12 & 2174 \\ \hline Restaurant14 & 2121 & 681 & 258 & 71 & 33 & 3164 \\ \hline Restaurant15 & 1056 & 203 & 56 & 23 & 3 & 1341 \\ \hline Restaurant16 & 1415 & 281 & 74 & 32 & 6 & 1808 \\ \hline \end{tabular}% \label{tab:cluster}% \end{table}% \subsubsection{Sentiment Coherency} \label{SP2} The sentiment polarities of multiple aspects may be subject to sentiment coherency, as shown in Fig. \ref{fig:sp}. When thinking heuristically, users are likely to bring up an aspect that has the same polarity as the previously commented aspects after any pause in thought. The pattern of sentiment coherency can be classified into global and local coherency. We design our model with reference to local sentiment coherency, i.e., \textbf{SP2}. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{fig/lsa-bert.png} \caption{The main framework of local sentiment aggregating mechanism.} \label{fig:lsa} \end{figure} \subsection{Local Sentiment Information Aggregation} The local sentiment aggregating mechanism is based on \textbf{SP1} and \textbf{SP2}. The implementation of LSA depends on aspect-oriented context features, e.g., ELCF features or other aspect-emphasized features. We construct the aggregation window using the ELCF features of adjacent aspects, i.e., the $k$-th ($k=1$ is used in this paper) left- and right-adjacent aspects are adopted to build the aggregation window. The calculation of local context can be classified into the token-distance-based method and the syntax-distance-based method. In this paper, we adopt both methods to extract ELCF features and construct the aggregation window,
i.e., LSA-T (token-distance-based ELCF features) and LSA-S (syntax-distance-based ELCF features), respectively. \subsubsection{Token Distance-based Local Context} The token-distance-based local context is calculated using the distances of token-aspect pairs. Assume $W^c =\left\{w^c_{0}, w^c_{1}, \ldots, w^c_{n}\right\}$ is the token set after tokenization, where $n$ is the length of the tokenized context. The distance $\mathcal{D}_t$ of a token-aspect pair is calculated as follows: \begin{equation} \mathcal{D}_t = \frac{\sum_{i=1}^{m} (p_{i}-p_{t})}{m} \end{equation} where $p_{i}$ $(i \in [1, m])$ is the position of the $i$-th token within the aspect, $p_{t}$ is the position of any context token, and $m$ is the length of the aspect. LSA then uses the embedded distance to learn ELCF features. The LSA-T contains two implementations: LSA with Masking Embedding (ME), i.e., \textbf{LSA-T-ME}, and LSA with Distance Embedding (DE), i.e., \textbf{LSA-T-DE}. The former converts the distance vector into a $\{0, 1\}$ mask vector \cite{zeng2019lcf} to obtain the ELCF feature, while the latter feeds the distance into an embedding layer without conversion. Besides, we determine the local context and assign the local context tags according to $\mathcal{D}_t$: \begin{equation} T_{t}=\left\{\begin{array}{cl} 0, & \mathcal{D}_t > \alpha \\ 1, & other \end{array}\right. \end{equation} where $\alpha$ $(\alpha=3)$ is a fixed threshold delimiting the local context. \subsubsection{Syntax Distance-based Local Context} Although directly learning the syntax structure is inefficient, we can employ the distance calculated from the syntax structure to measure and model the local context. Fig. \ref{fig:example} shows a syntax-based tree parsed from a sample with multiple aspects. The distance $\mathcal{D}_t$ can be calculated as the shortest distance between a token node and the aspect nodes in the syntax-based tree.
Consistent with the token-distance-based calculation, the syntax-structure-based method also calculates the average distance between the aspect tokens and the context token: \begin{equation} \mathcal{D}_t = \frac{\sum_{i=1}^{m} min\_dist(t, t^{aspect}_{i})}{m} \end{equation} where $min\_dist$ indicates the shortest distance between the $i$-th token within the aspect and the context token $t$ from the non-local context. Similar to LSA-T, LSA-S also contains two implementations: LSA with Masking Embedding (ME), i.e., \textbf{LSA-S-ME}, and LSA with Distance Embedding (DE), i.e., \textbf{LSA-S-DE}. \subsubsection{Aggregation Window} We use BERT as the base model to encode the input text. Assume that $H^c$ is the context feature learned by BERT: \begin{equation} H_{l}^{T} = W_{l}^{T}H^{c} \end{equation} \begin{equation} H_{l}^{L} = W_{l}^{L}H^{c} \end{equation} \begin{equation} H_{l}^{R} = W_{l}^{R}H^{c} \end{equation} where $H_{l}^{T}$, $H_{l}^{L}$ and $H_{l}^{R}$ are the ELCF features of the target aspect and of the left- and right-adjacent aspects, respectively. $W_{l}^{T} \in \mathbb{R}^{n \times d_{h}}$, $W_{l}^{L} \in \mathbb{R}^{n \times d_{h}}$ and $W_{l}^{R} \in \mathbb{R}^{n \times d_{h}}$ are the local context weight vectors of the aspects. We apply self-attention to the ELCF feature of each aspect: \begin{equation} H_{SA}^{lsa} = [H_{SA}^{L}, H_{SA}^{T}, H_{SA}^{R}] \end{equation} \begin{equation} H^{lsa} = W^{lsa} H_{SA}^{lsa}+ b^{lsa} \end{equation} $H_{SA}^{L}$, $H_{SA}^{T}$ and $H_{SA}^{R}$ are the ELCF features learned by self-attention. $d_{h}$ is the dimension of the hidden size, and $H_{SA}^{lsa}$ is the window composed of the ELCF features of multiple adjacent aspects. $H^{lsa}$ is the output representation of LSA; $W^{lsa}$ and $b^{lsa}$ are the trainable weight and bias parameters. \subsubsection{Aggregation Window Padding} We need to pad the aggregation window using aspect-based features. Three padding cases are shown in Fig. \ref{fig:padding}.
It is worth noting that padding the sentiment aggregation window does not degrade the model, because the padded components are duplicates of the edge-adjacent aspects. Besides, the padded components carry the same sentiment information, which maintains \textbf{SP1} and \textbf{SP2} while modeling the sentiment clusters. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{fig/window-padding.png} \caption{Window padding for different situations. } \label{fig:padding} \end{figure} \subsubsection{Differential Weighted Aggregation Window} The LSA treats the sentiment information of the adjacent aspects on the left and right sides equally. However, according to \textbf{SP2}, it is natural to expect that the importance of the sentiment information of the left- and right-adjacent aspects is probably different. We therefore propose differential weighting to differentially adjust the contributions of the sentiment information from the left-adjacent (previous) aspect and from the right-adjacent (following) aspect. Assume $\eta$ is the adjustable weight of the ELCF features of the left and right aspects: \begin{equation} H_{att}^{dw\_lsa} = [\eta H_{SA}^{L}, H_{SA}^{T}, (1-\eta) H_{SA}^{R}] \end{equation} where $H_{att}^{dw\_lsa}$ is the ELCF feature learned through the differential weighting strategy. \subsection{Output Layer} To compensate for the loss of context features caused by the ELCF calculation, we combine the global context feature and the feature learned from the local sentiment aggregating to predict sentiment polarities as follows: \begin{equation} O^{fusion} = W^{f}[H^{lsa}, H^{c}] + b^{f} \end{equation} \begin{equation} O^{dense} = W^{d}O_{head}^{fusion} + b^{d} \end{equation} \begin{equation} \hat{y} =\frac{\exp (O^{dense})}{\sum_{1}^{C} \exp (O^{dense})} \end{equation} where $O_{head}^{fusion}$ and $\hat{y}$ are the feature of the first token and the predicted sentiment polarity, respectively.
$C$ indicates the number of polarity categories. $ W^{f} \in \mathbb{R}^{n \times 2d_{h}}$, $b^{f} \in \mathbb{R}^{2d_{h}}$ and $ W^{d} \in \mathbb{R}^{1 \times C}$, $b^{d} \in \mathbb{R}^{C}$ are the trainable weight and bias vectors. \subsection{Model Training} We use BERT-BASE\footnote{Implemented by huggingface, available at: \url{https://github.com/huggingface/transformers}} as the underlying model and optimize our model using Adam. The objective function is the cross-entropy with $L_2$ regularization: \begin{equation} \mathcal{L}=-\sum_{i=1}^{C} y_{i} \log \widehat{y_{i}} + \lambda\|\Theta\|_{2} \end{equation} where $\lambda$ and $\Theta$ are the $L_{2}$ regularization coefficient and the parameter set of the model, respectively. \section{Experiments} \subsection{Datasets and Hyper-parameters} To comprehensively evaluate the performance of the local sentiment aggregating mechanism, we conducted experiments on four datasets\footnote{The processed datasets are available with the code in supplementary materials.} (containing multiple aspects): the Laptop14 and Restaurant14 datasets from SemEval-2014 Task 4 \cite{pontiki2014semeval}, and the Restaurant15 and Restaurant16 datasets from SemEval-2015 Task 12 \cite{pontiki2015semeval} and SemEval-2016 Task 5 \cite{pontiki2016semeval}, respectively.
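Written out, the training objective pairs a softmax cross-entropy with an $L_2$ penalty over the parameters. A minimal sketch follows (the function name and the flattened parameter list are illustrative assumptions, not the paper's implementation):

```python
import math

def objective(logits, target, params, lam=1e-5):
    # Softmax cross-entropy over the C polarity classes (numerically stabilized).
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    ce = -math.log(exps[target] / sum(exps))
    # L2 norm of all trainable parameters, flattened here for simplicity.
    l2 = lam * math.sqrt(sum(w * w for w in params))
    return ce + l2
```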
\begin{table}[htb] \centering \caption{The statistics of the four datasets with multiple aspects.} \resizebox{\columnwidth}{12mm}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}[4]{*}{\textbf{Datasets}} & \multicolumn{2}{c|}{Positive} & \multicolumn{2}{c|}{Negative} & \multicolumn{2}{c|}{Neutral} \\ \cline{2-7} & Train & Test & Train & Test & Train & Test \\ \hline Laptop14 & 994 & 341 & 870 & 128 & 464 & 169 \\ \hline Rest14 & 2164 & 728 & 807 & 196 & 637 & 196 \\ \hline Rest15 & 909 & 326 & 256 & 180 & 36 & 34 \\ \hline Rest16 & 1240 & 468 & 437 & 117 & 69 & 30 \\ \hline \end{tabular}% } \label{tab:datasets}% \end{table}% For fair comparison with other BERT-based state-of-the-art models, we adopt the commonly used hyperparameter settings. The learning rate of the LSA models is 2e-5. The batch size and maximum text length are 16 and 80, respectively. The $L_2$ regularization parameter $\lambda$ is 1e-5, and the local context threshold $\alpha$ is 3 for both LSA-T and LSA-S. Each model is trained for five rounds, and the average performance is reported. \begin{table*}[t] \centering \caption{The overall performance of LSA variants on four datasets. $^\dagger$ and $^\ddagger$ indicate the performance of our implementation and the results retrieved from the published papers, respectively.
The best experimental results are highlighted in \textbf{bold}.} \resizebox{0.625\textwidth}{33mm}{ \begin{tabular}{lcccccccc} \toprule \multicolumn{1}{c}{\multirow{2}[4]{*}{\textbf{Models}}} & \multicolumn{2}{c}{\textbf{Laptop14}} & \multicolumn{2}{c}{\textbf{Rest14}} & \multicolumn{2}{c}{\textbf{Rest15}} & \multicolumn{2}{c}{\textbf{Rest16}} \\ \cmidrule{2-9} & \textit{Acc} & \textit{F1} & \textit{Acc} & \textit{F1} & \textit{Acc} & \textit{F1} & \textit{Acc} & \textit{F1} \\ \midrule BERT-BASE$^\dagger$ & 79.73 & 75.5 & 82.74 & 73.73 & 82.16 & 64.96 & 89.43 & 74.2 \\ BERT-PT$^\ddagger$ & 78.89 & 75.89 & 85.92 & 79.12 &- &- & - & - \\ RGAT-BERT$^\ddagger$ & 80.94 & 78.2 & 86.68 & 80.92 & - & - & - &- \\ SDGCN-BERT$^\ddagger$ & 81.35 & 78.34 & 83.57 & 76.47 &- &- & - &- \\ SK-GCN-BERT$^\ddagger$ & 79.0 & 75.57 & 83.48 & 75.19 & 83.2 & 66.78 & 87.19 & 72.02 \\ DGEDT-BERT$^\ddagger$ & 79.8 & 75.6 & 86.3 & 80.0 & 84.0 & 71.0 & 91.9 &79.0 \\ LCF-BERT-CDM$^\dagger$ & 80.3 & 76.85 & 86.28 & 80.24 & 83.83 & 69.97 & 90.62 & 76.93 \\ LCF-BERT-CDW$^\dagger$ & 79.73 & 76.07 & 86.16 & 80.12 & 83.77 & 69.03 & 91.0 & 77.1 \\ LCFS-BERT-CDM$^\dagger$ & 79.99 & 76.51 & 86.31 & 80.32 & 83.4 & 68.81 & 90.81 & 75.86 \\ LCFS-BERT-CDW$^\dagger$ & 80.25 & 76.72 & 86.43 & 80.84 & 84.07 & 69.67 & 90.35 & 76.28 \\ ASGCN-BERT$^\dagger$ & 79.83 & 75.89 & 84.76 & 77.94 & 84.22 & 72.9 & 91.05 & 77.05 \\ \midrule LSA-T-ME &81.09 &77.31 &86.40 &80.77 &84.69 &71.97 &91.0 &77.44 \\ LSA-T-DE &80.88 &77.27 &86.25 &80.14 &84.69 &71.55 &91.92 &77.50 \\ LSA-S-ME &81.04 &78.16 &86.70 &80.86 &\textbf{85.25} &\textbf{72.22} &91.22 &77.81 \\ LSA-S-DE &\textbf{81.35} &\textbf{78.35} &\textbf{87.14} &\textbf{81.04} &84.81 &72.21 &\textbf{92.20} &\textbf{79.50} \\ \bottomrule \end{tabular}% } \label{tab:main}% \end{table*}% \subsection{Compared Models} We compare the performance of LSA-T and LSA-S with the following state-of-the-art models (most of them are dependency-learning-based models):\\ BERT-BASE
\cite{devlin2019bert} is the baseline of BERT-based models.\\ RGAT-BERT \cite{wang2020relational} is a relational graph attention network based on a refined dependency parse tree.\\ SDGCN-BERT \cite{zhao2020modeling} is a GCN-based model that can capture the sentiment dependencies between aspects.\\ DGEDT-BERT \cite{tang2020dependency} is a dual-transformer based network enhanced by a dependency graph.\\ SK-GCN-BERT \cite{zhou2020sk} is a GCN-based model which exploits syntax and commonsense knowledge to learn sentiment information.\\ LCF-BERT \cite{zeng2019lcf} employs the local context focus mechanism based on token distance to emphasize the local context feature. \\ LCFS-BERT \cite{phan2020modelling} adopts the syntax-based distance to enhance the model in extracting the local context feature.\\ ASGCN-BERT \cite{zhang2019aspect} is a dependency learning model we develop based on the ASGCN. We deploy a pretrained BERT which works as the embedding layer to enhance the ASGCN. \begin{figure}[htbp] \centering \subfloat[Restaurant15 - Accuracy]{ \includegraphics[width=\columnwidth]{fig/res15acc.jpg} }\\ \subfloat[Restaurant15 - F1]{ \includegraphics[width=\columnwidth]{fig/res15f1.jpg} } \caption{Visualization of performance under differential weighting on the Restaurant15 dataset.} \label{fig:dw1} \end{figure} \subsection{Analysis of Overall Performance} Table \ref{tab:main} shows the main experimental results. Overall, the LSA-based models obtain substantial improvements over most of the BERT-based models on four datasets. In particular, the LSA-S-ME achieves better accuracy than LSA-S-DE on the Restaurant15 dataset. The LSA-S and LSA-T variants obtain impressive performance on the Restaurant15 and Restaurant16 datasets. Compared with the BERT-BASE baseline, our models achieve approximately 2\% higher accuracy on all four datasets.
As for comparisons with the GCN-based SDGCN and SK-GCN-BERT, the LSA-based models significantly improve the classification accuracy and F1 on the Restaurant14 dataset. The comparisons with GCN-based models suggest that local sentiment aggregating is competent to handle the sentiment dependency without any GCN architecture. \subsection{Differential Weighting on Aggregation Window} \begin{figure}[htbp] \centering \subfloat[Restaurant16 - Accuracy]{ \includegraphics[width=\columnwidth]{fig/res16acc.jpg} }\\ \subfloat[Restaurant16 - F1]{ \includegraphics[width=\columnwidth]{fig/res16f1.jpg} } \caption{Visualization of performance under differential weighting on the Restaurant16 dataset.} \label{fig:dw2} \end{figure} It is natural to realize that the order of aspects in the text matters while modeling the aggregation window, because users tend to comment on an aspect with the same polarity as the previously commented aspects. We use differential weighting to model this effect. We use $\eta~(\eta \in [0, 1])$ to adjust the contribution of the ELCF features of side aspects. A greater $\eta$ means more contribution from the previous aspect's ELCF feature. The difference between the simplified AW (SAW) and the differential weighting AW (DAW) with $\eta=0$ or $\eta=1$ is the network structure, as the DAW employs a fully connected layer with a $3 \times d_{h}$ input size ($2 \times d_{h}$ in SAW) to learn the window features. Fig \ref{fig:dw1} and Fig \ref{fig:dw2} show the performance of the model under different $\eta$. It is clear that the significance of the aspects on the two sides is different. However, because the datasets are small and contain noisy data, our experiments show different optimal $\eta$ for the LSA variants on different datasets. On the other hand, a fixed hyperparameter $\eta$ can hardly model the significance of the sentiment information of side aspects precisely. We will consider adaptive $\eta$ calculation methods in the future.
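As a concrete illustration of the differential weighting described above, the following sketch builds a DAW input of size $3 \times d_h$ and the SAW counterpart of size $2 \times d_h$. The feature vectors, the random projection, and all variable names are our own assumptions for illustration, not the exact implementation:

```python
import numpy as np

d_h = 768            # hidden size of BERT-base (assumption)
eta = 0.5            # contribution of the previous (left) aspect's ELCF feature

rng = np.random.default_rng(0)
# Hypothetical ELCF feature vectors for the left, target, and right aspects.
f_left, f_target, f_right = (rng.standard_normal(d_h) for _ in range(3))

# Differential-weighting AW (DAW): scale the side ELCF features by eta and
# (1 - eta), then concatenate into a 3 * d_h vector fed to a dense layer.
window_daw = np.concatenate([eta * f_left, f_target, (1.0 - eta) * f_right])
W_daw = rng.standard_normal((d_h, 3 * d_h)) * 0.01   # untrained projection
h_daw = np.tanh(W_daw @ window_daw)                  # window feature, shape (d_h,)

# Simplified AW (SAW): eta fixed at 0 or 1 keeps only one side aspect,
# so the dense layer input shrinks to 2 * d_h.
window_saw = np.concatenate([f_target, f_right])
W_saw = rng.standard_normal((d_h, 2 * d_h)) * 0.01
h_saw = np.tanh(W_saw @ window_saw)
```

The only structural difference between the two variants is the input width of the fully connected layer, matching the $3 \times d_h$ versus $2 \times d_h$ distinction above.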
\subsection{Aggregation Window Decomposition} \begin{table*}[htbp] \centering \caption{The average performance deviation of ablated LSA. ``L'' and ``R'' indicate the local sentiment aggregating constructed using the ELCF feature of the left-adjacent aspect and right-adjacent aspect, respectively.} \resizebox{0.58\textwidth}{22mm}{ \begin{tabular}{cccccccccc} \toprule \multirow{2}[4]{*}{\textbf{Models}} & \multirow{2}[4]{*}{AW} & \multicolumn{2}{c}{\textbf{Laptop14}} & \multicolumn{2}{c}{\textbf{Rest14}} & \multicolumn{2}{c}{\textbf{Rest15}} & \multicolumn{2}{c}{\textbf{Rest16}} \\ \cmidrule{3-10} & & Acc & F1 & Acc & F1 & Acc & F1 & Acc & F1 \\ \midrule \multirow{2}[2]{*}{LSA-T-ME} & L & +0.05 & +0.40 & -0.27 & -0.32 & -0.86 & -1.42 & -0.32 & -1.78 \\ & R & +0.31 & +0.52 & +0.18 & +0.06 & -0.06 & +0.30 & -0.05 & -0.49 \\ \midrule \multirow{2}[2]{*}{LSA-T-DE} & L & +0.26 & +0.59 & -0.63 & -0.30 & +0.99 & +1.83 & -0.92 & -2.10 \\ & R & +0.36 & +0.54 & -0.57 & -0.21 & -0.06 & -0.46 & -0.75 & -1.67 \\ \midrule \multirow{2}[1]{*}{LSA-S-ME} & L & +0.47 & +0.47 & -1.34 & -1.33 & -0.06 & -1.29 & 0 & +0.67 \\ & R & +0.42 & +0.56 & -0.71 & -0.67 & -0.56 & -1.16 & +0.22 & +1.27 \\ \midrule \multirow{2}[1]{*}{LSA-S-DE} & L & -0.21 & -0.45 & -0.03 & -0.11 & 0 & -1.61 & -0.22 & +0.40\\ & R & -0.42 & -0.82 & -0.60 & -0.65 & -0.86 & -2.90 & -0.60 & -0.07 \\ \bottomrule \end{tabular}% } \label{tab:ablations}% \end{table*}% We study the effectiveness of simplified LSA, i.e., only the local context feature of the aspect on the left or right side is adopted to construct LSA. Table \ref{tab:ablations} shows the experimental results. Compared with the full LSA, although the simplified LSA slightly improves the sentiment classification efficiency, it inevitably suffers a loss of performance in most situations. Generally, we observe a noticeable decrease in performance of simplified aggregation window-based models, especially on the Rest14, Rest15, and Rest16 datasets.
However, on the Laptop14 dataset the simplified AW performs better than the full-size AW, except for LSA-S-DE. \subsection{Research Questions} \subsubsection{RQ1: Is an adjacency/graph relation-based approach the only way to learn sentiment dependency between aspects?} Since measuring the importance of sentiment dependency between different aspects is very difficult, we ignore sentiment dependency outside the aggregation window and apply differential weighting to control the contribution of the ELCF features of the aspects on the left and right. The experimental results show that the sentiment information in the aggregation window does have more impact on the targeted aspect's sentiment. Besides, the LSA models avoid learning complex topological structures, e.g., adjacency matrices, and only model the sentiment dependency in the aggregation window composed of the ELCF features of aspects. It is worth thinking about why such a simple framework based on the ELCF features significantly outperforms the BERT/GCN-based models. On the one hand, the experimental results verify the feasibility of sentiment patterns. On the other hand, GCNs have limited ability to extract sophisticated text features compared to pretrained models. Another indication is that, apart from applying PTMs, external knowledge, or switching networks to enhance the model, we may lay more emphasis on the characteristics of the task itself. \subsubsection{RQ2: Are there other ways to build an aggregation window?} We adopt the AW construction method based on ELCF features of aspects, and this initial implementation yields significant performance improvements. However, it is important to note that there may be many alternatives to build the AW. We also tried constructing $[CLS] + Context + [SEP] + Aspect + [SEP]$ inputs to build the sentiment AW, which also achieved promising performance. We are exploring other possibilities and expect LSA to supersede dependency learning approaches based on auxiliary data, e.g., graph-structured data.
\subsubsection{RQ3: What are the differences between convolution and window-based aggregation?} Convolution and AW differ in concept and implementation. The main differences are as follows: \textbf{Modeling target}. Convolution is usually used in the basic network to learn text features, e.g., learning embedded text features. However, the sentiment AW aims at learning aspect-specific context features, e.g., the ELCF feature, and the AW only works within a sentiment cluster. \textbf{Processing granularity}. Convolution is an ideal operation to learn semantic connections between tokens. However, the AW learns the coarse-grained context feature, i.e., the global context feature weighted according to the target aspect. Since the AW contains few components, convolution has no advantage in modeling AWs. \subsubsection{Threats to aggregation window building} The parsing of the syntax tree greatly affects the extraction of ELCF features. We use spaCy to obtain the syntax tree, as in previous works. Although LSA achieves an impressive performance improvement, due to compatibility problems between spaCy and BERT, e.g., different tokenization strategies, a considerable number of samples among the four datasets (including both the train and test sets) are tokenized into different token sets by spaCy and BERT. In that case, there is a non-negligible error rate in calculating aspect-token pair distances and extracting ELCF features. We believe that using a syntax tree induced by a pre-trained model will alleviate this problem and bring a substantial performance improvement. \section{Conclusion} We introduce sentiment patterns, which guide the proposal of the efficient local sentiment aggregating mechanism to learn the sentiment dependency between aspects. Compared with dependency tree-based models, the LSA only exploits the distance information of aspect-token pairs to improve modeling, for the sake of saving computational resources.
Moreover, the LSA-based models outperform the BERT/GCN-based models on four commonly used datasets. We also propose differential weighting to measure the importance of sentiment information of different aspects, which provides a new clue for sentiment dependency modeling. In the future, we plan to work on other window construction methods and propose a self-adaptive differential weighting method to improve the performance of LSA. \section*{Acknowledgment} We thank the anonymous reviewers for helping us improve this work. This research is funded by the National Natural Science Foundation of China, project approval number: 61876067; the Guangdong General Colleges and Universities Special Projects in Key Areas of Artificial Intelligence of China, project number: 2019KZDZX1033. This research is also supported by the Innovation Project of Graduate School of South China Normal University, project number: 2019LKXM038.
\section{METHODOLOGY AND INITIAL CONDITIONS \label{meth}} \subsection{Numerical Methods} For the purpose of comparison, we perform two calculations with identical resolutions and characteristic parameters. The first, which we denote RT, includes radiative transfer and feedback from stellar sources. The second, henceforth NRT, uses an EOS to describe the thermal evolution of the gas. We perform both simulations using the parallel AMR code, ORION. ORION utilizes a conservative second order Godunov scheme to solve the equations of compressible gas dynamics \citep{truelove98, klein99}: \begin{eqnarray} {{\partial \rho} \over{ \partial t}} + \nabla \cdot (\rho {\bf v}) &=& 0, \\ {{\partial (\rho {\bf v})} \over{ \partial t}} + \nabla \cdot (\rho {\bf vv}) &=& -\nabla P - \rho \nabla \phi, \\ {{\partial (\rho e)} \over{ \partial t}} + \nabla \cdot [(\rho e + P){\bf v}] &=& -\rho {\bf v} \cdot \nabla \phi - \kappa_{\rm P} \rho (4\pi B -cE), \end{eqnarray} where $\rho$, $P$, and $\bf v$ are the fluid density, pressure, and velocity, respectively. The total fluid energy density is given by $\rho e= \frac{1}{2} \rho {\bf{v}}^2 + P/(\gamma-1)$, where $\gamma=5/3$ for a monatomic ideal gas\footnotemark. The total radiation energy density is denoted by $E$, and $B$ is the Planck emission function. ORION solves the Poisson equation for the gravitational potential $\phi$ using multi-level elliptic solvers with multi-grid iteration: \begin{equation} {\nabla}^2 \phi = 4 \pi G [ \rho + \sum_n m_n \delta({\bf x}-{\bf x}_n) ], \end{equation} where $m_n$ and ${\bf x}_n$ are the mass and position of the $n$th star.
\footnotetext{Most of the volume of the domain is too cold to excite any of the H$_2$ rotational or vibrational degrees of freedom, and thus the gas acts as if it were monatomic.} ORION solves the non-equilibrium flux-limited diffusion equation using a parabolic solver with multi-grid iteration to determine the radiation energy density \citep{krumholzkmb07}: \begin{equation} {{\partial E} \over{ \partial t}} - \nabla \cdot (\frac{c \lambda }{ \kappa_{\rm R} \rho} \nabla E) = \kappa_{\rm P} \rho (4 \pi B - cE) + \sum_n L_n W({\bf x}-{\bf x}_n) \label{radenergy}, \end{equation} where $\kappa_{\rm R}$ and $\kappa_{\rm P}$ are the Rosseland and Planck dust opacities, and $L_n$ is the luminosity of the $n$th star. $W$ is a weighting function that determines the addition of the stellar luminosity to the grid (see Appendix A for details of the star particle algorithm). The flux-limiter is given by $\lambda = {1\over R} (\coth R - {1\over R})$, where $R = |\nabla E/ (\kappa_{\rm R} \rho E) |$ \citep{levermore81}. We assume that the dust grains and gas are thermally well-coupled, an approximation we discuss further in Section 4.2. We obtain the dust opacities from a linear fit of the \citet{pollack94} dust model, which includes grains composed of silicates, troilite, metallic iron, organics, and H$_2$O ices. For gas temperatures $10 \leq T_g \leq 350$ K, the linear fit is given by: \begin{eqnarray} \kappa_R &=& 0.1 + 4.4 (T_g/350 ) \mbox{ cm$^2$ g$^{-1}$}, \\ \kappa_P &=& 0.3 + 7.0 (T_g/375 ) \mbox{ cm$^2$ g$^{-1}$}. \end{eqnarray} These fits give $\kappa_{\rm R} = 0.23$ cm$^2$ g$^{-1}$ and $\kappa_{\rm P} = 0.49$ cm$^2$ g$^{-1}$ at the minimum simulation temperature, 10 K. Work by \citet{semenov03} explores the effect of dust composition, porosity, and iron content on the Planck and Rosseland average opacities. For the different models, they find a spread of more than an order of magnitude in the opacity at 10 K.
The simplest model, based upon the assumption that dust grains are homogeneous spheres, produces the lowest value for the Rosseland opacity, $\kappa_R \simeq 0.02$ cm$^2$ g$^{-1}$, while the most porous and non-homogeneous grain models produce 10 K opacities as large as $\kappa_R \simeq 0.7$ cm$^2$ g$^{-1}$. For temperatures above 100 K, the different dust models are more converged and the opacities are generally within a factor of 2. As a result, the temperature range from 10 K to 100 K is the most sensitive to dust assumptions. In this range, homogeneous models increase roughly quadratically with temperature, while fluffier grain models increase linearly. Our opacity fits are then close to the mean value of $\kappa_R = 0.16$ cm$^2$ g$^{-1}$ for the \citet{semenov03} models, although this value is more representative of porous and aggregate grains. As a result, our dust model is reasonable for the higher density regions of $n \gtrsim 10^7$ cm$^{-3}$ typical of protostellar cores, but we may overestimate the dust opacity in the lower density cold gas by as much as a factor of 10. In studies of low-mass star formation, it is reasonable to neglect pressure exerted by the radiation field on the dust and gas. This is because the advection of radiation enthalpy is small compared to the rate at which the radiation diffuses through the gas: \begin{equation} {\nabla \cdot {\left(\frac{3-R_2}{2}{\bf v}E\right)}\over{\nabla \cdot \left(\frac{c \lambda}{\kappa_R \rho} \nabla E\right)}} \ll 1, \end{equation} where $R_2 = \lambda + {\lambda}^2 R^2$ is the Eddington factor.
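The limiting behavior of the \citet{levermore81} flux limiter and the quoted 10 K values of the opacity fits above can be verified with a short script (a minimal sketch; the function and variable names are ours):

```python
import math

def flux_limiter(R):
    """Levermore & Pomraning flux limiter, lambda = (1/R)(coth R - 1/R)."""
    return (1.0 / R) * (1.0 / math.tanh(R) - 1.0 / R)

def kappa_R(T):  # Rosseland mean fit, cm^2 g^-1, valid for 10 K <= T <= 350 K
    return 0.1 + 4.4 * (T / 350.0)

def kappa_P(T):  # Planck mean fit, cm^2 g^-1
    return 0.3 + 7.0 * (T / 375.0)

# Optically thick limit (R -> 0): lambda -> 1/3, recovering radiative diffusion.
lam_thick = flux_limiter(1e-4)
# Optically thin limit (R -> infinity): lambda -> 1/R, so the flux is
# limited to the free-streaming value |F| <= cE.
lam_thin = flux_limiter(50.0)
# 10 K values of the linear opacity fits: ~0.23 and ~0.49 cm^2 g^-1.
kR10, kP10 = kappa_R(10.0), kappa_P(10.0)
```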
Without radiative transfer, the energy exchange term in (5) disappears, and we close the system of equations with a barotropic EOS for the gas pressure: \begin{equation} P = \rho c_{\rm s} ^ 2 + \left({{\rho} \over {\rho_{\rm c}}}\right)^ {\gamma} \rho_{\rm c} c_{\rm s}^2, \end{equation} where $c_{\rm s} = ({k_{\rm B} T }/{ \mu})^{1/2}$ is the isothermal sound speed, $\gamma=5/3$, the average molecular weight $\mu =2.33m_{\rm H}$, and the critical density, $\rho_{\rm c}=2\times 10^{-13}$ g cm$^{-3}$. The value of $\mu$ reflects an assumed gas composition of $n_{\rm He} = 0.1n_{\rm H}$. The critical density determines the transition from isothermal to adiabatic regimes, and we adopt a value chosen so that the EOS matches the full angle-dependent 1D radiation-hydrodynamic collapse calculation of \citet{masunaga98} prior to H$_2$ dissociation. We use the Truelove criterion to determine the addition of new AMR grids so that the gas density in the calculations always satisfies: \begin{equation} {\rho} < {\rho_{\rm J}} = {{J^2\pi c_{\rm s}^2}\over{G(\Delta x_l)^2}} \label{jeans}, \end{equation} where $\Delta x_l$ is the cell size on level $l$, and we adopt a Jeans number of $J=0.25$ \citep{truelove97}. In the case with radiative transfer, it is important to adequately resolve spatial gradients in the radiation field. Radiation gradients are primarily associated with collapsing regions hosting a star, which are not necessarily captured by the Jeans gravitational criterion. We find that we adequately resolve the radiation field and avoid effects such as grid imprinting by refining wherever $\nabla E/E > 0.25$. Although the simulation box and gas behavior are periodic, we adopt Marshak boundary conditions for the radiation field. This allows the radiation to escape from the box as it would from a molecular cloud. We insert sink, or star, particles in regions of the flow that have exceeded the Jeans density on the maximum level \citep{krumholz04}.
These particles mark collapsing regions and also represent protostellar objects. In the simulation with radiative transfer, the particles act as radiative sources, and they are endowed with a sub-grid stellar model. We describe the details of this model and its implementation in Appendix B. By construction, stars that approach within eight cells are merged together. Small sink particles, such as those generated by disk fragmentation, tend to accrete little mass and frequently merge with their more substantial neighbors within a few orbital times. \begin{figure*} \epsscale{0.85} \plotone{f1C.eps} \epsscale{0.7} \caption{Log gas column density of the RT (left) and NRT (right) simulations at 0.0, 0.25, 0.5, 0.75, 1.0 $t_{\rm ff}$. The log density weighted gas temperature for the RT is shown in the center. The color scale for the column density ranges from $10^{-1.5}-10^{0.5}$ g cm$^{-2}$ and $10-50$ K for the gas temperature. Animations of the left and right panels, as well as color figures, are included in the online version. \label{isocolumn}} \end{figure*} \subsection{Initial Conditions} We chose a characteristic 3D turbulent Mach number, ${{\mathcal M}}$=6.6, and assume that the cloud is approximately virialized: \begin{equation} { \alpha_{\rm vir}} = {{5 \sigma^2} \over { G M / R}} \simeq 1. \end{equation} The initial box temperature is $T$ = 10 K, length of the box $L$ = 0.65 pc and the average density is $\rho = 4.46 \times 10^{-20}$ g cm$^{-3}$, so that the cloud satisfies the observed linewidth-size relation \citep{solomon87, heyer04}. The total box mass is 185 $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$. To obtain the initial turbulent conditions, we begin without self-gravity and apply velocity perturbations to an initially constant density field using the method described in \citet{maclow99}. 
These perturbations correspond to a Gaussian random field with flat power spectrum in the range $1 \le k \le 2$ where $k=k_{\rm phys} L / 2 \pi$ is the normalized wavenumber. At the end of three cloud crossing times, the turbulence follows a Burgers power spectrum, $P(k) \propto k^{-2}$, as expected for hydrodynamic systems of supersonic shocks. We denote this time $t = 0$. We then turn on gravity and follow the subsequent gravitational collapse for one global freefall time: \begin{equation} {\bar t}_{ff}= \sqrt{ {{3 \pi} \over {32 G \bar{ \rho}}}} = 0.315 \mbox{ Myr}, \end{equation} where $\bar \rho$ is the mean box density. We continue turbulent driving in the simulations, using a constant energy injection rate to ensure that the turbulence does not decay and the cloud maintains approximate energy equipartition. \begin{figure*} \epsscale{1.1} \plottwo{f2a.eps}{f2b.eps} \plottwo{f2c.eps}{f2d.eps} \caption{Histogram of the gas temperatures weighted by volume fraction for RT at 0.0, 0.5, 0.75, and 1.0 $t_{ff}$. \label{temphist}} \end{figure*} The calculations have a $256^3$ base grid with 4 levels of factors of 2 in grid refinement, giving an effective resolution of $4096^3$, where $\Delta x_4$ = 32 AU. In \S 3.2, we describe the results of a high-resolution core study using 7 levels of refinement for an effective resolution of $65,536^3$ and minimum cell size $\Delta x_7$ = 4 AU. Generally, the calculations run on 128-256 CPUs. 
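The quoted initial-condition numbers are mutually consistent, as the following check shows. This is a minimal sketch using standard cgs constants; the virial estimate assumes $\sigma$ is the 1D velocity dispersion, ${\mathcal M} c_{\rm s}/\sqrt{3}$, and the optical depth uses the $\sim 0.2$ cm$^2$ g$^{-1}$ Rosseland opacity discussed in \S 3.1.1:

```python
import math

# cgs constants
G     = 6.674e-8      # cm^3 g^-1 s^-2
k_B   = 1.3807e-16    # erg K^-1
m_H   = 1.6726e-24    # g
pc    = 3.0857e18     # cm
M_sun = 1.989e33      # g
Myr   = 3.156e13      # s

# Initial conditions quoted in the text
T, mu   = 10.0, 2.33 * 1.6726e-24   # K; mean molecular weight in g
L       = 0.65 * pc                 # box length
rho     = 4.46e-20                  # mean density, g cm^-3
mach    = 6.6                       # 3D turbulent Mach number

c_s   = math.sqrt(k_B * T / mu)                        # isothermal sound speed
M_box = rho * L**3 / M_sun                             # total box mass, ~185 M_sun
t_ff  = math.sqrt(3 * math.pi / (32 * G * rho)) / Myr  # global freefall time, ~0.315 Myr

# Mean optical depth through the box at kappa_R ~ 0.2 cm^2 g^-1: ~0.02
tau_L = 0.2 * rho * L

# Virial parameter with sigma the 1D dispersion and R = L/2: ~1
sigma = mach * c_s / math.sqrt(3)
alpha_vir = 5 * sigma**2 * (L / 2) / (G * rho * L**3)
```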
\begin{deluxetable*}{lcccccc} \tablecaption{RT protostar properties at 1 $t_{\rm ff}$ \label{table1}} \tablehead{\colhead{M (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi)} & \colhead{$\dot M$ (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$)\tablenotemark{a}} & \colhead{$\dot M_f$ (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$)\tablenotemark{b}} & \colhead{$\bar{\dot M}$ (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$)\tablenotemark{c}} & \colhead{L (\ifmmode {\rm L}_{\mathord\odot}\else $L_{\mathord\odot}$\fi)} & \colhead{Age (Myr) \tablenotemark{d}}} \startdata 1.52 & $4.2\times 10^{-9}$ & $1.1\times 10^{-7}$ & $8.7\times 10^{-6}$ &7.2 & 0.18 \\ 0.45 & $2.0 \times 10^{-8}$ & $3.9\times 10^{-8}$ & $4.0\times 10^{-6}$ & 0.9& 0.11\\ 0.09 & $1.4\times 10^{-7}$ & $1.3\times 10^{-7}$ & $8.0\times 10^{-7}$ &0.3 & 0.11\\ 2.91 & $8.1\times 10^{-7}$ & $1.7\times 10^{-5}$& $2.9\times 10^{-5}$ & 177.5 & 0.10\\ 0.35 & $5.6\times 10^{-7}$ & $2.0\times 10^{-7}$& $3.5\times 10^{-6}$& 1.3 & 0.10 \\ 2.21 & $6.0\times 10^{-7}$ & $4.2\times 10^{-6}$& $2.4\times 10^{-5}$& 45.2 & 0.09\\ 1.54 & $4.0\times 10^{-6}$ & $7.5\times 10^{-6}$& $1.7\times 10^{-6}$& 74.6 & 0.09 \\ 1.17 & $9.8\times 10^{-6}$ & $1.7\times 10^{-5}$& $1.4\times 10^{-5}$& 69.4 & 0.09 \\ 0.43 & $1.2\times 10^{-6}$ & $2.8\times 10^{-6}$& $6.0\times 10^{-6}$& 8.6 & 0.09\\ 0.48 & $3.2\times 10^{-6}$ & $7.2\times 10^{-6}$& $6.9\times 10^{-6}$ &19.4 & 0.08 \\ 0.65 & $1.6\times 10^{-6}$ & $9.9\times 10^{-6}$& $1.1\times 10^{-5}$& 12.9 & 0.08\\ 0.80 & $5.7\times 10^{-6}$ & $1.7\times 10^{-5}$& $1.5\times 10^{-5}$& 67.6 & 0.06 \\ 0.33 & $2.1\times 10^{-5}$ & $2.2\times 10^{-5}$& $2.3\times 10^{-5}$& 79.1 & 0.02\\ 0.06 & $4.7\times 10^{-6}$ & $5.1\times 10^{-6}$& $7.4\times 10^{-6}$& 3.9 & 0.01\\ 0.01 & $3.0\times 10^{-6}$ & $1.1\times 10^{-5}$& $8.6\times 10^{-6}$& 0.8 & 0.003\\ \enddata \tablenotetext{a} {Instantaneous accretion rate.} 
\tablenotetext{b} {Final accretion rate averaged over the last $\sim$ 2500 yrs.} \tablenotetext{c} {Mean accretion rate averaged over the protostar lifetime.} \tablenotetext{d} {Age calculated from the time of particle formation.} \end{deluxetable*} \section{RESULTS \label{results}} \subsection{Radiative Transfer and Non-Radiative Transfer Comparison} In order to quantify the effects of radiative feedback on low-mass star formation, we compare a simulation including radiative transfer with a non-radiative one using an EOS. The latter simulation is essentially isothermal throughout since the highest density allowed by the Truelove criterion at the fiducial maximum AMR level corresponds to $\rho \simeq 5 \times 10^{-15}$ g cm$^{-3}$. With the adopted EOS, gas of this density is not heated above 11 K. Images of the two simulations at different times are shown in Figure \ref{isocolumn}. Although the simulations use identical forcing patterns applied at the same Mach number, the details of the turbulence differ as the two calculations have slightly different time steps and turbulent decay rates. Both calculations begin at $t=0$ with a centrally condensed region that forms the first stars, an imprint of the large-scale driving. Once gravity is turned on, we continue driving the simulations with the same energy injection rate, yielding 3D Mach numbers of 7.0 and 8.6 at 1 $t_{\rm ff}$ for the NRT and RT calculations, respectively. Because gravitational collapse causes non-turbulent velocity motions, we choose to fix the energy injection rate rather than the total kinetic energy. Thus, the root-mean-squared gas velocity no longer exactly indicates the total turbulent energy, and the Mach number increases above the initial value. In Tables \ref{table1} and \ref{table2}, we list the properties of the stars formed in each calculation at 1 $t_{\rm ff}$.
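The statement that the NRT calculation stays below 11 K can be checked directly: the Truelove criterion (eq. [\ref{jeans}]) caps the density at the fiducial maximum level, $\Delta x_4 = 32$ AU, and the barotropic EOS then fixes the corresponding temperature. A minimal sketch with standard cgs constants:

```python
import math

G   = 6.674e-8      # cm^3 g^-1 s^-2
k_B = 1.3807e-16    # erg K^-1
m_H = 1.6726e-24    # g
AU  = 1.496e13      # cm

T0   = 10.0                 # isothermal temperature, K
mu   = 2.33 * m_H           # mean molecular weight
c_s2 = k_B * T0 / mu        # isothermal sound speed squared
J    = 0.25                 # Jeans number
dx4  = 32 * AU              # cell size on the fiducial maximum level

# Maximum density permitted by the Truelove criterion: ~5e-15 g cm^-3
rho_J = J**2 * math.pi * c_s2 / (G * dx4**2)

# Barotropic EOS temperature at that density; the EOS
# P = rho c_s^2 [1 + (rho/rho_c)^(gamma-1)] implies
# T = T0 [1 + (rho/rho_c)^(gamma-1)], which stays below 11 K here.
rho_c = 2e-13
gamma = 5.0 / 3.0
T_max = T0 * (1 + (rho_J / rho_c)**(gamma - 1))
```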
\subsubsection{Temperature Distribution} At $t=0$, the RT simulation is nearly isothermal and gas temperatures are distributed between 10 and 11 K (Figure \ref{temphist}). Evaluated at the mean box density, the gas is quite optically thin with an average optical depth through the box of $\tau_{\rm L} = L \times \kappa_{\rm R} \rho = 0.65~\mbox{pc} \times 4.46\times10^{-20}$ g cm$^{-3} \times 0.2~\mbox{cm}^2~\mbox{g}^{-1} \sim 0.02$. Since the box is so transparent, the gas cools very efficiently. Small temperature variations arise in the initial state due to the distribution of strong shocks. For reference, gas compressed by a Mach 10 shock at 10 K will undergo net heating of $<$ 0.1 K during a time step. Qualitatively, the change is so small because the radiative cooling time is a factor of $\sim 10^3$ smaller than the time step. Under the influence of gravity, collapsing regions begin to become optically thick, where individual cells at the maximum simulation densities reach optical depths of $\tau \simeq 3$ when $T=100$ K. Figure \ref{temphist} shows the evolution of the gas temperature distribution over a freefall time. There are three processes that result in heating. First, there is the direct contribution from the protostars, which we add as a source term in the radiation energy equation. Second, there is heating due to viscous dissipation, which is given by: \begin{equation} \dot e_{\rm vis} = -(\sigma ' \cdot \nabla) \cdot {\bf v}, \end{equation} where $\sigma '$ is the viscous stress tensor, $\sigma ' = \eta(S -{2 \over 3} I \nabla \cdot v)$ and $S = \nabla v + (\nabla v)^T$ \citep{landau}. We assume that the dynamic viscosity $\eta= \rho |{\bf v}| \Delta x / { \mathcal R}e_g$, where the Reynolds number is ${\mathcal R }e_g \simeq 1$ at the dissipation scale. However, turbulent dissipation occurs over a range of the smallest scales on the domain, where the largest amount of dissipation occurs on the size scale of a cell.
Thus, we expect this formula to be uncertain to within a factor of two. Third, the net heating due to gas compression is given by: \begin{equation} \dot e_{\rm comp} = - P( \nabla \cdot {\bf v} ); \end{equation} the heating is negative (i.e., cooling occurs) in rarefactions. Figure \ref{heating} shows the heating contributions summed over the entire domain. At $t=0$, the only source of heating is turbulent motions. The figure demonstrates that after star formation commences, protostellar output rather than compression is responsible for the majority of the heated gas, and at $1~t_{\rm ff}$ protostellar heating dominates by an order of magnitude relative to viscous dissipation and four orders of magnitude relative to gas compression. Viscous dissipation dominates the heating prior to star formation. After star formation is underway, viscous dissipation occurs in the protostellar disks. In contrast, turbulent shocks then contribute very little to the total. As shown in Figure \ref{temphist}, the amount of heated gas ($T > 12$ K) increases with the number of stellar sources from 0.06\% of the volume for one protostar at 0.5 $t_{\rm ff}$ to $\sim 4$\% at $1~t_{\rm ff}$. The corresponding mass fractions of the heated gas are slightly higher at 0.3\% and 5\%, respectively. Most of this heating is directly related to the protostars, and it comprises a relatively small volume filling fraction. The temperature distribution as a function of distance from the sources is shown in Figure \ref{tempvsr}. As illustrated, heating is local and occurs within $\sim$ 0.05 pc of the protostar. Since the remainder of the cloud remains near 10 K, additional turbulent fragmentation is not affected by pre-existing protostars. However, radiative feedback profoundly influences the evolution of the protostars, accretion disks, and stellar multiplicity as we will show (see \S 3.1.2-3.1.3).
Our temperature profiles are qualitatively similar to those of \citet{masunaga00}, who model 1D protostellar collapse with radiative transfer. During the formation of the low-mass protostar, \citet{masunaga00} also find that heating above 10 K is confined within 0.05 pc of the central source and that significant variation in temperature occurs as a function of density and time. Additional studies using gray flux-limited diffusion \citep{whitehouse06} or approximate radiative transfer methods \citep{stamatellos07, forgan09} find similar heating beyond that expected from a barotropic EOS. Due to temperature variation with both density and time, we find that the gas temperature is poorly represented by a single EOS with characteristic critical density and $\gamma$. Figure \ref{tvsrho} shows the distribution of cell temperatures as a function of cell density. For reference, we also plot our fiducial EOS for the NRT simulation as well as the EOS presented by \citet{larson05}. We find that many cells at lower densities are heated due to close proximity with a source. In fact, for both the EOS described in \S 2, which only includes the heating due to gas compression, and the \citet{larson05} EOS, none of the cells are predicted to heat much above the initial 10 K temperature. Nonetheless, at any given time a representative EOS can be formulated by fitting the mean grid cell temperature binned as a function of density. Figure \ref{tvsrho} shows a least-squares fit of the temperature data for two different times. The magnitude of the error bars is given by the standard deviation of the temperatures in each density bin. Because such an equation fits the average temperature, there is necessarily a large scatter as illustrated by the error bars. The two fits return different effective critical densities and $\gamma$ values. Thus, a single EOS results in a large fraction of cells unavoidably at the wrong temperature.
\begin{deluxetable*}{lccccc} \tablecaption{NRT protostar properties at 1 $t_{\rm ff}$ \label{table2}} \tablehead{\colhead{M (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi)} & \colhead{$\dot M$ (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$)\tablenotemark{a}} & \colhead{$\dot M_f$ (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$)\tablenotemark{b}} & \colhead{$\bar{\dot M}$ (\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$)\tablenotemark{c}} & \colhead{Age (Myr) \tablenotemark{d}}} \startdata 3.92 & $7.2\times 10^{-6}$ & $1.2\times 10^{-5}$& $2.2\times 10^{-5}$& 0.15 \\ 4.77 & $1.6 \times 10^{-6}$ & $2.7\times 10^{-5}$& $2.6\times 10^{-5}$& 0.15\\ 2.91 & $1.0\times 10^{-5}$ & $1.2\times 10^{-5}$& $2.6\times 10^{-5}$& 0.11\\ 4.84 & $2.1\times 10^{-5}$ & $2.3\times 10^{-5}$& $4.4\times 10^{-5}$& 0.11\\ 0.66 & $2.5\times 10^{-7}$ & $2.7\times 10^{-7}$& $7.6\times 10^{-6}$& 0.09 \\ 1.13 & $1.3\times 10^{-5}$ & $1.3\times 10^{-5}$& $2.1\times 10^{-5}$& 0.05\\ 0.66 & $8.9\times 10^{-7}$ & $9.0\times 10^{-7}$& $1.2\times 10^{-5}$& 0.05 \\ 0.55 & $9.3\times 10^{-7}$ & $1.0\times 10^{-6}$& $1.1\times 10^{-5}$& 0.05 \\ 0.71 & $5.9\times 10^{-6}$ & $5.6\times 10^{-6}$& $1.4\times 10^{-5}$& 0.05\\ 1.32 & $1.2\times 10^{-5}$ & $1.4\times 10^{-5}$& $7.8\times 10^{-5}$& 0.02 \\ 0.08 & $2.7\times 10^{-5}$ & $6.6\times 10^{-6}$& $5.9\times 10^{-6}$& 0.02\\ 0.49 & $1.1\times 10^{-5}$ & $1.1\times 10^{-5}$& $3.6\times 10^{-5}$& 0.02 \\ 0.26 & $5.0\times 10^{-6}$ & $1.2\times 10^{-5}$& $2.0\times 10^{-5}$& 0.02\\ 0.04 & $5.8\times 10^{-6}$ & $1.1\times 10^{-5}$& $2.8\times 10^{-6}$& 0.02\\ 0.02 & $2.6\times 10^{-9}$ & $1.1\times 10^{-8}$& $1.2\times 10^{-6}$& 0.02\\ 0.12 & $1.3\times 10^{-6}$ & $1.5\times 10^{-6}$& $9.3\times 10^{-6}$& 0.02\\ 0.04 & $2.7\times 10^{-6}$ & $2.7\times 10^{-6}$& $4.2\times 10^{-6}$& 0.01\\ 0.01 & $8.6\times 10^{-12}$ & $5.5\times 10^{-12}$& $1.8\times 10^{-6}$& 0.01\\ 0.09 
& $6.4\times 10^{-6}$ & $5.1\times 10^{-6}$& $1.3\times 10^{-5}$& 0.01\\ 0.14 & $1.9\times 10^{-5}$ & $1.9\times 10^{-5}$& $2.3\times 10^{-5}$& 0.01\\ 0.02 & $2.7\times 10^{-6}$ & $7.0\times 10^{-6}$& $5.5\times 10^{-6}$& 0.01\\ 0.05 & $2.4\times 10^{-5}$ & $1.5\times 10^{-5}$& $1.6\times 10^{-5}$& 0.01\\ \enddata \tablenotetext{a} {Instantaneous accretion rate.} \tablenotetext{b} {Final accretion rate averaged over the last $\sim$ 2500 yrs.} \tablenotetext{c} {Mean accretion rate averaged over the protostar lifetime.} \tablenotetext{d} {Age calculated from the time of particle formation.} \end{deluxetable*} \begin{figure} \epsscale{1.2} \plotone{f3.eps} \caption{The magnitude of the heating rate due to all stellar sources, viscous dissipation, and gas compression at the times shown in Figure \ref{isocolumn}. \label{heating}} \end{figure} Since accretion luminosity is predominantly emitted at the stellar surface, a low-resolution simulation that is not corrected for the missing source contribution can neglect a large fraction of the heating (e.g., \citealt{bate09b}). Typical pre-main sequence protostellar radii are expected to range from 3-5 $R_{\odot}$ for low-mass stars \citep{palla93, robit06}. The temperature at a distance $r$ from an emitting source of luminosity $L_*$ is given by: \begin{equation} T = \left({L_* \over {4 \pi \sigma_{\rm B} r^2}}\right)^{1/4}, \end{equation} where $\sigma_{\rm B}$ is the Stefan-Boltzmann constant, and the gas distribution is assumed to be spherically symmetric. Then the difference in accretion luminosity for a simulation with minimum resolution of $R_{\rm res} =$ 0.5 AU versus a simulation resolving down to the stellar surface at $R_*=5~R_{\odot}$ is given by: \begin{equation} \Delta L = {{G m \dot m} \over {R_{\rm res}}} \times \left( {{R_{\rm res}}\over{R_*}} - 1\right) \simeq {{G m \dot m} \over {R_{\rm res}}} \times (20) \label{deltal}.
\end{equation} Thus, the actual accretion luminosity at the higher resolution is a factor of $\sim$20 larger! Since we adopt a stellar model to calculate the protostellar radii self-consistently, we include the entire accretion luminosity contribution down to the stellar surface in our simulations. From (\ref{deltal}), the difference in luminosity corresponds to a factor of $(20)^{1/4}$ or $\sim 2$ underestimation of the gas temperature. Nonetheless, this estimate is conservative since it does not include the additional luminosity emitted by the protostar, which may become significant during the Class II and late Class I phase. Thus, we expect that the simulation of \citet{bate09b} may overestimate the extent of small-scale fragmentation and the number of BDs formed in disks. \begin{figure} \epsscale{1.2} \plotone{f4.eps} \caption{The gas temperature as a function of distance from the source for all sources in the RT simulation at 1.0 $t_{ff}$. The overplotted line shows $T \propto r^{-1/2}$. \label{tempvsr}} \end{figure} \subsubsection{Stellar Mass Distribution} The large temperature range in the RT simulation has a profound effect on the stellar mass distribution. Figure \ref{imf} depicts the total mass of the star-disk systems in each simulation, where we define the surrounding disk as cells with $\rho > 5 \times 10^{-17}$ g cm$^{-3}$. We find that this cutoff selects gas within a few hundred AU of the protostars, visually identified with the disk, while excluding the envelope gas. Increased thermal support in the protostellar disk acts to suppress disk instability and secondary fragmentation in the core. In contrast, protostellar disks in the NRT calculation suffer high rates of fragmentation. Most of these small fragments are almost immediately accreted by the central protostar, temporarily driving large accretion rates onto the central source.
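The resolution argument above (Equation \ref{deltal}) reduces to a few lines of arithmetic. The following sketch, which is illustrative and not the simulation code, evaluates the luminosity deficit and the corresponding temperature underestimate for the radii quoted in the text:

```python
# Sketch: accretion-luminosity deficit when a simulation resolves only down to
# R_res instead of the stellar surface R_*. Values taken from the text.
AU_PER_RSUN = 215.0  # approximate number of solar radii in 1 AU

def luminosity_deficit_factor(r_res_au=0.5, r_star_rsun=5.0):
    """Factor by which L_acc = G m mdot / R is underestimated: R_res/R_* - 1."""
    return (r_res_au * AU_PER_RSUN) / r_star_rsun - 1.0

def temperature_deficit_factor(lum_factor):
    """Since T scales as L^(1/4), the temperature is low by factor^(1/4)."""
    return lum_factor ** 0.25

f_l = luminosity_deficit_factor()      # ~20, as in Equation (deltal)
f_t = temperature_deficit_factor(f_l)  # ~2
print(f"luminosity low by ~{f_l:.1f}x, temperature low by ~{f_t:.1f}x")
```

Running this reproduces the factor of $\sim$20 in luminosity and $\sim$2 in gas temperature quoted above.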
If we define the star formation rate per freefall time as \begin{equation} \mbox{SFR}_{ff} = {\dot M_* \over {M/{\bar t_{ff}}}}, \end{equation} \citep{krumholzmckee05}, then the total star formation rate in the NRT simulation is 13\% versus 7\% in the RT simulation. Thus, the RT SFR$_{\rm ff}$ is almost half the NRT value and agrees better with observations \citep{KandT07}. Since the simulations have the same numerical resolution, the difference must be directly attributable to the thermal physics. In the RT simulation, cores containing protostars experience radiative feedback that slows collapse and accretion. Due to small-number statistics, we do not directly compare with the shape of the observed initial mass function. Accurate comparison is also problematic because many of the late-forming protostars are still actively accreting. As shown in Table \ref{table1}, by 1 $t_{\rm ff}$ in the RT case, about a third (5) of the protostars have accretion rates that are at least 5 times smaller than their individual mean accretion rate, indicating that the main accretion phase has ended. Adopting an efficiency factor of $\epsilon_{\rm core} = {{1}\over{3}}$ to account for mass loss due to outflows \citep{matzner00,alves07, enoch08}, we find that the mean protostellar mass of these protostars is $\bar m = 0.4$ \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi, which is comparable to the expected mean mass of the system initial mass function of $\sim$ 0.5 \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi \citep{scalo86, chabrier05}. \begin{figure*} \epsscale{1.15} \plottwo{f5a.eps}{f5b.eps} \caption{The average gas temperature at 0.8 $t_{ff}$ and 1.0 $t_{ff}$ as a function of density. The error bars give the temperature dispersion in each bin. The dashed line is a least-squares fit of Equation 11, which returns $\rho_{\rm c}$ and $\gamma$.
The dot-dashed line plots Equation 11 with the original parameters: $\rho_{\rm c} = 1 \times 10^{-13}$ g cm$^{-3}$ and $\gamma = 1.67$. The power-law density-temperature relation from \citet{larson05} is also plotted. \label{tvsrho}} \end{figure*} The dynamics of close bodies and the surrounding gas are difficult to resolve accurately inside a small number of grid cells, so we merge particles that pass within 8 cells. Without this limit, some of the small fragments would dynamically interact with the central body and be ejected from the stellar system. Such brown-dwarf-mass objects are commonly produced in simulations that do not include a merger criterion, typically in larger numbers than are observed in the stellar IMF (e.g., \citealt{bate03, bate05, bate09a}). As a result, the simulations resolve only wide binaries with separations $>$ 300 AU. \begin{figure} \epsscale{1.2} \plotone{f6.eps} \caption{The distribution of masses (star + disk) for the two simulations at 1.0 $t_{ff}$. The solid and dashed-cross lines indicate the NRT and RT simulations, respectively. \label{imf}} \end{figure} Figure \ref{frag} shows a histogram of all created fragments in both simulations, including the final mass of the objects that are merged. Due to the low resolution of the disks in the simulations ($\sim 10$ cells), the many small bodies shown in the NRT distribution indicate numerical disk instability rather than small bodies forming from gravitational collapse. The large number of particles that are created in the NRT case is directly related to the nearly isothermal EOS. Gravitational instability in disks results in filamentary spiral arms. If the gas is isothermal, the filaments undergo indefinite collapse irrespective of the numerical resolution \citep{truelove97, truelove98, larson05, inutsuka92}. In a numerical calculation, this means that all the cells along a filament exceed the Jeans criterion nearly simultaneously and trigger refinement.
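The Jeans-based refinement trigger just described can be sketched as a simple resolution test on the local Jeans length; the requirement of at least four cells per Jeans length follows \citet{truelove97}, but the constants and example values below are illustrative, not the simulation code:

```python
import math

G_CGS = 6.674e-8   # gravitational constant, cgs
K_B = 1.381e-16    # Boltzmann constant, erg/K
M_H = 1.673e-24    # hydrogen mass, g
MU = 2.33          # mean molecular weight for molecular gas (assumed)
AU_CM = 1.496e13

def jeans_length_cm(temperature_k, density_g_cm3):
    """Isothermal Jeans length: lambda_J = sqrt(pi c_s^2 / (G rho))."""
    cs_sq = K_B * temperature_k / (MU * M_H)
    return math.sqrt(math.pi * cs_sq / (G_CGS * density_g_cm3))

def violates_jeans_resolution(temperature_k, density_g_cm3, cell_cm, n_cells=4):
    """True if the cell is too coarse to resolve the local Jeans length."""
    return jeans_length_cm(temperature_k, density_g_cm3) < n_cells * cell_cm

# 10 K gas on a 32 AU cell: resolved at rho = 1e-16 g/cm^3, not at 1e-13
print(violates_jeans_resolution(10.0, 1e-16, 32 * AU_CM))  # False
print(violates_jeans_resolution(10.0, 1e-13, 32 * AU_CM))  # True
```

Because the Jeans length scales as $\rho^{-1/2}$ at fixed temperature, every cell along a dense isothermal filament crosses this threshold at nearly the same time, which is the refinement cascade described above.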
Once the maximum refinement level is reached, sink particles are introduced in cells with densities violating the Truelove criterion \citep{truelove97}. Since our sink particle algorithm is formulated to represent a collapsing sphere, it is not well suited to filament collapse. Kratter et al.~(2009, in preparation) have addressed this issue in their predominantly isothermal simulations by transitioning from an isothermal to an adiabatic EOS once the density comes within a factor of four of the density at which sink particles are created. This has the effect of forcing filaments to fragment into quasi-spherical blobs prior to sink particle creation, thereby allowing the collapsing objects to be faithfully represented by point-like sink particles. At higher resolution, the barotropic nature of our EOS is invoked, and much of this fragmentation disappears (see Section \ref{convergenceEOS}). Similarly, in radiative calculations filamentary collapse is halted by heating due to radiative feedback, so that fragmentation is described by spherical rather than filamentary collapse. With either treatment of heating, numerical fragmentation in filaments is suppressed, although physical fragmentation may still occur. The creation and fragmentation of filaments in the simulations is a result of gravitational instability driven by rapid accretion. The criterion for the onset of instability is similar to the classic Toomre $Q<1$ condition, slightly modified by the non-axisymmetry of the instabilities and the finite scale height of the disks, which is a result of turbulence driven by the accretion. This sort of gravitational instability has been investigated by Kratter et al.~(2008, 2009 in preparation), who point out that the presence or absence of instability depends largely on the accretion rate onto the disk.
The rate of mass transport through an $\alpha$ disk is \begin{equation} \dot{m} = 3\left(\frac{\alpha}{Q}\right) \frac{c_{\rm s,disk}^3}{G}, \end{equation} where $Q$ is the Toomre parameter for the disk and $c_{\rm s,disk}$ is the sound speed within it. Gravitational instabilities produce a maximum effective viscosity $\alpha\sim 1$. At early times, we find that the accretion rate from a core onto the disk forming within it can be $\gg c_{\rm s,core}^3 / G$, where $c_{\rm s,core}$ is the sound speed in the core. If the sound speeds in the disk and core are comparable, $c_{\rm s,disk} \sim c_{\rm s,core}$, as is the case in the low-resolution NRT simulation, then the disk can only deliver matter to the star at a rate $\sim c_{\rm s,core}^3 / G$ while still maintaining $Q > 1$. As a result, matter falls onto the disk faster than the disk can deliver it to the star, and the disk mass grows, driving $Q$ toward 1 and producing instability and fragmentation, as illustrated by the NRT simulation. Conversely, if the disk is warmed, either by radiation or by a switch from an isothermal to an adiabatic equation of state, then $c_{\rm s,disk} > c_{\rm s,core}$ and the rate at which the disk can deliver gas to the star increases. If the disk is sufficiently warm, it can process all the incoming material while still maintaining $Q > 1$. As a result, the disk does not fragment, as is seen in both the low- and high-resolution RT simulations and in the high-resolution NRT simulation. This shows that the fragmentation in the low-resolution NRT simulation is indeed numerical rather than physical in origin, and that it is a result of the density-dependence of the equation of state rather than of the resolution directly. This analysis also sheds light on the importance of numerical viscosity. \citet{krumholz04} show that in the inner few cells of disks, numerical viscosity can cause angular momentum transport at rates that correspond to $\alpha \ga 1$.
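The delivery-rate ceiling invoked in this argument is easy to evaluate. The sketch below simply computes $\dot m = 3(\alpha/Q)c_{\rm s}^3/G$ for illustrative temperatures (the 10 K and 40 K values are examples, not the simulation parameters):

```python
# Sketch: maximum transport rate of a marginally stable alpha disk, in Msun/yr.
G_CGS = 6.674e-8   # cgs
K_B = 1.381e-16
M_H = 1.673e-24
MU = 2.33          # assumed mean molecular weight
MSUN_G = 1.989e33
YR_S = 3.156e7

def sound_speed_cm_s(temperature_k):
    return (K_B * temperature_k / (MU * M_H)) ** 0.5

def max_disk_delivery_msun_yr(temperature_k, alpha=1.0, q=1.0):
    """mdot = 3 (alpha/Q) c_s^3 / G, converted from g/s to Msun/yr."""
    mdot_g_s = 3.0 * (alpha / q) * sound_speed_cm_s(temperature_k) ** 3 / G_CGS
    return mdot_g_s * YR_S / MSUN_G

# Warming a disk from 10 K to 40 K raises the ceiling by (40/10)^(3/2) = 8
ratio = max_disk_delivery_msun_yr(40.0) / max_disk_delivery_msun_yr(10.0)
print(round(ratio, 3))  # 8.0
```

At 10 K the ceiling is a few $\times 10^{-6}$ \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi yr$^{-1}$, of order $c_{\rm s}^3/G$; the steep $c_{\rm s}^3$ scaling is why even modest warming lets the disk process its infall while keeping $Q > 1$.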
However, as the analysis above shows, increasing $\alpha$ tends to suppress fragmentation rather than enhance it. We find that fragmentation is more prevalent in the low-resolution NRT simulation than the high-resolution one, which is exactly the opposite of what we would expect if numerical angular momentum transport were significantly influencing fragmentation. Therefore, we conclude that numerical angular momentum transport is not dominant in determining when fragmentation occurs in our simulations. In isothermal calculations, the issue of filamentary collapse is a problem for all sink particle methods; it is not unique to grid-based codes. Due to the filamentary fragmentation in the NRT case, we prefer to merge close particle pairs in the simulations rather than follow their trajectories. Note that particles are inserted with the mass exceeding the Truelove criterion rather than the net unstable mass in the violating cells. Particles created within a discrete bound mass concentration typically grow quickly in mass. Most particles formed in the unstable disk regions form in a spiral filament and do not have significant bound mass, so their masses are tiny when they are accreted by the central object. However, if several small particles are created within the merging radius each time step around a particular protostar, their merging can significantly increase the instantaneous accretion rate. As illustrated by Figure \ref{frag}, there are only a handful of objects that form and approach within a merging radius in the RT simulation, whereas the NRT simulation produces a plethora of such bodies. \citet{bate09b} finds a similar reduction in protostar number with the addition of radiative transfer. As in our calculation, the final number of stars including radiative transfer is sufficiently small that a statistical comparison with the IMF is problematic. Instead, we base our comparison on the mean stellar mass.
Using a resolution of 0.5 AU, \citet{bate09b} finds $\bar m \sim 0.5 ~\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$, which does not include outflows or any scaling factor accounting for their presence. Adopting a scaling factor of $\epsilon_{\rm core} = 1/3$ would produce a mean of $\bar m \sim 0.2~ \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$, lower than our RT mean mass and the mean mass of either the system or individual stellar initial mass function reported by \citet{chabrier05}. However, in \citet{bate09b} a number of the protostars are continuing to accrete and have not reached their final mass. In addition, \citet{bate09b} demonstrates that the mean stellar mass increases as calculations approach higher resolution and include a larger portion of the accretion luminosity. This result is most likely because disk fragmentation decreases as the gas becomes hotter, thus increasing accretion onto primary objects. It is possible that if \citet{bate09b} had included all the accretion and stellar luminosity, the mean mass obtained would be closer to the value we find. Observations suggest that BDs comprise $\sim 30$\% of the total population of clusters \citep{andersen06}. Despite the merger criterion we adopt, the NRT calculation produces a significant number of BDs, $>30$\% without scaling by $\epsilon_{\rm core}$, resulting in a slightly lower mean mass than the RT run. In comparison, \citet{bate03} find that approximately half of the objects formed are BDs, resulting in a mean mass of $\sim 0.1 ~\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$. This result persists for barotropic calculations modeling more massive clusters with superior resolution \citep{bate09a}. Calculations using a modified EOS that includes effects due to the internal energy and dissociation of H$_2$, the ionization state of H, and approximate dust cooling find increased disk fragmentation, leading to numerous BDs \citep{attwood09}.
Thus, the overproduction of BDs in non-radiative simulations substantiates the importance of radiative transfer and feedback from protostars in accurately investigating fragmentation and the initial mass function. \begin{figure*} \epsscale{1.15} \plottwo{f7a.eps}{f7b.eps} \caption{The figures show the distribution of particles formed as a function of mass for the RT (left) and NRT (right) simulations (solid line). These include the particles that are merged, where the total particle number with final masses greater than $10^{-3}$ \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi is 23 and 251, respectively. The dashed lines show the distributions of stellar masses at the final time output. \label{frag}} \end{figure*} \subsubsection{Accretion Rates} As indicated by the Toomre criterion given by equation (5.2), the local gas temperature is key to the stability of disks. Clumpiness in the disks is directly reflected in the variability of the protostellar accretion rate. Figure \ref{acc} shows the accretion rates for the two first-forming protostars in each calculation as a function of time. The RT protostellar accretion in the left panel illustrates that once a protostar has accreted most of the mass in the core envelope, its accretion rate diminishes significantly. Protostars in both simulations show evidence of variable accretion on short timescales. However, the accretion bursts in the NRT simulation may vary by an order of magnitude, while in the RT case variability is generally only a factor of a few. Disk clumpiness may be magnified due to dynamical perturbations by nearby companions. For the cases shown, the RT protostar is single, while the NRT protostar has several companions. Variability similar to the NRT protostellar accretion rates is also observed by \citet{schmeja04}.
In their turbulent isothermal runs, \citet{schmeja04} show that the magnitude of the initial particle accretion rate is comparable to our calculations at $\dot m \sim \mbox{few} \times 10^{-5}$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ yr$^{-1}$ with variability by factors of 5-10. However, the reported accretion rates appear to decrease significantly within 0.1 Myr. \begin{figure*} \epsscale{1.15} \plottwo{f8a.eps}{f8b.eps} \caption{The accretion rate, $\dot M$, as a function of time for the first-forming object in the RT (left) and NRT (right) simulations. We average both simulations over 2 kyr for consistency. \label{acc}} \end{figure*} In principle, a sizable amount of the protostellar mass may be accreted during the periods of high accretion. We define a burst as an increase of 50\% in the accretion rate over 1000 years, where mergers of another protostar of mass $m > 0.1$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ are excluded. Using this metric, the NRT protostars accrete 0-13\% of their mass during bursts, with a median of 5\%. The RT protostars accrete 0-9\% of their mass during bursty accretion, with a median of 1\%. Thus, bursty accretion does not contribute significantly to the final protostellar masses. Our data analysis is limited by the coarse level time step of $\sim 100$ yrs, so that accretion rate variability on shorter timescales will not be resolved in the analysis. For comparison, \citet{vorobyov06}, modeling the formation and accretion history of a protostar in two dimensions, find that $>50$\% of the protostellar mass is gained in short, intense accretion bursts. In their simulations, accretion occurs smoothly until $t \simeq 0.15$ Myr, where variability on timescales $< 100$ yrs begins, corresponding to accretion of $\sim 0.05$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ clumps.
Although their time resolution is finer, sampling at longer time increments, as in our calculation, is unlikely to miss persistent cyclical variability of four to five orders of magnitude in accretion rate. We find that when the stellar mass is about half the final mass, large variability in the RT accretion rates is rare, while it is more common in the NRT case. RT protostars with ages comparable to $t \simeq 0.15$ Myr experience the most variable accretion, with variations spanning 1-2 orders of magnitude. However, by this time, the majority of the envelope mass has been accreted and accretion rates are $\bar{\dot m} \sim 10^{-7}$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ yr$^{-1}$, so that accreting significant mass is unlikely. In \citet{vorobyov09}, the authors demonstrate that simulations with a stiffer equation of state and warmer disk exhibit variability over at most two orders of magnitude. This finding is more consistent with our results, and it supports the differences in accretion we find between the NRT and RT calculations. However, bursty accretion due to disk instability also depends upon the core rotation and the rate at which mass is fed into the disk from the envelope \citep{vorobyov06, boley09}. Thus, we expect that radiative effects alone cannot completely determine accretion behavior. Since the disks in our low-resolution calculations are not well resolved, we may not be able to resolve the disk clumpiness observed by \citet{vorobyov06}. Their innermost cell is placed at 5 AU, which is comparable to the cell size in our high-resolution runs; however, they adopt logarithmic spacing to concentrate cells in the inner region of the disks. We note that their method also includes an approximate treatment of magnetic fields that could influence their results and which we neglect in our calculations. Figure \ref{aveacc} shows that the NRT simulation exhibits slightly higher average accretion.
Note that we subtract the accretion spikes caused by significant mergers. The mean accretion rate over the protostellar lifetime for the final protostars is $\sim 1 \times 10^{-5}$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ yr$^{-1}$ in the NRT run versus $\sim 6 \times 10^{-6}$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ yr$^{-1}$ for the RT run. Without the added thermal support from radiation feedback and with increased fragmentation, the NRT protostars accrete their envelope mass more quickly. However, the protostars in both calculations satisfy the same accretion-mass relationship, with accretion increasing approximately linearly with star mass. Using a least-squares fitting technique, we obtain power-law relationships $\bar {\dot m} \propto m^{0.92}$ and $\bar {\dot m} \propto m^{0.64}$ for the RT and NRT data, respectively, which have $\chi^2$ values of $67.6$ and $18.0.$\footnotemark We include masses $m \ge 0.1$ \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi in the fit and weight the data by the ages of the protostars. Thus, young protostars with only a short accretion history are weakly weighted. As Figure \ref{aveacc} shows, there is a significant amount of scatter about the fits. \citet{schmeja04} find a similar trend between the mean accretion rate and final masses for protostars forming in their isothermal driven-turbulence simulations. \footnotetext{The $\chi^2$ value for the fit is given by: $\chi^2 = \sum_{i=1}^N {{(y_i -A -Bx_i)^2}\over{\sigma_y^2}}$, where $y_i$ are the age-weighted accretion rates, $x_i$ are the masses, $A$ and $B$ are the fit coefficients, and $\sigma_y$ is the standard deviation of the $y_i$ values. } The apparent correlation between stellar mass and average accretion rate occurs because protostars forming in more massive cores tend to be more massive and also have higher accretion rates.
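For concreteness, an age-weighted least-squares power-law fit of the kind described above can be sketched as follows. The data here are synthetic, and the weighting scheme is one plausible reading of the text rather than the exact analysis pipeline:

```python
import math, random

def weighted_powerlaw_fit(masses, mdots, weights):
    """Weighted least squares of log10(mdot) = A + B*log10(m); returns (A, B)."""
    xs = [math.log10(m) for m in masses]
    ys = [math.log10(md) for md in mdots]
    wsum = sum(weights)
    xbar = sum(w * x for w, x in zip(weights, xs)) / wsum
    ybar = sum(w * y for w, y in zip(weights, ys)) / wsum
    sxx = sum(w * (x - xbar) ** 2 for w, x in zip(weights, xs))
    sxy = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(weights, xs, ys))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

# Hypothetical cluster: mdot ~ 1e-5 * m^0.9 Msun/yr with 0.1 dex scatter,
# weighted by made-up protostellar ages so young objects count for less.
random.seed(42)
masses = [0.1 * 1.3 ** i for i in range(20)]
mdots = [1e-5 * m ** 0.9 * 10 ** random.gauss(0.0, 0.1) for m in masses]
ages = [random.uniform(0.01, 0.15) for _ in masses]
intercept, slope = weighted_powerlaw_fit(masses, mdots, ages)
print(f"fitted slope B ~ {slope:.2f}")
```

The recovered slope lands near the input value of 0.9, analogous to the $\bar{\dot m} \propto m^{0.92}$ relation quoted for the RT data.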
\citet{mckee03} derive a self-similar solution for the accretion rate where the pressure and density each have a power-law dependence on $r$, such that $\rho \propto r^{-k_{\rho}}$ and $P \propto \rho^{\gamma_P}\propto r^{-k_P}$, where $\gamma_P = 2k_P/(2+k_P)$ and $k_{\rho} = 2/(2-\gamma_P)$. Although the simulated cores are not self-similar, it is possible to fit a power law to the pressure of the core envelope in most cases. Both RT and NRT cores have exponents in the range $k_P \simeq 0-5$ at a few thousand AU from the protostar, with an average value of $k_P \sim 1$ or $k_{\rho}=1.5$. \citet{mckee03} show that the accretion rate is then: \begin{eqnarray} \dot m_* &=& 5.5 \times 10^{-6} \phi_* A^{1/8} k_P^{1/4} \epsilon_{\rm core}^{1/4} \left(\frac{m_{*f}}{1~\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi}\right)^{3/4} \times \nonumber \\ & & \left( {{P_{s, \rm core} /k_B}\over{10^6 \mbox{ K cm}^{-3}}} \right) \left({{m_*}\over{m_{*f}}}\right)^{3(2-k_{\rho})/[2(3-k_{\rho})]} \mbox{$\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$yr$^{-1}$}, \end{eqnarray} where $m_{*f}$ is the final stellar mass, $P_{s, \rm core}$ is the core surface pressure, $\phi_*$ and $A$ are order unity constants describing the effect of magnetic fields on accretion and the isothermal density profile, respectively. Since we weight the fit by the protostellar age, this selects for the case where $m_* \simeq m_{*f}$. Assuming that $\Sigma_{\rm cl}$ is roughly constant, $\dot m_* \propto m_{*f}^{3/4}$, which is similar to the slopes produced by the least-squares fits. \subsubsection{Multiplicity} The number of stars with stellar companions is an important observable that may directly relate to the initial conditions of star-forming regions. Among the population of field stars, most systems are single, with the number of systems containing multiple stars increasing as a function of stellar mass \citep{lada06}.
Young pre-main sequence stellar populations are observed to contain more multiple systems than field stars, suggesting that the multiplicity fraction evolves over time \citep{duchene07}. Unfortunately, the initial stellar multiplicity is challenging to measure directly due to the difficulty of resolving close pairs and limited sample sizes \citep{duchene07}. The two dominant effects influencing multiplicity are fragmentation and N-body dynamics. While fragmentation in a collapsing core may result in multiple stars, systems with three or more bodies are dynamically unstable, causing higher-order stellar systems to rapidly lose members. Multiple stellar systems can also occur via stellar capture, a mechanism most applicable to high-mass stars forming in very clustered environments \citep{moeckel07}. \citet{goodwin05} suggest that observed higher-order multiple systems are initially members of open stellar clusters rather than arising from the fragmentation of a single core. In general, the number of such systems is observed to be small, with only 1 in every 50 systems in the field having at least four members \citep{duquennoy91}. The RT and NRT calculations present very different pictures of the initial stellar multiplicity. The large differences in temperature and fragmentation have a significant effect on the fractions of stars in single and multiple systems. As shown in Figure \ref{multiple}, the majority of stars formed in the RT calculation are single, while in the NRT calculation the majority of stars live in systems with two or more stars. This is mainly due to continued disk fragmentation rather than long-lived stable orbital systems. The field single star fraction (SSF), defined as the ratio of the number of primary stars without a {\it stellar} companion to the total number of stellar systems, is observed to be $\sim$ 70\% \citep{lada06}\footnotemark.
\footnotetext{The SSF does not include brown dwarfs in estimating multiplicity.} The RT calculation produces an SSF of 0.8 $+0.2/-0.4$, while the NRT calculation has an SSF of 0.6 $\pm 0.4$, where the uncertainty is given by the Poisson error. Due to the resolution of our calculation, we can only capture wide binary systems with separations $r > 300$ AU. However, a number of protostars have undergone significant mergers, which we define as those in which the smaller mass exceeds 0.1 \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi. We find that about a third of the stars in the RT simulation and a tenth of stars in the NRT simulation have experienced significant past mergers. Assuming that these would have resulted in multiple stellar systems revises the SSF values to 0.5$\pm 0.3$ and 0.6$\pm 0.4$, respectively. Unfortunately, this is a very uncertain estimate given our small-number statistics, and we cannot know whether the systems with significant mergers would have resulted in bound or unbound systems in the absence of the mergers. \begin{figure} \epsscale{1.2} \plotone{f9.eps} \caption{The plot shows the distribution of average accretion rates (crosses) as a function of final star mass at 1 $t_{\rm ff}$. The horizontal line indicates the Shu (1977) accretion rate $c_{\rm s}^3/G$ at 10 K. The dashed and dot-dashed lines indicate the age-weighted fit of the average accretion rates for the RT and NRT runs, respectively. \label{aveacc}} \end{figure} \begin{figure} \epsscale{1.2} \plotone{f10.eps} \caption{The plot shows the system multiplicity for the two calculations, where N is the number of stellar systems, and the plot is normalized to the total number of systems. The multiplicity on the x-axis is the number of stars in each system.
\label{multiple}} \end{figure} \subsubsection{Stellar Feedback} Our model includes accretion luminosity and a sub-grid stellar model estimating the contribution from Kelvin-Helmholtz contraction and nuclear burning (see Appendix \ref{starpart2}). The stellar model includes four evolutionary stages. The earliest stage occurs when the protostar begins burning deuterium within the core at a sufficient rate to maintain a constant core temperature. Once the initial deuterium in the core is depleted, burning occurs at the rate at which new matter convects inward; this is the steady core deuterium state. In the third stage, the star burns the deuterium remaining in the outer layers of the protostar. Finally, the star ceases contracting and reaches the zero-age main sequence (ZAMS). Figure \ref{lumoft1} shows the luminosity as a function of time for three different protostars. At early times, accretion dominates the luminosity, and variability in accretion is strongly reflected in the total luminosity. At late times, accretion slows and Hayashi contraction begins to make a substantial contribution. In general, the total luminosity summed over all the stars is dominated by those protostars with the highest accretion rates. For these young sources, the stellar luminosity is quite small in comparison to the accretion luminosity. Thus, the last panel in Figure \ref{lumoft1} shows that for all times, accretion luminosity is the main source of luminosity. For comparison, luminosity due to other physical processes such as compression and viscous dissipation is small (see Figure \ref{heating}). Figure \ref{avelum} shows the final luminosity as a function of source mass. The luminosity increases roughly linearly with mass but has a fairly large scatter. As indicated on the plot, two of the stars have reached the ZAMS, which was due to increased accretion resulting from significant mergers.
Even in this low-mass stellar cluster, there are individual stars with contributions larger than the net viscous dissipation. This demonstrates that any heating due to viscous dissipation is exceeded by modest protostellar feedback. \begin{figure} \epsscale{1.2} \plotone{f11.eps} \caption{The plot shows the total luminosity as a function of time for three stars in the RT simulation. The accretion luminosity contribution is shown by the dashed line, and the masses are 1.5, 0.45, and 0.35 \ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi, respectively. The bottom plot shows the total luminosity including all the protostars. \label{lumoft1}} \end{figure} \subsection{Resolution and Convergence \label{convergence}} The AMR methodology allows flexibility in both the depth and breadth of resolution. Insufficient resolution may give inaccurate results, so it is important to gauge the sensitivity of the result to the resolution. The large scope of the problem and the expense of the radiative transfer methodology limit the depth or maximum resolution of our calculation, where the RT calculation cost is $\sim$ 70,000 CPU hrs on 2.3 GHz quad-core processors. To quantify the effects of resolution on the solution, we run a second pair of RT and NRT calculations that evolve the first-formed object to a resolution eight times higher than the overall calculation. We run these simulations for 0.12 $t_{\rm ff}$ after the formation of the protostar. We adopt a fixed number of cells for the closest resolved approach between two particles, so that the high-resolution simulations have a merging radius of 32 AU, a factor of eight smaller than the low-resolution cases. \subsubsection{High-Resolution Study with Radiative Feedback} The high-resolution and low-resolution calculations both form single objects with stable, thermally supported disks. Figure \ref{tempprof} shows a comparison of the densities, temperatures, and radiation fields.
The effective radiation temperature differs by only a few percent outside the inner cells of the low-resolution calculation. In both cases, the gas and radiation temperatures are well coupled such that $T_{\rm gas} \simeq T_{\rm rad}$. However, the gas in the high-resolution case is more centrally concentrated, and the disk radius appears smaller. At the final time, the high-resolution star has accreted 0.54 $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$, while the low-resolution case has reached 0.50 $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$. During the course of the run, the lower resolution case forms a few fragments in the disk, which are almost immediately accreted by the primary, while in the high-resolution case, no additional particles are formed. \begin{figure} \epsscale{1.2} \plotone{f12.eps} \caption{ The plot shows the distribution of luminosities in the RT simulation as a function of final star mass at 1.0 $t_{\rm ff}$. The crosses, stars, and diamonds indicate stars undergoing variable core deuterium burning, undergoing steady core deuterium burning, or having reached the zero-age main sequence, respectively. \label{avelum}} \end{figure} Figure \ref{lumoft2} shows a comparison of the accretion and luminosity as a function of time. Accretion is generally smooth, and the rates agree to within a factor of two. The luminosity in the low-resolution run has slightly larger variation, but the two approach a similar value at later times. Although there are deviations in the history between the two runs, the evolution is not significantly different at the higher resolution. Certainly, even higher resolution is preferable for investigation of disk properties, but our main result--{\it that radiative feedback is important to the formation of low-mass stars}--is insensitive to the simulation resolution.
High-resolution radiation-hydrodynamics simulations of low-mass disks including irradiation confirm that such disks, with properties similar to ours, are stable against fragmentation \citep{cai08}. Gravitational instability is expected to occur only in the regime where the mass of the disks is comparable to the stellar mass \citep{cai08, stamatellos08, stamatellos09}. \begin{figure*} \epsscale{1.1} \plotone{f13C.eps} \caption{From left to right, the images show the log density, log radiation temperature, $T_{\rm r}=(E_{\rm r}/a)^{1/4}$, and log gas temperature for a protostellar system at $\sim 0.6~t_{\rm ff}$ followed with $dx = 4$ AU resolution (top) and $dx = 32$ AU (bottom). The image is 0.03 pc on a side, where we denote the star position with a black cross. The color scale ranges are given by $10^{-19}-10^{-14}$ g cm$^{-3}$, $1-100$ K, and $1-100$ K, respectively. \label{tempprof}} \end{figure*} \subsubsection{High-Resolution Study with a Barotropic EOS \label{convergenceEOS}} This higher resolution non-radiative study achieves maximum densities $> 5 \times 10^{-13}$ g cm$^{-3}$, several times higher than the barotropic critical density. Consequently, dense gas is heated to temperatures of $\sim 20-25$ K. During the time we compare the non-radiative simulations, both the high-resolution barotropic calculation and the first collapsing core in the low-resolution NRT calculation form a primary object of similar mass with a protostellar disk (see Figure \ref{barotropic}). However, the low-resolution NRT system experiences significantly more fragmentation. We find that the protostellar disk in the NRT case fragments during approximately half of the time steps, while in the higher resolution barotropic case fragmentation occurs very rarely, taking place in less than 0.1\% of the time steps.
Since the low-resolution NRT disks are essentially isothermal, we conclude that heating due to the barotropic approximation is largely responsible for decreasing the number of fragments. In contrast, the higher-resolution disks are heated to $\sim 20$ K. However, this is still significantly less heating than in the RT case, and we find that numerical instability is not suppressed completely even at high resolution. The radiative high-resolution case experiences no disk fragmentation, underscoring our conclusion that radiative feedback is crucial to representing fragmentation or lack thereof in the star formation process. Despite different merger radii, in both non-radiative cases all of the fragments eventually merge with the primary protostar so that the end result in both calculations is a single protostar. This suggests that the fragmentation taking place at low resolution is largely numerical rather than physical. We emphasize that both significantly higher resolution than we use and additional physics are required to study accretion disk properties. \subsubsection{Convergence} The minimum breadth of resolution is determined by the Truelove criterion. Due to the radiation gradient refinement criterion we apply to resolve the radiation field, at 1 $t_{\rm ff}$ the RT simulation has $\sim 80$\% more cells, generally concentrated near the protostars, than the NRT calculation. This extra refinement improves the resolution in the regions near protostellar sources. Inverting the Jeans criterion (equation \ref{jeans}) yields an expression for the effective Jeans number for each cell as a function of density, resolution, and sound speed: \begin{equation} J_{\rm eff} ={ {(\rho G) ^{1/2} \Delta x_l} \over {c_s \pi ^{1/2} }}. \end{equation} As shown in Figure \ref{jeansno}, the RT simulation is shifted to lower $J_{\rm eff}$, where the vast majority of cells in both calculations are resolved to better than $J_{\rm eff}$ = 0.1.
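As a concrete illustration, the effective Jeans number defined above can be evaluated for a single cell. The following sketch uses assumed, purely illustrative values for the density, cell size, and sound speed; they are not taken from the simulations:

```python
import math

G = 6.67e-8      # gravitational constant [cm^3 g^-1 s^-2]
AU = 1.496e13    # astronomical unit [cm]

def jeans_number(rho, dx, c_s):
    """Effective Jeans number J_eff = sqrt(rho*G) * dx / (c_s * sqrt(pi))."""
    return math.sqrt(rho * G) * dx / (c_s * math.sqrt(math.pi))

# Illustrative (assumed) values for a moderately dense cell on the finest level
rho = 1.0e-16          # gas density [g cm^-3]
dx = 32.0 * AU         # low-resolution cell size [cm]
c_s = 1.9e4            # isothermal sound speed at ~10 K [cm s^-1]

J = jeans_number(rho, dx, c_s)
print(f"J_eff = {J:.3f}")   # well below the refinement threshold of 0.25
```

For these values $J_{\rm eff} \approx 0.04$, comfortably inside the well-resolved regime described in the text.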
The choice of base grid resolution guarantees that $J_{\rm eff}$ is typically much smaller than $J$ for most of the cells on the domain. We use a fiducial value of $J=0.25$ to trigger additional refinement in both simulations, so no cell has $J_{\rm eff}$ exceeding 0.25. Cells in the highest $J_{\rm eff}$ bin are exclusively found on the maximum AMR level, and they are generally at the highest gas densities. These cells, many located in the disks around the protostars, are at the same resolution in both calculations. Thus, differences between the fragmentation results of the RT and NRT calculations are not due to differences in effective resolution but are solely a result of differences in thermal physics. \section{SIMPLIFYING ASSUMPTIONS \label{caveat}} These numerical calculations neglect a number of arguably crucial physical processes in low-mass star formation. In this section, we discuss the implications for our results. \subsection{Chemical Processes} \subsubsection{Dust Morphology} Our dust model neglects the evolution of dust grains due to coagulation and shattering. In cold dense environments, such as protostellar disks, the aggregation of dust grains may significantly increase grain sizes on timescales as short as 100 years \citep{schmitt97, blum02}. Observations of Class 0 protostars indicate significant evolution of the dust size distribution at average densities of $n\simeq 10^7$ cm$^{-3}$ by the Class 0 phase \citep{kwon09}. Since we adopt a single dust model for the entire domain, we are likely to overestimate the dust grain size in some regions and underestimate it in others. To examine the effect of the dust model on gas temperature, we repeat the turbulent driving phase (without gravity) using a conservative model more typical for non-aggregate dust grains: \begin{eqnarray} \kappa_R = 0.015 (T_g^2/110 ) \mbox{ cm$^2$ g$^{-1}$ for $T_g \leq 110$ K} \label{rosse}\\ \kappa_P = 0.10 (T_g^2/110 ) \mbox{ cm$^2$ g$^{-1}$ for $T_g \leq 110$ K}.
\label{planck} \end{eqnarray} Using this model, we find that shocked gas may be heated as high as 18 K after a crossing time. In comparison, gas in the fiducial case is only heated to $\sim$11 K at the same densities (see Figure \ref{temphist} for the temperature distribution due to the fiducial dust opacity model). However, the extent of the additional heating is quite small: only 0.003\% of the mass is heated above 11 K relative to the fiducial case. This suggests that the simulations may underestimate the gas temperature in low-density regions outside of cores ($n_{\rm H} < 10^{7}$ cm$^{-3}$) where the dust distribution is not expected to evolve due to coagulation. Significant discrepancy between the gas temperatures of the two models is mainly confined to a small number of cells and is mitigated by the importance of molecular cooling in these regions, which we discuss below. \subsubsection{Gas Temperature at Low Density} \begin{figure} \epsscale{1.2} \plotone{f14.eps} \caption{The plot shows the accretion rate and luminosity as a function of time for the first formed star in the RT calculation and the same object followed with $dx = 4$ AU resolution. Temporal bins of 1 kyr are used. \label{lumoft2}} \end{figure} To simplify the dust-gas interaction, we assume that dust and gas are perfectly collisionally coupled such that their temperatures are identical. In molecular clouds, there can be significant variation between the dust and gas temperatures. For example, dust in close proximity to stellar sources is radiatively heated, while in strongly shocked regions of the flow, dust acts as a coolant for compressionally heated gas. Below we will discuss both the regime where dust cooling dominates, i.e., $T_g > T_d$, and where molecular cooling dominates, i.e., $T_d > T_g$.
\begin{figure} \epsscale{1.2} \plotone{f14bC.eps} \caption{ The images show the log density (left) and log gas temperature (right) for a NRT protostellar system at $\sim 0.5~t_{\rm ff}$ followed with $dx = 4$ AU resolution (top) and $dx = 32$ AU (bottom). The image is 0.03 pc on a side, where we denote the star position with a black cross. The color scale ranges are given by $10^{-19}-10^{-14}$ g cm$^{-3}$ and $1-50$ K, respectively. \label{barotropic}} \end{figure} When the gas is shock-heated, the perfect coupling approximation remains valid as long as the rate of energy transfer between the gas and dust is balanced by the cooling rate of the dust. The dust cooling per unit grain area by photon emission is: \begin{equation} F(a, T_d)= 4 < Q(a,T_d)> \sigma_B T_d^4, \end{equation} where $T_d$ is the dust temperature, $a$ is the grain size, and $<Q(a, T_d)>$ is the Planck-averaged emissivity \citep{draine84}. Then for an ensemble of grains with dust opacity, $\kappa_P$, the dust cooling is given by: \begin{eqnarray} n^2\Lambda_d &\simeq& 4 \kappa_P \rho \sigma_B T_d^4 \\ &\simeq& 9 \times 10^{-21} \left({{n_{H}} \over{1.6 \times 10^4 \mbox{ cm}^{-3}}} \right) \left( {T_d \over {10 \mbox{ K}} }\right)^6 \mbox{erg cm}^{-3}\mbox{s}^{-1}. \label{dustcool} \end{eqnarray} In equation (\ref{dustcool}), we substitute Equation (\ref{planck}) for $\kappa_{\rm P}$ and assume that $T_g \sim T_d$. 
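As a rough numerical check of equation (\ref{dustcool}), the sketch below evaluates the dust cooling rate at the fiducial density and dust temperature. The mean gas mass per hydrogen nucleus of $2.34\times 10^{-24}$ g is our assumption, not a value quoted in the text:

```python
SIGMA_B = 5.67e-5    # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
MU_H = 2.34e-24      # assumed mean gas mass per hydrogen nucleus [g]

def kappa_P(T):
    """Planck-mean dust opacity model above: 0.10 (T^2/110) cm^2 g^-1 (T <= 110 K)."""
    return 0.10 * T**2 / 110.0

def dust_cooling(n_H, T_d):
    """Dust cooling rate n^2 Lambda_d ~ 4 kappa_P rho sigma_B T_d^4 [erg cm^-3 s^-1]."""
    rho = MU_H * n_H
    return 4.0 * kappa_P(T_d) * rho * SIGMA_B * T_d**4

rate = dust_cooling(n_H=1.6e4, T_d=10.0)
print(f"{rate:.1e} erg cm^-3 s^-1")   # comparable to the quoted 9e-21
```

With these assumptions the result lands within a few tens of percent of the normalization in equation (\ref{dustcool}); the residual difference is set by the adopted mass per hydrogen nucleus.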
The rate at which energy is transferred from the gas to the dust is given by: \begin{eqnarray} n\Gamma_d &=& 9 \times 10^{-34} {n_{H}}^2 T_g^{0.5} \left[ 1-0.8e^{ \left( - {{75 \rm K}\over{T_g}} \right)}\right] \times \nonumber \\ & & (T_g-T_d) \left({{\sigma_d} \over{2.44 \times 10^{-21} \mbox{ cm$^{2}$}}}\right) \\ &\simeq& 7.3 \times 10^{-24} \left( { {n_{H}} \over{1.6 \times 10^4 \mbox{ cm}^{-3}} } \right)^2 \left( {T_g \over {10 \mbox{ K}} }\right)^{3/2} \times \nonumber \\ & &\left[ 1-0.8e^{ \left( - {{75 \rm K}\over{T_g}} \right)}\right] \left(1 - {T_d \over T_g} \right) \mbox{ erg cm}^{-3} \mbox{ s}^{-1}, \nonumber \end{eqnarray} where we adopt $\sigma_d = 2.44 \times 10^{-21}$ cm$^{2}$ for the dust cross section per H nucleus \citep{young04}. For a gas temperature of 10 K the exponential term is very small, so we neglect it in the following equation. Equating these expressions and solving for the gas density at which heating and cooling balance gives: \begin{equation} n_{H} \simeq 2 \times 10^7 \left( {T_d \over {10 \mbox{ K}} }\right)^6 \left( {T_g \over {10 \mbox{ K}} }\right)^{-3/2} \left(1 - {T_d \over T_g} \right)^{-1} \mbox{cm}^{-3}. \end{equation} Thus, we demonstrate that the dust and gas are well coupled as long as the gas densities are sufficiently high. However, even in regions where the dust and gas may not be well coupled, molecular line cooling is important. For gas densities in the range $n_{\rm H} = 10^3-10^5$ cm$^{-3}$, CO is the dominant coolant. For these densities, the cooling rate per H, $\Lambda/n_{\rm H}$, is approximately constant with density at fixed temperature. To compare with the magnitude of dust cooling, consider a 2 km s$^{-1}$ shock that heats the gas above 100 K. The cooling rate at 100 K is given by $n^2\Lambda_{\rm mol} \simeq 5 \times 10^{-27} n_{\rm H}$ erg cm$^{-3}$ s$^{-1}$, where we adopt the cooling coefficient from \citet{neufeld95}.
The characteristic cooling time is $\sim$1000 yr at the average simulation density, which is approximately half the cell-crossing time of such a shock, implying that molecules cool the gas relatively efficiently. Since the shock temperatures on our domain are limited by our resolution, which is much larger than the characteristic cooling length, post-shock temperatures do not surpass 20 K. In this regime, the dust cooling for perfect dust-gas coupling is at least an order of magnitude larger than the estimated molecular cooling. As a result, we likely underestimate the temperatures in low-density strongly shocked gas in comparison to similar shocks in molecular clouds. In the regions near protostars, the perfect coupling assumption is valid provided that gas heating by dust is balanced by molecular cooling. This case is discussed by \citet{krumholz08}, where the authors demonstrate that the dust and gas temperature remain within a degree provided that: \begin{equation} T_d - T_g \simeq \frac{3.5 \times 10^5}{n_{\rm H}} \mbox{ K} \end{equation} for gas temperatures around 10 K. For higher gas temperatures around 100 K, we adopt the molecular cooling coefficient above and find that the dust and gas are well coupled provided $n_{\rm H}$ exceeds $\sim 2 \times 10^8$ cm$^{-3}$. Number densities of this magnitude are exceeded in collapsing cores, so that regions near protostars are guaranteed to have well-coupled dust and gas. Thus, gas temperatures in our RT simulation are fairly accurate for densities larger than the average density, but they may be underestimated by a factor of $\sim 2$ in strong shocks when the molecular cooling rate is much smaller than the implemented dust cooling rate. Since gas heating suppresses fragmentation, our results may actually overestimate the amount of fragmentation. Consequently, our finding that radiative feedback reduces fragmentation would generally be strengthened by a better treatment of the thermodynamics.
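The coupling threshold derived above is straightforward to evaluate. The sketch below encodes the balance relation between dust cooling and gas-dust energy transfer; the example temperatures are chosen purely for illustration:

```python
def coupling_density(T_g, T_d):
    """Density above which dust and gas are well coupled, from the balance
    relation: n_H ~ 2e7 (T_d/10 K)^6 (T_g/10 K)^{-3/2} (1 - T_d/T_g)^{-1} cm^-3."""
    return 2.0e7 * (T_d / 10.0)**6 * (T_g / 10.0)**-1.5 / (1.0 - T_d / T_g)

# Illustrative case: gas shock-heated to 20 K against 10 K dust
n_crit = coupling_density(T_g=20.0, T_d=10.0)
print(f"n_H ~ {n_crit:.1e} cm^-3")
```

For this example the threshold is of order $10^7$ cm$^{-3}$, consistent with the statement that coupling holds only at densities well above the simulation average.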
\begin{figure} \epsscale{1.2} \plotone{f15.eps} \caption{Histogram of the effective Jeans number, $J_{\rm eff}$ at 1.0 $t_{ff}$. The solid and dashed lines indicate the NRT and RT simulations, respectively. Each histogram is normalized to the total number of cells. \label{jeansno}} \end{figure} \subsection{Magnetic Fields} Observations indicate the presence of magnetic fields in nearby low-mass star-forming regions \citep{crutcher99}. However, the magnitude of the fields and their importance in the star formation process remain uncertain. Observations by \citet{crutcher08} suggest that the energy contributed by magnetic fields on core scales is subdominant to the gravitational and turbulent energies. On smaller scales, magnetic fields are believed to be associated with disk accretion and the generation of protostellar outflows \citep{shu94, konigl00}. Numerical simulations have demonstrated that the presence of magnetic fields may suppress disk fragmentation by supplying additional pressure support \citep{machida08, price07a, price08}. We find that the inclusion of radiative transfer has a similar stabilizing influence on disks. \subsection{Multi-frequency Radiative Transfer} Due to the expense of the calculations, we adopt a gray radiative transfer flux-limited diffusion approximation. By averaging over angles and frequencies to obtain the total radiation energy density, we sacrifice the direction and frequency information inherent in the radiation field. As discussed in \citet{krumholz09}, these approximations touch on several competing effects that influence the radiation spectrum. Since radiation pressure is negligible for low-mass stars, it does not affect the gas dynamics. Instead, our main consideration is the extent to which radiative heating may differ for a more sophisticated radiative treatment. 
As we have discussed in previous sections, the gas temperature, and the corresponding thermal pressure, strongly influence accretion and fragmentation. The first effect to consider is a more exact treatment of dust opacity, which is strongly frequency dependent in the infrared and increases towards shorter wavelengths (e.g., \citealt{ossen94}). Since long-wavelength radiation has a lower optical depth, in a multi-frequency calculation the longest wavelengths would be able to escape the core. Anisotropies in the radiation field may also facilitate cooling. Radiative beaming, for example via an outflow cavity, may allow photons to escape along the poles \citep{krumholz05}. Thus, both these effects will likely decrease the temperature in protostellar cores. The gray radiative transfer method also assumes that the radiation field is thermalized, producing a Planckian radiation spectrum everywhere. Although this is a fair assumption in opaque regions where the number of mean-free-paths is large, it fails in optically thin regions. Since thermalization softens the radiation spectrum, the assumed Planck spectrum is likely to underpredict the heating rate. Since the net effect of the approximations is somewhat unclear, comparison with more sophisticated radiative treatments would be ideal. However, there have been no 2D or 3D non-gray simulations of low-mass star formation. To date, the most thorough investigation of protostellar formation is presented by \citet{masunaga00}. These spherically symmetric simulations follow the formation of 0.8 and 1.0 $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ protostars. At radii of 60 AU, they find temperatures ranging from 20-250 K during the main accretion phase, while we find $T_{\rm max} \sim 90$ K. Their maximum protostellar luminosity is 25 $L_{\odot}$, which is entirely due to accretion.
A few of the protostars in the RT simulation have higher masses and higher maximum luminosities, but the gas temperature distributions on average appear similar (see Figure \ref{tempvsr}). However, the disparity in maximum temperature may be attributable to either differences in the radiation schemes or initial conditions and geometry. Future work will investigate the effects of 3D multi-frequency radiative transfer on low-mass star formation. \section{CONCLUSIONS \label{conclude}} We perform gravito-radiation-hydrodynamic simulations to explore the effect of radiation feedback on the process of low-mass star formation. We compare our calculation with a similar one using an approximately isothermal equation of state in lieu of radiative transfer. We find that the inclusion of radiative transfer has a profound effect on the gas temperature distribution, accretion, and final stellar masses. We confirm the finding of \citet{bate09b} that additional heating provided by radiative transfer stabilizes protostellar disks and suppresses small-scale fragmentation that would otherwise result in brown dwarfs. However, we also find that the vast majority of the heating comes from protostellar radiation, rather than from compression or viscous dissipation. Thus calculations that neglect radiative feedback from protostars, either because they use approximations for radiative effects that are incapable of including it (e.g., \citealt{bate03, clark08}) or because they explicitly omit it (e.g., \citealt{bate09b}), significantly underestimate the gas temperature and thus the strength of this effect. More generally, we find that, due to significant variations in the temperature with time, no scheme that does not explicitly include time-dependent protostellar heating is able to adequately follow fragmentation on scales smaller than $\sim$0.05 pc. We find that due to the increased thermal support in the protostellar disks, accretion is smoother and less variable with radiative feedback.
However, we show that for low-mass star formation the heating is local and limited to the volume within the protostellar cores. As a result, pre-existing sources do not inhibit turbulent fragmentation elsewhere in the domain. We find that the mean accretion rate increases with final stellar mass so that the star formation time is only a weak function of mass. This is inconsistent with the standard \citet{shu77} picture, but it is qualitatively consistent with the \citet{mckee03} result for the turbulent core model, where the star formation time varies as the final stellar mass to the $1/4$ power. The magnitude and variability of protostellar luminosity are of significant observational interest. If accretion contributes a substantial portion of the total luminosity emitted by young protostars, then upper limits for protostellar accretion rates can be inferred directly from the observed luminosity. This may give clues about the formation timescale and the accretion process while the protostars are deeply embedded and cannot be directly imaged. In a future paper we will examine the ``luminosity problem'' and compare with embedded Class 0 and Class I protostars. Our larger NRT and RT simulations are performed at a maximum resolution of 32 AU, so it is possible that a few of our cores form stars that otherwise would have become thermally supported or turbulently disrupted in a higher resolution calculation. Thus, higher resolution calculations would be desirable for further work. Although we find that the inclusion of radiative transfer has an impact on fragmentation and accretion similar to that of magnetic fields, simulations examining the interplay of magnetic fields and radiative transfer are important. To assess the accuracy of our radiative transfer approximations, further simulations with multi-frequency treatment in multi-dimensions with improved dust modeling are also necessary.
\acknowledgments{ The authors acknowledge helpful discussions with Andrew Cunningham and Kaitlin Kratter. Support for this work was provided by the US Department of Energy at the Lawrence Livermore National Laboratory under contract B-542762 (S.~S.~R.~O.) and DE-AC52-07NA27344 (R.~I.~K.) and the NSF grant PHY05-51164 (C.~F.~M. \& S.~S.~R.~O.); the NSF grant AST-0807739 and NASA through the Spitzer Space Telescope Theoretical Research Program, provided by a contract issued by the Jet Propulsion Laboratory (M.~R.~K.); the NSF grant AST-0606831 (C.~F.~M. \& S.~S.~R.~O.). Computational resources were provided by the NSF San Diego Supercomputing Center through LRAC grant UCB267; and the National Energy Research Scientific Computer Center, supported by the Office of Science of the US Department of Energy under contract DE-AC03-76SF00098, through ERCAP grant 80325.} \begin{appendix} \section{A. The Star Particle Algorithm} \label{starpart1} In this appendix we describe the details of the ``star particle'' algorithm we use to represent protostars. Appendix \ref{starpart1} describes how the star particle algorithm functions within the larger ORION code, while Appendix \ref{starpart2} describes the protostellar evolution code that we use to determine the luminosities of our stars. This division is useful because, from the standpoint of the ORION code, a star particle is characterized by only four quantities: mass, position, momentum, and luminosity. The luminosity is determined by the protostellar evolution model outlined in Appendix \ref{starpart2} that is attached to each star particle, but the only output of this model that is visible to the remainder of the code is luminosity.
In a calculation using star particles, we add a set of additional steps to every update cycle on the finest AMR level, so that the cycle becomes \begin{enumerate} \item Hydrodynamic update for gas \label{hydro} \item Gravity update for gas \label{grav} \item Radiation update, including stellar luminosity \label{radgas} \item Star particle update \begin{enumerate} \item Sink particle update \label{sinkupdate} \item Stellar model update \label{starupdate} \end{enumerate} \end{enumerate} Steps (\ref{hydro}) -- (\ref{radgas}) are the ordinary parts of the update that we would perform even if no star particles were present. In steps (\ref{hydro}) and (\ref{grav}) star particles have no direct effect, since they do not interact hydrodynamically, and we handle their gravitational interactions with the gas in an operator-split manner in step (\ref{sinkupdate}). In step (\ref{radgas}), star particles act as sources of luminosity, as indicated in equation (\ref{radenergy}). We implement this numerically as follows: let $L_n$ and $\mathbf{x}_n$ be the luminosity and position of star particle $n$. Our code uses the \citet{krumholzkmb07} radiation-hydrodynamic algorithm, in which we split the radiation quantities into those to be handled explicitly and those to be handled implicitly.
We therefore write the evolution equation to be solved during the radiation step as \begin{equation} \frac{\partial \mathbf{q}}{\partial t} = \mathbf{f}_{\rm e-rad} + \mathbf{f}_{\rm i-rad}, \end{equation} where $\mathbf{q}=(\rho, \rho\mathbf{v}, \rho e, E)$ is the state vector describing the gas and radiation in a cell, the explicit update vector $\mathbf{f}_{\rm e-rad}$ is the same as in the standard \citeauthor{krumholzkmb07}\ algorithm (their equation 52)\footnote{Note that our notation here differs slightly from that of \citet{krumholzkmb07}, in that we follow the standard astrophysics convention in which $\kappa$ is the specific opacity, while \citet{krumholzkmb07} follow the radiation-hydrodynamic convention in which $\kappa$ is the total opacity. As a result, any opacity $\kappa$ that appears in the \citet{krumholzkmb07} equations is replaced by $\kappa\rho$ here.}, and the implicit update is modified to be \begin{equation} \mathbf{f}_{\rm i-rad} = \left( \begin{array}{c} 0 \\ 0 \\ -\kappa_{\rm P} \rho (4 \pi B - cE) \\ \nabla\cdot \left(\frac{c\lambda}{\kappa_{\rm R}\rho}\nabla E\right) + \kappa_{\rm P}\rho(4\pi B - c E) + \sum_n L_n W(\mathbf{x}-\mathbf{x}_n). \end{array} \right). \end{equation} Here $W(\mathbf{x}-\mathbf{x}_n)$ is a weighting function that depends on the distance between the location of the cell center $\mathbf{x}$ and the location of the star $\mathbf{x}_n$. The weighting function has the property that the sum of $W(\mathbf{x}-\mathbf{x}_n)$ over all cells is unity, and that $W(\mathbf{x}-\mathbf{x}_n) = 0$ for $|\mathbf{x}-\mathbf{x}_n|$ larger than some specified value. For the computations we present in this paper we use the same weighting function as we use for accretion (equation (13) of \citealt{krumholz04}).
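For illustration only, a weighting function with the two required properties can be sketched as follows; this truncated Gaussian is a hypothetical stand-in, not the kernel actually used in the code:

```python
import math

def luminosity_weights(cell_centers, x_star, r_max, sigma):
    """Hypothetical luminosity weighting function: a Gaussian truncated at
    r_max and normalized so the weights sum to unity, the two properties
    required of W in the text. The production code instead uses the
    accretion kernel of Krumholz et al. (2004), equation (13)."""
    raw = []
    for x in cell_centers:
        r = math.dist(x, x_star)
        raw.append(math.exp(-r**2 / (2.0 * sigma**2)) if r <= r_max else 0.0)
    total = sum(raw)
    return [w / total for w in raw]

# Toy example: a 1D row of cells one unit apart around a star at the origin
cells = [(float(i), 0.0, 0.0) for i in range(-4, 5)]
W = luminosity_weights(cells, (0.0, 0.0, 0.0), r_max=2.0, sigma=1.0)
print(f"{sum(W):.6f}")  # normalized to unity; W = 0 beyond r_max
```

Multiplying a star's luminosity $L_n$ by such weights distributes the source term among nearby cells while conserving the total injected energy exactly.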
However, we have experimented with other weighting functions, including truncated Gaussians, top-hats, and delta functions, and we find that the choice makes very little difference because radiation injected into a small volume of the computational grid almost immediately relaxes to a configuration determined by diffusion. With this modification to $\mathbf{f}_{\rm i-rad}$, our update procedure is the same as described in \citet{krumholzkmb07}. Step (\ref{sinkupdate}) is the ordinary sink particle method of \citet{krumholz04}, so we only summarize it here and refer readers to that paper for a detailed description and the results of numerous tests. We first create new particles in any cell whose density exceeds the Jeans density on the maximum AMR level (i.e.,\ where equation (\ref{jeans}) is not satisfied.) Next we merge star particles whose accretion zones, defined to be 4 cells in radius, overlap. This ensures that we combine multiple sink particles created in adjacent cells that simultaneously exceed the Jeans density, or multiple sink particles created in the same cell during consecutive time steps. Then we transfer mass from the computational grid onto existing sink particles. Accretion happens within a radius of 4 cells around each sink particle. The amount of mass that a sink particle accretes is determined by fitting the flow around it to a Bondi flow, reduced to account for an angular momentum barrier that would prevent material from reaching the computational cell in which the sink particle is located. The division of mass accreted among cells inside the 4-cell accretion zone is determined by a weighting function. The accretion process leaves the radial velocity, angular momentum, and specific internal energy of the gas on the computational grid unchanged (in the frame co-moving with the sink particle), and it conserves mass, momentum, and energy to machine precision. 
Next we calculate the gravitational force between every sink particle and the gas in every cell using a direct $1/r^2$ force computation (since the number of particles is small), and modify the momenta of the sink particles and the momenta and energies of the gas cells appropriately. Finally we update the positions and velocities of each sink particle under their mutual gravitational interaction, using a simple N-body code. Forces are again computed via direct $1/r^2$ sums. Once the sink particle update is complete, we proceed to update the protostellar evolution model that is attached to each star particle. \section{B. Protostellar Evolution Model} \label{starpart2} Step (\ref{starupdate}) of the update cycle described in Appendix \ref{starpart1} involves advancing the internal state of each star particle. The primary purpose of this procedure is to determine the stellar luminosity for use in step (\ref{radgas}). We determine the luminosity using a simple one-zone protostellar evolution model introduced by \citet{nakano95} and extended by \citet{nakano00} and \citet{tan04}. The model has been calibrated to match the detailed numerical calculations of \citet{palla91, palla92}, and it agrees to $\sim 10\%$. The numerical parameters we use for the calculations in this paper are based on this calibration, but we note that after we began this work \citet{hosokawa09a} published calculations suggesting that slightly different values would improve the model's accuracy. We recommend that \citeauthor{hosokawa09a}'s values be used in future work. Before diving into the details of the numerical implementation, it is helpful to give an overview of the model. The model essentially treats the star as a polytrope whose contraction is governed by energy conservation. 
The star evolves through a series of distinct phases, which we refer to as ``pre-collapse'', ``no burning'', ``core deuterium burning at fixed $T_c$'', ``core deuterium burning at variable $T_c$'', ``shell deuterium burning'', and ``main sequence''. The ``pre-collapse'' phase corresponds to the very low mass stage ($m \la 0.01$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$) when the collapsed mass is not sufficient to dissociate H$_2$ and produce second collapse to stellar densities \citep{masunaga00}. During this phase the object is not yet a star. ``No burning'' corresponds to the phase when the object has collapsed to stellar densities, but has not yet reached the core temperature $T_c\approx 1.5\times 10^6$ K required to ignite deuterium, and its radiation is powered purely by gravitational contraction. During this phase the star is imperfectly convective. The next stage, ``core deuterium burning at fixed $T_c$'', begins when the star ignites deuterium. While the deuterium supply lasts, core deuterium burning acts as a thermostat that keeps the core temperature fixed and the star fully convective. Once the deuterium is exhausted, the star begins the ``core deuterium burning at variable $T_c$'' phase, during which the core temperature continues to rise. The star remains fully convective, and new deuterium arriving on the star is rapidly dragged down to the center and burned. The rising core temperature reduces the star's opacity, and eventually this shuts off convection in the stellar core, beginning the ``shell deuterium burning'' phase. At the start of this phase the star changes to a radiative structure and its radius swells; deuterium burning continues in a shell around the radiative core. Finally the star contracts enough for its core temperature to reach $T_c\approx 10^7$ K, at which point it ignites hydrogen and the star stabilizes on the main sequence, the final evolutionary phase in our model.
In the following sections, we give the details of our numerical implementation of this model. \subsection{Initialization and Update Cycle} When a star is first created, its mass is always below $0.01$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ and thus in the ``pre-collapse" state. We do not initialize our protostellar evolution model until the mass exceeds $0.01$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ -- prior to this point star particles are characterized only by a mass and have zero luminosity. On the first time step when the mass exceeds $0.01$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$, we change the state to ``no burning". Thereafter each star particle is characterized by a radius $r$, a polytropic index $n$, and a mass of gas from which deuterium has not yet been burned, $m_d$. We initialize these quantities to \begin{eqnarray} r & = & 2.5 R_{\odot} \left(\frac{\Delta m/\Delta t}{10^{-5}\,M_{\odot}\mbox{ yr}^{-1}}\right)^{0.2} \\ n & = & 5 - 3\left[1.475 + 0.07\log_{10}\left(\frac{\Delta m/\Delta t}{M_{\odot}\mbox{ yr}^{-1}}\right)\right]^{-1} \\ m_d & = & m, \end{eqnarray} where $\Delta t$ and $\Delta m$ are the size of the time step when the star passes $0.01$ $\ifmmode {\rm M}_{\mathord\odot}\else $M_{\mathord\odot}$\fi$ and the amount of mass accreted during it. If this produces a value of $n$ below 1.5 or greater than 3.0, we set $n=1.5$ or $n=3.0$. These fitting formulae are purely empirical calibrations designed to match \citet{palla91, palla92}. The choice of $n$ intermediate between $1.5$ and $3.0$ corresponds to imperfect convection. Once a star particle has been initialized and its state set to ``no burning", during each time step we perform the following operations: \begin{enumerate} \item Update the radius and the deuterium mass \item Compute the new luminosity \item Advance to the next evolutionary phase \end{enumerate} We describe each of these operations below. 
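As a concrete illustration, the initialization formulae above can be transcribed directly into code. This is a minimal sketch (the function name and the clamping helper are ours); the accretion rate is in $M_\odot\,{\rm yr}^{-1}$ and the radius in $R_\odot$:

```python
import math

def init_protostar(mdot):
    """Initialize the one-zone protostar state at the moment the mass
    first exceeds 0.01 Msun, from the accretion rate mdot [Msun/yr].
    Returns (radius [Rsun], polytropic index n)."""
    # r = 2.5 Rsun * (mdot / 1e-5 Msun/yr)^0.2
    r = 2.5 * (mdot / 1e-5) ** 0.2
    # n = 5 - 3 * [1.475 + 0.07 * log10(mdot)]^(-1)
    n = 5.0 - 3.0 / (1.475 + 0.07 * math.log10(mdot))
    # clamp n to the allowed range [1.5, 3.0]
    return r, min(max(n, 1.5), 3.0)

r, n = init_protostar(1e-5)  # typical low-mass rate: r = 2.5 Rsun, n ~ 2.33
```

The clamp implements the rule that $n$ outside $[1.5, 3.0]$ is reset to the nearest bound; $m_d$ is simply initialized to the current mass and needs no code.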
\subsection{Evolution of the Radius and Deuterium Mass} Once a star reaches the ``main sequence" evolutionary phase, we simply set its radius equal to the radius of a zero-age main sequence star of the same mass, which we compute using the fitting formula of \citet{tout96} for Solar metallicity. Before this point we treat the star as an accreting polytrope of fixed index. For such an object, in a time step of size $\Delta t$ during which the star gains a mass $\Delta m$, the radius changes by an amount $\Delta r$ given by a discretized version of equation (5.8) of \citet{nakano00}: \begin{eqnarray} \label{radevolution} \Delta r &=& 2 \frac{\Delta m}{m} \left(1 - \frac{1-f_k}{a_g \beta} + \frac{1}{2}\frac{d\log\beta}{d\log m}\right) r - 2 \frac{\Delta t}{a_g \beta} \left(\frac{r}{G m^2}\right) \left(L_{\rm int} + L_I - L_D\right)r \end{eqnarray} Here $a_g = 3/(5-n)$ is the coefficient describing the gravitational binding energy of a polytrope, $\beta$ is the mean ratio of the gas pressure to the total gas plus radiation pressure in the star, $f_k$ is the fraction of the kinetic energy of the infalling material that is radiated away in the inner accretion disk before it reaches the stellar surface, $L_{\rm int}$ is the luminosity leaving the stellar interior, $L_I$ is the rate at which energy must be supplied to dissociate and ionize the incoming gas, and $L_D$ is the rate at which energy is supplied by deuterium burning. In this equation we adopt $f_k=0.5$, the standard value for an $\alpha$ disk. For $\beta$, the low-mass stars we discuss in this paper have negligible radiation pressure and so $\beta=1$ to very good approximation. In general, however, we determine $\beta$ and $d\log\beta/d\log m$ by pre-computing a table of $\beta$ values for polytropes as a function of polytropic index $n$ and mass $m$, and then interpolating within that table. 
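In code, the discretized update of equation (\ref{radevolution}) is one term for accretion and one for radiative losses. The sketch below (in cgs; variable names are ours) hard-wires the low-mass limit $\beta=1$, $d\log\beta/d\log m=0$ as defaults, as justified above:

```python
# Physical constants in cgs, and solar units for convenience.
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
Msun = 1.989e33       # [g]
Rsun = 6.957e10       # [cm]
Lsun = 3.839e33       # [erg/s]

def delta_r(m, r, dm, dt, L_int, L_I, L_D,
            n=1.5, fk=0.5, beta=1.0, dlnbeta_dlnm=0.0):
    """Discretized radius update for an accreting polytrope
    (defaults give the low-mass limit beta = 1, dlog(beta)/dlog(m) = 0).
    All inputs in cgs; returns dr [cm]."""
    a_g = 3.0 / (5.0 - n)  # binding-energy coefficient of the polytrope
    # accretion term: response of the radius to the added mass
    dr_acc = 2.0 * (dm / m) * (1.0 - (1.0 - fk) / (a_g * beta)
                               + 0.5 * dlnbeta_dlnm) * r
    # radiative term: contraction driven by the net luminosity
    dr_rad = 2.0 * (dt / (a_g * beta)) * (r / (G * m * m)) \
        * (L_int + L_I - L_D) * r
    return dr_acc - dr_rad

# one year with net radiative losses and no accretion: dr < 0 (contraction)
dr = delta_r(Msun, 3 * Rsun, 0.0, 3.156e7, 10 * Lsun, 0.0, 0.0)
```

Note the signs: deuterium burning ($L_D$) opposes contraction, while interior radiation and the ionization cost ($L_{\rm int} + L_I$) drive it.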
For $n=3$ interpolation is unnecessary and we instead obtain $\beta$ by solving the Eddington quartic \begin{equation} P_c^3 = \frac{3}{a} \left(\frac{k_B}{\mu m_{\rm H}}\right)^4 \frac{1-\beta}{\beta^4} \rho_c^4, \end{equation} where $P_c$ and $\rho_c$ are the central pressure and density of the polytrope (which are also stored in a pre-computed table as a function of $n$), and $\mu=0.613$ is the mean molecular weight for fully ionized gas of Solar composition. For the luminosity emanating from the stellar interior we adopt \begin{equation} L_{\rm int} = \max\left(L_{\rm ms}, L_{\rm H}\right), \end{equation} where $L_{\rm ms}$ is the luminosity of a main sequence star of mass $m$, which we compute using the fitting formula of \citet{tout96} for Solar metallicity, and $L_{\rm H} = 4\pi r^2 \sigma T_{\rm H}^4$ is the luminosity of a star on the Hayashi track, with a surface temperature $T_{\rm H}=3000$ K. For the luminosity required to ionize and dissociate the incoming material we use \begin{equation} L_I = 2.5~L_{\odot} \frac{(\Delta m/\Delta t)}{10^{-5}\,M_{\odot}\mbox{ yr}^{-1}}, \end{equation} which corresponds to assuming that this process requires 16.0 eV per hydrogen nucleus. The deuterium luminosity depends on the evolutionary stage. In the ``pre-collapse" and ``no burning" phases, $L_D=0$. In the ``core burning at fixed $T_c$" phase, we set the deuterium luminosity to the value required to keep the central temperature at a constant value $T_c=1.5\times 10^6$ K. This is (equation (5.13) of \citealt{nakano00}) \begin{equation} L_D = L_{\rm int} + L_I + \frac{G m}{r}\frac{\Delta m}{\Delta t} \left\{1-f_k - \frac{a_g \beta}{2}\left[1 + \frac{d\log(\beta/\beta_c)}{d\log m}\right]\right\}, \end{equation} where $\beta_c = \rho_c k_B T_c / (\mu m_{\rm H} P_c)$ is the ratio of gas pressure to total pressure at the center of the polytrope. 
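For the $n=3$ case, the Eddington quartic can be solved to machine precision by simple bisection, since $(1-\beta)/\beta^4$ is monotonically decreasing on $(0,1]$ so the root is unique. The sketch below is illustrative (cgs units; the central values in the example are arbitrary, not taken from any model in this paper):

```python
# Physical constants in cgs.
a_rad = 7.5657e-15   # radiation constant [erg cm^-3 K^-4]
k_B   = 1.3807e-16   # Boltzmann constant [erg K^-1]
m_H   = 1.6726e-24   # hydrogen mass [g]
mu    = 0.613        # mean molecular weight, ionized solar composition

def solve_beta(P_c, rho_c, tol=1e-12):
    """Solve P_c^3 = (3/a)(k_B/(mu m_H))^4 (1-beta)/beta^4 rho_c^4
    for beta by bisection on (0, 1]."""
    C = (3.0 / a_rad) * (k_B / (mu * m_H)) ** 4 * rho_c ** 4
    f = lambda b: C * (1.0 - b) / b ** 4 - P_c ** 3
    lo, hi = 1e-6, 1.0           # f(lo) > 0 >= f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

beta = solve_beta(1e17, 10.0)    # beta ~ 0.75 for these illustrative values
```

In practice $d\log\beta/d\log m$ is then obtained by finite-differencing such solutions over the pre-computed table.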
In all subsequent phases, deuterium is burned as quickly as it is accreted, so we take \begin{equation} L_D = 15~L_{\odot} \frac{(\Delta m/\Delta t)}{10^{-5}\,M_{\odot}\mbox{ yr}^{-1}}, \end{equation} which corresponds to assuming an energy release of roughly 100 eV per hydrogen nucleus of the accreted gas, appropriate for deuterium burning in a gas where the deuterium abundance is $\mbox{D}/\mbox{H}=2.5\times 10^{-5}$. Finally, we update the mass of material that still contains deuterium based on $L_D$. The change in unburned mass is \begin{equation} \label{deutupdate} \Delta m_d = \Delta m - 10^{-5}M_{\odot} \left(\frac{L_D}{15~L_{\odot}}\right) \left(\frac{\Delta t}{\mbox{yr}}\right). \end{equation} \subsection{Computing the Luminosity} From the standpoint of the rest of the code, the only quantity of any consequence is the luminosity, since this is what enters as a source term in step (\ref{radgas}). The luminosity radiated away from the star consists of three parts: \begin{equation} L = L_{\rm int} + L_{\rm acc} + L_{\rm disk}, \end{equation} where $L_{\rm int}$ is the luminosity leaving the stellar interior as defined above, $L_{\rm acc}$ is the luminosity radiated outward at the accretion shock, and $L_{\rm disk}$ is the luminosity released by material in traversing the inner disk. These in turn are given by \begin{eqnarray} L_{\rm acc} & = & f_{\rm acc} f_k \frac{G m \Delta m/\Delta t}{r} \\ L_{\rm disk} & = & (1 - f_k) \frac{G m \Delta m/\Delta t}{r}, \end{eqnarray} where $f_k=0.5$ as defined above, and $f_{\rm acc}$ is the fraction of the accretion power that is radiated away as light rather than being used to drive a wind. Although we do not explicitly include a protostellar outflow in this calculation, we take $f_{\rm acc} = 0.5$ so that we do not overestimate the accretion luminosity by assuming that all the accretion power comes out radiatively rather than mechanically. Thus, we assume a total radiative efficiency of 75\%.
Although this value is consistent with x-wind models \citep{ostriker95}, neither x-wind nor disk-wind models definitively constrain the total conversion of accretion energy into radiation, and we treat this as a free parameter. \subsection{Advancing the Evolutionary State} The final pieces of our protostellar evolution model are the rules for determining when to change the evolutionary state, and for determining what happens at such a change. Our rules are as follows: if the current state is ``no burning", then at the end of each time step we compute the central temperature by numerically solving the equation \begin{equation} P_c = \frac{\rho_c k_B T_c}{\mu m_{\rm H}} + \frac{1}{3} a T_c^4, \end{equation} where $P_c$ and $\rho_c$ are determined from the current mass, radius, and polytropic index. If $T_c \geq 1.5\times 10^6$ K, we change the evolutionary state to ``core burning at fixed $T_c$" and we change the polytropic index to $n=1.5$. If the current evolutionary state is ``core burning at fixed $T_c$", then we check whether $m_d \geq 0$ after we update the unburned deuterium mass with equation (\ref{deutupdate}). If not, then the deuterium has been exhausted and we change the state to ``core burning at variable $T_c$". If the current state is ``core burning at variable $T_c$", we decide whether a radiative zone has formed by comparing the luminosity being generated by deuterium burning, $L_D$, to the luminosity of a zero-age main sequence star of the same mass, $L_{\rm ms}$. We switch the state to ``shell deuterium burning" when $L_D/L_{\rm ms} > f_{\rm rad} = 0.33$. At this point we also change the polytropic index to $n=3$ and increase the radius by a factor of 2.1, representing a swelling of the star due to formation of the radiative barrier. Finally, if the state is ``shell deuterium burning", we compare the radius $r$ at the end of every time step to the radius of a zero-age main sequence star of the same mass.
Once the radius reaches the main sequence radius, we switch the state to ``main sequence", our final evolutionary state. \end{appendix} \bibliographystyle{apj}
\section{Introduction} Gamma-ray bursts (GRB) are cosmic gamma-ray emissions, distributed isotropically on the sky and originating at extragalactic distances. They are inferred to have energies of the order of $10^{53}$ ergs. If the beaming and the $\gamma$-ray production efficiency of the GRB are quite high, then the central engine powering such conversion can release enough energy to drive GRB at cosmological distances. There are various models describing the central engine of GRB, mergers of two neutron stars (NS) or of a NS and a black hole in a binary \cite{pacz} being among the popular ones. However, recent calculations by Janka \& Ruffert \cite{janka} have shown that the energy released in such neutrino-antineutrino annihilation is much too small to account for GRB thought to occur at large cosmological distances. Therefore, the central engine which powers such huge energies in GRB still remains unclear. On the other hand, the conversion of a NS to a quark star (QS) liberates a huge amount of energy, simply because the star mass changes during the conversion: the mass difference between the NS and the QS manifests itself in the form of released energy. The first calculation of the energy release in such a conversion was proposed by Alcock et al. \cite{alcock} and Olinto \cite{olinto}. More detailed calculations for the conversion of a NS to a strange star or hybrid star (SS/HS) were done by Cheng \& Dai \cite{cheng}, Ma \& Xie \cite{ma} and subsequently by many others \cite{bhat1,bhat2,drago,neibergal,herzog,mallick1,pagliara}. Almost all of them found the energy released to be greater than $10^{51}$ ergs, and connected it with the observed gamma-ray bursts. However, newer observations \cite{kulkarni98,kulkarni99} reveal GRB to occur at cosmological distances, for which energies of the order of $10^{51}$ ergs are too low. A systematic calculation by Bombaci \& Datta \cite{bombaci}, using various EOS, estimated the energy released during the conversion to be of the order of $10^{53}$ ergs.
Their main assumption was that the initial NS baryonic mass (BM) and the final QS BM are the same, i.e., that the BM is conserved during the conversion. Their estimate of the energy release is large enough to power GRB at such distances and was in agreement with the observed GRB. However, during the conversion there can be ejection of matter from the outer layers of the star due to several shock bounces. The ejected matter is usually assumed to be small, but Fryer \& Woosley \cite{woosley} showed that it can be as large as $0.2$ solar mass. In such a scenario the assumption of baryonic mass conservation no longer holds. The above calculation of the energy release relies on the assumption that some compact stars are really exotic, i.e., that quark stars (QS, either SS or HS) do exist. However, recent measurements \cite{Demorest10,new2m} found pulsars with very high masses, $M\sim 2 $M$_\odot$. Such massive pulsars have, for the first time, imposed strict constraints on the equations of state (EOS) of matter describing compact stars. The EOS of quark matter usually involves strangeness \cite{itoh,bodmer,witten} and therefore provides an additional degree of freedom. This extra degree of freedom softens the EOS, that is, it reduces the pressure at a given energy density. As a result, it becomes difficult for a quark EOS to generate stars with masses satisfying the limit set by the newly measured heavy pulsars. However, new calculations have found that effects due to strong interaction, such as one-gluon exchange or color superconductivity, can make the EOS stiffer and increase the maximum mass limit of SS and HS \cite{Ruester04,Horvath04,Alford07,Fischer10,Kurkela10a, Kurkela10b}. Ozel \cite{Ozel10} and Lattimer \cite{Lattimer10} were the first to study the implications of the new mass limit for SS and HS within the MIT bag model.
Therefore, the conversion of a NS to a QS (SS/HS) is still a viable scenario of astrophysical phase transition (PT). The only other astrophysical events which come close to the energy budget of GRB are the recently observed giant flares. Three giant flares, SGR 0526-66, SGR 1900+14 and SGR 1806-20 \cite{flare}, have been detected so far. The huge amount of energy in these flares can be explained by the presence of a strong surface magnetic field, whose strength is estimated to be larger than $10^{14}$G; such stars are termed magnetars. Simple calculations suggest that the magnetic field in the star interior can be a few orders of magnitude higher ($\sim 10^{18}$G). Theoretical calculations also suggest that such a strong magnetic field can affect the gross structure of a NS. It affects the structure either through the metric describing the star \cite{bocquet,cardall} or by modifying the EOS of the matter through the Landau quantization of charged particles \cite{chakrabarty97,yuan,broderick,chen,wei}. As the EOS gets modified, the mass of a magnetar differs from that of a normal pulsar. This would eventually affect the energy released during the conversion of a neutron magnetar (NM) to a quark magnetar (QM). Therefore, the energy released during the PT of normal pulsars would be quite different from that released during the PT of magnetars. In the present work, we calculate the energy released during the conversion of a NS to a SS/HS. Previous calculations did not include matter ejection during the conversion process. In our calculation we assume baryonic mass conservation to hold only after taking into account the mass of the ejected matter. Finally, we also study the energy released during the conversion of a NM to a strange or hybrid magnetar (SM/HM). In the next section (Section II), we outline the calculation of the energy release. In Section III, we discuss the EOS. We discuss the effect of the magnetic field on the EOS in Section IV, and the HS in Section V.
The results are discussed in Section VI. Finally, in Section VII, we summarize our results and draw conclusions from them. \section{Energy released during conversion} Earlier calculations of the energy released during the phase transition of a NS to a QS were based on the idea that the baryon number is conserved during the phase transition; therefore, the baryonic mass of the star remains the same after the conversion. However, there is a change in the proper mass and the gravitational mass. The energy released during the conversion is primarily due to the difference in the proper and gravitational masses of the initial NS and final QS. However, Fryer \& Woosley \cite{woosley} showed that there may be ejection of matter during the conversion from the outer layers of the star due to several shock bounces. Their calculation showed that the ejected matter can be as much as $0.2$ solar mass, in which case baryon number conservation is not maintained. In this work we assume that as the PT occurs, the transition front travels outwards, converting hadronic matter to quark matter. During the PT the star suffers several shock bounces and matter is ejected primarily from the outer layers of the NS. Matter is ejected before the phase transition front reaches the outer layers; therefore, the ejected matter is normal hadronic matter. Suppose, for example, that the phase transition starts in a NS of baryonic mass $2$ solar masses. The phase transition front starts travelling outwards, towards the surface of the NS, converting hadronic matter to quark matter. During this time several shock bounces happen and matter is ejected from the outer layers of the star. The ejected matter is still hadronic matter (mostly from the outer nuclear region). We assume the ejected matter has a baryonic mass of $0.2$ solar mass. Therefore, the baryonic mass of the NS is now $1.8$ solar masses, and the baryonic mass of the final QS is also $1.8$ solar masses.
The energy released during the conversion is due to the difference in the proper and gravitational masses of the intermediate NS and final QS (which have the same baryonic mass of $1.8$ solar masses). The ejected matter does not play any role in the energy release because it leaves the star before it suffers any phase transition. The total energy released in the conversion of a NS to a QS is the difference between the total binding energy of the QS (BE(QS)) and the total binding energy of the NS (BE(NS)) \begin{equation} E_T = BE(QS) - BE(NS). \end{equation} The total conversion energy can then be written as the sum of the gravitational and internal energy changes \begin{equation} E_T = E_I + E_G, \end{equation} where the gravitational and internal energies are given by \begin{eqnarray} E_I = BE_I(QS) - BE_I(NS) \\ \nonumber E_G = BE_G(QS) - BE_G(NS). \end{eqnarray} The total, internal and gravitational conversion energies can be written in terms of the respective gravitational and proper (rest) masses \begin{eqnarray} E_T = [M_G(NS) - M_G(QS)]c^2, \\ \nonumber E_I = [M_P(NS) - M_P(QS)]c^2, \\ \nonumber E_G = [M_P(QS) - M_G(QS) - M_P(NS) + M_G(NS)]c^2, \end{eqnarray} where $M_G$ is the gravitational mass and $M_P$ is the proper (rest) mass of a star. Keeping the baryonic mass of the star fixed, we calculate the proper and gravitational masses of the star. The baryonic mass, proper mass and gravitational mass of a star can be obtained by solving the structure equations for non-rotating compact stars, the Tolman-Oppenheimer-Volkoff equations \cite{shapiro}.
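The energy bookkeeping above reduces to a few mass differences. As a numerical check, the following sketch (the helper name is ours) reproduces the TM1 $\rightarrow$ MIT150 row of Table \ref{table1}, with masses in $M_\odot$ and energies in erg:

```python
MSUN_C2 = 1.989e33 * (2.998e10) ** 2   # solar rest-mass energy [erg]

def conversion_energies(MG_ns, MP_ns, MG_qs, MP_qs):
    """Total, internal and gravitational energy released (in erg) in
    the NS -> QS conversion at fixed baryonic mass; masses in Msun."""
    E_T = (MG_ns - MG_qs) * MSUN_C2
    E_I = (MP_ns - MP_qs) * MSUN_C2
    E_G = E_T - E_I   # = [M_P(QS) - M_G(QS) - M_P(NS) + M_G(NS)] c^2
    return E_T, E_I, E_G

# TM1 -> MIT150 row of Table 1:
# M_G(NS) = 1.546, M_P(NS) = 1.781, M_G(SS) = 1.41, M_P(SS) = 1.677
E_T, E_I, E_G = conversion_energies(1.546, 1.781, 1.41, 1.677)
# E_T ~ 2.43e53 erg, E_I ~ 1.86e53 erg, E_G ~ 0.57e53 erg
```

The hard part of the calculation is, of course, obtaining the four masses themselves from the TOV integrations at fixed baryonic mass.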
The baryonic mass, gravitational mass and proper mass are given by \begin{eqnarray} M_B= \int_0^R dr 4\pi r^2 \Big[ 1 - {{2Gm(r)}\over{c^2 r}}\Big]^{-1/2} n(r)m_N, \\ M_G = \int_0^R 4 \pi r^2 \epsilon(r) dr, \\ M_P = \int_0^R dr 4\pi r^2 \Big[ 1 - {{2Gm(r)}\over{c^2 r}}\Big]^{-1/2} \epsilon(r), \end{eqnarray} where $n(r)$ is the number density, $m_N$ is the neutron mass, $\epsilon(r)$ is the total mass-energy density and $m(r)$ is the gravitational mass enclosed within a spherical volume of radius $r$. \section{Hadronic and quark EOS} We use the non-linear relativistic mean field (RMF) model with hyperons to describe the hadronic EOS. In this model, the baryons interact with mean meson fields \cite{boguta,glen91,sugahara,sghosh, schaffner}. The Lagrangian density containing the nucleons, the hyperons ($\Lambda,\Sigma^{0,\pm},\Xi^{0,-}$) and the leptons can be written in the form \cite{ritam2012,ritam2012b} \begin{eqnarray} \label{baryon-lag} {\cal L}_H & = & \sum_{b} \bar{\psi}_{b}[\gamma_{\mu}(i\partial^{\mu} - g_{\omega b}\omega^{\mu} - \frac{1}{2} g_{\rho b}\vec \tau . \vec \rho^{\mu}) \nonumber \\ & - & \left( m_{b} - g_{\sigma b}\sigma \right)]\psi_{b} + \frac{1}{2}({\partial_\mu \sigma \partial^\mu \sigma - m_{\sigma}^2 \sigma^2 } ) \nonumber \\ & - & \frac{1}{4} \omega_{\mu \nu}\omega^{\mu \nu}+ \frac{1}{2} m_{\omega}^2 \omega_\mu \omega^\mu - \frac{1}{4} \vec \rho_{\mu \nu}.\vec \rho^{\mu \nu} \nonumber \\ & + & \frac{1}{2} m_\rho^2 \vec \rho_{\mu}. \vec \rho^{\mu} -\frac{1}{3}bm_{n}(g_{\sigma}\sigma)^{3}- \frac{1}{4}c(g_{\sigma}\sigma)^{4} +\frac{1}{4}d(\omega_{\mu}\omega^{\mu})^2 \nonumber \\ & + & \sum_{L} \bar{\psi}_{L} [ i \gamma_{\mu} \partial^{\mu} - m_{L} ]\psi_{L}. \end{eqnarray} The leptons $L$ are assumed to be non-interacting, whereas the baryons are coupled to the meson fields (scalar $\sigma$, isoscalar-vector $\omega_\mu$ and isovector-vector $\rho_\mu$). The constant parameters of the model are determined by fitting the nuclear matter saturation properties.
The model, however, fails to explain the experimentally observed strong $\Lambda \Lambda$ attraction. Mishustin \& Schaffner \cite{schaffner} corrected the model by adding two mesons, the isoscalar-scalar $\sigma^*$ and the isoscalar-vector $\phi$, which couple only to the hyperons. In our calculation we have used two different parameter sets: the one with the relatively softer EOS is known as the TM1 parametrization, and the one which generates a much stiffer EOS is the PL-Z parametrization. The details of the parametrizations can be found in Refs. \cite{schaffner,ritam2012b}. Maintaining charge neutrality and beta equilibrium, the energy density and pressure are given by \cite{ritam2012b} \begin{eqnarray} \varepsilon & = & \frac{1}{2} m_{\omega}^2 \omega_0^2 + \frac{1}{2} m_{\rho}^2 \rho_0^2 + \frac{1}{2} m_{\sigma}^2 \sigma^2 + \frac{1}{2} m_{\sigma^*}^2 \sigma^{*2} + \frac{1}{2} m_{\phi}^2 \phi_0^2 +\frac{3}{4}d\omega_0^4+ U(\sigma) \nonumber \\ & & \mbox{} + \sum_b \varepsilon_b + \sum_l \varepsilon_l \,, \\ P & = & \sum_i \mu_i n_i - \varepsilon . \end{eqnarray} We employ the simple MIT bag model \cite{chodos} to describe the quark EOS. The up, down and strange quark masses are taken to be $5$ MeV, $10$ MeV and $200$ MeV, respectively. For the bag model, the energy density and pressure can be written as \cite{ritam2012b} \begin{eqnarray} \epsilon^Q &=& \sum_{i=u,d,s} \frac{g_i}{2 \pi^2} \int_0^{p_F^i} dp p^2\sqrt{m_i^2 + p^2}+ B_G\,,\label{edec}\\ P^Q &=& \sum_{i=u,d,s} \frac{g_i}{6\pi^2} \int_0^{p_F^i} dp \frac{p^4}{\sqrt{m_i^2 + p^2}}- B_G\,, \label{pdec} \end{eqnarray} where the Fermi momentum is given by $p_F^i=\sqrt{\mu_i^2-m_i^2}$, $g_i$ is the degeneracy of quark species $i$, and $B_G$ is the bag constant, considered a free parameter in the model. The matter is assumed to be neutrino-free ($\mu_{\nu_e} = \mu_{\overline \nu_e} = 0$) and, like the hadronic matter, is charge neutral and in beta equilibrium.
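Equations (\ref{edec})-(\ref{pdec}) are simple one-dimensional integrals. The sketch below (our own illustrative code, natural units with momenta in MeV) evaluates their kinetic parts for a single flavor with degeneracy $g_i = 6$ by trapezoidal quadrature; the bag constant $B_G$ is then added to $\epsilon^Q$ and subtracted from $P^Q$ separately. For massless quarks it recovers the radiation-like relation $P = \epsilon/3$ between the kinetic parts:

```python
import math

def fermi_integrals(m, pF, npts=2000):
    """Kinetic parts of the zero-temperature energy density and
    pressure of one quark flavor (degeneracy g = 6), evaluated by
    trapezoidal quadrature.  Units: MeV^4, with m and pF in MeV."""
    g, h = 6.0, pF / npts
    eps = pres = 0.0
    for k in range(npts + 1):
        p = k * h
        w = 0.5 if k in (0, npts) else 1.0   # trapezoid end-point weights
        E = math.sqrt(m * m + p * p)
        eps += w * p * p * E
        pres += w * (p ** 4 / E if E > 0.0 else 0.0)
    return eps * g * h / (2 * math.pi ** 2), pres * g * h / (6 * math.pi ** 2)
```

A finite quark mass raises $\epsilon$ (rest-mass energy) while lowering $P$ below $\epsilon/3$, which is the softening effect of strangeness discussed in the Introduction.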
\section{Inclusion of magnetic field in EOS} The magnetic field is assumed to be along the $z$ axis and can be written as ${\vec B}=B\hat{k}$ \cite{monika,ritam2011,ritam2012}, so the charged particles are Landau quantized in the direction perpendicular to the field. Therefore, the energy of the $n$th Landau level ($n$ being the quantum number) \cite{chakrabarty97,yuan,broderick,chen,wei,lan} takes the form \begin{equation} E_i=\sqrt{{p_i}^2+{m_i}^2+|q_i|B(2n+s+1)}\,. \end{equation} In this equation $s$ is the spin projection of the particle, equal to $s=+1$ for spin up and $s=-1$ for spin down, and $p_i$ is the momentum of particle $i$ along the field direction. Writing $2n+s+1=2\nu$, the energy of the particle can be rewritten as \begin{equation} E_i = \sqrt{{p_i}^2+{m_i}^{2}+2\nu |q_i|B} = \sqrt{{p_i}^2+{\widetilde{m}_{i,\nu}}^2} \,. \end{equation} The number density and energy density of the charged particles get modified, while the neutral particles remain unaffected. The number density and energy density of the charged particles are given by \begin{equation} n_i= \frac{|q_i| B}{2 \pi^2} \sum_{\nu} p_{f,\nu}^i \,, \label{nmax} \end{equation} and \begin{equation} \varepsilon_i= \frac{|q_i| B}{4 \pi^2} \sum_{\nu} \left[ E_f^i p_{f,\nu}^i + \widetilde{m}_{\nu}^{i~2} \ln \left( \left| \frac{E_f^i + p_{f,\nu}^i}{\widetilde{m}^i_{\nu}} \right| \right) \right] \,. \end{equation} Here $p_{f,\nu}^i$ is the Fermi momentum for the level with principal quantum number $n$ and spin $s$, and is given by \begin{equation} p_{f,\nu}^{i~2} = E_f^{i~2} - \widetilde{m}_{\nu}^{i~2} \,.
\end{equation} Therefore, the total energy density of the hadronic matter now takes the form \begin{eqnarray} \varepsilon & = & \frac{1}{2} m_{\omega}^2 \omega_0^2 + \frac{1}{2} m_{\rho}^2 \rho_0^2 + \frac{1}{2} m_{\sigma}^2 \sigma^2 + \frac{1}{2} m_{\sigma^*}^2 \sigma^{*2} + \frac{1}{2} m_{\phi}^2 \phi_0^2 +\frac{3}{4}d\omega_0^4+ U(\sigma) \nonumber \\ & & \mbox{} + \sum_b \varepsilon_b + \sum_l \varepsilon_l + \frac{B^2}{8 \pi} \,, \end{eqnarray} where the last term is the magnetic field contribution. The pressure can simply be written as \begin{eqnarray} P= \sum_i \mu_i n_i - \varepsilon \,. \end{eqnarray} For the quark matter, the thermodynamic potential in the presence of a strong magnetic field is written as \cite{chakrabarty-sahu,ritam2012} \begin{eqnarray} \Omega_i&=&-\frac{2g_i|q_i|B}{4\pi^2}\sum_{\nu}\int_{\sqrt{m_i^2+2\nu |q_i| B}}^{\mu} dE_i\sqrt{E_i^2-m_i^2-2\nu |q_i|B}. \label{eq:om} \end{eqnarray} The total energy density and pressure of the strange quark matter are given by \begin{eqnarray} \varepsilon &=& \sum_{i}\Omega_i +B_G +\sum_{i}n_i \mu_i \nonumber \\ p&=&-\sum_i\Omega_i-B_G. \end{eqnarray} \section{EOS for the HS} We use the Glendenning construction \cite{glen} to determine the HS. The range of baryon density where both the hadron and quark phases coexist is called the mixed phase. The stellar matter as a whole is charge neutral; the hadron and quark phases may be separately charged, provided charge neutrality is maintained globally in the mixed phase. There are two approaches to constructing the HS: the Gibbs construction and the Maxwell construction. The Gibbs condition determines the mechanical and chemical equilibrium between the two phases, and is given by \cite{ritam2012,ritam2012b} \begin{equation} P_{\rm {HP}}(\mu_e, \mu_n) =P_{\rm{QP}}(\mu_e, \mu_n) = P_{\rm {MP}}.
\label{gibbs} \end{equation} The volume fraction occupied by quark matter in the mixed phase is $\chi$, and that occupied by hadronic matter is $(1-\chi)$; they are related by \begin{equation} \chi \rho_c^{\rm{QP}} + (1 - \chi) \rho_c^{\rm{HP}} = 0, \label{e:chi} \end{equation} where $\rho_c$ is the respective charge density. This corresponds to the assumption that the surface tension of quark matter is almost zero. In this case the pressure varies continuously with energy density, and the mixed phase lies between the hadronic and quark phases, with both phases coexisting while maintaining global charge neutrality. The Maxwell construction also demands \begin{equation} P_{\rm {HP}}(\mu_e, \mu_n) =P_{\rm{QP}}(\mu_e, \mu_n) = P_{\rm {MP}}, \label{maxwell} \end{equation} but in this case the electron chemical potential is not constant across the boundary. This construction corresponds to a high surface tension of quark matter. For the Maxwell construction there is a sudden jump in energy density from the hadronic phase to the quark phase, with no mixed phase in between. \section{Results} To begin with, we assume that a NS, with a sudden density fluctuation at the core, undergoes a phase transition. The phase transition is brought about by a transition front travelling from the centre to the surface of the star. Previous estimates of the energy release assumed that the baryonic mass of the star remains constant, i.e., that the number of baryons does not change during the conversion. This assumption may not be valid in every scenario, as there may be some matter ejection from the star. For the phase-transition-induced collapse of a NS to a QS, a shock may develop at the centre of the star which gives enough momentum for matter to be ejected from the outer layers \cite{woosley,harko}. It was suggested by Fryer \& Woosley \cite{woosley} that the mass ejection may be as high as $0.2$ solar mass.
Recent calculations also show that the phase transition (deconfinement only) in the NS can proceed via a deflagration or a detonation \cite{bhat1,herzog} with such high velocities that the whole star is converted to two-flavor quark matter within a few milliseconds. In such cases the velocity is so high that there ought to be some matter ejection from the outer layers. If so, the baryon mass conservation condition cannot be maintained strictly. We assume that the matter is ejected before the phase transition front reaches the outer layers; therefore, the ejected matter is normal hadronic matter and not quark matter. First we calculate the energy release with no matter ejection, and then we calculate the energy release when there is an ejection of matter of baryonic mass $0.2$ solar mass. The resultant baryonic mass of the NS after matter ejection is denoted $M_{BE}$. The conversion may continue up to the surface or may die down after some distance. This depends on the initial energy difference between the matter phases at the centre of the star and also on the initial density and spin fluctuations of the star \cite{glen}. If the conversion process continues to the surface we obtain a SS, and if it stops in between we obtain a HS. For the HS we have assumed both the Gibbs (Gib) and Maxwell (Max) prescriptions. The relativistic mean field EOS model for nuclear matter is used to construct the NS. The final star may be a SS or a HS; we construct the HS using the Glendenning construction. In our calculation we have used two different parametrizations for the hadronic EOS, and have varied the quark EOS by changing the bag constant. The masses of the light quarks are well constrained, and we take them to be $5$ MeV (up) and $10$ MeV (down). The mass of the strange (s) quark is still not well established, but is expected to lie between $100$ and $300$ MeV. Therefore, we have kept the mass of the s-quark fixed at $200$ MeV.
We take different values of the bag constant ($B_G$) to obtain different quark matter EOS. For simplicity, we denote $B_g \equiv {B_G}^{1/4}$ (e.g. $B_g=160$ MeV). \begin{figure} \centering \begin{tabular}{cc} \begin{minipage}{200pt} \includegraphics[width=200pt]{fig1.eps} \end{minipage} \begin{minipage}{200pt} \includegraphics[width=200pt]{fig2.eps} \end{minipage} \\ \end{tabular} \caption{In (a) pressure as a function of energy density is plotted for different EOS. The hadronic EOS are denoted as TM1 and PL-Z, the strange EOS with bag pressure $150$ MeV as MIT 150, and the mixed EOS as Max (hadronic and quark parametrization are specified). In (b) the difference between the Max and Gib constructions is explicitly shown.} \label{fig1} \end{figure} There are several studies modelling the density dependence of $B_g$, and we choose one of the widely used models given in refs. \cite{adami,blaschke}. Without going into much detail, which can be found in refs. \cite{ritam2012,ritam2012b}, we give the expression for the density variation of the bag constant \begin{eqnarray} B_{gn}(n_b) = B_\infty + (B_g - B_\infty) \exp \left[ -\beta \Big( \frac{n_b}{n_0} \Big)^2 \right] \:, \label{bag} \end{eqnarray} where $B_\infty=130$ MeV is the lowest value of the bag constant, which it attains at asymptotically high densities (known from experiments). The bag pressure quoted in this paper is the initial value ($B_g$ in the equation). As the density increases the bag pressure decreases and reaches $130$ MeV asymptotically; the decrease rate is controlled by $\beta$. We take $\beta=0.003$ (which can generate massive QS \cite{ritam2012b}). However, a recent calculation by MingFeng et al. \cite{ming} shows that for a varying bag constant in the quark EOS, an extra term gets added to the matter pressure. The matter pressure is now given by \begin{equation} p=-\sum_i\Omega_i-B_G+n_b\frac{\partial B_G}{\partial n_b}, \label{mterm} \end{equation} where $n_b$ is the number density.
The last term makes the quark EOS softer, and as the EOS becomes softer the maximum mass also decreases. In this work we have followed this prescription for calculating the quark matter EOS. \begin{figure} \centering \begin{tabular}{cc} \begin{minipage}{200pt} \includegraphics[width=200pt]{fig5.eps} \end{minipage} \begin{minipage}{200pt} \includegraphics[width=200pt]{fig6.eps} \end{minipage} \\ \end{tabular} \caption{Gravitational masses as a function of baryonic masses are plotted for different EOS. (a) is for the TM1 parametrization (hadronic and hybrid) and (b) for the PL-Z parametrization.} \label{fig2} \end{figure} The strange matter is absolutely stable if the bag pressure is $150$ MeV; for such a bag constant the resultant star is a strange star (SS). However, if the bag pressure is $160$ MeV, the strange matter is not absolutely stable and we get a hybrid star. The hybrid star can have a mixed phase (denoted as Gib) or there can be a direct jump from quark to hadronic matter (denoted as Max). In this work, the bag pressure for the strange star is always $150$ MeV and for the HS it is $160$ MeV. In Fig. 1a, we have plotted pressure as a function of energy density for different EOS, with the hadronic EOS in both the TM1 and PL-Z parametrizations. The figure shows that the EOS with the TM1 parametrization is softer than the PL-Z parametrized EOS. The mixed EOS curves have been plotted with $B_g=160$ MeV, with the above-mentioned hadronic parametrizations. The difference between the Gibbs and Maxwell prescriptions for the mixed EOS is shown in Fig. 1b. The EOS with the Gibbs prescription shows an extended mixed phase region, whereas the EOS with the Maxwell prescription shows a jump in the energy density (corresponding to a jump in baryon density) from hadronic to quark matter. For the Maxwell construction with the TM1 parametrization there is a jump in the nuclear density from $0.34 fm^{-3}$ to $0.56 fm^{-3}$, and for PL-Z the jump is from $0.36 fm^{-3}$ to $0.57 fm^{-3}$.
For the Gibbs construction with the TM1 parametrization the mixed phase occurs in the range $0.13 fm^{-3}-0.56 fm^{-3}$, and for PL-Z the range is $0.16 fm^{-3}-0.57 fm^{-3}$. \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $NS \rightarrow SS$ & $M_B$ & $M_G(NS)$ & $M_P(NS)$ & $M_G(SS)$ & $M_P(SS)$ & $E_G$ & $E_I$ & $E_T$ \\ \hline $TM1L \rightarrow MIT150$ & $1.71$ & $1.546$ & $1.781$ & $1.41$ & $1.677$ & $0.57$ & $1.86$ & $2.43$ \\ $PL-Z \rightarrow MIT150$ & $1.945$ & $1.67$ & $1.957$ & $1.573$ & $1.915$ & $0.66$ & $1.07$ & $1.73$ \\ \hline $NS \rightarrow HS$ \\ \hline $TM1L \rightarrow Max$ & $1.71$ & $1.546$ & $1.781$ & $1.501$ & $1.752$ & $0.28$ & $0.52$ & $0.80$ \\ $TM1L \rightarrow Gib$ & $1.71$ & $1.546$ & $1.781$ & $1.468$ & $1.747$ & $0.78$ & $0.61$ & $1.39$ \\ $PL-Z \rightarrow Max$ & $1.945$ & $1.67$ & $1.957$ & $1.638$ & $1.975$ & $0.89$ & $-0.32$ & $0.57$ \\ $PL-Z \rightarrow Gib$ & $1.945$ & $1.67$ & $1.957$ & $1.676$ & $1.983$ & $0.79$ & $-0.08$ & $0.71$ \\ \hline \end{tabular} \caption{Table showing the energy released during the phase transition from NS to SS/HS for the non-magnetic star. There is no matter ejection during the conversion. The masses are in terms of solar masses and the energies in terms of $10^{53}$ ergs.} \label{table1} \end{center} \end{table} We assume that there is no matter ejection from the outer layers of the star during the PT. In Table \ref{table1}, we have given the corresponding proper mass and gravitational mass of the NS and QS (SS/HS) for the same baryonic mass ($M_B$). For the hadronic star we have used two different model parametrizations, which give different $M_B$ for the NS: for the TM1 model $M_B$ is $1.71 M_{\odot}$ and for the PL-Z model it is $1.945 M_{\odot}$. The phase transition occurs and the NS is converted to a QS, the resultant $M_B$ of the QS being the same as that of the initial NS. If the phase transition proceeds throughout the whole star, the star is converted to a SS.
The energy liberated is then $2.43 \times 10^{53}$ ergs. If the phase transition stops in between and we have a resultant HS, the liberated energy is somewhat less. The HS without a mixed phase region liberates the minimum energy, $7-8 \times 10^{52}$ ergs. To explain the energy release, we plot gravitational mass against baryonic mass for the different EOS (Fig. \ref{fig2}). The energy liberated is due to the difference in gravitational mass between a NS and a QS at a particular baryonic mass. This depends on the relative stiffness of the curves (this stiffness is not the same as the EOS curve stiffness). We find that the stiffest curve is for the hadronic EOS and the least stiff is for the strange EOS; the Max curve is slightly stiffer than the Gib curve. Although the strange EOS is stiffer than the hadronic EOS, the NS with the hadronic EOS reaches its maximum mass at a much lower central energy density. So in the baryonic mass vs. gravitational mass plot the NS stiffness is the highest and the SS stiffness is the lowest. A similar argument holds for the Max and Gib curves (as the Gib curve has an extended quark region in the mixed phase). Therefore, the energy liberated during the conversion of a NS to a Max HS is the least, and it is maximal for the conversion of a NS to a SS. Physically, for the conversion of a NS to a SS the conversion takes place throughout the star and so the energy liberated is larger, whereas for the HS the conversion takes place only up to a certain region. The energy released during the conversion to a Max HS is less than that to a Gib HS. This is because for the Gib HS the conversion process also continues in the mixed phase, whereas the Max HS has no mixed phase and the conversion process stops much earlier.
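The tabulated energies follow from simple mass bookkeeping: at fixed baryonic mass, $E_I$ corresponds to the proper-mass difference, $E_G$ to the change in gravitational binding energy ($M_P - M_G$), and $E_T = E_G + E_I$ to the gravitational-mass difference, each converted to ergs. The sketch below uses standard CGS values for $M_\odot$ and $c$; the identification of the energy components is our reading of Table \ref{table1} and reproduces its first row.

```python
MSUN = 1.989e33   # g, solar mass
C    = 2.998e10   # cm/s, speed of light

def ergs(dm_solar):
    """Rest-mass energy (ergs) of a mass difference given in solar masses."""
    return dm_solar * MSUN * C**2

# TM1L -> MIT150 row of Table 1:
MG_NS, MP_NS = 1.546, 1.781   # gravitational and proper mass of the NS
MG_SS, MP_SS = 1.410, 1.677   # gravitational and proper mass of the SS

E_T = ergs(MG_NS - MG_SS)                      # total energy release
E_I = ergs(MP_NS - MP_SS)                      # internal energy part
E_G = ergs((MP_SS - MG_SS) - (MP_NS - MG_NS))  # change in gravitational binding

# E_T ~ 2.43e53, E_I ~ 1.86e53, E_G ~ 0.57e53 ergs, matching the table,
# with E_T = E_G + E_I by construction.
```

The same bookkeeping applies to every row of Tables \ref{table1} and \ref{table2}.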
\begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline $NS \rightarrow SS$ & $M_B$ & $M_{BE}$ & $M_G(NS)$ & $M_P(NS)$ & $M_G(SS)$ & $M_P(SS)$ & $E_G$ & $E_I$ & $E_T$ \\ \hline $TM1L \rightarrow MIT150$ & $1.71$ & $1.51$ & $1.379$ & $1.555$ & $1.265$ & $1.474$ & $0.59$ & $1.45$ & $2.04$ \\ $PL-Z \rightarrow MIT150$ & $1.945$ & $1.745$ & $1.53$ & $1.76$ & $1.445$ & $1.712$ & $0.75$ & $0.86$ & $1.61$ \\ \hline $NS \rightarrow HS$ \\ \hline $TM1L \rightarrow Max$ & $1.71$ & $1.51$ & $1.379$ & $1.555$ & $1.379$ & $1.555$ & $0$ & $0$ & $0$ \\ $TM1L \rightarrow Gib$ & $1.71$ & $1.51$ & $1.379$ & $1.555$ & $1.325$ & $1.544$ & $0.76$ & $0.2$ & $0.96$ \\ $PL-Z \rightarrow Max$ & $1.945$ & $1.745$ & $1.53$ & $1.76$ & $1.502$ & $1.768$ & $0.64$ & $-0.14$ & $0.50$ \\ $PL-Z \rightarrow Gib$ & $1.945$ & $1.745$ & $1.53$ & $1.76$ & $1.449$ & $1.757$ & $0.5$ & $0.5$ & $0.55$ \\ \hline \end{tabular} \caption{Table showing the energy released during the phase transition from NS to SS/HS for the non-magnetic star. The initial baryonic mass of the star is the same as that of Table \ref{table1}. There is matter ejection of baryonic mass $0.2$ solar mass from the outer layers due to shock bounce.} \label{table2} \end{center} \end{table} Table \ref{table2} shows the energy liberated during the conversion of NS to SS/HS when there is matter ejection from the surface layers. The amount of matter ejected is taken to be $0.2 M_{\odot}$ of $M_B$; therefore, the resulting baryonic mass, denoted as $M_{BE}$, is $(M_B -0.2) M_{\odot}$. As $M_B$ decreases, the gravitational mass and proper mass of the NS and QS also decrease. We find that the energy released during such a conversion is slightly less than in the previous case: as the star loses matter it becomes less massive, and so the energy liberated is slightly smaller. \begin{figure} \centering \includegraphics[width=3.3in]{fig1-b.eps} \caption{Pressure vs.
energy density for different EOS with and without magnetic field. The magnetic field is parametrized according to Eq. (\ref{mag-vary}). The hadronic EOS is given as TM1, and its magnetic counterpart is denoted as TM1+B; the same notation is adopted for the strange and mixed EOS. The solid curve is the hadronic (TM1) EOS and the short-dashed curve is its magnetic counterpart. The dotted curve is for MIT 150 and the long-dashed curve for its magnetic counterpart. The broken curves are for the Max TM1 EOS (dash-dot the original and dash-dash-dot the magnetic), and the curves with stars (original) and pluses (magnetic) are for the Gib PL-Z EOS.} \label{fig3} \end{figure} Next we consider the effect of the magnetic field on the EOS. The surface field strength observed in magnetars lies between $10^{14}$G and $10^{15}$G. It is usually assumed that the magnetic field increases as one goes towards the centre of the star, and a simple calculation involving flux conservation of the progenitor star yields central magnetic fields as high as $\sim 10^{17}-10^{18}$G. Without going into much detail, we follow the prescription for the field variation given in Refs. \cite{chakrabarty97,monika} \begin{equation} {B}(n_b)={B}_s+B_0\left\{1-e^{-\alpha \left( \frac {n_b}{n_0} \right)^\gamma}\right\}, \label{mag-vary} \end{equation} where $\alpha$ and $\gamma$ determine the magnetic field variation across the star for fixed surface field $B_s$ and central field $B_0$. The value of $B$ depends primarily on $B_0$. We keep the surface field strength fixed at $B_s=10^{14}$G, and put $\gamma=2$ and $\alpha=0.01$ for the field variation. It has been shown that magnetic fields with strength greater than a few times $10^{18}$G make the star unstable. Therefore, for our calculation we assume a conservative central field value of $10^{17}$G.
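The profile of Eq. (\ref{mag-vary}) is easy to sketch. The snippet below is illustrative only (the saturation density $n_0$ is an assumed input); it confirms that the field equals $B_s$ at zero density, grows monotonically towards the centre, and saturates below $B_s + B_0$.

```python
import math

Bs    = 1.0e14   # G, fixed surface field
B0    = 1.0e17   # G, central field parameter
alpha = 0.01
gamma = 2.0
n0    = 0.16     # fm^-3, nuclear saturation density (assumed value)

def B(nb):
    """Eq. (mag-vary): field strength at baryon number density nb (fm^-3)."""
    return Bs + B0 * (1.0 - math.exp(-alpha * (nb / n0) ** gamma))

assert abs(B(0.0) - Bs) < 1.0       # surface value at zero density
assert B(2 * n0) < B(6 * n0)        # monotonic growth toward the centre
assert B(50 * n0) < Bs + B0         # saturates below Bs + B0
```

With these parameters the field stays close to $B_s$ in the low-density outer layers, consistent with the statement below that the hadronic region is barely affected.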
Varying $\alpha$ and $\gamma$ would give interesting results as far as the magnetic field variation in the star is concerned, but for our calculation of the energy release the results would not change much; we therefore keep $\alpha=0.01$ and $\gamma=2$ fixed. In Fig. \ref{fig3} we have plotted the magnetic-field-induced EOS curves. Qualitatively, the magnetic-field-induced EOS curves are much softer than the non-magnetic curves. This is because the magnetic pressure due to Landau quantization acts in the direction opposite to the matter pressure. Also, the magnetic stress adds to the matter energy density. These two effects collectively reduce the stiffness of the magnetic-field-induced EOS. The curves show that the magnetic field has very little effect on the hadronic region (low-density regime), as the field strength there is low. As we go towards the centre of the star the magnetic field strength increases, and therefore the effect on the mixed and quark phases is larger. The quark sector (high-density regime) is the most affected and the mixed phase is moderately affected (intermediate regime). The mixed phase region also gets extended due to the magnetic field: the region now is $0.13 fm^{-3}-0.65 fm^{-3}$ for the TM1 parametrization and $0.17 fm^{-3}-0.69 fm^{-3}$ for PL-Z. The jump in the nuclear density from nuclear to quark matter for the Maxwell construction is from $0.39 fm^{-3}$ to $0.57 fm^{-3}$ for the TM1 parametrization and from $0.43 fm^{-3}$ to $0.59 fm^{-3}$ for PL-Z. \begin{figure} \centering \begin{tabular}{cc} \begin{minipage}{200pt} \includegraphics[width=200pt]{fig5-b.eps} \end{minipage} \begin{minipage}{200pt} \includegraphics[width=200pt]{fig6-b.eps} \end{minipage} \\ \end{tabular} \caption{Gravitational masses as a function of baryonic masses are plotted for magnetars.
(a) is for the TM1 parametrization and (b) for the PL-Z parametrization.} \label{fig4} \end{figure} Table \ref{table3} gives the energy released during a phase transition of a NM to a quark magnetar (QM), which can be either a SM or a HM. The magnetic field strength and configuration are those mentioned above. We first assume that there is no matter ejection. As the magnetic field makes the EOS softer, both the baryonic mass and the gravitational mass decrease. As the star becomes less massive (see Fig. \ref{fig4}), the energy liberated is much less. Table \ref{table4} shows the energy liberated during the conversion of NM to SM/HM when there is matter ejection from the surface layers. We again take the amount of matter ejected to be $0.2 M_{\odot}$ of $M_B$. In this case the star is less massive, and therefore the energy released is also less. It is interesting to note that for some conversions from NS to Max HS the energy liberated is zero, as the gravitational masses of the NS and of the final star become the same for a particular baryonic mass. Physically this means that the PT happens without any energy release.
\begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline $NS \rightarrow SS$ & $M_B$ & $M_G(NS)$ & $M_P(NS)$ & $M_G(SS)$ & $M_P(SS)$ & $E_G$ & $E_I$ & $E_T$ \\ \hline $TM1L \rightarrow MIT150$ & $1.7$ & $1.536$ & $1.772$ & $1.436$ & $1.728$ & $1.0$ & $0.79$ & $1.79$ \\ $PL-Z \rightarrow MIT150$ & $1.825$ & $1.606$ & $1.889$ & $1.532$ & $1.877$ & $1.11$ & $0.21$ & $1.32$ \\ \hline $NS \rightarrow HS$ \\ \hline $TM1L \rightarrow Max$ & $1.7$ & $1.536$ & $1.772$ & $1.512$ & $1.767$ & $0.34$ & $0.09$ & $0.43$ \\ $TM1L \rightarrow Gib$ & $1.7$ & $1.546$ & $1.772$ & $1.49$ & $1.777$ & $0.9$ & $-0.09$ & $0.81$ \\ $PL-Z \rightarrow Max$ & $1.825$ & $1.606$ & $1.889$ & $1.583$ & $1.885$ & $0.34$ & $0.07$ & $0.41$ \\ $PL-Z \rightarrow Gib$ & $1.825$ & $1.606$ & $1.89$ & $1.576$ & $1.875$ & $0.29$ & $0.25$ & $0.54$ \\ \hline \end{tabular} \caption{Table showing the energy released during the phase transition from NM to QM (SM/HM). The magnetic field is parametrized according to Eq. (\ref{mag-vary}). There is no matter ejection during the conversion.} \label{table3} \end{center} \end{table} The energy liberated during the conversion of NS to SS is of the order of $10^{53}$ ergs, and it can account for the cosmological origin of GRBs. Such a huge energy release is comparable only to the energy observed during GRBs. The detailed picture of the conversion mechanism and of the central engine for GRBs is still not well understood; the process described above can be responsible for the energetics of GRBs. In this paper, we have concentrated on the difference in gravitational binding energy due to the phase transition, which is accompanied by matter ejection from the outer layers. It should be realized that for a realistic star with rotation and magnetic field, there would be asymmetric matter ejection and also asymmetric energy deposition.
The amount of matter ejected, the fraction of the energy spent on heating the star, and the fraction spent on propagating the conversion front are still not clear. However, if a significant fraction of the energy released goes into the production of energetic electron-positron pairs, it can account for the GRB energy. The energy released during the conversion of NM to HM is of the order of $10^{52}$ ergs, and it is therefore difficult for it to power GRBs at cosmological distances. However, magnetars would release energy more efficiently from their magnetic poles than normal NS. Therefore, in magnetars the energy liberated from the star would have a large Lorentz factor and can account for giant flare activity. \begin{table} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|} \hline $NS \rightarrow SS$ & $M_B$ & $M_{BE}$ & $M_G(NS)$ & $M_P(NS)$ & $M_G(SS)$ & $M_P(SS)$ & $E_G$ & $E_I$ & $E_T$ \\ \hline $TM1L \rightarrow MIT150$ & $1.7$ & $1.5$ & $1.371$ & $1.545$ & $1.285$ & $1.509$ & $0.9$ & $0.64$ & $1.54$ \\ $PL-Z \rightarrow MIT150$ & $1.825$ & $1.625$ & $1.445$ & $1.655$ & $1.38$ & $1.645$ & $0.98$ & $0.18$ & $1.16$ \\ \hline $NS \rightarrow HS$ \\ \hline $TM1L \rightarrow Max$ & $1.7$ & $1.5$ & $1.371$ & $1.545$ & $1.371$ & $1.545$ & $0$ & $0$ & $0$ \\ $TM1L \rightarrow Gib$ & $1.7$ & $1.5$ & $1.371$ & $1.545$ & $1.342$ & $1.556$ & $0.72$ & $-0.2$ & $0.52$ \\ $PL-Z \rightarrow Max$ & $1.825$ & $1.625$ & $1.445$ & $1.655$ & $1.445$ & $1.655$ & $0$ & $0$ & $0$ \\ $PL-Z \rightarrow Gib$ & $1.825$ & $1.625$ & $1.445$ & $1.655$ & $1.428$ & $1.657$ & $0.26$ & $0.04$ & $0.30$ \\ \hline \end{tabular} \caption{Table showing the energy released during the phase transition from NM to QM. The magnetic field is parametrized according to Eq. (\ref{mag-vary}). The initial baryonic mass of the star is the same as that of Table \ref{table3}.
There is matter ejection of baryonic mass $0.2$ solar mass from the outer layers.} \label{table4} \end{center} \end{table} This mechanism of energy production, and its connection with GRBs, is not unanimously accepted. Some models predict that it is sufficient to power GRBs \cite{ma1}, while other models predict that although the energy may be high, the requirement of a high Lorentz factor brings it down to $10^{46}-10^{49}$ ergs \cite{woosley}. However, in a recent paper Cheng et al. \cite{harko} argued that, due to the difference in gravitational binding energy of the star after the phase transition, some matter near the stellar surface would be ejected. This matter would further be accelerated by electron-positron pairs created by neutrino-antineutrino annihilation. The mass ejection could give rise to the high Lorentz factor and also the high luminosity needed for GRBs. We should mention that the EOS used in the calculation are soft due to the presence of strangeness (hyperons in the hadronic EOS and strange quarks in the quark EOS). The maximum mass that these EOS can produce is much less than $2$ solar mass, which is the new limit on the NS mass from precise measurements of the pulsars J1614-2230 and J0348+0432. However, for the hadronic EOS, if we consider a non-hyperonic EOS with the TM1 parametrization the maximum mass is $2.18$ solar mass; for the PL-Z parametrization it is $2.3$ solar mass, both of which easily satisfy the mass constraint. New calculations show that with strong vector-meson coupling in SU(3) symmetry \cite{weiss1,weiss2}, even a hyperonic star can generate a massive NS. In the quark sector, even with a density-dependent bag constant in the MIT bag model the maximum mass of the QS is below $2$ solar mass. This is due to the presence of the extra term coming from the density dependence of the bag constant (Eq. \ref{mterm}). The EOS of the quark sector is still much debated, and new models (NJL and PNJL) are being used to describe the strongly interacting behaviour.
However, there is still no exact calculation, and the EOS are very parameter dependent. In our model, if we consider the effects of quark interaction and colour superconducting matter as used by Alford et al. and Weissenborn et al. \cite{alford,weiss3}, the maximum mass of the SS becomes $2.05$ solar mass and that of the HS becomes $2.13$ solar mass (with $a_4=0.67$ for the quark interaction and $\Delta=100$ MeV for the gap energy), both of which satisfy the new mass limit. With such EOS the qualitative nature of our results remains the same: the hadronic EOS is still the steepest and the quark EOS the softest. Therefore, our conclusion that the energy liberated during the conversion of NS to SS is greater than that liberated during the conversion of NS to HS still holds (the values change only quantitatively, by $20-30\%$). The magnetic field makes all the EOS softer; therefore the observation that the energy liberated during the conversion of magnetars is less than that liberated during the phase transition in normal pulsars also remains unaltered. \section{Summary and Conclusion} In this work, we have calculated the energy released during the conversion of NS to QS (SS/HS). The total energy released is the sum of the gravitational and internal energies of conversion. Our first assumption was that the conversion starts with a sudden spin down of the star, resulting in a huge density fluctuation at the core that initiates the phase transition. We also assume that some matter is ejected from the outer layers of the star due to several shock bounces, and therefore the baryonic mass changes. The conversion may continue up to the surface or may die out after some distance; this depends on the energy difference between the matter phases at the centre of the star and also on the initial density and spin fluctuation of the star. The final star may be a SS or a HS. For the NS we have considered a relativistic mean-field EOS model of hadronic matter.
For the quark matter (SS/HS), we have considered the simple MIT bag model, with the HS constructed following the Glendenning construction. In our calculation we have used two different parametrizations for the hadronic EOS, and have regulated the quark EOS by changing the bag constant. First we have shown the energy liberated during the conversion of NS to QS with no matter ejection. At fixed baryonic mass, the energy liberated is obtained from the difference in the gravitational mass of the initial NS and the final QS; for such a case the liberated energy is always close to $10^{53}$ ergs. Next, we have shown the energy liberated during the conversion of NS to QS with matter ejection from the surface layer of the NS. The amount of matter ejected is taken to be $0.2 M_{\odot}$ of $M_B$; therefore, the resulting baryonic mass, denoted as $M_{BE}$, is $(M_B -0.2) M_{\odot}$. Due to the matter ejection, as $M_B$ decreases the gravitational mass and proper mass of the NS and QS also decrease. The energy liberated during the conversion of NS to QS with matter ejection is always less than that with no matter ejection. We have also studied the conversion of NS to SS/HS having a high surface magnetic field (observed magnetars), denoting them as NM and QM (SM/HM). The energy liberated during the conversion of magnetars is less than that for normal pulsars. One clear result is that the conversion of NS to HS liberates a few times less energy than the conversion of NS to SS; therefore, it would be difficult for it to power GRBs from cosmological distances. For the Max HS the energy liberated is the lowest, and sometimes it is zero, which indicates that the PT there occurs without any observable signal. The energy liberated during the conversion of NS to QS is of the order of $10^{53}$ ergs, and it can account for the cosmological origin of GRBs if the Lorentz factor is high.
The energy released during the conversion of NM to QM is of the order of $10^{52}$ ergs, and it is therefore difficult for it to power GRBs at cosmological distances. However, magnetars would release energy more efficiently from their magnetic poles than normal NS; therefore, in magnetars the energy liberated from the star would have a large Lorentz factor and can account for giant flare activity. The detailed picture of the conversion mechanism and the central engine for GRBs is still not well understood. However, the processes described above are thought to be responsible for the energetics of GRBs. In this paper, we have concentrated on the difference in gravitational binding energy due to the phase transition, which is accompanied by matter ejection from the outer layers, and we have seen the difference in the energy release for normal pulsars and magnetars. The amount of matter ejected, the fraction of the energy going towards heating the star, and the fraction spent on maintaining the conversion would ultimately determine the fate of the energy release. However, if a significant fraction of the energy released goes into the production of energetic electron-positron pairs, it can account for the GRB energy. Assuming the baryonic mass of the star to remain unaltered, Bombaci and Datta \cite{bombaci} performed their calculation and found that the NS always has a greater mass than the final SS. Neutrino-antineutrino pair annihilation into electron-positron pairs deposits energy of the order of $10^{49}$ ergs \cite{cheng}. Neutrino scattering off neutrons and protons inside the dense star deposits energy at least two to three orders of magnitude higher. Due to the shock of the phase transition some matter near the stellar surface may be ejected; it would be further accelerated by electron-positron pairs and may eventually oscillate. This may give rise to the high Lorentz factor and high luminosity needed for GRBs. These two are the most efficient processes to account for the energetics of GRBs.
As the actual process takes place at cosmological distances, we do not yet have a good understanding of such phenomena. As there is no source of external energy available to the star other than the rotational energy, a huge amount of energy may be used in the actual conversion. We can only conclude that if a small fraction of the conversion energy is released, it may manifest itself at least in the form of giant flares, which are usually associated with magnetars; if a substantial amount of energy is released, it may account for the energies observed during GRBs. Therefore, more detailed studies on the theoretical and observational fronts are still needed for pulsars and magnetars to obtain a better understanding of the energetics of the conversion. On the other hand, the new observation of a $2$ solar mass pulsar gives very strong constraints on the EOS describing compact stars. The EOS used in our calculation cannot satisfy the new mass limit; however, this can be remedied by considering new prescriptions for both the hadronic and quark matter EOS. With the improved EOS, the basic results of our calculation show no qualitative change and only a small quantitative change. With new techniques of precise mass measurement, the physics of pulsars is entering a new phase. As the exact nature of the strong interaction is still far from settled, and the quark EOS is very model dependent, the maximum masses of NS and QS calculated from theoretical considerations are going to evolve further in the future. To get a clear picture of the energy release and its detailed mechanism we need more detailed microscopic studies, and this work is a first step in that direction.
\section{Introduction} \label{intro} It is well known that the Glauber diffraction model \cite{G0,G1} is a convenient and reliable tool for the analysis of scattering of fast hadrons (nucleons) by nuclei. Based on the eikonal and fixed-scatterer approximations, it was specially developed more than 50 years ago for the high- and intermediate-energy regions where no exact theoretical treatments were available. So, the validity of the Glauber model could previously be tested only by comparing its results with the respective experimental data. The unexpected success of such a simple model in describing hadron-nucleus and nucleus-nucleus scattering at forward angles caused numerous studies of the accuracy and range of validity of the Glauber formulation, as well as many attempts to extend this range. Different refinements of the initial simple model have been introduced since then, including corrections for non-eikonal and relativistic effects, Fermi motion, etc. The most recent substantial steps taken in this direction can be found in Refs.~\cite{GUR,BAL,ABJ,BBJ}. However, a comprehensive analysis of various corrections to the Glauber model has revealed \cite{H2} that many important corrections to the initial model tend to compensate each other strongly, so that incorporating only one of them can even worsen the results of the initial simple model. It has thus turned out to be highly nontrivial to improve the initial Glauber approach. Another serious problem with this model is its rather restricted range of applicability: in general, it should work well only at sufficiently high energies and forward angles. However, it would be extremely interesting (for many practical applications) to know these limits more definitely, although they depend upon the particular problem to be solved.
Fortunately, nowadays we can learn much more than before about these limits for some important cases by comparing the predictions of the Glauber model with the results of precise calculations within the respective full models, i.e., without the approximations peculiar to the diffraction model. Among the cases allowing a careful comparison with a numerically accurate treatment is $Nd$ intermediate-energy scattering within a realistic three-body model. Now we have a very nice opportunity to examine the accuracy of the Glauber model by direct comparison of its predictions with exact three-nucleon Faddeev calculations~\cite{GLO}, which account for the same (nucleonic) degrees of freedom and the same input on-shell $NN$ amplitudes. Such a test will show qualitatively, or even quantitatively, the validity of the different approximations involved in the Glauber model. To obtain fully realistic conclusions, the Glauber model itself must be as realistic as possible; i.e., it should include the full set of realistic spin-dependent input $NN$ amplitudes and all components of the target (e.g., deuteron) wave function. Such a generalization of the initial model enables us to analyze the spin observables (which should be much more sensitive to fine interference effects and different approximations) as well as the unpolarized cross sections, so we will be able to draw more quantitative and well-grounded conclusions about the validity of the Glauber formulation. For a meaningful comparison with exact three-body calculations, the inputs of the model, i.e., the $NN$ amplitudes and deuteron wave functions, must also be the most accurate available and must coincide with those used in the current Faddeev calculations. Because the diffraction model includes on-shell $NN$ amplitudes only, they can be taken from experiment.
More definitely, one can take these amplitudes from a modern phase-shift analysis (PSA), so that they are on-shell equivalent to those derived from the realistic $NN$ potential models entering the Faddeev equations (in, of course, the energy region where such potentials describe the $NN$ experimental data accurately). The fully realistic Faddeev equations for $Nd$ scattering have been solved up to now only for incident energies below 350 MeV in the laboratory frame~\cite{FC250,EXP248}.\footnote{There is only a single full three-body calculation \cite{EXPAD4} for $pd$ scattering at the proton incident energy $T_p \simeq 400$ MeV, but its results are still preliminary and have not been published yet.} The complications which arise with growing energy are connected with the limitations of the highly precise $NN$ potentials involved as well as with hard computational problems. Recently \cite{LIN}, Faddeev calculations at higher energies (up to 2 GeV) have been carried out, but only in a schematic model with three identical bosons interacting through a scalar central potential of the Malfliet-Tjon type. In this model, a detailed comparison with the Glauber approach for total and differential elastic cross sections was also performed \cite{ELSTER}. In the present paper, we test the validity of the Glauber model with a fully realistic two-body input. First, we generalize the initial model by incorporating the full spin dependence of the $NN$ amplitudes and a high-quality deuteron wave function, as well as the charge-exchange effects. We analyze the differential cross sections and polarization observables in $pd$ elastic scattering at energies of a few hundred MeV, which seems already high enough to apply the generalized Glauber approach but still low enough to compare its predictions with those of exact realistic Faddeev calculations.
Moreover, it was demonstrated \cite{FC250} that at such moderate energies relativistic effects do not play a significant role at small and medium scattering angles, so the nonrelativistic treatment seems to be sufficient under such conditions. To confirm our conclusions and to obtain a clearer understanding of the phenomena in question, a comparison of the results of both theoretical approaches, i.e., Glauber and Faddeev, with the available experimental data is also presented. From all these comparisons, one can draw more definite conclusions about the true range of validity of the refined Glauber model. The content of the paper is as follows. In Sec.~\ref{model}, we generalize the initial diffraction model by incorporating all ten $NN$ helicity amplitudes (five for $pp$ and five for $pn$ scattering) and develop a convenient Gaussian-like parametrization of these amplitudes. We also build a multi-Gaussian expansion for the realistic deuteron $S$- and $D$-wave functions. The convenient analytical representation of the main input ingredients of the model makes it possible to derive all 12 invariant $pd$ amplitudes in fully analytical form. In Sec.~\ref{results}, we present the main results of the work. A detailed and comprehensive discussion of the obtained results, together with some physical arguments which help to interpret our findings more clearly, is presented in Sec.~\ref{discuss}. Sec.~\ref{sum} is devoted to the formulation of the conclusions. Two appendixes contain some important details of the calculations within the framework of the refined Glauber model. In Appendix~\ref{A}, we present the explicit interrelations between all the $pd$ invariant amplitudes, the $NN$ invariant amplitudes, and the deuteron form factors. In Appendix~\ref{B}, details of the analytical integration in the double-scattering terms of the $pd$ amplitudes are given.
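The multi-Gaussian expansion mentioned above is, for a fixed set of range parameters, a linear least-squares problem for the expansion coefficients. The toy snippet below illustrates the idea on a made-up target function that is itself a two-Gaussian mixture (it is not the actual deuteron wave function or the parametrization used in this paper), so the fit must reproduce it essentially exactly.

```python
import numpy as np

# Toy radial function: a sum of two Gaussians with made-up parameters.
r = np.linspace(0.0, 10.0, 400)
target = 0.7 * np.exp(-0.3 * r**2) + 0.3 * np.exp(-2.0 * r**2)

# Fixed set of Gaussian ranges; only the coefficients are fitted (linear problem).
alphas = np.array([0.1, 0.3, 0.8, 2.0, 5.0])
basis = np.exp(-np.outer(r**2, alphas))        # columns: exp(-alpha_j r^2)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
fit = basis @ coef

rms = float(np.sqrt(np.mean((fit - target) ** 2)))
assert rms < 1e-6   # exact representation recovered up to round-off
```

In the realistic case the target is a tabulated wave function and the residual is small but nonzero; the advantage of the Gaussian basis is that the subsequent scattering integrals can then be carried out analytically.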
\section{Refined Glauber model} \label{model} To employ high-precision spin-dependent $NN$ interactions in describing $pd$ elastic scattering, the conventional Glauber model and its basic formulas, which relate the $pd$ amplitude to the input $NN$ amplitudes and the deuteron wave function, have to be generalized. In preceding years, some papers have been published that considered the following contributions separately: (i) the spin dependence of the $NN$ amplitudes \cite{KSP,FSP}, (ii) the $D$ wave of the deuteron \cite{HD,GD}, (iii) the isospin dependence of the $NN$ amplitudes, i.e., the double charge-exchange contribution to $pd$ elastic scattering \cite{WEX,GEX}. All these items were later included in the so-called relativistic multiple-scattering theory \cite{ABJ,BBJ}, which went beyond the Glauber framework by accounting for corrections to the eikonal and fixed-scatterer approximations as well as some relativistic effects. It is well known, at least qualitatively, that different corrections to the Glauber model tend to cancel each other to a large extent \cite{H2}, so it is hard to improve the Glauber model substantially. Besides, the modified versions are much more complicated than the initial model. So, we have generalized just the initial Glauber formulation by including the above-mentioned items without any further corrections to the diffraction model itself, thus staying within the original Glauber framework. \subsection{Definition of observables} \label{defobs} First of all, we need to define the differential cross section and the spin-dependent observables in terms of the $pd$ elastic-scattering amplitude. The differential cross section is connected to this amplitude $M$ by the relation\footnote{Our normalization differs from the standard one by the Lorentz-invariant factor ${8 \sqrt{\pi} I(s,m_p,m_d) \! \equiv \! 4 \sqrt{\pi [s\!-\!(m_p + m_d)^2][s\!-\!(m_p - m_d)^2]}}$, where $s$ is the $pd$ invariant mass squared, and $m_p$ and $m_d$ are the proton and deuteron masses.
Such normalization is chosen in order to simplify the final formulas.} \begin{equation} \label{dsm} d\sigma\! / \!dt = {\textstyle \frac{1}{6}}{\rm Sp}\,\bigl(MM^+\bigr), \end{equation} where $t = -q^2$ is the momentum transfer squared.\footnote{Although we work in the laboratory frame according to the initial Glauber suggestion, we should keep in mind throughout the relation $t = -q^2$ for consistency. This relation is valid in the center-of-mass frame and approximately valid in the laboratory frame at small momentum transfers. Physically, the difference between the variables $t$ in these two frames originates from recoil effects which are neglected in the Glauber formalism due to the fixed-scatterer approximation. So, this difference should not be accounted for without careful treatment of recoil effects as well as other corrections to the Glauber model, which all become significant at large momentum transfers.} As for spin-dependent observables, in this work we concentrate mainly on the vector and tensor analyzing powers. For the proton and deuteron vector analyzing powers ($A_{\alpha}^p$ and $A_{\alpha}^d$) and for the deuteron tensor analyzing powers ($A_{\alpha \beta}$) we take the standard formulas \begin{equation*} A_{\alpha}^p = {\rm Sp}\,\bigl(M \sigma_{\alpha}M^+\bigr)/{\rm Sp}\,\bigl(MM^+\bigr), \ \ \ A_{\alpha}^d = {\rm Sp}\,\bigl(M S_{\alpha}M^+\bigr)/{\rm Sp}\,\bigl(MM^+\bigr), \end{equation*} \begin{equation} \label{apm} A_{\alpha \beta} = {\rm Sp}\,\bigl(M S_{\alpha \beta} M^+\bigr)/{\rm Sp}\,\bigl(MM^+\bigr), \end{equation} where $\frac{1}{2}\sigma_{\alpha}$ and $S_{\alpha}=\frac{1}{2}(\sigma_{n\alpha} + \sigma_{p\alpha})$ are the spin matrices of the proton and deuteron, $S_{\alpha \beta}= \frac{3}{2}(S_{\alpha}S_{\beta} + S_{\beta}S_{\alpha}) - 2\delta_{\alpha \beta}$ is the quadrupole operator, and $\alpha,\beta \in \{x,y,z\}$. The total amplitude $M$ can be expanded in terms of the amplitudes invariant under space rotations and space-time reflections.
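The algebraic properties of the spin operators entering these trace formulas are easy to verify numerically. The following minimal sketch (a numpy illustration, not part of the formalism itself) builds the spin-1 matrices $S_\alpha$ and checks that the quadrupole operator $S_{\alpha\beta}$ is Hermitian and traceless and that its diagonal components sum to zero, the property that forces the relation $A_{zz} = -A_{xx} - A_{yy}$ among the tensor analyzing powers quoted below:

```python
import numpy as np

# Spin-1 matrices S_x, S_y, S_z of the deuteron (hbar = 1)
s = 1/np.sqrt(2)
S = {'x': s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex),
     'y': s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex),
     'z': np.diag([1, 0, -1]).astype(complex)}

def S_ab(a, b):
    """Illustrative helper: quadrupole operator S_{ab} = (3/2){S_a, S_b} - 2 delta_{ab}."""
    d = 2.0 if a == b else 0.0
    return 1.5*(S[a] @ S[b] + S[b] @ S[a]) - d*np.eye(3)

# S_{ab} is Hermitian and traceless for every pair of indices ...
for a in 'xyz':
    for b in 'xyz':
        Q = S_ab(a, b)
        assert np.allclose(Q, Q.conj().T) and abs(np.trace(Q)) < 1e-12

# ... and its diagonal components sum to zero (since S^2 = 2 for spin 1),
# which is what forces A_zz = -(A_xx + A_yy) for the tensor analyzing powers.
assert np.allclose(S_ab('x', 'x') + S_ab('y', 'y') + S_ab('z', 'z'), 0)
```

Here `S_ab` is a purely illustrative helper; the closed trace formulas for the observables follow from exactly these operator identities.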
For the $pd$ case, there are 12 such invariant amplitudes $A_1$--$A_{12}$, and the amplitude $M$ (in nonrelativistic formulation) is expressed through them as \begin{eqnarray} \label{ma} M[{\bf p},{\bf q}; {\bm \sigma}, {\bf S}] &=& \bigl(A_1 + A_2 \,{\bm \sigma}\hat{n}\bigr) + \bigl(A_3 + A_4 \,{\bm \sigma}\hat{n}\bigr)({\bf S}\hat{q})^2 + \bigl(A_5 + A_6 \,{\bm \sigma}\hat{n}\bigr)({\bf S}\hat{n})^2 + \nonumber \\ & & + A_7 \, ({\bm \sigma}\hat{k})({\bf S} \hat{k}) + A_{8} \,{\bm \sigma}\hat{q} \bigl(({\bf S}\hat{q})({\bf S}\hat{n}) \!+\! ({\bf S}\hat{n})({\bf S}\hat{q})\bigr) + \bigl(A_9 + A_{10} \,{\bm \sigma}\hat{n}\bigr){\bf S}\hat{n} + \nonumber \\ & & + A_{11} \,({\bm \sigma}\hat{q})({\bf S}\hat{q}) + A_{12} \,{\bm \sigma}\hat{k} \bigl(({\bf S} \hat{k})({\bf S}\hat{n}) \!+\! ({\bf S} \hat{n})({\bf S}\hat{k})\bigr), \end{eqnarray} where the unit vectors $\hat{k} = ({\bf p}+{\bf p'})/|{\bf p}+{\bf p'}|, \ \ \hat{q} = ({\bf p}-{\bf p'})/|{\bf p}-{\bf p'}|, \ \ \hat{n} = \hat{k} \times \hat{q}$ and ${\bf p}$, ${\bf p'}$ are the momenta of the incident and outgoing proton respectively. Now all the $pd$ observables can be written in terms of invariant amplitudes $A_1$--$A_{12}$. Defining the directions of coordinate axes $\hat{e}_x = \hat{q}, \,\, \hat{e}_y = \hat{n}, \,\, \hat{e}_z = \hat{k}$ and applying the standard trace technique, one gets for the differential cross section and nonvanishing analyzing powers the following expressions: \begin{eqnarray} \label{apa} d\sigma\! / \!dt \!\!\! &=& \!\!\! |A_1|^2 + |A_2|^2 + {\textstyle \frac{2}{3}}\Bigl(\sum\limits_{i=3}^{12}{|A_i|^2} + {\rm Re}\,\bigl[2A_1^*(A_3 + A_5) + 2A_2^*(A_4 + A_6) + A_3^*A_5 + A_4^*A_6\bigr]\Bigr), \nonumber \\ {A_y}^p \!\!\! &=& \!\!\! 2\,{\rm Re}\,\bigl[2\,(A_1^* + A_3^* +A_5^*)(A_2 + A_4 + A_6) + A_1^*A_2 - A_3^*A_6 - A_4^*A_5 + 2A_9^*A_{10}\bigr]/(3\,d\sigma\! / \!dt), \nonumber \\ {A_y}^d \!\!\! &=& \!\!\! 
2\,{\rm Re}\,\bigl[(2A_1^* + A_3^* +2A_5^*)A_9 + (2A_2^* + A_4^* + 2A_6^*)A_{10} + A_7^*A_{12} + A_8^*A_{11}\bigr]/(3\,d\sigma\! / \!dt), \nonumber \\ A_{yy} \!\!\! &=& \!\!\! \Bigl(2\,\bigl(|A_5|^2 + |A_6|^2 + |A_9|^2 + |A_{10}|^2\bigr) - \bigl(|A_3|^2 + |A_4|^2 + |A_7|^2 + |A_8|^2 + |A_{11}|^2 + |A_{12}|^2\bigr) + \nonumber \\ & & + 2\,{\rm Re}\,\bigl[A_1^*(2A_5 - A_3) + A_2^*(2A_6 - A_4) + A_3^*A_5 + A_4^*A_6\bigr]\Bigr)/(3\,d\sigma\! / \!dt), \nonumber \\ A_{xx} \!\!\! &=& \!\!\! \Bigl(2\,\bigl(|A_3|^2 + |A_4|^2 + |A_{11}|^2 + |A_{12}|^2\bigr) - \bigl(|A_5|^2 + |A_6|^2 + |A_7|^2 + |A_8|^2 +|A_{9}|^2 + |A_{10}|^2\bigr) + \nonumber \\ & & + 2\,{\rm Re}\,\bigl[A_1^*(2A_3 - A_5) + A_2^*(2A_4 - A_6) + A_3^*A_5 + A_4^*A_6\bigr]\Bigr)/(3\,d\sigma\! / \!dt), \nonumber \\ A_{zz} \!\!\! &=& \!\!\! -A_{yy} -A_{xx}, \nonumber \\ A_{xz} \!\!\! &=& \!\!\! {\rm Im}\,\bigl[A_3^*A_9 + A_4^*A_{10} - A_7^*A_{12} - A_8^*A_{11}\bigr]/(d\sigma\! / \!dt). \end{eqnarray} \subsection{Generalization of initial Glauber formalism} \label{gengl} In the initial Glauber model, the $pd$ scattering amplitude as the function of transferred momentum ${\bf q}$ is represented as a sum of two terms corresponding to single and double scatterings of the incident proton off target nucleons: \begin{equation} \label{msd} M({\bf q}) = M^{(s)}({\bf q}) + M^{(d)}({\bf q}). 
\end{equation} With the use of the eikonal and fixed-scatterer approximations, the single- and double-scattering amplitudes are expressed in terms of the on-shell $NN$ amplitudes (the $pp$ amplitude $M_p$ and the $pn$ amplitude $M_n$) and the deuteron wave function $\Psi_d$ as \begin{eqnarray} \label{msmd} M^{(s)}({\bf q}) &=& {\textstyle \int} d^{3}r e^{i{\bf q}{\bf r}/2} \Psi_d({\bf r}) \bigl[M_n({\bf q}) + M_p({\bf q})\bigr] \Psi_d({\bf r}), \nonumber \\ M^{(d)}({\bf q}) &=& {\textstyle \frac{i}{4 \pi^{3/2}}\int} d^{2}q'{\textstyle\int} d^{3}r e^{i{\bf q'}{\bf r}} \Psi_d({\bf r}) \bigl[M_n({\bf q_2}) M_p({\bf q_1}) + M_p({\bf q_1}) M_n({\bf q_2})\bigr] \Psi_d({\bf r}), \end{eqnarray} where the vectors ${\bf q_1} = {\bf q}/2 - {\bf q'} , \ \ {\bf q_2} = {\bf q}/2 + {\bf q'}$ have been introduced for the momenta transferred in collisions with the individual target nucleons.\footnote{In the expression~(\ref{msmd}) for the amplitude $M^{(d)}$, we have omitted the term arising from the commutator of the amplitudes $M_n({\bf q_2})$ and $M_p({\bf q_1})$ \cite{GEX}. This term gives only a small contribution to intermediate-energy $pd$ elastic scattering due to the relative smallness of the spin-dependent $NN$ amplitudes and the deuteron $D$ wave.} The double-charge-exchange process contributes to elastic scattering as well. This contribution is significant at incident energies $T_p \lesssim 1$ GeV, so we should include it in the model. This was already done in Ref.~\cite{GEX} by incorporating the isospin structure of the general $NN$ amplitude and averaging over the isoscalar deuteron ground state. This operation leads to an additional term in the double-scattering amplitude \begin{equation} \label{mc} M^{(c)}({\bf q}) = -{\textstyle \frac{i}{4 \pi^{3/2}}\int} d^{2}q'{\textstyle\int} d^{3}r e^{i{\bf q'}{\bf r}} \Psi_d({\bf r}) \bigl[M_n({\bf q_2}) - M_p({\bf q_2})\bigr]\bigl[M_n({\bf q_1}) - M_p({\bf q_1})\bigr] \Psi_d({\bf r}).
\end{equation} Neglecting the spin dependence of the $NN$ amplitudes and the deuteron wave function reduces Eqs.~(\ref{msd}) and (\ref{msmd}) to the conventional Glauber formulas. Furthermore, the double-charge-exchange amplitude $M^{(c)}$ vanishes in the widely used approximation $M_n = M_p$ (which corresponds to neglecting the isospin dependence of the general $NN$ amplitude). In the realistic case, with which we are concerned here, the accurate incorporation of both spin and isospin degrees of freedom is required. While the latter is done simply by adding the term $M^{(c)}$ to the double-scattering amplitude $M^{(d)}$, the inclusion of the spin structure of both the $NN$ amplitudes and the deuteron wave function in the Glauber model is much more involved. We take the $NN$ amplitudes in the form \begin{eqnarray} \label{mi} M_i[{\bf p}, {\bf q};{\bm\sigma},{\bm\sigma}_i] &=& A_i + C_i\,{\bm\sigma}\hat{n} + C'_i\,{\bm\sigma}_i\hat{n}+ B_i\,({\bm\sigma}\hat{k})({\bm\sigma}_i\hat{k}) + \nonumber \\ & & + \, \bigl(G_i+H_i\bigr)({\bm\sigma}\hat{q})({\bm\sigma}_i\hat{q}) + \bigl(G_i-H_i\bigr)({\bm\sigma}\hat{n})({\bm\sigma}_i\hat{n}), \end{eqnarray} where $i=n,p$. In the laboratory frame, one should distinguish between the amplitudes $C$ and $C'$. For the deuteron wave function, we use the standard expression \begin{equation} \label{psi} \Psi_d[{\bf r};{\bm\sigma}_n,{\bm\sigma}_p] = {\textstyle \frac{1}{\sqrt{4\pi}r}}\bigl(u(r) + {\textstyle \frac{1}{2\sqrt{2}}}w(r)\, S_{12}[\hat{r};{\bm\sigma}_n,{\bm\sigma}_p]\bigr), \end{equation} where $u$ and $w$ are the radial wave functions for the $S$ and $D$ waves, and $S_{12}[\hat{n};{\bf v}_1,{\bf v}_2] = 3({\bf v}_1\hat{n})({\bf v}_2\hat{n}) - ({\bf v}_1{\bf v}_2)$.
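Before working out the spin algebra, it is instructive to check the basic properties of the tensor operator $S_{12}$ numerically. The sketch below (a numpy illustration, not part of the formalism itself) realizes $S_{12}$ on the two-nucleon spin space with Pauli matrices and verifies that it is Hermitian and traceless and that it annihilates the spin singlet, so the $D$-wave term of $\Psi_d$ acts entirely within the spin-triplet sector, as it must for the deuteron:

```python
import numpy as np

# Pauli matrices; first tensor factor = neutron, second = proton
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
sn = [np.kron(s, np.eye(2)) for s in sig]   # sigma_n acting on |n> (x) |p>
sp = [np.kron(np.eye(2), s) for s in sig]   # sigma_p

def S12(nhat):
    """Illustrative tensor operator S12 = 3(sigma_n.n)(sigma_p.n) - sigma_n.sigma_p."""
    a = sum(nhat[i]*sn[i] for i in range(3))
    b = sum(nhat[i]*sp[i] for i in range(3))
    return 3*(a @ b) - sum(sn[i] @ sp[i] for i in range(3))

T = S12([0.0, 0.0, 1.0])
assert np.allclose(T, T.conj().T) and abs(np.trace(T)) < 1e-12

# S12 annihilates the spin singlet, so the w(r) term of Psi_d lives entirely
# in the spin-triplet sector; along the quantization axis its triplet
# eigenvalues are (2, 2, -4), and the remaining (singlet) eigenvalue is 0.
singlet = np.array([0, 1, -1, 0], complex)/np.sqrt(2)   # (|ud> - |du>)/sqrt(2)
assert np.allclose(T @ singlet, 0)
assert np.allclose(np.linalg.eigvalsh(T), [-4, 0, 2, 2])
```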
After substituting expressions~(\ref{mi}) and~(\ref{psi}) into Eqs.~(\ref{msmd}) and (\ref{mc}), and performing some spin algebra with the noncommuting operators $M_n$, $M_p$, and $\Psi_d$, one gets rather complicated general formulas for the $pd$ amplitudes $M^{(s)}$, $M^{(d)}$, and $M^{(c)}$ expressed through the input $NN$ amplitudes $A_i,B_i,C_i,C'_i,G_i,H_i$ ($i=n,p$) and the deuteron form factors $S_0^{(0)},S_0^{(2)},S_2^{(1)},S_2^{(2)}$. To simplify the further derivation, one can exploit the smallness of the spin-dependent $NN$ amplitudes (denoted collectively $\mathcal{B}_i$) compared to the spin-independent ones ($A_i$) at high energies, as well as the smallness of the deuteron $D$-wave $w$ compared to the $S$-wave $u$~\cite{PKYAF}. Thus, the terms containing products $\mathcal{B}_i^k w^l$ with $k+l \geqslant 3$ can be dropped from the expressions for the amplitudes $M^{(s)}$, $M^{(d)}$, and $M^{(c)}$ under definite conditions. In fact, the ratio of the spin-dependent amplitudes $\mathcal{B}_i$ to the spin-independent ones $A_i$ decreases strongly as the energy rises, so that such an approximation in the $pd$ amplitudes, being quite accurate at intermediate energies $T_p \sim 1$ GeV, can become unsatisfactory at lower energies $T_p \sim 100$ MeV. This observation has nothing to do with the validity of the Glauber model itself at such lower energies, but it should be kept in mind when carefully comparing the present version of the Glauber model with experimental data for spin analyzing powers (especially the tensor ones, which are more sensitive to fine spin-dependent effects) in Sec.~\ref{results}. After the above simplification, Eqs.~(\ref{msmd}) and (\ref{mc}) can be easily integrated over $d^3r$.
In doing this, we make use of the deuteron form factor, which is defined as \begin{eqnarray} \label{s} S[{\bf q};{\bm\sigma}_n,{\bm\sigma}_p] &=& {\textstyle \int} d^3 r \, e^{i{\bf q}{\bf r}}\,|\Psi_d[{\bf r};{\bm\sigma}_n,{\bm\sigma}_p]|^2 = \nonumber \\ &=& S_0(q) - {\textstyle\frac{1}{2\sqrt{2}}} S_2(q) \, S_{12}[\hat{q};{\bm\sigma}_n,{\bm\sigma}_p]. \end{eqnarray} It is convenient to divide the monopole and quadrupole form factors, $S_0$ and $S_2$, into two parts corresponding to different powers of the $D$-wave function $w$, i.e., \begin{equation} \label{s0s2} S_0(q) = S_0^{(0)}(q) + S_0^{(2)}(q), \ \ \ S_2(q) = S_2^{(1)}(q) + S_2^{(2)}(q), \end{equation} where \begin{eqnarray} \label{s00} S_0^{(0)}(q) &=& {\textstyle \int\limits_0^{\infty}}dr \, u^2(r)\, j_0(qr), \qquad \quad S_0^{(2)}(q) = {\textstyle \int\limits_0^{\infty}}dr \, w^2(r)\, j_0(qr), \nonumber \\ S_2^{(1)}(q) &=& 2 {\textstyle \int\limits_0^{\infty}}dr \, u(r)w(r)\, j_2(qr), \ \ \ S_2^{(2)}(q) = - 2^{-1/2} {\textstyle \int\limits_0^{\infty}}dr \, w^2(r)\, j_2(qr). \end{eqnarray} Eventually, using the expansion~(\ref{ma}) for the total $pd$ amplitude $M$, one obtains the explicit expressions for all 12 invariant $pd$ amplitudes through the 12 invariant input $NN$ amplitudes and the different components of the deuteron form factor (for the final formulas and the details of the analytic ${\bf q'}$ integration in the double-scattering amplitudes, see Appendixes~\ref{A} and \ref{B}, respectively). Having these interrelations and the proper two-body input in hand, one can straightforwardly calculate the $pd$ differential cross section and all polarization observables on the basis of the refined Glauber model. \subsection{Parametrization of the $\bm{NN}$ amplitudes and deuteron wave function} \label{param} The Glauber model deals with $pd$ and $NN$ amplitudes defined in the laboratory frame. However, it is more convenient to treat the $NN$ helicity amplitudes in the two-nucleon center-of-mass frame.
It is easy to show that the laboratory amplitudes $A,B,C,G,H$ at small $q$ can be straightforwardly expressed through the conventional helicity amplitudes $N_0,N_1,N_2,U_0,U_2$ (or $\phi_1$--$\phi_5$) as \begin{gather} A \approx N_0 = (\phi_3 + \phi_1)/2, \ \ B \approx -U_0 = (\phi_3 - \phi_1)/2, \nonumber \\ C \approx iN_1 = i\phi_5, \nonumber \\ G \approx (U_2 - N_2)/2 = \phi_2/2, \ \ H \approx (U_2 + N_2)/2 = \phi_4/2. \label{anu} \end{gather} Here, in making the appropriate approximations, we do not go beyond the diffraction model. It was also demonstrated \cite{SOR} that the amplitude $C'$ (see Eq.~(\ref{mi})) in the high-energy small-angle limit differs from the amplitude $C$ only by a relativistic correction, i.e., \begin{equation} \label{c'} C' \approx C + (q/2m)N_0. \end{equation} Moreover, both amplitudes $C$ and $C'$ are small at high energies in comparison to the other amplitudes, so that the above correction hardly plays a significant role, but it should still be included for consistency. All the helicity $pp$ and $pn$ amplitudes at the energy $T_p = 1$ GeV are displayed in Fig.~\ref{nnamp}. These amplitudes are built in the present work on the basis of the recent PSA \cite{SAID}, and we used a special code \cite{PANN} to reconstruct the $pp$ and $pn$ helicity amplitudes from the PSA data. As is clearly seen from Fig.~\ref{nnamp}, the amplitude $N_0$ significantly exceeds all the other helicity amplitudes. It is also clearly seen that the corresponding $pp$ and $pn$ amplitudes differ significantly from each other, while in early works on the diffraction approach they were chosen to be the same for the sake of simplicity. \begin{figure} \begin{center} \resizebox{0.7\columnwidth}{!}{\includegraphics{nnamp.eps}} \end{center} \caption{Combinations of the $NN$ helicity amplitudes (in units $\sqrt{\rm mb}$/GeV) which correspond to the laboratory $NN$ amplitudes used in our calculations (see Eq.~(\ref{anu})).
The $pp$ amplitudes are shown in column (a), the $pn$ amplitudes are given in column (b). The dashed lines correspond to the real parts of the amplitudes, while the solid lines represent their imaginary parts.} \label{nnamp} \end{figure} To parametrize the $NN$ helicity amplitudes, it is very convenient to employ a Gaussian series representation with an explicit separation of the behavior near $q = 0$: \begin{gather} N_0(q) = {\textstyle \sum\limits_{j=1}^n}\, C_{a,j}\exp(-A_{a,j}\,q^2), \ \ \ U_0(q) = {\textstyle \sum\limits_{j=1}^n}\, C_{b,j}\exp(-A_{b,j}\,q^2), \nonumber \\ N_1(q) = q\,{\textstyle \sum\limits_{j=1}^n}\, C_{c,j}\exp(-A_{c,j}\,q^2), \nonumber \\ (U_2(q)-N_2(q))/2 = {\textstyle \sum\limits_{j=1}^n}\, C_{g,j}\exp(-A_{g,j}\,q^2), \nonumber \\ (U_2(q)+N_2(q))/2 = q^2\, {\textstyle \sum\limits_{j=1}^n}\, C_{h,j}\exp(-A_{h,j}\,q^2). \label{nug} \end{gather} Here the subscripts $a,b,c,g,h$ in the parameters $C,A$ denote the respective laboratory $NN$ amplitudes (see Eq.~(\ref{anu})).\footnote{In the explicit calculations we used two different expansions (and two sets of parameters $C,A$) for each helicity amplitude, i.e., one for its real part and one for its imaginary part. Here, just the general forms which fit both real and imaginary parts of the amplitudes are given for simplicity.} In our calculations we took $n = 5$, i.e., five Gaussian terms in all the above sums. With this choice, we found that the Gaussian-approximated $NN$ amplitudes are very close to the exact ones in the forward hemisphere~\cite{PKYAF}. The visible deviations begin only at large angles, where the Glauber model demands a fast vanishing of all the underlying amplitudes. The rise in magnitude of the true $pp$ helicity amplitudes is due to the Pauli principle, according to which the whole $pp$ amplitude must be antisymmetrized.
This antisymmetrization is essential in large-angle $pd$ scattering only through the one-nucleon exchange mechanism, so the diffraction model, being derived for forward-angle scattering, does not account for this exchange mechanism. On the other hand, the charge-exchange process, which is responsible for the rise of the $np$ helicity amplitudes at large angles, can contribute to $pd$ elastic scattering already at rather forward angles through the double charge exchange, and thus the latter mechanism is included in our formalism explicitly. For the deuteron wave function we employed the high-precision $NN$ potential model CD-Bonn~\cite{CDB}. To parametrize the $S$- and $D$-wave components of this function we have also employed the Gaussian representation (with an additional factor $r^n$ to reproduce the behavior near the origin): \begin{equation} \label{uwg} u(r) = r \, {\textstyle \sum\limits_{j=1}^m}\, C0_j\exp(-A0_j\,r^2), \ \ \ w(r) = r^3 \, {\textstyle \sum\limits_{j=1}^m}\, C2_j\exp(-A2_j\,r^2). \end{equation} In our calculations we have chosen $m=5$. With this number of terms, the approximated deuteron wave functions coincide to high accuracy with the exact ones from the origin up to large distances ($r_{NN} \simeq 20$ fm).
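The main payoff of the Gaussian representation is that the form-factor integrals of Eq.~(\ref{s00}) become fully analytic. As a quick numerical illustration (a sketch with hypothetical two-term parameters, not the actual CD-Bonn fit), one can compare direct quadrature of $S_0^{(0)}(q)=\int_0^\infty u^2(r)\,j_0(qr)\,dr$ with its closed Gaussian form:

```python
import numpy as np

# Hypothetical two-term parameters for a toy S wave u(r) = r * sum_j C_j exp(-A_j r^2);
# the actual calculation uses m = 5 terms fitted to the CD-Bonn wave function.
C = np.array([1.0, -0.3])
A = np.array([0.25, 1.5])   # fm^-2

def S00_closed(q):
    # Closed form: sum_ij C_i C_j sqrt(pi)/(4 lam^{3/2}) exp(-q^2/(4 lam)), lam = A_i + A_j
    lam = A[:, None] + A[None, :]
    return np.sum(np.outer(C, C)*np.sqrt(np.pi)/(4*lam**1.5)*np.exp(-q**2/(4*lam)))

def S00_quad(q, rmax=25.0, n=200001):
    # Direct trapezoidal quadrature of int_0^inf u^2(r) j0(qr) dr
    r = np.linspace(0.0, rmax, n)
    u = r*sum(c*np.exp(-a*r**2) for c, a in zip(C, A))
    j0 = np.sinc(q*r/np.pi)          # spherical Bessel j0(x) = sin(x)/x, j0(0) = 1
    f = u**2*j0
    h = r[1] - r[0]
    return h*(f.sum() - 0.5*(f[0] + f[-1]))

for q in (0.0, 0.5, 1.0, 2.0):       # q in fm^-1
    assert abs(S00_closed(q) - S00_quad(q)) < 1e-6
```

The same Gaussian algebra underlies the analytic ${\bf q'}$ integration of the double-scattering terms (Appendix~\ref{B}).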
With the above parametrization of the deuteron radial wave functions the form factors defined in Eq.~(\ref{s00}) take the forms \begin{eqnarray} \label{sg} S_0^{(0)}(q) &=& {\textstyle \sum\limits_{i,j=1}^m}\, C0_i C0_j {\textstyle\frac{\sqrt{\pi}}{4\lambda_{00,ij}^{3/2}}}\exp(-x_{00,ij}), \nonumber \\ S_0^{(2)}(q) &=& {\textstyle \sum\limits_{i,j=1}^m}\, C2_i C2_j {\textstyle\frac{\sqrt{\pi}}{16\lambda_{22,ij}^{7/2}}}(4x_{22,ij}^2-20x_{22,ij}+15)\exp(-x_{22,ij}), \nonumber \\ S_2^{(1)}(q) &=& {\textstyle \sum\limits_{i,j=1}^m}\, C0_i C2_j {\textstyle\frac{\sqrt{\pi}}{2\lambda_{02,ij}^{5/2}}}x_{02,ij}\exp(-x_{02,ij}), \nonumber \\ S_2^{(2)}(q) &=& {\textstyle \sum\limits_{i,j=1}^m}\, C2_i C2_j {\textstyle\frac{\sqrt{2\pi}}{16\lambda_{22,ij}^{7/2}}}(2x_{22,ij}^2-7x_{22,ij})\exp(-x_{22,ij}), \end{eqnarray} where $\lambda_{kl,ij}=Ak_i+Al_j$, $x_{kl,ij} = q^2/(4\lambda_{kl,ij})$, and $k,l = 0,2$. \section{Results} \label{results} Using the above refined Glauber model we analyzed the $pd$ differential cross sections as well as proton and deuteron analyzing powers at three intermediate energies: $T_p = 250$ and $440$ MeV and 1 GeV.\footnote{For the deuteron analyzing powers which are measured in $dp$ scattering these are the equivalent proton incident energies in the inverse kinematics, i.e., $T_p=T_d/2$.} These energies were chosen because there is a considerable amount of experimental data on $pd$ elastic observables in these energy regions \cite{EXPAD4,EXP250,EXPAD2,EXPDS4,EXPAP392,EXPDS1,EXPAD1}. Besides that, the two lower energies are appropriate to compare in detail the predictions of our model with exact Faddeev results. We start with the energy $T_p = 250$ MeV because the realistic Faddeev calculations are well grounded for this energy. Results for $pd$ differential cross section and proton analyzing power at $T_p = 250$ MeV are represented in Fig.~\ref{ds-ay-250}. 
We have also calculated the deuteron vector and tensor analyzing powers at the equivalent proton energy $T_p = 250$ MeV. However, the exact Faddeev results and experimental data for these observables are available in the literature only for the slightly lower energy $T_p = 200$ MeV. Our separate comparison of some experimental data at $T_p = 200$ and $250$ MeV has shown that they are very close to each other. So, our predictions at $T_p = 250$ MeV in comparison with the exact three-body results and experimental data at $T_p = 200$ MeV are displayed in Fig.~\ref{apd-250}. In addition, we show the results of the refined Glauber model at $T_p = 440$ MeV (see Fig.~\ref{ds-ay-440}). The Faddeev calculations with a fully realistic $NN$ interaction are not so reliable at this energy; thus, we restrict ourselves to the differential cross section and the proton analyzing power. We compared our result for the differential cross section at the energy $T_p = 440$ MeV with the result of the Faddeev calculation at the same energy and with experimental data at $T_p = 425$ MeV. For the comparison with our result for the proton analyzing power at $T_p = 440$ MeV, we took the existing (to date) Faddeev result and experimental data at the slightly lower energy $T_p = 392$ MeV. \begin{figure} \begin{center} \resizebox{0.5\columnwidth}{!}{\includegraphics{ds-ay-250.eps}} \end{center} \caption{ Differential cross section (a) and proton analyzing power (b) in $pd$ elastic scattering at the incident energy $T_p=250$ MeV. The solid lines represent the results obtained within the refined Glauber model, the dotted lines show the single-scattering contribution only, while the dashed lines correspond to predictions of the exact Faddeev calculations \cite{EXP250} with the $NN$ potential CD-Bonn.
Experimental data (squares) are taken from Ref.~\cite{EXP250}.} \label{ds-ay-250} \end{figure} \begin{figure} \begin{center} \resizebox{0.9\columnwidth}{!}{\includegraphics{apd-250.eps}} \end{center} \caption{Deuteron vector (a) and tensor (b), (c) analyzing powers at the equivalent proton energy $T_p=250$ MeV. For the notations, see Fig.~\ref{ds-ay-250}. Results of the Faddeev calculations and experimental data are taken from Ref.~\cite{EXPAD2} (for the energy $T_p=200$ MeV).} \label{apd-250} \end{figure} \begin{figure} \begin{center} \resizebox{0.5\columnwidth}{!}{\includegraphics{ds-ay-440.eps}} \end{center} \caption{Differential cross section (a) and proton analyzing power (b) in $pd$ elastic scattering at the incident energy $T_p=440$ MeV. For the notations, see Fig.~\ref{ds-ay-250}. Results of Faddeev calculations are taken from Refs.~\cite{EXPAD4} ($440$ MeV) and \cite{EXPAP392} ($392$ MeV), experimental data --- from Refs.~\cite{EXPDS4} ($425$ MeV) and \cite{EXPAP392} ($392$ MeV).} \label{ds-ay-440} \end{figure} Besides the comparison between the refined Glauber model predictions and exact three-body Faddeev results, it would be highly interesting to compare our results with existing experimental data at the higher energy $T_p = 1$ GeV which is more traditional for the diffraction model. This comparison has been made for the differential cross section as well as for deuteron vector and tensor analyzing powers. In Fig.~\ref{ds-apd-1000}, the predictions of our model together with respective experimental data are displayed. \begin{figure} \begin{center} \resizebox{0.9\columnwidth}{!}{\includegraphics{ds-apd-1000.eps}} \end{center} \caption{Differential cross section (a) and deuteron vector (b) and tensor (c), (d) analyzing powers in $dp$ elastic scattering at the equivalent proton energy $T_p=1$ GeV calculated within the refined Glauber model. Dotted lines show the contribution of single scattering only, solid lines represent the full calculation. 
Experimental data (squares) are taken from Refs.~\cite{EXPDS1} and \cite{EXPAD1}.} \label{ds-apd-1000} \end{figure} It is clearly seen from Figs.~\ref{ds-ay-250}--\ref{ds-apd-1000} that our results obtained within the refined Glauber model are, in general, in very reasonable agreement with both the exact three-body calculations and the experimental data at transferred momenta squared $|t| \lesssim 0.35$ (GeV/$c)^2$, for the differential cross sections as well as for the vector and tensor analyzing powers.\footnote{The agreement for the tensor analyzing powers at rather low energies ($T_p \simeq 250$ MeV) is not as good as for the differential cross sections and vector analyzing powers (see Figs.~\ref{ds-ay-250} and \ref{apd-250}). This fact is very likely related to our simplifying assumption about the relative smallness of the spin-dependent $NN$ amplitudes in comparison to the large spin-independent ones (see the end of Sec.~\ref{gengl}), and not to the validity of the Glauber approximation itself.} At first glance, this poses an interesting puzzle, because the good agreement with the exact Faddeev calculations is seen in the region where the double scattering (in the Glauber model) dominates. However, instead of two purely on-shell and no-recoil scatterings of the incident proton off the two nucleons in the deuteron within the Glauber model framework, the Faddeev calculations include many fully off-shell rescatterings with full account of recoil effects. We will discuss in detail some possible physical reasons for such an amazing agreement in the next section. Moreover, in Figs.~\ref{ds-ay-250}--\ref{ds-ay-440} one can see a general new trend: in those kinematical regions (at larger $|t|$) where the refined diffraction model deviates essentially from the exact Faddeev theory, the exact $3N$ results also begin to deviate from the experimental data. This raises another interesting question to answer.
\section{Discussion} \label{discuss} It would be useful to arrange the general discussion of the results obtained in this paper in a few separate points. (i) The fully analytical formulas which relate all 12 invariant $pd$ amplitudes to the accurate input $pp$ and $pn$ helicity amplitudes (see Appendixes~\ref{A} and \ref{B}) allow us not only to greatly simplify all the numerical calculations for the $pd$ spin observables but also to develop an efficient and convenient algorithm for solving an important inverse scattering problem (at fixed energy). This inverse problem can be formulated as follows: (a) Having precise intermediate-energy $nd$ spin observables and the differential cross section, and taking the respective $np$ helicity amplitudes at the same energy as a well-established input, one can extract the poorly known neutron-neutron scattering amplitudes. (b) Alternatively, having in our possession accurate $pd$ experimental data in the energy region $T_p > 1.1$ GeV, we can find by inversion the proton-neutron scattering amplitudes, which are still poorly known at these energies. Surely, before doing this inversion, a separate study should be done to establish the real sensitivity of the $pd$ cross sections and analyzing powers to the input $pn$ amplitudes, taking into consideration the experimental error corridor. So, such an inversion opens a way to finding, in principle, the accurate $nn$ (or $pn$) scattering amplitudes from precise $nd$ or $pd$ experimental data. (ii) Our numerous calculations performed in this work on the basis of the refined Glauber model have been compared with the respective exact Faddeev $3N$ calculations, with mostly the same input on-shell $NN$ amplitudes, for the differential cross section and the vector and tensor analyzing powers. For the numerous spin-dependent observables, this was done for the first time.
This direct comparison has clearly demonstrated an amazingly good agreement between the results of the refined diffraction model and the exact $3N$ calculations, even at rather low energies $T_p \simeq 250$ MeV. The agreement gets even more impressive as the collision energy rises. It should be stressed here that we observe this nice agreement in the area where the double scattering in the Glauber model approach becomes dominant. This implies, among other things, that the severe approximations made in the Glauber approach just in the double-scattering treatment \cite{G1} really work even at rather low energies. Our conclusion should be confronted with the results of the previous work \cite{ELSTER}, where a similar comparison was made between the exact $3N$ Faddeev calculations and the conventional Glauber model predictions for intermediate-energy $Nd$ scattering. In that work, both theoretical approaches were based on the simple central $NN$ potential MT-III (employed to calculate the input $NN$ amplitude for the Glauber model), so the comparison between the predictions was performed for the differential and total cross sections only. The authors of Ref.~\cite{ELSTER} found that, in the case of the model $NN$ potential MT-III, the Glauber model results do not reproduce the exact $3N$ results for the differential cross section at $T_N \simeq 200$ MeV, and the predictions of the two approaches become more similar only at higher energies $T_N \gtrsim 1$ GeV, as should be expected. Nevertheless, a fair agreement between the two approaches was found for the single-scattering terms only, while the Glauber on-shell double-scattering correction was shown to be insufficient in comparison to the Faddeev second-order rescattering correction.
Thus, the general conclusion of Ref.~\cite{ELSTER} was that the Glauber and fully converged Faddeev results \textit{do not coincide} beyond the very forward angles (where the single scattering dominates), even at the highest energy considered ($T_N = 2$ GeV). However, when confronting both series of results, one should keep in mind that the model $NN$ potential MT-III does not reproduce the empirical $NN$ scattering amplitudes in the higher partial waves $l \geq 1$, and thus does not reproduce the total $NN$ amplitudes even at $T_N = 250$ MeV; see Fig.~\ref{nn-mt3}. It should be stressed that the Glauber approach essentially exploits the characteristic features of just the empirical $NN$ amplitudes, and with other types of input $NN$ amplitudes the contributions of the neglected terms may become much larger. In particular, the strong sensitivity of the Glauber model results for $pd$ scattering (especially in the diffraction minimum) to the ratio of the real to imaginary parts of the $NN$ amplitudes is well known (see, for example, \cite{FDS}). Due to the numerous inelastic processes at $T_N > 300$ MeV, the realistic $NN$ potential has to have an imaginary part rising with energy. This imaginary part of the $NN$ potential leads to an $NN$ scattering amplitude with an enhanced imaginary part, while the amplitude for the model MT-III potential has a very small imaginary part that decreases strongly with rising energy (see Fig.~\ref{nn-mt3}, upper row). A second but even more important point is seen from the comparison of the model $NN$ differential cross sections (for the MT-III potential) with the realistic ones (see Fig.~\ref{nn-mt3}, lower row). The falloff rates of the two types of cross sections (as functions of the momentum transfer squared) are completely different, so the effective radius of the $NN$ interaction in the realistic case appears to be much shorter than that for the model MT-III interaction.
Indeed, the effective radius for the MT-III potential is $r_{NN} \simeq 2$ fm or even more.\footnote{If one defines this radius as the value of $r_{NN}$ beyond which the potential can be practically neglected.} So, when analyzing the double-scattering term with such a model input $NN$ potential, and keeping in mind that the average distance between two nucleons in the deuteron is around $4$ fm, one concludes that in this $NN$ model the incident nucleon moves through the target deuteron all the time within the field of the strong nuclear force. That is, we cannot consider the incident nucleon in this schematic model as moving freely between two successive collisions with the nucleons in the deuteron. In the case of the realistic $NN$ interaction, the effective range of the $NN$ force is much shorter compared to the size of the deuteron (this is clearly seen from Fig.~\ref{nn-mt3}), and thus the above assumption of the Glauber model for the estimation of the double-scattering term becomes quite valid. \begin{figure} \begin{center} \resizebox{0.9\columnwidth}{!}{\includegraphics{nn-mt3.eps}} \end{center} \caption{Ratio of real to imaginary parts of the $NN$ spin-independent helicity amplitude (a) and $NN$ differential cross section (b) at different energies of the incident nucleon, derived from the MT-III potential model (dashed lines) and compared with those taken from the PSA \cite{SAID} for $np$ scattering (solid lines).} \label{nn-mt3} \end{figure} An additional argument in favor of the validity of the above Glauber model assumption for the double-scattering term is the good agreement between the diffraction model results and exact $3N$ calculations found for many observables, i.e., vector and tensor analyzing powers as well as differential cross sections. In fact, as is seen from Figs.~\ref{ds-ay-250}--\ref{ds-ay-440}, the agreement for spin observables is evident in the region where the double-scattering term dominates.
But this term includes a strong interference between non-spin-flip, single-spin-flip and double-spin-flip $NN$ helicity amplitudes, so that the behavior of the intermediate propagator of the projectile (moving between two successive collisions) should be of high importance for reproducing all the considered spin observables. (iii) Comparing further the refined Glauber model results with the experimental data and with the exact Faddeev results (see Figs.~\ref{ds-ay-250}--\ref{ds-apd-1000}), one can observe that the region where the diffraction model predictions begin to deviate essentially from the exact $3N$ results almost coincides with that where the latter begin to deviate from the experimental data. In other words, the refined Glauber model reproduces quite properly the results of exact $3N$ calculations just in the region where the Faddeev $3N$ framework properly reflects the underlying $3N$ dynamics, i.e., the dynamics which assumes the validity of the conventional $2N$ and $3N$ force models and involves nucleonic and $\Delta$-isobar degrees of freedom only. From this point of view, the deviation of exact Faddeev results from the accurate experimental data on $pd$ scattering \cite{SEK1} may imply that some hidden degrees of freedom (e.g., dibaryonic, etc.) manifest themselves in large-angle $pd$ scattering. A strong additional argument in favor of just this hypothesis follows from the fact that the above deviation becomes more and more serious as the collision energy rises. According to some previous theoretical and experimental works (see, e.g., \cite{GUR,VIN}), the disagreements at $500$--$1000$ MeV may reach an order of magnitude at large scattering angles. (iv) The last, but not least, problem posed by our Glauber model calculations is related to the amazingly good accuracy of the diffraction model at relatively low energies $T_p \simeq 200$ MeV and at rather large scattering angles.
To solve this puzzle, one should recall that, for the scattering of antiprotons by light nuclei, the validity of the Glauber model was found to begin at energies as low as $50$ MeV \cite{UZ}. The validity at such a low energy is undoubtedly tied to the strong absorption of antiprotons by the nuclear core, so that the central nuclear area (where the nuclear density is still noticeable) is seen by the incident antiproton as a large black disk, on which the diffraction is observed in such experiments. Rather similar physics lies behind intermediate- and high-energy $pd$ scattering. Because the elastic $pd$ cross section at these energies is a rather small fraction of the total cross section, the dominating processes are the inelastic ones (at least at small and middle impact-parameter values), so that the fast incident nucleon leaves the elastic channel with high probability when it passes not very far from the loosely bound target nucleus. Thus, $pd$ elastic scattering at such high energies can be viewed as diffraction of the fast incident particle on the edge of a large black disk, so that the diffraction process can be described as a peripheral collision. This physical picture is schematically represented in Fig.~\ref{disk}. Here the central area (the hatched disk) with radius $r_t = D_t/2$, where $D_t$ is the size of the deuteron ($D_t \simeq 4$ fm), shows the almost-black disk where the incident nucleons drop out of the elastic scattering channel and undergo mainly inelastic scattering (an ``absorption'' from the entrance channel). So, the truly elastic scattering happens mainly at the edge of the hatched disk, inside a ring (shown by the dashed line in Fig.~\ref{disk}) of width $\lambda_i$ (it corresponds to the wave zone in optical diffraction).
Thus, the ratio $\eta_{\sigma} = \sigma_{el}/\sigma_{tot}$ of the elastic scattering cross section to the total one can be roughly estimated as the ratio of the ring area to that of the hatched inner disk, i.e., $\eta_r = \frac{2\pi r_t \lambda_i}{\pi {r_t}^2} = 2 \lambda_i/r_t$. For incident energies $T_N \simeq 100$--$200$ MeV the nucleon wavelength is $\lambda_i \simeq 0.2$--$0.3$ fm, so that the ratio of the areas $\eta_r = 2 \lambda_i/r_t \simeq 0.20$--$0.25$, which is in good qualitative agreement with the measured ratio $\eta_{\sigma} = \sigma_{el}/\sigma_{tot} \simeq 0.15$--$0.20$. From this simple picture one can clearly understand the reasons for the good applicability of the Glauber diffraction model to $pd$ elastic scattering even at energy $T_p \simeq 200$ MeV. \begin{figure} \begin{center} \resizebox{0.33\columnwidth}{!}{\includegraphics{disk.eps}} \end{center} \caption{Illustration of optical diffraction in high-energy $pd$ elastic scattering. The almost-black disk with radius $D_t/2$ (hatched disk) surrounded by the wave zone of width $\lambda_i$ (dashed line) represents the area inside the loosely bound target where inelastic processes dominate. The elastic scattering proceeds mainly in the ring of width $\lambda_i$, so that $\lambda_i/D_t \ll 1$.} \label{disk} \end{figure} As for the observed validity of the Glauber model at rather large transferred momenta ${\bf q}$, it is related basically to the double-scattering term, which dominates in the region beyond the forward diffraction peak. The momentum ${\bf q}$ transferred within the double scattering corresponds to ca. ${\bf q}/2$ for each of the single scatterings entering the double-scattering term. Thus, it is very likely that, although the validity of the eikonal approximation at ${\bf q}/2$ can be broken in a strict sense, the degree of this breaking should increase rather slowly with the rise of $q$.
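The simple geometric estimate above can be checked numerically. The sketch below is a rough order-of-magnitude check, not part of the original analysis: the nucleon wavelength is taken to be the reduced de Broglie wavelength $\hbar c/p$ with the relativistic lab momentum (the precise definition of $\lambda_i$ is not specified in the text), and the constants `HBARC` and `M_N` are standard values assumed here.

```python
import math

HBARC = 197.327   # hbar*c in MeV*fm
M_N = 938.9       # average nucleon mass in MeV (assumed value)

def reduced_wavelength(T):
    """Reduced de Broglie wavelength (hbar*c / p) of a nucleon with
    lab kinetic energy T [MeV], using the relativistic momentum."""
    p = math.sqrt(T * T + 2.0 * T * M_N)   # momentum in MeV/c
    return HBARC / p                       # wavelength in fm

def eta_r(T, r_t=2.0):
    """Geometric estimate eta_r = 2*lambda_i / r_t of the elastic-to-total
    cross-section ratio for a black disk of radius r_t [fm]."""
    return 2.0 * reduced_wavelength(T) / r_t
```

With these assumptions one finds a wavelength of about $0.3$ fm at $T_N = 200$ MeV and $\eta_r$ of a few tenths, consistent in order of magnitude with the estimate quoted above.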
(v) Finally, it is appropriate to discuss here some possible reasons for the observed disagreement between the results of exact $3N$ calculations and experimental data for $pd$ cross sections, and especially for spin observables, at large and backward scattering angles. This topic is also important for improving the diffraction-model description of the experimental data at larger $t$-values. The behavior of the $pd$ differential cross section and spin analyzing powers at large scattering angles shows that, starting with incident energy $T_p \simeq 200$ MeV, the disagreements between exact Faddeev calculations and the respective experimental data increase as the collision energy increases, and the contribution of conventional $3N$ forces (induced by intermediate $\Delta$-isobar generation) does not help in reaching agreement~\cite{SEK1}. So, it seems that this observation makes it pointless to improve the formal aspects of the Glauber model by taking into account the many other effects ignored in the present formulation, e.g., off-shell corrections and relativistic effects such as boosts, etc., because the majority of these effects have already been included in the exact $3N$ calculations \cite{FC250} and likely do not help to reach a good agreement with the data at large angles. It is also important to stress that the experimental differential cross sections at large angles are typically underestimated by present-day theory. This fact, together with the growth of all the disagreements with energy, could imply that the theoretical model does not include some essential degrees of freedom which manifest themselves more and more strongly with rising energy. One can suppose that the most plausible candidates for these d.o.f. ignored in all previous $3N$ calculations (and also in all previous Glauber model results) are quark-meson (or dressed-dibaryon~\cite{AP05,DBM1}) d.o.f.
Indeed, the dressed dibaryon describes the situation when the quark cores of two nucleons overlap strongly (at $r_{NN} \lesssim 1$ fm). According to the modern dibaryon concept~\cite{DBM1,DBM2}, this region corresponds to a strong attraction between the two quark cores due to the appearance of a strong scalar field surrounding the unified six-quark system. In such a picture, the incident nucleon scattered to large angles feels not two well-separated nucleons in the deuteron but one compact quark bag, which can survive, in sharp contrast to the loosely bound deuteron, even at very large transferred momenta. Thus, if we assume for a moment the existence of such a dressed dibaryon in the deuteron with a weight of about $2$--$3\%$~\cite{DBM1,DBM2}, it should be sufficient to strongly enhance the backward scattering of intermediate- and high-energy hadrons by the deuteron. So, a straightforward generalization of the Glauber model can also be made in this direction.\footnote{In doing this, one should consider a direct hadron-dibaryon interaction without the basic approximations of the diffraction model, such as the eikonal one.} \section{Summary} \label{sum} In this work, we presented a comparison between the predictions for $pd$ elastic scattering observables given by the refined Glauber model, exact Faddeev calculations, and experiments. As input for the refined Glauber model, we used fully realistic $NN$ helicity amplitudes which describe the $NN$ observables at intermediate energies (at the level of accuracy of modern PSA) and a high-precision model of the deuteron wave function. For a convenient representation of the deuteron wave functions and $NN$ helicity amplitudes, we employed a special multi-Gaussian expansion which allowed us to perform all the calculations fully analytically. Within the framework of the refined diffraction model we thus calculated the differential cross sections and the spin-dependent observables, i.e., the analyzing powers of the proton and the deuteron.
We found an amazingly good agreement between the results of our refined Glauber model and exact Faddeev calculations up to the transferred-momentum values where the exact $3N$ results begin to deviate essentially from the experimental data. We discussed the possible reasons for such surprising agreement, which extends to rather low energies ($T_p \gtrsim 200$ MeV) and rather large scattering angles. Our general conclusion, derived from the detailed comparisons with exact $3N$ calculations and the very numerous experimental data for $pd$ analyzing powers and scattering cross sections, can be formulated as follows: the Glauber model (in the refined form developed in the present work) turns out to be quite accurate, starting from relatively low energies, for loosely bound target nuclei such as the deuteron. Within its rather wide range of applicability, the refined diffraction model leads to predictions which are, in general, in similar agreement with experimental data as the exact Faddeev calculations. This conclusion should be valid not only for hadron scattering on loosely bound nuclei such as $d$, ${}^6{\rm Li}$, etc., but also for the scattering of hadrons such as $\eta$, $K$ and other mesons on arbitrary nuclei, i.e., in the case of strong absorption of the incident wave by the nuclear core. \acknowledgments The authors are very grateful to Prof. A. Faessler for the kind hospitality at T\"{u}bingen University, where part of this work was done. We gratefully acknowledge partial financial support from RFBR Grants Nos.~08-02-91959 and 07-02-00609 and DFG Grant No.~436 RUS 113/790/0-2.
\section{Introduction} Recent advances in deep reinforcement learning (RL) have demonstrated broad applicability and strong performance in games \cite{mnih:dqn, silver:2017}, continuous control \cite{lillicrap:ddpg}, and robotics \cite{levine:2016}. In these advances, deep neural networks, such as convolutional neural networks, are widely used as powerful function approximators for extracting useful features and enabling complex decision making. For instance, in continuous control tasks, a policy that selects actions given the current state observation can be parameterized by a deep neural network that takes the state observation as input and outputs an action or a distribution over actions. In order to optimize such policies, various policy gradient methods \cite{mnih:2016, schulman:trpo, schulman:ppo, heess:2017}, including both off-policy and on-policy approaches, have been proposed. In particular, the deterministic policy gradient method (DPG), which extends the discrete Q-learning algorithm to continuous action spaces, exploits previous experience or off-policy data from a replay buffer and often achieves better sample efficiency than most existing on-policy policy gradient algorithms. In the recent NIPS 2017 Learning to Run challenge, the deep deterministic policy gradient algorithm (DDPG) \cite{lillicrap:ddpg}, a variant of DPG, was applied by almost all top-ranked teams and achieved very compelling success in a high-dimensional continuous control problem, while on-policy algorithms, including TRPO \cite{schulman:trpo} and PPO \cite{schulman:ppo}, performed much worse with the same amount of data collected. In contrast to deep Q-learning (DQN) \cite{mnih:dqn}, which only learns a value function over a set of discrete actions, DDPG also parameterizes a deterministic policy to select a continuous action, thus avoiding optimization in, or discretization of, the continuous action space.
As an off-policy actor-critic method, DDPG uses Bellman equation updates for the value function and policy gradient descent to directly optimize the actor policy. Unlike DQN, which often applies epsilon-greedy exploration on a set of discrete actions, DDPG requires more sophisticated continuous exploration in the high-dimensional continuous action space. A common practice of exploration in DDPG is to add an uncorrelated Gaussian noise or a correlated Ornstein-Uhlenbeck (OU) process \cite{Uhlenbeck:ou} to the action selected by the deterministic policy. The data collected by this exploration method is then added to a replay buffer used for DDPG training. However, in practice, Gaussian noise may be sub-optimal or misspecified, and the hyper-parameters of the noise process are hard to tune. In this work, we introduce a meta-learning algorithm to directly learn an exploration policy that collects better experience data for DDPG training. Instead of using additive noise on actions, we parameterize a stochastic policy that generates the data used to construct the replay buffer for training the deterministic policy in the DDPG algorithm. This stochastic policy can be seen as an exploration policy, or a teacher policy, that gathers high-quality trajectories enabling better training of the current deterministic policy and the value function. To learn the exploration policy, we develop an on-policy policy gradient algorithm based on the training improvement of the deterministic policy. First, we obtain a collection of exploration data from the stochastic policy and then apply DDPG on this data set to update the value function and the deterministic policy. We then evaluate the updated deterministic policy and, by comparing it with the previous policy, compute the improvement yielded by the data just collected. The policy gradient of the stochastic policy can therefore be computed using the deterministic policy improvement as the reward signal.
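As a concrete illustration, the Ornstein-Uhlenbeck exploration noise mentioned above can be sketched as follows. This is a generic sketch, not code from this work; the parameter values $\theta = 0.15$, $\sigma = 0.2$ are common defaults assumed here.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process dx = theta*(mu - x)*dt + sigma*sqrt(dt)*N(0, 1).
    Produces temporally correlated noise that is added to the action chosen
    by the deterministic actor policy."""

    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.x = np.full(dim, mu, dtype=float)

    def sample(self):
        # Mean-reverting drift toward mu plus a scaled Gaussian increment.
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape))
        self.x = self.x + dx
        return self.x.copy()
```

The exploring action would then be, e.g., `a = np.clip(mu_action + noise.sample(), -1.0, 1.0)` for an action cube $[-1,1]^d$.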
This algorithm adaptively adjusts the exploration policy to generate effective training data for the deterministic policy. We have performed extensive experiments on several classic control and Mujoco tasks, including Hopper, Reacher, Half-Cheetah, Inverted Pendulum, Inverted Double Pendulum and Pendulum. Compared to the default DDPG in OpenAI's baselines \cite{abbeel:parameternoise}, our algorithm demonstrated substantial improvements in terms of sample efficiency. We also compared the default Gaussian exploration with the learned exploration policy and found that the exploration policy tends to visit novel states that are potentially beneficial for training the target deterministic policy. \section{Related Work} The idea of meta learning has been widely explored in different areas of machine learning, under different names such as meta reinforcement learning, life-long learning, learning to learn, and continual learning. Some of the recent work in the reinforcement learning setting includes \cite{duan2016rl, finn2017model, wang2016learning}, to name a few. Our work is related to the idea of learning to learn, but instead of learning optimization hyperparameters we aim to generate high-quality data to better train reinforcement learning agents. Intrinsic rewards, such as prediction gain \cite{bellemare:im}, learning progress \cite{Oudeyer:im}, compression progress \cite{Schmidhuber:im}, and variational information maximization \cite{abbeel:vime, hester2017intrinsically}, have been employed to augment the environment's reward signal in order to encourage the discovery of novel behavior patterns. One limitation of these methods is that the weighting of the intrinsic reward relative to the environment reward must be chosen manually, rather than learned on the fly from interaction with the environment.
Another limitation is that the reshaped reward may not guarantee that the learned policy is the same optimal one as that learned from environment rewards only \cite{ng:policyinvariance}. The problem of exploration has been widely studied in the literature. Beyond the traditional studies based on epsilon-greedy and Boltzmann exploration, there are several recent advances in the setting of deep reinforcement learning. For example, \cite{tang2017exploration} studied count-based exploration for deep reinforcement learning; \cite{stadie2015incentivizing} proposed a new exploration method based on assigning exploration bonuses from a concurrently learned transition model; \cite{hester2013learning} studied a bandit-based algorithm for learning simple exploration strategies in model-based settings; \cite{osband2016deep} used a bootstrapped approach for exploration in DQN, a simple algorithm that is computationally and statistically efficient through the use of randomized value functions \cite{osband2016random}. \section{Reinforcement learning} In this section, we introduce the background of reinforcement learning. We first introduce Q-learning in Section~3.1, and then the deep deterministic policy gradient (DDPG) algorithm, which works for continuous action spaces, in Section~3.2. \subsection{Q-learning} In the standard reinforcement learning setting, an agent takes a sequence of actions in an environment in discrete time and collects a scalar reward per timestep. The objective of reinforcement learning is to learn a policy for the agent that optimizes the cumulative reward over future time. More precisely, we consider an agent acting over time $t\in\{1, \ldots, T\}$. At time $t$, the agent observes an environment state $s_t$ and selects an action $a_t\in A$ according to a policy. The policy can be either a deterministic function $a=\mu(s)$, or, more generally, a conditional probability $\pi(a|s)$.
The agent then observes a new state $s_{t+1}$ and receives a scalar reward value $r_t \in R$. The set $A$ of possible actions can be discrete, continuous or mixed in different tasks. Given a trajectory $\{s_t, a_t, r_t\}_{t=1}^T$, the overall reward is defined as a discounted sum of incremental rewards, $R=\sum_{t=1}^T \gamma^t r_t$, where $\gamma \in [0,1)$ is a discount factor. The goal of RL is to find the optimal policy maximizing the expected reward. Q-learning~\citep{WatkinsPHD1989,WatkinsML1992} is a well-established method that has been widely used. Generally, Q-learning algorithms compute an action-value function, often also referred to as the Q-function, $Q^\ast(s,a)$, which is the expected reward of taking a given action $a$ in a given state $s$ and following an optimal policy thereafter. The estimated future reward is computed based on the current state $s$ or a series of past states $s_t$ if available. The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimal future reward function $Q^\ast$ via a state-action-value function \begin{equation} Q^\ast(s_t,a) = \E [r_t + \gamma\max_{a^\prime} Q^\ast(s_{t+1},a^\prime)], \end{equation} where the expectation is taken w.r.t.\ the distribution of the state $s_{t+1}$ and the reward $r_t$ obtained after taking action $a$. Given the optimal Q-function, the optimal policy greedily selects the actions with the best Q-function values. Deep Q-learning (DQN), a recent variant of Q-learning, uses a deep neural network as the Q-function to automatically extract intermediate features from the state observations, and shows good performance on various complex high-dimensional tasks. Since Q-learning is off-policy, a technique called ``experience replay''~\citep{LinML1992,WawrzynskiNN2009}, which stores past observations from previous trajectories for training, has become a standard step in deep Q-learning.
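The Bellman fixed-point characterization can be illustrated on a tiny MDP (a toy example chosen for this sketch, not tied to any task in the paper): iterating the Bellman optimality backup converges to $Q^\ast$.

```python
import numpy as np

# Toy deterministic MDP with 2 states and 2 actions (0: stay, 1: switch).
# A reward of 1 is collected whenever the transition ends in state 1.
P = np.array([[0, 1],
              [1, 0]])        # P[s, a] = next state
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # R[s, a] = immediate reward
gamma = 0.9

Q = np.zeros((2, 2))
for _ in range(500):
    # Bellman optimality backup: Q(s, a) <- r(s, a) + gamma * max_a' Q(s', a')
    Q = R + gamma * np.max(Q[P], axis=-1)
```

The iteration is a gamma-contraction, so it converges geometrically; here the fixed point is $Q^\ast = \begin{pmatrix} 9 & 10 \\ 10 & 9 \end{pmatrix}$, since the optimal policy earns reward 1 per step, worth $1/(1-\gamma) = 10$.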
Experience replays are stored as a dataset, also known as a replay buffer, $B = \{(s_j, a_j, r_j, s_{j+1})\}$, which contains a set of previously observed state-action-reward-future-state tuples $(s_j, a_j, r_j, s_{j+1})$. Such experience replays are often constructed by pooling the tuples generated by recent policies. With the replay buffer $B$, deep Q-learning follows the following iterative procedure~\citep{MnihNIPSWS2013,mnih:dqn}: start an episode in the initial state $s_0$; sample a mini-batch of tuples $M=\{(s_j, a_j, r_j, s_{j+1})\}\subseteq B$; compute and fix the targets $y_j = r_j + \gamma\max_a Q_{\theta^-}(s_{j+1}, a)$ for each tuple using a recent estimate $Q_{\theta^-}$ (the maximization is only considered if $s_j$ is not a terminal state); update the Q-function by optimizing the following program w.r.t.\ the parameters $\theta$, typically via stochastic gradient descent: \begin{equation} \label{eq:QFuncOpt} \min_\theta \sum_{(s_j, a_j, r_j, s_{j+1})\in M}\left(Q_{\theta}(s_j,a_j) - y_j\right)^2. \end{equation} Besides updating the parameters of the Q-function, each step of Q-learning needs to gather additional data to augment the replay buffer. This is done by performing an action simulation, either by choosing an action at random with a small probability $\epsilon$ or by following the currently estimated strategy $\arg\max_{a} Q_{\theta}(s_t, a)$. This strategy is called the $\epsilon$-greedy policy; it is applied to encourage visiting unseen states for better exploration and to avoid the training getting stuck in local minima. We then observe the reward $r_t$, augment the replay buffer $B$ with the new tuple $(s_t, a_t, r_t, s_{t+1})$, and continue until the episode terminates or reaches an upper bound on the number of timesteps, after which we restart a new episode. When optimizing w.r.t.\ the parameter $\theta$, a recent Q-network is used to compute the target $y_j = r_j + \gamma\max_a Q_{\theta^-}(s_{j+1},a)$.
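The procedure above can be sketched minimally, with a tabular Q standing in for the deep network $Q_\theta$ and a fully random ($\epsilon = 1$) behavior policy filling the replay buffer; the two-state MDP here is a toy stand-in chosen for the sketch.

```python
import random
import numpy as np

nS, nA, gamma, lr = 2, 2, 0.9, 0.5
nxt = [[0, 1], [1, 0]]               # deterministic transitions: nxt[s][a]
rew = [[0.0, 1.0], [1.0, 0.0]]       # reward 1 whenever the next state is 1
Q = np.zeros((nS, nA))

# Fill the replay buffer B with (s, a, r, s') tuples from a random policy.
rng = random.Random(0)
B, s = [], 0
for _ in range(2000):
    a = rng.randrange(nA)
    s2 = nxt[s][a]
    B.append((s, a, rew[s][a], s2))
    s = s2

# Q-learning on samples from the buffer: fix the target y_j = r_j + gamma * max_a Q(s', a),
# then take a stochastic step on the squared loss (Q(s, a) - y_j)^2.
for _ in range(5000):
    s, a, r, s2 = rng.choice(B)
    y = r + gamma * np.max(Q[s2])
    Q[s, a] += lr * (y - Q[s, a])
```

Since the toy environment is deterministic, the sampled updates converge to the exact optimal action-value function.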
\subsection{Deep Deterministic Policy Gradient} For continuous action spaces, it is practically impossible to apply Q-learning directly, because the max operator in the Bellman equation, which finds the optimal $a$, is usually infeasible unless the action space is discretized or special forms of the Q-function are used. Deep deterministic policy gradient (DDPG) \cite{lillicrap:ddpg} addresses this issue by training a parametric policy network together with the Q-function using policy gradients. Specifically, DDPG maintains a deterministic actor policy ${\pi} = \delta(a - \mu(s, \theta^{\pi}))$, where $\mu(s,\theta^{\pi})$ is a parametric function, such as a neural network, that maps a state to an action. We want to iteratively update $\theta^{\pi}$ such that $a=\mu(s, \theta^{\pi})$ gives the optimal action maximizing the Q-function $Q(s,a)$, so that $a= \mu(s,\theta^{\pi})$ can be viewed as an approximate action-argmax operator of the Q-function, and we do not have to perform the action maximization in the high-dimensional continuous space. In training, the critic $Q_\theta(s, a)$ is updated using the Bellman equation as in the Q-learning procedure introduced above, and the actor is updated to maximize the expected reward w.r.t.\ $Q_\theta(s,a)$, $$ \max_{\theta^{\pi}} \big \{ J(\theta^{\pi}) := \E_{s\sim B}[Q_\theta(s, \mu(s, \theta^{\pi}))] \big \}, $$ where $s\sim B$ denotes sampling $s$ from the replay buffer $B$. This is achieved in DDPG using gradient ascent: $$ \theta^{{\pi}} \gets \theta^{\pi} + \eta \nabla_{\theta^{\pi}} J(\theta^{\pi}), $$ where \begin{align*} \nabla_{\theta^{\pi}} J(\theta^{\pi}) & = \E_{s\sim B}\big[\nabla_a Q_\theta(s, a)\big|_{a = \mu(s, \theta^{\pi})} \, \nabla_{\theta^{\pi}} \mu(s, \theta^{\pi})\big]. \end{align*} In DDPG, the actor $\mu(s, \theta^{\pi})$ and the critic $Q_\theta(s,a)$ are updated alternately until convergence.
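The chain rule underlying this actor update can be checked in an entirely toy setting (the critic and actor forms below are illustrative assumptions, chosen so the gradients are analytic): take $Q(s,a) = -(a-s)^2$ and $\mu(s,\theta^{\pi}) = \theta^{\pi} s$, for which the optimal actor parameter is $\theta^{\pi} = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eta = -2.0, 0.05                   # actor parameter and learning rate

for _ in range(500):
    s = rng.uniform(-1.0, 1.0, size=64)   # mini-batch of states (stand-in buffer)
    a = theta * s                         # deterministic actor mu(s, theta)
    dQ_da = -2.0 * (a - s)                # grad_a Q(s, a) evaluated at a = mu(s, theta)
    dmu_dtheta = s                        # grad_theta mu(s, theta)
    # deterministic policy gradient: E_s[ grad_a Q * grad_theta mu ]
    theta += eta * np.mean(dQ_da * dmu_dtheta)
```

The gradient ascent drives `theta` to the maximizer of $\E_s[Q(s,\mu(s,\theta))]$, i.e. to 1, without ever performing an explicit argmax over actions.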
As in Q-learning, the performance of DDPG critically depends on a proper choice of the exploration policy ${\pi_e}$, which controls what data to add at each iteration. However, in high-dimensional continuous action spaces, exploration is highly nontrivial. In the current practice of DDPG, the exploration policy ${\pi_e}$ is often constructed heuristically by adding a certain type of noise to the actor policy to encourage stochastic exploration. A common practice is to add an uncorrelated Gaussian or a correlated Ornstein-Uhlenbeck (OU) process \cite{Uhlenbeck:ou} to the action selected by the deterministic actor policy, that is, $$ a = \mu(s, \theta^{\pi}) ~ + ~ \mathcal{N}(0,\sigma^2).$$ Since DDPG is off-policy, the exploration can be addressed independently of the learning. It is still unclear whether these exploration strategies always lead to desirable learning of the deterministic actor policy. \begin{algorithm} \caption{Teacher: Learn to Explore} \label{alg:alg} \begin{algorithmic}[1] \STATE Initialize ${\pi_e}$ and ${\pi}$. \STATE Draw $D_1$ from ${\pi}$ to estimate the reward $\hat R_{{\pi}}$ of ${\pi}$. \STATE Initialize the Replay Buffer $B = D_1$. \FOR{iteration $t$} \STATE Generate $D_0$ by executing the teacher's policy ${\pi_e}$. \STATE Update the actor policy ${\pi}$ to $\pi'$ using DDPG based on $D_0$: $\pi' \leftarrow \mathrm{DDPG}({\pi}, D_0)$. \STATE Generate $D_1$ from ${\pi}'$ and estimate the reward of ${\pi}'$. Calculate the meta reward: $\hat{\mathcal{R}}(D_0) = \hat R_{{\pi}'} - \hat R_{{\pi}}$. \STATE Update the teacher's policy ${\pi_e}$ with the meta policy gradient \\ $$ \theta^{{\pi_e}} \gets \theta^{{\pi_e}} + \eta \nabla_{\theta^{{\pi_e}}} \log \mathcal P(D_0 | {\pi_e}) \hat{\mathcal{R}}(D_0) $$ \STATE{ Add both $D_0$ and $D_1$ into the Replay Buffer $B \gets B \bigcup D_0 \bigcup D_1$. } \STATE Update ${\pi}$ using DDPG based on the Replay Buffer, that is, ${\pi} \gets \mathrm{DDPG}({\pi}, ~ B)$. Compute the new $\hat R_{{\pi}}$.
\ENDFOR \end{algorithmic} \end{algorithm} \section{Learning to Explore} We aim to construct exploration strategies that are potentially better than the default Gaussian or OU exploration. In practice, e.g., in the Mujoco control tasks, the action space is bounded by a high-dimensional continuous cube $[-1,1]^d$. Therefore, it is quite possible that the Gaussian assumption on the exploration noise is not suitable when the action selected by the actor policy is close to the corners or boundaries of this cube. Furthermore, it is also possible that the actor policy gets stuck in a local basin in the state space and thus cannot escape even with random Gaussian noise added. All existing exploration strategies seem to be based on the implicit assumption that the exploration policy ${\pi_e}$ should stay close to the actor policy ${\pi}$, but with some additional stochastic noise. However, this assumption may not be true. Instead, it may be beneficial to make ${\pi_e}$ significantly different from the actor ${\pi}$ in order to explore the space that has not been explored previously. Even in the case of Gaussian exploration noise, the magnitude of the noise is a critical parameter that may influence the performance significantly. Therefore, it is of great importance to develop a systematic approach that adaptively learns the exploration strategy, instead of using simple heuristics. Since DDPG is an off-policy learning algorithm and the exploration is independent of the learning, we can decouple the exploration policy from the actor policy. We hope to construct an exploration policy which generates novel experience replays that are more beneficial for training the actor policy. To do so, we introduce a meta-reinforcement-learning approach that learns an exploration policy so that it most efficiently improves the training of the actor policy.
\subsection{A view from MDP} To better understand our method, we can formulate an MDP (Markov decision process) for the interaction between the exploration agent (or \textit{teacher}, with policy ${\pi_e}$) and the exploitation agent (or \textit{student}, with policy ${\pi}$). The state space $\mathcal{S}$ is defined as the collection of (exploitation) policies ${\pi}$, and an action in the action space $\mathcal{A}$ is defined as the set of rollouts $D_0$ generated by executing the meta exploration policy ${\pi_e}$. Any \textbf{Policy Updater} can then be defined as a transition function mapping a policy to the next policy, $\mathcal{T} : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$; for example, DDPG is an off-policy \textbf{Policy Updater}. The reward function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow R$ can be defined as a \textbf{Policy Evaluator} that measures the exploitation agent's performance. Furthermore, we define the \textbf{meta-reward} $\mathcal{R}(D_0) = \mathcal{R}_{\pi'} - \mathcal{R}_{\pi}$ to measure the student's performance improvement. To produce a reward, for example, we can make a state transition $(\pi, D_0) \rightarrow \pi'$ with the transition function \textbf{DDPG}, and obtain a Monte Carlo estimate of the reward $\mathcal{R}$ based on the rollouts $D_1$ generated by executing the look-ahead policy $\pi'$. For more details, please refer to Algorithm~\ref{alg:alg}. \subsection{Learning Exploration Policy with Policy Gradient} Our framework can be best viewed as a teacher-student learning framework, in which the exploration policy ${\pi_e}$, viewed as the \emph{teacher}, generates a set of data $D_0$ at each iteration and feeds it into a DDPG agent with an actor policy $\pi$ (the \emph{student}), who learns from the data and improves itself. Our goal is to adaptively improve the teacher ${\pi_e}$ so that it generates the most informative data to make the DDPG learner improve as fast as possible.
In this meta framework, the generation of data $D_0$ can be viewed as the ``action'' taken by the teacher ${\pi_e}$, and the corresponding reward should be defined as the improvement of the DDPG learner using this data $D_0$, \begin{align}\label{j0}\begin{split} \mathcal J({\pi_e}) & = \E_{D_0\sim {\pi_e}}[\mathcal R(D_0)] \\ & = \E_{D_0\sim {\pi_e}} [R_{\mathrm{DDPG}({\pi}, D_0)} - R_{{\pi}}], \end{split} \end{align} where $\pi' = \mathrm{DDPG}({\pi}, D_0)$ denotes a new policy obtained from one or a few steps of DDPG updates from ${\pi}$ based on data $D_0$; $R_{\mathrm{DDPG}({\pi}, D_0)}$ and $R_{{\pi}}$ are the actual cumulative rewards of rollouts generated by policies $\pi' = \mathrm{DDPG}({\pi}, D_0)$ and ${\pi}$, respectively, in the original RL problem. Here we use $\mathcal R(D_0)$ to denote the ``meta'' reward of data $D_0$ in terms of how much it helps the agent's learning progress. Similar to the actor policy, we can parameterize this exploration policy ${\pi_e}$ by $\theta^{{\pi_e}}$. Using the REINFORCE trick, we can calculate the gradient of $\mathcal J({\pi_e})$ w.r.t. $\theta^{{\pi_e}}$: \begin{align}\label{metagrad} \nabla_{\theta^{{\pi_e}}}\mathcal J = \E_{D_0\sim {\pi_e}} \left [ \mathcal R(D_0) \nabla_{\theta^{{\pi_e}}} \log \mathcal P(D_0 | {\pi_e}) \right ] , \end{align} where $\mathcal P(D_0 | {\pi_e})$ is the probability of generating the transition tuples $D_0:=\{s_t, a_t, r_t\}_{t=1}^T$ given ${\pi_e}$. This distribution can be factorized as $$ \mathcal P(D_0 | {\pi_e}) = p(s_0) \prod_{t=0}^T {\pi_e}(a_{t}| s_t) p(s_{t+1}|s_t, a_t), $$ where $ p(s_{t+1}|s_t, a_t)$ is the transition probability and $p(s_0)$ the initial distribution. The dependence on the reward is omitted here. Because $p(s_{t+1}|s_t, a_t)$ does not involve the exploration parameter $\theta^{{\pi_e}}$, by taking the gradient w.r.t.
$\theta^{{\pi_e}}$, we have $$ \nabla_{\theta^{{\pi_e}}} \log \mathcal P(D_0 | {\pi_e}) = \sum_{t=1}^T \nabla_{\theta^{{\pi_e}}} \log {\pi_e}(a_t|s_t). $$ This can be estimated easily on the rollout data. We can also approximate this gradient with sub-sampling for efficiency. To estimate the meta-reward $\mathcal R(D_0)$, we perform an ``exercise move'' by running DDPG ahead for one or a small number of steps: we first calculate a new actor policy ${\pi}' = \mathrm{DDPG}({\pi}, D_0)$ by running DDPG based on data $D_0$; we then simulate from the new policy ${\pi}'$ to get data $D_1$, and use $D_1$ to get an estimate $\hat R_{{\pi}'}$ of the reward of ${\pi}'$. This allows us to estimate the meta-reward by $$ \hat{\mathcal{R}}(D_0) = \hat R_{{\pi}'} - \hat R_{{\pi}}, $$ where $\hat R_{{\pi}}$ is the estimated reward of ${\pi}$, which we obtained in the previous iteration. Once we estimate the meta-reward $\mathcal R(D_0)$, we can update the exploration policy ${\pi_e}$ by following the meta policy gradient in \eqref{metagrad}. This yields the following update rule: \begin{align}\label{update0} \theta^{{\pi_e}} \gets \theta^{{\pi_e}} + \eta \hat{\mathcal{R}}(D_0) \sum_{t=1}^T \nabla_{\theta^{{\pi_e}}} \log {\pi_e}(a_t|s_t). \end{align} After updating the exploration policy, we add both $D_0$ and $D_1$ into a replay buffer $B$ that we maintain across the whole process, that is, $B\gets B\cup D_0 \cup D_1$; we then update the actor policy ${\pi}$ based on $B$, that is, ${\pi} \gets \mathrm{DDPG}({\pi}, ~ B)$. Our main algorithm is summarized in Algorithm~\ref{alg:alg}. It may appear that our meta update adds significant computational demand, especially in requiring $D_1$ to be generated for the purpose of evaluation. However, $D_1$ is used highly efficiently since it is also added into the replay buffer and is subsequently used for updating ${\pi}$. This design helps improve, rather than decrease, the sample efficiency.
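For a Gaussian exploration policy, the update rule above reduces to scaling the summed score function by the estimated meta-reward. A minimal one-dimensional sketch (the linear mean and the fixed meta-reward value are assumptions for illustration; the paper uses an MLP mean and a DDPG look-ahead estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D Gaussian exploration policy: pi_e(a|s) = N(a; theta*s, sigma^2).
theta, sigma, eta = 0.0, 0.5, 0.1

def grad_log_pi(theta, s, a):
    # d/dtheta log N(a; theta*s, sigma^2) = (a - theta*s) * s / sigma^2
    return (a - theta * s) * s / sigma**2

# Collect D0 ~ pi_e over one exploration rollout.
states = rng.uniform(-1.0, 1.0, size=200)
actions = theta * states + sigma * rng.standard_normal(200)

# Stand-in for R(D0) = R_{DDPG(pi, D0)} - R_pi; here a fixed positive scalar.
meta_reward = 1.0

# Meta policy gradient update: theta <- theta + eta * R(D0) * sum of scores.
theta_new = theta + eta * meta_reward * np.sum(
    grad_log_pi(theta, states, actions))
```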
Our framework allows us to explore different parametric forms of ${\pi_e}$. We tested two design choices: i) Motivated by the traditional exploration strategy, we can set ${\pi_e}$ equal to the actor policy plus a zero-mean Gaussian noise whose variance is trained adaptively, that is, ${\pi_e}= \normal(\mu(s, \theta^\pi), ~ \sigma^2 I)$, where $\sigma$ is viewed as the parameter of ${\pi_e}$ and is trained with the meta policy gradient \eqref{update0}. ii) Alternatively, we can take ${\pi_e}$ to be another Gaussian policy that is completely independent of ${\pi}$, that is, ${\pi_e} = \normal(f(s, \theta^f), \sigma^2 I)$, where $f$ is a neural network with parameter $\theta^f$, and $\theta^{{\pi_e}} :=[\theta^f, \sigma]$ is updated by the meta policy gradient \eqref{update0}. We tested both i) and ii) empirically and found that ii) performs better than i). This may suggest that it is beneficial to explore spaces that are far away from the current actor policy (see Figure~\ref{fig:contour}). \section{Experiments} In this section, we conduct comprehensive experiments to understand our proposed meta-exploration-policy learning algorithm and to demonstrate its performance on various continuous control tasks. Two videos are included as supplementary material to illustrate the running results of Pendulum and Inverted Double Pendulum.
\begin{table}[t] \centering \begin{tabular}{l|r} & DDPG and Meta \\\hline Number of Epoch Cycles & 20 \\ Number of Rollout Steps & 200 \\ Number of Training Steps & 50 \\ \end{tabular} \caption{\label{tab:trpo} Common parameter settings for DDPG and Meta in most tasks} \end{table} \begin{figure}[t] \centering \centerline{\includegraphics[width=\columnwidth, trim={0em 2.5em 0 0},clip]{inverted_cmp.png}} \rotatebox{0}{\footnotesize Time Steps ($\times 1000$)} \caption{Comparison between meta exploration policies and DDPG} \label{fig:behavior} \end{figure} \begin{figure*}[htbp] \centering \begin{subfigure}[b]{0.15 \textwidth} \includegraphics[width=\textwidth]{invertedpen-s.png} \end{subfigure} \begin{subfigure}[b]{0.15 \textwidth} \includegraphics[width=\textwidth]{pendulum-s.png} \end{subfigure} \begin{subfigure}[b]{0.15 \textwidth} \includegraphics[width=\textwidth]{inverted-s.png} \end{subfigure} \begin{subfigure}[b]{0.15 \textwidth} \includegraphics[width=\textwidth]{hopper-s.png} \end{subfigure} \begin{subfigure}[b]{0.15 \textwidth} \includegraphics[width=\textwidth]{cheetah-s.png} \end{subfigure} \begin{subfigure}[b]{0.15 \textwidth} \includegraphics[width=\textwidth]{reacher-s.png} \end{subfigure} \caption{Illustrative screenshots of the environments used in our experiments with Meta and DDPG} \end{figure*} \subsection{Experimental Setting} Our implementation is based on OpenAI's DDPG baseline \cite{abbeel:parameternoise} on the GitHub website\footnote{https://github.com/openai/baselines/tree/master/baselines/ddpg}. Our experiments were performed on a server with 8 Tesla-M40-24GB GPUs and 40 Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz processors. The deterministic actor (or student) policy network and $Q$-networks have the same architectures as implemented in the default DDPG baseline, which are multi-layer perceptrons with two hidden layers (64-64).
For the meta-exploration policy (teacher $\pi_e$), we implemented a stochastic Gaussian policy with a mean network represented by an MLP with two hidden layers (64-64) and a log-standard-deviation network, also an MLP with two hidden layers (64-64). To make a fair comparison with the baseline, we set hyper-parameters similar to those of DDPG. The common parameter settings for DDPG and our meta-learning algorithm in most tasks are listed in Table 1. Besides these common ones, our method has some extra parameters: the number of exploration rollout steps (typically 100) for generating exploration trajectories, the number of evaluation steps (typically 200, the same as DDPG's rollout steps) for generating exploitation trajectories used to evaluate the student's performance, the number of training steps (typically 50, aligned with DDPG's training steps) to update the student policy ${\pi}$, and the number of exploration training steps (typically 1) to update the Meta policy ${\pi_e}$. In most experiments, we set the number of cycles per epoch to 20 to align with DDPG's corresponding setting. Tasks such as Half-Cheetah and Inverted Pendulum need more exploration rollout steps (1000) to finish the task, and thus use 2000 evaluation steps, 500 training steps to update the student, and 100 exploration training steps to update the teacher. In OpenAI's DDPG baseline \cite{abbeel:parameternoise}, the total number of interaction steps is 1 million. Here, tasks such as Half-Cheetah, Inverted Pendulum and Inverted Double Pendulum take about 1.5 million steps, Hopper takes 1 million steps, and 0.7 million and 0.9 million steps are sufficient for Reacher and Pendulum, respectively, to achieve convergence. Similar to DDPG, the optimizer we use to update the network parameters is Adam \cite{kingma:adam} with the same actor learning rate of $0.0001$ and critic learning rate of $0.001$, and additionally a learning rate of $0.0001$ for our meta policy.
Similar to DDPG, we adopt {Layer-Normalization \cite{ba:layer-norm}} for our two policy networks and one $Q$-network. \begin{table*}[t] \centering \caption{Reward achieved in different environments} \label{table:finalreward} \begin{tabular}{|l|l|l|} \hline env-id & Meta & DDPG \\ \hline InvertedDoublePendulum-v1 & \textbf{7718 $\pm$ 277} & 2795 $\pm$ 1325 \\ \hline InvertedPendulum-v1 & \textbf{745 $\pm$ 27} & 499 $\pm$ 23 \\ \hline Hopper-v1 & \textbf{205 $\pm$ 41} & 135 $\pm$ 42 \\ \hline Pendulum-v0 & \textbf{-123 $\pm$ 10} & -206 $\pm$ 31 \\ \hline HalfCheetah-v1 & \textbf{2011 $\pm$ 339} & 1594 $\pm$ 298 \\ \hline Reacher-v1 & -12.16 $\pm$ 1.19 & \textbf{-11.67 $\pm$ 3.39} \\ \hline \end{tabular} \end{table*} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{invertedpen.png} \caption{\label{sfig:invertedpen} InvertedPendulum} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{inverted.png} \caption{\label{sfig:inverted} InvertedDoublePendulum} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{hopper.png} \caption{\label{sfig:hopper} Hopper} \end{subfigure} \\ \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{pendulum.png} \caption{\label{sfig:pendulum} Pendulum} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{cheetah.png} \caption{\label{sfig:cheetah} HalfCheetah} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{reacher.png} \caption{\label{sfig:reacher} Reacher} \end{subfigure} \caption{Performance Comparison of Meta and DDPG for Six Continuous Control Tasks.} \label{fig:convergence} \end{figure*} \subsection{Meta Exploration Policy Explores Efficiently} To investigate and evaluate different teacher behaviors, we tested, on Inverted Double Pendulum, the two choices of policy architectures for ${\pi_e}$ listed in Section~4.
In Figure~\ref{fig:behavior}, \texttt{Meta} denotes that we learn an exploration policy that is a Gaussian MLP policy with a network architecture independent of the student's policy. Meta runs consistently better than the DDPG baseline, with relatively high return and sample-efficiency. Usually, the Meta policy learns at the same pace as the student policy; it updates every time from both the student's successes (performance improvements) and failures (negative performance). For more robust policy updates, we may need to consider the trade-off between sample efficiency and sample quality. \begin{figure*}[t] \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{Meta_Teacher_pi0_8_early_10K_c.png} \caption{\label{sfig:meta-teacher-early} Meta-Teacher (early)} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{Meta_Student_pi0_8_early_10K_c.png} \caption{\label{sfig:meta-student-early} Meta-Student (early)} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{DDPG_pi0_8_early_10K_c.png} \caption{\label{sfig:ddpg-status-early} DDPG (early)} \end{subfigure} \\ \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{Meta_Teacher_pi0_8_late_10K_c.png} \caption{\label{sfig:meta-teacher-late} Meta-Teacher (late)} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{Meta_Student_pi0_8_late_10K_c.png} \caption{\label{sfig:meta-student-late} Meta-Student (late)} \end{subfigure} \begin{subfigure}[b]{0.3 \textwidth} \includegraphics[width=\textwidth]{DDPG_pi0_8_late_10K_c.png} \caption{\label{sfig:ddpg-status-late} DDPG (late)} \end{subfigure} \\ \caption{State Visitation Density Contours of Meta and DDPG in Early and Late Training Stages.} \label{fig:contour} \end{figure*} A second exploration policy, denoted \texttt{Meta (variance)} in Figure~\ref{fig:behavior}, takes advantage of the student's learning, combined with a variance network, as ${\pi_e} = {\pi} + N(0, \sigma^2 I)$. Essentially, we are learning an adaptive variance for exploration. Based on the student's performance, the teacher is able to learn to provide training transitions with appropriate noise. The teacher's demonstrations help the student explore different regions of the state space in an adaptive way. From Figure~\ref{fig:behavior}, we can see that the fully independent exploration policy performs better than the more restrictive policy that only adds noise to the actor policy. As we show in Figure~\ref{fig:contour}, the independent exploration policy tends to explore regions that are not covered by the actor policy, suggesting that it is beneficial to perform \emph{non-local exploration}. \subsection{Sample Efficiency in Continuous Control Tasks} We show the learning curves in Figure~\ref{fig:convergence} for six continuous control tasks, each run three times with different random seeds to produce a reliable comparison. Overall, our meta-learning algorithm is able to achieve sample-efficiency with better returns in most of the following continuous control tasks. Notably, in Inverted Pendulum and Inverted Double Pendulum, on average, in about 250 thousand out of 1,500 thousand steps, we are able to achieve a return similar to the best of DDPG. That is about $1/6$ of the baseline's samples. Finally, our average return is about 7718 compared to DDPG's 2795. In Pendulum, we perform clearly better, with a higher average return, and converge faster than DDPG in less than 200 thousand steps. In Half-Cheetah and Hopper, on average, our meta-learning algorithm is robust, with higher returns and better sample-efficiency. In Reacher, we have a return very similar to the DDPG baseline with lower variance. A possible explanation for the improved sample-efficiency and returns in most tasks is that the teacher learns to help the student improve its performance, which is the student's ultimate goal.
\subsection{Guided Exploration with Diverse Meta Policies} To further understand the behaviors of the teacher and student policies and how the teacher interacts with the student during the learning process, we plot the density contours of state visitation probabilities in Figure~\ref{fig:contour}. The probabilities are learned with Kernel Density Estimation based on the samples in a 2D embedding space. In the Inverted Double Pendulum task, we collect about 500 thousand observation states for the teacher policy and 1 million states for the student policy. As a comparison, we get 1 million states from the DDPG policy. We then project these datasets {jointly} into the 2D embedding space by {t-SNE} \cite{maaten:t-sne}. We may be able to find interesting insights, although it is possible that the {t-SNE} projection introduces artifacts in the visualization. As shown in Figure~\ref{fig:contour}, we have two groups of comparison studies for the evolution of the teacher and student learning processes in different stages. In each row, the first column is the Meta-Teacher, the second is the Meta-Student policy, and the third is the DDPG baseline. The first row (Figure~\ref{fig:contour}(a, b, c)) visualizes state distributions from the first 50 roll-outs obtained by executing the random teacher and student policies, where the policies are far from stationary. The bottom row (Figure~\ref{fig:contour}(d, e, f)) shows the state distribution landscape visited by the teacher, student and DDPG, respectively, from the last 50 roll-outs at the end of learning. The teacher explores the state space in a \textit{global} way. In both learning stages, the Meta-Teacher (Figure~\ref{fig:contour}(a, d)) has diversified state visitation distributions ranging over different modes in separate regions. We can see that the Meta-Teacher policy has high entropy, which implies that the Meta-Teacher provides more diverse samples for the student.
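The contour pipeline described above can be sketched in two steps: project visited states to 2D, then fit and evaluate a kernel density estimate on a grid. Below, random points stand in for the t-SNE embeddings (the projection step itself is omitted):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Stand-in for t-SNE output: 2-D embeddings of visited states,
# shaped (n_dims, n_points) as gaussian_kde expects.
emb = rng.standard_normal((2, 500))

kde = gaussian_kde(emb)  # fit the density in embedding space

# Evaluate the density on a regular grid for contour plotting.
xs, ys = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(50, 50)
# `density` can now be drawn as contours (e.g. with matplotlib contourf).
```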
Guided by the teacher's wide exploration, the student policy is able to learn from a large range of state distribution regions. Interestingly, compared to the teacher's behavior, the student consistently visits almost complementary states in the distribution space, in both the early (Figure~\ref{fig:contour}(a, b)) and later (Figure~\ref{fig:contour}(d, e)) stages. We can see that the teacher interacts with the student and is able to learn to explore different regions based on the student's performance. Meanwhile, the student learns from the teacher's provided demonstrations and focuses on different regions systematically. This allows the student to improve its performance consistently and continuously. It indicates that our \textit{global exploration} strategy is, in principle, quite different from noise-based random-walk \textit{local exploration}. From the early (Figure~\ref{fig:contour}(b)) to the later stage (Figure~\ref{fig:contour}(e)), we find that the student gradually learns stationary and robust policies, guided by the teacher's interactive exploration. Finally, compared to DDPG (Figure~\ref{fig:contour}(f)), we achieve a better return (8530 vs 2830) in this comparison, which indicates that our Meta policy is able to provide a better exploration strategy to help improve the baseline. \section{Conclusion} We introduce a meta-learning algorithm to adaptively learn exploration policies that collect better experience data for DDPG training. Using a simple meta policy gradient, we are able to efficiently improve the exploration policy and achieve significantly higher sample efficiency than traditional DDPG training. Our empirical study demonstrates the significant practical advantages of our approach.
Although most traditional exploration techniques are based on \emph{local exploration} around the actor policy, we show that it is possible and more efficient to perform \emph{global exploration}, by training an independent exploration policy that allows us to explore spaces that are far away from the current state distribution. This finding has substantial implications for our understanding of exploration strategies, showing that more adaptive, non-local methods should be used in order to learn more efficiently. Finally, this meta-policy algorithm is general and could be applied to other off-policy reinforcement learning problems. \section*{Acknowledgement} We are grateful to Kliegl Markus for his insightful discussions and helpful comments. \bibliographystyle{icml2018}
\section{Introduction} In environmental, biomedical, and engineering applications a common objective is to estimate the relation between a predictor and an outcome when there is prior knowledge that the relation is monotone or otherwise shape-constrained. In this paper we consider one such application that relates to measuring airborne particles at fine temporal resolution using a recently developed portable monitor. At the center of this problem is estimation of a function that is known to be monotone, due to the physical processes involved in the monitor, and inference on the scaled first derivative of the estimated monotone function, which is equal to the estimated aerosol concentration. Measuring air pollution with high temporal and spatial resolution is critical both to conducting air pollution research and to protecting the public's health. In an ideal world, we would be able to use a large number of monitors to measure personal air pollution exposure in cohort studies of health effects or to deploy in networks to warn of potential risks such as those from exposure to wildfire smoke. However, the large size and high cost of air quality monitors have historically prohibited widespread use. Hence, there is a need to develop smaller, more affordable monitors and the accompanying data science tools to make meaningful inference on the readouts of these monitors. In this paper we consider inference for data generated by the recently developed Mobile Aerosol Reference Sampler (MARS). MARS was designed to be an affordable, portable monitor for measuring fine particulate matter ($\text{PM}_{2.5}$) concentrations in environmental and occupational health studies \citep{Tryner2019a}. The MARS device is built on the Ultrasonic Personal Aerosol Sampler (UPAS) platform, which has been previously described in the literature \citep{Volckens2017}. MARS uses a piezoelectric microblower to pull air through a PM$_{2.5}$ cyclone inlet and a 25mm filter.
A high-resolution pressure sensor measures the time-resolved pressure drop across the sampling filter. As particles accumulate on the filter, the pressure drop across the filter increases. This pressure drop should be positive and increase monotonically in time during measurement. Deviations from monotonicity only occur (1) in the first few minutes of use when a new filter is stretching out or (2) if there is a change in air density or particle source. In the experimental data used in this paper, particle source remained constant and only minor changes in air density occurred. Time-resolved PM$_{2.5}$ concentration can be inferred from the time-resolved rate of change in pressure drop after the latter is normalized to the total PM$_{2.5}$ mass collected on the filter. Specifically, when the derivative is scaled so that the area under the derivative function is equal to the total PM$_{2.5}$ mass collected on the filter divided by the volumetric flow rate, the scaled first derivative is a measure of PM$_{2.5}$ concentration as a function of time \citep{Novick1992,Dobroski1997}. Hence, the objective is to estimate the pressure drop as a function of time and then make inference on the scaled first derivative of pressure drop. Several approaches have been proposed to estimate monotone functions. Early works include estimation of shape-constrained piecewise linear functions \citep{Hildreth1954, Brunk1955}. \cite{Mammen1991} proposed monotone kernel smoother methods and \cite{Mammen2001} proposed monotone projections of unconstrained smooth estimators. A large number of spline based approaches have been proposed including cubic smoothing splines \citep{Wang2008a}, constrained regression splines \citep{Ramsay1988,Meyer2008,Meyer2011, Powell2012}, penalized splines \citep{Meyer2012}, and piecewise linear splines \citep{Neelon2004}.
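The normalization described earlier, where the area under the estimated derivative is set equal to the filter mass divided by the volumetric flow rate, can be sketched numerically. The derivative curve, mass, and flow rate below are made-up values rather than MARS data:

```python
import numpy as np

# Hypothetical fitted derivative of pressure drop on a 30-second grid.
t = np.arange(0.0, 8 * 3600.0, 30.0)         # 8-hour sample, seconds
dP = 1e-4 + 5e-5 * np.sin(t / 3600.0) ** 2   # dP/dt, arbitrary units

mass_ug = 120.0        # assumed PM2.5 mass collected on the filter (ug)
flow = 1.0e-3 / 60.0   # assumed volumetric flow rate: 1 L/min in m^3/s

# Rescale so the area under the derivative equals mass / flow rate;
# the scaled derivative is then PM2.5 concentration over time (ug/m^3).
area = np.sum(0.5 * (dP[1:] + dP[:-1]) * np.diff(t))   # trapezoid rule
conc = dP * (mass_ug / flow) / area
```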
Several recent papers have proposed monotone Bernstein polynomial (BP) regression \citep{Chang2005,Chang2007,Curtis2011, Wang2011,Wilson2014c,Ding2016}. In this paper we take a BP approach to constrained regression. Monotonicity can be imposed with BPs by placing a linear order constraint on the regression coefficients. An alternative but equivalent approach is to linearly transform the regression coefficients and then impose a positivity constraint on all of the transformed regression coefficients with the exception of the intercept, which is unconstrained \citep{Wang2011}. \cite{Curtis2011} proposed a variable selection approach to monotone regression with BPs that puts a variable selection prior on the transformed regression coefficients akin to a mixture of a mass point at 0 and a normal distribution truncated below at 0. The approach is appealing because it imposes monotonicity, allows for data-driven tuning by selecting excess basis functions out of the model, and allows for no association when all coefficients are selected out of the model. The approach we present here, which we refer to as Bayesian nonparametric monotone regression (BNMR), is similar to that of \cite{Curtis2011} in that we use a BP expansion and a variable selection prior that imposes monotonicity. In contrast, our approach both selects some regression coefficients to be zero and clusters other regression coefficients. By clustering regression coefficients we create a reduced set of combination basis functions that are each the sum of multiple BPs and assigned a single regression coefficient. This has two distinct advantages over variable selection alone. First, when all regression coefficients are clustered together into a single combination basis function, the approach is equivalent to performing linear regression with the slope constrained to be non-negative. This improves performance when the true regression function is in fact linear.
Second, when the true regression function is nonlinear, our approach requires a reduced number of non-zero regression coefficients, each corresponding to the combination of a mutually exclusive set of basis functions. In a simulation study we show that our approach is able to match the flexibility of alternative approaches but uses a smaller number of parameters. As a result, our Markov chain Monte Carlo (MCMC) approach samples from the full conditional of a truncated multivariate normal distribution of smaller dimension, which can reduce autocorrelation in the resulting chain. Hence, the proposed method allows for flexible monotone regression while allowing the model to be null when there is no association between predictor and outcome and allowing the function to be linear when there is no evidence of nonlinearity. This results in comparable performance to other approaches for smooth nonlinear functions but improved inference when the true relation is linear. We apply the proposed approach to evaluate 12 samples collected using MARS in a controlled laboratory chamber. We compare estimated time-resolved PM$_{2.5}$ inferred with the proposed method, based on 30-second measurements of pressure drop across the MARS filter, to minute-resolution measurements of PM$_{2.5}$ in the chamber reported by a tapered element oscillating microbalance (TEOM) (1405 TEOM, ThermoFisher Scientific, Waltham, MA, USA), which is a regulatory-grade PM$_{2.5}$ monitor. \section{Methods} \subsection{Model formulation}\label{sub:model} Our primary interest is estimating the regression function \begin{equation} \label{eq:model} y_i=f(x_i) + \epsilon_i \end{equation} where $f$ is an unknown monotone function. Without loss of generality, and consistent with our application, we assume that $f$ is monotone increasing. We also assume that $x$ is scaled to the unit interval. We parameterize $f$ using a BP expansion.
The $k^\text{th}$ BP basis function of order $M$ is \begin{equation} \label{eq:bpbasis} \psi_k(x,M) = \left(\begin{array}{c}M \\ k \end{array}\right) x^k (1-x)^{M-k}. \end{equation} The regression function expressed as a weighted combination of BPs is \begin{equation} f(x) = \sum_{k=0}^M \psi_k(x,M)\beta_k = \Psi(x,M)\boldsymbol\beta, \label{eq:fexpand} \end{equation} where $\boldsymbol\beta=\left(\beta_0,\dots,\beta_M\right)^T$ are regression coefficients and $\Psi(x,M)=\left[ \psi_0(x,M),\dots,\psi_M(x,M)\right]$. The first regression coefficient $\beta_0$ parameterizes the intercept. Figure~\ref{subfig:BP} shows the BP basis used in the data analysis. \begin{figure} \centering \subfloat[Bernstein polynomial basis]{ \includegraphics[]{img/Analysis_basis_full_11_trim.eps} \label{subfig:BP} } \subfloat[Transformed Bernstein polynomial basis]{ \includegraphics[]{img/Analysis_basis_full_transformed_11_trim.eps} \label{subfig:transBP} } \subfloat[Selected transformed Bernstein polynomial basis used with BISOREG]{ \includegraphics[]{img/Analysis_basis_bisoreg_11_trim.eps} \label{subfig:bisoregBP} } \subfloat[Linear combination of transformed Bernstein polynomial basis used with BNMR]{ \includegraphics[]{img/Analysis_basis_bnmr_11_trim.eps} \label{subfig:bnmrBP} } \caption{Various representations of the Bernstein polynomial (BP) basis functions. Panel~\ref{subfig:BP} shows the 51 BP basis functions of order $M=50$ ($\Psi(x,M)$). Panel~\ref{subfig:transBP} shows the transformed BP basis represented as $\Psi(x,M) \mathbf{A}^{-1}$ as described in Section~\ref{sub:model}. This transformation is used for both BNMR and BISOREG. Panel~\ref{subfig:bisoregBP} shows the posterior mode group of basis functions selected to be included into the model with BISOREG. This is a subset of the transformed basis functions shown in Panel~\ref{subfig:transBP}. Panel~\ref{subfig:bnmrBP} shows the posterior mode combination of basis functions included with BNMR.
This includes the intercept and three basis functions, each of which is a linear combination of one to three of the basis functions shown in Panel~\ref{subfig:transBP}, and hence a linear combination of the basis functions shown in Panel~\ref{subfig:BP}. Results from all 12 runs are shown in the supplemental material.} \label{fig:BP} \end{figure} The regression function in \eqref{eq:fexpand} is monotone increasing if $\beta_{k-1}\le\beta_k$ for all $k=1,\dots,M$. Following \cite{Curtis2011}, it is convenient to reparameterize the regression coefficients. Let $\mathbf{A}\boldsymbol\beta=\boldsymbol\theta$ where $\boldsymbol\theta=\left(\theta_0,\dots,\theta_M\right)^T$ and the $(M+1)\times (M+1)$-matrix $\mathbf{A}$ is such that $\theta_0=\beta_0$ and $\theta_k=\beta_k-\beta_{k-1}$ for $k=1,\dots,M$: \begin{equation} \label{eq:Amonotone} \mathbf{A} = \left[\begin{array}{cccccc} 1 & 0 & 0 & \dots & 0 & 0 \\ -1 & 1 & 0 & \dots & 0 & 0 \\ 0 & -1 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & -1 & 1 \\ \end{array}\right]. \end{equation} The regression function is then \begin{equation} \label{eq:reparam} f(x) = \Psi(x,M) \mathbf{A}^{-1} \boldsymbol\theta. \end{equation} Figure~\ref{subfig:transBP} shows the transformed basis $\Psi(x,M) \mathbf{A}^{-1}$ used in the data analysis. Using this reparameterization, $f$ is monotone increasing when $\theta_k\ge0$ for all $k>0$. Further, $f$ is linear with the form $f(x)=\theta_0+wx$ when $\theta_k=w$ for all $k>0$, including no association when $w=0$. We assign a prior to $\theta_k$, $k=1,\dots,M$, that is a finite mixture of a mass point at zero denoted by the Dirac measure $\delta_0$ and a distribution $P$ with positive support. This approach selects some regression coefficients to be 0, effectively removing those basis functions from the model. In the non-zero probability event that all regression coefficients are zero, there is no association between $x$ and $y$.
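The basis in \eqref{eq:bpbasis} and the reparameterization in \eqref{eq:reparam} are straightforward to evaluate directly; a small sketch (the grid, order $M$, and coefficients below are arbitrary choices, not the analysis settings):

```python
import numpy as np
from math import comb

def bp_basis(x, M):
    """Bernstein polynomial basis Psi(x, M): columns psi_0, ..., psi_M."""
    x = np.asarray(x, dtype=float)
    return np.stack([comb(M, k) * x**k * (1 - x)**(M - k)
                     for k in range(M + 1)], axis=-1)

M = 10
x = np.linspace(0.0, 1.0, 101)
Psi = bp_basis(x, M)          # rows sum to 1 (partition of unity)

# A maps beta to theta (theta_0 = beta_0, theta_k = beta_k - beta_{k-1});
# its inverse is the cumulative-sum operator.
A = np.eye(M + 1) - np.eye(M + 1, k=-1)

rng = np.random.default_rng(0)
theta = np.concatenate([[0.5], np.abs(rng.standard_normal(M))])  # theta_k >= 0
f = Psi @ np.linalg.solve(A, theta)   # f(x) = Psi(x, M) A^{-1} theta

# Nonnegative theta_k for k > 0 yields a monotone increasing fit.
assert np.all(np.diff(f) >= -1e-12)
```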
We then let the positive distribution be a Dirichlet process (DP) with base measure $P_0\equiv \text{TN}_{[0,\infty]} (\mu,\phi^2)$, where $\text{TN}_{[0,\infty]}(\mu,\phi^2)$ denotes a truncated normal with support $[0,\infty]$, mean $\mu$, and variance $\phi^2$. By using a base measure with support over $\mathbb{R}^+$ we ensure that the non-zero regression coefficients are positive. This imposes monotonicity of $f$. Further, the clustering property of the DP allows all regression coefficients to be equal (i.e., to fall in the same cluster), giving positive probability that $f$ is linear. The selection and clustering of the regression coefficients do not, however, impact smoothness. The estimated function is guaranteed to be smooth and differentiable. The full model is \begin{eqnarray} \label{eq:fullmodel} Y_i| \boldsymbol\theta,\sigma^2 &\sim& \text{N}\left[ \Psi(x_i,M) \mathbf{A}^{-1} \boldsymbol\theta , \sigma^2 \right] \\ \theta_j| P,\pi &\sim& \pi \delta_0 + (1-\pi)P\nonumber \\ P &\sim& \text{DP}(\alpha P_0)\nonumber\\ P_0 &\equiv& \text{TN}_{[0,\infty]} (\mu,\phi^2). \nonumber \end{eqnarray} The above model is equivalent to a DP with base measure that is a finite mixture \begin{eqnarray} \theta_j|G &\sim&G \\ G &\sim& \text{DP}(\alpha G_0)\nonumber\\ G_0|\pi &\equiv&\pi \delta_0 + (1-\pi)\text{TN}_{[0,\infty]} (\mu,\phi^2). \nonumber \end{eqnarray} Several papers have used similar DP constructions that combine a DP with a finite mixture of a mass point and a non-truncated normal distribution \citep{Herring2010,Canale2017,Cassese2019} or a gamma distribution \citep{Liu2015a}. We complete the specification by assigning the prior $\sigma^{-2}\sim\text{Gamma}(a,b)$, a normal prior with mean zero and variance $\phi_0^2$ to the intercept $\theta_0$, and $\pi\sim\text{Beta}(a_\pi,b_\pi)$. \subsection{Posterior computation} The model in \eqref{eq:fullmodel} can be efficiently sampled with a Gibbs sampler. This is accomplished by first integrating out $\pi$ from the model.
The Gaussian likelihood and truncated normal base measure allow $P$ to be marginalized out of the model as well. The posterior can be simulated using a P\'olya urn scheme \citep{Blackwell1973,West1994,Bush1996}. Let $\Lambda_i=\Psi(x_i,M)\mathbf{A}^{-1}$ be the transformed BP basis expansion for observation $i$ and $\boldsymbol\Lambda$ be the $n\times (M+1)$ design matrix with row $i$ equal to $\Lambda_i$. Let $\Lambda_{i[k]}$ denote the vector $\Lambda_i$ with the $k^{th}$ element omitted, $\boldsymbol\theta_{[k]}$ the vector $\boldsymbol\theta$ with the $k^{th}$ element omitted, and $\Lambda_{ik}$ denote only the $k^{th}$ element of $\Lambda_i$. Similarly, let $\boldsymbol\Lambda_{[k]}$ be the matrix $\boldsymbol\Lambda$ with the $k^{th}$ column omitted and $\boldsymbol\Lambda_{k}$ be only the $k^{th}$ column of $\boldsymbol\Lambda$. Finally, let $S_k$ denote the categorical cluster indicator, with $S_k=c$ if $\theta_k=\eta_c$, and let $n_c$ denote the number of coefficients in cluster $c$, with $n_0$ the number in the null cluster where $\theta_k=0$. The full conditional for $S_k$, $k=1,\dots,M$, is categorical. The conditional probability that the $k^{th}$ regression coefficient is equal to 0 is \begin{equation} \Pr\left(S_k=0|-\right) = d \frac{n_0^*+a_\pi}{M-1+a_\pi + b_\pi} \prod_{i=1}^n\mathcal{N}\left(y_i; \Lambda_{i[k]} \boldsymbol\theta_{[k]},\sigma^2\right), \end{equation} where $n_0^*$ is the number of regression coefficients in cluster $0$, the null cluster, excluding $\theta_k$, $d$ is a normalizing constant, and $\mathcal{N}(x;\mu,\sigma^2)$ denotes a normal density function. In contrast to standard DP models, the zero cluster is allowed to be empty in this model.
The conditional probability that $\theta_k$ is allocated to an existing non-zero cluster $c$ is \begin{eqnarray} \Pr\left(S_k=c|-\right) &=& d \frac{\left(M-n_0^*-1+b_\pi\right)n_c^*}{\left(M-1+a_\pi + b_\pi\right)\left(M-n_0^*-1+\alpha\right)} \\ &&\quad\times\prod_{i=1}^n\mathcal{N}\left(y_i; \Lambda_{i[k]} \boldsymbol\theta_{[k]}+\Lambda_{ik} \eta_c,\sigma^2\right).\nonumber \end{eqnarray} Finally, the conditional probability that $\theta_k$ is allocated to a new cluster $c'$ is \begin{eqnarray} \Pr\left(S_k=c'|-\right) &=& d \frac{\left(M-n_0^*-1+b_\pi\right)\alpha}{\left(M-1+a_\pi + b_\pi\right)\left(M-n_0^*-1+\alpha\right)} \\ &&\quad\times\left[\prod_{i=1}^n\mathcal{N}\left\{y_i;\sum_{l=0,l\ne k}^M\psi_l(x_i,M)\beta_l,\sigma^2\right\}\right] \exp\left(\frac{\widetilde{m}^2}{2\widetilde{v}}-\frac{\mu^2}{2\phi^2}\right)\nonumber\\ &&\quad\times \frac{(2\pi)^{-1/2} \phi^{-1} \widetilde{v}^{1/2}}{\int_0^\infty \mathcal{N}(z;\mu,\phi^2)dz } \int_0^\infty \mathcal{N}(\theta;\widetilde{m},\widetilde{v})d\theta,\nonumber \end{eqnarray} where $\widetilde{v}=1/[\phi^{-2} + \sigma^{-2}\sum_{i=1}^n\psi_k(x_i,M)^2]$ and $\widetilde{m}=\widetilde{v}[\phi^{-2}\mu+ \sigma^{-2}\sum_{i=1}^n\psi_k(x_i,M)\{y_i-\sum_{l=0,l\ne k}^M\psi_l(x_i,M)\beta_l\}]$. When $S_k$ is assigned to a new cluster $c'$, a value for $\theta_k=\eta_{c'}$ can be sampled from its univariate truncated normal full conditional. The full conditional for a single regression coefficient $\eta_{c'}$ where $n_{c'}=1$ (no other coefficient takes that value) is truncated below at 0 and has mean $\widetilde{m}$ and variance $\widetilde{v}$ as specified above. We use the hybrid univariate truncated normal sampler of \cite{Li2015} to sample from this full conditional. The $(M+1)$-vector $\boldsymbol\theta$ contains three types of elements: the unconstrained intercept, parameters that are selected to be 0, and parameters that are non-zero and are constrained to be greater than 0.
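The new-cluster update can be sketched directly from the definitions of $\widetilde{v}$ and $\widetilde{m}$, which are the usual normal-normal quantities; the dimensions, hyperparameter values, and simulated stand-ins for the basis column and partial residuals below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, mu, phi, sigma = 50, 0.5, 0.25, 0.3   # assumed dimensions and hyperparameters
psi_k = rng.random(n)                     # stand-in for psi_k(x_i, M)
resid = rng.normal(size=n)                # stand-in for y_i - sum_{l != k} psi_l * beta_l

# v_tilde = 1 / (phi^-2 + sigma^-2 * sum_i psi_k(x_i, M)^2)
v_tilde = 1.0 / (phi**-2 + sigma**-2 * np.sum(psi_k**2))
# m_tilde = v_tilde * (phi^-2 * mu + sigma^-2 * sum_i psi_k(x_i, M) * residual_i)
m_tilde = v_tilde * (phi**-2 * mu + sigma**-2 * np.sum(psi_k * resid))

# Draw the new atom eta_{c'} from TN_[0, inf)(m_tilde, v_tilde).
a = (0.0 - m_tilde) / np.sqrt(v_tilde)
eta_new = truncnorm.rvs(a, np.inf, loc=m_tilde, scale=np.sqrt(v_tilde),
                        random_state=rng)
assert eta_new >= 0 and v_tilde > 0
```

Note this uses SciPy's generic truncated normal sampler rather than the hybrid sampler of \cite{Li2015} used in the paper.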
The non-zero values take on $K+1$ unique values $\boldsymbol\eta=\{\theta_0,\eta_1,\dots,\eta_K\}$ where $\theta_0$ is the unconstrained intercept. Using this notation, $\boldsymbol\theta=\mathbf{B}\boldsymbol\eta$, where $\mathbf{B}$ is a transformation matrix that maps $\boldsymbol\eta$ to $\boldsymbol\theta$ according to $S_1,\dots,S_M$. The vector $\boldsymbol\eta$ has a truncated multivariate normal full conditional with mean $m=v\left(\sigma^{-2}\mathbf{B}^T\boldsymbol\Lambda^T\mathbf{y}+D\mathbf{m}_0\right)$ and variance $v=\left(\sigma^{-2}\mathbf{B}^T\boldsymbol\Lambda^T\boldsymbol\Lambda\mathbf{B} + D\right)^{-1}$, where $\mathbf{m}_0$ is the vector of prior means (zero for the intercept and $\mu$ for the constrained coefficients) and $D$ is a diagonal matrix with $\phi_0^{-2}$ in the first diagonal location for the intercept and $\phi^{-2}$ in all other diagonal locations for the constrained coefficients. These are the typical mean and variance for a normal-normal model full conditional. The first element $\theta_0$ is not truncated and the remaining elements are truncated below at 0. We simulate from the full conditional of $\boldsymbol\eta$ as a multivariate block using the hybrid multivariate sampler approach of \cite{Li2015}. The Gibbs sampler is completed with standard updates of $\alpha$ using a mixture of gammas \citep{Escobar1995} and $\sigma^{-2}$ using the standard gamma full conditional. \subsection{Details on tuning} Care must be given when specifying the prior, particularly for the choice of values for the mean and standard deviation of the base measure, $\mu$ and $\phi$. This is challenging because the plausible values for the regression coefficients depend on the number of non-zero regression coefficients in the model and how many basis functions each coefficient is applied to (cluster size). We do not know either of these quantities a priori. We have taken the approach of scaling the outcome $\mathbf{y}$ to have mean zero and variance one and then setting $\mu=0.5$ and $\phi=0.25$.
This puts reasonable mass on values between zero and one, which represent plausible values for a variety of basis configurations. We have found that this choice performs well across a variety of simulated and real datasets. We use this setting in all simulation and data analysis results presented in this paper. However, results can be sensitive to this choice. Supplemental Section 1.2 includes an additional simulation study that compares sensitivity to different values of $\mu$ and $\phi$. We show that as $\phi$ increases the posterior probability of no association decreases and the number of clusters (unique non-zero regression coefficients) increases. However, the model fit as measured by RMSE on $f$ and the derivative of $f$ is less sensitive to this choice. The user must also specify the order of the BP ($M$). This should be selected, in theory, based on sample size and the differentiability of the function being estimated \citep{Mclain2009}. In practice, methods such as reversible jump MCMC or Kullback-Leibler distance have been used to estimate the dimension of basis expansions in nonparametric regression \citep[e.g.][]{Dias2002,Dias2007,Meyer2011}, while penalization can be used to regularize a rich basis to avoid overfitting \citep{Crainiceanu2005}. It has additionally been noted that shape constraints, including monotonicity, reduce sensitivity to the dimension of the basis expansion \citep{Meyer2008}. We follow the approach of \cite{Curtis2011} and use a rich basis and let the prior select or cluster redundant predictors. In this paper, we use $M=50$ in all results shown in the main text but show in the supplement that a smaller value of $M$ results in lower RMSE when the true function is closer to linear and a higher value of $M$ is preferred when the true function is more wiggly.
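The claim that $\mu=0.5$ and $\phi=0.25$ place most base-measure mass between zero and one can be checked directly with SciPy's truncated normal (a quick numerical check, not code from the paper):

```python
from scipy.stats import truncnorm

mu, phi = 0.5, 0.25
a = (0.0 - mu) / phi                  # lower truncation point in standardized units
base = truncnorm(a, float("inf"), loc=mu, scale=phi)
mass_01 = base.cdf(1.0) - base.cdf(0.0)
# Nearly all base-measure mass lies in (0, 1).
assert mass_01 > 0.95
```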
If the practitioner has prior knowledge of the shape of the underlying function, beyond monotonicity, this could be incorporated into the selection of $M$ a priori. \subsection{Inference on the derivative and aerosol concentration}\label{sub:derivative} The proposed approach allows for coherent estimation and inference not only on the function $f$ but also on the derivatives of $f$. This includes full quantification of the uncertainty in the derivatives and guaranteed smoothness in the derivatives. This is particularly critical in our application, where the first derivative of $f$ is proportional to the time-resolved aerosol concentration. For a BP of order $M$, the first derivative is a BP of order $M-1$. Specifically, the first derivative is \begin{equation} f'(x) = M\sum_{k=0}^{M-1}\psi_{k}(x,M-1)\theta_{k+1} = M\Psi(x,M-1)\boldsymbol\theta_{[0]}. \label{eq:deriv1} \end{equation} The regression coefficient $\theta_0=\beta_0$, which corresponds to the intercept, does not appear in the derivative. Hence, the derivative is available in closed form, and inference on it can be made directly from the posterior sample of $\boldsymbol\theta$. From a theoretical perspective, the total aerosol mass accumulated on the filter should be the flow rate through the filter times the concentration integrated over time. Here, flow rate is constant and therefore $\int_0^1 f'(x) dx \propto (\text{filter mass})/(\text{flow rate})$. In our model $\int_0^1 f'(x) dx = \beta_M-\beta_0=\sum_{k=1}^M\theta_k$. We therefore scale the derivative to the total filter mass by replacing $\boldsymbol\theta$ in \eqref{eq:deriv1} with $\tilde{\boldsymbol\theta}=\boldsymbol\theta\times(\text{filter mass})/[(\text{flow rate})\times\sum_{k=1}^M\theta_k]$. We then estimate aerosol concentration as \begin{equation} \tilde{f}'(x) = M\sum_{k=0}^{M-1}\psi_{k}(x,M-1)\tilde\theta_{k+1} = M\Psi(x,M-1)\tilde{\boldsymbol\theta}_{[0]}.
\label{eq:deriv2} \end{equation} In practice, we scale each draw of $\boldsymbol\theta$ from the posterior and then construct a posterior sample of $\tilde{f}'$. \subsection{Alternative spline approach} The proposed prior can be applied to other basis expansions and achieve some, but not all, of the same properties. Using the same prior structure with transformed $B$-splines or $I$-splines without the transformation matrix $\mathbf{A}$ can still achieve monotonicity \citep{DeBoor1978,Ramsay1988}. The proposed prior will also allow estimation of no association when all regression coefficients are clustered at zero. However, the clustering will not result in shrinkage toward a linear response. Using a spline basis with compact support may result in more flexibility than the BP approach presented here. This could be particularly appealing when the function being estimated has sharp change points. In addition, the derivative of many common splines, including $B$-splines and $I$-splines, can be represented as a spline itself, and inference on the derivative can be made using a similar approach. However, splines lose flexibility and smoothness in the derivative. For example, the standard cubic spline has a piecewise quadratic derivative while a quadratic spline has a piecewise linear derivative. This may not be sufficiently flexible in many cases, as seen in the data analysis in Section~\ref{s:da}. In contrast, the BP uses a higher order polynomial and therefore has a higher order derivative, which imposes smoothness not only on the function being estimated but on all derivatives of that function. \section{Simulation} We compare the proposed approach, BNMR, to alternative methods for monotone regression in a simulation study. We generated 500 datasets from four designs, each taking the form $y_i=f_s(x_i)+\epsilon_i$ for $i=1,\dots,n$ with $\epsilon_i\sim\text{N}(0,0.25^2)$.
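The closed-form Bernstein polynomial derivative \eqref{eq:deriv1} and the identity $\int_0^1 f'(x)dx=\beta_M-\beta_0$ can be verified numerically; the basis implementation, order, and random coefficients below are illustrative assumptions:

```python
import numpy as np
from scipy.special import comb

def bp_basis(x, M):
    """Bernstein basis psi_k(x, M) = C(M, k) x^k (1 - x)^(M - k), k = 0..M."""
    k = np.arange(M + 1)
    return comb(M, k) * x**k * (1.0 - x)**(M - k)

M = 8
rng = np.random.default_rng(1)
theta = np.abs(rng.normal(size=M + 1))   # theta_k >= 0 for k > 0 gives monotone f
beta = np.cumsum(theta)                  # beta_k = theta_0 + ... + theta_k
f = lambda x: bp_basis(x, M) @ beta
# f'(x) = M * sum_{k=0}^{M-1} psi_k(x, M - 1) * theta_{k+1}; theta_0 drops out.
fprime = lambda x: M * (bp_basis(x, M - 1) @ theta[1:])
x, h = 0.37, 1e-6
assert abs((f(x + h) - f(x - h)) / (2 * h) - fprime(x)) < 1e-5
# Integral of f' over [0, 1] equals beta_M - beta_0 = sum_{k>=1} theta_k.
grid = np.linspace(0.0, 1.0, 2001)
vals = np.array([fprime(t) for t in grid])
integral = np.sum((vals[:-1] + vals[1:]) / 2 * np.diff(grid))
assert np.isclose(integral, beta[-1] - beta[0], atol=1e-4)
```

The second assertion is the quantity used to rescale $\boldsymbol\theta$ to the measured filter mass.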
We generate $x\sim\text{Unif}(0,1)$ and consider four shapes for the function $f_s(\cdot)$: \begin{enumerate} \item Flat: $f_1(x)=0$. \item Linear: $f_2(x)=x$. \item Wavy: $f_3(x)=\sin(3\pi x)/(3\pi)+x$. \item Flat-nonlinear: $f_4(x)=0$ for $x<0.5$ and $f_4(x)=[2(x-0.5)]^2$ for $0.5\le x$. \end{enumerate} The flat, linear, and wavy functions mirror those from \cite{Curtis2011}. We simulated datasets of size $n=100$ and $1000$. We compared BNMR to alternative monotone regression methods that have available {\tt R} software that includes variance estimates. The comparison methods are: constrained generalized additive models \citep[CGAM,][]{Meyer2013,Meyer2018}, Bayesian constrained generalized additive models \citep[BCGAM,][]{Meyer2011,Oliva-Aviles2018}, and Bayesian isotonic regression \citep[BISOREG,][]{Curtis2011,Curtis2018}. In addition, we compare with the unconstrained methods ordinary least squares (OLS), local polynomial regression (LOESS), and an unconstrained Bernstein polynomial model (UBP). For BNMR and BISOREG we set $M=50$ and consider other values in the Supplemental Material. For UBP we select $M$ using the deviance information criterion \citep[DIC,][]{Spiegelhalter2002}. We evaluate model performance by the root mean squared error (RMSE) on the function $f(\cdot)$ and the pointwise 95\% interval coverage, both evaluated at 100 evenly spaced points spanning the range of $x$. For the Bayesian methods (BNMR, BISOREG, BCGAM, and UBP) we consider the posterior probability that $f$ is linear and that $f$ is flat (no association). For CGAM and OLS we report the mean $p$-value for testing the null hypothesis that there is no association. Table~\ref{tab:sim1} shows results from the simulation study. At $n=100$, BNMR had the lowest RMSE on $f$ among all the monotone regression methods on all four scenarios (within one standard error of BCGAM and BISOREG on the flat scenario). The only method to have lower RMSE was OLS on the linear scenario.
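The four data-generating designs above can be sketched as follows (the seed and sample size are arbitrary assumptions):

```python
import numpy as np

def f_s(x, s):
    """The four simulation regression functions."""
    if s == 1:
        return np.zeros_like(x)                          # flat
    if s == 2:
        return x                                         # linear
    if s == 3:
        return np.sin(3 * np.pi * x) / (3 * np.pi) + x   # wavy
    return np.where(x < 0.5, 0.0, (2 * (x - 0.5))**2)    # flat-nonlinear

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(0.0, 1.0, n)
y = f_s(x, 3) + rng.normal(0.0, 0.25, n)   # y_i = f_s(x_i) + eps_i
# All four shapes are monotone nondecreasing on [0, 1]
# (for the wavy shape, f_3'(x) = cos(3*pi*x) + 1 >= 0).
grid = np.linspace(0.0, 1.0, 201)
for s in (1, 2, 3, 4):
    assert np.all(np.diff(f_s(grid, s)) >= -1e-12)
```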
BNMR, BISOREG, CGAM, and UBP all had pointwise 95\% interval coverage between 0.95 and 0.98 on all scenarios. Each of the other methods had interval coverage of 0.9 or less on at least one scenario. At $n=1000$, BNMR had the lowest RMSE for the flat and flat-nonlinear scenarios (within one standard error of BCGAM and BISOREG on the flat scenario). CGAM and BCGAM had the lowest RMSE of the constrained methods on the wavy scenario with BNMR and BISOREG slightly higher. OLS and UBP had the lowest RMSE on the linear scenario. UBP selected $M=3$, a cubic regression function, in almost all datasets for the linear scenario. \begin{table} \centering \caption{ Simulation results comparing estimation of $f$ with each method. The table shows RMSE and 95\% interval coverage, both evaluated pointwise on a grid of 100 evenly spaced points. The columns labeled Pr( flat ) give the posterior probability of a flat response or, for OLS and CGAM, the mean $p$-value for testing the null of no association. Additional simulation results including standard errors for the RMSE and interval widths are included in the supplemental material.} \label{tab:sim1} \begin{tabular}{lcccccc}\hline & \multicolumn{3}{c}{$n=100$} & \multicolumn{3}{c}{$n=1000$}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} Model & RMSE & Coverage & Pr( flat ) & RMSE & Coverage & Pr( flat ) \\ \hline \multicolumn{7}{l}{\it Scenario 1: Flat}\\ BCGAM & 2.22 & 0.95 & 0.00 & 0.65 & 0.96 & 0.00 \\ BISOREG & 2.19 & 0.97 & 0.86 & 0.61 & 0.97 & 0.95 \\ BNMR & 2.08 & 0.96 & 0.94 & 0.60 & 0.96 & 0.99 \\ CGAM & 3.57 & 0.98 & 0.49 & 1.17 & 0.99 & 0.50 \\ LOESS & 5.15 & 0.93 & NA & 1.61 & 0.95 & NA \\ OLS & 3.11 & 0.95 & 0.50 & 0.93 & 0.96 & 0.53 \\ UBP & 5.23 & 0.93 & 0.00 & 1.60 & 0.95 & 0.00 \\ \hline \multicolumn{7}{l}{\it Scenario 2: Linear}\\ BCGAM & 6.42 & 0.84 & 0.00 & 2.58 & 0.84 & 0.00 \\ BISOREG & 5.99 & 0.96 & 0.00 & 2.42 & 0.95 & 0.00 \\ BNMR & 4.72 & 0.98 & 0.00 & 2.14 & 0.96 & 0.00 \\ CGAM & 5.79 & 0.96 & 0.00 & 2.30 & 0.95 &
0.00 \\ LOESS & 5.15 & 0.93 & NA & 1.61 & 0.95 & NA \\ OLS & 3.11 & 0.95 & 0.00 & 0.93 & 0.96 & 0.00 \\ UBP & 5.28 & 0.93 & 0.00 & 1.60 & 0.95 & 0.00 \\ \hline \multicolumn{7}{l}{\it Scenario 3: Wavy}\\ BCGAM & 6.57 & 0.84 & 0.00 & 2.17 & 0.88 & 0.00 \\ BISOREG & 5.98 & 0.95 & 0.00 & 2.25 & 0.95 & 0.00 \\ BNMR & 5.30 & 0.96 & 0.00 & 2.25 & 0.94 & 0.00 \\ CGAM & 5.67 & 0.96 & 0.00 & 2.15 & 0.96 & 0.00 \\ LOESS & 6.44 & 0.89 & NA & 2.14 & 0.93 & NA \\ OLS & 8.02 & 0.56 & 0.00 & 7.22 & 0.19 & 0.00 \\ UBP & 6.38 & 0.90 & 0.00 & 2.30 & 0.90 & 0.00 \\ \hline \multicolumn{7}{l}{\it Scenario 4: Flat-nonlinear}\\ BCGAM & 5.33 & 0.90 & 0.00 & 1.93 & 0.89 & 0.00 \\ BISOREG & 5.60 & 0.95 & 0.00 & 2.12 & 0.96 & 0.00 \\ BNMR & 4.95 & 0.96 & 0.00 & 1.82 & 0.96 & 0.00 \\ CGAM & 5.29 & 0.96 & 0.00 & 1.91 & 0.97 & 0.00 \\ LOESS & 5.70 & 0.91 & NA & 1.88 & 0.93 & NA \\ OLS & 16.11 & 0.32 & 0.00 & 16.24 & 0.09 & 0.00 \\ UBP & 5.42 & 0.93 & 0.00 & 1.91 & 0.91 & 0.00 \\ \hline \end{tabular} \end{table} Both BNMR and BISOREG had high (greater than 0.86) posterior probabilities of a flat response (no association) in the flat scenario. BCGAM does not include a flat response in the parameter space and therefore has a posterior, and prior, probability of 0. The average $p$-values for the test of no association for CGAM and OLS were between 0.49 and 0.53. BNMR is the only method that allows the estimated function $f$ to be linear with slope greater than 0. However, in the linear scenario this did not occur. The mean posterior probability of a linear function was 0.00. The response is linear when all regression coefficients are non-zero and take the same value. Figure~\ref{fig:nparams} shows the number of non-zero regression coefficients in the model and the number of unique values those regression coefficients take. Both BNMR and BISOREG include only a small number of non-zero regression coefficients, effectively selecting the majority of the basis functions out of the model.
Because not all basis functions are included, the estimated regression function is never truly linear. Despite not being exactly linear, BNMR has lower RMSE than any of the other nonparametric methods on the linear scenario. A key difference between BNMR and BISOREG is that all non-zero regression coefficients in BISOREG take unique values, while with BNMR the non-zero regression coefficients are clustered together and take fewer unique values. There were more non-zero regression coefficients included in the model with BNMR but fewer (on average less than two) unique regression coefficients. This is true for both the linear and nonlinear scenarios and for BP expansions of order ranging from 20 to 100 (shown in supplemental material). As a result, the proposed approach requires estimating only a small number of unique regression coefficients regardless of the size of the basis expansion or the wiggliness of the regression function. \begin{figure} \centering \includegraphics{img/number_of_coefficients.eps} \caption{Simulation results for the number of non-zero regression coefficients (dashed line) and the number of unique values of the non-zero regression coefficients (solid line) for BISOREG (triangle) and BNMR ($\times$). The number of unique non-zero values is always equal to the total number of non-zero values in BISOREG.} \label{fig:nparams} \end{figure} In our application we are interested in the derivative of the monotone function. The BP basis used by BISOREG, BNMR and UBP allows straightforward inference on the derivatives of $f$. The other methods do not naturally allow for this inference. We calculate a pointwise approximation of the derivative for the other methods by calculating the change in $\widehat{f}$ divided by the change in $x$ for each pair of neighboring observations on an equally spaced grid. We do not evaluate coverage for these methods.
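The pointwise approximation used for the comparison methods amounts to first differences of the fitted values on an equally spaced grid; a minimal sketch, with a placeholder fitted function standing in for $\widehat{f}$:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 101)
f_hat = grid**2                        # placeholder fitted values (an assumption)
# Slope between neighboring grid points approximates f' at the midpoints.
d_hat = np.diff(f_hat) / np.diff(grid)
mid = (grid[:-1] + grid[1:]) / 2.0
# For a quadratic, the difference quotient (b^2 - a^2)/(b - a) = a + b is exact.
assert np.allclose(d_hat, 2.0 * mid)
```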
Table~\ref{tab:sim1deriv} shows the RMSE and 95\% interval coverage (for BISOREG, BNMR and UBP only) for the derivative of $f$. BNMR had the lowest RMSE for all scenarios at the smaller sample size and for the flat scenario at the larger sample size. BNMR, BISOREG, and UBP all suffered from poor interval coverage in several of the scenarios. The coverage is pointwise, and in the flat-nonlinear scenario, which has a sharp ``elbow'' change-point, these methods fail to cover in the elbow, highlighting a limitation of the smooth BP basis. \begin{table} \centering \caption{ Simulation results comparing estimation of the derivative $f'$ with each method. The table shows RMSE and 95\% interval coverage, both evaluated pointwise on a grid of 100 evenly spaced points. Intervals for the derivative with BCGAM, CGAM and LOESS are not available. Additional simulation results including standard errors for the RMSE and interval widths are included in the supplemental material.} \label{tab:sim1deriv} \begin{tabular}{lcccc}\hline & \multicolumn{2}{c}{$n=100$} & \multicolumn{2}{c}{$n=1000$}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} Model & RMSE & Coverage & RMSE & Coverage \\ \hline \multicolumn{5}{l}{\it Scenario 1: Flat}\\ BCGAM & 3.24 & NA & 0.85 & NA \\ BISOREG & 4.90 & 0.00 & 0.61 & 0.00 \\ BNMR & 1.12 & 1.00 & 0.19 & 1.00 \\ CGAM & 22.80 & NA & 9.92 & NA \\ LOESS & 53.81 & NA & 18.49 & NA \\ UBP & 55.26 & 0.92 & 16.61 & 0.93 \\ \hline \multicolumn{5}{l}{\it Scenario 2: Linear}\\ BCGAM & 61.68 & NA & 39.98 & NA \\ BISOREG & 65.54 & 0.96 & 44.91 & 0.94 \\ BNMR & 39.57 & 1.00 & 38.21 & 0.95 \\ CGAM & 68.85 & NA & 40.79 & NA \\ LOESS & 53.81 & NA & 18.49 & NA \\ UBP & 56.90 & 0.91 & 16.60 & 0.93 \\ \hline \multicolumn{5}{l}{\it Scenario 3: Wavy}\\ BCGAM & 64.03 & NA & 32.05 & NA \\ BISOREG & 70.78 & 0.95 & 46.53 & 0.91 \\ BNMR & 55.00 & 0.97 & 45.13 & 0.85 \\ CGAM & 69.21 & NA & 37.04 & NA \\ LOESS & 87.54 & NA & 35.18 & NA \\ UBP & 97.38 & 0.78 & 52.52 & 0.68 \\ \hline \multicolumn{5}{l}{\it
Scenario 4: Flat-nonlinear}\\ BCGAM & 62.08 & NA & 28.82 & NA \\ BISOREG & 92.18 & 0.46 & 65.44 & 0.45 \\ BNMR & 61.45 & 0.68 & 46.91 & 0.61 \\ CGAM & 66.95 & NA & 34.57 & NA \\ LOESS & 65.11 & NA & 28.99 & NA \\ UBP & 62.59 & 0.88 & 28.39 & 0.66 \\ \hline \end{tabular} \end{table} The supplemental material includes additional simulation results, including standard errors for the estimates in Tables~\ref{tab:sim1} and \ref{tab:sim1deriv}, interval widths, and results on sensitivity to the choice of the prior parameters $\mu$ and $\phi^2$ as well as the order of the BP, $M$. In addition, we show results for computation time as a function of sample size and order of the BP. \section{Analysis of Real-Time PM$_{2.5}$ Concentration Inferred from Pressure Drop}\label{s:da} \subsection{Overview of the data analysis} We use data from 12 samples collected using three MARS devices during four laboratory experiments. These experiments are described in detail by \citet{Tryner2019a}. During each experiment, one of four different types of aerosol---urban PM (NIST SRM 1648A Urban PM), ammonium sulfate ((NH$_4$)$_2$SO$_{4}$), Arizona road dust, or match smoke---is nebulized into a controlled chamber containing all three MARS devices. Each MARS samples PM$_{2.5}$ onto a new polytetrafluoroethylene (PTFE) filter at a flow rate of 1 L min$^{-1}$ for between 7.5 and 13 hours while pressure drop across the filter is recorded every 30 seconds. Each filter is weighed before and after the experiment to measure the total mass of PM$_{2.5}$ accumulated. A TEOM measures the PM$_{2.5}$ concentration in the chamber every minute as a previously validated point of comparison. We use BNMR to estimate time-resolved PM$_{2.5}$ concentration using the MARS pressure drop data from the 12 samples.
Prior to analysis we removed the first 30 minutes of pressure drop data as: 1) there was no PM$_{2.5}$ in the chamber at that time and 2) the new filter was stretching during that time and a decreasing trend is observed due to the stretching process. We also removed the final five minutes when there was 1) no PM$_{2.5}$ in the chamber and 2) the sampler was shutting down, resulting in spurious noise in the pressure drop function. Then, we fit BNMR to the time-series of measured pressure drop for each sample. From the fitted model we then estimate the scaled first derivative of pressure drop at each time-point for which the TEOM recorded PM$_{2.5}$ as described in Section~\ref{sub:derivative}. For comparison, we perform the same procedure with BISOREG and UBP. We also estimate pressure drop and the scaled pointwise approximation of the derivative with LOESS, CGAM, and BCGAM. We omit OLS because of obvious nonlinearities in the pressure drop data. For each method we visualize and compare the performance with respect to estimating the pressure drop function and inferring real-time PM$_{2.5}$ from the scaled derivative of pressure drop. We show results from one of the 12 samples in the main text. The supplemental material includes estimated pressure drop and estimated real-time PM$_{2.5}$ concentration for all 12 samples. \subsection{Estimation of the pressure drop function} Figure~\ref{fig:analysis1} shows the data and estimates from all six methods for a single sample. Each panel shows estimates from a single method along with 0.95 confidence or credible intervals. The fits are visually nearly identical over most of the range. However, there are differences in the lower tail. BNMR and BISOREG tend to level off between 0 and 100 minutes. In contrast, CGAM and especially BCGAM tend to over-smooth over the same time period.
\begin{figure} \centering \subfloat[UBP]{ \includegraphics[]{img/Analysis_ubp_11_trim.eps} \label{subfig:ubp} } \subfloat[BCGAM]{ \includegraphics[]{img/Analysis_bcgam_11_trim.eps} \label{subfig:bcgam} } \subfloat[BNMR]{ \includegraphics[]{img/Analysis_bnmr_11_trim.eps} \label{subfig:bnmr} } \subfloat[BISOREG]{ \includegraphics[]{img/Analysis_bisoreg_11_trim.eps} \label{subfig:bisoreg} } \subfloat[CGAM]{ \includegraphics[]{img/Analysis_cgam_11_trim.eps} \label{subfig:cgam} } \subfloat[LOESS]{ \includegraphics[]{img/Analysis_loess_11_trim.eps} \label{subfig:loess} } \caption{Estimated pressure drop from the MARS data for one run. Each panel shows the estimates and 95\% intervals for each method separately. Results from all 12 runs are shown in the supplemental material.} \label{fig:analysis1} \end{figure} Comparing UBP to BNMR and BISOREG, which all use a BP basis, highlights an important difference between the constrained methods and the unconstrained method. Specifically, UBP experiences instability in the tails, while BNMR and BISOREG, which impose monotonicity and further regularize with a selection prior (BISOREG) or a selection and clustering prior (BNMR), are more stable in the tails. To formally compare model fit for the pressure drop function, we performed five-fold cross-validation for each sample. Table~\ref{tab:cv} shows cross-validation results for all six methods across all 12 samples. LOESS had the lowest cross-validation RMSE at 0.81 followed by UBP at 1.21. Hence, the unconstrained methods provided a better fit than any of the monotone methods. The best performing monotone methods were BISOREG at 1.27 and BNMR at 1.29. CGAM and BCGAM had higher RMSEs of 1.47 and 1.96, respectively. \begin{table} \centering \caption{Summary of the model fit for each method in the data analysis. The table shows the cross-validation RMSE from the five-fold cross validation.
For each method the table additionally shows the comparison of the scaled derivative to the time-resolved measurement of PM$_{2.5}$ from a TEOM. The results show the $R^2$, intercept, and slope from the regression of the TEOM PM$_{2.5}$ on the MARS estimated PM$_{2.5}$ obtained from the estimated first derivative of pressure drop. } \label{tab:cv} \begin{tabular}{lcccc}\hline & \multicolumn{1}{c}{Cross-Validation} & \multicolumn{3}{c}{Regression}\\ \cmidrule(lr){2-2} \cmidrule(lr){3-5} Model & RMSE & $R^2$ & Intercept & Slope \\ \hline BCGAM & 1.96 & 0.57 & 37.27 & 0.97 \\ BISOREG & 1.27 & 0.75 & 46.89 & 0.91 \\ BNMR & 1.29 & 0.75 & 44.59 & 0.92 \\ CGAM & 1.47 & 0.72 & -2.03 & 1.09 \\ LOESS & 0.81 & 0.81 & 51.60 & 0.88 \\ UBP & 1.21 & 0.63 & 84.77 & 0.69 \\ \hline \end{tabular} \end{table} LOESS outperforms the other methods in terms of cross-validation RMSE on the pressure drop function for two reasons. First, LOESS does not impose monotonicity and several of the pressure drop measurements show minor deviations from the largely monotone trend. These small waves result from small fluctuations in the air temperature measured by the device, which lead to small fluctuations in air density and thus small fluctuations in the mass flow rate through the filter. The second reason that LOESS has lower cross-validation RMSE is that three of the samples show sharp change-points in the pressure drop functions (similar to the ``elbow'' in simulation scenario 4) and LOESS is the only method that did not over-smooth these points (see supplemental Figure 9). UBP can also estimate the non-monotone trend but struggles with the ``elbow.'' However, the non-monotonicity in LOESS and UBP results in negative estimates of aerosol concentration, which are not physically possible. The monotone methods smooth over the non-monotone areas of the data. This results in valid estimates of PM$_{2.5}$ because the derivative is always non-negative.
It is also consistent with the theoretical framework for measuring time-resolved PM$_{2.5}$ from pressure drop using MARS, as the pressure drop function should be monotone. However, this comes at a cost because the oscillation appears as autocorrelation in the residuals. This is not accounted for in the model, as we assume independent and identically distributed residuals; this could result in some bias in the intervals but yields a physically plausible estimate of time-resolved PM$_{2.5}$. \subsection{Inference on time-resolved PM$_{2.5}$ with the scaled derivative} Our primary interest is estimating PM$_{2.5}$ concentration using the scaled first derivative of the estimated pressure drop function. We scale each derivative by the total mass collected on the filter as described in Section~\ref{sub:derivative}. Figure~\ref{fig:analysis2} shows the estimates of the scaled first derivative with both BNMR and BISOREG. For comparison, the PM$_{2.5}$ concentration measured with the TEOM is included. Both BNMR and BISOREG estimate the larger pattern in PM$_{2.5}$ concentration but do not fully capture localized features. \begin{figure} \centering \subfloat[BNMR]{ \includegraphics[]{img/Analysis_PM_bnmr_11_trim.eps} \label{subfig:bnmr2} } \subfloat[BISOREG]{ \includegraphics[]{img/Analysis_PM_bisoreg_11_trim.eps} \label{subfig:bisoreg2} } \caption{Estimated PM$_{2.5}$ concentration from the MARS data. Panel~\ref{subfig:bnmr2} shows the posterior mean and 95\% interval from BNMR and Panel~\ref{subfig:bisoreg2} shows the posterior mean and 95\% interval from BISOREG. The dashed line in each panel is the PM$_{2.5}$ concentration measured with the TEOM. Results from all 12 runs are shown in the supplemental material.
Results with other methods are also shown in the supplemental material.} \label{fig:analysis2} \end{figure} To more formally compare the estimates of PM$_{2.5}$ concentration, we regressed the one-minute TEOM measurements on the estimated concentrations at those same time points obtained using each method (Table~\ref{tab:cv}). The mean $R^2$ across all 12 samples was 0.75 with BNMR and BISOREG. Hence, these two approaches provide similar estimates of real-time concentration. Estimates of PM$_{2.5}$ from the other methods (BCGAM, CGAM, LOESS, and UBP) are presented in the supplement. All of these methods are being used beyond their original intention and suffer from shortcomings when estimating the derivative of a function. BCGAM and CGAM use a quadratic spline and result in piecewise linear derivatives, which are not suitable for estimating the time-resolved PM$_{2.5}$. LOESS and UBP are non-monotone and result in negative estimates of PM$_{2.5}$ over some time segments. In addition, BCGAM, CGAM, and LOESS do not allow for straightforward inference. When comparing the estimated PM$_{2.5}$ from these methods to the measurements from the TEOM, LOESS was the best performing method with an $R^2$ of 0.81 despite having negative estimates of PM$_{2.5}$ for a substantial period of time. The other approaches had lower $R^2$, ranging from 0.57 to 0.72. \subsection{Posterior visualization and MCMC performance} To better illustrate how BNMR works and compare the variable selection and clustering approach of BNMR to the variable selection only approach of BISOREG, we show the basis functions used in one sample in Figure~\ref{fig:BP}. Panel~\ref{subfig:BP} shows the BP basis of order 50 as used in the simulation and data analysis. Panel~\ref{subfig:transBP} shows the transformed BP basis $\Psi(x,M)\mathbf{A}^{-1}$ as described in Section~\ref{sub:model}. We estimated the posterior mode subset of basis functions used with BNMR and BISOREG.
Panel~\ref{subfig:bisoregBP} shows the posterior mode subset of basis functions included in the model with BISOREG. This is a subset of the full basis expansion shown in Panel~\ref{subfig:transBP}. Panel~\ref{subfig:bnmrBP} shows the posterior mode combination of basis functions used by BNMR. This includes an intercept and three additional combination basis functions. Each combination basis function is a cluster of one to three of the original transformed basis functions in Panel~\ref{subfig:transBP}. BISOREG uses an intercept and nine additional basis functions. As a result, only three unique non-zero slope parameters are estimated with BNMR compared to the nine used by BISOREG. The posterior mode basis functions are shown for all 12 samples in the supplemental material. BNMR uses between three and five combination basis functions at its posterior mode, with each combination basis function being a cluster of one to five of the original basis functions. Finally, we compare the MCMC performance of BNMR and BISOREG, which both use the same BP basis but have different priors and MCMC approaches. Supplemental Table 5 shows the mean effective sample size and autocorrelation in the posterior sample of $f$. BNMR had a larger average effective sample size than BISOREG (1164 versus 1066 from a posterior sample of 5000 after thinning by 10 from an original sample of 50000) and had lower autocorrelation at lag 1 (0.273 versus 0.375). In part, this efficiency gain can be attributed to the clustering, which results in a smaller number of unique regression coefficients being sampled from a truncated multivariate normal distribution. However, there are numerous other differences in the priors and algorithms that likely also contribute to differences in efficiency. \section{Discussion} We propose BNMR to estimate a smooth monotone regression function. Our method is motivated by data generated from the MARS aerosol monitor. This affordable monitor measures the pressure drop across a filter. 
As particles accumulate on the filter, the pressure drop increases. The time-resolved PM$_{2.5}$ concentration is inferred from the first derivative of pressure drop scaled by the total mass collected on the filter. Hence, our objective is to estimate a smooth monotone function and make inference on the scaled derivative of that function. Our proposed approach uses a BP expansion with a Dirichlet process prior that performs both variable selection and clustering on the regression coefficients for the basis expansion. This formulation enables flexible monotone regression while allowing the model to be null when there is no association between predictor and outcome and allowing the function to be linear when there is no evidence of nonlinearity. Further, we can make coherent, closed-form inference on not only the function being estimated but also the derivatives of that function and the scaled derivative of the function. Our simulation study showed that BNMR performs similarly to other approaches for smooth nonlinear functions but offers improved inference at smaller sample sizes and when the true function is linear. By both clustering and selecting basis functions, BNMR is self-tuning and results in a smaller parameter space than methods that use variable selection alone. Our proposed method builds on a substantial body of research on statistical methods to measure or estimate exposure to PM$_{2.5}$, PM components, and other environmental pollutants. This includes methods to infer exposures from existing monitoring networks and from deployments of networks of portable devices, smartphones, and personal monitors \citep{Calder2008,Rundel2015,Das2017,Huang2018,Finazzi2019}. \section*{Supplemental Material} The supplemental material includes replicates of Figures~\ref{fig:BP}, \ref{fig:analysis1}, and \ref{fig:analysis2} for all 12 runs. It also includes additional simulation results and information on computation time and efficiency. 
The methods can be implemented with the {\tt R} package {\tt bnmr} available at \href{github.com/AnderWilson/bnmr}{github.com/AnderWilson/bnmr}. Data are available on request from the authors. \section*{Acknowledgement} This work was supported by the U.S. National Aeronautics and Space Administration and the Robert Wood Johnson Foundation through the Earth and Space Air Prize and by the U.S. Centers for Disease Control, National Institute for Occupational Safety and Health (OH010662 and OH011598). This work utilized the RMACC Summit supercomputer, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The RMACC Summit supercomputer is a joint effort of the University of Colorado Boulder and Colorado State University.
\section{Introduction} Black holes may be the most interesting celestial bodies predicted by General Relativity (GR). There is a strong gravitational field in the region near a black hole that can bend light rays. Due to this strong bending of light rays, shadows cast by black holes usually appear in the observer's sky~\cite{1}. The received light rays come from the black hole's unstable photon orbits, or the photon region~\cite{2,3}. The first image of a black hole taken by the Event Horizon Telescope (EHT)~\cite{4} confirmed the existence of black holes, attracting more researchers to study their observable effects, e.g., the shadows of black holes, the gravitational deflection of light or massive particles, and the like. \par For the simplest black hole, the spherical black hole, the shadow's boundary is a perfect circle. In the 1960s, Synge considered a static observer to calculate the angular radius of the Schwarzschild black hole's shadow in his seminal paper~\cite{synge}. For rotating black holes, the shadow's shape is no longer circular but somewhat flattened on one side because of the ``dragging'' of null geodesics by black holes. Bardeen first gave the shadow's shape of the Kerr black hole for a distant observer; one can find the results in Chandrasekhar's book~\cite{Chandrasekhar} and in~\cite{bardeen}. Since those pioneering works, shadows of objects have been extensively studied; one can refer to Refs.~\cite{perlick,changzhe1,cunha,Badia,Konoplya,Abdujabbarov,Atamurotov,Bisnovatyi-Kogan,Younsi,Abdujabbarov2,Tsupko,Papnoi,Cunha1,cunha2,Atamurotov1,Amir,Sharif,Babar,Kumar,Singh}. \par Very recently, the authors of Refs.~\cite{changzhe2,changzhe3} proposed a new method for calculating the size and shape of the shadow in terms of astrometric observables for finite-distance observers and introduced a new distortion parameter to describe the shadow's deviation from circularity. 
The shadows of the Kerr-de Sitter black hole for static observers were revisited in this way without introducing tetrads~\cite{changzhe2}. Furthermore, the appearance of the shadow of a static spherical black hole and the Kerr black hole was discussed in a unified framework~\cite{changzhe3}. \par This paper aims to apply this method to study the shadows of rotating Hayward-de Sitter black holes and examine the parameters' effects on the shadow's size and distortion. \par We organize this article as follows. In Sec.~\ref{sec2}, we briefly review the method of calculating black hole shadows using astronomical observables. In Sec.~\ref{sec3}, we apply this approach to rotating Hayward-de Sitter black holes to analyze the influences of the parameters on the shadow's shape and size. We conclude in Sec.~\ref{sec4}. In this paper we set $ G=c=1 $. \section{SHADOWS OF ROTATING BLACK HOLES}\label{sec2} In order to make this article self-contained, we briefly introduce some basics in this section. One can read Refs.~\cite{changzhe2,changzhe3} for details. \par In astrometry, the angle $ \epsilon $ between two incident light rays can be expressed by the following formula~\cite{Lebedev}: \begin{equation}\label{eq:angle1} \cos \epsilon \equiv \frac{\gamma^{*} w \cdot \gamma^{*} k}{\left|\gamma^{*} w\right| \left|\gamma^{*} k\right|}=\frac{w \cdot k}{(u \cdot w)(u \cdot k)}+1. \end{equation} Here, $ k $ and $ w $ are the tangent vectors of the two light rays, respectively. $ \gamma^{*} $ is the projection operator, $ \gamma_{\nu}^{\mu}=\delta_{\nu}^{\mu}+u^{\mu} u_{\nu} $, for a given observer whose 4-velocity is denoted by the vector $ u $. \par Generally speaking, the metric of a rotating black hole can be written as \begin{equation} \mathrm{d} s^{2}=g_{00} \mathrm{d} t^{2}+g_{11} \mathrm{d} r^{2}+g_{22} \mathrm{d} \theta^{2}+g_{33} \mathrm{d} \phi^{2}+2 g_{03} \mathrm{d} t \mathrm{d} \phi. 
\end{equation} The 4-velocity of a static observer is $ u=\frac{1}{\sqrt{g_{00}}} \partial_{t} $. For an asymptotically de Sitter spacetime, there is a cosmological horizon. The observer is fixed in the so-called domain of outer communication, i.e., the region between the event horizon and the cosmological horizon. When the observer is located at $ \theta =0 $, they will find that the shadow is a disk and the angular radius is \begin{equation}\label{eq7} \cot \psi =\operatorname{sgn}\left(\frac{\pi}{2}-\psi\right) \sqrt{\frac{g_{11}}{g_{22}\left(\frac{l^{2}}{l^{1}}\right)^{2}+\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)\left(\frac{l^{3}}{l^{1}}\right)^{2}}}. \end{equation} Here, we have chosen a light ray $ l=\left(l^{0},l^{1},l^{2},l^{3}\right) $ that comes from the photon region. ``$ \text{sgn} $'' represents the sign function. For an observer located at $ \theta>0 $, the shadow's silhouette is not a perfect circle as a consequence of the frame-dragging effect. As an example, assume the observer is located at $ \theta =\pi/2 $. Let $ k=\left(k^{0},k^{1},0,k^{3}\right) $ represent a light ray from a prograde orbit, which moves in the same direction as the black hole's rotation, and $ w=\left(w^{0},w^{1},0,w^{3}\right) $ represent a light ray from a retrograde orbit, which moves against the black hole's rotation. One can obtain the angle between the two light rays as \begin{equation}\label{8} \cot\gamma=\operatorname{sgn}(k, w) \sqrt{\frac{\left(\frac{g_{11}}{\mathcal{K}-\mathcal{W}}+\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right) \frac{1}{\frac{1}{\mathcal{W}}-\frac{1}{\mathcal{K}}}\right)^{2}}{g_{11}\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)}}, \end{equation} where $\mathcal{K}\equiv k^{3}/k^{1},\mathcal{W} \equiv w^{3}/w^{1} $, and $ \operatorname{sgn}(k, w)=\operatorname{sgn}\left(\cos\gamma\right)=\operatorname{sgn}\left(g_{11}+\left(g_{33}-g_{03}^{2}/g_{00}\right) \mathcal{K} \mathcal{W}\right) $. 
\par Similarly, the angle $ \alpha $ between a light ray $ l=\left(l^{0},l^{1},l^{2},l^{3}\right) $ from the photon region and $ k $ is \begin{equation}\label{10} \cot \alpha = \operatorname{sgn}(k,l)\sqrt{\frac{\left(g_{11}\frac{1}{\mathcal{K}-\mathcal{L}_{3}}+\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)\frac{1}{\frac{1}{\mathcal{L}_{3}}-\frac{1}{\mathcal{K}}}\right)^{2}}{g_{22}\left(g_{11}\left(\frac{\mathcal{L}_{2}}{\mathcal{K}-\mathcal{L}_{3}}\right)^{2}+\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)\left(\frac{\mathcal{L}_{2}}{1-\frac{\mathcal{L}_{3}}{\mathcal{K}}}\right)^{2}\right)+g_{11}\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)}} ; \end{equation} and the angle $ \beta $ between the light ray $ l $ and $ w $ is \begin{equation}\label{11} \cot \beta = \operatorname{sgn}(w,l)\sqrt{\frac{\left(g_{11}\frac{1}{\mathcal{W}-\mathcal{L}_{3}}+\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)\frac{1}{\frac{1}{\mathcal{L}_{3}}-\frac{1}{\mathcal{W}}}\right)^{2}}{g_{22}\left(g_{11}\left(\frac{\mathcal{L}_{2}}{\mathcal{W}-\mathcal{L}_{3}}\right)^{2}+\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)\left(\frac{\mathcal{L}_{2}}{1-\frac{\mathcal{L}_{3}}{\mathcal{W}}}\right)^{2}\right)+g_{11}\left(g_{33}-\frac{g_{03}^{2}}{g_{00}}\right)}} . \end{equation} In the above equations, $ \mathcal{L}_{2} \equiv l^{2}/l^{1} $, $ \mathcal{L}_{3} \equiv l^{3}/l^{1} $, $ \operatorname{sgn}(k, l)=\operatorname{sgn}\left(\cos\alpha \right)=\operatorname{sgn}\left(g_{11}+\left(g_{33}-g_{03}^{2}/g_{00}\right) \mathcal{K} \mathcal{L}_{3}\right) $, and $ \operatorname{sgn}(w, l)=\operatorname{sgn}\left(\cos\beta \right)=\operatorname{sgn}\left(g_{11}+\left(g_{33}-g_{03}^{2}/g_{00}\right) \mathcal{W}\mathcal{L}_{3} \right) $. 
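As a purely illustrative aside (not part of the original analysis), the astrometric angle formula~\eqref{eq:angle1}, which underlies all of the angle expressions above, can be evaluated numerically. The following Python sketch assumes the metric is supplied as a $4\times 4$ matrix; the function name is ours:

```python
# Illustrative sketch (not from the paper): evaluate the astrometric angle
# formula cos(eps) = (w.k)/((u.w)(u.k)) + 1 for a metric g given as a 4x4
# matrix, observer 4-velocity u, and light-ray tangent vectors k and w.
def cos_angle(g, u, k, w):
    dot = lambda x, y: sum(g[i][j] * x[i] * y[j]
                           for i in range(4) for j in range(4))
    return dot(w, k) / (dot(u, w) * dot(u, k)) + 1.0

# Sanity check in flat (Minkowski) spacetime: two null rays along x and y
# are seen at 90 degrees by a static observer, and a ray makes a zero
# angle with itself.
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
u = (1, 0, 0, 0)
k = (1, 1, 0, 0)   # null: -1 + 1 = 0
w = (1, 0, 1, 0)
print(cos_angle(eta, u, k, w))  # ~0.0  (epsilon = pi/2)
print(cos_angle(eta, u, k, k))  # ~1.0  (epsilon = 0)
```

The same function applies unchanged to the curved metrics considered below, since only the metric components enter the inner products.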
\par $ \gamma $, $ \alpha $ and $ \beta $ can provide us with the shadow of the black hole on the celestial sphere. For convenience in studying the shadow, one can use the following stereographic projection of the celestial coordinates to describe the shape of the shadow in a 2D plane~\cite{changzhe2}. \begin{equation}\label{14} \begin{aligned} Y_{\mathrm{sh}}&=\frac{2 \sin \Phi \sin \Psi}{1+\cos \Phi \sin \Psi} \\ &=\frac{2 \cos \beta \sin \gamma-2 \cot \gamma \sqrt{\sin ^{2} \gamma \sin ^{2} \beta+(\cos (\beta+\gamma)-\cos \alpha)(\cos (\beta-\gamma)-\cos \alpha)}}{1+\cos \beta \cos \gamma+\sqrt{\sin ^{2} \gamma \sin ^{2} \beta+(\cos (\beta+\gamma)-\cos \alpha)(\cos (\beta-\gamma)-\cos \alpha)}}, \end{aligned} \end{equation} \begin{equation}\label{15} \begin{aligned} Z_{\mathrm{sh}}&=\frac{2 \cos \Psi}{1+\cos \Phi \sin \Psi}\\ &=\frac{2 \csc \gamma \sqrt{(\cos \alpha-\cos (\beta+\gamma))(\cos (\beta-\gamma)-\cos \alpha)}}{1+\cos \beta \cos \gamma+\sqrt{\sin ^{2} \gamma \sin ^{2} \beta+(\cos (\beta+\gamma)-\cos \alpha)(\cos (\beta-\gamma)-\cos \alpha)}}. \end{aligned} \end{equation} Here, $ \Phi $ and $ \Psi $ are the azimuthal angle and polar angle in the celestial coordinate system. \par In order to quantitatively describe the shadow's shape, a distortion parameter $ \Xi $ in terms of $ \alpha $, $ \beta $ and $ \gamma $ is introduced, \begin{equation} \cos \Xi \equiv \frac{1+\cos \gamma-\cos \alpha-\cos \beta}{2 \sqrt{(1-\cos \alpha)(1-\cos \beta)}}, \end{equation} where $ \Xi $ ranges from $ 0 $ to $ \pi $. One has $ \cos\Xi =0 $ when the shadow's shape is circular on the celestial sphere, while a non-vanishing $ \cos\Xi $ quantifies the deviation from circularity. The authors of Ref.~\cite{changzhe3} first proposed this kind of quantity for the shadow. Now, we can use $ \gamma $ and $ \Xi $ to represent the sizes and shapes of shadows without confusion. 
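To illustrate how the projection~\eqref{14}--\eqref{15} and the distortion parameter work in practice, here is a minimal Python sketch of our own (the function names are ours; the paper provides only the formulas):

```python
from math import sin, cos, sqrt, acos, tan, pi

# Illustrative sketch (not from the paper): map the angular observables
# (alpha, beta, gamma) to the projected shadow point (Y_sh, Z_sh).
def project(alpha, beta, gamma):
    # The common radicand simplifies to (cos(alpha) - cos(beta)cos(gamma))^2.
    s = sqrt(sin(gamma)**2 * sin(beta)**2
             + (cos(beta + gamma) - cos(alpha)) * (cos(beta - gamma) - cos(alpha)))
    den = 1.0 + cos(beta) * cos(gamma) + s
    Y = (2.0 * cos(beta) * sin(gamma) - 2.0 * (cos(gamma) / sin(gamma)) * s) / den
    Z = (2.0 / sin(gamma)) * sqrt((cos(alpha) - cos(beta + gamma))
                                  * (cos(beta - gamma) - cos(alpha))) / den
    return Y, Z

# Distortion parameter Xi in [0, pi] from the same three angles.
def distortion(alpha, beta, gamma):
    return acos((1.0 + cos(gamma) - cos(alpha) - cos(beta))
                / (2.0 * sqrt((1.0 - cos(alpha)) * (1.0 - cos(beta)))))

# Sanity checks: the ray l = w (beta = 0, alpha = gamma) lands on the
# Y-axis at 2*tan(gamma/2), and alpha = beta = gamma gives Xi = pi/3.
print(project(1.0, 0.0, 1.0))     # ~(1.093, 0.0), i.e. (2*tan(0.5), 0)
print(distortion(1.0, 1.0, 1.0))  # ~1.047, i.e. pi/3
```

These two helpers are all that is needed to trace a shadow silhouette once $\alpha$, $\beta$ and $\gamma$ have been computed along the photon region.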
\section{APPLICATION IN ROTATING HAYWARD-DE SITTER BLACK HOLES}\label{sec3} In this section, we will apply the method described in the previous section to obtain the shadows of rotating Hayward-de Sitter black holes without introducing tetrads. \par The metric of rotating Hayward-de Sitter black holes in the Boyer-Lindquist coordinates $ \left(t,r,\theta,\phi\right) $ is~\cite{Ali} \begin{equation}\label{eq16} d s^{2}=-\frac{\Delta_{r}}{\Sigma}\left(d t-\frac{a \sin ^{2} \theta}{\rho} d \phi\right)^{2}+\frac{\Sigma}{\Delta_{r}} d r^{2}+\frac{\Sigma}{\Delta_{\theta}} d \theta^{2}+\frac{\Delta_{\theta} \sin ^{2} \theta}{\Sigma}\left(a d t-\frac{r^{2}+a^{2}}{\rho} d \phi\right)^{2}, \end{equation} where \begin{equation} \Sigma=r^2+a^{2}\cos^{2}\theta,\quad\rho=1+\frac{\Lambda}{3}a^{2}, \end{equation} \begin{equation} \Delta_{r}=\left(r^{2}+a^{2}\right)\left(1-\frac{\Lambda}{3}r^{2}\right)-2 \tilde{m}\left(r\right)r,\quad\Delta_{\theta}=1+\frac{\Lambda}{3}a^{2}\cos^{2}\theta, \end{equation} \begin{equation} \tilde{m}\left(r\right)=M\left(\frac{r^{3}}{r^{3}+g^{3}}\right). \end{equation} Here, $ M $ represents the mass of the black hole, $ a $ is the black hole spin parameter, $\Lambda $ is the cosmological constant, and the parameter $ g $ is the magnetic monopole charge arising from nonlinear electrodynamics. \subsection{Null geodesic equations and photon regions} The equations of motion of photons in the spacetime determined by the metric~\eqref{eq16} can be derived from the Lagrangian, \begin{equation} \mathcal{L}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}, \end{equation} where an overdot denotes the derivative with respect to an affine parameter. 
For the metric~\eqref{eq16}, the momenta ($ p_{\mu}=g_{\mu\lambda}\dot{x}^{\lambda} $) are \begin{gather}\label{eq21} p_{t}=\left(\frac{a^{2}\Delta_{\theta}\sin^{2}\theta}{\Sigma}-\frac{\Delta_{r}}{\Sigma}\right)\dot{t}+\left(\frac{a \Delta_{r} \sin ^{2} \theta}{\rho \Sigma}-\frac{a\left(a^{2}+r^{2}\right) \Delta_{\theta} \sin ^{2} \theta}{\rho \Sigma}\right)\dot{\phi},\\ p_{\phi}=\left(\frac{a \Delta_{r} \sin ^{2} \theta}{\rho \Sigma}-\frac{a\left(a^{2}+r^{2}\right) \Delta_{\theta} \sin ^{2} \theta}{\rho \Sigma}\right)\dot{t}+\left(\frac{\left(a^{2}+r^{2}\right)^{2} \Delta_{\theta} \sin ^{2} \theta}{\rho^{2} \Sigma}-\frac{a^{2} \Delta_{r} \sin ^{4} \theta}{\rho^{2} \Sigma}\right)\dot{\phi}\label{eq22},\\ p_{r}=\frac{\Sigma}{\Delta_{r}}\dot{r},\\ p_{\theta}=\frac{\Sigma}{\Delta_{\theta}}\dot{\theta},\label{eq23} \end{gather} where $ p_{t}=-E$ and $p_{\phi}=L_{\phi} $ denote the energy and angular momentum, respectively. Combining the momenta with the Hamilton-Jacobi equation, we can obtain the null geodesic equations. \par The Hamilton-Jacobi equation takes the following general form: \begin{equation}\label{eq24} -\frac{\partial S}{\partial \lambda}=\frac{1}{2} g^{\mu\nu}\frac{\partial S}{\partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}}, \end{equation} where $ \lambda $ is an affine parameter and $ S $ is the Jacobi action, which can be decomposed as a sum, \begin{equation}\label{eq25} S=-\frac{1}{2}m^{2}\lambda-Et+L_{\phi}\phi+S_{\theta}\left(\theta\right)+S_{r}\left(r\right), \end{equation} if $ S $ is separable. Here $ m $ is the mass of the particle, which is zero for photons. 
From~\eqref{eq24} and~\eqref{eq25}, one can get \begin{equation} \Delta_{\theta}\left(\frac{\partial S_{\theta}}{\partial \theta}\right)^{2}+\frac{\left(L_{\phi}\rho\csc\theta-aE\sin\theta\right)^{2}}{\Delta_{\theta}}=\mathcal{Q}, \end{equation} and \begin{equation} \Delta_{r}\left(\frac{\partial S_{r}}{\partial r}\right)^{2}-\frac{\left(\left(a^{2}+r^{2}\right)E-a\rho L_{\phi}\right)^{2}}{\Delta_{r}}=-\mathcal{Q}, \end{equation} where $ \mathcal{Q} $ is a constant of separation called Carter constant, and $ \partial S/\partial x^{\mu}=p_{\mu} $. With the Hamilton-Jacobi equation, it is not difficult to get the null geodesic equations as \begin{gather} \left(\Sigma \dot{r}\right)^{2}=R,\\ (\Sigma \dot{\theta})^{2}=\Theta,\label{28}\\ \Sigma \dot{t}=E\left(\frac{\left(a^{2}+r^{2}\right)\left(a^{2}+r^{2}-a \lambda \rho\right)}{\Delta_{r}}+\frac{a\left(\lambda \rho-a \sin ^{2} \theta\right)}{\Delta_{\theta}}\right),\\ \Sigma \dot{\phi}=\rho E\left(\frac{a\left(a^{2}+r^{2}\right)-a^{2} \lambda \rho}{\Delta_{r}}+\frac{\left(\lambda \rho-a \sin ^{2} \theta\right)}{\Delta_{\theta} \sin ^{2} \theta}\right),\label{30} \end{gather} where \begin{gather}\label{31} R=E^{2}\left(\left(a^{2}+r^{2}-a \lambda \rho\right)^{2}-\eta \Delta_{r}\right),\\ \Theta=E^{2}\left(\Delta_{\theta} \eta-(\lambda \rho \csc \theta-a \sin \theta)^{2}\right),\label{32} \end{gather} and \begin{equation} \lambda\equiv\frac{L_{\phi}}{E},\quad \eta\equiv\frac{\mathcal{Q}}{E^{2}}. 
\end{equation} For spherical orbits, \begin{equation} R\left(r_{c}\right)=0 \end{equation} and \begin{equation} \left.\frac{dR\left(r\right)}{dr}\right|_{r=r_{c}}=0 \end{equation} must be satisfied, which lead to \begin{gather} \lambda=\left.\frac{-4 r \Delta_{r}+\left(a^{2}+r^{2}\right) \Delta_{r}^{\prime}}{a \rho \Delta_{r}^{\prime}}\right|_{r=r_{c}},\\ \eta=\left.\frac{16 r^{2} \Delta_{r}}{\Delta_{r}^{\prime}\,^2}\right|_{r=r_{c}}, \end{gather} where $ \Delta_{r}^{\prime} $ denotes the derivative of $ \Delta_{r} $ with respect to $ r $, and $ r_{c} $ is the location of the photon sphere. Furthermore, we can rewrite $ R^{\prime\prime}\left(r_{c}\right) $ as \begin{equation}\label{38} R^{\prime\prime}\left(r_{c}\right)=\left.8 E^{2}\left(r^{2}+\frac{2r\Delta_{r}\left(\Delta_{r}^{\prime}-r\Delta_{r}^{\prime\prime}\right)}{\Delta_{r}^{\prime}\,^{2}}\right)\right|_{r=r_{c}}. \end{equation} A spherical null geodesic at $ r=r_{c} $ is unstable with respect to radial perturbations if $ R^{\prime\prime}\left(r_{c}\right)>0 $, and stable if $ R^{\prime\prime}\left(r_{c}\right)<0. $ Unstable photon orbits determine the contour of the shadow. The range of $ r_{c} $ (the photon region) can be determined by $ \Theta\ge 0 $ from~\eqref{28} and~\eqref{32}, which gives \begin{equation}\label{39} \left(\left(4 r \Delta_{r}-\Sigma \Delta_{r}^{\prime}\right)^{2}-16 a^{2} r^{2} \Delta_{r} \Delta_{\theta} \sin ^{2} \theta\right)_{r=r_{c}} \leq 0. \end{equation} From~\eqref{38} and~\eqref{39}, we can get $r_{c-}\le r_{c}\le r_{c+} $, where $ r_{c-} $ and $ r_{c+}$ are the minimum and maximum radial positions of the photon region. Restricting to light rays from the photon region, one can regard $ p^{\mu}=\dot{x}^{\mu} $ as functions of $ x^{\mu} $, $E $, and $ r_{c} $. \subsection{Sizes of shadow} For $ \theta =0 $, one can rewrite~\eqref{39} as \begin{equation} \left(4 r \Delta_{r}-\left(r^{2}+a^{2}\right) \Delta_{r}^{\prime}\right)_{r=r_{c}}=0. 
\end{equation} This means that the photon region reduces to a photon sphere, with $ r_{c}=r_{c-}=r_{c+} $. Substituting the metric~\eqref{eq16} and the geodesic equations into~\eqref{eq7}, one can calculate the angular radius of the shadow in the following form, \begin{equation} \cot \psi=\sqrt{\frac{\left(a^{2}+r^{2}-a \lambda \rho\right)^{2}-\eta \Delta_{r}}{\Delta_{r} \eta+a \lambda \rho\left(2 a^{2}+2 r^{2}-a \lambda \rho\right)}}, \end{equation} where $ \lambda $ and $ \eta $ are functions of $ r_{c} $. Here, we only consider the shadow as seen by observers located outside the photon region. \begin{center} \begin{figure}[ht] \centering \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{psiasa} \end{minipage}\label{5a} } \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{psiaslambda} \end{minipage}\label{5b} } \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{psiasg} \end{minipage}\label{5c} } \caption{The angular radius $ \psi $ of the shadow as a function of the distance from the rotating Hayward-de Sitter black holes for selected parameters; the observers are located at inclination angle $ \theta=0 $. The vertical dotted lines are the outer boundaries and the cosmological horizons. Here we set $ M=1 $.} \label{fig:5} \end{figure} \end{center} In Fig.~\ref{fig:5}, we plot the shadow's angular radius as a function of the distance from the black hole. The figures show that the photon sphere radius of the Schwarzschild black hole is the largest, and its shadow has the largest size among the shadows observed at the same position. Besides, no matter which of $ a $, $ g $, and $ \Lambda $ increases, the size of the shadow becomes smaller. \par The situation where the observer is located in the equatorial plane ($ \theta=\pi/2 $) is more complicated. 
In this case,~\eqref{39} can be rewritten as \begin{equation} \left(\left(4 r \Delta_{r}-r^{2} \Delta_{r}^{\prime}\right)^{2}-16 a^{2} r^{2} \Delta_{r}\right)_{r=r_{c}} \leq 0. \end{equation} Then one can obtain $ r_{c-}\le r_{c}\le r_{c+} $. From~\eqref{8}, we get the angular diameter $ \gamma $, \begin{equation} \cot \gamma=\operatorname{sgn}\left(1+\frac{\Delta_{r}^{2}}{\rho^{2}\left(\Delta_{r}-a^{2}\right)} \mathcal{K} \mathcal{W}\right) \left| \frac{\rho \sqrt{\Delta_{r}-a^{2}}}{\Delta_{r}} \frac{1}{\mathcal{K}-\mathcal{W}} +\frac{\Delta_{r}}{\rho \sqrt{\Delta_{r}-a^{2}} } \frac{1}{\frac{1}{\mathcal{W}}-\frac{1}{\mathcal{K}}}\right|, \end{equation} where \begin{equation}\label{44} \mathcal{K}=\left.\frac{p^{\phi}}{p^{r}}\right|_{r_{c}=r_{c-}}, \end{equation} \begin{equation}\label{45} \mathcal{W}=\left.\frac{p^{\phi}}{p^{r}}\right|_{r_{c}=r_{c+}}. \end{equation} From~\eqref{28} and~\eqref{30}, we get \begin{equation}\label{46} \frac{p^{\phi}}{p^{r}}=\frac{\dot{\phi}}{\dot{r}}=\frac{\rho\left(a^{3}+a\left(r^{2}-\Delta_{r}\right)-a^{2} \lambda \rho+\Delta_{r} \lambda \rho\right)}{\Delta_{r} \sqrt{\left(a^{2}+r^{2}-a \lambda \rho\right)^{2}-\Delta_{r} \eta}}. \end{equation} It is worth noting that $ \lambda $ and $ \eta $ can be regarded as functions of $ r_{c} $, so~\eqref{46} is a function of $ r $ and $ r_{c} $. Fig.~\ref{fig:6} shows how the angular diameter $ \gamma $ changes with the distance from the black hole. It is not difficult to find that $ \gamma $ decreases with the increase of $ a $, $ \Lambda $, and $ g $, and that the outer boundary of the photon region $ r_{c+} $ is larger than the radius of the photon sphere in Schwarzschild spacetime. Therefore, the size of the black hole shadow decreases with the increase of $ a $, $ \Lambda $, and $ g $, and the shadow of the Schwarzschild black hole has the largest size. 
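The quantities above can be evaluated numerically. The following Python sketch (our own illustration, not the paper's code; it assumes $\psi<\pi/2$ and uses arbitrary parameter values) computes $\Delta_r$, the critical constants $\lambda(r_c)$ and $\eta(r_c)$, and the on-axis angular radius $\psi$; in the Schwarzschild limit ($a\to 0$, $g=0$, $\Lambda=0$) it reproduces the classic values $\eta=27M^2$ at $r_c=3M$ and $\psi=\pi/4$ at $r=6M$:

```python
from math import sqrt, atan, pi

# Illustrative sketch (not from the paper): Delta_r, the critical constants
# lambda(r_c) and eta(r_c), and the on-axis angular radius psi(r) for the
# rotating Hayward-de Sitter metric. Parameter values are arbitrary.
def Delta_r(r, M, a, Lam, g):
    m = M * r**3 / (r**3 + g**3)                      # \tilde m(r)
    return (r**2 + a**2) * (1.0 - Lam / 3.0 * r**2) - 2.0 * m * r

def dDelta_r(r, M, a, Lam, g):
    m = M * r**3 / (r**3 + g**3)
    mp = 3.0 * M * r**2 * g**3 / (r**3 + g**3)**2     # d\tilde m/dr
    return (2.0 * r * (1.0 - Lam / 3.0 * r**2)
            - (r**2 + a**2) * 2.0 * Lam / 3.0 * r
            - 2.0 * (mp * r + m))

def critical_constants(rc, M, a, Lam, g):
    rho = 1.0 + Lam / 3.0 * a**2
    Dr, Drp = Delta_r(rc, M, a, Lam, g), dDelta_r(rc, M, a, Lam, g)
    lam = (-4.0 * rc * Dr + (rc**2 + a**2) * Drp) / (a * rho * Drp)
    eta = 16.0 * rc**2 * Dr / Drp**2
    return lam, eta

def angular_radius(r, rc, M, a, Lam, g):
    # Angular radius psi of the shadow for an observer on the rotation axis,
    # assuming psi < pi/2 (so the sign function in the formula is +1).
    rho = 1.0 + Lam / 3.0 * a**2
    lam, eta = critical_constants(rc, M, a, Lam, g)
    Dr = Delta_r(r, M, a, Lam, g)
    num = (a**2 + r**2 - a * lam * rho)**2 - eta * Dr
    den = Dr * eta + a * lam * rho * (2.0 * a**2 + 2.0 * r**2 - a * lam * rho)
    return atan(1.0 / sqrt(num / den))                # invert cot(psi)

# Near-Schwarzschild check (a -> 0, g = 0, Lam = 0).
lam, eta = critical_constants(3.0, 1.0, 1e-6, 0.0, 0.0)
print(eta)                                            # ~27
print(angular_radius(6.0, 3.0, 1.0, 1e-6, 0.0, 0.0))  # ~pi/4
```

Replacing the finite charge and spin values here reproduces the qualitative trends discussed above: increasing any of $a$, $\Lambda$, or $g$ shrinks the photon-region radii and hence the shadow.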
\begin{center} \begin{figure}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{gammaasa} \end{minipage}\label{6a} } \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{gammaaslambda} \end{minipage}\label{6b} } \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{gammaasg} \end{minipage}\label{6c} } \caption{The angular diameter $ \gamma $ of the shadow as a function of the distance from the rotating Hayward-de Sitter black holes for selected parameters; the observers are located at inclination angle $ \theta=\pi/2 $. The vertical dotted lines are the outer boundaries and the cosmological horizons. Here we set $M=1.$} \label{fig:6} \end{figure} \end{center} \subsection{Shadow's shape} In this part, we will consider the shadow's shape in different situations. Observers located at inclination angle $ \theta=0 $ would see the shadows as perfect circles, while observers located at $ \theta=\frac{\pi}{2} $ would find that the shadows are distorted. 
According to~\eqref{10} and~\eqref{11}, the angular distances $ \alpha $ and $ \beta $ can be read as \begin{equation} \cot \alpha=\operatorname{sgn}\left(1+\frac{\Delta_{r}^{2}}{\left(\Delta_{r}-a^{2}\right) \rho^{2}} \mathcal{K} \mathcal{L}_{3}\right) \frac{\left|\frac{\Delta_{r}}{\rho\sqrt{\Delta_{r}-a^{2}}} \frac{1}{\frac{1}{\mathcal{L}_{3}}-\frac{1}{\mathcal{K}}}+\frac{\rho\sqrt{\Delta_{r}-a^{2}}}{\Delta_{r}} \frac{1}{\mathcal{K}-\mathcal{L}_{3}}\right|}{\sqrt{1+\left(\frac{\mathcal{L}_{2}}{1-\frac{\mathcal{L}_{3}}{\mathcal{K}}}\right)^{2} \Delta_{r}+\frac{\left(\Delta_{r}-a^{2}\right) \rho^{2}}{\Delta_{r}}\left(\frac{\mathcal{L}_{2}}{\mathcal{K}-\mathcal{L}_{3}}\right)^{2}}}, \end{equation} and \begin{equation} \cot \beta=\operatorname{sgn}\left(1+\frac{\Delta_{r}^{2}}{\left(\Delta_{r}-a^{2}\right) \rho^{2}} \mathcal{W} \mathcal{L}_{3}\right) \frac{\left|\frac{\Delta_{r}}{\rho\sqrt{\Delta_{r}-a^{2}}} \frac{1}{\frac{1}{\mathcal{L}_{3}}-\frac{1}{\mathcal{W}}}+\frac{\rho\sqrt{\Delta_{r}-a^{2}}}{\Delta_{r}} \frac{1}{\mathcal{W}-\mathcal{L}_{3}}\right|}{\sqrt{1+\left(\frac{\mathcal{L}_{2}}{1-\frac{\mathcal{L}_{3}}{\mathcal{W}}}\right)^{2} \Delta_{r}+\frac{\left(\Delta_{r}-a^{2}\right) \rho^{2}}{\Delta_{r}}\left(\frac{\mathcal{L}_{2}}{\mathcal{W}-\mathcal{L}_{3}}\right)^{2}}}, \end{equation} where $ \mathcal{K} $ and $ \mathcal{W} $ are given by~\eqref{44},~\eqref{45} and \begin{equation} \mathcal{L}_{2}\equiv\left.\frac{p^{\theta}}{p^{r}}\right|_{r_{c}}, \end{equation} \begin{equation} \mathcal{L}_{3}\equiv\left.\frac{p^{\phi}}{p^{r}}\right|_{r_{c}}, \end{equation} with \begin{equation} \frac{p^{\theta}}{p^{r}}=\frac{\dot{\theta}}{\dot{r}}=\pm \sqrt{\frac{\eta-(\lambda \rho-a)^{2}}{\left(a^{2}+r^{2}-a \lambda \rho\right)^{2}-\Delta_{r} \eta}}. 
\end{equation} \begin{center} \begin{figure}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{\textwidth} \includegraphics[width=0.45\textwidth]{r40Kerr} \includegraphics[width=0.45\textwidth]{r631Kerr} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{\textwidth} \includegraphics[width=0.45\textwidth]{r4Kerr1} \includegraphics[width=0.45\textwidth]{r4Kerr2} \end{minipage} } \caption{Shadows of rotating Hayward-de Sitter black holes with $ g=0 $ on the projective plane $ \left(Y,Z\right) $ for selected parameters. $ r $ is the distance from the observer to the black holes. Here we set $ M=1 $. (a) Shadows of rotating Kerr(-de Sitter) black holes for selected spin parameters for distant observers. (b) Shadows of rotating Kerr-de Sitter black holes for observers located at $ r=4 $. } \label{fig:7} \end{figure} \end{center} \begin{figure}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{r4Haywardlambda} \includegraphics[width=1\textwidth]{der4Haywardlambda} \end{minipage}\label{8a} } \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{r4Haywardg} \includegraphics[width=1\textwidth]{der4Haywardg} \end{minipage}\label{8b} } \subfigure[]{ \begin{minipage}[t]{0.311\textwidth} \includegraphics[width=1\textwidth]{r4Haywarda} \includegraphics[width=1\textwidth]{der4Haywarda} \end{minipage}\label{8c} } \caption{The shapes of shadows and the corresponding distortion parameters $ \Xi $ as functions of $ \frac{\Phi}{\gamma} $ for selected parameters, for observers located at $ r=4 $. Here we set $ M=1 $. (a) Shadows and distortion parameters of rotating Hayward-de Sitter black holes for selected cosmological constants. (b) Shadows and distortion parameters of rotating Hayward-de Sitter black holes for selected magnetic monopole charges. 
(c) Shadows and distortion parameters of rotating Hayward-de Sitter black holes for selected spin parameters.} \label{fig:8} \end{figure} \begin{figure}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{0.22\textwidth} \includegraphics[width=\textwidth]{r40Haywarda} \includegraphics[width=\textwidth]{der40Haywarda} \end{minipage}\label{9a} } \subfigure[]{ \begin{minipage}[t]{0.22\textwidth} \includegraphics[width=\textwidth]{r40Haywardg} \includegraphics[width=\textwidth]{der40Haywardg} \end{minipage}\label{9b} } \subfigure[]{ \begin{minipage}[t]{0.22\textwidth} \includegraphics[width=\textwidth]{r631Haywarda} \includegraphics[width=\textwidth]{der631Haywarda} \end{minipage}\label{9c} } \subfigure[]{ \begin{minipage}[t]{0.22\textwidth} \includegraphics[width=\textwidth]{r631Haywardg} \includegraphics[width=\textwidth]{der631Haywardg} \end{minipage}\label{9d} } \caption{The shapes of shadows and the corresponding distortion parameters $ \Xi $ as functions of $ \frac{\Phi}{\gamma} $ for selected parameters, for distant observers. Here we set $ M=1 $. (a) Shadows and distortion parameters of rotating Hayward black holes for selected spin parameters for observers located at $ r=40 $. (b) Shadows and distortion parameters of rotating Hayward black holes for selected magnetic monopole charges for observers located at $ r=40 $. (c) Shadows and distortion parameters of rotating Hayward-de Sitter black holes for selected spin parameters for observers located at $ r=6.31 $. (d) Shadows and distortion parameters of rotating Hayward-de Sitter black holes for selected magnetic monopole charges for observers located at $ r=6.31 $. 
} \label{fig:9} \end{figure} \begin{figure}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{lambda0} \includegraphics[width=\textwidth]{delambda0} \end{minipage}\label{10a} } \subfigure[]{ \begin{minipage}[t]{0.475\textwidth} \includegraphics[width=\textwidth]{lambda005} \includegraphics[width=\textwidth]{delambda005} \end{minipage}\label{10b} } \caption{The shapes of shadows and the corresponding distortion parameters $ \Xi $ as functions of $ \frac{\Phi}{\gamma} $ for observers at selected positions $ r $. } \label{fig:10} \end{figure} \par In Fig.~\ref{fig:7}, we set $g=0 $ and recover the same results as for the Kerr(-de Sitter) black holes in Ref.~\cite{changzhe2}. In Figs.~\ref{fig:8} and~\ref{fig:9}, we scale the shadows appropriately so that the degree of distortion of these shadows can be compared qualitatively. The upper parts of Figs.~\ref{fig:8} and~\ref{fig:9} are the scaled shadows for the selected parameters, and the lower parts are the corresponding distortion parameters as functions of $ \Phi/\gamma $, which give a quantitative description of the shadow's distortion. In Fig.~\ref{fig:8}, the observers are not far from the photon regions of the black holes, while in Fig.~\ref{fig:9}, the observers are far away from the black holes. It is not difficult to find that the shadow's distortion decreases as the cosmological constant increases. In contrast, the distortion increases with an increase of $ g $ or $a $. \par In Fig.~\ref{fig:10}, we plot the shapes and distortion parameters of the shadows for observers at different distances from the center of the black hole. It can be seen that the distortion parameter increases with distance. From the above discussion, we know that when the parameters $ g $ and $ a $ of rotating Hayward-de Sitter black holes are at their maximum and the cosmological constant is zero, the distortion of the shadow is the largest. 
\section{CONCLUSIONS AND DISCUSSIONS}\label{sec4} In this article, we calculated the size and shape of the rotating Hayward-de Sitter black hole shadow for static observers at a finite distance in terms of astronomical observables. For $ \theta =0 $, the shadow's boundary is a perfect circle, but for $ \theta =\frac{\pi}{2} $, the shadow's boundary is distorted. To quantitatively describe the distortion of the shadows, we plotted the distortion parameter as affected by the black hole's parameters, quantifying the deviation of the shape from circularity. We found that no matter which parameter increases, the size of the shadow shrinks. At the same distance, a Schwarzschild black hole has the largest shadow. Furthermore, when the parameters $ g $ and $ a $ of rotating Hayward-de Sitter black holes are maximal and the cosmological constant is zero, the distortion of the black hole shadow is the largest; the distortion parameter also increases with distance. \par We only considered static observers fixed at inclination angles $ \theta =0 $ and $ \theta =\frac{\pi}{2} $, but this method is suitable for arbitrary observers. Studying the shadows of black holes is an important way of studying the properties of black holes, from which one can obtain rich information about space-time geometry. \section*{Conflicts of Interest} The authors declare that there are no conflicts of interest regarding the publication of this paper. \section*{Acknowledgments} We would like to thank the National Natural Science Foundation of China (Grant No.~11571342) for supporting this work. \newpage \section*{References}
\section{\label{sec:Introduction}Introduction} In the last decade, there has been a sharp increase in the demand for data traffic~\cite{cisco2014global}. To address such massive consumer demand for data communications, especially from user equipments (UEs) such as smartphones and tablets, many noteworthy technologies have been proposed~\cite{7126919}, such as small cell networks (SCNs), cognitive radio, and device-to-device (D2D) communications. In particular, D2D communications are usually defined as the direct transfer of data between mobile UEs in proximity. Due to the short communication distance between a D2D user pair, D2D communications hold great promise in improving network performance such as coverage, spectral efficiency, energy efficiency, and delay. Recently, D2D underlaid cellular networks have been standardized by the 3rd Generation Partnership Project (3GPP). The major challenge in a D2D-enabled underlaid cellular network is the inter-tier and intra-tier interference caused by aggressive frequency reuse, where cellular UEs and D2D UEs share the same spectrum resources. In parallel with the standardization effort, there has recently been a surge of academic studies in this area~\cite{7147772,6928445,lee2015power,6909030}. In more detail, by using stochastic geometry theory, Andrews \textit{et al.} conducted network performance analyses for the downlink (DL)~\cite{6042301} and the uplink (UL)~\cite{6516885} of SCNs, in which UEs and/or base stations (BSs) were assumed to be randomly deployed according to a homogeneous Poisson distribution. In~\cite{peng2014device}, Peng developed an analytical framework for the D2D underlaid cellular network in the DL, where the Rician fading channel model was adopted to model the small-scale fast fading for the D2D communication links.
In~\cite{7147772}, Liu provided a unified framework to analyze the downlink outage probability in a multi-channel environment with Rayleigh fading, where D2D UEs were selected based on the received signal strength from the nearest BS. In~\cite{7073589}, Sun presented an analytical framework for evaluating the network performance in terms of load-aware coverage probability and network throughput using a dynamic TDD scheme in which mobile users in proximity can engage in D2D communications. In~\cite{7147834}, George proposed exclusion regions to protect cellular receivers from excessive interference from active D2D transmitters. In~\cite{7469370}, the authors derived approximate expressions for the distance distribution between two D2D peers conditioned on the core network\textquoteright s knowledge of the cellular network and analyzed the performance of network-assisted D2D discovery in random spatial networks. Although the existing work provides precious insights into resource allocation and mode selection for D2D communications, there still exist several problems: \begin{itemize} \item In some studies, only a single BS with one cellular UE and one D2D pair was considered, which did not take into account the influence from other cells. \item The mode selection schemes in the literature were not very practical: they were mostly based on distance only and treated D2D receiver UEs as an additional tier of nodes, independent of the cellular UEs and the D2D transmitter UEs. Such a tier of D2D receiver UEs without cellular capabilities appears from nowhere and is hard to justify. \item D2D communications usually coexist with the UL of cellular communications due to the relatively low inter-tier interference. Such a feature has not been well treated in the literature. \item The pathloss model is not practical, e.g., LOS/NLOS transmissions have not been well studied in the context of D2D, and usually the same pathloss model was used for both the cellular and the D2D tiers.
\item Shadow fading was widely ignored in the analysis, which does not reflect realistic networks. \end{itemize} To sum up, to date there has been no work investigating the D2D-enabled UL cellular network with the consideration of lognormal shadow fading. To fill this gap in the theoretical study, in this paper, we consider the D2D-enhanced network and develop a tractable framework to quantify the network performance of a D2D-enabled UL cellular network. The main contributions of this paper are summarized as follows: \begin{itemize} \item We introduce a hybrid network model, in which the random and unpredictable spatial positions of mobile users and base stations are modeled as Poisson point processes. This model captures several important characteristics of a D2D-enabled UL cellular network, including lognormal fading, transmit power control, and orthogonal scheduling of cellular users within a cell. \item We consider a flexible D2D mode selection based on the maximum DL received power from the strongest base station. Such a maximum DL signal strength based mode selection scheme helps to mitigate the undesirable interference from D2D transmitters. \item We present a general analytical framework, which considers that the D2D UEs are distributed according to a non-homogeneous PPP. With this approach, a unified performance analysis is conducted for underlaid D2D communications, and we derive analytical results in terms of the coverage probability and the area spectral efficiency (ASE) for both cellular UEs and D2D UEs. Our results shed new light on the system design of D2D communications. \end{itemize} The rest of this paper is organized as follows. In Section~\ref{sec:System-model,-Assumptions,}, we introduce the system model and assumptions used in this paper.
Section~\ref{sec:Analysis} presents our main results. We provide numerical results and more discussion in Section~\ref{sec:SIMULATION-AND-DISCUSSION} and conclude our work in Section~\ref{sec:Conclusion}. \section{\label{sec:System-model,-Assumptions,}System Model} In this section, we present the system model that is used in this paper. \subsection{The Path Loss Model} We consider a D2D underlaid UL cellular network, where BSs and UEs, including cellular UL UEs and D2D UEs, are assumed to be distributed on an infinite two-dimensional plane $\mathbb{R}^{2}$. We assume that the cellular BSs are spatially distributed according to a homogeneous PPP of intensity $\lambda_{b}$, i.e., $\varPhi_{b}=\{X_{i}\}$, where $X_{i}$ denotes the spatial location of the $i$-th BS. Moreover, the UEs are also distributed in the network region according to another independent homogeneous PPP $\varPhi_{u}$ of intensity $\lambda_{u}$. The path loss functions for the UE-to-BS links and UE-to-UE links can be written as follows \begin{equation} PL_{\textrm{cellular}}^{^{\textrm{dB}}}=A_{B}^{\textrm{dB}}+\alpha_{B}10\log_{10}R+\xi_{B}, \end{equation} and \begin{equation} PL_{\textrm{D2D}}^{^{\textrm{dB}}}=A_{D}^{\textrm{dB}}+\alpha_{D}10\log_{10}R+\xi_{D}, \end{equation} where the path loss is expressed in dB, $A_{B}^{\textrm{dB}}$ and $A_{D}^{\textrm{dB}}$ are constants determined by the transmission frequency, and $\alpha_{B}$ and $\alpha_{D}$ are the path loss exponents for the UE-to-BS links and UE-to-UE links, respectively.
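As a quick numerical illustration (not part of the derivation), the dB-domain path loss above and the linear product form used in the received-power expression later are two views of the same link budget. A minimal Python sketch, with made-up parameter values:

```python
import numpy as np

# dB-domain path loss as in Eqs. (1)-(2): PL_dB = A_dB + 10*alpha*log10(R) + xi
def pathloss_db(r, a_db, alpha, xi):
    return a_db + 10.0 * alpha * np.log10(r) + xi

# The same link budget in linear units: P_rx = P_tx * A * H * r^(-alpha),
# with gain constant A = 10^(-A_dB/10) and lognormal shadowing H = 10^(-xi/10).
def rx_power_mw(p_tx_mw, a_db, alpha, r, xi):
    return p_tx_mw * 10.0 ** (-a_db / 10.0) * 10.0 ** (-xi / 10.0) * r ** (-alpha)

# Hypothetical numbers: 46 dBm transmit power, 200 m link, 5 dB shadowing draw.
p_tx_dbm, r, a_db, alpha, xi = 46.0, 200.0, 32.9, 3.75, 5.0
p_rx_dbm = p_tx_dbm - pathloss_db(r, a_db, alpha, xi)
p_rx_mw = rx_power_mw(10.0 ** (p_tx_dbm / 10.0), a_db, alpha, r, xi)
# the two forms agree: 10*log10(p_rx_mw) equals p_rx_dbm
```

The sign convention here treats `a_db` as a positive dB intercept of the loss; the paper's constant $A_{B}$ absorbs this as a linear gain.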
Moreover, we denote by $\mathcal{H}_{B}$ and $\mathcal{H}_{D}$ the lognormal fading coefficients of a CU-to-BS link and a UE-to-UE link, respectively, and we assume that $\mathcal{H}_{B}=\exp\left(\kappa\xi_{db}^{B}\right)$ and $\mathcal{H}_{D}=\exp\left(\kappa\xi_{db}^{D}\right)$ are lognormal random variables, where $\kappa=\frac{\ln10}{10}$ is a constant, $\xi_{db}^{B}\thicksim N\left(0,\sigma_{B}{}^{2}\right)$, and $\xi_{db}^{D}\thicksim N\left(0,\sigma_{D}{}^{2}\right)$. The received power for a typical UE from a BS $b$ can be written as \begin{equation} P_{b}^{\textrm{rx}}=A_{B}P_{B}\mathrm{\mathcal{H}_{B}}\left(b\right)R^{-\alpha_{B}}, \end{equation} where $A_{B}=10^{\frac{1}{10}A_{B}^{\textrm{dB}}}$ is a constant determined by the transmission frequency for BS-to-UE links, $P_{B}$ is the transmission power of a BS, and $\mathcal{H}_{B}\left(b\right)$ is the lognormal shadowing from BS $b$ to the typical UE. There are two modes for UEs in the considered D2D-enabled UL cellular network, i.e., the cellular mode and the D2D mode. Each UE is assigned a mode to operate in according to the comparison of the received DL power from its serving BS with a threshold. In more detail, \begin{equation} Mode=\begin{cases} \textrm{Cellular}, & \textrm{if }P^{\ast}=\underset{b}{\max}\left\{ P_{b}^{\textrm{rx}}\right\} >\beta\\ \textrm{D2D}, & \textrm{otherwise} \end{cases}, \end{equation} where the string variable $Mode$ takes the value of 'Cellular' or 'D2D'. In particular, for a tagged UE, if $P^{\ast}$ is larger than a specific threshold $\beta>0$, then the UE is not appropriate for working in the D2D mode due to its potentially large interference, and hence it should operate in the cellular mode and directly connect with a BS. Otherwise, it should operate in the D2D mode. The UEs that are associated with cellular BSs are referred to as cellular UEs (CUs), and the distance from a CU to its associated BS is denoted by $R^{B}$.
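The mode selection rule above can be sketched as a small decision function. A hedged illustration with toy linear-unit inputs (not the simulation parameters used later in the paper):

```python
import numpy as np

def select_mode(distances, shadowing, p_b, a_b, alpha_b, beta):
    """Mode rule of Eq. (4): a UE operates in cellular mode iff the strongest
    DL received power A_B * P_B * H_b * R_b^(-alpha_B) over all candidate BSs
    exceeds the threshold beta (all quantities in linear units)."""
    p_rx = a_b * p_b * np.asarray(shadowing) * np.asarray(distances) ** (-alpha_b)
    return "Cellular" if p_rx.max() > beta else "D2D"

# Toy example: two BSs at normalized distances 1 and 2, unit shadowing.
mode = select_mode([1.0, 2.0], [1.0, 1.0], p_b=1.0, a_b=1.0, alpha_b=4.0, beta=0.5)
```

With the nearest BS delivering unit received power and $\beta=0.5$, the UE selects the cellular mode; raising $\beta$ above 1 would push it into D2D mode.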
From~\cite{6928445}, CUs are distributed following a non-homogeneous PPP $\varPhi_{c}$. For a D2D UE, we adopt the same assumption as in~\cite{7147772} that it randomly decides to be a D2D transmitter or a D2D receiver with equal probability at the beginning of each time slot, and a D2D receiver UE selects the strongest D2D transmitter UE for signal reception. Based on the above system model, we can obtain the intensity of CUs as $\lambda_{c}=q\lambda_{u}$, where $q$ denotes the probability of $P^{\ast}>\beta$ and will be derived in closed form in Section~\ref{sec:Analysis}. It is apparent that the D2D UEs are distributed following another non-homogeneous PPP $\varPhi_{d}$, the intensity of which is $\lambda_{d}=\left(1-q\right)\lambda_{u}$. \subsection{The Underlaid D2D Model} We assume an underlaid D2D model. That is, each D2D transmitter reuses the frequency with cellular UEs, which incurs inter-tier interference from D2D to cellular. However, there is no intra-cell interference between cellular UEs, since we assume an orthogonal multiple access technique in each BS. It follows that there is only one active uplink transmitter per cell. Here, we consider a fully loaded network with $\lambda_{u}\gg\lambda_{b}$, so that on each time-frequency resource block, each BS has at least one active UE to serve in its coverage area. Note that the case of $\lambda_{u}<\lambda_{b}$ is not trivial, as it even changes the capacity scaling law~\cite{Ding2017capScaling}. Due to the page limit, we leave the study of $\lambda_{u}<\lambda_{b}$ to our future work. Generally speaking, the active CUs can be treated as a thinned PPP $\varPhi_{c}$ with the same intensity $\lambda_{b}$ as the cellular BSs.
Moreover, we assume a channel inversion strategy for the power control of cellular UEs, i.e., \begin{equation} P_{c_{i}}=P_{0}\left(\frac{R_{i}^{\alpha_{B}}}{\mathcal{H}_{c_{i}}A_{B}}\right)^{\varepsilon}, \end{equation} where $P_{c_{i}}$ is the transmission power of the $i$-th cellular link, $R_{i}$ is the distance of the $i$-th link from a CU to the target BS, $\alpha_{B}$ denotes the pathloss exponent, $\varepsilon\in(0,1]$ is the fractional path loss compensation, and $P_{0}$ is the receiver sensitivity. BSs and D2D transmitters use constant transmit powers $P_{B}$ and $P_{d}$, respectively. Besides, we denote the additive white Gaussian noise (AWGN) power by $\sigma^{2}$. \subsection{The Performance Metrics} According to~\cite{6042301}, the coverage probability is defined as \begin{equation} P_{Mode}\left(T,\lambda_{u},\alpha_{B,D}\right)=\Pr\left[\textrm{SINR}>T\right], \end{equation} where $T$ is the SINR threshold, the subscript string variable $Mode$ takes the value of 'Cellular' or 'D2D', and the interference in this paper consists of the interference from both cellular UEs and D2D transmitters. Furthermore, the area spectral efficiency (ASE) in $\textrm{bps/Hz/k\ensuremath{m^{2}}}$ for a given $\lambda_{b,u}$ can be formulated as \[ A_{Mode}^{\textrm{ASE}}\left(\lambda_{Mode},\gamma_{0}\right)=\lambda_{Mode}\int_{\gamma_{0}}^{\infty}\log_{2}\left(1+x\right)f_{X}\left(x\right)dx, \] where $\gamma_{0}$ is the minimum working SINR for the considered network, and $f_{X}\left(x\right)$ is the PDF of the SINR observed at the typical receiver for a particular value of $\lambda_{Mode}$.
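The ASE definition is a one-dimensional integral and is easy to evaluate numerically once $f_{X}$ is known. A hedged sketch with a stand-in exponential SINR distribution (purely illustrative; this is not the SINR distribution derived later, and the values of $\lambda_{Mode}$, $c$, and $\gamma_{0}$ are made up):

```python
import numpy as np

# ASE = lambda * \int_{gamma0}^{inf} log2(1+x) f_X(x) dx, evaluated with a
# stand-in SINR PDF f_X(x) = c*exp(-c*x) used purely for illustration.
lam_mode = 5.0    # links per km^2 (hypothetical)
c = 0.5           # rate of the stand-in exponential SINR distribution
gamma0 = 1.0      # minimum working SINR (0 dB)

x = np.linspace(gamma0, 200.0, 400001)   # exp(-c*200) makes the tail negligible
ase = lam_mode * np.trapz(np.log2(1.0 + x) * c * np.exp(-c * x), x)
```

Integration by parts gives the equivalent form $\log_{2}(1+\gamma_{0})e^{-c\gamma_{0}}+\frac{1}{\ln 2}\int_{\gamma_{0}}^{\infty}\frac{e^{-cx}}{1+x}dx$, which is a handy cross-check of the quadrature.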
For the whole network consisting of both cellular UEs and D2D UEs, the sum ASE can be written as \begin{equation} A^{\textrm{ASE}}=A_{\textrm{Cellular}}^{\textrm{ASE}}+A_{\textrm{D2D}}^{\textrm{ASE}}. \end{equation} \section{\label{sec:Analysis}Main Results} First of all, we introduce the Equivalence Method that will be used throughout the paper~\cite{6566864}. Based on this method, we can transfer the strongest BS association scheme to the nearest BS association scheme. More specifically, for the $i$-th cellular link, let $\overline{R}_{i}=\mathcal{H}_{B}^{-1/\alpha_{B}}R_{i}^{B}$, where $R_{i}^{B}$ is the distance separating a typical user from its tagged strongest base station and $\bar{R}_{i}$ is the distance separating a typical user from its tagged nearest base station in another PPP. Then the received signal power in Eq.~(3) and the transmission power in Eq.~(5) can be rewritten as \begin{equation} P^{\ast}=\max\left\{ P_{B}A_{B}\left(\overline{R}_{i}\right)^{-\alpha_{B}}\right\} , \end{equation} and \begin{equation} P_{c_{i}}=\frac{P_{0}}{A_{B}{}^{\varepsilon}}\left(\overline{R}_{i}\right)^{\alpha_{B}\varepsilon}. \end{equation} Assume that the generic fading satisfies $\mathbb{E}_{\mathcal{H}_{B}}\left[\left(\mathcal{H}_{B}\right)^{2/\alpha_{B}}\right]=\exp\left(\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}\right)<\infty$. The system consisting of a homogeneous PPP with density $\lambda$, in which each UE is associated with the BS providing the strongest received signal power, is equivalent to another system consisting of a non-homogeneous PPP with density $\lambda'\left(\cdotp\right)$, in which each UE is associated with the BS providing the smallest path loss.
The density $\lambda'\left(\cdotp\right)$ is given by \begin{equation} \lambda'\left(\varepsilon\right)=\frac{d}{d\varepsilon}\Lambda\left(\left[0,\varepsilon\right]\right), \end{equation} where \begin{equation} \Lambda\left(\left[0,\varepsilon\right]\right)=\pi\lambda\varepsilon^{2}\cdot e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}. \end{equation} The transformed cellular network has exactly the same coverage probability performance for the typical receiver (BS or D2D RU) as the original network, which is proved in the Appendix and validated in this paper. \subsection{The Probability of a UE Operating in the Cellular Mode} In this subsection, we present our results on the probability that a UE operates in the cellular mode, particularly $q$ in Lemma~\ref{lem:When-operating-under}, and on the equivalent distance distributions in the cellular mode and the D2D mode, respectively. The derived results will be used in the analysis of the coverage probability later. \begin{lem} \label{lem:When-operating-under}Under the considered model, the probability that a generic mobile UE registers to the strongest BS and operates in the cellular mode is given by \begin{equation} q=1-\exp\left(-\pi\lambda_{B}\left(\frac{A_{B}P_{B}}{\beta}\right){}^{2/\alpha_{B}}\cdot e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\right), \end{equation} and the probability that the UE operates in the D2D mode is $\left(1-q\right)$. \end{lem} \begin{IEEEproof} The probability that the RSS is larger than the threshold is given by \begin{equation} P=\Pr\left[\max\left(A_{B}P_{B}\mathrm{\mathcal{H}_{B}}R^{-\alpha_{B}}\right)>\beta\right], \end{equation} \noindent where we use the standard power loss propagation model with path loss exponent $\alpha_{B}$ (for UE-to-BS links) and $\alpha_{D}$ (for UE-to-UE links).
The probability that a generic mobile UE operates in the cellular mode is \begin{eqnarray} q & = & 1-\Pr\left[\max\left(A_{B}P_{B}\mathrm{\mathcal{H}_{B}}R^{-\alpha_{B}}\right)\leq\beta\right]\nonumber \\ & = & 1-\exp\left(-\Lambda\left(\left[0,(\frac{\beta}{A_{B}P_{B}})^{-1/\alpha_{B}}\right]\right)\right)\nonumber \\ & = & 1-\exp\left(-\pi\lambda_{B}\left(\frac{A_{B}P_{B}}{\beta}\right){}^{2/\alpha_{B}}\cdot e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\right), \end{eqnarray} which concludes our proof. \end{IEEEproof} Note that Eq.~(13) explicitly accounts for the effects of channel fading, path loss, transmit power, the spatial distribution of BSs, and the RSS threshold $\beta$. From this result, one can see that the PPP $\varPhi_{u}$ can be divided into two PPPs: the PPP with intensity $q\lambda_{u}$ and the PPP with intensity $(1-q)\lambda_{u}$, which consist of cellular UEs and D2D UEs, respectively. As in~\cite{6928445}, we assume that these two PPPs are independent. \subsection{Equivalent Distance Distributions} The distance $R_{i}^{B}$ from a typical user to its associated BS (i.e., the BS providing the maximum downlink received power including lognormal fading) is an important quantity for calculating the average power. According to the equivalence theorem, with $\overline{R}_{i}=\mathcal{H}_{B}^{-1/\alpha_{B}}R_{i}^{B}$, the association in which each UE connects to the BS providing the strongest received signal power is equivalent to another one in which each UE connects to the nearest BS. In this subsection, we derive the PDF of $\overline{R}_{i}$, and then the distribution of the distance of D2D links. We can also derive the average transmission power of CUs using this equivalence theorem, and a simple validation is shown in this subsection.
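The closed form in Lemma~\ref{lem:When-operating-under} can be sanity-checked by Monte Carlo simulation. The sketch below uses normalized units ($A_{B}P_{B}/\beta=1$, $\lambda_{B}=1$) and draws the shadowing as $\mathcal{H}=e^{sZ}$ with $Z\sim N(0,1)$, so that $s$ plays the role of $\kappa\sigma_{B}$; all numbers are illustrative, not the system parameters of the simulation section:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_b, alpha_b, s = 1.0, 4.0, 1.0   # normalized BS density, pathloss exponent, shadowing
r_sim, trials = 5.0, 20000          # BSs beyond r_sim are negligible for these numbers

hits = 0
for _ in range(trials):
    n = rng.poisson(lam_b * np.pi * r_sim ** 2)
    if n == 0:
        continue
    r = r_sim * np.sqrt(rng.random(n))       # BS locations: uniform in the disk
    h = np.exp(s * rng.standard_normal(n))   # lognormal shadowing per BS
    if np.max(h * r ** (-alpha_b)) > 1.0:    # cellular mode iff max RSS > beta (= 1)
        hits += 1

q_mc = hits / trials
# Lemma 1 with A_B*P_B/beta = 1:  q = 1 - exp(-pi*lam_b*exp(2 s^2 / alpha_b^2))
q_th = 1.0 - np.exp(-np.pi * lam_b * np.exp(2.0 * s ** 2 / alpha_b ** 2))
```

The empirical fraction `q_mc` should agree with the analytical `q_th` up to Monte Carlo noise, which is the same displacement-theorem argument used in the proof above.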
\begin{lem} The probability density function (PDF) of $\overline{R}_{i}$ can be written as \begin{eqnarray} f_{\overline{R_{i}}}\left(r\right) & = & \frac{2\pi\lambda_{B}r\cdotp\exp\left(-\pi\lambda_{B}r^{2}\cdot e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}+\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}\right)}{1-\exp\left(-\pi\lambda_{B}\left(\frac{B_{B}}{\beta}\right){}^{2/\alpha_{B}}\cdot e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\right)}, \end{eqnarray} where $B_{B}=A_{B}P_{B}$ is a constant. \end{lem} \begin{IEEEproof} The PDF of $\overline{R_{i}}$ can be derived using the simple fact that the null probability of a 2-D Poisson process in an area $A$ is $\exp(-\lambda A)$, together with the fact that $\overline{R}_{i}\leq(\frac{\beta}{B_{B}})^{-1/\alpha_{B}}$, which leads to Lemma 2. \end{IEEEproof} \begin{figure}[H] \begin{centering} \includegraphics[width=8cm]{cupower.eps} \par\end{centering} \caption{Transmit power of cellular UEs versus $P_{0}$} \end{figure} As a numerical example, we plot the cellular users' transmit power in Fig.~1. The analytical result is derived from Eqs.~(10) and~(16). The analytical result matches well with the numerical result, which validates our analysis. \begin{lem} The typical D2D transmitter selects the equivalent nearest UE as a potential receiver. If the potential D2D receiver is operating in the cellular mode, the D2D TU must search for another receiver. In this situation, we approximate the receiver by the second nearest neighbor.
The approximate cumulative distribution function (CDF) of $\overline{R}_{d}$ can be written as \begin{align} \Pr\left[\overline{R}_{d}<R\right]\nonumber \\ \approx & \int_{R+t}^{\infty}\left(\int_{0}^{R}f_{R_{d}}(\overline{R}_{d})d\overline{R}_{d}\right)f_{R_{d}}(r_{1})dr_{1}\nonumber \\ + & \int_{t}^{R+t}\left(\int_{0}^{r_{1}-t}f_{R_{d}}(\overline{R}_{d})d\overline{R}_{d}\right.\\ + & \int_{r_{1}-t}^{R}\cdot(1-Pc)\cdot f_{R_{d}}(\overline{R}_{d})d\overline{R}_{d}\nonumber \\ + & \left.\int_{r_{1}-t}^{R}\cdot Pc\cdot f_{R_{d_{2}}}\left(\overline{R}_{d}\right)d\overline{R}_{d}\right)f_{R_{d}}(r_{1})dr_{1},\nonumber \end{align} where $r_{1}$ is the equivalent distance from the TU to the strongest BS, $t=\left(\frac{\beta}{B_{B}}\right){}^{-1/\alpha_{B}}$, and $Pc$ is the probability that the potential D2D receiver is a CU. \end{lem} \begin{IEEEproof} If no distinction were made between CUs and D2D UEs, the PDF of the distance between UEs would be \begin{equation} f_{R_{d}}(r)=2\pi\lambda_{tu}r\cdotp\exp\left[-\pi\lambda_{tu}r^{2}\cdot e^{\frac{2\sigma_{D}^{2}}{\alpha_{D}^{2}}}+\frac{2\sigma_{D}^{2}}{\alpha_{D}^{2}}\right]. \end{equation} According to~\cite{1512427}, the second nearest neighbor point is distributed as \begin{equation} f_{R_{d_{2}}}(r)=2\pi^{2}\lambda_{tu}^{2}r^{3}\cdotp\exp\left[-\pi\lambda_{tu}r^{2}\cdot e^{\frac{2\sigma_{D}^{2}}{\alpha_{D}^{2}}}+\frac{4\sigma_{D}^{2}}{\alpha_{D}^{2}}\right], \end{equation} and $Pc$, the probability of the potential D2D receiver operating in the cellular mode, can be calculated as \begin{equation} Pc=\arccos\left(\frac{\overline{R}_{d}^{2}+r_{1}^{2}-t^{2}}{2\overline{R}_{d}r_{1}}\right)/\pi, \end{equation} which concludes our proof. \end{IEEEproof} \subsection{Coverage Probability} Consider an arbitrary BS in the cellular mode or a UE in the D2D mode.
The SINR experienced at the receiver, which can be located at an arbitrary position, can be written as \begin{equation} \begin{array}{c} SINR=\frac{S_{signal}}{\underset{X_{c_{i}}\in\phi_{c}}{\sum}B_{i}^{B}\mathrm{\mathcal{H_{\mathrm{i}}^{\mathrm{B}}}}R_{B,i}^{-\alpha_{B}}+\underset{X_{d_{j}}\in\phi_{d}}{\sum}B_{j}^{D}\mathrm{\mathcal{H_{\mathrm{j}}^{\mathrm{D}}}}R_{D,j}^{-\alpha_{d}}+\eta_{c,d}}\end{array}, \end{equation} where $B_{i}^{B}=P_{C}^{i}\cdot\textrm{\ensuremath{A_{B}}}$ and $B^{D}=P_{D}\cdot\textrm{\ensuremath{A_{D}}}$ are constants based on the transmission powers of the $i$-th CU and the TUs, $\mathcal{H_{\mathrm{i}}^{\mathrm{B}}}$ and $\mathcal{H_{\mathrm{j}}^{\mathrm{D}}}$ are the lognormal fading of the $i$-th cellular uplink and the $j$-th D2D link, and $R_{B,i}$ and $R_{D,j}$ are the distances from the $i$-th CU and the $j$-th TU to the typical receiver, respectively. The equivalent distances are $\overline{R_{B}}_{i}=\mathcal{H}_{B,i}^{-1/\alpha_{B}}R_{B,i}$ and $\overline{R_{D}}_{j}=\mathcal{H}_{D,j}^{-1/\alpha_{D}}R_{D,j}$; $\alpha_{B}$ and $\alpha_{D}$ are the path-loss exponents for cellular links and D2D links, respectively; and $\eta_{c,d}$ is the noise power at the BS or the receiving UE. \subsubsection{Cellular mode} Let us consider a typical uplink. As the underlying PPP is stationary, without loss of generality, we assume that the typical receiver is located at the origin. This analysis indicates the spatially averaged performance of the network by Slivnyak's theorem~\cite{6042301}. Henceforth, we only need to focus on characterizing the performance of a typical link.
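The lemmas below recover the coverage probability by numerically inverting the conditional characteristic function of $1/\mathrm{SINR}$, i.e., $\Pr[Y<y]=\int_{-\infty}^{\infty}\frac{e^{i\omega y}-1}{2\pi i\omega}\mathcal{F}_{Y}(\omega)d\omega$ with $\mathcal{F}_{Y}(\omega)=\mathbb{E}[e^{-i\omega Y}]$. This inversion identity can be sanity-checked on a distribution with a known CDF, e.g., $Y\sim\mathrm{Exp}(1)$, for which $\mathcal{F}_{Y}(\omega)=1/(1+i\omega)$ (a numerical sketch, not the paper's code):

```python
import numpy as np

y = 1.0
w = np.linspace(-200.0, 200.0, 1_000_001)
w = w[np.abs(w) > 1e-9]            # integrand is finite at 0; simply skip that point
cf = 1.0 / (1.0 + 1j * w)          # E[exp(-i*w*Y)] for Y ~ Exp(1)
integrand = (np.exp(1j * w * y) - 1.0) / (2.0 * np.pi * 1j * w) * cf
cdf_num = float(np.real(np.trapz(integrand, w)))
cdf_exact = 1.0 - np.exp(-y)       # exact CDF of Exp(1) at y
```

Truncating the $\omega$-axis at $\pm 200$ leaves an error of order $10^{-3}$ here, since the integrand decays like $\omega^{-2}$; the same truncation issue arises when evaluating the lemmas below.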
\begin{lem} The complementary cumulative distribution function (CCDF) of the SINR at a typical BS (located at the origin) is given by \begin{equation} \begin{array}{l} \Pr\mathrm{\left[\textrm{SINR}>T\right]}\\ =\int_{0}^{\text{t}}\int_{\omega=-\infty}^{\infty}\left[\frac{e^{i\omega/T}-1}{2\pi i\omega}\right]\mathcal{F}_{SINR^{-1}}(\omega)d\omega f_{\overline{R}_{i}}(r)dr, \end{array} \end{equation} where $\mathcal{F}_{SINR^{-1}}(\omega)$ denotes the conditional characteristic function of $\frac{1}{SINR}$: \begin{equation} \begin{array}{l} \mathcal{F}_{SINR^{-1}}(\omega)\\ =\exp\left\{ -2\pi\lambda_{B}e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\int_{t}^{\infty}\left(1-\int_{0}^{t}\exp\left(-1\times\right.\right.\right.\\ \left.\left.\left.\frac{i\omega}{(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}r^{\alpha_{B}\varepsilon}(\tau)^{-\alpha_{B}}\right)f_{\overline{R_{i}}}(r)dr\right)\tau d\tau\right\} \\ \times\exp\left\{ -\pi(1-q)\lambda_{u}e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\int_{t}^{\infty}\left(1-\exp\left(-1\times\right.\right.\right.\\ \left.\left.\frac{i\omega A_{B}^{\varepsilon}P_{d}}{P_{0}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(L)^{-\alpha_{B}}\right)LdL\right\} \\ \times\exp\left(-\frac{i\omega\eta}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}\right).
\end{array} \end{equation} \end{lem} \begin{IEEEproof} Conditioning on the strongest BS being at a distance $R_{B,0}$ from the typical CU, with the equivalent distance $\overline{R_{B,0}}=\mathcal{H}_{B}^{-1/\alpha_{B}}R_{B,0}$ $\left(\overline{R_{B,0}}\leq\left(\frac{\beta}{B_{B}}\right){}^{-1/\alpha_{B}}\right)$, the coverage probability averaged over the plane is \begin{equation} \begin{array}{cl} p_{c}(T,\lambda) & =\Pr[SINR>T]\\ & =\Pr[\frac{1}{SINR}<\frac{1}{T}]\\ & =\int_{0}^{\text{t}}\Pr[\left.\frac{1}{SINR}<\frac{1}{T}\right|\overline{R_{B,0}}]f_{\overline{R_{i}}}(r)dr \end{array}, \end{equation} where $i=\sqrt{-1}$ is the imaginary unit. The inner probability is the conditional CDF of $\frac{1}{SINR}$, and $\mathcal{F}_{SINR^{-1}}(\omega)$ denotes the conditional characteristic function of $\frac{1}{SINR}$, which can be written as \begin{eqnarray} & & \mathcal{F}_{SINR^{-1}}(\omega)\nonumber \\ & = & \mathbb{E\mathrm{_{\phi}}}\left[\left.\exp\left(-i\omega\frac{1}{SINR}\right)\right|\overline{R_{B,0}}\right]\nonumber \\ & = & \mathbb{E}{}_{\phi_{c}}\left[\exp\left(-\frac{i\omega}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(I_{C})\right)\right]\nonumber \\ & \times & \mathbb{E}{}_{\phi_{d}}\left[\exp\left(-\frac{i\omega}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(I_{D})\right)\right]\nonumber \\ & \times & \exp\left(-\frac{i\omega\eta}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}\right), \end{eqnarray} and using the definition of the Laplace transform, from~\cite{6042301} we have{\small{} \begin{eqnarray} \mathcal{L_{\mathrm{I_{c}}}\mathrm{(\mathit{s})}} & = & \mathbb{E}{}_{\phi_{c}}\left[\exp(-\mathit{sI_{c}})\right]\nonumber \\ & = & \exp\left\{
-2\pi\lambda_{B}e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\int_{\overline{R_{B,0}}}^{\infty}\left(1-\int_{0}^{t}\exp\left(-1\times\right.\right.\right.\nonumber \\ & & \left.\left.\left.sP_{0}A_{B}^{(1-\varepsilon)}r^{\alpha_{B}\varepsilon}(\tau)^{-\alpha_{B}}\right)f_{\overline{R_{i}}}(r)dr\right)\tau d\tau\right\} , \end{eqnarray} } Plugging in $s=\frac{i\omega}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}$ gives{\small{} \begin{eqnarray} & & \mathbb{E}{}_{\phi_{c}}[\exp(-\frac{i\omega}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(I_{C}))\nonumber \\ & = & \exp\left\{ -2\pi\lambda_{B}e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\int_{t}^{\infty}\left(1-\int_{0}^{t}\exp\left(-1\times\right.\right.\right.\nonumber \\ & & \left.\left.\left.\frac{i\omega}{(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}r^{\alpha_{B}\varepsilon}(\tau)^{-\alpha_{B}}\right)f_{\overline{R_{i}}}(r)dr\right)\tau d\tau\right\} , \end{eqnarray} }Similarly, the term $\mathbb{E}{}_{\phi_{d}}\left[\exp\left(-\frac{i\omega}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(I_{D})\right)\right]$ in Eq.~(25) can be written as{\small{} \begin{eqnarray} & & \mathbb{E}{}_{\phi_{d}}[\exp(-\frac{i\omega}{\frac{P_{0}}{(\mathrm{A_{B}})^{\varepsilon-1}}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(I_{D}))]\nonumber \\ & = & \mathbb{E}{}_{\phi_{d}}[\underset{X_{d_{i}}\in\phi_{d}}{\prod}[\exp(-\frac{i\omega A_{B}^{\varepsilon}P_{d}}{P_{0}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(\overline{R_{D,i}})^{-\alpha_{B}})]]\nonumber \\ & = & \exp\left\{ -\pi(1-q)\lambda_{u}e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\int_{t}^{\infty}\left(1-\right.\right.\nonumber \\ & & \left.\left.\exp(-\frac{i\omega A_{B}^{\varepsilon}P_{d}}{P_{0}\cdot(\overline{R_{B,0}})^{\alpha_{B}(\varepsilon-1)}}(L)^{-\alpha_{B}}\right)LdL\right\} .
\end{eqnarray} }where $\lambda_{u}$ is the intensity of UEs and $R_{D,i}$ is the distance from the $i$-th TU to the typical BS. \end{IEEEproof} \subsubsection{Coverage Probability of D2D Mode} Now let us consider a typical D2D link. As the underlying PPP is stationary, without loss of generality, we assume that the typical receiver is located at the origin. \begin{lem} The CCDF of the SINR at a typical D2D UE (located at the origin) is given by \begin{equation} \begin{array}{l} \Pr[\textrm{SINR}>T]\\ =\int_{0}^{\infty}\int_{\omega=-\infty}^{\infty}\left[\frac{e^{i\omega/T}-1}{2\pi i\omega}\right]\mathcal{F}_{SINR^{-1}}(\omega)d\omega f_{\overline{R_{d}}}(r)dr, \end{array} \end{equation} where $\mathcal{F}_{SINR^{-1}}(\omega)$ denotes the conditional characteristic function of $\frac{1}{SINR}$: \begin{equation} \begin{array}{l} \mathcal{F}_{SINR^{-1}}(\omega)\\ =\exp\left\{ -2\pi\lambda_{B}e^{\frac{2\sigma_{B}^{2}}{\alpha_{B}^{2}}}\int_{0}^{\infty}\left(1-\int_{0}^{t}\exp\left(-1\times\right.\right.\right.\\ \left.\left.\left.\frac{i\omega}{P_{d}(\overline{R_{d,0}})^{-\alpha_{d}}}P_{0}A_{B}^{-\varepsilon}r^{\alpha_{B}\varepsilon}(\tau)^{-\alpha_{B}}\right)f_{\overline{R_{i}}}(r)dr\right)\tau d\tau\right\} \\ \times\exp\left\{ -\pi(1-q)\lambda_{u}e^{\frac{2\sigma_{d}^{2}}{\alpha_{d}^{2}}}\int_{\overline{R_{d,0}}}^{\infty}\left(1-\exp\left(-1\times\right.\right.\right.\\ \left.\left.\frac{i\omega}{(\overline{R_{d,0}})^{-\alpha_{d}}}(L)^{-\alpha_{B}}\right)LdL\right\} \\ \times\exp\left(-\frac{i\omega\eta_{d}}{P_{d}A_{D}(\overline{R_{d,0}})^{-\alpha_{d}}}\right) \end{array} \end{equation} and \begin{equation} f_{\overline{R_{d}}}(r)=\frac{\partial\Pr\left[\overline{R}_{d}<R\right]}{\partial R}. \end{equation} \end{lem} \begin{IEEEproof} The proof is very similar to that for the cellular mode, and hence we omit it here for brevity.
\end{IEEEproof} \section{\label{sec:SIMULATION-AND-DISCUSSION}Simulations and Discussion} \selectlanguage{english} \begin{comment} Placeholder \end{comment} \selectlanguage{australian} In this section, we use simulations to validate our analytical results on the performance of the considered D2D-enabled UL cellular network. According to the 3GPP LTE specifications\cite{3gpp}, we set the BS intensity to $\lambda_{B}=5\,\textrm{BSs/k\ensuremath{m^{2}}}$, which results in an average inter-site distance of about 500$\,$m. The UE intensity is chosen as $\lambda=300\,\textrm{UEs/k\ensuremath{m^{2}}}$~\cite{ding2015performance}. The transmit power of each BS is $P_{B}=46\,\textrm{dBm}$, the transmit power of each D2D transmitter is $10\,\textrm{dBm}$, the path-loss exponents are $\alpha_{c}=3.75$ and $\alpha_{d}=3.75$, and the path-loss constants are $A_{B}=10^{-3.29}$ and $A_{D}=10^{-5.578}$. The threshold for selecting the cellular communication mode is set to $\beta=-65\,\textrm{dBm}$. The lognormal shadowing standard deviation is $8\,\textrm{dB}$ for UE-to-BS links and $7\,\textrm{dB}$ for UE-to-UE links. The noise power is set to $-95\,\textrm{dBm}$ for a UE receiver and $-114\,\textrm{dBm}$ for a BS receiver, respectively. \subsection{The Results on the Coverage Probability} \begin{figure}[H] \begin{centering} \includegraphics[width=8cm]{coverage.eps} \par\end{centering} \caption{Coverage probability} \end{figure} In Fig.2, we plot the coverage probability for both a typical cellular UE and a typical D2D UE. From this figure, we can draw the following observations: \begin{itemize} \item Our analytical results match well with the simulation results, which validates our analysis and shows that the adopted model accurately captures the features of D2D communications. \item The coverage probability decreases as the SINR threshold increases, because a higher SINR requirement is more difficult to satisfy.
\item In the D2D mode, the analytical results are shown to be larger than the simulation results. This is because we approximate the distance from a typical D2D TU to a typical D2D RU by that from the second nearest D2D UE to such a typical D2D RU, when the nearest D2D UE to such a typical D2D RU selects the cellular mode. However, the real distance from a typical D2D TU to a typical D2D RU could be larger than the approximate distance used in our analysis in Subsection 3.2. \end{itemize} \subsection{The Results on the ASE} In Fig.3, we display the ASE results with $\gamma_{0}=0\,\textrm{dB}$. Since $A^{\textrm{ASE}}(\lambda_{B},\lambda_{u},\gamma_{0})$ is a function of the coverage probability, which has been validated in Fig.2, we only show analytical results in Fig.3. \begin{figure} \centering{}\includegraphics[width=8cm]{finalase.eps}\caption{ASE with different UE intensities} \end{figure} From Fig.3, we can draw the following observations: \begin{itemize} \item The total ASE increases with the UE intensity. This is because the spectral reuse factor increases with the number of UEs in the network. \item When the UE intensity is around $\lambda=100\,\textrm{UEs/k\ensuremath{m^{2}}}$, the D2D links make a comparable contribution to the total ASE as the cellular links. This is because around 1/3 of the UEs operate in the D2D mode and, based on the coverage probability of the D2D tier, around 1/3 of the D2D UEs are given an acceptable service ($\text{SINR}>0\,\textrm{dB}$), and hence the two tiers make roughly equal contributions to the ASE performance. \item When the network is dense enough, i.e., $\lambda_{u}\in\left[50,250\right]\textrm{UEs/k\ensuremath{m^{2}}}$, which is the practical range of intensity for the existing 4G network and the future 5G network\cite{7126919}, the total ASE performance increases quickly, while the ASE of the cellular network stays above $5\,\textrm{bps/Hz/k\ensuremath{m^{2}}}$.
\end{itemize} \subsection{The Performance Impact of $\beta$ on the ASE} In this subsection, we investigate the performance impact of $\beta$ on the ASE, which is shown in Fig. 4. From this figure, we can see that there is a tradeoff in the coverage probability of the cellular mode. This means that, with a proper choice of $\beta$, enabling D2D communications can not only improve the ASE of the network, but also improve the coverage for cellular UEs. \begin{figure}[H] \begin{centering} \includegraphics[width=8cm]{asewithbeta.eps} \par\end{centering} \caption{ASE with different values of $\beta$} \end{figure} This is because the cell-edge UEs in the conventional UL cellular network will be offloaded to the D2D mode to enjoy a better coverage performance. \section{\label{sec:Conclusion}Conclusion} In this paper, we provided a stochastic geometry based theoretical framework to analyze the performance of a D2D underlaid uplink cellular network. In particular, we considered lognormal shadowing fading, a practical D2D mode selection criterion based on the maximum DL received power, and a D2D power control mechanism. Our results showed that enabling D2D communications in cellular networks can improve the total ASE, while having a minor performance impact on the cellular network. As future work, a more practical path loss model incorporating both line-of-sight and non-line-of-sight transmissions will be considered, and we will find the optimal parameters for the network that can achieve the maximum total ASE. \bibliographystyle{unsrt}
\section{Introduction}\label{S1} Fourier inversion plays a crucial role in signal processing and communications: it tells us how to convert a continuous signal into trigonometric components, which can then be processed digitally or coded on a computer \cite{papoulis1960fourier}. The inversion theorem (also named the Fourier integral theorem) states that the input signal or image can be retrieved from its frequency-domain function via the inverse Fourier transform. In mathematics \cite{stein1971introduction,papoulis1960fourier,PWJ2000}, there are two common sets of conditions under which the Fourier inversion theorem holds for integrable functions. \begin{itemize} \item For a real-valued integrable function $f$, if $f$ is a function of bounded variation in a neighborhood of $x_{0},$ then \begin{eqnarray*} \frac{f(x_{0}+0)+f(x_{0}-0)}{2}= \lim_{M \to \infty}\frac{1}{2\pi}\int_{-M}^{M}\widehat{f}(u)e^{\i ux_{0}}du . \end{eqnarray*} \item If both $f$ and $\widehat{f}$ are integrable, then $f$ can be recovered from its Fourier transform $\widehat{f},$ \begin{eqnarray*} f(x)=\frac{1}{2\pi}\int_{\mathbb{R}}\widehat{f}(u)e^{\i ux}du, \end{eqnarray*} for almost every $x$, where $\widehat{f}(u)=\int_{\mathbb{R}}f(x)e^{-\i ux}dx.$ \end{itemize} There have been numerous proposals in the literature to generalize the classical Fourier transform (FT) by making use of the Hamiltonian quaternion algebra \cite{ouyang2015color, yang2015novel}, namely quaternion Fourier transforms (QFTs). Quaternion algebra \cite{hamilton1866elements} is thought to generalize the classical theory of holomorphic functions of one complex variable to the multidimensional situation, and to provide the foundations for a refinement of classical harmonic analysis. In the meantime, quaternion algebra has become a well-established mathematical discipline and an active area of research with numerous connections to other areas of both pure and applied mathematics.
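As a quick numerical sanity check of the second statement (an illustration added here in Python; it is not part of the original development), one can recover the Gaussian $f(x)=e^{-x^{2}/2}$, whose transform under the above convention is $\widehat{f}(u)=\sqrt{2\pi}\,e^{-u^{2}/2}$, by direct quadrature of the inversion integral:

```python
import numpy as np

# Recover f(x) = exp(-x^2/2) from fhat(u) = sqrt(2*pi)*exp(-u^2/2)
# via f(x0) = (1/(2*pi)) * integral of fhat(u) * exp(i*u*x0) du.
def trapezoid(vals, h):
    # simple trapezoid rule for uniformly spaced samples
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * h

def inverse_ft_at(x0, cutoff=40.0, n=80001):
    u = np.linspace(-cutoff, cutoff, n)
    fhat = np.sqrt(2.0 * np.pi) * np.exp(-u ** 2 / 2.0)
    vals = fhat * np.exp(1j * u * x0)
    return (trapezoid(vals, u[1] - u[0]) / (2.0 * np.pi)).real

x0 = 0.7
print(abs(inverse_ft_at(x0) - np.exp(-x0 ** 2 / 2.0)))  # essentially zero
```

Since the Gaussian decays rapidly, truncating the integral at a finite cutoff introduces only a negligible error here.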
In particular, there is a well developed theory of quaternion analysis with many applications to Fourier analysis and partial differential equations theory, as well as to other fields of physics and engineering \cite{took2009quaternion,took2011augmented,gou2015three}. The QFTs play a vital role in the representation of multidimensional (or quaternionic) signals. They transform 2D real (or quaternion-valued) signals into quaternion-valued frequency-domain signals. The four components of the QFTs separate four symmetry cases of real signals, instead of only two as in the complex Fourier transforms \cite{yang2015novel}. The QFTs have found many applications in color image processing, especially in color-sensitive smoothing, edge detection and data compression \cite{evans2000hypercomplex, evans2000colour,pei1999color,sangwine1996fourier,ell2007hypercomplex,ell2014quaternion}. In \cite{chen2015pitt}, the authors studied the inversion theorem of QFTs for square integrable functions, where the convergence of the quaternion Fourier integral is in the mean-square norm. To the best of our knowledge, there has been no previous work systematically studying the conditions of the quaternion Fourier inversion theorem for integrable functions. Therefore, it is worthwhile and interesting to investigate them. Over the last few years, there has been a growing interest in the classical linear canonical transform (LCT) in engineering, computer sciences, physics and applied mathematics \cite{kou2012windowed, healy2016linear}. The LCT is a linear integral transform with four parameters, and it can be considered as a generalization of the fractional Fourier transform (FRFT) and the FT. Compared to the FRFT and the FT, the LCT has been shown to be more flexible for signal processing. Therefore, it is desirable to extend the LCT to higher dimensions and to study its properties.
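The reduction of the LCT to the FT can be made concrete with a small numerical sketch. The following Python snippet assumes the standard 1D LCT kernel $K_{A}(s,u)=\frac{1}{\sqrt{2\pi\i b}}e^{\i(\frac{a}{2b}s^{2}-\frac{1}{b}us+\frac{d}{2b}u^{2})}$, the 1D analogue of the quaternion kernels recalled in Section \ref{sec2}; it is an illustration only, checking that the matrix $A=\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$ turns the LCT into the FT divided by $\sqrt{2\pi\i}$:

```python
import numpy as np

# 1D LCT at a single frequency u, with the assumed standard kernel
# K_A(s,u) = exp(i*(a*s^2/(2b) - u*s/b + d*u^2/(2b))) / sqrt(2*pi*i*b).
def lct_at(f_vals, s, u, a, b, d):
    kern = np.exp(1j * (a * s**2 / (2*b) - u * s / b + d * u**2 / (2*b)))
    kern = kern / np.sqrt(2j * np.pi * b)
    vals = kern * f_vals
    h = s[1] - s[0]
    return (vals.sum() - 0.5 * (vals[0] + vals[-1])) * h  # trapezoid rule

s = np.linspace(-12.0, 12.0, 48001)
f = np.exp(-s**2 / 2.0)                        # Gaussian test signal
u0 = 1.0
val = lct_at(f, s, u0, a=0.0, b=1.0, d=0.0)    # A = [[0, 1], [-1, 0]]
ft = np.sqrt(2.0 * np.pi) * np.exp(-u0**2 / 2.0)  # classical FT of the Gaussian
print(abs(val - ft / np.sqrt(2j * np.pi)))     # ~ 0: LCT = FT / sqrt(2*pi*i)
```

With the rotation matrices of the FRFT one would analogously recover the fractional Fourier transform up to a fixed phase factor.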
To this end, quaternionic analysis offers possibilities of generalizing the underlying function theory from 2D to 4D, with the advantage of meeting exactly the same goals. See Refs. \cite{ell2014quaternion, hitzer2007quaternion} for a more complete account of this subject and related topics. A higher-dimensional extension of the LCT within the Clifford analysis setting was first studied in \cite{kou2013generalized}. That paper generalizes the theory of prolate spheroidal wave functions (also called Slepian functions) and analyzes energy preservation problems. Quaternion linear canonical transforms (QLCTs) \cite{kou2013uncertainty} are a family of integral transforms which generalize the QFT and the quaternion fractional Fourier transform (QFRFT) \cite{guanlei2008fractional, guo2011reduced, chen2015pitt, kou2014asymptotic, yang2014uncertainty}. Some important properties of QLCTs, such as the convolution and Parseval theorems, have been studied in \cite{guanlei2008fractional, guo2011reduced, chen2015pitt, hitzer2007quaternion, kou2013uncertainty, kou2014asymptotic, pei2001efficient, yang2014uncertainty, bahri2013convolution, de2015connecting}. Some studies \cite{hitzer2007quaternion, kou2014asymptotic, pei2001efficient} briefly introduced the inversion theorems for the QLCT and the QFT, without a clear proof of their validity. In \cite{guanlei2008fractional}, the authors proved the invertibility of the QFRFT without clearly stating the conditions for its existence. Motivated by the above studies, the main purpose of this paper is to solve the following two problems. \begin{itemize} \item {\bf Problem A:} If a 2D quaternion-valued integrable function $f$ is a quaternion bounded variation function (please refer to Definition \ref{def312}), can $f$ be recovered from its QFTs and/or QLCTs? \item {\bf Problem B:} If a 2D quaternion-valued function $f$ and its QFTs and/or QLCTs are both integrable, can $f$ be recovered from its QFTs and/or QLCTs?
\end{itemize} We notice that the solutions of these two problems have not been carried out in the literature. The outline of the paper is as follows. In order to make the paper self-contained, in Section \ref{sec2} we collect some basic concepts of quaternionic analysis, QFTs, QLCTs and 2D real functions of bounded variation, to be used throughout the paper. We prove the inversion theorems of QFTs and QLCTs under different conditions for integrable functions in Section \ref{sec3}. Some conclusions are drawn, and future works are proposed, in Section \ref{sec4}. \section{Preliminary}\label{sec2} The present section collects some basic facts about quaternions, QFTs, QLCTs, and 2D bounded variation functions, which will be needed throughout this paper. \subsection{The QFTs and QLCTs} \mbox{}\indent Let $\mathbb{H}$ denote the {\it Hamiltonian skew field of quaternions}: \begin{eqnarray} \mathbb{H} := \{q=q_0+\i q_1+ \j q_2+\k q_3 \, | \, q_0, q_1, q_2, q_3\in\mathbb{R}\} ,\label{hu1} \end{eqnarray} which is an associative non-commutative four-dimensional algebra. The basis elements $\{\i, \j, \k \}$ obey Hamilton's multiplication rules: $\i^2=\j^2=\k^2=-1$, $\i \j =-\j \i =\k$, $\j \k =-\k \j =\i$ and $\k \i=-\i \k =\j$. In this way the quaternionic algebra arises as a natural extension of the complex field $\mathbb{C}$. In this paper, the complex field $\mathbb{C}$ is regarded as the 2D plane spanned by $ \{ 1, \i\}$. The {\it quaternion conjugate} of a quaternion $q$ is defined by \begin{eqnarray*} \overline{q}=q_0- \i q_1- \j q_2- \k q_3,\quad q_0, q_1, q_2, q_3\in\mathbb{R}.\label{hu2} \end{eqnarray*} The modulus of $q\in\mathbb{H}$ is defined as \begin{eqnarray*} |q| = \sqrt{q\overline{q}} = \sqrt{\overline{q}q} = \sqrt{q_0^2+q_1^2+q_2^2+q_3^2}. \end{eqnarray*} It is not difficult to see that \begin{eqnarray*} \overline{qp}=\overline{p}\, \overline{q},\quad |q|=|\overline{q}|,\quad |qp|=|q||p|,\quad \forall \, q,p \in\mathbb{H}.
\end{eqnarray*} By Equation (\ref{hu1}), a quaternion-valued function $f:\mathbb{R}^2\to\mathbb{H}$ can be expressed in the following form: \begin{eqnarray*} f(s,t)=f_0(s,t)+\i f_1(s,t)+\j f_2(s,t)+ \k f_3(s,t),\label{liu209} \end{eqnarray*} where $f_{n}, n=0,1,2,3,$ are real-valued. Let $L^{p}(\mathbb{R}^2, \mathbb{H})$ (integer $p \geq 1$) be the right-linear quaternion-valued Banach space on $\mathbb{R}^2$, whose quaternion module is defined as follows: \begin{eqnarray*} L^{p}(\mathbb{R}^2, \mathbb{H}):=\left\{ f \,| \, f:\mathbb{R}^2 \to \mathbb{H}, \Vert f \Vert_{p}:=\left (\int_{\mathbb{R}^2} |f(s,t)|^p dsdt \right )^{\frac{1}{p}} <\infty \right\}. \end{eqnarray*} Due to the non-commutativity of quaternion multiplication, there are different types of QFTs \cite{hitzer2007quaternion} and QLCTs \cite{kou2013uncertainty}, respectively. The $\textbf{two-sided}$ QFT: \begin{eqnarray}\label{twosidedqft} \mathcal{F}_{T}(u, v):=\int_{\mathbb{R}^2} e^{-{ \i}us}f(s,t)e^{{- \j}vt}dsdt.\label{hu24} \end{eqnarray} The $\textbf{right-sided}$ QFT: \begin{eqnarray*} \mathcal{F}_{R}(u, v):=\int_{\mathbb{R}^2} f(s,t)e^{-{ \i}us}e^{-{ \j}vt}dsdt.\label{hu25} \end{eqnarray*} The $\textbf{left-sided}$ QFT: \begin{eqnarray*} \mathcal{F}_{L}(u, v):=\int_{\mathbb{R}^2} e^{-{ \i}us}e^{-{ \j}vt}f(s,t)dsdt.\label{h25} \end{eqnarray*} \par The QLCTs are generalizations of the QFTs. Let $A_{i}=\left( \begin{array}{cc} a_{i} &b_{i} \\ c_{i} &d_{i} \\ \end{array} \right) \in \mathbb{R}^{2\times 2} $ be real parameter matrices with unit determinant, i.e. $\det(A_{i})=a_{i}d_{i}-b_{i}c_{i}=1, $ for $i=1,2$. The $\textbf{two-sided}$ QLCT is defined by
\begin{eqnarray}\label{hu26} \mathcal{L}_{T}^{ \i,\j}(f)(u,v):= \left\{ \begin{array}{llll} \int_{\mathbb{R}^2} K_{A_{1}}^{ \i}(s,u)f(s,t)K_{A_{2}}^{ \j}(t,v)dsdt & b_{1}, b_{2}\neq 0,\\[1.5ex] \int_{\mathbb{R}} \sqrt{d_{1}} e^{\i\frac{c_{1}d_{1}u^{2}}{2}}f(d_{1}u,t)K_{A_{2}}^{\j}(t,v)dt & b_{1}=0, b_{2}\neq 0, \\[1.5ex] \int_{\mathbb{R}} K_{A_{1}}^{\i}(s,u)f(s,d_{2}v)\sqrt{d_{2}} e^{\j\frac{c_{2}d_{2}v^{2}}{2}}ds & b_{1}\neq 0, b_{2} =0, \\[1.5ex] \sqrt{d_{1}} e^{\i\frac{c_{1}d_{1}u^{2}}{2}}f(d_{1}u,d_{2}v)\sqrt{d_{2}} e^{\j\frac{c_{2}d_{2}v^{2}}{2}} & b_{1}= 0, b_{2} =0, \end{array}\right. \end{eqnarray} where the kernels $K_{A_{1}}^{ \i}$ and $K_{A_{2}}^{ \j}$ of the QLCT are given by \begin{eqnarray*} K_{A_{1}}^{\i}(s,u):=\frac{1}{ \sqrt{\i2\pi b_{1}}}e^{\i(\frac{a_{1}}{2b_{1}}s^{2}-\frac{1}{b_{1}}us+\frac{d_{1}}{2b_{1}}u^{2} )} \quad \mbox{and} \quad K_{A_{2}}^{\j}(t,v):=\frac{1}{\sqrt{\j2\pi b_{2}}}e^{\j(\frac{a_{2}}{2b_{2}}t^{2}-\frac{1}{b_{2}}tv+\frac{d_{2}}{2b_{2}}v^{2} )}, \end{eqnarray*} respectively. Note that when $b_i=0$ $(i=1,2)$, the QLCT of a function is essentially a chirp multiplication and is of no particular interest for our purposes. Hence, without loss of generality, we set $b_i >0$ $(i=1,2)$ throughout the paper. Let $\mathcal{L}_{R}^{ \i,\j}(f)(u,v)$ and $\mathcal{L}_{L}^{ \i,\j}(f)(u,v)$ be the $ \textbf{right-sided}$ and $\textbf{left-sided}$ QLCTs, respectively. They are defined by \begin{eqnarray*} \mathcal{L}_{R}^{ \i,\j}(f)(u,v):= \int_{\mathbb{R}^2}f(s,t)K_{A_{1}}^{ \i}(s,u)K_{A_{2}}^{ \j}(t,v)dsdt \end{eqnarray*} and \begin{eqnarray*} \mathcal{L}_{L}^{ \i,\j}(f)(u,v):= \int_{\mathbb{R}^2}K_{A_{1}}^{ \i}(s,u)K_{A_{2}}^{ \j}(t,v)f(s,t)dsdt, \end{eqnarray*} respectively. It is significant to note that the QLCT reduces to its special cases when we take particular matrices $A_{n}, n=1,2 $ \cite{guanlei2008fractional,pei2001efficient}.
For example, when $A_{1}=A_{2}=\left( \begin{array}{cc} 0 &1 \\ -1 &0 \\ \end{array} \right)$, the QLCT reduces to the QFT multiplied by $ \sqrt{\frac{-\i}{2\pi}}$ and $\sqrt{\frac{-\j}{2\pi}}$, where $ \sqrt{-\i}=e^{-\i\pi/4}$ and $\sqrt{-\j}=e^{-\j\pi/4}$. If $ A_{1}=\left( \begin{array}{cc} \cos \alpha & \sin \alpha \\ -\sin \alpha & \cos\alpha \\ \end{array} \right) , A_{2}=\left( \begin{array}{cc} \cos \beta & \sin \beta \\ -\sin \beta & \cos\beta \\ \end{array} \right)$, the QLCT becomes the QFRFT multiplied by the fixed phase factors $ e^{-\i\alpha/2}, e^{-\j\beta/2} $. \par \subsection{2D Real Bounded Variation Functions Revisited} In 1881, Jordan \cite{jordan1881serie} introduced 1D bounded variation functions and applied them to Fourier theory. Subsequently, 1D bounded variation functions were generalized to 2D bounded variation functions by many authors, for instance \cite{clarkson1933definitions, adams1934properties}. In the following, we adopt the definition by Hardy \cite{hardy1906double}; it is a natural generalization of 1D bounded variation functions. Many important properties are analogous to the 1D case, such as the well-known Dirichlet-Jordan theorem \cite{zygmund2002trigonometric}. It states that the Fourier series of a bounded variation function $f$ converges at every point $x \in [0, 2\pi]$ to the value $ \frac{f(x+0)+f(x-0)}{2}$. Hardy \cite{hardy1906double} generalized the theorem to the double Fourier series case. In this subsection, we review some properties of 2D bounded variation functions $(BVFs)$, which are required for the subsequent derivations. For a more detailed presentation, please refer to \cite{adams1934properties,clarkson1933definitions}. The 2D real function $f$ is assumed to be defined in a rectangle $\mathbb{E}$ $(a_{1} \leq s \leq b_{1}, a_{2}\leq t \leq b_{2})$.
By the term net we shall, unless otherwise specified, mean a set of lines parallel to the axes: $ a_{1}=s_{0}<s_{1}< \cdots <s_{m}=b_{1}, $ and $ a_{2}=t_{0}<t_{1}< \cdots <t_{n}=b_{2}. $ Each of the smaller rectangles into which $\mathbb{E}$ is divided by a net will be called a cell. We employ the notations $$ \triangle _{10}f(s_{i},t_{j}):=f(s_{i+1},t_{j})-f(s_{i},t_{j}),$$ $$ \triangle _{01}f(s_{i},t_{j}):=f(s_{i},t_{j+1})-f(s_{i},t_{j}),$$ $$ \triangle _{11}f(s_{i},t_{j})=\triangle _{10} ( \triangle _{01} f)(s_{i},t_{j}): =f(s_{i+1},t_{j+1})-f(s_{i+1},t_{j})-f(s_{i},t_{j+1})+f(s_{i},t_{j}).$$ \begin{defn}\label{De31} \cite{clarkson1933definitions} A function $f: [a_{1}, b_{1}]\times [a_{2}, b_{2}] \rightarrow \mathbb{R} $ is said to be a bounded variation function $(BVF)$ if it satisfies the following conditions: \begin{itemize} \item the sum $\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big |\triangle _{11}f(s_{i},t_{j})\big |$ is bounded for all nets, \item $f(\tilde{s},t)$ considered as a function of $t$ alone in the interval $[a_{2}, b_{2}]$ is of 1D bounded variation for at least one $ \tilde{s} $ in $[a_{1}, b_{1}]$, \item $f(s, \tilde{t})$ considered as a function of $s$ alone in the interval $[a_{1}, b_{1}]$ is of 1D bounded variation for at least one $\tilde{t} $ in $[a_{2}, b_{2}]$. \end{itemize} \end{defn} \begin{remark}\label{re32} From Definition $\ref{De31}$, it follows that if $f$ and $g$ are both $BVFs$, then $f \pm g$ and $f g$ are also BVFs. \end{remark} The well-known Jordan decomposition theorem states that a 1D real function $f$ is of bounded variation if and only if it can be written as a difference of two monotone increasing functions. The 2D $BVF$ in Definition $\ref{De31}$ admits an analogous result. Before proceeding, we need the following definition of a 2D monotone increasing function.
\begin{defn}\label{de33}\cite{clarkson1933definitions} The 2D real function $f$ which satisfies the following conditions everywhere in its domain is called a quasi-monotone function: \begin{itemize} \item $ \triangle _{11}{_{(s_{0},t_{0})}^{(s_{1},t_{1})}}f(s,t):=f(s_{1},t_{1})-f(s_{1},t_{0})-f(s_{0},t_{1})+f(s_{0},t_{0})\geq 0, \mbox{for } s_{1} \geq s_{0} , t_{1} \geq t_{0} $, \item $f(s,t)$ is monotone non-decreasing with respect to $s$, for every constant value of $t$, \item $f(s,t)$ is monotone non-decreasing with respect to $t$, for every constant value of $s$. \end{itemize} \end{defn} \begin{lemma}\cite{hobson1907theory}\label{le38} If $f(s,t)$ is a quasi-monotone function, then the four double limits $$ f(s- 0,t-0 )=\lim_{\substack{h \to 0^{+}\\ k \to 0^{+}}}f(s- h,t-k ), \quad f(s+ 0,t-0 )=\lim_{\substack{h \to 0^{+}\\ k \to 0^{+}}} f(s+ h,t-k ),$$ $$f(s- 0,t+0 )=\lim_{\substack{h \to 0^{+}\\ k \to 0^{+}}}f(s- h,t+k ), \quad f(s+0,t+0 )=\lim_{\substack{h \to 0^{+}\\ k \to 0^{+}}}f(s+ h,t+k ), $$ all exist and are finite. \end{lemma} \begin{example}\label{ex34} $f(s,t)=e^{s}e^{t}$ is a quasi-monotone function. \end{example} \par Quasi-monotone functions have many good properties; please refer to \cite{hobson1907theory} for more details.
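Example \ref{ex34} can be verified directly: for $f(s,t)=e^{s}e^{t}$ the mixed difference factorizes as $f(s_{1},t_{1})-f(s_{1},t_{0})-f(s_{0},t_{1})+f(s_{0},t_{0})=(e^{s_{1}}-e^{s_{0}})(e^{t_{1}}-e^{t_{0}})\geq 0$ for $s_{1}\geq s_{0}$, $t_{1}\geq t_{0}$. The following short Python spot check of the three conditions of Definition \ref{de33} on random rectangles is an illustration only:

```python
import numpy as np

# Spot-check that f(s,t) = e^s * e^t satisfies the three quasi-monotone
# conditions on random rectangles [s0,s1] x [t0,t1] with s1 >= s0, t1 >= t0.
f = lambda s, t: np.exp(s) * np.exp(t)
rng = np.random.default_rng(1)
for _ in range(1000):
    s0, t0 = rng.uniform(-3.0, 3.0, 2)
    s1, t1 = s0 + rng.uniform(0.0, 2.0), t0 + rng.uniform(0.0, 2.0)
    d11 = f(s1, t1) - f(s1, t0) - f(s0, t1) + f(s0, t0)
    assert d11 >= -1e-9               # mixed difference is non-negative
    assert f(s1, t0) >= f(s0, t0)     # non-decreasing in s for fixed t
    assert f(s0, t1) >= f(s0, t0)     # non-decreasing in t for fixed s
print("quasi-monotone checks passed")
```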
From Definition $\ref{de33}$, it follows that if $f(s,t)$ is a bounded quasi-monotone function, then $f(s,t)$ is a $BVF.$ \begin{lemma} \label{le36}\cite{adams1934properties} A necessary and sufficient condition that $f(s,t)$ is a $BVF$ is that it can be represented as the difference between two bounded functions, $f_{1}(s,t)$ and $f_{2}(s,t)$, satisfying the inequalities $$\triangle _{11}f_{i}(s,t)\geq 0,\triangle _{01}f_{i}(s,t)\geq 0,\triangle _{10}f_{i}(s,t)\geq 0, \quad i=1,2.$$ \end{lemma} \begin{remark}\label{re37} From Definition $\ref{de33}$ and Lemma $\ref{le36}$, we can generalize the Jordan decomposition theorem for BVFs from one dimension to two dimensions: $f(s,t)$ is a $BVF$ if and only if it can be represented as the difference between two bounded quasi-monotone functions $f_{1}$ and $f_{2}$, i.e., $f= f_{1}-f_{2}$. \end{remark} The mean value theorem plays an important role in 1D Fourier theory. For the higher-dimensional case, we present a 2D extension as follows. \begin{lemma}\cite{hobson1907theory}\label{le39} Let $\mathbb{E}$ be the plane rectangle $ [a_{1}, b_{1}]\times [a_{2}, b_{2}]$, let the non-negative function $f(s,t)$ be quasi-monotone in $\mathbb{E}$, and let $g(s,t)$ be summable in $\mathbb{E}$. Then \begin{eqnarray*} \int_{a_{1}}^{b_{1}}\int_{ a_{2}}^{ b_{2}}f(s,t)g(s,t)dsdt=f(b_{1}-0 , b_{2}-0)\int_{\xi_{1} }^{b_{1}}\int_{ \xi_{2}}^{ b_{2}}g(s,t)dsdt, \end{eqnarray*} for some point $(\xi_{1} ,\xi_{2}) $ in $ \mathbb{E} $. \end{lemma} \setcounter{equation}{0} \section{Main results}\label{sec3} In this section, we first focus on the inversion theorem of the 2D Fourier transform $(FT)$ by using the properties of BVFs, and then we derive the inversion theorems for the 2D QFTs and QLCTs.
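The arguments of this section repeatedly use the Dirichlet integral $\int_{0}^{\infty}\frac{\sin (Ms)}{\pi s}ds=\frac{1}{2}$ (hence the double sinc integral over the first quadrant equals $\frac{1}{4}$) and the uniform boundedness of the partial sine integrals (Lemma \ref{le310} below). Both facts can be spot-checked numerically; the following Python sketch (an illustration only) evaluates $\mathrm{Si}(x)=\int_{0}^{x}\frac{\sin t}{t}dt$ by the trapezoid rule:

```python
import numpy as np

# Si(x) = integral of sin(t)/t over [0, x], via the trapezoid rule
# (the integrand is extended by its limit value 1 at t = 0).
def si(x, n=200001):
    t = np.linspace(0.0, x, n)
    g = np.ones_like(t)
    g[1:] = np.sin(t[1:]) / t[1:]
    return (g.sum() - 0.5 * (g[0] + g[-1])) * (t[1] - t[0])

# Dirichlet integral: int_0^inf sin(s)/(pi*s) ds = 1/2, so the double
# sinc integral over the first quadrant equals 1/4.
print(si(2000.0) / np.pi)   # ~ 0.5
# Largest attainable value of |int_a^b sin(t)/t dt| is 2*Si(pi),
# comfortably below the crude constant 6 of the lemma.
print(2.0 * si(np.pi) < 6.0)
```

Numerically $2\,\mathrm{Si}(\pi)\approx 3.70$, so the constant $6$ in Lemma \ref{le310} is a convenient but non-sharp bound.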
In the following, the cross-neighborhood of $ (x_{0},y_{0})$ \cite{hobson1907theory} is defined by the set of points \begin{eqnarray*} |s-x_{0}|\leq \varepsilon_{1} \quad \mbox{or/and} \quad |t-y_{0}|\leq \varepsilon_{2}, \quad (\varepsilon_{i}>0, i=1,2 ). \end{eqnarray*} \begin{defn} $f(s,t) $ is said to belong to the $ \bf{L}$ class $( \bf{LC})$ in the cross-neighborhood of $(x_{0},y_{0})$ if $f$ satisfies: $$ \int_{\varepsilon_{2}}^{\infty}\int_{0}^{\varepsilon_{1}} \left | \frac{ \tilde{f}(s,t)-\tilde{f}(a,t)}{s} \right |dsdt < \infty $$ and $$ \int_{\varepsilon_{1}}^{\infty}\int_{0}^{\varepsilon_{2}} \left | \frac{ \tilde{f}(s,t)-\tilde{f}(s,b)}{t} \right |dtds < \infty, $$ where $\tilde{f}(s,t)=f(x_{0}-s, y_{0}-t)+ f(x_{0}+s, y_{0}+t)+f(x_{0}-s, y_{0}+t)+f(x_{0}+s, y_{0}-t)$, $\varepsilon_{n}>0$, $n=1,2$, and $a\in\mathbb{ A}:=\{ s \in \mathbb{R}| \int_{\mathbb{R}}| \tilde{f}(s,t)|dt < \infty \}, b\in \mathbb{B}:=\{ t \in \mathbb{R}| \int_{\mathbb{R}}| \tilde{f}(s,t)|ds < \infty \}.$ \end{defn} \begin{remark} If $\int_{\mathbb{R}^{2}} |\tilde{f}(s,t)|dsdt < \infty$, the Fubini theorem implies that $ \int_{\mathbb{R}}| \tilde{f}(s,t)|dt < \infty $ holds for almost every $s$, and $ \int_{\mathbb{R}}| \tilde{f}(s,t)|ds < \infty $ holds for almost every $t$; then $\mathbb{A}$ and $\mathbb{B}$ are the real line except for a set of measure zero. \begin{example} If $f(s,t)=f_{1}(s)f_{2}(t)\in L^{1}(\mathbb{R}^{2}, \mathbb{R} )$, and $\frac{\partial f(s,t)}{\partial s}|_{s=x_{0}} $ and $\frac{\partial f(s,t)}{\partial t}|_{t=y_{0}} $ exist, then $f(s,t) \in \bf{LC} $. \end{example} \end{remark} \begin{remark} In this paper, we use script capitals and plain capitals to denote the QFT and the 2D FT, respectively.
\end{remark} \subsection{Problem A for QFTs} In this subsection, first, by using the good properties of BVFs, we derive the inversion theorem for the 2D FT; the argument is similar to the proof of the convergence of two-dimensional Fourier series \cite{hardy1906double}. Second, the bounded variation function is defined for quaternion-valued functions, and the inversion theorem of the two-sided QFT is proved. Finally, we generalize our idea to all types of QFTs. \begin{lemma}\cite{PWJ2000}\label{le310} For any given real numbers $ a, b,$ we have $$ \left |\int_{a}^{b}\frac {\sin t}{ t}dt \right | \leq 6.$$ \end{lemma} We first present the 2D Fourier inversion theorem for real functions. \begin{theorem}($\textbf{ 2D Fourier Inversion Theorem}$)\label{th311} Suppose that $f \in L^{1}(\mathbb{R}^2, \mathbb{R})$, $f$ is a BVF and belongs to $\bf{LC}$ in a cross-neighborhood of $(x_{0},y_{0})$; then \begin{eqnarray}\label{h825} \eta(x_{0},y_{0})= \lim_{\substack{N \to \infty\\ M \to \infty}}\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}F(u,v)e^{\i ux_{0}}e^{\i vy_{0}}dudv, \end{eqnarray} where $$\eta(x_{0},y_{0}):=\frac {f(x_{0}+0,y_{0}+0)+f(x_{0}+0,y_{0}-0)+f(x_{0}-0,y_{0}+0)+f(x_{0}-0,y_{0}-0)}{ 4},$$ and the 2D FT is defined by \begin{eqnarray}\label{e9} F(u,v):=\int_{\mathbb{R}^{2}}f(s,t)e^{-\i us}e^{-\i vt}dsdt. \end{eqnarray} If $f$ is continuous at $(x_{0},y_{0})$, then $\eta(x_{0},y_{0})=f(x_{0},y_{0})$. \end{theorem} \begin{proof} Since $f \in L^{1}(\mathbb{R}^2, \mathbb{R})$, $F(u,v)=\int_{\mathbb{R}^{2}}f(s,t)e^{-\i us}e^{-\i vt}dsdt $ is well defined.
\\ Set $$I(x_{0},y_{0},N,M):=\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}F(u,v)e^{\i ux_{0}}e^{\i vy_{0}}dudv;$$ inserting the definition of $F(u,v)$, we have \begin{eqnarray*} I(x_{0},y_{0},N,M)=&&\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(s,t)e^{-\i us}e^{-\i vt}e^{\i ux_{0}}e^{\i vy_{0}}dsdtdudv\\ =&&\frac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(s,t)\int_{-N}^{N}\int_{-M}^{M}e^{\i u(x_{0}-s)}e^{\i v(y_{0}-t)}dudvdsdt.\\ \end{eqnarray*} Switching the order of integration is permitted by the Fubini theorem, since $ f \in L^{1}(\mathbb{R}^2, \mathbb{R})$.\\ \begin{eqnarray*} I(x_{0},y_{0},N,M)=&&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(s,t)\frac{\sin [M(x_{0}-s)]}{\pi(x_{0}-s)}\frac{\sin [N(y_{0}-t)]}{\pi(y_{0}-t)}dsdt\\ =&&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt\\ =&&\int_{0}^{\infty}\int_{0}^{\infty}\big(f(x_{0}-s,y_{0}-t)+f(x_{0}+s,y_{0}-t)\\ &&+f(x_{0}-s,y_{0}+t)+f(x_{0}+s,y_{0}+t)\big)\frac{\sin Ms}{\pi s}\frac{\sin Nt}{\pi t}dsdt. \end{eqnarray*} Since \begin{eqnarray*} \int_{0}^{\infty}\int_{0}^{\infty}\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt=\frac{1}{4}, \end{eqnarray*} then \begin{eqnarray*} I(x_{0},y_{0},N,M) - \eta(x_{0},y_{0}) =&& \int_{0}^{\infty}\int_{0}^{\infty}(f(x_{0}-s,y_{0}-t)+f(x_{0}+s,y_{0}-t)+f(x_{0}-s,y_{0}+t)\\ &&+f(x_{0}+s,y_{0}+t)-4\eta(x_{0},y_{0}))\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt. \end{eqnarray*} Let $$\phi_{(x_{0},y_{0})}(s,t):=\tilde{f}(s,t)-4\eta(x_{0},y_{0}),$$ where $\tilde{f}(s,t)=f(x_{0}-s,y_{0}-t)+f(x_{0}+s,y_{0}-t)+f(x_{0}-s,y_{0}+t)+f(x_{0}+s,y_{0}+t).
$\\ Then $$ \lim_{\substack{s \to 0^{+}\\ t \to 0^{+}}}\phi_{(x_{0},y_{0})}(s,t)=\lim_{\substack{s \to 0^{+}\\ t \to 0^{+}}}\tilde{f}(s,t)-4\eta(x_{0},y_{0}) =0.$$ Applying Remarks \ref{re32} and \ref{re37}, $\phi_{(x_{0},y_{0})}$ is also a BVF and can be expressed as the difference between two bounded quasi-monotone functions $ h_{1}$ and $ h_{2}$, i.e., $$\phi_{(x_{0},y_{0})}=h_{1}-h_{2}.$$ Furthermore, the quasi-monotone functions $ h_{1}$ and $h_{2}$ satisfy $$ \lim_{\substack{s \to 0^{+}\\ t \to 0^{+}}} h_{i}(s,t)=0, \quad i=1,2.$$ Therefore $ h_{1}(s,t) $ and $h_{2}(s,t)$ are non-negative in a rectangle $ [0, \delta_{1}] \times [0, \delta_{2}]. $ For any $ \varepsilon >0$, there exist $ \epsilon_{1} >0,\epsilon_{2} >0$, with $ 0 <\epsilon_{1}<\delta_{1}, 0<\epsilon_{2}<\delta_{2},$ such that $$ 0\leq h_{i}(s,t)<\varepsilon, \quad \mbox{for all} \quad 0 <s\leq \epsilon_{1},\ 0 <t\leq\epsilon_{2}. $$ Now we divide $ [0, \infty ) \times [0, \infty ) $ into four parts using $\epsilon_{1}$ and $\epsilon_{2}$: \begin{eqnarray} I(x_{0},y_{0},N,M) - \eta(x_{0},y_{0}) =&&\int_{0}^{\epsilon_{2}}\int_{0}^{\epsilon_{1}}\left(h_{1}(s,t)-h_{2}(s,t)\right)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \nonumber \\ &&+\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}}\left(\tilde{f}(s,t)-4\eta(x_{0},y_{0})\right)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \nonumber \\ &&+\int_{0}^{\epsilon_{2}}\int_{\epsilon_{1}}^{\infty}\left(\tilde{f}(s,t)-4\eta(x_{0},y_{0})\right)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \nonumber \\ &&+\int_{\epsilon_{2}}^{\infty}\int_{\epsilon_{1}}^{\infty}\left(\tilde{f}(s,t)-4\eta(x_{0},y_{0})\right)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt.
\label{hu32} \end{eqnarray} Let $I_{1}$ denote the first term of Equation (\ref{hu32}); we can obtain that \begin{eqnarray*} |I_{1}| \leq \sum_{i=1}^{2} \frac{2}{\pi^2}\left | \int_{0 }^{\epsilon_{1}}\int_{0}^{ \epsilon_{2}}h_{i}(s,t)\frac{\sin (Ms)}{ s}\frac{\sin (Nt)}{ t}dsdt\right |. \end{eqnarray*} By Lemma \ref{le39}, there exist two points $\big\{(\xi^{(1)}_{i} , \xi^{(2)}_{i}), i=1,2\big \}$ such that \begin{eqnarray*} \sum_{i=1}^{2} \frac{2}{\pi^2}\left | \int_{0 }^{\epsilon_{1}}\int_{0}^{\epsilon_{2}}h_{i}(s,t)\frac{\sin (Ms)}{ s}\frac{\sin (Nt)}{ t}dsdt\right| = \sum_{i=1}^{2}h_{i}(\epsilon_{1},\epsilon_{2}) \frac{2}{\pi^2}\left | \int_{\xi^{(1)}_{i} }^{\epsilon_{1}}\int_{\xi^{(2)}_{i}}^{ \epsilon_{2}}\frac{\sin (Ms)}{ s}\frac{\sin (Nt)}{t}dsdt\right |\\ \leq \sum_{i=1}^{2}h_{i}(\epsilon_{1},\epsilon_{2}) \frac{2}{\pi^2}\left | \int_{M\xi^{(1)}_{i} }^{M\epsilon_{1}}\frac{\sin s}{ s}ds\right|\left|\int_{N\xi^{(2)}_{i}}^{ N\epsilon_{2}}\frac{\sin t}{ t}dt\right|. \end{eqnarray*} Hence \begin{eqnarray*} |I_{1}| \leq (\frac{4}{\pi^2} 6^2)\varepsilon, \end{eqnarray*} where the last step applies Lemma \ref{le310}. \par Let $I_{2}$ be the second term of Equation $(\ref{hu32})$, \begin{eqnarray*} I_{2}&&:=\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}}(\tilde{f}(s,t)-4\eta(x_{0},y_{0}))\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \nonumber \\ &&=\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}}(\tilde{f}(s,t)-\tilde{f}(a,t))\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt +\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}} \tilde{f}(a,t) \frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \nonumber \\ &&-\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}}4\eta(x_{0},y_{0})\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt.
\label{hu34} \end{eqnarray*} Since $f(s,t) \in \bf{LC}$, the Riemann--Lebesgue lemma \cite{stein1971introduction} implies that the first term of $I_{2} \rightarrow 0$ as $N \rightarrow \infty$.\\ For the second term of $I_{2}$, $$ \left |\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}} \tilde{f}(a,t) \frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \right |\leqslant \left | \int_{\epsilon_{2}}^{\infty} \tilde{f}(a,t)\frac{\sin (Nt)}{\pi t}dt \right| \left | \int_{0}^{\epsilon_{1}} \frac{\sin (Ms)}{\pi s}ds \right|, $$ where $\tilde{f}(a,\cdot) \in L^{1}(\mathbb{R}, \mathbb{R})$ because $f(s,t) \in \bf{LC}$; then by the Riemann--Lebesgue lemma \cite{stein1971introduction}, the second term of $I_{2} \rightarrow 0$ as $N \rightarrow \infty$. For the third term, $$ \left |\int_{\epsilon_{2}}^{\infty}\int_{0}^{\epsilon_{1}}4\eta(x_{0},y_{0})\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \right | \leqslant \left | 4\eta(x_{0},y_{0}) \int_{0}^{\epsilon_{1}} \frac{\sin (Ms)}{\pi s} ds \right| \left |\int_{\epsilon_{2}}^{\infty} \frac{\sin(Nt)}{\pi t}dt \right |;$$ since $ \int_{\epsilon_{2}}^{\infty} \frac{\sin (Nt)}{\pi t}dt \rightarrow 0$ as $N \rightarrow \infty$, the third term of $I_{2} \rightarrow 0$ as $N \rightarrow \infty$. \par Let $I_{3}$ denote the third term of Equation (\ref{hu32}): $$I_{3}:=\int_{0}^{\epsilon_{2}}\int_{\epsilon_{1}}^{\infty}(\tilde{f}(s,t)-4\eta(x_{0},y_{0}))\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt;$$ an argument similar to that for $I_{2}$, with the roles of $s$ and $t$ exchanged, shows that $I_{3} \to 0$ as $M \rightarrow \infty$. \par Let $I_{4}$ denote the fourth term of Equation (\ref{hu32}).
\begin{eqnarray*} I_{4}:=&&\int_{\epsilon_{2}}^{\infty}\int_{\epsilon_{1}}^{\infty}(\tilde{f}(s,t)-4\eta(x_{0},y_{0}))\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt \nonumber \\ =&&\int_{\epsilon_{2}}^{\infty}\int_{\epsilon_{1}}^{\infty}\tilde{f}(s,t)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt-\int_{\epsilon_{2}}^{\infty}\int_{\epsilon_{1}}^{\infty}4\eta(x_{0},y_{0})\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt. \label{hu36} \end{eqnarray*} Since $\tilde{f}\in L^{1}(\mathbb{R}^2, \mathbb{R})$ and $\frac{1}{\pi t} \leq \frac{1}{\pi \epsilon_{2}},\frac{1}{\pi s} \leq \frac{1}{\pi \epsilon_{1}}$ for $ s \in (\epsilon_{1} , \infty), t \in (\epsilon_{2}, \infty)$, the Riemann--Lebesgue lemma \cite{stein1971introduction} gives $I_{4} \to 0$ as $M \rightarrow \infty,$ $N \rightarrow \infty$. This completes the proof, i.e., $$I(x_{0},y_{0},N,M) - \eta(x_{0},y_{0}) \to 0 \quad \text{as} \quad M \to \infty,\ N \to \infty. $$ \end{proof} \begin{remark} If a separable function $f(s,t)=f_{1}(s)f_{2}(t)$ is a BVF in a rectangle-neighborhood of $(x_{0},y_{0})$ and belongs to $L^{1}(\mathbb{R}^{2}, \mathbb{R} )$, then Equation $(\ref{h825})$ is still valid; that is to say, the $ \mathbf{LC}$ condition is not required in some cases. \end{remark} \begin{defn}\label{def312} The function $f(s,t)=f_{0}(s,t)+\i f_{1}(s,t)+\j f_{2}(s,t)+\k f_{3}(s,t)$ is said to be a \textbf{quaternion bounded variation function (QBVF)} if and only if its components $f_{n}(s,t),n=0,1,2,3$, are all BVFs. \end{defn} \begin{lemma}\label{le313} $f \in L^{1}(\mathbb{R}^2,\mathbb{H})$ if and only if its components $f_{n} \in L^{1}(\mathbb{R}^2, \mathbb{R}), n=0,1,2,3.$ \end{lemma} \begin{proof} The modulus $|f(s,t)|$ of a quaternion-valued function $f(s,t)$ is given by \begin{eqnarray*} |f(s,t)| = \sqrt{|f_0(s,t)|^2+|f_1(s,t)|^2+|f_2(s,t)|^2+|f_3(s,t)|^2}.
\end{eqnarray*} Therefore, if $ f \in L^{1}(\mathbb{R}^2,\mathbb{H}),$ then \begin{eqnarray*} \int_{\mathbb{R}^2} \left|f_{n}(s,t)\right|dsdt \leq \int_{\mathbb{R}^2} |f(s,t)|dsdt < \infty, \end{eqnarray*} $n=0,1,2,3.$ \par On the other hand, since \begin{eqnarray*} |f(s,t)| \leq |f_0(s,t)|+|f_1(s,t)|+|f_2(s,t)|+|f_3(s,t)|, \end{eqnarray*} we have \begin{eqnarray*} \int_{\mathbb{R}^2} |f(s,t)|dsdt \leq \sum_{n=0}^{3}\int_{\mathbb{R}^2} |f_{n}(s,t)|dsdt < \infty. \end{eqnarray*} \end{proof} \begin{theorem}($\textbf{Inversion Theorem for two-sided QFT}$)\label{the314} Suppose that in the cross-neighborhood of $ (x_{0},y_{0})$, $f(s,t)$ is a QBVF and belongs to $\bf{LC},$ and $f \in L^{1}(\mathbb{R}^2,\mathbb{H}) $; then \begin{eqnarray}\label{h81} \eta(x_{0},y_{0})= \lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}e^{\i ux_{0}}\mathcal{F}_{T}(u,v)e^{\j vy_{0}}dudv, \end{eqnarray} where \begin{eqnarray*} \eta(x_{0},y_{0})&&=\frac {f(x_{0}+0,y_{0}+0)+f(x_{0}+0,y_{0}-0)+f(x_{0}-0,y_{0}+0)+f(x_{0}-0,y_{0}-0)}{ 4}\\ &&=\eta_{0}+\i\eta_{1}+\j\eta_{2}+\k\eta_{3},\\ \eta_{n}(x_{0},y_{0})&&=\frac {f_{n}(x_{0}+0,y_{0}+0)+f_{n}(x_{0}+0,y_{0}-0)+f_{n}(x_{0}-0,y_{0}+0)+f_{n}(x_{0}-0,y_{0}-0)}{ 4},\quad n=0,1,2,3,\\ \mathcal{F}_{T}(u,v)&&=\int_{\mathbb{R}^{2}}e^{-\i us}f(s,t)e^{-\j vt}dsdt. \end{eqnarray*} If $f(s,t)$ is continuous at $(x_{0},y_{0})$, then $\eta(x_{0},y_{0})=f(x_{0},y_{0})$.
\end{theorem} \begin{proof} Set $$ \nu(x_{0},y_{0},N,M):=\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}e^{\i ux_{0}}\mathcal{F}_{T}(u,v)e^{\j vy_{0}}dudv,$$ and rewrite this expression by inserting the definition of $\mathcal{F}_{T}(u,v)$: \begin{eqnarray}\label{sin1} &&\nu(x_{0},y_{0},N,M)=\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}e^{\i ux_{0}}\int_{\mathbb{R}^{2}}e^{-\i us}f(s,t)e^{-\j vt}dsdte^{\j vy_{0}}dudv \nonumber \\ &=&\frac{1}{4\pi^2}\int_{\mathbb{R}^{2}}\int_{-N}^{N}\int_{-M}^{M}e^{\i u(x_{0}-s)}f(s,t)e^{\j v(y_{0}-t)}dudvdsdt \nonumber \\ &=&\int_{\mathbb{R}^{2}}\frac{\sin M(x_{0}-s)}{\pi(x_{0}-s)}f(s,t)\frac{\sin N(y_{0}-t)}{\pi(y_{0}-t)}dsdt \nonumber \\ &=&\int_{\mathbb{R}^{2}}f(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin(Nt)}{\pi t}dsdt \\ &=&\int_{\mathbb{R}^{2}}f_{0}(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt +\i \int_{\mathbb{R}^{2}}f_{1}(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin( Nt)}{\pi t}dsdt \nonumber \\ &&+\j\int_{\mathbb{R}^{2}}f_{2}(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin( Nt)}{\pi t}dsdt +\k \int_{\mathbb{R}^{2}}f_{3}(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin (Nt)}{\pi t}dsdt.\nonumber \end{eqnarray} Switching the order of integration in the first step is permitted by the Fubini theorem.\\ Set \begin{eqnarray*} \nu_{n}(x_{0},y_{0},N,M):=\int_{\mathbb{R}^2}f_{n}(x_{0}-s,y_{0}-t)\frac{\sin (Ms)}{\pi s}\frac{\sin( Nt)}{\pi t}dsdt, \quad n=0,1,2,3; \end{eqnarray*} then \begin{eqnarray*} \left|\nu(x_{0},y_{0},N,M)-\eta(x_{0},y_{0}) \right|= \sqrt{\sum_{n=0}^{3}|\nu_{n}(x_{0},y_{0},N,M)-\eta_{n}(x_{0},y_{0})|^2}. \end{eqnarray*} According to Lemma \ref{le313} and Theorem \ref{th311}, we have $$|\nu_{n}(x_{0},y_{0},N,M)-\eta_{n}(x_{0},y_{0})| \to 0 \quad \text{as} \quad N \to \infty, M \to \infty, \quad n=0,1,2,3;$$ hence $|\nu(x_{0},y_{0},N,M)-\eta(x_{0},y_{0})|\to 0$ as $N \rightarrow \infty,$ $M \rightarrow \infty$.
\end{proof} By a similar argument, using the fact that the sinc function in Equation $(\ref{sin1})$ is real-valued and hence commutes with the quaternion-valued function $f,$ we obtain the inversion formulas for the left-sided and right-sided QFTs, respectively. \begin{theorem}\label{cor315} Suppose that in the cross-neighborhood of $(x_{0},y_{0})$, $f(s,t)$ is a QBVF and belongs to $ \bf{LC}$, and $f\in L^{1}(\mathbb{R}^2,\mathbb{H})$; then \begin{description} \item[(a)] the inversion theorem of the right-sided QFT: \begin{eqnarray} \label{r1} \eta(x_{0},y_{0})= \lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}\mathcal{F}_{R}(u,v)e^{\j vy_{0}}e^{\i ux_{0}}dudv, \end{eqnarray} \item[(b)] the inversion theorem of the left-sided QFT: \begin{eqnarray} \label{l1} \eta(x_{0},y_{0})= \lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }\frac{1}{4\pi^2}\int_{-N}^{N}\int_{-M}^{M}e^{\j vy_{0}}e^{\i ux_{0}}\mathcal{F}_{L}(u,v)dudv, \end{eqnarray} \end{description} where \begin{eqnarray*} \eta(x_{0},y_{0})&&=\frac {f(x_{0}+0,y_{0}+0)+f(x_{0}+0,y_{0}-0)+f(x_{0}-0,y_{0}+0)+f(x_{0}-0,y_{0}-0)}{ 4}\\ &&=\eta_{0}+\i\eta_{1}+\j\eta_{2}+\k\eta_{3},\\ \eta_{n}(x_{0},y_{0})&&=\frac {f_{n}(x_{0}+0,y_{0}+0)+f_{n}(x_{0}+0,y_{0}-0)+f_{n}(x_{0}-0,y_{0}+0)+f_{n}(x_{0}-0,y_{0}-0)}{ 4},\\ &&n=0,1,2,3.\\ \end{eqnarray*} If $f(s,t)$ is continuous at $(x_{0},y_{0})$, then $f(x_{0},y_{0})=\eta(x_{0},y_{0}) .$ \end{theorem} \begin{remark}\label{re316} We note that the two-sided QFT defined above can be generalized as follows: \begin{eqnarray}\label{genqft} \mathcal{F}_{T}(u,v):=\int_{\mathbb{R}^2}e^{-\mu_{1}us}f(s,t)e^{-\mu_{2}vt}dsdt, \end{eqnarray} or \begin{eqnarray*} \mathcal{F}_{T}(u,v):=\int_{\mathbb{R}^2}e^{-\mu_{1}us}f(s,t)e^{-\mu_{1}vt}dsdt, \end{eqnarray*} where $\mu_{1}$ and $\mu_{2}$ are pure unit quaternions such that \begin{eqnarray}
&&\mu_{n}=\mu_{n,1}\i+\mu_{n,2}\j+\mu_{n,3}\k; \label{hu37}\\ &&\mu_{n}^{2}=-\mu_{n,1}^{2}-\mu_{n,2}^{2}-\mu_{n,3}^{2}=-1,\quad n=1,2,\nonumber\\ &&\mu_{1,1}\mu_{2,1}+\mu_{1,2}\mu_{2,2}+\mu_{1,3}\mu_{2,3}=0.\nonumber \end{eqnarray} Equation (\ref{twosidedqft}) is the special case of (\ref{genqft}) in which $\mu_1=\i$ and $\mu_2=\j$. The right-sided and left-sided QFTs can be generalized similarly. Since $\int_{-N}^{N}e^{\mu_{n}ux}du=\frac{2\sin(Nx)}{x}$ for $x \neq 0$, $n=1,2$, a similar argument shows that if the quaternion-valued function $f(s,t)$ satisfies the conditions of Theorem \ref{the314}, then $f$ can be recovered from each of the above types of QFTs. \end{remark} \subsection{Problem A for QLCTs} In this subsection, the inversion theorem of the two-sided QLCT is studied. \begin{theorem}($\textbf{Inversion Theorem for two-sided QLCT}$ )\label{the317}\\ Suppose that in the cross-neighborhood of $ (x_{0},y_{0})$, $f$ is a QBVF, $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ belongs to $\bf{LC}$, and $f\in L^{1}(\mathbb{R}^2,\mathbb{H}) $; then \begin{eqnarray}\label{h82} f(x_{0},y_{0})= \lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{\i}(u,x_{0})\mathcal{L}_{T}^{\i,\j}(f)(u, v)K_{A_{2}^{-1}}^{\j}(v,y_{0})dudv, \end{eqnarray} where \begin{eqnarray*} f(x_{0},y_{0})=\frac {f(x_{0}+0,y_{0}+0)+f(x_{0}+0,y_{0}-0)+f(x_{0}-0,y_{0}+0)+f(x_{0}-0,y_{0}-0)}{ 4}.
\end{eqnarray*} \end{theorem} \begin{proof} Set \begin{eqnarray*} J(x_{0},y_{0},N,M)=\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{\i}(u,x_{0})\mathcal{L}_{T}^{\i,\j}(f)(u, v)K_{A_{2}^{-1}}^{\j}(v,y_{0})dudv \label{hu38} \end{eqnarray*} and rewrite this expression by inserting the definition of $\mathcal{L}_{T}^{\i,\j}(f)(u, v)$ in Equation (\ref{hu26}): \begin{eqnarray} &&J(x_{0},y_{0},N,M)=\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{\i}(u,x_{0})\left(\int_{\mathbb{R}^2} K_{A_{1}}^{\i}(s,u)f(s,t)K_{A_{2}}^{\j}(t,v)dsdt\right)K_{A_{2}^{-1}}^{\j}(v,y_{0})dudv \nonumber \\ =&&\int_{-N}^{N}\int_{-M}^{M}\frac{1}{4\pi^2b_{1}b_{2}}\int_{\mathbb{R}^2} e^{-\i\frac{s-x_{0}}{b_{1}}u}e^{\i\frac{a_{1}(s^2-x_{0}^2)}{2b_{1}}}f(s,t) e^{-\j\frac{t-y_{0}}{b_{2}}v}e^{\j\frac{a_{2}(t^2-y_{0}^2)}{2b_{2}}}dsdtdudv \nonumber \\ =&&\int_{\mathbb{R}^2}\frac{1}{\pi^2} \frac{\sin(\frac{s-x_{0}}{b_{1}}N)}{s-x_{0}}e^{\i\frac{a_{1}(s^2-x_{0}^2)}{2b_{1}}}f(s,t)\frac{\sin(\frac{t-y_{0}}{b_{2}}M)}{t-y_{0}} e^{\j\frac{a_{2}(t^2-y_{0}^2)}{2b_{2}}}dsdt \nonumber \\ =&&\int_{\mathbb{R}^2}\frac{1}{\pi^2} \frac{\sin(\frac{s-x_{0}}{b_{1}}N)}{s-x_{0}}e^{\i\frac{a_{1}(-x_{0}^2)}{2b_{1}}}g(s,t)\frac{\sin(\frac{t-y_{0}}{b_{2}}M)}{t-y_{0}} e^{\j\frac{a_{2}(-y_{0}^2)}{2b_{2}}}dsdt, \nonumber \label{hu39} \end{eqnarray} where $g(s,t)=e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$. Since $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}$ and $e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ are QBVFs in the rectangle-neighborhood of $(x_{0},y_{0})$, $g(s,t)$ is also a QBVF by Remark \ref{re32}.
By Theorem \ref{the314}, we have \begin{eqnarray*} \lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }e^{\i\frac{a_{1}}{2b_{1}}x_{0}^{2}}J(x_{0},y_{0},N,M)e^{\j\frac{a_{2}}{2b_{2}}y_{0}^{2}} &&=\lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }\int_{\mathbb{R}^2}\frac{1}{\pi^2} \frac{\sin((s-x_{0})\frac{N}{b_{1}})}{s-x_{0}}g(s,t)\frac{\sin((t-y_{0})\frac{M}{b_{2}})}{t-y_{0}}dsdt\nonumber\\ &&=g(x_{0},y_{0}). \end{eqnarray*} That is to say, \begin{eqnarray*} \lim_{\begin{subarray}{c} N \to \infty \\ M \to \infty \end{subarray} }J(x_{0},y_{0},N,M) &&=e^{-\i\frac{a_{1}}{2b_{1}}x_{0}^{2}}g(x_{0},y_{0})e^{-\j\frac{a_{2}}{2b_{2}}y_{0}^{2}}\\ &&=f(x_{0},y_{0}). \end{eqnarray*} This completes the proof. \end{proof} \begin{remark}\label{re318} \begin{enumerate} \item If $f\in L^{1}(\mathbb{R}^{2},\mathbb{ H})$ is a QBVF in the rectangle-neighborhood of $(x_{0}, y_{0})$, and its four components are separable, i.e., $f_{n}(s,t)=f_{n}^{(1)}(s)f_{n}^{(2)}(t), n=0,1,2,3,$ then the inversion formulas $ (\ref{h81})$, $(\ref{r1})$, $(\ref{l1})$ and $ (\ref{h82})$ hold without the condition $ \mathbf{LC}$. \item The proof of Theorem \ref{the317} only works for the two-sided QLCT, but not for the right-sided and left-sided QLCTs.
A straightforward computation shows that \begin{eqnarray*} &&\int_{-N}^{N}\int_{-M}^{M}\left(\int_{\mathbb{R}^2} f(s,t)K_{A_{1}}^{\i}(s,u)K_{A_{2}}^{\j}(t,v)dsdt\right)K_{A_{2}^{-1}}^{\j}(v,y_{0})K_{A_{1}^{-1}}^{\i}(u,x_{0})dudv\\ =&& \int_{-N}^{N}\int_{-M}^{M}\left(\int_{\mathbb{R}^2} \frac{1}{4\pi^2b_{1}b_{2}}f(s,t)e^{\i(\frac{a_{1}}{2b_{1}}s^2-\frac{1}{b_{1}}su+\frac{d_{1}}{2b_{1}}u^2)} e^{\j(\frac{a_{2}}{2b_{2}}t^2-\frac{1}{b_{2}}tv+\frac{d_{2}}{2b_{2}}v^2)}dsdt\right)\\ ~&& e^{\j(\frac{-d_{2}}{2b_{2}}v^2+\frac{1}{b_{2}}y_{0}v-\frac{a_{2}}{2b_{2}}y_{0}^2)}e^{\i(\frac{-d_{1}}{2b_{1}}u^2+\frac{1}{b_{1}}x_{0}u-\frac{a_{1}}{2b_{1}}x_{0}^2)}dudv\\ = &&\int_{\mathbb{R}^2}f(s,t)\int_{-N}^{N}\frac{1}{4\pi^2b_{1}b_{2}}e^{\i(\frac{a_{1}}{2b_{1}}s^2-\frac{1}{b_{1}}su+\frac{d_{1}}{2b_{1}}u^2)}du\\ &&\int_{-M}^{M}e^{\j(\frac{1}{b_{2}}(y_{0}-t))v}dve^{\j(\frac{a_{2}}{2b_{2}}(t^{2}-y_{0}^{2}))}e^{\i(\frac{-d_{1}}{2b_{1}}u^2+\frac{1}{b_{1}}x_{0}u-\frac{a_{1}}{2b_{1}}x_{0}^2)}dsdt\\ =&&\int_{\mathbb{R}^2}f(s,t)\int_{-N}^{N}\frac{1}{4\pi^2b_{1}b_{2}}e^{\i(\frac{a_{1}}{2b_{1}}s^2-\frac{1}{b_{1}}su+\frac{d_{1}}{2b_{1}}u^2)}du\\ &&\frac{2\sin(\frac{1}{b_{2}}(y_{0}-t)M)}{\frac{1}{b_{2}}(y_{0}-t)}e^{\j(\frac{a_{2}}{2b_{2}}(t^{2}-y_{0}^{2}))}e^{\i(\frac{-d_{1}}{2b_{1}}u^2+\frac{1}{b_{1}}x_{0}u-\frac{a_{1}}{2b_{1}}x_{0}^2)}dsdt. \end{eqnarray*} The non-commutativity of $e^{\j(\frac{a_{2}}{2b_{2}}(t^{2}-y_{0}^{2}))} $ and $e^{\i(\frac{-d_{1}}{2b_{1}}u^2+\frac{1}{b_{1}}x_{0}u-\frac{a_{1}}{2b_{1}}x_{0}^2)}$ in the last equation explains why this method fails for the right-sided and left-sided QLCTs; in the following subsection, however, we show that the inversion formulas of the right-sided and left-sided QLCTs hold pointwise almost everywhere by making use of the relations between the QFTs and QLCTs.
\end{enumerate} \end{remark} \subsection{Problem B for QFTs} In this subsection, we first give sufficient conditions to solve the inversion problems of the different types of QFTs in $L^{1}(\mathbb{R}^{2}, \mathbb{H})$. Moreover, using the relation between the two-sided QFT and the two-sided QLCT, and the relations between the two-sided QLCT and the right-sided and left-sided QLCTs, we solve the inversion problems of the different types of QLCTs. We first give two technical lemmas. \begin{lemma}\label{l51} Suppose $f\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, and let $w$ be the Gaussian function on $ \mathbb{R}^{2}$, i.e., $w(x,y)=\frac{1}{4\pi^{2}}e^{-\alpha(x^{2}+y^{2})}$ for $\alpha >0$; then \begin{eqnarray*}\label{F5} \int_{\mathbb{R}^{2}}\mathcal{F}_{T}(x,y)w(x,y)dxdy=\int_{\mathbb{R}^{2}}f(s,t)\mathcal{W}_{T}(s,t)dsdt, \end{eqnarray*} where $\mathcal{W}_{T}$ is the two-sided QFT of $w$. \end{lemma} \begin{proof} Since \begin{eqnarray*} \mathcal{W}_{T}(s,t)&&=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{-\i sx}e^{-\alpha(x^{2}+y^{2})}e^{-\j ty}dxdy\\ &&= \frac{1}{4\pi^{2}}\int_{\mathbb{R}}e^{-\i sx}e^{-\alpha x^{2}}dx\int_{\mathbb{R}}e^{-\alpha y^{2}}e^{-\j ty}dy=\frac{1}{4\pi \alpha}e^{-\frac{s^{2}+t^{2}}{4\alpha}}, \end{eqnarray*} $\mathcal{W}_{T}(s,t)$ is the Gauss-Weierstrass kernel in $\mathbb{R}^{2}$.
\par Since \begin{eqnarray*} \left |\int_{\mathbb{R}^{2}}\mathcal{F}_{T}(x,y)w(x,y)dxdy \right | \leq \int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\left |e^{-\i sx}f(s,t)e^{-\j ty}w(x,y)\right |dsdtdxdy \leq \|f\|_{1}\|w\|_{1}, \end{eqnarray*} the integral $\int_{\mathbb{R}^{2}}\mathcal{F}_{T}(x,y)w(x,y)dxdy $ is well defined, and by the Fubini theorem we have \begin{eqnarray*} \int_{\mathbb{R}^{2}}\mathcal{F}_{T}(x,y)w(x,y)dxdy &&=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}e^{-\i sx}f(s,t)e^{-\j ty} e^{-\alpha(x^{2}+y^{2})}dsdtdxdy\\ &&=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}\left(\int_{\mathbb{R}}e^{-\alpha x^{2}}e^{-\i sx}dx\right)f(s,t)\left(\int_{\mathbb{R}}e^{-\j ty} e^{-\alpha y^{2}}dy\right)dsdt\\ &&=\frac{1}{4\pi \alpha}\int_{\mathbb{R}^{2}}f(s,t)e^{-\frac{s^{2}+t^{2}}{4\alpha}} dsdt =\int_{\mathbb{R}^{2}}f(s,t)\mathcal{W}_{T}(s,t)dsdt. \end{eqnarray*} \end{proof} From Lemma \ref{l51}, if $ f\in L^{1}(\mathbb{R}^{2}, \mathbb{H}) $, then we have \begin{eqnarray} &&\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i sx}\mathcal{F}_{T}(x,y)e^{\j ty}e^{-\alpha(x^{2}+y^{2})} dxdy \nonumber \\ =&&\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i sx}\int_{\mathbb{R}^{2}}e^{-\i xu}f(u,v)e^{-\j yv}dudve^{\j ty}e^{-\alpha(x^{2}+y^{2})}dxdy \nonumber\\ =&&\frac{1}{4\pi \alpha}\int_{\mathbb{R}^{2}}f(u,v)e^{-\frac{(u-s)^{2}+(v-t)^{2}}{4\alpha}}dudv \nonumber\\ =&&f_{0}\ast \mathcal{W}_{T}+ (f_{1}\ast \mathcal{W}_{T}) \i+(f_{2}\ast \mathcal{W}_{T}) \j+(f_{3}\ast \mathcal{W}_{T})\k. \end{eqnarray} \begin{lemma}\label{l52} If $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ and $\mathcal{W}_{T}(s,t)=\frac{1}{4\pi \alpha}e^{-\frac{s^{2}+t^{2}}{4\alpha}}$ is the Gauss-Weierstrass kernel in $\mathbb{R}^{2}$, then \begin{eqnarray*} \lim_{\alpha \to 0^{+}} \| f\ast \mathcal{W}_{T}-f \|_{1}=0.
\end{eqnarray*} \end{lemma} \begin{proof} By Theorem 1.18 in \cite{stein1971introduction}, if $ f_{n}\in L^{1}(\mathbb{R}^{2},\mathbb{R}),$ then $$\|f_{n}\ast \mathcal{W}_{T}-f_{n} \|_{1}\rightarrow 0 \quad \text{as} \quad \alpha \rightarrow 0, \quad n=0,1,2,3.$$ Since $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, by Lemma \ref{le313} the components $f_{n}$ of $f$ all belong to $L^{1}(\mathbb{R}^{2}, \mathbb{R}), n=0,1,2,3,$ and $$\| f\ast \mathcal{W}_{T}-f\|_{1}\leq \sum_{n=0}^{3} \| f_{n}\ast \mathcal{W}_{T}-f_{n}\|_{1};$$ hence $$\| f\ast \mathcal{W}_{T}-f\|_{1}\rightarrow 0 \quad \text{as} \quad \alpha \rightarrow 0. $$ That is to say, the Gauss means of the integral $\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i sx}\mathcal{F}_{T}(x,y)e^{\j ty} dxdy $ converge to $f(s,t)$ in the $L^{1}$ norm. \end{proof} We are now ready to prove one of our main results. \begin{theorem}($\textbf{Inversion Theorem for two-sided QFT}$)\label{T53}\\ Suppose $f$ and $\mathcal{F}_{T} \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$; then \begin{eqnarray}\label{I3} f(s,t)=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i su}\mathcal{F}_{T}(u,v)e^{\j tv} dudv \end{eqnarray} for almost every $(s,t)$. \end{theorem} \begin{proof} From Lemma \ref{l52}, since $$ \left \| \frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i su}\mathcal{F}_{T}(u,v)e^{\j tv}e^{-\alpha(u^{2}+v^{2})} dudv-f(s,t) \right \|_{1}\longrightarrow 0 \quad \text{as} \quad \alpha \rightarrow 0,$$ there exists a sequence $ \alpha_{k}\longrightarrow 0 $ such that $\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i su}\mathcal{F}_{T}(u,v)e^{\j tv}e^{-\alpha_{k}(u^{2}+v^{2})} dudv\longrightarrow f(s,t) $ for almost every $ (s,t)$, i.e., \begin{eqnarray*}\label{F4} f(s,t)=\lim_{ \alpha_{k} \to 0^{+}}\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i su}\mathcal{F}_{T}(u,v)e^{\j tv}e^{-\alpha_{k}(u^{2}+v^{2})} dudv.
\end{eqnarray*} Since $\mathcal{F}_{T}\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, the quaternion Lebesgue dominated convergence theorem \cite{kou2016envelope} gives the following pointwise equality: \begin{eqnarray*} f(s,t)=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i sx}\mathcal{F}_{T}(x,y)e^{\j ty} dxdy, \quad a.e. \end{eqnarray*} \end{proof} By a similar argument, we can obtain the inversion theorems of the right-sided and left-sided QFTs. \begin{theorem}($\textbf{Inversion Theorem for right-sided QFT}$)\label{R1}\\ Suppose $f $ and $\mathcal{F}_{R} \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$; then \begin{eqnarray}\label{RR1} f(s,t)=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}\mathcal{F}_{R}(u,v)e^{\j tv}e^{\i su} dudv \end{eqnarray} for almost every $(s,t)$. \end{theorem} \begin{theorem}($\textbf{Inversion Theorem for left-sided QFT}$)\label{L1}\\ Suppose $f$ and $\mathcal{F}_{L} \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$; then \begin{eqnarray}\label{LL1} f(s,t)=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\j tv}e^{\i su}\mathcal{F}_{L}(u,v) dudv \end{eqnarray} for almost every $(s,t)$. \end{theorem} In what follows, we give other sufficient conditions under which the inversion formulas of the QFTs hold pointwise. \begin{coro}\label{T55} Suppose $f\in L^{1}(\mathbb{R}^2,\mathbb{H})$; then $ f$ can be reconstructed from its two-sided QFT as in Equation $(\ref{I3})$ if one of the following conditions holds: \begin{description} \item[$(I).$] $\mathcal{F}_{T,n} \in L^{1}(\mathbb{R}^2,\mathbb{H}).$ \item[$(II).$] $F_{n}\in L^{1}(\mathbb{R}^2, \mathbb{C}).$ \item[$(III).$] $f$ is continuous at $(0,0)$ and $ F_{n}\geq 0 .$ \end{description} Here $ \mathcal{F}_{T,n} $ and $F_{n} $ are the two-sided QFT and the 2D FT of the components $f_{n}$ of $f$, $n=0,1,2,3$, respectively.
The 2D FT is defined in Equation $(\ref{e9}).$ \end{coro} \begin{proof} For condition $(I)$, since $$ \| \mathcal{F}_{T}\|_{1} \leq \sum_{n=0}^{3}\| \mathcal{F}_{T,n}\|_{1} < \infty ,$$ Theorem \ref{T53} shows that Equation $(\ref{I3})$ holds. \par For condition $(II)$, the relationship between the two-sided QFT $ \mathcal{H}_{T} $ and the 2D FT $ H $ of a real integrable function $h$ is given as follows: \begin{eqnarray}\label{F1} \mathcal{H}_{T}(u,v)=\frac{ H(u,v)(1-\k)+H(u,-v)(1+\k) }{2}, \end{eqnarray} \begin{eqnarray}\label{F2} H(u,v)=\frac{ \mathcal{H}_{T}(u,v)(1+\k)+\mathcal{H}_{T}(u,-v)(1-\k) }{2}. \end{eqnarray} Equation (\ref{F1}) was given in \cite{pei2001efficient}, while Equation (\ref{F2}) can be proved by a similar argument, which we omit. By Equations (\ref{F1}) and (\ref{F2}), $\mathcal{F}_{T,n} \in L^{1}(\mathbb{R}^2,\mathbb{H}) $ if and only if $F_{n} \in L^{1}(\mathbb{R}^2,\mathbb{C}). $ Therefore conditions $(I)$ and $ (II)$ are equivalent. \par For condition $(III)$, since $f$ is continuous at $(0,0)$ and $ F_{n}\geq 0 $, we have $F_{n}\in L^{1}(\mathbb{R}^2, \mathbb{C})$ by \cite{stein1971introduction}; hence, by condition $(II)$, Equation $(\ref{I3})$ holds. \end{proof} \begin{remark}\label{rem319} \begin{enumerate} \item We can replace the two-sided QFT by the right-sided or left-sided QFT in Equations $(\ref{F1})$ and $(\ref{F2})$. \item It is easy to show that Corollary $\ref{T55}$ works not only for the two-sided QFT, but also for the right-sided and left-sided QFTs and the generalized QFTs in Remark $\ref{re316}$.
\end{enumerate} \end{remark} Before giving other sufficient conditions for the inversion theorems of the QFTs, we introduce the following concept \cite{stein1971introduction}: \begin{defn} If for $f\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ there exists a function $ g \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ such that $$\lim_{h \to 0} \int_{\mathbb{R}^{2}} \left | \frac{f(s+h,t)-f(s,t)}{h} -g(s,t) \right | dsdt=0,$$ then $f$ is said to be differentiable with respect to $s$ in the $L^{1}$ norm, and $g$ is called \textbf{the partial derivative of $f$ with respect to $s$ in the $L^{1}$ norm}. \end{defn} \begin{lemma}\label{h91} If $f\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ and $g(s,t)$ is the partial derivative of $f(s,t)$ with respect to $s$ in the $L^{1}$ norm, then the two-sided QFT $\mathcal{G}_{T}$ of $g$ satisfies \begin{eqnarray*} \mathcal{G}_{T}(u,v)=\i u\mathcal{F}_{T}(u,v). \end{eqnarray*} \end{lemma} \begin{proof} Since $g(s,t)$ is the partial derivative of $f$ with respect to $s$ in the $L^{1}$ norm, $$\lim_{h \to 0} \int_{\mathbb{R}^{2}} \left | \frac{f(s+h,t)-f(s,t)}{h} -g(s,t) \right | dsdt=0.$$ It is easy to see that the two-sided QFT of $ \frac{f(s+h,t)-f(s,t)}{h} -g(s,t)$ is $ \frac{(e^{\i uh}-1)\mathcal{F}_{T}(u,v)}{h}-\mathcal{G}_{T}(u,v) $, and $$ \left |\frac{e^{\i uh}\mathcal{F}_{T}(u,v)-\mathcal{F}_{T}(u,v)}{h}-\mathcal{G}_{T}(u,v) \right | \leq \int_{\mathbb{R}^{2}} \left | \frac{f(s+h,t)-f(s,t)}{h} -g(s,t) \right| dsdt; $$ letting $ h \to 0,$ we obtain \begin{eqnarray*} \mathcal{G}_{T}(u,v)=\i u\mathcal{F}_{T}(u,v).
\end{eqnarray*} \end{proof} Lemma \ref{h91} can be extended to higher derivatives by induction as follows: \begin{theorem}\label{h92} If $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ has derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq m+n$, then \begin{eqnarray} \label{p92} \mathcal{F}_{T}\left(\frac{\partial ^{m+n}}{\partial s^{m}\partial t^{n}}f\right ) (u,v)=(\i u)^{m}\mathcal{F}_{T}(u,v)(\j v)^{n}, \end{eqnarray} where $\mathcal{F}_{T}(\frac{\partial ^{m+n}}{\partial s^{m}\partial t^{n}}f) (u,v)$ is the two-sided QFT of $\frac{\partial ^{m+n}}{\partial s^{m}\partial t^{n}}f(s,t). $ \end{theorem} Because of the non-commutativity of quaternions, we obtain the following results for the left-sided and right-sided QFTs. \begin{theorem} If $f(s,t) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ has derivatives with respect to $s$ in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq m$, then \begin{eqnarray*} \label{p93} \mathcal{F}_{L}\left(\frac{\partial ^{m}}{\partial s^{m}}f\right) (u,v)=(\i u)^{m}\mathcal{F}_{L}(u,v), \end{eqnarray*} where $\mathcal{F}_{L}(\frac{\partial ^{m}}{\partial s^{m}}f) (u,v)$ is the left-sided QFT of $\frac{\partial ^{m}}{\partial s^{m}}f(s,t) .$ \end{theorem} \begin{theorem} If $f(s,t) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ has derivatives with respect to $t$ in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq n$, then \begin{eqnarray*} \label{p94} \mathcal{F}_{R}\left(\frac{\partial ^{n}}{\partial t^{n}}f\right) (u,v)=\mathcal{F}_{R}(u,v)(\j v)^{n}, \end{eqnarray*} where $\mathcal{F}_{R}(\frac{\partial ^{n}}{\partial t^{n}}f) (u,v)$ is the right-sided QFT of $\frac{\partial ^{n}}{\partial t^{n}}f(s,t).
$ \end{theorem} \begin{theorem}($\textbf{Inversion Theorem for two-sided QFT}$)\label{h93}\\ If $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ has derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3$, then \begin{eqnarray*} f(s,t)=\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{\i su}\mathcal{F}_{T}(u,v)e^{\j tv} dudv \end{eqnarray*} for almost every $(s,t)$. \end{theorem} \begin{proof} Let $z=(u,v)\in \mathbb{R}^{2}$; then there exists a constant $C$ such that \begin{eqnarray*} (1+|z|^{2})^{\frac{3}{2}} \leq (1+|u|+|v|)^{3} \leq C \sum_{|\alpha|\leq 3}|z^{\alpha}|, \end{eqnarray*} where $z^{\alpha}=u^{m}v^{n},|\alpha|=m+n $. Then by Equation $(\ref{p92})$, \begin{eqnarray*} |\mathcal{F}_{T}(u,v)|&&\leq (1+|z|^{2})^{-\frac{3}{2}} C\sum_{|\alpha|\leq 3}|z^{\alpha}||\mathcal{F}_{T}(u,v)|\\ &&=(1+|z|^{2})^{-\frac{3}{2}} C\sum_{|\alpha|\leq 3}\left|\mathcal{F}_{T}\left(\frac{\partial ^{m+n}}{\partial s^{m}\partial t^{n}}f\right)(u,v)\right|\\ &&\leq (1+|z|^{2})^{-\frac{3}{2}}C \sum_{|\alpha|\leq 3}\left \|\frac{\partial ^{m+n}}{\partial s^{m}\partial t^{n}}f\right\|_{1}. \end{eqnarray*} Since $ (1+|z|^{2})^{-\frac{3}{2}} $ is an integrable function on $\mathbb{R}^{2} $, it follows that $\mathcal{F}_{T} \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$; then by Theorem $ \ref{T53}$, the proof is completed. \end{proof} \begin{remark} The above method can only be applied to the two-sided QFT, but not to the left-sided and right-sided QFTs, because of the non-commutativity of quaternions. \end{remark} \subsection{Problem B for QLCTs} In this subsection, we first prove the following lemma, which arises from the relationship between the two-sided QLCT and the QFT \cite{bahri2014relationship}; we then derive the inversion formulas of the QLCTs.
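The discrete analogue of the two-sided QFT inversion in Equation (\ref{I3}), on which the QLCT results of this subsection are built, can also be checked numerically. The following Python sketch is purely illustrative and is not part of the proofs; the helper names (`qmul`, `two_sided`) are ad hoc, quaternions are stored as real $4$-vectors $(w,x,y,z)\sim w+x\i+y\j+z\k$, and the transform is a direct $O(N^{4})$ summation on an $N\times N$ grid.

```python
import numpy as np

rng = np.random.default_rng(1)

def qmul(p, q):
    # Hamilton product for arrays of shape (..., 4): (w, x, y, z) ~ w + x*i + y*j + z*k
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw], axis=-1)

def e_i(th):  # e^{i*th} as a quaternion array
    z = np.zeros_like(th)
    return np.stack([np.cos(th), np.sin(th), z, z], axis=-1)

def e_j(th):  # e^{j*th} as a quaternion array
    z = np.zeros_like(th)
    return np.stack([np.cos(th), z, np.sin(th), z], axis=-1)

N = 8
th = 2 * np.pi * np.arange(N) / N  # th[s] = 2*pi*s/N

def two_sided(g, sign=-1):
    # F[u, v] = sum_{s,t} e^{sign*i*2pi*u*s/N} g[s, t] e^{sign*j*2pi*v*t/N}
    F = np.zeros_like(g)
    for u in range(N):
        for v in range(N):
            ker_i = e_i(sign * u * th)[:, None, :]  # left i-kernel, indexed by s
            ker_j = e_j(sign * v * th)[None, :, :]  # right j-kernel, indexed by t
            F[u, v] = qmul(qmul(ker_i, g), ker_j).sum(axis=(0, 1))
    return F

f = rng.standard_normal((N, N, 4))    # random quaternion-valued signal
F = two_sided(f, sign=-1)             # forward two-sided QFT
f_rec = two_sided(F, sign=+1) / N**2  # discrete analogue of the inversion formula
print(np.max(np.abs(f_rec - f)))      # machine-precision zero
```

Because the $\i$-exponentials act on the left of $f$ and the $\j$-exponentials on the right, the orthogonality sums $\sum_{u}e^{\i 2\pi u(a-s)/N}=N\delta_{a,s}$ and $\sum_{v}e^{\j 2\pi v(b-t)/N}=N\delta_{b,t}$ factor out exactly as in the continuous proof, so the reconstruction is exact up to floating-point error.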
\begin{lemma} \label{l55} $f(s,t)\in L^{1}(\mathbb{R}^2,\mathbb{H})$ if and only if $p(s,t)\in L^{1}(\mathbb{R}^2,\mathbb{H})$, and $\mathcal{L}_{T}^{\i,\j}(f)(u,v )\in L^{1}(\mathbb{R}^2,\mathbb{H})$ if and only if $\mathcal{P}_{T}(u,v)\in L^{1}(\mathbb{R}^2,\mathbb{H})$, where \begin{eqnarray*} p(s,t):=e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}\label{hu47} \end{eqnarray*} and $\mathcal{P}_{T}(u,v)$ is the two-sided QFT of $p(s,t).$ \end{lemma} \begin{proof} This follows from the identity \begin{eqnarray*}\label{QFR} \mathcal{L}_{T}^{\i,\j}(f)(u,v )&&=\int_{\mathbb{R}^2} K_{A_{1}}^{\i}(s,u)f(s,t)K_{A_{2}}^{\j}(t,v)dsdt \nonumber\\ &&=\frac{1}{\sqrt{\i 2b_{1}\pi}}e^{\i\frac{d_{1}}{2b_{1}}u^{2}} \mathcal{P}_{T}(\frac{1}{b_{1}}u,\frac{1}{b_{2}}v)\frac{1}{\sqrt{\j 2b_{2}\pi}}e^{\j\frac{d_{2}}{2b_{2}}v^{2}}.\label{hu46} \end{eqnarray*} \end{proof} By Lemma \ref{l55} and Theorem \ref{T53}, we obtain the inversion theorem of the two-sided QLCT. \begin{theorem}($\textbf{Inversion Theorem for two-sided QLCT}$)\label{T57}\\ If one of the following conditions holds: \begin{description} \item[$(\alpha)$] $f $ and $\mathcal{L}_{T}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, \item[$(\beta)$] $p(s,t)=e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ has derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3$, \end{description} then the original function $f$ can be recovered from its two-sided QLCT by Equation $(\ref{F3})$: \begin{eqnarray}\label{F3} f(s,t)=\mathcal{L}_{T^{-1}}^{\i,\j}( \mathcal{L}_{T}^{\i,\j}(f))(s,t )=\int_{\mathbb{R}^{2}}K_{A_{1}^{-1}}^{\i}(u,s)\mathcal{L}_{T}^{\i,\j}(f)(u,v )K_{A_{2}^{-1}}^{\j}(v,t)dudv \end{eqnarray} for almost every $(s,t)$.
\end{theorem} \begin{proof} On the one hand, if $f$ satisfies condition $ (\alpha),$ then Lemma \ref{l55} shows that $ p(s,t)$ and its two-sided QFT $ \mathcal{P}_{T}(u,v)$ also belong to $L^{1}(\mathbb{R}^{2}, \mathbb{H})$, so by Theorem $ \ref{T53}$, $ p(s,t)$ can be recovered from its QFT almost everywhere as follows: \begin{eqnarray*} p(s,t)=\frac{1}{4 \pi^{2}}\int_{\mathbb{R}^{2}}e^{\i us}\mathcal{P}_{T}(u,v)e^{\j vt}dudv, \quad a.e. \end{eqnarray*} Then, from Lemma \ref{l55}, a straightforward calculation gives \begin{eqnarray*} e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}=&&\frac{1}{4b_{1}b_{2} \pi^{2}} \int_{\mathbb{R}^{2}}e^{\i \frac{1}{b_{1}}us}\mathcal{P}_{T}(\frac{1}{b_{1}}u,\frac{1}{b_{2}}v)e^{\j \frac{1}{b_{2}}vt}dudv \\ =&&\frac{1}{4b_{1}b_{2}\pi^{2}} \int_{\mathbb{R}^{2}}e^{\i \frac{1}{b_{1}}us}\sqrt{\i 2b_{1}\pi} e^{-\i\frac{d_{1}}{2b_{1}}u^{2}} \mathcal{L}_{T}^{\i,\j}(f)(u,v )\sqrt{\j 2b_{2}\pi}e^{-\j\frac{d_{2}}{2b_{2}}v^{2}} e^{\j \frac{1}{b_{2}}vt}dudv,\\ f(s,t)=&& \int_{\mathbb{R}^{2}}\frac{1}{\sqrt{-\i 2b_{1}\pi} }e^{-\i(\frac{a_{1}}{2b_{1}}s^{2}-\frac{1}{b_{1}}us+\frac{d_{1}}{2b_{1}}u^{2})} \mathcal{L}_{T}^{\i,\j}(f)(u,v )\frac{1}{\sqrt{-\j 2b_{2}\pi} }e^{-\j(\frac{a_{2}}{2b_{2}}t^{2}-\frac{1}{b_{2}}vt+\frac{d_{2}}{2b_{2}}v^{2})}dudv, \quad a.e.; \end{eqnarray*} the proof of the theorem under condition $ (\alpha)$ is completed. \par On the other hand, if $f$ satisfies condition $ (\beta)$, that is, $p\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ has derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3$, then Theorem \ref{h93} shows that $ \mathcal{P}_{T} \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, and Lemma \ref{l55} implies that $ \mathcal{L}_{T}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$. Hence, by the argument for condition $ (\alpha)$, the proof under condition $ (\beta)$ is complete.
\end{proof} Using the relationship between the two-sided QLCT and the right-sided and left-sided QLCTs, the existence and invertibility of the right-sided and left-sided QLCTs can be inherited from the two-sided QLCT. \begin{lemma}\label{relation} If $f\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, the right-sided and left-sided QLCTs of $f$ can be decomposed into the sum of two two-sided QLCTs: \begin{eqnarray} \label{f6} \mathcal{L}_{R}^{\i,\j}(f)(u,v)= \mathcal{L}_{T}^{\i,\j}(f_{a})(u,v)+ \mathcal{L}_{T}^{-\i,\j}(f_{b})(u,v)\j, \end{eqnarray} \begin{eqnarray} \label{f7} \mathcal{L}_{L}^{\i,\j}(f)(u,v)= \mathcal{L}_{T}^{\i,\j}(f_{d})(u,v)+ \i\mathcal{L}_{T}^{\i,-\j}(f_{e})(u,v), \end{eqnarray} where \begin{eqnarray}\label{f8} f=f_{a}+f_{b}\j, f_{a}:=f_{0}+\i f_{1}, f_{b}:=f_{2}+\i f_{3}, f=f_{d}+\i f_{e}, f_{d}:=f_{0}+\j f_{2}, f_{e}:=f_{1}+\j f_{3}. \end{eqnarray} \end{lemma} \begin{proof} It suffices to prove Equation (\ref{f6}); Equation (\ref{f7}) can be proved in a similar way, so we omit it. \begin{eqnarray*} \mathcal{L}_{R}^{\i,\j}(f)(u,v)=&&\int_{\mathbb{R}^2}[ f_{a}(s,t)+f_{b}(s,t)\j]K_{A_{1}}^{ \i}(s,u)K_{A_{2}}^{ \j}(t,v)dsdt \nonumber \\ =&&\int_{\mathbb{R}^2}[ f_{a}(s,t)]K_{A_{1}}^{ \i}(s,u)K_{A_{2}}^{ \j}(t,v)dsdt +\int_{\mathbb{R}^2}[ f_{b}(s,t)]K_{A_{1}}^{ -\i}(s,u)K_{A_{2}}^{ \j}(t,v)dsdt \j \nonumber \\ =&&\int_{\mathbb{R}^2}K_{A_{1}}^{ \i}(s,u)f_{a}(s,t)K_{A_{2}}^{ \j}(t,v)dsdt +\int_{\mathbb{R}^2}K_{A_{1}}^{ -\i}(s,u) f_{b}(s,t)K_{A_{2}}^{ \j}(t,v)dsdt \j \nonumber \\ =&&\mathcal{L}_{T}^{\i,\j}(f_{a})(u,v)+ \mathcal{L}_{T}^{-\i,\j}(f_{b})(u,v)\j. \nonumber \end{eqnarray*} \end{proof} Therefore, using suitable conditions in Theorems \ref{the317}, \ref{T57} and Lemma \ref{relation}, we have the following inversion formulas for the right-sided and left-sided QLCTs. 
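The component decompositions in Equation (\ref{f8}) and the commutation rule $\j e^{\i t}=e^{-\i t}\j$ used in the proof of Lemma \ref{relation} are elementary quaternion identities. As a purely illustrative aside (not part of the formal development; the class `Q` and all helper names below are our own), they can be checked numerically:

```python
import math

class Q:
    """Quaternion q = w + x*i + y*j + z*k with the Hamilton product."""
    def __init__(self, w=0.0, x=0.0, y=0.0, z=0.0):
        self.w, self.x, self.y, self.z = float(w), float(x), float(y), float(z)
    def __add__(self, o):
        return Q(self.w + o.w, self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o):
        return Q(self.w - o.w, self.x - o.x, self.y - o.y, self.z - o.z)
    def __mul__(self, o):
        # Hamilton product: i^2 = j^2 = k^2 = ijk = -1, so ij = k = -ji, etc.
        return Q(self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
                 self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
                 self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
                 self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w)
    def norm(self):
        return math.sqrt(self.w**2 + self.x**2 + self.y**2 + self.z**2)

I, J = Q(0, 1, 0, 0), Q(0, 0, 1, 0)

def exp_i(t):
    """e^{i t} = cos t + i sin t, an i-complex exponential inside H."""
    return Q(math.cos(t), math.sin(t), 0, 0)

# A sample quaternion f = f0 + i f1 + j f2 + k f3 and its components.
f0, f1, f2, f3 = 1.0, -2.0, 0.5, 3.0
f = Q(f0, f1, f2, f3)
fa, fb = Q(f0, f1, 0, 0), Q(f2, f3, 0, 0)  # f_a = f0 + i f1, f_b = f2 + i f3
fd, fe = Q(f0, 0, f2, 0), Q(f1, 0, f3, 0)  # f_d = f0 + j f2, f_e = f1 + j f3

assert (fa + fb*J - f).norm() < 1e-12          # f = f_a + f_b j
assert (fd + I*fe - f).norm() < 1e-12          # f = f_d + i f_e
t = 0.7
assert (J*exp_i(t) - exp_i(-t)*J).norm() < 1e-12  # j e^{i t} = e^{-i t} j
```

The last identity is what allows the $\i$-complex kernel to be moved past the factor $f_{b}\j$ at the cost of replacing $\i$ by $-\i$.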
\begin{coro}($\textbf{ Inversion formulas for right-sided and left-sided QLCTs}$)\label{c419}\\ Suppose $f\in L^{1}(\mathbb{R}^2,\mathbb{H})$ and that, in the cross-neighborhood of the point $(x_{0},y_{0}),$ $f(s,t)$ is a QBVF. \\ If $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{a}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and $e^{-\i\frac{a_{1}}{2b_{1}}s^{2}}f_{b}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ belong to $\bf{LC},$ then the inversion formula of the right-sided QLCT of $f $ is obtained as follows: \begin{eqnarray}\label{xx2} f(x_{0},y_{0})&&= \lim_{\substack{N\to \infty \\ M \to \infty} }\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{\i}(u,x_{0})\mathcal{L}_{T}^{\i,\j}(f_{a})(u,v)K_{A_{2}^{-1}}^{\j}(v,y_{0})dudv \nonumber\\ &&+\lim_{\substack{N \to \infty \\ M \to \infty} }\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{-\i}(u,x_{0})\mathcal{L}_{T}^{-\i,\j}(f_{b})(u,v)K_{A_{2}^{-1}}^{\j}(v,y_{0})dudv\j. \end{eqnarray} If $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{d}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{e}(s,t)e^{-\j\frac{a_{2}}{2b_{2}}t^{2}}$ belong to $\bf{LC},$\\ then the inversion formula of the left-sided QLCT of $f $ is obtained as follows: \begin{eqnarray}\label{xxx2} f(x_{0},y_{0})&&= \lim_{\substack{N\to \infty \\ M \to \infty} }\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{\i}(u,x_{0})\mathcal{L}_{T}^{\i,\j}(f_{d})(u,v)K_{A_{2}^{-1}}^{\j}(v,y_{0})dudv \nonumber\\ &&+\i\lim_{\substack{N \to \infty \\ M \to \infty} }\int_{-N}^{N}\int_{-M}^{M}K_{A_{1}^{-1}}^{\i}(u,x_{0}) \mathcal{L}_{T}^{\i,-\j}(f_{e})(u,v)K_{A_{2}^{-1}}^{-\j}(v,y_{0})dudv, \end{eqnarray} where $$ f(x_{0},y_{0})=\frac {f(x_{0}+0,y_{0}+0)+f(x_{0}+0,y_{0}-0)+f(x_{0}-0,y_{0}+0)+f(x_{0}-0,y_{0}-0)}{ 4}.$$ \end{coro} \begin{coro}($\textbf{ Inversion formula for right-sided QLCT}$)\label{c420}\\ Let $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H}) $. 
If \par $1)$ $\mathcal{L}_{T}^{\i,\j}(f_{a})$ and $\mathcal{L}_{T}^{-\i,\j}(f_{b}) \in L^{1}(\mathbb{R}^{2}, \mathbb{H}) $, \par or \par $2) e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{a}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and $ e^{-\i\frac{a_{1}}{2b_{1}}s^{2}}f_{b}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ both have derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3$, then the inversion formula of the right-sided QLCT of $f $ is \begin{eqnarray}\label{rr1} f(s,t)= \mathcal{L}_{T^{-1}}^{\i,\j}(\mathcal{L}_{T}^{\i,\j}(f_{a}))(s,t)+ \mathcal{L}_{T^{-1}}^{-\i,\j}(\mathcal{L}_{T}^{-\i,\j}(f_{b}))(s,t)\j, \quad a.e. \end{eqnarray} \end{coro} \begin{coro}($\textbf{ Inversion formula for left-sided QLCT}$)\label{c421}\\ Let $f\in L^{1}(\mathbb{R}^{2}, \mathbb{H}).$ If \par $1) \mathcal{L}_{T}^{\i,\j}(f_{d})$ and $\mathcal{L}_{T}^{\i,-\j}(f_{e})\in L^{1}(\mathbb{R}^{2}, \mathbb{H}) $, \par or \par $2) e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{d}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and $ e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{e}(s,t)e^{-\j\frac{a_{2}}{2b_{2}}t^{2}}$ both have derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3$,\\ then the inversion formula of the left-sided QLCT of $f$ is \begin{eqnarray}\label{ll1} f(s,t)= \mathcal{L}_{T^{-1}}^{\i,\j}(\mathcal{L}_{T}^{\i,\j}(f_{d}))(s,t)+\i \mathcal{L}_{T^{-1}}^{\i,-\j}(\mathcal{L}_{T}^{\i,-\j}(f_{e}))(s,t), \quad a.e. \end{eqnarray} \end{coro} \begin{remark} From the above corollaries, we see that the original function can be recovered from its right-sided and left-sided QLCTs via the two-sided QLCTs of its components. \end{remark} The proof of the following Lemma $\ref{l81}$ is straightforward. 
\begin{lemma}\label{l81} If $f\in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, then \begin{eqnarray*} \mathcal{L}_{R}^{\i}(f)(u,t) &=&\mathcal{F}_{r}^{\i}(f(\cdot,t) \frac{1}{\sqrt{2b_{1}\i\pi}}e^{\i\frac{a_{1}}{2b_{1}}(\cdot)^{2}}) (\frac{u}{b_{1}}, t)e^{\i\frac{d_{1}}{2b_{1}}u^{2}},\\ \mathcal{L}_{R}^{\i,\j}(f)(u,v) &=&\mathcal{F}_{r}^{\j}\bigg(\mathcal{L}_{R}^{\i}(f) (u, \cdot)e^{\j\frac{a_{2}}{2b_{2}}(\cdot)^{2}}\bigg)(u, \frac{v}{b_{2}})\frac{1}{\sqrt{2b_{2}\j\pi}}e^{\j\frac{d_{2}}{2b_{2}}v^{2}}, \end{eqnarray*} where \begin{eqnarray*} \mathcal{F}_{r}^{\i}(f)(u,t)&&:=\int_{\mathbb{R}}f(s,t)e^{-\i us}ds,\\ \mathcal{F}_{r}^{\j}(f)(u,v)&&:=\int_{\mathbb{R}}f(u,t)e^{-\j vt}dt,\\ \mathcal{L}_{R}^{\i}(f)(u,t)&&:=\int_{\mathbb{R}}f(s,t)K_{A_{1}}^{\i}(s,u)ds.\\ \end{eqnarray*} \end{lemma} Using a similar argument to the proof of Theorem $\ref{T53}$, we have the following lemma. \begin{lemma}\label{l82} Suppose $f$ and $\mathcal{F}\in L^{1}(\mathbb{R}, \mathbb{H})$; then $$ f(x)= \frac{1}{2\pi}\int_{\mathbb{R}}\mathcal{F}(w)e^{\mu xw}dw, $$ for almost every $x$, where $\mathcal{F}(w)=\int_{\mathbb{R}}f(x)e^{-\mu xw}dx$ and $\mu$ is a pure unit quaternion, i.e. a quaternion of unit magnitude with no real part. \end{lemma} Then we can prove our desired results. \begin{theorem}\label{h823}($\textbf{ Inversion Theorem for right-sided QLCT}$)\\ Suppose $f$ and $\mathcal{L}_{R}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H});$ then the inversion formula of the right-sided QLCT of $f $ is \begin{eqnarray}\label{TT57} f(s,t)= \int_{\mathbb{R}^{2}}\mathcal{L}_{R}^{\i,\j}(f)(u,v)K_{A_{2}^{-1}}^{\j}(v,t)K_{A_{1}^{-1}}^{\i} (u,s)dudv, \end{eqnarray} for almost every $(s,t)$. 
\end{theorem} \begin{proof} On the one hand, since $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, we have $ \mathcal{L}_{R}^{\i}(f)(u,\cdot) \in L^{1}(\mathbb{R}, \mathbb{H})$ in the variable $t$, which implies that $$\mathcal{L}_{R}^{\i}(f)(u,\cdot)\frac{1}{\sqrt{2\pi b_{2}\j}}e^{\j\frac{a_{2}}{2b_{2}}(\cdot)^{2}} \in L^{1}(\mathbb{R}, \mathbb{H}).$$ On the other hand, since $ \mathcal{L}_{R}^{\i,\j}(f)(u,v)\in L^{1}(\mathbb{R}^{2}, \mathbb{H}), $ by Lemma $\ref{l81}$, we have $$\mathcal{F}_{r}^{\j}\left(\mathcal{L}_{R}^{\i}(f) (u, \cdot)e^{\j\frac{a_{2}}{2b_{2}}(\cdot)^{2}}\right )(u, \frac{v}{b_{2}})\in L^{1}(\mathbb{R}^{2}, \mathbb{H}). $$ Combining this with Lemma $\ref{l82}$, it follows that \begin{eqnarray*} \mathcal{L}_{R}^{\i}(f)(u, t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}&=&\int_{\mathbb{R}} \mathcal{F}_{r}^{\j}\bigg(\mathcal{L}_{R}^{\i}(f) (u, \cdot)e^{\j\frac{a_{2}}{2b_{2}}(\cdot)^{2}}\bigg)(u, \frac{v}{b_{2}})\frac{1}{2b_{2}\pi} e^{\j(\frac{1}{b_{2}}vt)}dv\\ \mathcal{L}_{R}^{\i}(f)(u, t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}&=&\int_{\mathbb{R}} \mathcal{L}_{R}^{\i,\j}(f)(u,v) \sqrt{2b_{2}\j\pi}e^{\j(-\frac{d_{2}}{2b_{2}}v^{2})}\frac{1}{2b_{2}\pi} e^{\j(\frac{1}{b_{2}}vt)}dv\\ \mathcal{L}_{R}^{\i}(f)(u, t)&=&\int_{\mathbb{R}} \mathcal{L}_{R}^{\i,\j}(f)(u,v) \frac{1}{\sqrt{-2b_{2}\j\pi}}e^{\j(-\frac{d_{2}}{2b_{2}}v^{2}+\frac{1}{b_{2}}vt-\frac{a_{2}}{2b_{2}}t^{2})}dv\\ &=&\int_{\mathbb{R}} \mathcal{L}_{R}^{\i,\j}(f)(u,v)K_{A_{2}^{-1}}^{\j}(v,t)dv. 
\end{eqnarray*} for almost every $t$.\\ Similarly, $\mathcal{L}_{R}^{\i}(f)(\cdot, t) \in L^{1}(\mathbb{R}, \mathbb{H})$ because of \begin{eqnarray*} \int_{\mathbb{R}}\left |\mathcal{L}_{R}^{\i}(f)(u, t)\right |du \leq \frac{1}{\sqrt{2\pi b_{2}}}\int_{\mathbb{R}^{2}}\left | \mathcal{L}_{R}^{\i,\j}(f)(u,v)\right | dvdu, \end{eqnarray*} and then \begin{eqnarray*} f(s,t)&&=\int_{\mathbb{R}} \mathcal{L}_{R}^{\i}(f)(u,t) \frac{1}{\sqrt{-2b_{1}\i\pi}}e^{-\i(\frac{d_{1}}{2b_{1}}u^{2}-\frac{1}{b_{1}}us+\frac{a_{1}}{2b_{1}}s^{2})}du\\ &&=\int_{\mathbb{R}^{2}} \mathcal{L}_{R}^{\i,\j}(f)(u,v)K_{A_{2}^{-1}}^{\j}(v,t)K_{A_{1}^{-1}}^{\i}(u,s)dvdu, \end{eqnarray*} for almost every $(s,t)$. \end{proof} A similar result can be obtained for the left-sided QLCT. \begin{theorem}\label{h824}($\textbf{ Inversion Theorem for left-sided QLCT}$)\\ Suppose $f$ and $\mathcal{L}_{L}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H});$ then the inversion formula of the left-sided QLCT of $f $ is \begin{eqnarray}\label{TT58} f(s,t)= \int_{\mathbb{R}^{2}}K_{A_{1}^{-1}}^{\i} (u,s)K_{A_{2}^{-1}}^{\j}(v,t)\mathcal{L}_{L}^{\i,\j}(f)(u,v)dudv, \end{eqnarray} for almost every $(s,t)$. \end{theorem} \begin{remark}\label{rem319} \begin{enumerate} \item In the above lemmas and theorems, we can replace the imaginary units $\i$ and $\j$ by the pure unit quaternions $ \mu_{1}$ and $ \mu_{2}$, respectively, which are defined in Equations $(\ref{hu37})$. \item When $A_{1}=A_{2}=\left( \begin{array}{cc} 0 &1 \\ -1 &0 \\ \end{array} \right)$, the QLCTs reduce to the QFTs, and the inversion theorems of right-sided and left-sided QLCTs, such as Corollaries $\ref{c419}$, $\ref{c420}$, $\ref{c421}$, become the inversion theorems for right-sided and left-sided QFTs. 
\item When $ A_{1}=\left( \begin{array}{cc} \cos \alpha & \sin \alpha \\ -\sin \alpha & \cos\alpha \\ \end{array} \right) , A_{2}=\left( \begin{array}{cc} \cos \beta & \sin \beta \\ -\sin \beta & \cos\beta \\ \end{array} \right)$, the inversion theorems of different types of QLCTs become the inversion theorems of different types of QFRFTs. \end{enumerate} \end{remark} \setcounter{equation}{0} \section{Conclusion}\label{sec4} This paper studied conditions for the inversion theorems of QFTs and QLCTs, which are summarized in the following Tables 1 and 2. \begin{table}[h]\label{tab1} \caption{Inversion conditions of QFTs} \centering \begin{tabular}{|c|c|c|} \hline QFT & Inversion conditions for $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ & Inversion formula\\ \hline $\multirow{10}{*}{ All QFTs} $& In the cross-neighborhood of $ (x_{0},y_{0}),$ & $ \multirow{2}{*}{ Formulas (\ref{h81}), (\ref{r1}), (\ref{l1}).}$\\ & $ f$ is a QBVF and belongs to $\bf{LC}$. & \\ \cline{2-3} & $\mathcal{F}_{T} \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, & $\multirow{10}{*}{ Formulas (\ref{I3}),(\ref{RR1}),(\ref{LL1}).}$ \\ \cline{2-2} &$\mathcal{F}_{T,n} \in L^{1}(\mathbb{R}^2,\mathbb{H}),$ & \\ &where $\mathcal{F}_{T, n} $ is the QFT of $f_{n}, n=0,1,2,3$, &\\ &which are the components of $f$. &\\ \cline{2-2} &$F_{n} \in L^{1}(\mathbb{R}^2,\mathbb{H}),$ &\\ &where $F_{n} $ is the 2D FT of $f_{n}, n=0,1,2,3$, &\\ &which are the components of $f$. 
&\\ \cline{2-2} &$f$ is continuous at $(0,0)$, $ F_{n}\geq 0 ,$ &\\ \hline Two-sided QFT & $f $ has derivatives & Formula $ (\ref{I3}).$\\ &in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3.$ & \\ \hline \end{tabular} \end{table} \begin{table}[h!]\label{tab2} \caption{Inversion conditions of QLCTs} \centering \begin{tabular}{|c|c|c|} \hline QLCTs & Inversion conditions for $f \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ & Inversion formula\\ \hline \multirow{4}{*}{Two-sided QLCT} & In the cross-neighborhood of $ (x_{0},y_{0}),$ $ f $ is a QBVF & \multirow{2}{*}{ Formula $(\ref{h82})$} \\ & and $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ belongs to $\bf{LC}$. & \\ \cline{2-3} & $\mathcal{L}_{T}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ & \multirow{3}{*}{ Formula (\ref{F3})}\\ \cline{2-2} &$e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}} \in L^{1}(\mathbb{R}^{2}, \mathbb{H}) $ has derivatives & \\ & in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3.$ &\\ \cline{1-3} \multirow{8}{*}{ Right-sided QLCT} & In the cross-neighborhood of $ (x_{0},y_{0}),$ & \multirow{3}{*}{ Formula $(\ref{xx2})$} \\ & $ f $ is a QBVF, $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{a}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and & \\ & $ e^{-\i\frac{a_{1}}{2b_{1}}s^{2}}f_{b}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ belong to $\bf{LC}$. 
& \\ \cline{2-3} & $\mathcal{L}_{R}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ & Formula (\ref{TT57})\\ \cline{2-3} &$ e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{a}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and $ e^{-\i\frac{a_{1}}{2b_{1}}s^{2}}f_{b}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}} $ & \multirow{4}{*}{ Formula (\ref{rr1})} \\ & both have derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3.$ & \\ \cline{2-2} & $ \mathcal{L}_{T}^{\i,\j}(f_{a})$ and $\mathcal{L}_{T}^{-\i,\j}(f_{b}) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$, & \\ \cline{1-3} \multirow{8}{*}{ Left-sided QLCT} & In the cross-neighborhood of $ (x_{0},y_{0}),$ & \multirow{3}{*}{ Formula $(\ref{xxx2})$} \\ & $ f $ is a QBVF, $e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{d}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and & \\ & $ e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{e}(s,t)e^{-\j\frac{a_{2}}{2b_{2}}t^{2}}$ belong to $\bf{LC}$. & \\ \cline{2-3} & $\mathcal{L}_{L}^{\i,\j}(f) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$ & Formula (\ref{TT58})\\ \cline{2-3} &$ e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{d}(s,t)e^{\j\frac{a_{2}}{2b_{2}}t^{2}}$ and $ e^{\i\frac{a_{1}}{2b_{1}}s^{2}}f_{e}(s,t)e^{-\j\frac{a_{2}}{2b_{2}}t^{2}} $& \multirow{4}{*}{ Formula (\ref{ll1})} \\ & both have derivatives in the $ L^{1}(\mathbb{R}^{2}, \mathbb{H})$ norm of all orders $ \leq 3.$ & \\ \cline{2-2} & $ \mathcal{L}_{T}^{\i,\j}(f_{d})$ and $\mathcal{L}_{T}^{\i,-\j}(f_{e}) \in L^{1}(\mathbb{R}^{2}, \mathbb{H})$. & \\ \cline{1-3} &\multicolumn{2}{c| }{ where $f_{a}, f_{b}, f_{d}, f_{e}$ are defined by Equation (\ref{f8})} \\ \hline \end{tabular} \end{table} Further investigations on this topic will focus on applications of the QLCT to problems of color image processing. \section{Acknowledgements} The authors acknowledge financial support from the National Natural Science Funds for Young Scholars (No. 11401606, 11501015) and University of Macau No. 
MYRG2015-00058-FST, MYRG099(Y1-L2)-FST13-KKI and the Macao Science and Technology Development Fund FDCT/094/2011A, FDCT/099/2012/A3.
\section{Introduction} The large sieve originated in the work of Linnik in the early forties and became a fundamental tool in number theory. Since then, it has been studied extensively and many variants have been obtained. The case of power moduli has received particular interest due to its applications. Let $\{a_n\}$ denote a sequence of complex numbers, $M,N,k$ positive integers and let $Q$ be a real number $\geq 1$. The main goal is to obtain an estimate of the following kind \begin{equation}\label{largesieve} \sum_{q \leq Q}\sum_{a=1 \atop (a,q)=1}^{q^k} \left\vert \sum_{n=M+1}^{M+N} a_n e\left(\frac{a}{q^k}n\right)\right\vert^2 \ll \Delta \sum_{n=M+1}^{M+N} \vert a_n\vert^2\end{equation} where $e(\alpha):=\exp(2i\pi \alpha)$ for $\alpha \in \mathbb{R}$. Let us recall the general large sieve inequality. We say that a set of real numbers $\{x_k\}$ is $\delta$-spaced modulo $1$ if $\|x_k-x_j\| \geq \delta$ for all $k\neq j$ where $\|x\|$ denotes the distance of a real number $x$ to its closest integer. \begin{theorem}\cite[Theorem $2$, Chapter $27$]{Dav} Let $\{a_n\}$ be a sequence of complex numbers, $\{x_k\}$ be a set of real numbers which is $\delta$-spaced modulo $1$, and $M,N$ be integers. Then we have \begin{equation}\label{classic} \sum_{k}\left\vert \sum_{n=M+1}^{M+N} a_n e(x_k n)\right\vert^2 \ll (\delta^{-1}+N) \sum_{n=M+1}^{M+N} \vert a_n\vert^2. \end{equation} \end{theorem}As it is for instance pointed out by Zhao in \cite{Zhaoacta}, the classical large sieve inequality \eqref{classic} enables us to obtain \eqref{largesieve} with \begin{equation}\label{trivialdelta} \Delta= \min\left\{Q^{2k}+N,Q(Q^k+N)\right\}.\end{equation} Furthermore we can expect the fractions with power denominator to be regularly spaced. This led Zhao to conjecture that we can take $\Delta=Q^{\varepsilon}(Q^{k+1}+N)$ in \eqref{largesieve}. The bounds coming from \eqref{trivialdelta} imply the conjecture of Zhao in the ranges $Q \leq N^{1/2k}$ and $Q \geq N^{1/k}$. 
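As a purely illustrative sanity check (not part of the argument; the function name below is ad hoc), the bound \eqref{trivialdelta} can be verified numerically for small parameters. The asserted inequality is guaranteed by the sharp form of \eqref{classic}, since the fractions $a/q^{k}$ with $q\leq Q$, $(a,q)=1$, are $1/Q^{2k}$-spaced modulo $1$:

```python
import cmath
import random
from math import gcd, pi

def large_sieve_lhs(coeffs, k, Q, M=0):
    """Left-hand side of (1): sum over fractions a/q^k (q <= Q, gcd(a, q) = 1)
    of |sum_{n=M+1}^{M+N} a_n e(a n / q^k)|^2."""
    total = 0.0
    for q in range(1, Q + 1):
        qk = q ** k
        for a in range(1, qk + 1):
            if gcd(a, q) != 1:
                continue
            s = sum(c * cmath.exp(2j * pi * a * (M + 1 + n) / qk)
                    for n, c in enumerate(coeffs))
            total += abs(s) ** 2
    return total

random.seed(0)
k, Q, N = 2, 3, 50
coeffs = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
lhs = large_sieve_lhs(coeffs, k, Q)
norm2 = sum(abs(c) ** 2 for c in coeffs)
# Trivial bound from (3) via delta = 1/Q^{2k}: Delta = Q^{2k} + N.
assert lhs <= (Q ** (2 * k) + N) * norm2
```

With $k=2$, $Q=3$ the distinct fractions in lowest terms have denominators $1$, $4$, $9$, so their mutual spacing is at least $1/36 \geq Q^{-2k}$, and the inequality holds with room to spare.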
\\ Several authors obtained improvements of \eqref{trivialdelta} in the critical range $Q^k \leq N \leq Q^{2k}$. First, Zhao proved an inequality of type \eqref{largesieve} with \begin{equation}\label{Zhaobound} \Delta = \left[Q^{k+1}+\left(NQ^{\frac{\kappa-1}{\kappa}}+N^{1-\kappa}Q^{\frac{\kappa+k}{\kappa}}\right)N^{\varepsilon}\right] \end{equation} where $\kappa=2^{k-1}$. Baier and Zhao proved in \cite{BZhaoIJNT} that we can take \begin{equation}\label{BaierZhao} \Delta= (Q^{k+1}+N+N^{1/2+\varepsilon}Q^k) (\log\log 10 NQ)^{k+1}\end{equation} which improves \eqref{Zhaobound} in the range $N^{\frac{1}{2k}+\varepsilon} \ll Q \ll N^{\frac{(\kappa-2)}{(2(\kappa-1)\kappa-2k)}-\varepsilon}$. In particular, for $k=3$, this led to an improvement in the range $N^{1/6+\varepsilon} \ll Q \ll N^{1/5-\varepsilon}$. Using a Fourier analytic method, they obtained a further improvement for $k=3$. These results have been sharpened by Halupczok \cite{KarinIJNT} using the breakthrough work of Wooley on Vinogradov mean value conjecture \cite{Wooley}. Precisely, she proved (and further generalized to any polynomial moduli \cite{Karinquart}) that we can take \begin{equation}\label{Karinbound} \Delta= (QN)^{\epsilon} \left(Q^{k+1}+Q^{1-\delta}N+Q^{1+k\delta}N^{1-\delta}\right)\end{equation} where $\delta=1/(2k(k-1))$. It should be noticed that this improved \eqref{Zhaobound} for $k$ sufficiently large and \eqref{BaierZhao} for all $k\geq 3$ and $Q^k \leq N\leq Q^{2k-2+2\delta}$. In the meantime, the conjecture in the Vinogradov mean value theorem was proved for all degrees exceeding 3 by Bourgain, Demeter and Guth in \cite{Bourgainvino} and later, using another method, by Wooley \cite{Wooley2}. By this impressive work, the best possible exponent in Vinogradov's mean value theorem has been confirmed. Thus we can replace $\delta$ in \eqref{Karinbound} by $2\delta$, with only a few obvious changes to be made in her proof (see the discussion in \cite{Karinsurvey}). 
To conclude, \eqref{largesieve} holds with $\Delta= (QN)^{\varepsilon} A_k(Q,N)$ where \begin{equation}\label{KarinMvt}A_k(Q,N)=\left(Q^{k+1}+Q^{1-\frac{1}{k(k-1)}}N+Q^{1+\frac{1}{k-1}}N^{1-\frac{1}{k(k-1)}}\right) .\end{equation} \\ She has further refined the bound \eqref{KarinMvt} in \cite{preKarin} and obtained \begin{equation}\label{Karin2k}\Delta= Q^{\epsilon}\left(Q^{k+1}+\min\left\{A_k(Q, N), N^{1-\omega} Q^{ 1+(2k-1)\omega}\right\}\right) \end{equation} with $\omega=1/((k-1)(k-2)+2)$. \\ We propose an improvement of the existing results in some ranges of the parameters $N$ and $Q$. In order to do so, we employ an elementary method which makes use of a bound, due to Cilleruelo, Garaev, Ostafe and Shparlinski \cite{JavierIgorboxes}, on the number of points modulo an integer of polynomial equations. Our result is the following. \begin{theorem}\label{shortrange} With $\{a_n\}$, $Q,M$, and $N$ as before such that $N^{1/2k} \leq Q \leq N^{\frac{1}{k}}$, we have $$\sum_{q \leq Q}\sum_{a=1 \atop (a,q)=1}^{q^k} \left\vert \sum_{n=M+1}^{M+N} a_n e\left(\frac{a}{q^k}n\right)\right\vert^2 \ll (QN)^{\varepsilon}Q^{1+1/(k+1)}N^{1-\frac{1}{k(k+1)}} \sum_{n=M+1}^{M+N} \vert a_n\vert^2.$$ \end{theorem} \begin{remark}This improves \eqref{BaierZhao} for all $k\geq 3$ when $Q^k \leq N\ll Q^{2k-2+\frac{2(k-2)}{k^2+k-2}}$ and improves \eqref{Karin2k} when $ Q^{k+1+\frac{2}{k-1}}\leq N \leq Q^{2k-1+O(1/k^3)}$. The range where it improves all the previous bounds becomes non-empty as soon as $k\geq 4$ and covers almost the whole range except the corners. Additionally it is a step towards Conjecture $21$ raised in \cite{preKarin}. \end{remark} \begin{remark} This result can be easily generalized to any polynomial $f(q)$ of degree $k$ in a similar way as in \cite[Corollary $2.3$]{Karinquart}. For the sake of simplicity and coherence, we restricted the presentation to the case of monomials. 
\end{remark} \section{Proof of the main result} To begin with, we follow a standard path and define a subset of Farey fractions $$ \mathcal{S}(Q):=\left\{ \frac{a}{q^k}, \,\, (a,q)=1,\, 1\leq a < q^k,\, Q \leq q \leq 2Q\right\}.$$ It is easy to see that two distinct elements of $\mathcal{S}(Q)$ are at least $1/(2Q)^{2k}$-spaced. In the case of squares, Zhao \cite{Zhaoacta} introduced a quantity to measure the spacings between these Farey fractions. Similarly we define \begin{equation}\label{numberclose} M(N,Q)= \max_{x \in \mathcal{S}(Q)}\# \left\{y \in \mathcal{S}(Q): \|x-y\| <1/2N \right\}.\end{equation} As noticed in \cite{Zhaoacta}, any good estimate on this quantity leads to an inequality of type \eqref{largesieve}. We prove the following bound. \begin{lemma}\label{estimateclose} For any $\varepsilon>0$ and integers $N$ and $Q$ with $Q^k \leq N \leq Q^{2k}$, we have \begin{equation}\label{estimatefrac} M(N,Q) \ll (QN)^{\varepsilon} \left(Q^{1+1/(k+1)}N^{-\frac{1}{k(k+1)}}\right).\end{equation} \end{lemma} In order to prove it, we use a result bounding the number of solutions of polynomial equations in boxes. This can be proved in a similar way as Theorem $1$ of \cite{JavierIgorboxes} or could be deduced from the generalization of Kerr \cite[Theorem $3.1$]{Kerrboxes}. \begin{theorem}\label{boxes} Let $f$ be a polynomial of degree $k\geq 2$ with leading coefficient coprime to $m$, $1 \leq H,R \leq m$ and integers $K,L$. We denote by $N(H,R;K,L)$ the number of solutions to the congruence \begin{equation}\label{congruence} f(x) \equiv y \,\,(\bmod\, m) \end{equation} with \begin{equation}\label{constraint} (x,y) \in [K+1,K+H] \times [L+1,L+R]. \end{equation} Then, uniformly over arbitrary integers $K$ and $L$, we have for any $\varepsilon > 0$ \begin{equation}\label{JaviIgor} N(H,R;K,L) \ll H\left((R/m)^{1/j(k)+\varepsilon} + (R/H^k)^{1/2j(k)+\varepsilon}\right). 
\end{equation} \end{theorem} At the time these articles \cite{JavierIgorboxes,Kerrboxes} were written, the authors pointed out that $j(k)=k(k+1)$ was an admissible value following the work of Wooley \cite{Wooley}. As already noticed above, the resolution of the Vinogradov mean value conjecture nowadays allows us to take $j(k)=\frac{k(k+1)}{2}$. \\ \subsection{Proof of Lemma \ref{estimateclose}}Let $x=\frac{a}{q^k}$ and $y=\frac{b}{r^k}$ with $(a,q)=(b,r)=1$. We want to estimate the number of pairs $(b,r)$ with $(b,r)=1$ such that \begin{equation}\label{maj}\left\|\frac{a}{q^k}-\frac{b}{r^k}\right\|=\frac{\vert ar^k-bq^k\vert}{q^kr^k} <1/2N. \end{equation} Setting $z=ar^k-bq^k$, our problem boils down to estimating the number of pairs $(b,r)$ such that $\vert z\vert \ll Q^{2k}/N$. Equivalently, we will count the number of pairs $(r,z)$. The number of solutions is bounded above by the number of pairs with $r\sim Q$ and $\vert z\vert \ll Q^{2k}/N$ which are solutions of the congruence \begin{equation}\label{cong}ar^k\equiv z \,(\bmod\, q^k).\end{equation} Applying Theorem \ref{boxes} to the polynomial $f(x)=ax^k$ with parameters $H=Q$, $R=Q^{2k}/N$, and $m=q^k$, we deduce that the number of pairs $(r,z)$ satisfying \eqref{cong} is bounded above by $$(QN)^{\varepsilon}Q(Q^k/N)^{\frac{1}{k(k+1)}}. $$ The result follows. \subsection{Proof of Theorem \ref{shortrange}} We first split the range of $q$ into $\log Q$ dyadic intervals. Then we divide the interval $(0,1)$ into $N$ subintervals $I_j$ of size $1/N$. In each of the intervals $I_{2j}$ with even index we pick one element of $\mathcal{S}(Q)$ (we do the same for the odd indices). To avoid any problems at the edges we split the odd indices into two subsets: the first including elements of the initial interval and the second the fractions of the last interval. This gives a sequence of elements of $\mathcal{S}(Q)$ which are $1/N$-spaced. We can therefore apply \eqref{classic} with $\delta=1/N$. We repeat the procedure again. 
In order to pick all the possible fractions from $\mathcal{S}(Q)$, we have to repeat the process at most $M(N,Q)$ times and obtain $$ \sum_{q \leq Q}\sum_{a=1 \atop (a,q)=1}^{q^k} \left\vert \sum_{n=M+1}^{M+N} a_n e\left(\frac{a}{q^k}n\right)\right\vert^2 \ll M(N,Q) N \sum_{n=M+1}^{M+N} \vert a_n\vert^2.$$ Using Lemma \ref{estimateclose}, we conclude the proof. \section*{Acknowledgements} The author gratefully acknowledges comments from Karin Halupczok, specifically Remark $1.4$. The author is supported by the Austrian Science Fund (FWF) project Y-901 headed by Christoph Aistleitner.
\section{Introduction} \label{Introduction} The simplest example of an ind-Grassmannian is the infinite projective space $\mathbf P^\infty$. The Barth-Van de Ven-Tyurin (BVT) Theorem, proved more than 30 years ago \cite{BV}, \cite{T}, \cite{Sa} (see also a recent proof by A. Coand\u a and G. Trautmann, \cite{CT}), claims that any vector bundle of finite rank on $\mathbf P^\infty$ is isomorphic to a direct sum of line bundles. In the last decade natural examples of infinite flag varieties (or flag ind-varieties) have arisen as homogeneous spaces of locally linear ind-groups, \cite{DPW}, \cite{DiP}. In the present paper we concentrate our attention on the special case of ind-Grassmannians, i.e. to inductive limits of Grassmannians of growing dimension. If $V=\displaystyle\bigcup_{n>k} V^n$ is a countable-dimensional vector space, then the ind-variety $\mathbf G(k;V)=\displaystyle\lim_\to G(k;V^n)$ (or simply $\mathbf G(k;\infty)$) of $k$-dimensional subspaces of $V$ is of course an ind-Grassmannian: this is the simplest example beyond $\mathbf P^\infty=\mathbf G(1;\infty)$. A significant difference between $\mathbf G(k;V)$ and a general ind-Grassmannian $\mathbf X=\displaystyle\lim_\to G(k_i;V^{n_i})$ defined via a sequence of embeddings \begin{equation}\label{eq1} G(k_1;V^{n_1})\stackrel{\phi_1}{\longrightarrow}G(k_2;V^{n_2}) \stackrel{\phi_2}{\longrightarrow}\dots\stackrel{\phi_{m-1}}{\longrightarrow}G(k_m;V^{n_m}) \stackrel{\phi_m}{\longrightarrow}\dots, \end{equation} is that in general the morphisms $\phi_m$ can have arbitrary degrees. We say that the ind-Grassmannian $\mathbf X$ is \emph{twisted} if $\deg\phi_m>1$ for infinitely many $m$, and that $\mathbf X$ is \emph{linear} if $\deg\phi_m=1$ for almost all $m$. \begin{conjecture}\label{con1} Let the ground field be $\CC$, and let $\mathbf E$ be a vector bundle of rank $r\in\ZZ_{>0}$ on an ind-Grassmannian $\mathbf X=\displaystyle\lim_\to G(k_m;V^{n_m})$, i.e. 
$\mathbf E=\displaystyle\lim_\gets E_m$, where $\{E_m\}$ is an inverse system of vector bundles of (fixed) rank $r$ on $G(k_m;V^{n_m})$. Then \begin{itemize} \item[(i)] $\mathbf E$ is semisimple: it is isomorphic to a direct sum of simple vector bundles on $\mathbf X$, i.e. vector bundles on $\mathbf X$ with no non-trivial subbundles; \item[(ii)] for $m\gg0$ the restriction of each simple bundle $\mathbf E$ to $G(k_m,V^{n_m})$ is a homogeneous vector bundle; \item[(iii)] each simple bundle $\mathbf E'$ has rank 1 unless $\mathbf X$ is isomorphic to $\mathbf G(k;\infty)$ for some $k$: in the latter case $\mathbf E'$, twisted by a suitable line bundle, is isomorphic to a simple subbundle of the tensor algebra $T^{\cdot}(\mathbf S)$, $\mathbf S$ being the tautological bundle of rank $k$ on $\mathbf G(k;\infty)$; \item[(iv)] each simple bundle $\mathbf E$ (and thus each vector bundle of finite rank on $\mathbf X$) is trivial whenever $\mathbf X$ is a twisted ind-Grassmannian. \end{itemize} \end{conjecture} The BVT Theorem and Sato's theorem about finite rank bundles on $\mathbf G(k;\infty)$, \cite{Sa}, \cite{Sa2}, as well as the results in \cite{DP}, are particular cases of the above conjecture. The purpose of the present note is to prove Conjecture \ref{con1} for vector bundles of rank 2, and also for vector bundles of arbitrary rank $r$ on linear ind-Grassmannians $\mathbf X$. In the 70's and 80's Yuri Ivanovich Manin taught us mathematics in (and beyond) his seminar, and the theory of vector bundles was a recurring topic (among many others). In 1980, he asked one of us (I.P.) to report on A. Tyurin's paper \cite{T}, and most importantly to try to understand this paper. The present note is a very preliminary progress report. \textbf{Acknowledgement. }We acknowledge the support and hospitality of the Max Planck Institute for Mathematics in Bonn where the present note was conceived. A. S. T. also acknowledges partial support from Jacobs University Bremen. 
Finally, we thank the referee for a number of sharp comments. \section{Notation and Conventions} The ground field is $\CC$. Our notation is mostly standard: if $X$ is an algebraic variety (over $\CC$), $\mathcal{O}_X$ denotes its structure sheaf, $\Omega^1_X$ (respectively $T_X$) denotes the cotangent (resp. tangent) sheaf on $X$ under the assumption that $X$ is smooth, etc. If $F$ is a sheaf on $X$, its cohomologies are denoted by $H^i( F)$, $h^i(F):=\dim H^i(F)$, and $\chi(F)$ stands for the Euler characteristic of $F$. The Chern classes of $F$ are denoted by $c_i(F)$. If $f:X\to Y$ is a morphism, $f^*$ and $f_*$ denote respectively the inverse and direct image functors of $\mathcal{O}$-modules. All vector bundles are assumed to have finite rank. We denote the dual of a sheaf of $\mathcal O_X$-modules $F$ (or that of a vector space) by the superscript $^\vee$. Furthermore, in what follows we assume that, for any ind-Grassmannian $\mathbf X$ defined by \refeq{eq1}, no embedding $\phi_i$ is an isomorphism. We fix a finite dimensional space $V$ and denote by $X$ the Grassmannian $G(k;V)$ for $k<\dim V$. In the sequel we sometimes write $G(k;n)$, indicating simply the dimension of $V$. Below we will often consider (parts of) the following diagram of flag varieties: \begin{equation}\label{eqDiag} \xymatrix{ &&Z:=\Fl(k-1,k,k+1;V) \ar[ld]_{\pi_1} \ar[dr]^{\pi_2} & \\ &Y:=\Fl(k-1,k+1;V)\ar[ld]_{p_1}\ar[rd]^{p_2}&&X:=G(k;V), \\ Y^1:=G(k-1;V)&&Y^2:=G(k+1;V)&\\ } \end{equation} under the assumption that $k+1<\dim V$. Moreover we reserve the letters $X,Y,Z$ for the varieties in the above diagram. By $S_k$, $S_{k-1}$, $S_{k+1}$ we denote the tautological bundles on $X$,$Y$ and $Z$, whenever they are defined ($S_k$ is defined on $X$ and $Z$, $S_{k-1}$ is defined on $Y^1$, $Y$ and $Z$, etc.). 
By $\mathcal O_X(i)$, $i\in \ZZ$, we denote the isomorphism class (in the Picard group $\operatorname{Pic}\nolimits X$) of the line bundle $(\Lambda^k(S_k^\vee))^{\otimes i}$, where $\Lambda^k$ stands for the $k^{th}$ exterior power (in this case maximal exterior power as $\rk S_k^\vee=k$). The Picard group of $Y$ is isomorphic to the direct product of the Picard groups of $Y^1$ and $Y^2$, and by $\mathcal{O}_Y(i,j)$ we denote the isomorphism class of the line bundle $p_1^*(\Lambda^{k-1}(S_{k-1}^\vee))^{\otimes i} \otimes_{\mathcal{O}_Y}p_2^*(\Lambda^{k+1}(S_{k+1}^\vee))^{\otimes j}$. If $\phi:X=G(k;V)\to X':=G(k;V')$ is an embedding, then $\phi^*\mathcal{O}_{X'}(1)\simeq \mathcal{O}_X(d)$ for some $d\in\ZZ_{\geq 0}$: by definition $d$ is the \emph{degree} $\deg\phi$ of $\phi$. We say that $\phi$ is linear if $\deg\phi=1$. By a \textit{projective subspace} (in particular a \emph{line}, i.e. a 1-dimensional projective subspace) of $X$ we mean a projective space linearly embedded into $X$. It is well known that all such are Schubert varieties of the form $\{V^k\in X| V^{k-1}\subset V^k\subset V^t\}$ or $\{V^k\in X| V^i\subset V^k\subset V^{k+1}\}$, where $V^k$ is a variable $k$-dimensional subspace of $V$, and $V^{k-1}$, $V^{k+1}$, $V^t$, $V^i$ are fixed subspaces of $V$ of respective dimensions $k-1$, $k+1$, $t$, $i$. (Here and in what follows $V^t$ always denotes a vector space of dimension $t$). In other words, all projective subspaces of $X$ are of the form $G(1;V^t/V^{k-1})$ or $G(k-i, V^{k+1}/V^i)$. Note also that $Y=\Fl(k-1,k+1;V)$ is the variety of lines in $X=G(k;V)$. \section{The linear case} We consider the cases of linear and twisted ind-Grassmannians separately. In the case of a linear ind-Grassmannian, we show that Conjecture \ref{con1} is a straightforward corollary of existing results combined with the following proposition. 
We recall, \cite{DP}, that a \textit{standard extension} of Grassmannians is an embedding of the form \begin{equation}\label{eq31} G(k;V)\to G(k+a;V\oplus \hat W), \quad \{V^k\subset V\}\mapsto\{V^k\oplus W\subset V\oplus\hat W\}, \end{equation} where $W$ is a fixed $a$-dimensional subspace of a finite dimensional vector space $\hat W$. \begin{proposition}\label{linear embed} Let $\phi:X=G(k;V)\to X':=G(k';V')$ be an embedding of degree 1. Then $\phi$ is a standard extension, or $\phi$ factors through a standard extension $\mathbb{P}^r\to G(k';V')$ for some $r$. \end{proposition} \begin{proof} We assume that $k\leq n-k$, $k\leq n'-k'$, where $n=\dim V$ and $n'=\dim V'$, and use induction on $k$. For $k=1$ the statement is obvious as the image of $\phi$ is a projective subspace of $G(k';V')$ and hence $\phi$ is a standard extension. Assume that the statement is true for $k-1$. Since $\deg \phi=1$, $\phi$ induces an embedding $\phi_Y:Y\to Y'$, where $Y=\Fl(k-1,k+1;V)$ is the variety of lines in $X$ and $Y':=\Fl(k'-1,k'+1;V')$ is the variety of lines in $X'$. Moreover, clearly we have a commutative diagram of natural projections and embeddings \[ \xymatrix{ &Z\ar[rrr]^{\phi_Z}\ar[dl]_{\pi_1}\ar[dr]^{\pi_2}&&&Z'\ar[dl]_{\pi_1'}\ar[dr]^{\pi_2'}& \\ Y\ar[dr]&&X\ar[dr]&Y'&&X',\\ &\ar[r]_{\phi_Y}&\ar[ur]&\ar[r]_{\phi}&\ar[ur]& } \] where $Z:=\Fl(k-1,k,k+1;V)$ and $Z':=\Fl(k'-1,k',k'+1;V')$. We claim that there is an isomorphism \begin{equation}\label{eqLE1} \phi^*_Y\mathcal{O}_{Y'}(1,1)\simeq\mathcal{O}_Y(1,1).
\end{equation} Indeed, $\phi^*_Y\mathcal{O}_{Y'}(1,1)$ is determined up to isomorphism by its restriction to the fibers of $p_1$ and $p_2$ (see diagram \refeq{eqDiag}), and therefore it is enough to check that \begin{equation}\label{eqLE2} \phi^*_Y\mathcal{O}_{Y'}(1,1)_{|p_1^{-1}(V^{k-1})}\simeq\mathcal{O}_{p_1^{-1}(V^{k-1})}(1), \end{equation} \begin{equation}\label{eqLE21} \phi^*_Y\mathcal{O}_{Y'}(1,1)_{|p_2^{-1}(V^{k+1})}\simeq \mathcal{O}_{p_2^{-1}(V^{k+1})}(1) \end{equation} for some fixed subspaces $V^{k-1}\subset V$, $V^{k+1}\subset V$. Note that the restriction of $\phi$ to the projective subspace $G(1;V/V^{k-1})\subset X$ is simply an isomorphism of $G(1;V/V^{k-1})$ with a projective subspace of $X'$, hence the map induced by $\phi$ on the variety $G(2;V/V^{k-1})$ of projective lines in $G(1;V/V^{k-1})$ is an isomorphism with the Grassmannian of 2-dimensional subspaces of an appropriate subquotient of $V'$. Note furthermore that $p_1^{-1}(V^{k-1})$ is nothing but the variety of lines $G(2;V/V^{k-1})$ in $G(1;V/V^{k-1})$, and that the image of $G(2;V/V^{k-1})$ under this induced map is nothing but $\phi_Y(p_1^{-1}(V^{k-1}))$. This shows that the restriction of $\phi^*_Y\mathcal{O}_{Y'}(1,1)$ to $G(2;V/V^{k-1})$ is isomorphic to the restriction of $\mathcal{O}_Y(1,1)$ to $G(2;V/V^{k-1})$, and we obtain \refeq{eqLE2}. The isomorphism \refeq{eqLE21} follows from a very similar argument. Since the line bundles $\phi^*_Y\mathcal{O}_{Y'}(1,0)$ and $\phi^*_Y\mathcal{O}_{Y'}(0,1)$ are globally generated (being pullbacks of globally generated line bundles) and their tensor product is $\mathcal{O}_Y(1,1)$, the isomorphism \refeq{eqLE1} leaves us with two alternatives: \begin{equation}\label{eqLE3} \phi^*_{Y}\mathcal{O}_{Y'}(1,0)\simeq\mathcal{O}_Y \mathrm{~or~} \phi_Y^*\mathcal{O}_{Y'}(0,1)\simeq \mathcal{O}_Y, \end{equation} or \begin{equation}\label{eqLE4} \phi^*_{Y}\mathcal{O}_{Y'}(1,0)\simeq\mathcal{O}_Y(1,0) \mathrm{~or~} \phi_Y^*\mathcal{O}_{Y'}(1,0)\simeq \mathcal{O}_Y(0,1). \end{equation} Let \refeq{eqLE3} hold, more precisely let $\phi_Y^*\mathcal{O}_{Y'}(1,0)\simeq\mathcal{O}_Y$.
Then $\phi_Y$ maps each fiber of $p_2$ into a single point in $Y'$ (depending on the image in $Y^2$ of this fiber), say $({(V')}^{k'-1}\subset {(V')}^{k'+1})$, and moreover the space ${(V')}^{k'-1}$ is constant. Thus $\phi$ maps $X$ into the projective subspace $G(1;V'/{(V')}^{k'-1})$ of $X'$. If $\phi_Y^*\mathcal{O}_{Y'}(0,1)\simeq\mathcal{O}_Y$, then $\phi$ maps $X$ into the projective subspace $G(1;{(V')}^{k'+1})$ of $X'$. Therefore, the Proposition is proved in the case where \refeq{eqLE3} holds. We assume now that \refeq{eqLE4} holds. It is easy to see that \refeq{eqLE4} implies that $\phi$ induces a linear embedding $\phi_{Y^1}$ of $Y^1:=G(k-1;V)$ into $G(k'-1;V')$ or $G(k'+1;V')$. Assume that $\phi_{Y^1}:Y^1\to {(Y')}^1:=G(k'-1;V')$ (the other case is completely similar). Then, by the induction assumption, $\phi_{Y^1}$ is a standard extension or factors through a standard extension $\mathbb{P}^r\to {(Y')}^1$. If $\phi_{Y^1}$ is a standard extension corresponding to a fixed subspace $W\subset \hat W$, then $\phi_{Y^1}^* S_{k'-1}\simeq S_{k-1}\oplus \left(W\otimes_\CC\mathcal{O}_{Y^1}\right)$ and we have a vector bundle monomorphism \begin{equation}\label{eqLE5} 0\to\pi_1^*p_1^*\phi_{Y^1}^*S_{k'-1}\to \pi_2^*\phi^*S_{k'}. \end{equation} By restricting \refeq{eqLE5} to the fibers of $\pi_1$ we see that the quotient line bundle $\pi_2^*\phi^*S_{k'}/\pi_1^*p_1^*\phi_{Y^1}^*S_{k'-1}$ is isomorphic to $(S_k/S_{k-1})\otimes \pi_1^*p_1^*\mathcal{L}$, where $\mathcal{L}$ is a line bundle on $Y^1$. Applying $\pi_{2*}$ we obtain \begin{equation}\label{eqLE6} 0\to W\otimes_\CC \mathcal{O}_X\to\pi_{2*}(\pi_2^*\phi^*S_{k'})=\phi^*S_{k'}\to \pi_{2*}((S_k/S_{k-1})\otimes\pi_1^*p_1^*\mathcal{L}) \to 0. \end{equation} Since $\rk\phi^*S_{k'}=k'$ and $\dim W=k'-k$, $\rk\pi_{2*}((S_k/S_{k-1})\otimes\pi_1^*p_1^*\mathcal{L})=k$, which implies immediately that $\mathcal{L}$ is trivial.
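For the reader's convenience, here is one way to carry out the rank count just used; the following is only a sketch, based on the standard identification $\pi_2^{-1}(V^k)\simeq\mathbb{P}((V^k)^\vee)\times\mathbb{P}(V/V^k)$ of the fibers of $\pi_2$. If $\mathcal{L}\simeq\mathcal{O}_{Y^1}(e)$, then over a point $V^k\in X$ one has
\[
\big((S_k/S_{k-1})\otimes\pi_1^*p_1^*\mathcal{L}\big)_{|\pi_2^{-1}(V^k)}\simeq\mathcal{O}(1+e)\boxtimes\mathcal{O},
\]
so that $\rk\,\pi_{2*}((S_k/S_{k-1})\otimes\pi_1^*p_1^*\mathcal{L})=h^0(\mathbb{P}^{k-1},\mathcal{O}(1+e))=\binom{k+e}{k-1}$, and for $k\ge2$ this equals $k$ only when $e=0$.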
Hence \refeq{eqLE6} reduces to $0\to W\otimes_{\CC}\mathcal{O}_X\to\phi^*S_{k'}\to S_k\to 0$, and thus \begin{equation}\label{eqLE7} \phi^*S_{k'}\simeq S_k\oplus \left(W\otimes_\CC\mathcal{O}_X\right) \end{equation} as there are no non-trivial extensions of $S_k$ by a trivial bundle. Now \refeq{eqLE7} implies that $\phi$ is a standard extension. It remains to consider the case when $\phi_{Y^1}$ maps $Y^1$ into a projective subspace $\mathbb{P}^s$ of ${(Y')}^1$. Then $\mathbb{P}^s$ is of the form $G(1;V'/{(V')}^{k'-2})$ for some ${(V')}^{k'-2}\subset V'$, or of the form $G(k'-1;{(V')}^{k'})$ for some ${(V')}^{k'}\subset V'$. The second case is clearly impossible because it would imply that $\phi$ maps $X$ into the single point ${(V')}^{k'}$. Hence $\mathbb{P}^s=G(1;V'/{(V')}^{k'-2})$ and $\phi$ maps $X$ into the Grassmannian $G(2;V'/{(V')}^{k'-2})$ in $G(k';V')$. Let $S_2'$ be the rank 2 tautological bundle on $G(2;V'/{(V')}^{k'-2})$. Then its restriction $S'':=\phi^*S_2'$ to any line $l$ in $X$ is isomorphic to $\mathcal{O}_{l}\oplus\mathcal{O}_{l}(-1)$, and we claim that this implies one of the two alternatives: \begin{equation}\label{eqLE8} S''\simeq\mathcal{O}_X\oplus\mathcal{O}_X(-1) \end{equation} or \begin{equation}\label{eqLE9} S''\simeq S_2 \text{~and~} k=2,\text{~or~} S''\simeq(V\otimes_\CC \mathcal{O}_X)/S_2\text{~and~}k=n-k=2. \end{equation} Let $k\geq 2$. The evaluation map $\pi_1^*\pi_{1*}\pi_2^*S''\to \pi_2^*S''$ is a monomorphism of the line bundle $ \pi_1^*\mathcal{L}:=\pi_1^*\pi_{1*}\pi_2^*S''$ into $\pi_2^*S''$ (here $\mathcal{L}:=\pi_{1*}\pi_2^*S''$). Restricting this monomorphism to the fibers of $\pi_2$ we see immediately that $\pi_1^*\mathcal{L}$ is trivial when restricted to those fibers and is hence trivial. Therefore $\mathcal{L}$ is trivial, i.e. $\pi_1^*\mathcal{L}=\mathcal{O}_Z$. 
Push-down to $X$ yields \begin{equation}\label{eqLE10} 0\to\mathcal{O}_X\to S''\to\mathcal{O}_X(-1)\to 0, \end{equation} and hence \refeq{eqLE10} splits as $\Ext^1(\mathcal{O}_X(-1),\mathcal{O}_X)=0$. Therefore \refeq{eqLE8} holds. For $k=2$, there is an additional possibility for the above monomorphisms to be of the form $\pi_1^*\mathcal{O}_Y(-1,0)\to\pi_2^*S''$ (or of the form $\pi_1^*\mathcal{O}_Y(0,-1)\to\pi_2^*S''$ if $n-k=2$), which yields the option \refeq{eqLE9}. If \refeq{eqLE8} holds, $\phi$ maps $X$ into an appropriate projective subspace of $G(2;V'/{(V')}^{k'-2})$ which is then a projective subspace of $X'$, and if \refeq{eqLE9} holds, $\phi$ is a standard extension corresponding to a zero-dimensional space $W$. The proof is now complete. \end{proof} We are ready now to prove the following theorem. \begin{theorem} Conjecture \ref{con1} holds for any linear ind-Grassmannian $\mathbf X$. \end{theorem} \begin{proof} Assume that $\deg \phi_m=1$ for all $m$, and apply Proposition \ref{linear embed}. If infinitely many $\phi_m$'s factor through respective projective subspaces, then $\mathbf X$ is isomorphic to $\mathbf P^\infty$ and the BVT Theorem implies Conjecture \ref{con1}. Otherwise, all $\phi_m$'s are standard extensions of the form \refeq{eq31}. There are two alternatives: $\displaystyle\lim_{m\to\infty} k_{m}=\lim_{m\to\infty}(n_{m}-k_{m})=\infty$, or one of the limits $\displaystyle\lim_{m\to \infty}k_{m}$ or $\displaystyle\lim_{m\to \infty}(n_{m}-k_{m})$ equals $l$ for some $l\in \NN$. In the first case the claim of Conjecture \ref{con1} is proved in \cite[Theorem 4.2]{DP}. In the second case $\mathbf X$ is isomorphic to $\mathbf G(l;\infty)$, and therefore Conjecture \ref{con1} is proved in this case by E. Sato in \cite{Sa2}.
\end{proof} \section{Auxiliary results} In order to prove Conjecture \ref{con1} for a rank 2 bundle $\mathbf E=\displaystyle\lim_{\gets}E_m$ on a twisted ind-Grassmannian $\mathbf X=\displaystyle \lim_\to G(k_m;V^{n_m})$, we need to show that $\mathbf E$ is trivial, i.e. that each $E_m$ is a trivial bundle on $G(k_m;V^{n_m})$. From this point on we assume that none of the Grassmannians $G(k_m;V^{n_m})$ is a projective space, as for a twisted projective ind-space Conjecture \ref{con1} is proved in \cite{DP} for bundles of arbitrary rank $r$. The following known proposition gives a useful triviality criterion for vector bundles of arbitrary rank on Grassmannians. \begin{prop}\label{prop31} A vector bundle $E$ on $X=G(k;n)$ is trivial iff its restriction $E_{|l}$ is trivial for every line $l$ in $G(k;n)$, $l\in Y=\Fl(k-1,k+1;n)$. \end{prop} \begin{proof} We recall the proof given in \cite{P}. It uses the well known fact that the Proposition holds for any projective space, see \cite[Ch.I, Theorem 3.2.1]{OSS}. Let first $k=2$, $n=4$, i.e. $X=G(2;4)$. Since $E$ is linearly trivial, $\pi_2^*E$ is trivial along the fibers of $\pi_1$ (we refer here to diagram \refeq{eqDiag}). Moreover, $\pi_{1*}\pi_2^*E$ is trivial along the images of the fibers of $\pi_2$ in $Y$. These images are of the form $\mathbb{P}_1^1\times\mathbb{P}_2^1$, where $\mathbb{P}_1^1$ (respectively, $\mathbb{P}_2^1$) is a line in $Y^1:=G(1;4)$ (respectively, in $Y^2:=G(3;4)$). The fiber of $p_1$ is filled by lines of the form $\mathbb{P}^1_2$, and thus $\pi_{1*}\pi_2^*E$ is linearly trivial, and hence trivial along the fibers of $p_1$. Finally the lines of the form $\mathbb{P}_1^1$ fill $Y^1$, hence ${p_1}_*\pi_{1*}\pi_2^*E$ is also a trivial bundle. This implies that $E=\pi_{2*}\pi_1^*p_1^*(p_{1*}\pi_{1*}\pi_2^*E)$ is also trivial. The next case is the case when $k=2$ and $n$ is arbitrary, $n\geq 5$.
Then the above argument goes through by induction on $n$ since the fiber of $p_1$ is isomorphic to $G(2;n-1)$. The proof is completed by induction on $k$ for $k\geq 3$: the base of $p_1$ is $G(k-1;n)$ and the fiber of $p_1$ is $G(2;n-k+1)$. \end{proof} If $C\subset N$ is a smooth rational curve in an algebraic variety $N$ and $E$ is a vector bundle on $N$, then by a classical theorem of Grothendieck, $\displaystyle E_{|C}$ is isomorphic to $\bigoplus_i\mathcal{O}_C(d_i)$ for some $d_1\geq d_2\geq\dots\geq d_{\rk E}$. We call the ordered $\rk E$-tuple $(d_1,\dots,d_{\rk E})$ \emph{the splitting type} of $E_{|C}$ and denote it by $\mathbf{d}_E(C)$. If $N=X=G(k;n)$, then the lines on $N$ are parametrized by points $l\in Y$, and we obtain a map \[ Y\to \ZZ^{\rk E}\ :\ l\mapsto \mathbf{d}_E(l). \] By semicontinuity (cf. \cite[Ch.I, Lemma 3.2.2]{OSS}), there is a dense open set $U_E\subset Y$ of lines with minimal splitting type with respect to the lexicographical ordering on $\ZZ^{\rk E}$. Denote this minimal splitting type by $\mathbf{d}_E$. By definition, $U_E=\{l\in Y|~ \mathbf{d}_E(l)=\mathbf{d}_E\}$ is the set of \emph{non-jumping} lines of $E$, and its complement $Y\setminus U_E$ is the proper closed set of \emph{jumping} lines. A coherent sheaf $F$ over a smooth irreducible variety $N$ is called \emph{normal} if for every open set $U\subset N$ and every closed algebraic subset $A\subset U$ of codimension at least 2 the restriction map ${F}(U)\to {F}(U\smallsetminus A)$ is surjective. It is well known that, since $N$ is smooth, hence normal, a normal torsion-free sheaf $F$ on $N$ is reflexive, i.e. $F^{\vee\vee}=F$. Therefore, by \cite[Ch.II, Theorem 2.1.4]{OSS}, such a sheaf $F$ of rank 1 is necessarily a line bundle (see \cite[Ch.II, 1.1.12 and 1.1.15]{OSS}). \begin{theorem}\label{thSubbdl} Let $E$ be a rank $r$ vector bundle of splitting type $\mathbf{d}_E=(d_1,...,d_r),\ d_1\ge...\ge d_r,$ on $X=G(k;n)$.
If $d_s-d_{s+1}\ge2$ for some $s<r$, then there is a normal subsheaf $F\subset E$ of rank $s$ with the following properties: over the open set $\pi_2(\pi_1^{-1}(U_E))\subset X$ the sheaf $F$ is a subbundle of $E$, and for any $l\in U_E$ $$ F_{|l}\simeq\overset{s}{\underset{i=1}\bigoplus}\mathcal{O}_{l}(d_i). $$ \end{theorem} \begin{proof} It is similar to the proof of \cite[Ch.II, Theorem 2.1.4]{OSS}. Consider the vector bundle $E'=E\bigotimes\mathcal{O}_X(-d_s)$ and the evaluation map $\Phi:\pi_1^*\pi_{1*}\pi_2^*E'\to \pi_2^*E'$. The definition of $U_E$ implies that $\Phi_{|\pi_1^{-1}(U_E)}$ is a morphism of constant rank $s$ and that its image ${\rm im}\,\Phi\subset \pi_2^*E'$ is a subbundle of rank $s$ over $\pi_1^{-1}(U_E)$. Let $M:=\pi_2^*E'/{\rm im}\,\Phi$, let $T(M)$ be the torsion subsheaf of $M$, and $F':=\ker(\pi_2^*E'\to M':=M/T(M))$. Consider the singular set $\operatorname{Sing}\nolimits F'$ of the sheaf $F'$ and set $A:=Z\smallsetminus\operatorname{Sing}\nolimits F'$. By the above, $A$ is an open subset of $Z$ containing $\pi_1^{-1}(U_E)$, and $f={\pi_2}_{|A}:A\to B:=\pi_2(A)$ is a submersion with connected fibers. Next, take any point $l\in Y$ and put $ L:=\pi_1^{-1}(l)$. By definition, $L\simeq\mathbb{P}^1$, and we have \begin{equation}\label{tangent} {T_{Z/X}}_{|L}\simeq\mathcal{O}_{L}(-1)^{\oplus(n-2)}, \end{equation} where $T_{Z/X}$ is the relative tangent bundle of $Z$ over $X$. The construction of the sheaves $F'$ and $M$ implies that for any $l\in U_E$: ${F'}^{\vee}_{|{L}}=\oplus_{i=1}^s\mathcal{O}_{L}(-d_i+d_s),\ \ {M'}_{|{L}} =\oplus_{i=s+1}^r\mathcal{O}_{L}(d_i-d_s)$. This, together with (\ref{tangent}) and the condition $d_s-d_{s+1}\ge2,$ immediately implies that $H^0(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'_{|{L}})=0$.
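For the reader's convenience, we spell out this vanishing; it is a routine check using the splittings just described. By (\ref{tangent}), ${\Omega^1_{A/B}}_{|L}\simeq\mathcal{O}_{L}(1)^{\oplus(n-2)}$, whence
\[
\left(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'\right)_{|L}\simeq
\bigoplus_{1\le i\le s<j\le r}\mathcal{O}_{L}(1+d_j-d_i)^{\oplus(n-2)},
\]
and $1+d_j-d_i\le 1+d_{s+1}-d_s\le-1$ for all such $i,j$, so each summand is a negative line bundle on $L\simeq\mathbb{P}^1$ and has no nonzero sections.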
Hence $H^0(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'_{|\pi_1^{-1}(U_E)})=0$, and thus, since $\pi_1^{-1}(U_E)$ is dense open in $Z$, $\Hom(T_{A/B},\mathcal H om(F',M'_{|A}))= H^0(\Omega^1_{A/B}\otimes{F'}^{\vee}\otimes M'_{|A})=0.$ Now we apply the Descent Lemma (see \cite[Ch.II, Lemma 2.1.3]{OSS}) to the data $(f_{|\pi_1^{-1}(U_E)}:\pi_1^{-1}(U_E)\to \pi_2(\pi_1^{-1}(U_E)),\ F'_{|\pi_1^{-1}(U_E)} \subset E'_{|\pi_1^{-1}(U_E)})$. Then $F:=(\pi_{2*}F')\otimes\mathcal{O}_X(-d_s)$ is the desired sheaf. \end{proof} \section{The case $\rk\mathbf{E}=2$} In what follows, when considering a twisted ind-Grassmannian $\mathbf X=\displaystyle\lim_\to G(k_m;V^{n_m})$ we set $G(k_m;V^{n_m})=X_m$. \refth{thSubbdl} now yields the following corollary. \begin{corollary}\label{d=(0,0)} Let $\displaystyle\mathbf{E}=\lim_{\gets}E_m$ be a rank 2 vector bundle on a twisted ind-Grassmannian $\displaystyle\mathbf{X}=\lim_{\to}X_m$. Then there exists $m_0\ge1$ such that $\mathbf{d}_{E_m}=(0,0)$ for any $m\ge m_0.$ \end{corollary} \begin{proof} Note first that the fact that $\mathbf X$ is twisted implies \begin{equation}\label{c_1=0} c_1(E_m)=0,\ m\ge1. \end{equation} Indeed, $c_1(E_m)$ is nothing but the integer corresponding to the line bundle $\Lambda^2(E_m)$ in the identification of $\operatorname{Pic}\nolimits X_m$ with $\ZZ$. As $\mathbf X$ is twisted, $c_1(E_m)=\deg\phi_m\deg\phi_{m+1}\dots\deg\phi_{m+k}c_1(E_{m+k+1})$ for any $k\geq 1$; in other words, $c_1(E_m)$ is divisible by larger and larger integers, and hence $c_1(E_m)=0$ (cf. \cite[Lemma 3.2]{DP}). Suppose that for any $m_0\ge1$ there exists $m\ge m_0$ such that $\mathbf{d}_{E_m}=(a_m,-a_m)$ with $a_m>0$. Then Theorem \ref{thSubbdl} applies to $E_m$ with $s=1$, and hence $E_m$ has a normal rank-1 subsheaf $F_m$ such that \begin{equation}\label{F|l} F_{m|l}\simeq\mathcal{O}_{l}(a_m) \end{equation} for a certain line $l$ in $X_m$. Since $F_m$ is a torsion-free normal subsheaf of rank 1 of the vector bundle $E_m$, the sheaf $F_m$ is a line bundle, i.e.
$F_m\simeq\mathcal{O}_{X_m}(a_m)$. Therefore we have a monomorphism: \begin{equation}\label{injectn} 0\to\mathcal{O}_{X_m}(a_m)\to E_m,\ \ \ a_m\ge1. \end{equation} This is clearly impossible. In fact, in view of (\ref{c_1=0}) this monomorphism implies that any rational curve $C\subset X_m$ of degree $\delta_m:=\deg\phi_1\cdot...\cdot\deg\phi_{m-1}$ has splitting type $\mathbf{d}_{E_m}(C)=(a'_m,-a'_m)$, where $a'_m\ge a_m\delta_m\ge\delta_m$. Hence, by semicontinuity, any line $l$ in $X_1$ has splitting type $\mathbf{d}_{E_1}(l)=(b,-b),\ \ b\ge\delta_m$. Since $\delta_m\to\infty$ as $m\to\infty,$ this is a contradiction. \end{proof} We now recall some standard facts about the Chow rings of $X_m=G(k_m;V^{n_m})$ (see, e.g., \cite[14.7]{F}): \begin{itemize} \item[(i)] $A^1(X_m)=\operatorname{Pic}\nolimits(X_m)=\mathbb{Z}[\mathbb{V}_m]$, $A^2(X_m)=\mathbb{Z}[\mathbb{W}_{1,m}]\oplus\mathbb{Z}[\mathbb{W}_{2,m}]$, where $\mathbb{V}_m,\mathbb{W}_{1,m},\mathbb{W}_{2,m}$ are the following Schubert varieties: $\mathbb{V}_m:=\{V^{k_m}\in X_m|\ \dim(V^{k_m}\cap V_0^{n_m-k_m})\ge1$ for a fixed subspace $V_0^{n_m-k_m}$ of $V^{n_m}\}$, $\mathbb{W}_{1,m}:=\{V^{k_m}\in X_m| $ $\dim (V^{k_m}\cap V_0^{n_m-k_m-1})\ge1$ for a fixed subspace $V_0^{n_m-k_m-1}$ in $V^{n_m}\}$, $\mathbb{W}_{2,m}:=\{{V}^{k_m}\in X_m|\ \dim({V}^{k_m}\cap V_0^{n_m-k_m+1})\ge2$ for a fixed subspace $V_0^{n_m-k_m+1}$ of $V^{n_m}\}$; \item[(ii)] $[\mathbb{V}_m]^2=[\mathbb{W}_{1,m}]+[\mathbb{W}_{2,m}]$ in $A^2(X_m)$; \item[(iii)] $A_2(X_m)=\mathbb{Z}[\mathbb{P}^2_{1,m}]\oplus\mathbb{Z}[\mathbb{P}^2_{2,m}]$, where the projective planes $\mathbb{P}^2_{1,m}$ (called \emph{$\alpha$-planes}) and $\mathbb{P}^2_{2,m}$ (called \emph{$\beta$-planes}) are respectively the Schubert varieties $\mathbb{P}^2_{1,m}:=\{V^{k_m}\in X_m|\ V_0^{k_m-1}\subset {V}^{k_m}\subset V_0^{k_m+2}$ for a fixed flag $V_0^{k_m-1}\subset V_0^{k_m+2}$ in $V^{n_m}\}$, $\mathbb{P}^2_{2,m}:=\{V^{k_m}\in X_m|\ V_0^{k_m-2}\subset
{V}^{k_m}\subset V_0^{k_m+1}$ for a fixed flag $V_0^{k_m-2}\subset V_0^{k_m+1}$ in $V^{n_m}\};$ \item[(iv)] the bases $[\mathbb{W}_{i,m}]$ and $[\mathbb{P}^2_{j,m}]$ are dual in the standard sense that $[\mathbb{W}_{i,m}]\cdot[\mathbb{P}^2_{j,m}]=\delta_{i,j}.$ \end{itemize} \begin{lemma}\label{c_2(E_m)=0} There exists $m_1\in\ZZ_{>0}$ such that for any $m\ge m_1$ one of the following holds: \begin{itemize} \item[(1)] $c_2({E_m}_{|\mathbb{P}^2_{1,m}})>0,$ $c_2({E_m}_{|\mathbb{P}^2_{2,m}})\le0$, \item[(2)] $c_2({E_m}_{|\mathbb{P}^2_{2,m}})>0,$ $c_2({E_m}_{|\mathbb{P}^2_{1,m}})\le0$, \item[(3)] $c_2({E_m}_{|\mathbb{P}^2_{1,m}})=0$, $c_2({E_m}_{|\mathbb{P}^2_{2,m}})=0$. \end{itemize} \end{lemma} \begin{proof} According to (i), for any $m\ge1$ there exist $\lambda_{1m},\lambda_{2m}\in\ZZ$ such that \begin{equation}\label{c_2(E_m)} c_2(E_m)=\lambda_{1m}[\mathbb{W}_{1,m}]+\lambda_{2m}[\mathbb{W}_{2,m}]. \end{equation} Moreover, (iv) implies \begin{equation}\label{lambda_jm} \lambda_{jm}=c_2({E_m}_{|\mathbb{P}^2_{j,m}}),\ \ j=1,2. \end{equation} Next, (i) yields: \begin{equation}\label{abcd} \phi_m^*[\mathbb{W}_{1,m+1}]=a_{11}(m)[\mathbb{W}_{1,m}]+a_{21}(m)[\mathbb{W}_{2,m}],\ \ \phi_m^*[\mathbb{W}_{2,m+1}]=a_{12}(m)[\mathbb{W}_{1,m}]+a_{22}(m)[\mathbb{W}_{2,m}], \end{equation} where $a_{ij}(m)\in\mathbb{Z}$. Consider the $2\times2$-matrix $A(m)=(a_{ij}(m))$ and the column vector $\Lambda_m=(\lambda_{1m},\lambda_{2m})^t.$ Then, in view of (iv), the relation (\ref{abcd}) gives: $\Lambda_m=A(m)\Lambda_{m+1}$. Iterating this equation and denoting by $A(m,i)$ the $2\times2$-matrix $A(m)\cdot A(m+1)\cdot...\cdot A(m+i),\ i\ge1,$ we obtain \begin{equation}\label{Lambda_m} \Lambda_m=A(m,i)\Lambda_{m+i+1}. \end{equation} The twisting condition $\phi_m^*[\mathbb{V}_{m+1}]=\deg\phi_m[\mathbb{V}_{m}]$ together with (ii) implies: $\phi_m^*([\mathbb{W}_{1,m+1}]+[\mathbb{W}_{2,m+1}])=(\deg\phi_m)^2([\mathbb{W}_{1,m}]+[\mathbb{W}_{2,m}])$. 
Substituting (\ref{abcd}) into the last equality, we have: $a_{11}(m)+a_{12}(m)=a_{21}(m)+a_{22}(m)=(\deg\phi_m)^2,\ \ \ m\ge1.$ This means that the column vector ${v}=(1,1)^t$ is an eigenvector of $A(m)$ with eigenvalue $(\deg\phi_m)^2$. Hence, it is an eigenvector of $A(m,i)$ with the eigenvalue $d_{m,i}=(\deg\phi_m)^2(\deg\phi_{m+1})^2...(\deg\phi_{m+i})^2:$ \begin{equation}\label{eigen} A(m,i){v}=d_{m,i}{v}. \end{equation} Notice that the entries of $A(m),\ m\ge1,$ are nonnegative integers: in fact, from the definition of the Schubert varieties $\mathbb{W}_{j,m+1}$ it immediately follows that $\phi_m^*[\mathbb{W}_{j,m+1}]$ is an effective cycle on $X_m$, so that (\ref{abcd}) and (iv) give $0\le\phi_m^*[\mathbb{W}_{i,m+1}]\cdot[\mathbb{P}^2_{j,m}]=a_{ij}(m)$; hence also the entries of $A(m,i),\ m,i\ge1,$ are nonnegative integers. Besides, clearly $d_{m,i}\to\infty$ as $i\to\infty$ for any $m\ge1$. This, together with (\ref{Lambda_m}) and (\ref{eigen}), implies that, for $m\gg1$, $\lambda_{1m}$ and $\lambda_{2m}$ cannot both be nonzero and have the same sign. Indeed, if both entries of $\Lambda_{m'}$ were positive for infinitely many $m'$, then, the entries of $A(1,m'-2)$ being nonnegative, the componentwise inequality $\Lambda_{m'}\ge v$ together with (\ref{Lambda_m}) and (\ref{eigen}) would give $\Lambda_1=A(1,m'-2)\Lambda_{m'}\ge d_{1,m'-2}v$, which is impossible for the fixed vector $\Lambda_1$ as $d_{1,i}\to\infty$; the case of two negative entries is completely similar. This together with (\ref{lambda_jm}) is equivalent to the statement of the Lemma. \end{proof} In what follows we denote the $\alpha$-planes and the $\beta$-planes on $X=G(2;4)$ respectively by $\mathbb{P}_\alpha^2$ and $\mathbb{P}_\beta^2$. \begin{proposition}\label{not exist} There exists no rank 2 vector bundle $E$ on the Grassmannian $X=G(2;4)$ such that: \begin{itemize} \item[(a)] $c_2(E)=a[\mathbb{P}^2_{\alpha}],\ \ a>0,$ \item[(b)] $E_{|\mathbb{P}^2_{\beta}}$ is trivial for a generic $\beta$-plane $\mathbb{P}^2_{\beta}$ on $X$. \end{itemize} \end{proposition} \begin{proof} Assume that there exists a vector bundle $E$ on $X$ satisfying the conditions (a) and (b) of the Proposition. Fix a $\beta$-plane $P\subset X$ such that \begin{equation}\label{E|Y} E_{|P}\simeq\mathcal{O}_{P}^{\oplus2}.
\end{equation} As $X$ is the Grassmannian of lines in $\mathbb{P}^3$, the plane $P$ is the dual plane of a certain plane $\tilde P$ in $\mathbb{P}^3$. Next, fix a point $x_0\in\mathbb{P}^3\smallsetminus\tilde P$ and denote by $S$ the variety of lines in $\mathbb{P}^3$ which contain $x_0$. Consider the variety $Q=\{(x,l)\in\mathbb{P}^3\times X\ |\ x\in l\cap\tilde P\}$ with natural projections $p:Q\to S:(x,l)\mapsto\Span(x,x_0)$ and $\sigma:Q\to X:(x,l)\mapsto l$. Clearly, $\sigma$ is the blowing up of $X$ at the plane $P$, and the exceptional divisor $D_P=\sigma^{-1}(P)$ is isomorphic to the incidence subvariety of $P\times\tilde{P}$. Moreover, one easily checks that $Q\simeq\mathbb{P}(\mathcal{O}_{S}(1)\oplus T_{S}(-1))$, so that the projection $p:Q\to S$ coincides with the structure morphism $\mathbb{P}(\mathcal{O}_{S}(1)\oplus T_{S}(-1))\to S$. Let $\mathcal{O}_Q(1)$ be the Grothendieck line bundle on $Q$ such that $p_*\mathcal{O}_Q(1)=\mathcal{O}_{S}(1)\oplus T_{S}(-1)$. Using the Euler exact triple on $Q$ \begin{equation}\label{Euler} 0\to\Omega^1_{Q/S}\to p^*(\mathcal{O}_{S}(1)\oplus T_{S}(-1)) \otimes\mathcal{O}_Q(-1)\to\mathcal{O}_Q\to 0, \end{equation} we find the $p$-relative dualizing sheaf $\omega_{Q/S}:=\det(\Omega^1_{Q/S})$: \begin{equation}\label{rel dual} \omega_{Q/S}\simeq\mathcal{O}_Q(-3)\otimes p^*\mathcal{O}_{S}(2). \end{equation} Set $\mathcal{E}:=\sigma^*E$. By construction, for each $y\in S$ the fiber $Q_y=p^{-1}(y)$ is a plane such that $l_y=Q_y\cap D_P$ is a line, and, by (\ref{E|Y}), \begin{equation}\label{triv on l} \mathcal{E}_{|l_y}\simeq\mathcal{O}_{l_y}^{\oplus2}. \end{equation} Furthermore, $\sigma(Q_y)$ is an $\alpha$-plane in $X$, and from (\ref{triv on l}) it follows clearly that $h^0(\mathcal{E}_{|Q_y}(-1))=h^0(\mathcal{E}^\vee_{|Q_y}(-1))=0$.
Hence, in view of condition (a) of the Proposition, the sheaf $\mathcal{E}_{|Q_y}$ is the cohomology sheaf of a monad \begin{equation}\label{eqMonad} 0\to\mathcal{O}_{Q_y}(-1)^{\oplus a}\to\mathcal{O}_{Q_y}^{\oplus(2a+2)}\to \mathcal{O}_{Q_y}(1)^{\oplus a}\to0 \end{equation} (see \cite[Ch. II, Ex. 3.2.3]{OSS}). This monad immediately implies the equalities \begin{equation}\label{cohomology} h^1(\mathcal{E}_{|Q_y}(-1))=h^1(\mathcal{E}_{|Q_y}(-2))=a,\ \ h^1(\mathcal{E}_{|Q_y}\otimes\Omega^1_{Q_y})=2a+2, \end{equation} $$ h^i(\mathcal{E}_{|Q_y}(-1))=h^i(\mathcal{E}_{|Q_y}(-2))= h^i(\mathcal{E}_{|Q_y}\otimes\Omega^1_{Q_y})=0,\ \ i\ne1. $$ Consider the sheaves of $\mathcal{O}_{S}$-modules \begin{equation}\label{E_i} E_{-1}:=R^1p_*(\mathcal{E}\otimes\mathcal{O}_Q(-2)\otimes p^*\mathcal{O}_{S}(2)),\ \ \ E_0:=R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S}), \ \ \ E_1:=R^1p_*(\mathcal{E}\otimes\mathcal{O}_Q(-1)). \end{equation} The equalities (\ref{cohomology}) together with Cohomology and Base Change imply that $E_{-1},\ E_1$ and $E_0$ are locally free $\mathcal{O}_{S}$-modules with $\rk(E_{-1})=\rk(E_1)=a$ and $\rk(E_0)=2a+2$. Moreover, \begin{equation}\label{R_i} R^ip_*(\mathcal{E}\otimes\mathcal{O}_Q(-2))= R^ip_*(\mathcal{E}\otimes\Omega^1_{Q/S})=R^ip_*(\mathcal{E}\otimes\mathcal{O}_Q(-1))=0 \end{equation} for $i\ne 1$. Note that $\mathcal{E}^\vee\simeq\mathcal{E}$ as $c_1(\mathcal{E})=0$ and $\rk\mathcal{E}=2$. Furthermore, (\ref{rel dual}) implies that the nondegenerate pairing ($p$-relative Serre duality) $R^1p_*(\mathcal{E}\otimes\mathcal{O}_Q(-1))\otimes R^1p_*(\mathcal{E}^\vee\otimes\mathcal{O}_Q(1)\otimes \omega_{Q/S})\to R^2p_*\omega_{Q/S}=\mathcal{O}_{S}$ can be rewritten as $E_1\otimes E_{-1}\to\mathcal{O}_{S}, $ thus giving an isomorphism \begin{equation}\label{isom dual} E_{-1}\simeq E_1^\vee.
\end{equation} Similarly, since $\mathcal{E}^\vee\simeq\mathcal{E}$ and $\Omega^1_{Q/S}\simeq T_{Q/S}\otimes\omega_{Q/S}$, $p$-relative Serre duality yields a nondegenerate pairing $E_0\otimes E_0=R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S})\otimes R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S})= R^1p_*(\mathcal{E}\otimes\Omega^1_{Q/S})\otimes R^1p_*(\mathcal{E}^\vee\otimes T_{Q/S}\otimes\omega_{Q/S}) \to R^2p_*\omega_{Q/S}=\mathcal{O}_{S}$. Therefore $E_0$ is self-dual, i.e. $E_0\simeq E_0^\vee$, and in particular $c_1(E_0)=0$. Now, let $J$ denote the fiber product $Q\times_{S}Q$ with projections $Q\overset{pr_1}\leftarrow J\overset{pr_2}\to Q$ such that $p\circ pr_1=p\circ pr_2$. Put $F_1\boxtimes F_2:=pr_1^*F_1\otimes pr_2^*F_2$ for sheaves $F_1$ and $F_2$ on $Q$, and consider the standard $\mathcal{O}_J$-resolution of the structure sheaf $\mathcal{O}_{\Delta}$ of the diagonal $\Delta\hookrightarrow J$ \begin{equation}\label{resoln of diag} 0\to\mathcal{O}_Q(-1)\otimes p^*\mathcal{O}_{S}(2)\boxtimes\mathcal{O}_Q(-2)\to {\Omega^1}_{Q/S}(1)\boxtimes\mathcal{O}_Q(-1)\to\mathcal{O}_J\to \mathcal{O}_{\Delta}\to0. \end{equation} Twist this sequence by the sheaf $(\mathcal{E}\otimes\mathcal{O}_Q(-1))\boxtimes\mathcal{O}_Q(1)$ and apply the functor $R^ipr_{2*}$ to the resulting sequence. In view of (\ref{E_i}) and (\ref{R_i}) we obtain the following monad for $\mathcal{E}$: \begin{equation}\label{monad1} 0\to p^*E_{-1}\otimes\mathcal{O}_Q(-1)\overset{\lambda}\to p^*E_0\overset{\mu} \to p^*E_1\otimes\mathcal{O}_Q(1)\to0,\ \ \ \ \ \ker(\mu)/{\rm im}(\lambda)=\mathcal{E}. \end{equation} Put $R:=p^*h$, where $h$ is the class of a line in $S$. Furthermore, set $H:=\sigma^*H_X$, $[\mathbb{P}_\alpha]:=\sigma^*[\mathbb{P}^2_\alpha]$, $[\mathbb{P}_\beta]:=\sigma^*[\mathbb{P}^2_\beta]$, where $H_X$ is the class of a hyperplane section of $X$ (via the Pl\"ucker embedding), and respectively, $[\mathbb{P}^2_\alpha]$ and $[\mathbb{P}^2_\beta]$ are the classes of an $\alpha$- and $\beta$-plane. 
Note that, clearly, $\mathcal{O}_Q(H)\simeq\mathcal{O}_Q(1)$. Thus, taking into account the duality (\ref{isom dual}), we rewrite the monad (\ref{monad1}) as \begin{equation}\label{monad2} 0\to p^*E_1^\vee\otimes\mathcal{O}_Q(-H)\overset{\lambda}\to p^*E_0\overset{\mu}\to p^*E_1\otimes\mathcal{O}_Q(H)\to0,\ \ \ \ \ \ \ker(\mu)/{\rm im}(\lambda)\simeq\mathcal{E}. \end{equation} In particular, it becomes clear that \refeq{monad1} is a relative version of the monad \refeq{eqMonad}. As a next step, we are going to express all Chern classes of the sheaves in (\ref{monad2}) in terms of $a$. We start by writing down the Chern polynomials of the bundles $p^*E_1\otimes\mathcal{O}_Q(H)$ and $p^*E_1^\vee\otimes\mathcal{O}_Q(-H)$ in the form \begin{equation}\label{Chern1} c_t(p^*E_1\otimes\mathcal{O}_Q(H))=\prod_{i=1}^a(1+(\delta_i+H)t),\ \ \ c_t(p^*E_1^\vee\otimes\mathcal{O}_Q(-H))=\prod_{i=1}^a(1-(\delta_i+H)t), \end{equation} where $\delta_i$ are the Chern roots of the bundle $p^*E_1$. Thus \begin{equation}\label{c,d} cR^2=\sum_{i=1}^a\delta_i^2,\ \ dR=\sum_{i=1}^a\delta_i \end{equation} for some $c,d\in\mathbb{Z}$. Next we invoke the following easily verified relations in $A^\cdot(Q)$: \begin{equation}\label{rel in A(Q)} H^4=RH^3=2[pt],\ \ \ R^2H^2=R^2[\mathbb{P}_\alpha]= RH[\mathbb{P}_\alpha]=H^2[\mathbb{P}_\alpha]=RH[\mathbb{P}_\beta]=H^2[\mathbb{P}_\beta]=[pt], \end{equation} $$ [\mathbb{P}_\alpha][\mathbb{P}_\beta]=R^2[\mathbb{P}_\beta]=R^4=R^3H=0, $$ where $[pt]$ is the class of a point. This, together with (\ref{c,d}), gives \begin{equation}\label{sums} \sum_{1\le i<j\le a}\delta_i^2\delta_j^2= \sum_{1\le i<j\le a}(\delta_i^2\delta_j+\delta_i\delta_j^2)H=0,\ \ \sum_{1\le i<j\le a}\delta_i\delta_jH^2=\frac{1}{2}(d^2-c)[pt],\ \ \sum_{1\le i<j\le a}(\delta_i+\delta_j)H^3=2(a-1)d[pt]. \end{equation} Note that, since $c_1(E_0)=0$, \begin{equation}\label{Chern2} c_t(p^*E_0)=1+bR^2t^2 \end{equation} for some $b\in\mathbb{Z}$.
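For the reader's convenience, we indicate how the relations (\ref{sums}) follow from (\ref{c,d}) and (\ref{rel in A(Q)}). For instance,
\[
2\sum_{1\le i<j\le a}\delta_i\delta_j=\Big(\sum_{i=1}^a\delta_i\Big)^2-\sum_{i=1}^a\delta_i^2=(d^2-c)R^2,
\]
and multiplying by $H^2$ and using $R^2H^2=[pt]$ gives the second relation in (\ref{sums}). The first relations hold because the $\delta_i$ are pullbacks of divisor classes from the surface $S$, so any product of three or more of them vanishes; the last one follows from $\sum_{1\le i<j\le a}(\delta_i+\delta_j)=(a-1)\sum_{i=1}^a\delta_i=(a-1)dR$ together with $RH^3=2[pt]$.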
Furthermore, \begin{equation}\label{c_t(E)} c_t(\mathcal{E})=1+a[\mathbb{P}_\alpha]t^2 \end{equation} by the condition of the Proposition. Substituting (\ref{Chern1}) and (\ref{c_t(E)}) into the polynomial $f(t):=c_t(\mathcal{E})c_t(p^*E_1\otimes\mathcal{O}_Q(H)) c_t(p^*E_1^\vee\otimes\mathcal{O}_Q(-H))$, we have $f(t)=(1+a[\mathbb{P}_\alpha]t^2)\prod_{i=1}^a(1-(\delta_i+H)^2t^2)$. Expanding $f(t)$ in $t$ and using (\ref{c,d})-(\ref{sums}), we obtain \begin{equation}\label{f(t)2} f(t)=1+(a[\mathbb{P}_\alpha]-cR^2-2dRH-aH^2)t^2+e[pt]t^4,\ \ \ \end{equation} where \begin{equation}\label{e} e=-3c-a(2d+a)+(a-1)(a+4d)+2d^2. \end{equation} Next, the monad (\ref{monad2}) implies $f(t)=c_t(p^*E_0)$. A comparison of (\ref{f(t)2}) with (\ref{Chern2}) yields \begin{equation}\label{c_2} c_2(\mathcal{E})=a[\mathbb{P}_\alpha]=(b+c)R^2+2dRH+aH^2, \end{equation} \begin{equation}\label{c_4} e=c_4(p^*E_0)=0. \end{equation} The relation (\ref{c_4}) is the crucial relation which enables us to express the Chern classes of all sheaves in (\ref{monad2}) just in terms of $a$. More precisely, (\ref{c_2}) and (\ref{rel in A(Q)}) give $0=c_2(\mathcal{E})[\mathbb{P}_\beta]=2d+a$, hence $a=-2d$. Substituting these latter equalities into (\ref{e}) we get $e=-a(a-2)/2-3c$. Hence $c=-a(a-2)/6$ by (\ref{c_4}). Since $a=-2d$, (\ref{c,d}) and the equality $c=-a(a-2)/6$ give $c_1(E_1)=-a/2,\ \ c_2(E_1)=(d^2-c)/2=a(5a-4)/24$. Substituting this into the standard formulas $e_k:=c_k(p^*E_1\otimes\mathcal{O}_Q(H))=\sum_{i=0}^2\binom{a-i}{k-i}R^iH^{k-i}c_i(E_1), \ \ 1\le k\le4$, we obtain \begin{equation}\label{ee_i} e_1=-aR/2+aH,\ \ e_2=(5a^2/24-a/6)R^2+(a^2-a)(-RH+H^2)/2, \end{equation} $$e_3=(5a^3/24-7a^2/12+a/3)R^2H+(-a^3/4+3a^2/4-a/2)RH^2+(a^3/6-a^2/2+a/3)H^3,$$ $$ e_4=(-7a^4/144+43a^3/144-41a^2/72+a/3)[pt].
$$ It remains to write down explicitly $c_2(p^*E_0)$: (\ref{rel in A(Q)}), (\ref{c_2}) and the relations $a=-2d$, $c=-a(a-2)/6$ give $a=c_2(\mathcal{E})[\mathbb{P}_\alpha]=b+c,$ hence \begin{equation}\label{c_2(E_0)} c_2(E_0)=b=(a^2+4a)/6 \end{equation} by (\ref{Chern2}). Our next and final step will be to obtain a contradiction by computing the Euler characteristic of the sheaf $\mathcal{E}$ in two different ways. We first compute the Todd class ${\rm td}(T_Q)$ of the bundle $T_Q$. From the exact triple dual to (\ref{Euler}) we find $c_t(T_{Q/S})=1+(-2R+3H)t+(2R^2-4RH+3H^2)t^2$. Next, $c_t(T_Q)=c_t(T_{Q/S})c_t(p^*T_S)$. Hence $c_1(T_Q)=R+3H,\ c_2(T_Q)=-R^2+5RH+3H^2,\ c_3(T_Q)=-3R^2H+9H^2R,\ c_4(T_Q)=9[pt].$ Substituting into the formula for the Todd class of $T_Q$, ${\rm td}(T_Q)=1+\frac{1}{2}c_1+\frac{1}{12}(c_1^2+c_2)+\frac{1}{24}c_1c_2 -\frac{1}{720}(c_1^4-4c_1^2c_2-3c_2^2-c_1c_3+c_4)$, where $c_i:=c_i(T_Q)$ (see, e.g., \cite[p.432]{H}), we get \begin{equation}\label{td(T_Q)} {\rm td}(T_Q)=1+\frac{1}{2}R+\frac{3}{2}H+\frac{11}{12}RH+H^2+\frac{1}{12}HR^2+ \frac{3}{4}H^2R+\frac{3}{8}H^3+[pt]. \end{equation} Next, by the hypotheses of the Proposition, $c_1(\mathcal{E})=0,\ c_2(\mathcal{E})=a[\mathbb{P}_{\alpha}],\ c_3(\mathcal{E})=c_4(\mathcal{E})=0$. Substituting this into the general formula for the Chern character of a vector bundle $F$, $$ {\rm ch}(F)=\rk(F)+c_1+(c_1^2-2c_2)/2+(c_1^3-3c_1c_2+3c_3)/6+(c_1^4-4c_1^2c_2+4c_1c_3+2c_2^2-4c_4)/24, \ \ $$ $c_i:=c_i(F)$ (see, e.g., \cite[p.432]{H}), and using (\ref{td(T_Q)}), we obtain by the Riemann-Roch Theorem for $F=\mathcal{E}$ \begin{equation}\label{chi(E)} \chi(\mathcal{E})=\frac{1}{12}a^2-\frac{23}{12}a+2. \end{equation} In a similar way, using (\ref{ee_i}), we obtain \begin{equation}\label{chi(E1)+chi(E-1)} \chi(p^*E_1\otimes\mathcal{O}_Q(H))+\chi(p^*E_1^\vee\otimes\mathcal{O}_Q(-H))= \frac{5}{216}a^4-\frac{29}{216}a^3-\frac{1}{54}a^2+\frac{113}{36}a.
\end{equation} Next, in view of (\ref{c_2(E_0)}) and the equality $c_1(E_0)=0$ the Riemann-Roch Theorem for $E_0$ easily gives \begin{equation}\label{chi(E_0)} \chi(p^*E_0)=\chi(E_0)=-\frac{1}{6}a^2+\frac{4}{3}a+2. \end{equation} Together with (\ref{chi(E)}) and (\ref{chi(E1)+chi(E-1)}) this yields $$ \Phi(a):=\chi(p^*E_0)-(\chi(\mathcal{E})+ \chi(p^*E_1\otimes\mathcal{O}_Q(H))+\chi(p^*E_1^\vee\otimes\mathcal{O}_Q(-H)))= -\frac{5}{216}a(a-2)(a-3)(a-\frac{4}{5}). $$ The monad (\ref{monad2}) implies now $\Phi(a)=0.$ The only positive integer roots of the polynomial $\Phi(a)$ are $a=2$ and $a=3$. However, (\ref{chi(E)}) implies $\chi(\mathcal{E})=-\frac{3}{2}$ for $a=2$, and (\ref{chi(E_0)}) implies $\chi(p^*E_0)=\frac{9}{2}$ for $a=3$. This is a contradiction as the values of $\chi(\mathcal{E})$ and $\chi(p^*E_0)$ are integers by definition. \end{proof} We need a last piece of notation. Consider the flag variety $Fl(k_m-2,k_m+2;V^{n_m})$. Any point $u=(V^{k_m-2},V^{k_m+2})\in \Fl(k_m-2,k_m+2;V^{n_m})$ determines a standard extension \begin{equation}\label{i_z} i_{u}:\ X=G(2;4)\hookrightarrow X_m, \end{equation} \begin{equation}\label{eq} W^2\mapsto V^{k_m-2}\oplus W^2\subset V^{k_m+2}\subset V^{n_m}=V^{k_m-2}\oplus W^4\subset V^{n_m}, \end{equation} where $W^2\in X=G(2;W^4)$ and an isomorphism $V^{k_m-2}\oplus W^4\simeq V^{k_m+2}$ is fixed (clearly $i_{u}$ does not depend on the choice of this isomorphism modulo $\Aut(X_m)$). We clearly have isomorphisms of Chow groups \begin{equation}\label{isomChow} i_{u}^*:\ A^2(X_m)\overset{\sim}\to A^2(X),\ \ \ i_{u*}:\ A_2(X)\overset{\sim}\to A_2(X_m), \end{equation} and the flag variety $Y_m:=Fl(k_m-1,k_m+1;V^{n_m})$ (respectively, $Y:=Fl(1,3;4)$) is the set of lines in $X_m$ (respectively, in $X$). \vspace{0.3cm} \begin{theorem}\label{th56} Let $\displaystyle\mathbf{X} = \lim_{\to}X_m$ be a twisted ind-Grassmannian. 
Then any vector bundle $\displaystyle\mathbf{E}=\lim_{\gets}E_m$ on $\mathbf{X}$ of rank 2 is trivial, and hence Conjecture \ref{con1}(iv) holds for vector bundles of rank 2. \end{theorem} \begin{proof} Fix $m\ge\max\{m_0,m_1\},$ where $m_0$ and $m_1$ are as in Corollary \ref{d=(0,0)} and Lemma \ref{c_2(E_m)=0}. For $j=1,2$, let $E^{(j)}$ denote the restriction of $E_m$ to a projective plane of type $\mathbb{P}^2_{j,m}$, $T^j\simeq\Fl(k_m-j,k_m+3-j,V^{n_m})$ be the variety of planes of the form $\mathbb{P}^2_{j,m}$ in $X_m$, and $\Pi^j:=\{\mathbb{P}^2_{j,m}\in T^j|\ {E_m}_{|\mathbb{P}^2_{j,m}}$ is properly unstable (i.e. not semistable)$\}.$ As semistability is an open condition, $\Pi^j$ is a closed subset of $T^j$. (i) Assume that $c_2(E^{(1)})>0$. Then, since $m\ge m_1$, Lemma \ref{c_2(E_m)=0} implies $c_2(E^{(2)})\le0$. (i.1) Suppose that $c_2(E^{(2)})=0$. If $\Pi^2\ne T^2$, then for any $\mathbb{P}^2_{2,m}\in T^2\smallsetminus \Pi^2$ the corresponding bundle $E^{(2)}$ is semistable, hence $E^{(2)}$ is trivial as $c_2(E^{(2)})=0$, see \cite[Prop. 2.3,(4)]{DL}. Thus, for a generic point $u\in Fl(k_m-2,k_m+2;V^{n_m})$, the bundle $E=i_{u}^*E_m$ on $X=G(2;4)$ satisfies the conditions of Proposition \ref{not exist}, which is a contradiction. We therefore assume $\Pi^2=T^2$. Then for any $\mathbb{P}^2_{2,m}\in T^2$ the corresponding bundle $E^{(2)}$ has a maximal destabilizing subsheaf $0\to\mathcal{O}_{\mathbb{P}^2_{2,m}}(a)\to E^{(2)}.$ Moreover $a>0$. In fact, otherwise the condition $c_2(E^{(2)})=0$ would imply that $a=0$ and $E^{(2)}/\mathcal{O}_{\mathbb{P}^2_{2,m}}=\mathcal{O}_{\mathbb{P}^2_{2,m}}$, i.e. $E^{(2)}$ would be trivial, in particular semistable. Hence \begin{equation}\label{a,-a} \mathbf{d}_{E^{(2)}}=(a,-a). \end{equation} Since any line in $X_m$ is contained in a plane $\mathbb{P}^2_{2,m}\in T^2$, (\ref{a,-a}) implies $\mathbf{d}_{E_m}=(a,-a)$ with $a>0$ for $m>m_0$, contrary to Corollary \ref{d=(0,0)}. (i.2) Assume $c_2(E^{(2)})<0$. 
Since $E^{(2)}$ is not stable for any $\mathbb{P}^2_{2,m}\in T^2$, its maximal destabilizing subsheaf $0\to\mathcal{O}_{\mathbb{P}^2_{2,m}}(a)\to E^{(2)}$ clearly satisfies the condition $a>0$, i.e. $E^{(2)}$ is properly unstable, hence $\Pi^2=T^2$. Then we again obtain a contradiction as above. (ii) Now we assume that $c_2(E^{(2)})>0$. Then, replacing $E^{(2)}$ by $E^{(1)}$ and vice versa, we arrive at a contradiction by the same argument as in case (i). (iii) We must therefore assume $c_2(E^{(1)})=c_2(E^{(2)})=0$. Set $D(E_m):=\{l\in Y_m|~\mathbf{d}_{E_m}(l)\ne(0,0)\}$ and $D(E):=\{l\in Y|~\mathbf{d}_E(l)\ne(0,0)\}$. By Corollary \ref{d=(0,0)}, $\mathbf{d}_{E_m}=(0,0),$ hence $\mathbf{d}_E=(0,0)$ for a generic embedding $i_u:X\hookrightarrow X_m$. Then by deformation theory \cite{B}, $D(E_m)$ (respectively, $D(E)$) is an effective divisor on $Y_m$ (respectively, on $Y$). Hence, $\mathcal{O}_Y(D(E))=p_1^*\mathcal{O}_{Y^1}(a) \otimes p_2^*\mathcal{O}_{Y^2}(b)$ for some $a,b\ge0$, where $p_1$, $p_2$ are as in diagram \refeq{eqDiag}. Note that each fiber of $p_1$ (respectively, of $p_2$) is a plane $\tilde{\mathbb{P}}^2_{\alpha}$ dual to some $\alpha$-plane $\mathbb{P}^2_{\alpha}$ (respectively, a plane $\tilde{\mathbb{P}}^2_{\beta}$ dual to some $\beta$-plane $\mathbb{P}^2_{\beta}$). Thus, setting $D(E_{|\mathbb{P}^2_{\alpha}}):=\{l\in\tilde{\mathbb{P}}^2_{\alpha}|~\mathbf{d}_E(l)\ne(0,0)\}$, $D(E_{|\mathbb{P}^2_{\beta}}):=\{l\in\tilde{\mathbb{P}}^2_{\beta}|~\mathbf{d}_E(l)\ne(0,0)\}$, we obtain $\mathcal{O}_{\tilde{\mathbb{P}}^2_{\alpha}}(D(E_{|\mathbb{P}^2_{\alpha}}))= \mathcal{O}_Y(D(E))_{|\tilde{\mathbb{P}}^2_{\alpha}}= \mathcal{O}_{\tilde{\mathbb{P}}^2_{\alpha}}(b),\ \ \ \mathcal{O}_{\tilde{\mathbb{P}}^2_{\beta}}(D(E_{|\mathbb{P}^2_{\beta}}))= \mathcal{O}_Y(D(E))_{|\tilde{\mathbb{P}}^2_{\beta}}= \mathcal{O}_{\tilde{\mathbb{P}}^2_{\beta}}(a).$ Now if $E_{|\mathbb{P}^2_{\alpha}}$ is semistable, a theorem of Barth \cite[Ch.
II, Theorem 2.2.3]{OSS} implies that $D(E_{|\mathbb{P}^2_{\alpha}})$ is a divisor of degree $c_2(E_{|\mathbb{P}^2_{\alpha}})=a$ on $\mathbb{P}^2_{\alpha}$. Hence $a=c_2(E^{(1)})=0$ for a semistable $E_{|\mathbb{P}^2_{\alpha}}$. If $E_{|\mathbb{P}^2_{\alpha}}$ is not semistable, it is unstable and the equality $\mathbf{d}_E(l)=(0,0)$ yields $\mathbf{d}_{E_{|\mathbb{P}^2_{\alpha}}}=(0,0)$. Then the maximal destabilizing subsheaf of $E_{|\mathbb{P}^2_{\alpha}}$ is isomorphic to $\mathcal{O}_{\mathbb{P}^2_{\alpha}}$ and, since $c_2(E_{|\mathbb{P}^2_{\alpha}})=0,$ we obtain an exact triple $0\to\mathcal{O}_{\mathbb{P}^2_{\alpha}}\to E_{|\mathbb{P}^2_{\alpha}}\to \mathcal{O}_{\mathbb{P}^2_{\alpha}}\to 0$, so that $E_{|\mathbb{P}^2_{\alpha}}\simeq\mathcal{O}_{\mathbb{P}^2_{\alpha}}^{\oplus2}$ is semistable, a contradiction. This shows that $a=0$ whenever $c_2(E^{(1)})=c_2(E^{(2)})=0$. Similarly, $b=0$. Therefore $D(E_m)=\emptyset$, and Proposition \ref{prop31} implies that $E_m$ is trivial. Hence $\mathbf{E}$ is trivial as well. \end{proof} In \cite{DP} Conjecture \ref{con1} (iv) was proved not only when $\mathbf{X}$ is a twisted projective ind-space, but also for finite rank bundles on special twisted ind-Grassmannians defined through certain homogeneous embeddings $\phi_m$. These include embeddings of the form \[ G(k;n)\to G(ka;nb) \] \[ V^k\subset V\mapsto V^k\otimes W^a\subset V\otimes W^b, \] where $W^a\subset W^b$ is a fixed pair of finite-dimensional spaces with $a<b$, or of the form \[ G(k;n)\to G\left(\frac{k(k+1)}{2};n^2\right) \] \[ V^k\subset V\mapsto S^2(V^k)\subset V\otimes V. \] More precisely, Conjecture \ref{con1} (iv) was proved in \cite{DP} for twisted ind-Grassmannians whose defining embeddings are homogeneous embeddings satisfying some specific numerical conditions relating the degrees $\deg\phi_m$ with the pairs of integers $(k_m,n_m)$. There are many twisted ind-Grassmannians for which those conditions are not satisfied.
For instance, this applies to the ind-Grassmannians defined by iterating each of the following embeddings: \begin{eqnarray*} G(k;n)\to G\left(\frac{k(k+1)}{2};\frac{n(n+1)}{2}\right)\\ V^k\subset V\mapsto S^2(V^k)\subset S^2(V), \\ G(k;n)\to G\left(\frac{k(k-1)}{2};\frac{n(n-1)}{2}\right)\\ V^k\subset V\mapsto \Lambda^2(V^k)\subset \Lambda^2(V). \end{eqnarray*} Therefore the resulting ind-Grassmannians $\mathbf G(k,n,S^2)$ and $\mathbf G (k,n,\Lambda^2)$ are examples of twisted ind-Grassmannians for which \refth{th56} is new.
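As a sanity check, the arithmetic identities used in the proof of the Proposition above, namely $c_4(p^*E_0)=0$ after substituting $a=-2d$ and $c=-a(a-2)/6$ into (\ref{e}), the factorization $\Phi(a)=-\frac{5}{216}a(a-2)(a-3)(a-\frac{4}{5})$, and the non-integral values $\chi(\mathcal{E})=-\frac{3}{2}$ for $a=2$ and $\chi(p^*E_0)=\frac{9}{2}$ for $a=3$, can be verified with exact rational arithmetic. The following sketch only restates the formulas already displayed in the proof:

```python
from fractions import Fraction as F

def e(a, c, d):
    # e = c_4(p^*E_0), formula (\ref{e}) in the proof
    return -3*c - a*(2*d + a) + (a - 1)*(a + 4*d) + 2*d**2

def chi_E(a):     return F(1, 12)*a**2 - F(23, 12)*a + 2          # (\ref{chi(E)})
def chi_pair(a):  return (F(5, 216)*a**4 - F(29, 216)*a**3
                          - F(1, 54)*a**2 + F(113, 36)*a)          # (\ref{chi(E1)+chi(E-1)})
def chi_E0(a):    return -F(1, 6)*a**2 + F(4, 3)*a + 2             # (\ref{chi(E_0)})
def Phi(a):       return chi_E0(a) - (chi_E(a) + chi_pair(a))

# Polynomials of degree <= 4 agreeing at many points agree identically.
for a in range(-5, 10):
    a = F(a)
    d, c = -a/2, -a*(a - 2)/6          # a = -2d and c = -a(a-2)/6 as in the proof
    assert e(a, c, d) == 0             # confirms c_4(p^*E_0) = 0
    assert Phi(a) == -F(5, 216)*a*(a - 2)*(a - 3)*(a - F(4, 5))

# The two integer roots of Phi lead to non-integral Euler characteristics:
assert chi_E(2) == F(-3, 2) and chi_E0(3) == F(9, 2)
```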
\section{Acknowledgments} This work was supported in part by the School of Computing, University of Utah and in part by NSF IIS-2007398. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. \fi \bibliographystyle{ACM-Reference-Format} \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, we present the first study that compares model-intrinsic and model-agnostic explanations for explainable product search. Specifically, we propose a hierarchical gated network as an extension to the state-of-the-art explainable product search model (i.e., DREM), and then conduct a series of experiments to compare and analyze the effectiveness of the post-hoc and pre-hoc search explanations generated by the vanilla DREM and DREM with HGN. We acknowledge that there are still many limitations of this study, such as the template-based explanation generation, the systematic bias introduced by the UI, and the differences between AMT workers and real product search users. In the future, we will seek opportunities for online experiments with real product search engines to further analyze the effectiveness of product search explanations and validate the observations in this paper. \section{Retrieval Experiments}\label{sec:setup} In general, the evaluation of an explainable product search model involves two parts: (1) the evaluation of retrieval performance in terms of retrieving items that are most likely to be purchased by users, and (2) the evaluation of explanation effectiveness in terms of illustrating the connections between users, queries, and retrieved items as well as increasing the conversion rates from search to purchase. In this section, we focus on the first part and introduce our settings and results in retrieval experiments.
\subsection{Experimental Setup} The goal of retrieval experiments is to evaluate the effectiveness of product search models in retrieving relevant items for user-query pairs. To this end, we conduct experiments on a well-established product search dataset and implement several state-of-the-art baselines to analyze the performance of DREM with HGN. \subsubsection{Dataset} Our testbed is a well-established Amazon product search benchmark dataset~\cite{van2016learning,ai2017learning,guo2019attentive}. The dataset contains users' purchases, reviews, and queries in a variety of categories as well as detailed descriptions and metadata of a large number of items on Amazon\footnote{Please refer to~\cite{van2016learning,ai2017learning,guo2019attentive} for the details of Amazon search datasets.}. Specifically, we conduct experiments on three categories, i.e., \textit{Electronics}, \textit{Health\&PersonalCare}, and \textit{Sports\&Outdoors}, and use the 5-core data where each user/item has at least 5 reviews~\cite{ai2017learning,guo2019attentive}.
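The 5-core restriction mentioned above must be applied iteratively, since removing a user can push one of their items below the review threshold and vice versa. A minimal sketch of such filtering follows; the `(user, item)` pair representation is an illustrative assumption, not the dataset's actual schema:

```python
from collections import Counter

def five_core(reviews, k=5):
    """Iteratively drop users and items with fewer than k reviews.

    `reviews` is a list of (user_id, item_id) pairs; repeated passes are
    needed because removing one side can push the other below the threshold.
    """
    while True:
        users = Counter(u for u, _ in reviews)
        items = Counter(i for _, i in reviews)
        kept = [(u, i) for u, i in reviews if users[u] >= k and items[i] >= k]
        if len(kept) == len(reviews):   # fixed point reached
            return kept
        reviews = kept

# A user with 5 reviews of one item survives; a user with only 4 does not.
assert five_core([("u1", "i1")] * 5 + [("u2", "i1")]) == [("u1", "i1")] * 5
```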
\iffalse \begin{table*}[t] \centering \caption{Statistics for the 5-core data of \textit{Electronics}, \textit{Health\&PersonalCare}, \textit{Sports\&Outdoors} in the Amazon Review dataset.} \scalebox{0.8}{ \begin{tabular}{ p{3cm} l c c c } \toprule & & \textit{Electronics}& \textit{Health\&PersonalCare} & \textit{Sports\&Outdoors}\\ \midrule \multirow{6}{*}{\textbf{Corpus Statistics}} & Vocabulary size & 142,922 & 38,772 & 32,386\\ % &Number of reviews & 1,689,188 & 346,355 & 296,337 \\ % &Number of users & 192,403 & 38,609 & 35,598\\ % &Number of items & 63,001 & 18,534 & 18,357\\ % &Number of brands & 3,525 & 3,855 & 2,412\\ % &Number of categories & 983 & 861 & 1,443\\ % \midrule \multirow{5}{*}{\textbf{Product Knowledge}} & \textit{Also\_bought} per item & 36.70$_{\pm 38.56}$ & 63.03$_{\pm 35.36}$ & 75.18$_{\pm 31.98}$\\ % &\textit{Also\_viewed} per item & 4.36$_{\pm 9.44}$ & 15.43$_{\pm 9.35}$ & 14.46$_{\pm 12.24}$ \\ % &\textit{Bought\_together} per item & 0.59$_{\pm 0.72}$ & 0.86$_{\pm 0.77}$ & 0.83$_{\pm 0.76}$\\ % &\textit{Brand} per item & 0.47$_{\pm 0.50}$ & 0.76$_{\pm 0.43}$ & 0.67$_{\pm 0.47}$ \\ % &\textit{Category} per item & 4.39$_{\pm 0.95}$ & 4.20$_{\pm 0.93}$ & 4.82$_{\pm 1.33}$\\ \midrule \multirow{3}{*}{\textbf{Train/Test Partitions}} &Number of reviews & 1,275,432/413,756 & 261,281/85,074 & 224,807/71,530\\ % &Number of user-query pairs & 1,204,928/5,505 & 232,187/207 & 214,919/1,739\\ &Relevant items per pair & 1.12$_{\pm 0.48}$/1.01$_{\pm 0.09}$ & 1.13$_{\pm 0.47}$/1.00$_{\pm 0.00}$ & 1.12$_{\pm 0.45}$/1.01$_{\pm 0.13}$\\ \bottomrule \end{tabular} } \label{tab:dataset_statistics} \end{table*} \fi \begin{table} \centering \caption{Statistics for the 5-core data.} \scalebox{0.68}{ \begin{tabular}{l c c c } \toprule & \textit{Electronics}& \textit{Health\&PersonalCare} & \textit{Sports\&Outdoors}\\ \midrule Vocabulary size & 142,922 & 38,772 & 32,386\\ % Number of reviews & 1,689,188 & 346,355 & 296,337 \\ % Number of users & 192,403 & 38,609 
& 35,598\\ % Number of items & 63,001 & 18,534 & 18,357\\ % Number of brands & 3,525 & 3,855 & 2,412\\ % Number of categories & 983 & 861 & 1,443\\ % \midrule \textit{Also\_bought} per item & 36.70$_{\pm 38.56}$ & 63.03$_{\pm 35.36}$ & 75.18$_{\pm 31.98}$\\ % \textit{Also\_viewed} per item & 4.36$_{\pm 9.44}$ & 15.43$_{\pm 9.35}$ & 14.46$_{\pm 12.24}$ \\ % \textit{Bought\_together} per item & 0.59$_{\pm 0.72}$ & 0.86$_{\pm 0.77}$ & 0.83$_{\pm 0.76}$\\ % \textit{Brand} per item & 0.47$_{\pm 0.50}$ & 0.76$_{\pm 0.43}$ & 0.67$_{\pm 0.47}$ \\ % \textit{Category} per item & 4.39$_{\pm 0.95}$ & 4.20$_{\pm 0.93}$ & 4.82$_{\pm 1.33}$\\ \midrule Number of reviews & 1,275,432/413,756 & 261,281/85,074 & 224,807/71,530\\ % Number of user-query pairs & 1,204,928/5,505 & 232,187/207 & 214,919/1,739\\ Relevant items per pair & 1.12$_{\pm 0.48}$/1.01$_{\pm 0.09}$ & 1.13$_{\pm 0.47}$/1.00$_{\pm 0.00}$ & 1.12$_{\pm 0.45}$/1.01$_{\pm 0.13}$\\ \bottomrule \end{tabular} } \label{tab:dataset_statistics} \end{table} Other than review text, to incorporate rich product metadata for product search, we also consider five types of entity relationships in our experiments. They are \textit{Also\_bought}: users who purchased item $i_1$ have also purchased item $i_2$ ($i_1\rightarrow i_2$); \textit{Also\_viewed}: users who viewed item $i_1$ also viewed item $i_2$ ($i_1\rightarrow i_2$); \textit{Bought\_together}: item $i_1$ was purchased together with item $i_2$ in a single transaction ($i_1\rightarrow i_2$); \textit{Brand}: item $i$ has brand $b$ ($i\rightarrow b$); and \textit{Category}: item $i$ has category $c$ ($i\rightarrow c$). More data statistics can be found in Table~\ref{tab:dataset_statistics}. \iffalse As introduced by previous studies~\cite{van2016learning,ai2017learning,guo2019attentive}, search queries in Amazon product search benchmark datasets are extracted from product category information using a two-step process.
For each user, it first extracts the category hierarchies of each item in the user's purchase history and filters out those with less than two levels. After that, it flattens the string of each category hierarchy, removes duplicated words and stopwords, and then treats the processed string as a topical query submitted by the user that leads to the purchase of the corresponding item. As users often search for ``a producer's name, a brand or a set of terms which describe the category of the product'' on e-shopping websites~\cite{rowley2000product}, it is commonly believed that the queries extracted through this process are good enough to simulate real-world product search queries~\cite{van2016learning,ai2017learning,guo2019attentive,ai2019explain}. \fi \subsubsection{Baselines} Other than the vanilla DREM proposed by Ai et al.~\cite{ai2019explain}, our experiments include six state-of-the-art product search baselines, including classic retrieval models such as \begin{itemize} \item \textbf{QL}: the query-likelihood model~\cite{ponte1998language} that ranks items according to the log likelihood of queries in the unigram language model built with item descriptions and reviews. \item \textbf{BM25}: the classic probabilistic model proposed by Robertson and Walker~\cite{robertson2009probabilistic} built on the item's descriptions and reviews. \item \textbf{LTR}\footnote{We extract ranking features for LTR following the same method used by Ai et al.~\cite{ai2019explain}, which we omit in this paper due to the page limit.}: a learning-to-rank model built with LambdaMART. \end{itemize} and latent product search baselines such as \begin{itemize} \item \textbf{LSE}: the Latent Semantic Entity model~\cite{van2016learning} that ranks items based on the similarity of queries and items in latent spaces. \item \textbf{HEM}: the Hierarchical Embedding Model~\cite{ai2017learning} that personalizes product search results with a latent retrieval framework.
\item \textbf{ZAM}: the original Zero Attention Model~\cite{ai2019zero} that conducts selective personalization in product search. \end{itemize} \subsubsection{Implementation and Evaluation Details} Following previous studies~\cite{guo2019attentive,ai2019explain}, we partition the data in each product category by randomly hiding 30\% user purchases from the training process and use them as the testing data. We randomly select 30\% queries as the test queries and match users with queries extracted from their purchase history to form training and testing user-query pairs. An item is considered relevant to a user-query pair when it is relevant to the query and has been purchased by the user. More information about our data partition can be found in Table~\ref{tab:dataset_statistics}. For implementation details, we follow the settings proposed by Ai et al.~\cite{ai2019explain} by building QL and BM25 with galago\footnote{https://sourceforge.net/p/lemur/wiki/Galago/}, building LTR with ranklib\footnote{https://sourceforge.net/p/lemur/wiki/RankLib/}, and tuning the Dirichlet smoothing parameter $\mu$ in QL from 1000 to 3000, and the scoring parameters $k$ and $b$ in BM25 from 0.5 to 4 and 0.25 to 1, respectively. The number of trees and leaves in the LambdaMART model used in LTR are set as 1000 and 10, and we tune the learning rate from 0.01 to 0.1. For latent product retrieval models such as LSE, HEM, ZAM, the vanilla DREM~\cite{ai2019explain} and our extended DREM with HGN (DREM-HGN), we use Adagrad~\cite{luo2019adaptive} with batch size 64 to optimize the latent vectors and set the sample size of negative sampling as 5. We clip batch gradients at norm 5 to avoid unstable updates and train each model for 20 epochs by gradually decreasing the learning rate from 0.5 to 0 (note that most models converge after 10 epochs).
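The two training heuristics above, clipping batch gradients at norm 5 and linearly decaying the learning rate from 0.5 to 0, can be sketched as follows. This is a simplified stand-in with hypothetical function names, not the actual implementation:

```python
def linear_lr(step, total_steps, lr0=0.5):
    """Linearly decay the learning rate from lr0 down to 0 over training."""
    return lr0 * max(0.0, 1.0 - step / total_steps)

def clip_by_norm(grads, max_norm=5.0):
    """Scale a gradient vector down if its L2 norm exceeds max_norm."""
    norm = sum(g * g for g in grads) ** 0.5
    if norm <= max_norm:
        return grads
    return [g * max_norm / norm for g in grads]

assert linear_lr(0, 100) == 0.5       # initial rate
assert linear_lr(100, 100) == 0.0     # fully decayed
# A gradient of norm 50 is rescaled to norm 5:
clipped = clip_by_norm([30.0, 40.0])
assert abs(sum(g * g for g in clipped) ** 0.5 - 5.0) < 1e-9
```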
For fair comparison, we fixed the personalization weight $\eta$ in HEM, ZAM, DREM, and DREM-HGN as 0.5 (which results in Eq.~(\ref{equ:P_iuq})) and the size of all latent vectors as 100 (i.e., $\alpha=100$). We acknowledge that having a larger vector size could boost the performance of some latent product search models, especially those using rich product knowledge and information (e.g., DREM and DREM-HGN)~\cite{ai2019explain}. However, this is not the focus of this paper and we skip the tuning of $\alpha$ so that the embeddings learned by different models have comparable dimensionalities. We adopt mean average precision (MAP), mean reciprocal rank (MRR) and normalized discounted cumulative gain (NDCG) to evaluate the performance of product search models. For each user-query pair, we retrieve 100 items among all candidate items in each dataset to generate the rank list and compute MAP and MRR accordingly. We also report NDCG with cutoffs 10 and 50. Statistical significance is measured by the Fisher randomization test~\cite{smucker2007comparison} with $p < 0.05$. \subsection{Retrieval Results} \begin{table*}[t] \centering \small \caption{The retrieval performance of product search models. The best performance in each model group is highlighted in bold.
$*$ and $\dagger$ denote significant improvements over the best classic retrieval baselines and latent search baselines, respectively.} \scalebox{0.85}{ \begin{tabular}{ c || c | c | c | c || c | c | c | c || c | c | c | c } \hline & \multicolumn{4}{c||}{\textit{Electronics}} & \multicolumn{4}{c||}{\textit{Health\&PersonalCare}} & \multicolumn{4}{c}{\textit{Sports\&Outdoors}}\\ \hline Model & MAP & MRR & NDCG@10 & NDCG@50 & MAP & MRR & NDCG@10 & NDCG@50 & MAP & MRR & NDCG@10 & NDCG@50 \\\hline \hline QL & 0.166 & 0.164 & 0.187 & 0.210 & 0.063 & 0.063 & 0.059 & 0.097 & 0.068 & 0.067 & 0.080 & 0.117 \\ \hline BM25 & 0.216 & 0.213 & 0.227 & 0.270 & \textbf{0.076} & \textbf{0.076} & \textbf{0.088} & \textbf{0.131} & 0.092 & 0.091 & 0.097 & 0.142 \\ \hline LTR & \textbf{0.216} & \textbf{0.216} & \textbf{0.230} & 0.303 & 0.060 & 0.060 & 0.055 & 0.113 & \textbf{0.109}$^\dagger$ & \textbf{0.109}$^\dagger$ & \textbf{0.120}$^\dagger$ & \textbf{0.166} \\ \hline \hline LSE & 0.108 & 0.108 & 0.137 & 0.183 & 0.016 & 0.016 & 0.000 & 0.113 & 0.015 & 0.015 & 0.019 & 0.040\\ \hline HEM & 0.156 & 0.156 & 0.182 & 0.197 & 0.157$^*$ & 0.157$^*$ & 0.146$^*$ & 0.200$^*$ & 0.075 & 0.075 & 0.086 & 0.128\\ \hline ZAM & 0.115 & 0.115 & 0.130 & 0.162 & 0.208$^*$ & 0.208$^*$ & 0.244$^*$ & 0.263$^*$ & 0.074 & 0.075 & 0.087 & 0.153\\ \hline Vanilla DREM & \textbf{0.231}$^*$ & \textbf{0.232}$^*$ & \textbf{0.268}$^*$ & \textbf{0.314}$^*$ & \textbf{0.349}$^*$ & \textbf{0.349}$^*$ & \textbf{0.378}$^*$ & \textbf{0.429}$^*$ & \textbf{0.099} & \textbf{0.099} & \textbf{0.113} & \textbf{0.180}$^*$\\ \hline \hline DREM-HGN & \textbf{0.244}$^{*\dagger}$ & \textbf{0.245}$^{*\dagger}$ & \textbf{0.275}$^{*\dagger}$ & \textbf{0.339}$^{*\dagger}$ & \textbf{0.536}$^{*\dagger}$ & \textbf{0.536}$^{*\dagger}$ & \textbf{0.556}$^{*\dagger}$ & \textbf{0.588}$^{*\dagger}$ & \textbf{0.126}$^{*\dagger}$ & \textbf{0.127}$^{*\dagger}$ & \textbf{0.141}$^{*\dagger}$ & \textbf{0.215}$^{*\dagger}$\\ \hline \hline \end{tabular} }
\label{tab:retrieval_results} \vspace{-10pt} \end{table*} The results of our retrieval experiments are shown in Table~\ref{tab:retrieval_results}. Similar to previous studies~\cite{van2016learning,ai2017learning}, we observed that latent product search models usually perform better than classic retrieval models constructed based on text matching signals. For example, our best latent product search baseline (i.e., the vanilla DREM) has significantly outperformed QL and BM25 on all the datasets. This is reasonable as previous studies have observed a significant vocabulary gap between queries and item descriptions~\cite{nurmi2008product,van2016learning}, and users often purchase items that ``seem'' irrelevant to their submitted query in text~\cite{10.1145/3336191.3371780}. After incorporating more complex behavior features such as item popularity, the LTR baseline has managed to outperform DREM on \textit{Sports\&Outdoors}, but still performs worse than DREM on \textit{Electronics} and \textit{Health\&PersonalCare}. Among all latent product search models, the non-personalized baseline (i.e., LSE) performs the worst, which demonstrates the importance of personalization in product search. In the results of personalized product search models, we observed that DREM has significantly outperformed other baselines with large improvements from 25\% to 50\%. This indicates that incorporating rich information from product knowledge graphs and metadata is indeed helpful in improving the effectiveness of product search. Further, our proposed model (i.e., DREM-HGN) has achieved the best performance in our experiments. It has outperformed all the baselines significantly and achieved 5.6\%, 53.6\%, and 28.2\% MRR improvements over the vanilla DREM on \textit{Electronics}, \textit{Health\&PersonalCare}, and \textit{Sports\&Outdoors}, respectively.
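The MRR and NDCG figures discussed above follow the standard binary-relevance definitions, computed per user-query pair over the retrieved list and then averaged. A minimal sketch for a single list:

```python
import math

def mrr(ranked, relevant):
    """Reciprocal rank of the first relevant item (0 if none retrieved)."""
    for pos, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / pos
    return 0.0

def ndcg(ranked, relevant, k):
    """Binary-gain NDCG@k for a single ranked list."""
    dcg = sum(1.0 / math.log2(pos + 1)
              for pos, item in enumerate(ranked[:k], start=1) if item in relevant)
    ideal = sum(1.0 / math.log2(pos + 1)
                for pos in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal else 0.0

assert mrr(["b", "a", "c"], {"a"}) == 0.5      # first relevant item at rank 2
assert ndcg(["a", "b"], {"a"}, 10) == 1.0      # relevant item already at the top
```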
This indicates that user representations encoded from product knowledge with the hierarchical gated network are more useful than the user embeddings learned by DREM from randomly initialized vectors. While the result of DREM-HGN outperforming the vanilla DREM is not surprising as the former has incorporated a complicated attention network to model query-specific user preferences, the main advantage of HGN is its transparency and model-intrinsic interpretability, which allows the generation of pre-hoc explanations for product search. To compare the post-hoc and pre-hoc search explanations generated by DREM and DREM-HGN in detail, we further conduct a series of explanation evaluations and analyses. \section{Explanation Evaluation} \label{sec:exp_explanation} In practice, both pre-hoc and post-hoc explanations have their unique advantages for IR. For example, pre-hoc search explanations are considered more reliable as they are directly inferred from the structure of the retrieval model. In contrast, post-hoc search explanations are more flexible as they neither enforce the retrieval model to have intrinsic interpretability nor require access to the internal structure and data flow of the model. To the best of our knowledge, DREM is the only explainable model for product search in the literature. Ai et al.~\cite{ai2019explain} conducted a laboratory user study and showed that the post-hoc search explanations extracted by DREM are useful in attracting users to purchase the items. However, how this is achieved and what factors are important for the quality of search explanations are mostly unexplored. In this section, we conduct experiments to evaluate and compare the pre-hoc explanations created by DREM-HGN with the post-hoc explanations created by the vanilla DREM for product search. Specifically, we want to study and shed some light on the following research questions: \vspace{2pt} \noindent\textbf{RQ1}: \textit{Which types of search explanations do users prefer?
Model-intrinsic ones or model-agnostic ones?} \vspace{2pt} \noindent\textbf{RQ2}: \textit{What factors are important for product search explanations?} \vspace{2pt} \subsection{Experimental Setup} We design and conduct a crowdsourcing experiment on Amazon Mechanical Turk\footnote{\url{https://requester.mturk.com/}} (AMT) to evaluate the pre-hoc and post-hoc search explanations generated by DREM-HGN and DREM. \subsubsection{Explanation Generation} As described in Section~\ref{sec:model}, both DREM and DREM-HGN create explanations with templates using knowledge entities and relations extracted by the models. For fair comparison, we adopt a single set of explanation templates for both DREM and DREM-HGN. For example, given a specific relation triple such as (item, \textit{Brand}, Apple) extracted by the vanilla DREM, we would create an explanation such as \textit{``This product was retrieved because the user often buys products with \textbf{brands} such as \textbf{Apple}''}. \noindent Also, as DREM-HGN relies on the attention weights extracted from HGN to explain its behavior, we add the corresponding information in the template and create explanations such as \textit{``This product was retrieved \textbf{50\%} because the user often buys products with \textbf{brands} such as \textbf{Apple}''}. To avoid randomness in search explanations and to improve the robustness of crowdsourcing, we increase the redundancy of our experiment by allowing each model to provide a group of explanations instead of a single one. Specifically, we grouped the top-3 search explanations extracted by DREM and DREM-HGN and allowed each explanation to include at most 3 relevant knowledge entities. Instead of requiring each crowdsourcing worker to annotate each search explanation individually, we let workers annotate the search explanations in groups so that the final results would be influenced less by the quality variance of explanations provided by each model.
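The template-filling step described above can be sketched as follows. The wording mirrors the example explanations quoted in the text, but the function name and argument layout are illustrative assumptions:

```python
def render_explanation(relation, entities, weight=None):
    """Fill one explanation template with a relation type and its entities.

    `weight` is the HGN attention weight (pre-hoc, DREM-HGN); post-hoc
    explanations from the vanilla DREM omit it.
    """
    entity_str = ", ".join(entities[:3])   # at most 3 entities per explanation
    prefix = "This product was retrieved "
    if weight is not None:                 # pre-hoc: expose the attention weight
        prefix += f"{round(100 * weight)}% "
    return prefix + f"because the user often buys products with {relation}s such as {entity_str}"

assert render_explanation("brand", ["Apple"]) == \
    "This product was retrieved because the user often buys products with brands such as Apple"
assert "50%" in render_explanation("brand", ["Apple"], weight=0.5)
```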
We refer to the group of explanations generated by DREM and DREM-HGN as Model-Agnostic Explanation (MAE) and Model-Intrinsic Explanation (MIE), respectively. \subsubsection{Annotation Strategy} Previous studies~\cite{zhang2016explainable,wang2018explainable,ai2019explain} evaluated product recommendation and search explanations mainly from three perspectives: (1) whether the explanation has provided more relevant information about the item and the query, or \textit{Informativeness}; (2) whether the explanation is useful in attracting the user to purchase the item, or \textit{Usefulness}; and (3) whether providing the explanation would increase the user's satisfaction with the service provided by the product search engine, or \textit{Satisfaction}. In this paper, we adopt the same strategy to evaluate the performance of search explanations. However, instead of requiring crowdsourcing workers to directly annotate each explanation with a 5-level score~\cite{wang2018explainable}, we propose to conduct pairwise comparisons for the explanations generated by DREM and DREM-HGN for each user-query-item triple and let the workers annotate their pairwise preferences only. Pairwise preferences have been proven to be much more robust and reliable compared to pointwise relevance judgements in IR~\cite{joachims2017accurately}. In this way, we hope to improve the quality of our crowdsourcing experiments as well as explore the possibility of building automatic search explanation evaluation models for product search, which is further discussed in Section~\ref{sec:classification}. \subsubsection{Data Sampling} Our crowdsourcing dataset is sampled from the retrieval experiment dataset of \textit{Electronics}. \textit{Electronics} is one of the most popular product categories on Amazon.
Products in \textit{Electronics} usually have less complicated knowledge structures (e.g., fewer entity relations per item, as shown in Table~\ref{tab:dataset_statistics}) and are more familiar to workers on AMT. Specifically, we randomly sampled 101 user-query pairs from the test data of \textit{Electronics} where both DREM and DREM-HGN achieved MRR scores greater than or equal to 0.1. For fairness, we extracted the user-query-item triples to explain by pairing user-query pairs with the item purchased by the user in the corresponding session. Thus, all sampled items are indeed purchased by the user and AMT workers only need to judge which explanations can better explain the user's purchase in the search session. Specifically, we recruited three workers per case, and applied a voting process to assign the final labels. \begin{figure} \centering \includegraphics[width=3in]{./figure/UI_example.png} \caption{An illustration of the crowdsourcing UI.} \label{fig:UI} \end{figure} \subsubsection{UI Design} Figure~\ref{fig:UI} provides an illustration of the UI we used for the crowdsourcing experiments. On the top of the UI, we provided a variety of information related to the current item, including product links, images, titles, descriptions, the search query, and the recent purchases and reviews of the current user. In the center of the UI, we implemented a tab-based frame that allows workers to navigate and annotate the informativeness, usefulness, and satisfaction of search explanations. In each tab, we provided and asked workers to read a detailed instruction on the annotation process on the left, and showed the groups of explanations created by DREM (i.e., MAE) and DREM-HGN (i.e., MIE) in the middle. We also provided the original links on Amazon to all items and entities shown in the product descriptions, user reviews, or search explanations.
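The voting step over the three worker judgments per case can be as simple as a per-case majority; a sketch under the assumption of simple majority voting (the label names are illustrative):

```python
from collections import Counter

def majority_label(votes):
    """Aggregate one case's worker votes (e.g. 'MIE', 'MAE', 'both', 'none')
    by majority; three-way ties are reported as None for manual inspection."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None                      # no majority among the workers
    return counts[0][0]

assert majority_label(["MIE", "MIE", "MAE"]) == "MIE"
assert majority_label(["MIE", "MAE", "none"]) is None   # three-way tie
```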
To avoid unnecessary biases in the annotation process, we anonymized MAE and MIE by randomly assigning them as ``Group A'' and ``Group B''. Workers only need to click the buttons on the right to indicate which explanation group provides better search explanations: ``Group A'', ``Group B'', both, or none. We eyeballed the collected data and manually filtered out workers with unreasonable behaviors. The source code of our models, experiment platforms, and all crowdsourcing data can be found in the links below\footnote{\url{https://github.com/utahIRlab/AMTurk-Product-Search-Explanation-Evaluation}}\footnote{\url{https://github.com/utahIRlab/drem-attention}}. \vspace{-5pt} \subsection{Crowdsourcing Results} To answer \textbf{RQ1}, we show the results of our crowdsourcing experiment in Table~\ref{tab:crowd_results}. As shown in the table, most workers found that the model-intrinsic explanations provided by DREM-HGN (i.e., MIE) are preferable to the model-agnostic explanations provided by DREM (i.e., MAE) from the perspectives of \textit{Informativeness} and \textit{Satisfaction}. This is not surprising as MIE provides more information about the actual inference process of the retrieval model (e.g., the attention weights), which makes it more reliable and trustworthy to users. However, in terms of \textit{Usefulness}, we do not observe any significant differences between MIE and MAE. The overall scores of MAE on \textit{Usefulness} are slightly higher than those for MIE. In fact, the Fleiss Kappa $\kappa$~\cite{viera2005understanding} of binary classification (MIE is better or not) on \textit{Usefulness} is $-0.03$, which is much lower than those for \textit{Informativeness} (0.1) and \textit{Satisfaction} (0.11). One possible reason is that \textit{Usefulness} -- whether the explanations can attract the user to purchase the item -- is a subjective question which varies significantly based on users' preferences and workers' opinions.
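The Fleiss' kappa values cited above follow the standard definition for multiple raters over categorical labels; a self-contained sketch:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table ratings[i][j] = number of raters assigning
    subject i to category j (every row sums to the same rater count n)."""
    N, n = len(ratings), sum(ratings[0])
    k = len(ratings[0])
    # p_j: overall proportion of assignments to category j
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # P_i: agreement within subject i
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                   # mean observed agreement
    P_e = sum(p * p for p in p_j)          # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories, perfect agreement on every subject:
assert fleiss_kappa([[3, 0], [0, 3], [3, 0]]) == 1.0
```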
In contrast, the questions of whether the explanations provide more information (i.e., \textit{Informativeness}) or increase the user's satisfaction with the product search service (i.e., \textit{Satisfaction}) are objective regardless of whether the user purchases the item. To analyze the relations between \textit{Informativeness}, \textit{Usefulness}, and \textit{Satisfaction}, we compute the Pearson coefficient for each pair of labels. The coefficients are 0.483 for (\textit{Informativeness}, \textit{Usefulness}), 0.457 for (\textit{Usefulness}, \textit{Satisfaction}), and 0.494 for (\textit{Informativeness}, \textit{Satisfaction}). Interestingly, we observe that the coefficient between \textit{Usefulness} and \textit{Satisfaction} is the lowest among all pairs. This may indicate that the user's satisfaction with search engines and search explanations is not directly related to whether the explanations encourage the purchase of the items. Even when a user decides not to purchase an item after seeing the explanations, they may still feel satisfied if the explanations have helped them make more informed decisions. The high coefficient between \textit{Informativeness} and \textit{Satisfaction} can also serve as side evidence for this phenomenon.
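The pairwise Pearson coefficients above are computed over the per-case labels; a minimal stdlib sketch of the computation (the binary label vectors below are illustrative, not our collected data):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two label vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-case binary labels (1 = MIE wins).
informativeness = [1, 0, 1, 1, 0, 1]
satisfaction    = [1, 0, 1, 0, 0, 1]
print(round(pearson(informativeness, satisfaction), 3))  # -> 0.707
```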
\begin{table}[t] \centering \caption{Crowdsourcing results for explanation evaluation.} \scalebox{0.8}{ \begin{tabular}{ c | c | c | c } \hline & \textit{Informativeness} \scriptsize{($\kappa=0.10$)} & \textit{Usefulness} \scriptsize{($\kappa=-0.03$)}& \textit{Satisfaction} \scriptsize{($\kappa=0.11$)} \\ \hline \hline MIE wins & 55\% & 43\% & 51\% \\ \hline MAE wins & 37\% & 47\% & 35\% \\ \hline \hline Equal & 8\% & 11\% & 14\% \\ \hline \hline \end{tabular} } \label{tab:crowd_results} \end{table} \begin{figure*}[t] \centering \includegraphics[width=5.5in]{./figure/feat_importance.pdf} \caption{Feature Importance in the GBDT model for explanation performance prediction.} \vspace{-10pt} \label{fig:feature} \end{figure*} \vspace{-5pt} \subsection{Performance Prediction and Analysis} \label{sec:classification} To answer \textbf{RQ2}, we propose a performance prediction task for explainable product search by training a classification model to infer users' preferences over search explanations. By creating such performance prediction models, we want to explore the possibility of evaluating product search explanations without involving humans in the loop, and to analyze the importance of different explanation properties with respect to the effectiveness of search explanations. \begin{table}[t] \centering \vspace{-12pt} \caption{Descriptions of search explanation features.} \scalebox{0.8}{ \begin{tabular}{ p{8.5cm} } \toprule \textbf{Performance}\\ \textit{MRR}: the MRR of the retrieval model. \\ \textit{log\_purchase\_prob}: $P(i|u,q)$ in DREM or DREM-HGN. \\ \midrule \textbf{Fidelity}\\ \textit{exist\_confidence}: model confidence on the existence of the relations/entities used in explanations ($M(e | u, i)$ for MAE or 1 for MIE). \\ \textit{existance\_rate}: percentage of relations/entities used in explanations that are actually observed in the dataset. \\ \midrule \textbf{Novelty}\\ \textit{entity\_iuf}: inverse user frequency of entities used in explanations.
\\ \textit{entity\_iif}: inverse item frequency of entities used in explanations. \\ \textit{user\_entity\_mutual\_info}: mutual information between users and entities used in explanations in observed data. \\ \textit{item\_entity\_mutual\_info}: mutual information between items and entities used in explanations in observed data. \\ \textit{relation\_info\_entropy}: the entropy of the distribution of users who have the relations used in explanations. \\ \bottomrule \end{tabular} } \label{tab:classification_features} \end{table} \subsubsection{Feature Design} The goal of the classification model is to predict users' preferences over an arbitrary pair of result explanations in product search. To this end, we extract three groups of features to represent each search explanation in the feature space. They are (1) \textit{Performance} features, which indicate the retrieval effectiveness of the explainable search model that creates the explanation; (2) \textit{Fidelity} features, which indicate whether the information provided by the explanation is correct or trustworthy; and (3) \textit{Novelty} features, which indicate whether the information shown in the explanation is novel or surprising to the user, query, or item. For each feature, we compute the maximum, minimum, and mean scores of the entities in each explanation and concatenate the features of all explanations in each group to form the feature vector of the group. In total, we have 59 features for each explanation group. Detailed information about each feature can be found in Table~\ref{tab:classification_features}. To form the feature vector of a pair of explanation groups and to avoid introducing biases into the experiment, we concatenated the features of MIE and MAE in both forward and backward orders and created two data points in the classification task for each user-query-item triple. Therefore, we have 202 pairs of input data and corresponding labels in the performance prediction task.
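The forward/backward pairing described above can be sketched as follows (the helper name \texttt{make\_pairs} and the feature values are hypothetical):

```python
def make_pairs(mie_feats, mae_feats, mie_wins):
    """Create two mirrored data points per user-query-item triple.

    Concatenating the two explanation groups in both orders removes
    any position bias from the classifier's input.
    """
    label = 1 if mie_wins else 0
    forward = (mie_feats + mae_feats, label)
    backward = (mae_feats + mie_feats, 1 - label)
    return [forward, backward]

# One triple with (hypothetical) 59-dim feature vectors per group.
mie = [0.1] * 59
mae = [0.2] * 59
data = make_pairs(mie, mae, mie_wins=True)
print(len(data), len(data[0][0]))  # -> 2 118
```

Applying this to all 101 triples yields the 202 labeled pairs used in the prediction task.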
\subsubsection{Experiment Setup} We built the performance prediction model with GBDT~\cite{ke2017lightgbm} in LightGBM\footnote{\url{https://github.com/microsoft/LightGBM}}. We conducted a 5-fold cross validation to predict the pairwise preference of \textit{Informativeness}, \textit{Usefulness}, and \textit{Satisfaction}. For each GBDT, we tuned the maximum tree depth from 5 to 20, the number of leaves from 10 to 30, the minimum data per leaf from 10 to 50, and the learning rate from 0.1 to 0.5. The final results are aggregated over all the test folds in cross validation. \subsubsection{Prediction Results} \begin{table}[t] \centering \caption{Explanation performance prediction results.} \scalebox{0.8}{ \begin{tabular}{ r| c | c | c | c } \hline & Total & Correct & Type-1-error & Type-2-error \\ \hline \hline \textit{Informativeness} & 202 & 125 & 16 & 61 \\ \hline \textit{Usefulness} & 202 & 127 & 24 & 51 \\ \hline \textit{Satisfaction} & 202 & 127 & 28 & 47 \\ \hline \hline \end{tabular} } \label{tab:classification_results} \end{table} Table~\ref{tab:classification_results} depicts the results of our explanation performance prediction experiment. \textit{Correct} represents the pairs of explanations where the model has correctly predicted their pairwise preferences; \textit{Type-1-error} refers to the pairs where MIE and MAE are equally good while the model predicted that one is better than the other; and \textit{Type-2-error} refers to cases where MIE or MAE is better than the other while the model predicted otherwise. \textit{Can we predict the performance of product search explanations without human annotations?} The overall preference prediction accuracy is 61.9\% for \textit{Informativeness} and 62.9\% for \textit{Usefulness} and \textit{Satisfaction}. If we tolerate Type-1-errors, the accuracy further increases to 69.8\%, 74.8\%, and 76.7\%, respectively.
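As a side note on the setup, the 5-fold splitting over the 202 pairs can be sketched in plain Python (the seed and the interleaved split are arbitrary illustrative choices, not the LightGBM internals):

```python
import random

def k_fold_indices(n, k=5, seed=42):
    """Split n example indices into k disjoint test folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(202, k=5)
# Each fold serves once as the test set; the rest is training data.
for test_fold in folds:
    train = [i for f in folds if f is not test_fold for i in f]
    assert len(train) + len(test_fold) == 202
```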
While such results are far from perfect, they are much better than those produced by a random model and show that it is possible to infer users' preferences on product search explanations from their feature representations. This could serve as evidence of the potential of automatic product search explanation evaluation in future studies. \textit{What properties are important for effective product search explanations?} Intuitively, MIE tends to have higher fidelity because it is directly inferred from the model's internal structure, and MAE may have higher novelty as DREM could extract unobserved entity relations based on soft matching~\cite{ai2019explain}. As MIE is better than MAE on \textit{Informativeness} and \textit{Satisfaction} while MAE is slightly better than MIE on \textit{Usefulness}, one may expect that fidelity would be more important for the \textit{Informativeness} and \textit{Satisfaction} of product search explanations, while novelty may be more important for \textit{Usefulness}. To examine this hypothesis, we plot the aggregated feature importance in the GBDTs from cross validation based on the total gains of the splits that use each feature. As shown in Figure~\ref{fig:feature}, we observe that fidelity features such as \textit{exist\_confidence} are the most important features for \textit{Informativeness} and \textit{Satisfaction}, which indicates that the reliability of result explanations is important for users' overall satisfaction with the explainable product search engine. In contrast, novelty features such as the inverse user/item frequency and the mutual information between users/items and the knowledge entities used in explanations are more important for the prediction of \textit{Usefulness}. From this perspective, it seems that users are more likely to purchase an item if the search engine can provide interesting and novel explanations of why it retrieves the corresponding item.
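The per-fold aggregation of total-gain importances behind Figure~\ref{fig:feature} can be sketched as follows (the gain values below are hypothetical, not our measured importances):

```python
from collections import Counter

def aggregate_importance(per_fold_gains):
    """Sum per-feature total gains over the GBDTs from each CV fold."""
    total = Counter()
    for gains in per_fold_gains:
        total.update(gains)
    return dict(total.most_common())  # sorted, most important first

# Hypothetical total-gain dicts from two folds.
fold_gains = [
    {"exist_confidence": 12.0, "entity_iuf": 3.0},
    {"exist_confidence": 9.5, "entity_iif": 4.0, "entity_iuf": 1.0},
]
print(aggregate_importance(fold_gains))
```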
Also, in our experiments, we observe that performance features (i.e., MRR and log\_purchase\_prob) have little to no effect on users' preference between MIE and MAE. This may be because we have filtered out test cases where DREM and DREM-HGN have significant retrieval performance differences (e.g., we only sampled cases where both DREM and DREM-HGN have MRR $\geq$ 0.1 for crowdsourcing), but it may also indicate that users are not sensitive to the retrieval performance of a product search engine when judging the quality of search explanations. In other words, building explainable product search models that provide effective explanations requires us to rethink the model design from different perspectives rather than simply focus on the optimization of retrieval performance. \section{Introduction} As online marketplaces have gradually dominated the retail market, product retrieval systems such as product search engines and recommendation systems have become the main entry for users to discover products. Meanwhile, with increasing concerns about the transparency and accountability of AI systems, studies on explainable AI have received more attention in both academic communities and industry~\cite{gilpin2018explaining,du2018techniques}. Specifically in the domain of e-commerce information retrieval, explainability means the ability of a product retrieval system to provide explanations that allow users to understand, trust, and effectively control the retrieved products. Previous studies have shown that providing recommendation results together with explanations of why the items are retrieved not only increases the conversion rates from clicks to purchases, but also improves users' satisfaction with e-shopping websites~\cite{zhang2014explicit}. Thus, how to improve the explainability of product retrieval systems has become an important challenge and opportunity for e-commerce.
Interestingly, despite the extensive studies on explainable product recommendation~\cite{herlocker2000explaining,bilgic2005explaining,tintarev2007effective,mcauley2013hidden}, the effectiveness and potential of explainable product search are mostly unexplored. As of today, more than 80\% of shoppers find products starting from search online\footnote{\url{https://www.retaildive.com/news/87-of-shoppers-now-begin-product-searches-online/530139/}}, which means that product search is still the most popular method to find products on e-commerce platforms. On the one hand, developing product search engines is similar to developing product recommendation systems from multiple perspectives, including the need for personalization~\cite{ai2017learning}, the modeling of heterogeneous information~\cite{zamani2020learning}, etc. On the other hand, by explicitly formulating and feeding a query to the system, users' requirements and expectations for product search engines are significantly different from those for product recommendation. For instance, while it is preferable to recommend PC games to a customer who recently purchased Alienware gaming laptops, it may not be a good idea when the user is searching for ``running shoes''. Thus, how to retrieve and explain search results based on both the explicit needs and implicit preferences of product search users makes explainable product search a unique challenge in explainable information retrieval. Existing studies on explainable AI can be broadly categorized into two directions, namely \textit{model-intrinsic} (or pre-hoc) interpretability and \textit{model-agnostic} (or post-hoc) interpretability~\cite{lipton2018mythos}. Model-intrinsic interpretability focuses on the construction of transparent AI systems that can explicitly explain their behavior based on their inference process. In contrast, model-agnostic interpretability focuses on explaining model outputs without knowing the internal mechanism of the model.
Previous studies on explainable IR have explored both paradigms in document retrieval~\cite{singh2019exs,fernando2019study,singh2020model,10.1145/3331184.3331377} by creating pre-hoc or post-hoc explanations with text-matching signals extracted by the retrieval models from query-document pairs. In product search, however, it has been shown that text matching is relatively less important~\cite{10.1145/3336191.3371780,ai2019zero} compared to other information such as knowledge entities and their relationships~\cite{guo2019attentive,liu2020structural} in determining users' purchase decisions. Thus, how to create model-intrinsic/agnostic explanations in product search and how those two approaches would benefit or affect the development of explainable product search systems is mostly unknown. To the best of our knowledge, the only study on explainable product search is the Dynamic Relation Embedding Model~\cite{ai2019explain} (DREM) that utilizes a product knowledge graph to generate post-hoc result explanations. For evaluation, however, Ai et al.~\cite{ai2019explain} simply use a survey to examine whether users are more likely to purchase after seeing the explanations, and conduct no comparison of different explanation methodologies or of the factors that affect the effectiveness of search explanations. To fill this gap, we propose to construct and train an intrinsically explainable model for product search with user-interaction data and a knowledge graph. Inspired by the Zero Attention mechanism~\cite{ai2019zero}, we propose to extend DREM with a Hierarchical Gated Network (HGN) that explicitly constructs user representations from items and knowledge entities related to the user's purchase history. By extracting the attention weights from HGN, our proposed model is capable of generating model-intrinsic explanations for product search results.
To understand the advantages and drawbacks of pre-hoc and post-hoc search explanations, we conduct a crowdsourcing study with Amazon Mechanical Turk to evaluate and analyze the performance of the model-agnostic and model-intrinsic explanations generated by the original DREM and the proposed DREM with HGN. Experiment results show that model-intrinsic explanations are usually more informative and reliable, while model-agnostic explanations may have greater potential in attracting users to purchase the product. Further, we propose an explanation performance prediction task and build models to explore the possibility of automatically evaluating search explanations without human annotations. Based on feature analysis, we find that the fidelity of search explanations could be more important for users' overall satisfaction with the search engine, while the novelty of the search explanations could be more useful in attracting users to purchase the item. \section{Problem Formulation}\label{sec:problem} \section{Methodology}\label{sec:model} In this section, we describe our proposed method for explainable product search. We start by introducing the framework of latent product retrieval models and the structure of the state-of-the-art explainable product search model (i.e., DREM) for model-agnostic explanations, and then propose a hierarchical gated network (HGN) to extend DREM for model-intrinsic search explanations. \subsection{Latent Product Retrieval Framework} As discussed previously, the goal of product search is to retrieve products according to users' needs so that we can maximize the average transaction rate (i.e., user purchases) in search sessions. Usually, this means ranking and showing products to users according to their probabilities of being purchased~\cite{van2016learning,ai2017learning}.
Different from traditional IR tasks such as ad-hoc retrieval, information in product search is often stored in heterogeneous forms, and classic retrieval models based on text matching often perform suboptimally in practice~\cite{nurmi2008product,guo2018multi,ai2019explain}. Therefore, the state-of-the-art methods in product search often build retrieval models in latent spaces by representing and matching queries, users, and items with latent vectors. In general, users' purchase decisions are affected by two factors~\cite{ai2017learning,guo2018multi,ai2019explain}: (1) the explicit purchase intents in the current session, which are usually expressed by users' queries, and (2) the implicit preferences over product properties (e.g., colors and brands), which are usually inferred from users' historical behaviors (e.g., previous purchases). Formally, let $\bm{q}$, $\bm{u}$, $\bm{i}\in\mathbb{R}^{\alpha}$ be the $\alpha$-dimensional latent representations of the search query, the user's personal preferences, and the item, respectively.
Following previous studies~\cite{ai2017learning,guo2019attentive}, we model the probability of an item $i$ being purchased by a user $u$ after submitting a query $q$ with a latent generative model as \begin{equation} P(i|u,q) = \frac{\exp (\bm{i} \cdot \bm{S_{uq}})}{\sum_{i'\in I}\exp (\bm{i}' \cdot \bm{S_{uq}})}, ~~\bm{S_{uq}} = \bm{q} + \bm{u} \label{equ:P_iuq} \end{equation} where $I$ is the universal set of candidate items, and $S_{uq}$ is the latent representation of the user's purchase intent in search, which could be modeled as a linear combination of $\bm{q}$ and $\bm{u}$.\footnote{For simplicity, we ignore the discussions of more complicated models for $S_{uq}$ as they are not the focus of this paper.} Under this formulation, the representations of queries, users, and items can be directly optimized for product search by maximizing the log likelihood of observed user purchases in search, defined as \begin{equation} \mathcal{L} =\log \!\prod_{u,q,i}\!P(i|u,q)=\!\!\sum_{u,q,i}\!\! \big(\bm{i}\! \cdot \!(\bm{q} \!+\! \bm{u}) \!-\! \log\! \sum_{i'\in I}\!\!\exp (\bm{i}' \!\cdot\! (\bm{q} \!+\! \bm{u}))\big) \label{equ:final_log_likelihood} \end{equation} While directly computing $\mathcal{L}$ is prohibitive due to the softmax function and the large number of items in $I$, there are many effective and mature solutions built with approximation algorithms such as hierarchical softmax and negative sampling~\cite{mikolov2013efficient}. In this paper, we adopt the negative sampling strategy that approximates the denominator of the softmax function by randomly drawing negative samples from $I$. Therefore, the key problem of product search in latent space is how to construct the representations of queries, users, and items. \subsection{Dynamic Relation Embedding Model and Model-agnostic Explanations} To the best of our knowledge, the first model proposed for explainable product search is the Dynamic Relation Embedding Model (DREM)~\cite{ai2019explain}.
In order to utilize heterogeneous data and knowledge for product search and explanations, Ai et al.~\cite{ai2019explain} proposed to build a latent dynamic knowledge graph that jointly encodes the relationships between queries, users, items, as well as product-related knowledge entities. Specifically, the construction of DREM and search explanations includes two parts: the modeling of entity relationships, and the extraction of explainable knowledge paths between users and retrieved items. \subsubsection{Product Knowledge Graph and Query Modeling}\label{sec:DREM_model} Product search is different from product recommendation as the relevance and relationships between users and items could vary based on the user's information need expressed in the search query. To model both the static relationships between knowledge entities and the dynamic relationships between users, queries, and items, Ai et al.~\cite{ai2019explain} propose to adopt the TransE model~\cite{bordes2013translating} for product search and treat \textit{Search\&Purchase} as a special relationship that translates users to items. Formally, let $(h,r,t) \in \mathcal{G}$ be a relation triple with head entity $h$, relation $r$, and tail entity $t$ (e.g., \textit{IPhone} is \textit{Produced\_by} \textit{Apple}) in observed data $\mathcal{G}$. Then DREM defines a linear translation function and a latent generative model to model $(h,r,t)$ as \begin{equation} P(t|h,r) = \frac{\exp (\bm{t} \cdot (\bm{h} + \bm{r}))}{\sum_{t'\in T}\exp (\bm{t}' \cdot (\bm{h} + \bm{r}))} \label{equ:transE} \end{equation} where $T$ is the universal set of possible tail entities, and $\bm{h}$, $\bm{r}$, $\bm{t}\in\mathbb{R}^{\alpha}$ are the embedding representations of the head entity, the relation, and the tail entity, respectively. In other words, \textit{the entity $h$ can be translated to entity $t$ through relation $r$ with probability $P(t|h,r)$}.
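The translation probability in Eq.~(\ref{equ:transE}) can be illustrated with toy two-dimensional embeddings (all vectors and entity names below are hypothetical illustrations, not learned parameters):

```python
import math

def trans_e_prob(head, rel, tails):
    """P(t|h,r) from Eq.(3): softmax of t . (h + r) over candidate tails."""
    query = [h + r for h, r in zip(head, rel)]
    scores = [sum(q * t for q, t in zip(query, tail)) for tail in tails]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

# Toy embeddings: "IPhone" + "Produced_by" should land near "Apple".
iphone, produced_by = [1.0, 0.0], [0.0, 1.0]
tails = {"Apple": [1.0, 1.0], "Samsung": [-1.0, 1.0]}
probs = trans_e_prob(iphone, produced_by, list(tails.values()))
for name, p in zip(tails, probs):
    print(name, round(p, 3))  # Apple receives most of the probability mass
```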
As the relationship between users and items (i.e., \textit{Search\&Purchase}) varies according to different search queries, Ai et al.~\cite{ai2019explain} propose to create a dynamic relation embedding by encoding the query string with a non-linear projection function $\phi$ as \begin{equation} \bm{q} = \phi(\{w_q|w_q \in q\}) = \tanh(W \cdot \frac{\sum_{w_q \in q}\bm{w_q}}{|q|} + b) \label{equ:f} \end{equation} where $w_q$ and $\bm{w_q}\in\mathbb{R}^{\alpha}$ are query words and their corresponding embedding representations, $W\in\mathbb{R}^{\alpha \times \alpha}$ and $b\in\mathbb{R}^{\alpha}$ are model parameters, and $\bm{q}\in\mathbb{R}^{\alpha}$ is the query embedding as well as the relation embedding of \textit{Search\&Purchase}. The probability of a user $u$ searching for and purchasing an item $i$ is then computed in the same way as other relation triples, as shown in Eq.~(\ref{equ:transE}). To optimize the embedding representations of all entities and relations for product search, DREM directly maximizes the log likelihood of all observed relation triples as \begin{equation} \begin{split} \mathcal{L} = \!\!\!\sum_{(u,q,i)}& \!\!\!\log P(i|u,q) + \!\!\!\sum_{(h,r,t) \in \mathcal{G}} \!\!\! \log P(t|h,r)\\ \approx \!\!\!\sum_{(u,q,i)}&\log\sigma\big((\bm{u}+\bm{q})\!\cdot \!\bm{i}\big) + k\!\cdot\! \mathbb{E}_{i'\sim P_i}[\log\sigma\big(\!\!-\!(\bm{u}+\bm{q})\!\cdot\! \bm{i'}\big)] \\ + \!\!\!\!\!\!\sum_{(h,r,t) \in \mathcal{G}}\!\!\!\!\!\!&\log\sigma\big((\bm{h}+\bm{r})\!\cdot \!\bm{t}\big) + k\!\cdot\! \mathbb{E}_{t'\sim P_t}[\log\sigma\big(\!\!-\!(\bm{h}+\bm{r})\!\cdot\! \bm{t'}\big)] \\ \end{split} \label{equ:aggregated_loss} \end{equation} where $\sigma(x)$ is the sigmoid function (i.e., $\sigma(x)=\frac{1}{1+e^{-x}}$) and we apply a negative sampling strategy with sample size $k$. $P_i$ is defined as a uniform item noise distribution and $P_t$ is defined as a frequency-based entity noise distribution~\cite{van2016learning,ai2019explain}.
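The query encoder $\phi$ in Eq.~(\ref{equ:f}) is simply a tanh-projected mean of word embeddings; a minimal sketch with toy parameters (the dimensionality and all values are illustrative):

```python
import math

def encode_query(word_vecs, W, b):
    """phi from Eq.(4): tanh(W . mean(word embeddings) + b)."""
    alpha = len(b)
    mean = [sum(v[d] for v in word_vecs) / len(word_vecs) for d in range(alpha)]
    return [
        math.tanh(sum(W[row][d] * mean[d] for d in range(alpha)) + b[row])
        for row in range(alpha)
    ]

# Toy alpha = 2: identity projection, zero bias.
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
q = encode_query([[0.5, -0.5], [1.5, 0.5]], W, b)
print(q)  # tanh applied to the mean vector [1.0, 0.0]
```

The tanh keeps every coordinate of $\bm{q}$ in $(-1, 1)$ regardless of the query length.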
\subsubsection{Post-hoc Search Explanations} With the latent knowledge graph learned from observed product purchases and meta data, Ai et al.~\cite{ai2019explain} argue that DREM is capable of creating post-hoc search explanations for each item retrieved for a user-query pair. Specifically, as all relations and entities are encoded in the latent space with the TransE model defined in Eq.~(\ref{equ:transE}), one can infer an arbitrary item from a user-query pair by finding a set of relations and intermediate entities that translate the joint representation of the user and query (i.e., $\bm{S_{uq}}$) to the item representation (i.e., $\bm{i}$). Then, the path from the user to the item can be used to create an explanation of why the item is relevant to the user's search intent. Let $\{r_u^j\}$ (where $r_u^0$ is \textit{Search\&Purchase}) and $\{r_i^m\}$ be two sequences of relations that finally translate a user $u$ and an item $i$ to an entity in entity space $\Omega_e$. DREM defines a soft matching path between $u$ and $i$ through $e\in \Omega_e$ with score: \begin{equation} \begin{split} M(e | u, i) =& \log \big(P(e|u, \{r_u^j\})P(e|i, \{r_i^m\})\big) \\ =& \log P(e|e_u) + \log P(e|e_i)\\ =& \log\frac{\exp(\bm{e_u}\!\cdot\! \bm{e} - \gamma j)}{\sum_{e'\in \Omega_e}\!\!\exp(\bm{e_u}\!\cdot\! \bm{e'})} \!+\! \log\frac{\exp(\bm{e_i}\!\cdot\! \bm{e} - \gamma m)}{\sum_{e'\in \Omega_e}\!\!\exp(\bm{e_i}\!\cdot\! \bm{e'})} \end{split} \label{equ:soft_match} \end{equation} where $\bm{e_u}=\bm{u}+\sum_j \bm{r_u^j}$, $\bm{e_i}=\bm{i}+\sum_m \bm{r_i^m}$, and $\gamma$ is a hyper-parameter\footnote{Please refer to the original paper~\cite{ai2019explain} for more details.}. While the soft matching score of a path does not have any meaning to users, Ai et al.~\cite{ai2019explain} argue that it indicates the model's confidence in the path.
Thus, they sort all potential inference paths from a user-query pair to a target item by the soft matching score and directly create a search explanation using simple templates and the relations/entities on the path to explain why $i$ is retrieved for $u$ by $q$. For example, suppose that there is a path from user $u$ to Apple Pencil with query \textit{``tablet''} and relation \textit{Brought\_Together}; then we can explain why we retrieved Apple Pencil with a post-hoc explanation such as ``Apple Pencil is retrieved because it is frequently \textit{Brought\_Together} with products retrieved by query ``tablet'' ''. \subsection{Hierarchical Gated Network and Model-intrinsic Explanations} While DREM can provide post-hoc explanations for search results with inference paths on the knowledge graph, the retrieval process of the model simply ranks items according to the dot product between the user-query pair and item representations in the latent space, which is not necessarily correlated with the generated explanations. In practice, we may prefer a transparent retrieval model that provides direct explanations of its inference process for many reasons, such as model reliability and result accountability~\cite{lipton2018mythos}. Inspired by the Zero Attention Mechanism (ZAM)~\cite{ai2019zero}, in this paper, we propose an extension to DREM that enhances its interpretability and enables it to provide model-intrinsic search explanations. \subsubsection{Attention Network with Gates} ZAM was first proposed to conduct selective personalization in product search~\cite{ai2019zero}. The idea of ZAM is to relax the assumption of the traditional attention mechanism by allowing the network to attend to no input data when the query is not relevant to any input vectors.
Let $\bm{q}$ be the query vector and $\bm{X}$ be the input vectors of an attention network; then ZAM computes the output $\bm{y}$ by attending $\bm{q}$ to both $\bm{X}$ and a zero vector $\bm{0}$ as \begin{equation} \bm{y} = \sum_{x \in X}\frac{\exp(f(\bm{q},\bm{x}))}{\exp(f(\bm{q}, \bm{0})) + \sum_{x' \in X}\exp(f(\bm{q},\bm{x'}))}\bm{x} \label{equ:ZAM} \end{equation} where $\bm{0}$ is a vector with all elements equal to 0, and $f(\bm{q},\bm{x})$ is the attention function that computes the attention score of $x$ with $q$. By adding $\bm{0}$ to the attention network, ZAM naturally creates a gate that controls whether the output vector of the attention network would be fed into downstream applications or not. Let $\bm{a}_X$ be the vector of $\{f(\bm{q},\bm{x}) \mid x\in X\}$; then ZAM can be reformulated as \begin{equation} \bm{y} = \frac{\exp(\bm{a}_X)}{\exp(f(\bm{q}, \bm{0})) + \exp^+(\bm{a}_X)}\cdot \bm{X} \label{equ:ZAM_sig} \end{equation} where $\exp^+(\bm{a}_X)$ is the sum of the elements of $\exp(\bm{a}_X)$. Thus, the output $\bm{y}$ would be influenced by the input $X$ only when the aggregated attention of $X$ is significantly larger than a threshold $f(\bm{q}, \bm{0})$. \begin{figure*} \centering \includegraphics[width=4.8in]{./figure/model.pdf} \caption{An illustration of the vanilla DREM and DREM with HGN. Different types of entities are colored differently. Squashed rectangles are vectors randomly initialized and learned in training, and rectangles are vectors encoded from other vectors.} \vspace{-12pt} \label{fig:model} \end{figure*} \subsubsection{User Modeling in Hierarchy} We now describe how we extend DREM to a transparent product search model with the idea of ZAM.
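As a concrete illustration, the zero-attention gate of Eq.~(\ref{equ:ZAM}) can be sketched as follows (attention scores are supplied directly instead of computing $f$; all values are toy examples):

```python
import math

def zero_attention(att_scores, inputs, zero_score=0.0):
    """Eq.(7): softmax over the inputs plus a zero vector; the zero vector's
    probability mass shrinks the output instead of contributing a vector."""
    z = math.exp(zero_score) + sum(math.exp(s) for s in att_scores)
    weights = [math.exp(s) / z for s in att_scores]
    dim = len(inputs[0])
    return [sum(w * x[d] for w, x in zip(weights, inputs)) for d in range(dim)]

# When all inputs score far below the zero vector, the output vanishes,
# i.e., the gate decides that nothing in X is relevant to the query.
out = zero_attention([-10.0, -10.0], [[1.0, 1.0], [2.0, 2.0]], zero_score=5.0)
print(out)  # near-zero vector
```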
The construction of an interpretable retrieval model with heterogeneous product knowledge involves two questions: (1) how to model user preferences in a specific knowledge domain according to the current search query, and (2) how to jointly combine user preferences in each knowledge domain to retrieve items for the current search query. To answer these questions, we propose to build a Hierarchical Gated Network (HGN) to model user preferences in search with knowledge entities associated with the user's purchase history. An illustration of DREM with HGN is shown in Figure~\ref{fig:model}. Formally, let $\Omega_e^u$ be the set of entities of type $e$ associated with a user $u$. For example, $\Omega_e^u$ could be the items or brands purchased by $u$. For each knowledge domain $e$, we compute a latent embedding $\bm{u}_e$ for $u$ in $e$ by attending to each entity with the current query $q$ as \begin{equation} \bm{u}_e = \sum_{e \in \Omega_e^u}\frac{\exp(f_e(\bm{q},\bm{e}))}{\exp(f_e(\bm{q}, \bm{0})) + \sum_{e' \in \Omega_e^u}\exp(f_e(\bm{q},\bm{e'}))}\bm{e} \label{equ:ZAM_e} \end{equation} where $f_e(\bm{q},\bm{e})$ is a simple attention function defined as \begin{equation} f_e(\bm{q}, \bm{e}) = \big(\bm{e}\cdot \tanh(\bm{W}_e^f \cdot \bm{q} + \bm{b}_{e})\big) \cdot \bm{W}_e^h \label{equ:attention_function} \end{equation} where $\bm{W}_e^h\in \mathbb{R}^{\beta}$, $\bm{W}_e^{f} \in \mathbb{R}^{\alpha \times \beta \times \alpha}$, $\bm{b}_{e} \in \mathbb{R}^{\alpha \times \beta}$, and $\beta$ is a hyper-parameter that controls the number of attention heads. To further aggregate $\bm{u}_e$ from each domain to create the final user embedding $\bm{u}$, we apply another layer of zero attention above all knowledge domains.
Let $\Omega^u = \{\Omega_e^u\}$; then \begin{equation} \bm{u} = \sum_{\Omega_e^u \in \Omega^u}\frac{\exp(f_u(\bm{q},\bm{u}_e))}{\exp(f_u(\bm{q}, \bm{0})) + \sum_{\Omega_{e'}^u \in \Omega^u}\exp(f_u(\bm{q},\bm{u}_{e'}))}\bm{u}_e \label{equ:HGN} \end{equation} where $f_u(\bm{q},\bm{u}_e)$ is another attention function with a similar form to Eq.~(\ref{equ:attention_function}) but a different set of parameters. Intuitively, the idea of HGN is to construct a hierarchical zero attention network that aggregates fine-grained user preferences from each knowledge domain into a unified user vector based on the search query. For parameter optimization, we simply follow the methodology of DREM introduced in Section~\ref{sec:DREM_model} and replace the original user vector $\bm{u}$ with the new user vector constructed by HGN. In this way, we can easily track the usage of each knowledge entity in product search and bring a higher level of transparency and interpretability to DREM. \subsubsection{Pre-hoc Search Explanations} The advantage of HGN-based DREM is its ability to create model-intrinsic explanations. Attention networks are explainable by nature, as the importance of input data is directly reflected by their attention weights in model outputs. With the help of HGN, we can not only infer the importance of each user-associated entity in building the final retrieval model (i.e., $S_{uq}$ in Eq.~(\ref{equ:P_iuq})), but also distinguish how much utility is obtained from understanding the user's preferences over retrieved items versus the general relevance/popularity between items and search queries. For pre-hoc search explanations, the attention weights in HGN can be split into two parts. The first part is the attention score of each user-associated knowledge domain and entity.
Let $A^e_{\Omega_e^u}$ be the attention weight that entity $e$ receives within domain $\Omega_e^u$, and $A^u_{\Omega_e^u}$ be the attention weight that domain $\Omega_e^u$ receives in search; then \begin{equation} \begin{split} A^e_{\Omega_e^u} =& \frac{\exp(f_e(\bm{q},\bm{e}))}{\exp(f_e(\bm{q}, \bm{0})) + \sum_{e' \in \Omega_e^u}\exp(f_e(\bm{q},\bm{e'}))} \\ A^u_{\Omega_e^u}=& \frac{\exp(f_u(\bm{q},\bm{u}_e))}{\exp(f_u(\bm{q}, \bm{0})) + \sum_{\Omega_{e'}^u \in \Omega^u}\exp(f_u(\bm{q},\bm{u}_{e'}))} \end{split} \label{equ:HGN_entity_attention} \end{equation} Intuitively, $A^u_{\Omega_e^u}$ is the importance of domain $\Omega_e^u$ in building the final user model $u$, and $A^e_{\Omega_e^u}$ is the importance of entity $e$ in that domain. To explain the behavior of the product search model with this information, we can adopt simple templates to generate user-readable search explanations from the attention weights. For example, for a specific user-query pair, if the attention weight of the domain \textit{Brand} is 0.5, and \textit{Apple} is the entity that received the highest attention within \textit{Brand}, we can generate a pre-hoc search explanation such as ``this product is retrieved 50\% because of the \textit{Brand} of products previously purchased by the user, such as \textit{Apple}''. The second part of the attention weights in HGN is the attention on the zero vector.
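Before turning to that second part, the template-based generation just described can be sketched as follows (our illustration; the domain names, entity names, logits and template wording are all made up, and in practice the $A$ values come from the trained attention functions):

```python
import math

def zero_softmax(zero_logit, logits):
    """Weights over `logits` with an extra zero-vector logit in the
    denominator; returns (weights, zero_weight)."""
    xs = [zero_logit] + logits
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps[1:]], exps[0] / z

# Hypothetical attention logits for one user-query pair (made up):
domain_names = ["Brand", "Category"]
domain_logits = [1.2, 0.3]                       # stand-ins for f_u(q, u_e)
entity_logits = {"Brand": {"Apple": 2.1, "Sony": 0.4},
                 "Category": {"Laptops": 1.0}}   # stand-ins for f_e(q, e)

A_u, A_0 = zero_softmax(0.0, domain_logits)      # domain weights A^u
lines = []
for name, A in zip(domain_names, A_u):
    ents = entity_logits[name]
    A_e, _ = zero_softmax(0.0, list(ents.values()))  # entity weights A^e
    top = max(zip(A_e, ents))[1]                     # highest-attended entity
    lines.append("this product is retrieved %.0f%% because of the %s of "
                 "products previously purchased by the user, such as %s"
                 % (100 * A, name, top))
for line in lines:
    print(line)
```

Each domain contributes one templated sentence, filled with its weight and its most-attended entity; the leftover mass $A_0$ is used by the zero-vector explanation discussed next.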
As depicted in Figure~\ref{fig:model}, HGN allows the model to attend to a zero vector when aggregating the information extracted from each knowledge domain, with weight $A_0^u$ as \begin{equation} \begin{split} A^u_{0} =& \frac{\exp(f_u(\bm{q},\bm{0}))}{\exp(f_u(\bm{q}, \bm{0})) + \sum_{\Omega_{e'}^u \in \Omega^u}\exp(f_u(\bm{q},\bm{u}_{e'}))} \end{split} \label{equ:HGN_query_attention} \end{equation} Particularly, we apply a negative sampling strategy to maximize the probability of observed purchases $P(i|u,q)$, which has been proven to be equivalent to factorizing the pointwise mutual information between $\bm{S_{uq}}$ and $\bm{i}$~\cite{levy2014neural}. According to Eq.~(\ref{equ:P_iuq}) and (\ref{equ:HGN_query_attention}), when $A^u_{0}$ is close to 1, $\bm{S_{uq}}$ degrades to $\bm{q}$ and the final retrieval model essentially retrieves items according to their mutual information with the query. From this perspective, the weight of the zero vector in HGN can be seen as an indicator of the importance of item popularity under the query in the generation of the final ranked list. Therefore, given a particular $A_0^u$, we could explain the results retrieved by HGN as ``this product is retrieved $A_0^u$\% because of its popularity under the query''. \section{Related Work}\label{sec:related_work} There are three lines of studies that are important to our work: Interpretable AI, Explainable IR and Product Search. \textbf{\textit{Interpretable AI}}. The research on interpretable and explainable AI is a growing topic, as concerns about the transparency and accountability of AI systems have increased dramatically in recent years~\cite{gilpin2018explaining}. In general, existing studies on interpretable AI can be broadly categorized into two groups, i.e., studies that explain machine learning (ML) models based on their internal structures, and studies that explain model outputs by treating the ML model as a black box~\cite{lipton2018mythos}.
Examples of the first group include the examination of network neurons and layers~\cite{nguyen2016synthesizing,bau2017network,frankle2018lottery,yosinski2014transferable,sharif2014cnn}, the use of attention networks~\cite{vaswani2017attention,wiegreffe2019attention,jain2019attention}, and the design of disentangled model structures and information representations~\cite{cramer2008effects,burgess2017understanding,higgins2017beta,locatello2018challenging}. Examples of the second group include the construction of proxy models with linear classifiers~\cite{ribeiro2016should}, decision trees~\cite{schmitz1999ann,zilke2016deepred}, extracted rules~\cite{andrews1995survey,fu1994rule}, and salience maps~\cite{simonyan2013deep,zeiler2014visualizing}. Both paradigms have their own advantages and disadvantages depending on the application scenario, and the field of interpretable AI is still young, with numerous new studies and approaches emerging every year~\cite{gilpin2018explaining}. \textbf{\textit{Explainable Recommendation}}. The studies of explainable retrieval systems have drawn the attention of researchers mainly in the last decade. Early IR systems based on term matching are transparent and explainable in nature~\cite{ponte1998language,robertson2009probabilistic,tintarev2015explaining}. However, as more state-of-the-art retrieval systems rely on complex ML and latent representation models~\cite{mitra2018introduction,guo2019deep}, interpretability is no longer a given for IR. Most existing studies on explainable IR focus on recommendation tasks~\cite{zhang2020explainable}. For example, model-based explainable recommendation methods attempt to develop models that generate both recommendations and explanations together~\cite{tintarev2007survey,zhang2016explainable,burke2002hybrid}.
Peake and Wang~\cite{peake2018explanation} created post-hoc explanations based on the latent vectors in recommendation models; Zhang et al.~\cite{zhang2014explicit} explained recommendation results with facets extracted from user reviews. Another line of explainable recommendation research focuses on analyzing the nature of user behaviors to help users better understand recommendations~\cite{herlocker2000explaining, herlocker2000understanding,balog2020measuring}. Bilgic and Mooney~\cite{bilgic2005explaining} used statistical histograms as explanations to help users understand rating distributions; Tintarev and Masthoff~\cite{tintarev2007effective} provided user-centered design approaches to analyze explanation effectiveness. \textbf{\textit{Explainable Search}}. Search is fundamentally different from recommendation, as user intents are explicitly expressed with queries. Different from explainable recommendation, the studies on explainable search mostly focus on the domain of ad-hoc retrieval, i.e., retrieving text documents such as news articles or web pages based on a user's query. For example, Zeon Trevor et al.~\cite{fernando2019study} propose to use DeepSHAP~\cite{lundberg2017unified} to explain the outputs of neural retrieval models; Verma and Ganguly~\cite{verma2019lirme} explore different sampling methods to build explanation models for a given retrieval model and propose a couple of metrics to evaluate the explanations based on the terms in queries and documents. Unfortunately, these methods are not applicable to product search, as they are purely designed for text retrieval, and text matching signals are relatively unimportant~\cite{10.1145/3336191.3371780,ai2019zero} compared to other information such as entity relationships and user purchase history in determining user's purchases.
As for how to create result explanations with heterogeneous entities and information in product search, to the best of our knowledge, the only study on this topic was proposed by Ai et al.~\cite{ai2019explain}, who construct a dynamic relation embedding model that incorporates a product knowledge graph and uses it to explain product search results. However, they only conducted a laboratory study to examine the effectiveness of the explanations generated by their model, and did not compare different explanation methodologies or study which factors are important for product search explanations. \textbf{\textit{Product Search}}. Early studies on product search focus on retrieving products based on structured product facets such as brands and categories~\cite{lim2013semantic,duan2013probabilistic,duan2013supporting}. However, as there exists a significant vocabulary gap between user queries and product descriptions~\cite{van2013deep, nurmi2008product}, state-of-the-art approaches usually conduct product search in latent space with deep learning techniques~\cite{guo2018multi, wang2020metasearch,bi2019study}. For example, Bi et al.~\cite{bi2019negative,10.1145/3404835.3462911} extract fine-grained review information with embedding networks; Guo et al.~\cite{guo2019attentive} model long/short-term user preferences with attention networks over user query history. There are also considerable studies on extracting ranking features and applying learning-to-rank methods for product search~\cite{aryafar2017ensemble,hu2018reinforcement,karmaker2017application,wu2017ensemble,carmel2020multi}. In this paper, our main focus is not to build state-of-the-art product search models but to explore how to generate effective search explanations that improve user experience.
\section{\protect\bigskip Introduction} The interaction of multipole moments with electromagnetic fields has attracted a lot of attention and produced fundamental quantum effects. For example, the Aharonov-Bohm effect \cite{1,2,3,4} for a charged particle, the scalar Aharonov-Bohm and He-McKellar-Wilkens effects \cite{5,6,7,8,9,10,11}, for bound states \cite{12}, and Landau quantization \cite{13,14,15,16} for the electric dipole moment of a neutral particle. Furthermore, recent studies have investigated the interaction between the quadrupole moments of neutral particles and external fields in several quantum systems, such as geometric quantum phases \cite{17}, noncommutative quantum mechanics \cite{19}, nuclear structure \cite{20,21}, atomic systems \cite{22,23,24,25,26,27}, molecular systems \cite{28,29,30}, and Landau quantization \cite{18,31,32,33}. In particular, the study of Landau quantization has recently been applied to the quantum dynamics of an electric quadrupole moment \cite{18,31}. These works investigated the possibility of achieving Landau quantization for neutral particles, resulting from the coupling of the electric quadrupole moment with a magnetic field, in analogy with the minimal coupling of a charged particle to a constant magnetic field \cite{1,2}. Moreover, they discussed the conditions necessary for the field configuration in order to achieve Landau quantization for neutral particles possessing an electric quadrupole moment \cite{17,18}. It is shown in \cite{18} that the field configuration in the quadrupole system depends on the structure of the quadrupole tensor (i.e., diagonal or non-diagonal), and has to be different in each case. However, all the previous studies have used methods for quantum systems of multipole moments with constant mass. Such methods need to be modified to include a spatial dependence of the mass.
On the other hand, quantum mechanical systems with position-dependent mass (PDM) have attracted attention over the years. Namely, the von Roos Hamiltonian \cite{34} has been extensively investigated in the literature \cite{35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55}, not only because of the ordering ambiguity associated with the non-unique representation of the kinetic energy operator, but also because of its feasible applicability in many fields of physics. Recent studies on such PDM charged particles in constant magnetic fields \cite{56,57,58,59,60} and in position-dependent magnetic fields \cite{61} have been carried out (using different interaction potentials). To the best of our knowledge, however, no studies have considered the quantum mechanical effects on PDM neutral particles possessing an electric quadrupole moment. To fill this gap, we discuss in this work a quantum system that consists of a PDM neutral particle with an electric quadrupole moment interacting with external fields. We follow the discussion in \cite{18} and extend their idea to PDM systems. This paper is organized as follows. In section II, we start by giving a brief description of the quantum dynamics of a moving electric quadrupole moment interacting with external fields at constant mass, as done in \cite{18}, and extend it to include the PDM case. In doing so, we use the very recent results suggested in \cite{58,59} for the PDM minimal coupling and the PDM momentum operator. Furthermore, we discuss the possibility of achieving Landau quantization for such a system, and the separability of the problem in the cylindrical coordinates $\left( \rho ,\phi ,z\right) $, under azimuthal symmetrization, by considering that the field configurations and the PDM settings are purely radially dependent, as in \cite{54,55,59,60,61}.
In section III, we discuss a Landau-levels analog for an electric quadrupole moment interacting with an external magnetic field in the absence of an electric field. In the same section we obtain exact eigenfunctions and eigenvalues for different PDM settings. We take into account, in section IV, the effect of an electric field on the problem at hand, by choosing two models for the radial electric field, a Coulomb-type electric field $\overrightarrow{E}=\frac{\lambda }{\rho }\widehat{\rho }$ and a linear-type electric field $\overrightarrow{E}=\frac{\lambda \rho }{2}\widehat{\rho }$. Finally, we report exact solutions of the radial Schr\"{o}dinger equation for both choices of the electric field and for the same PDM settings presented in the previous section. Our conclusion is given in section V. \section{Analogous to the Landau-type quantization:} In this section, we start our discussion by describing the quantum dynamics of a moving electric quadrupole moment interacting with electromagnetic fields as suggested in \cite{17}. By considering an electric quadrupole moment as a scalar particle, the potential energy of a multipole expansion in the rest frame of the particle can be written as \begin{equation} U=q\Phi -\overrightarrow{d}\cdot \overrightarrow{\nabla }\Phi +\underset{i,j}{\tsum }Q_{ij}\partial _{i}\partial _{j}\Phi \end{equation} where $q$ is the electric charge, $\overrightarrow{d}$ is the electric dipole moment, $Q_{ij}$ is the electric quadrupole moment tensor, and $\Phi $ is the electric potential. In order to study the dynamics of an electric quadrupole moment, we consider $q=0$, $\overrightarrow{d}=0$ and $\overrightarrow{E}=-\overrightarrow{\nabla }\Phi $, where $\overrightarrow{E}$ is the electric field.
Therefore, equation (1) reads \begin{equation} U=-\underset{i,j}{\tsum }Q_{ij}\partial _{i}E_{j} \end{equation} For a moving quadrupole, the Lagrangian of this system (a constant-mass system) becomes \begin{equation} L=\frac{1}{2}mv^{2}+\underset{i,j}{\tsum }Q_{ij}\partial _{i}E_{j} \end{equation} Now we must apply the Lorentz transformation of the electromagnetic fields. Therefore, we replace the electric field in (3) by \begin{equation} \overrightarrow{E}\rightarrow \overrightarrow{E}+\frac{1}{c}\overrightarrow{v}\times \overrightarrow{B} \end{equation} where $\overrightarrow{E}$ and $\overrightarrow{B}$ are the electric and magnetic fields, respectively. Thus, the Lagrangian (3) becomes \begin{equation} L=\frac{1}{2}mv^{2}+\overrightarrow{Q}\cdot \overrightarrow{E}-\frac{1}{c}\overrightarrow{v}\cdot (\overrightarrow{Q}\times \overrightarrow{B}) \end{equation} where we used \begin{equation} Q_{i}=\underset{j}{\tsum }Q_{ij}\partial _{j}\quad ,\quad \overrightarrow{Q}=\underset{i}{\tsum }Q_{i}\widehat{e}_{i} \end{equation} as in \cite{18,31}. Using the canonical momentum \begin{equation} \overrightarrow{P}=m\overrightarrow{v}-\frac{1}{c}(\overrightarrow{Q}\times \overrightarrow{B}) \end{equation} the classical Hamiltonian at constant mass reads \begin{equation} H=\frac{1}{2m}\left[ \overrightarrow{P}+\frac{1}{c}(\overrightarrow{Q}\times \overrightarrow{B})\right] ^{2}-\overrightarrow{Q}\cdot \overrightarrow{E} \end{equation} To write the quantum Hamiltonian operator, we replace the canonical momentum $\overrightarrow{P}$ by the operator $\widehat{P}=-i\overrightarrow{\nabla }$ for constant-mass settings. However, in this work we are interested in studying the PDM system.
Thus, we rewrite the PDM non-relativistic Hamiltonian (in $\hbar =2m_{\circ }=c=1$ units) as \begin{equation} \widehat{H}=\left( \frac{\widehat{P}\left( \overrightarrow{r}\right) +\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) }{\sqrt{m\left( \overrightarrow{r}\right) }}\right) ^{2}-\overrightarrow{Q}\cdot \overrightarrow{E} \end{equation} where the kinetic energy term was proposed by Mustafa and Algadhi \cite{59} along with the definition of the PDM momentum operator (which resulted from a factorization recipe of Mustafa and Mazharimousavi in \cite{46}): \begin{equation} \widehat{P}\left( \overrightarrow{r}\right) =-i\left[ \overrightarrow{\nabla }-\frac{1}{4}\left( \frac{\overrightarrow{\nabla }m\left( \overrightarrow{r}\right) }{m\left( \overrightarrow{r}\right) }\right) \right] \end{equation} where \begin{equation} \overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) =\overrightarrow{Q}\times \overrightarrow{B}\quad ,\quad V_{eff}\left( \overrightarrow{r}\right) =-\overrightarrow{Q}\cdot \overrightarrow{E} \end{equation} In this way, the corresponding time-independent Schr\"{o}dinger equation is written in the form \begin{equation} \left[ \left( \frac{\widehat{P}\left( \overrightarrow{r}\right) +\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) }{\sqrt{m\left( \overrightarrow{r}\right) }}\right) ^{2}-\overrightarrow{Q}\cdot \overrightarrow{E}\right] \psi \left( \overrightarrow{r}\right) =\varepsilon \psi \left( \overrightarrow{r}\right) . \end{equation} Hence, \begin{eqnarray} &&\left[ \left( \frac{\widehat{P}\left( \overrightarrow{r}\right) }{\sqrt{m\left( \overrightarrow{r}\right) }}\right) ^{2}-\frac{2i}{m\left( \overrightarrow{r}\right) }\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) \cdot \overrightarrow{\nabla }-\frac{i}{m\left( \overrightarrow{r}\right) }\left( \overrightarrow{\nabla }\cdot \overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) \right) \right. \notag \\ &&\left.
+i\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) \cdot \left( \frac{\overrightarrow{\nabla }m\left( \overrightarrow{r}\right) }{m\left( \overrightarrow{r}\right) ^{2}}\right) +\frac{\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) ^{2}}{m\left( \overrightarrow{r}\right) }-\overrightarrow{Q}\cdot \overrightarrow{E}\right] \left. \psi \left( \overrightarrow{r}\right) =\varepsilon \psi \left( \overrightarrow{r}\right) \right. \end{eqnarray} in which the vector potential satisfies the Coulomb gauge $\overrightarrow{\nabla }\cdot \overrightarrow{A}_{eff}=0$. Moreover, using the momentum operator in equation (10) would imply: \begin{eqnarray} &&\left[ -\frac{1}{m\left( \overrightarrow{r}\right) }\overrightarrow{\nabla }^{2}+\left( \frac{\overrightarrow{\nabla }m\left( \overrightarrow{r}\right) }{m\left( \overrightarrow{r}\right) ^{2}}\right) \cdot \overrightarrow{\nabla }+\frac{1}{4}\left( \frac{\overrightarrow{\nabla }^{2}m\left( \overrightarrow{r}\right) }{m\left( \overrightarrow{r}\right) ^{2}}\right) -\frac{7}{16}\left( \frac{\left[ \overrightarrow{\nabla }m\left( \overrightarrow{r}\right) \right] ^{2}}{m\left( \overrightarrow{r}\right) ^{3}}\right) -\frac{2\ i\ }{m\left( \overrightarrow{r}\right) }\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) \cdot \overrightarrow{\nabla }\right. \notag \\ &&\qquad \quad \qquad \qquad \left. +i\ \overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) \cdot \left( \frac{\overrightarrow{\nabla }m\left( \overrightarrow{r}\right) }{m\left( \overrightarrow{r}\right) ^{2}}\right) +\frac{\overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) ^{2}}{m\left( \overrightarrow{r}\right) }-\overrightarrow{Q}\cdot \overrightarrow{E}\right] \left. \psi \left( \overrightarrow{r}\right) =\varepsilon \psi \left( \overrightarrow{r}\right) .\right.
\end{eqnarray} The possibility of achieving the Landau quantization for an electric quadrupole moment was discussed in \cite{18}, where it was found that the Landau quantization can be achieved by imposing two conditions: the first one is that the tensor $Q_{ij}$ must be symmetric and traceless, and the second one is that the field configuration must be chosen in such a way that there exists a uniform effective magnetic field given by \begin{equation} \overrightarrow{B}_{eff}=\overrightarrow{\nabla }\times \overrightarrow{A}_{eff}=\overrightarrow{\nabla }\times (\overrightarrow{Q}\times \overrightarrow{B})=constant\ vector \end{equation} perpendicular to the plane of motion of the electric quadrupole moment. Thus, it is clear that the field configuration depends on the choice of the components of the tensor $Q_{ij}$ that describes the electric quadrupole moment. Moreover, $\overrightarrow{E}$ must satisfy the electrostatic conditions $\left( \overrightarrow{\nabla }\times \overrightarrow{E}=0~,~\partial _{t}\overrightarrow{E}=0\right) .$ In the following, we present the field configurations and structures of the tensor $Q_{ij}$. Thus, we choose the case when the tensor $Q_{ij}$ has the non-null components: \begin{equation} Q_{\rho \rho }=Q_{\phi \phi }=Q,~~Q_{zz}=-2Q\quad (diagonal~form) \end{equation} which was studied in \cite{17,18}, where $Q$ is a constant. It is notable that this choice satisfies the properties of the tensor $Q_{ij}.$ Moreover, we consider a magnetic field given by \cite{18,31} \begin{equation} \overrightarrow{B}=\frac{1}{2}B_{\circ }\rho ^{2}\ \widehat{z} \end{equation} where $B_{\circ }$ is a constant.
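Before working out $\overrightarrow{A}_{eff}$ analytically, a quick numerical sanity check (ours, not part of the paper; the constants are arbitrary) confirms by finite differences that this configuration yields a uniform effective field $\overrightarrow{B}_{eff}=\overrightarrow{\nabla }\times (\overrightarrow{Q}\times \overrightarrow{B})$:

```python
# Finite-difference check that B = (1/2) B0 rho^2 z_hat together with
# Q_rhorho = Q_phiphi = Q, Q_zz = -2Q gives a uniform effective field.
Q, B0 = 0.7, 1.3        # arbitrary constants
h = 1e-6                # finite-difference step

def Bz(rho):            # z-component of the applied magnetic field
    return 0.5 * B0 * rho**2

def A_phi(rho):         # (Q x B)_phi = -Q_rho B_z with Q_rho = Q d/drho
    return -Q * (Bz(rho + h) - Bz(rho - h)) / (2 * h)

def Beff_z(rho):        # z-component of the curl in cylindrical coords:
    # (1/rho) d(rho A_phi)/drho
    return ((rho + h) * A_phi(rho + h)
            - (rho - h) * A_phi(rho - h)) / (2 * h * rho)

samples = [Beff_z(r) for r in (0.5, 1.0, 2.0, 5.0)]
print(samples)          # approximately -2*Q*B0 at every radius
```

All sampled radii give (up to finite-difference error) the same value, consistent with a uniform $\overrightarrow{B}_{eff}=-2QB_{\circ }\widehat{z}$.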
By using the definitions in (6) and the assumption in (16), we obtain the electric quadrupole moment as \begin{equation} \overrightarrow{Q}=\left( Q\partial _{\rho }\right) \widehat{\rho }\ +\left( Q\partial _{\phi }\right) \widehat{\phi }-\left( 2Q\partial _{z}\right) \widehat{z} \end{equation} At this point, we can find the effective vector potential $\overrightarrow{A}_{eff}$ as \begin{equation} \overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) =\overrightarrow{Q}\times \overrightarrow{B}=-QB_{\circ }\rho \widehat{\phi } \end{equation} Consequently, the effective magnetic field reads \begin{equation} \overrightarrow{B}_{eff}\left( \overrightarrow{r}\right) =\overrightarrow{\nabla }\times \overrightarrow{A}_{eff}\left( \overrightarrow{r}\right) =-2QB_{\circ }\widehat{z} \end{equation} which satisfies the second condition (15), where $\overrightarrow{B}_{eff}$ is a uniform effective magnetic field. We may now discuss the separability of the PDM Schr\"{o}dinger equation (14) in the cylindrical coordinates $\left( \rho ,\phi ,z\right) $ and under azimuthal symmetrization. By assuming that the field configurations and the PDM functions are only radially dependent \cite{54,55,59,60,61} (i.e., $m\left( \overrightarrow{r}\right) =M\left( \rho ,\phi ,z\right) =g\left( \rho \right) $), the wavefunction can be written as \begin{equation} \psi \left( \rho ,\phi ,z\right) =e^{im\phi }e^{ikz}R\left( \rho \right) \end{equation} where $m=0,\pm 1,\pm 2,....,\pm \ell $ is the magnetic quantum number. Thereby, and with the substitution of (19), (20) and (21) into (14), we obtain the radial equation: \begin{eqnarray} &&\left.
\frac{R^{\prime \prime }\left( \rho \right) }{R\left( \rho \right) }-\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }-\frac{1}{\rho }\right) \frac{R^{\prime }\left( \rho \right) }{R\left( \rho \right) }-\frac{1}{4}\left( \frac{g^{\prime \prime }\left( \rho \right) }{g\left( \rho \right) }+\frac{g^{\prime }\left( \rho \right) }{\rho g\left( \rho \right) }\right) +\frac{7}{16}\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }\right) ^{2}\right. \notag \\ &&\qquad \qquad \qquad \left. +g\left( \rho \right) (\varepsilon +\overrightarrow{Q}\cdot \overrightarrow{E})-\frac{m^{2}}{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+2QB_{\circ }m-k_{z}^{2}=0.\right. \end{eqnarray} This is to be solved for a vanishing electric field $\overrightarrow{E}=0$ and for different choices of radial electric fields $\overrightarrow{E}\neq 0,$ with suitable PDM settings, to find the exact eigenvalues and eigenfunctions of the system. \section{\protect\bigskip PDM particles with an electric quadrupole moment in a magnetic field:} In this section, we focus on the discussion of Landau quantization for an electric quadrupole moment interacting with an external magnetic field, with a vanishing electric field $\overrightarrow{E}=0$ $\left( i.e.~~V_{eff}=0\right) .$ At this point, equation (22) would read \begin{eqnarray} &&R^{\prime \prime }\left( \rho \right) +\left[ -\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }-\frac{1}{\rho }\right) R^{\prime }\left( \rho \right) -\frac{1}{4}\left( \frac{g^{\prime \prime }\left( \rho \right) }{g\left( \rho \right) }+\frac{g^{\prime }\left( \rho \right) }{\rho g\left( \rho \right) }\right) +\frac{7}{16}\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }\right) ^{2}\right. \notag \\ &&\qquad \qquad \qquad \left. +g\left( \rho \right) \varepsilon -\frac{m^{2}}{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0.
\end{eqnarray} In the following examples, we use some power-law PDM settings in the radial Schr\"{o}dinger equation (23) and report their exact solutions. \subsection{\protect\bigskip Model-I: A linear-type PDM $g\left( \protect\rho \right) =\protect\eta \protect\rho $ :} Consider a neutral particle with the radial cylindrical PDM setting, $g\left( \rho \right) =\eta \rho ,$ and the electric quadrupole moment of (18) in the presence of the magnetic field in (17). Then, the Schr\"{o}dinger equation (23) would read \begin{equation} R^{\prime \prime }\left( \rho \right) +\left[ \frac{-\left( m^{2}-3/16\right) }{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+\eta \rho \varepsilon +2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0 \end{equation} Now, let us make a simple change of variables in equation (24) and use $r=\sqrt{QB_{\circ }}\rho $. Then equation (24) becomes \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \frac{-\left( m^{2}-3/16\right) }{r^{2}}-r^{2}+\frac{\eta \varepsilon }{\left( QB_{\circ }\right) ^{3/2}}r+\frac{2QB_{\circ }m-k_{z}^{2}}{QB_{\circ }}\right] R\left( r\right) =0, \end{equation} which implies the one-dimensional Schr\"{o}dinger form of the biconfluent Heun equation (see \cite{63,64}) \smallskip \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \frac{\left( 1-\alpha ^{2}\right) }{4r^{2}}-\frac{1}{2r}\delta -\beta r-r^{2}+\gamma -\frac{\beta ^{2}}{4}\right] R\left( r\right) =0 \end{equation} where \begin{equation} \frac{1}{4}\left( 1-\alpha ^{2}\right) =3/16-m^{2},~\beta =\frac{-\eta \varepsilon }{\left( QB_{\circ }\right) ^{3/2}},~\frac{\delta }{2}=0,\quad \gamma -\frac{\beta ^{2}}{4}=\frac{2QB_{\circ }m-k_{z}^{2}}{QB_{\circ }} \end{equation} To solve the above equation $\left( 26\right) $, we consider the asymptotic behavior for $r\rightarrow 0~and~r\rightarrow \infty ;$ the function $R\left( r\right) $ can be written in terms of an unknown function $u\left( r\right) $ as follows \begin{equation} R\left( r\right) =r^{\left(
1+\alpha \right) /2}e^{-\left( \beta r+r^{2}\right) /2}u\left( r\right) \end{equation} which transforms equation (26) into the simpler form \begin{equation} ru^{\prime \prime }\left( r\right) +\left[ 1+\alpha -\beta r-2r^{2}\right] u^{\prime }\left( r\right) +\left[ \left( \gamma -2-\alpha \right) r-\frac{1}{2}\left( \delta +\left( 1+\alpha \right) \beta \right) \right] u\left( r\right) =0 \end{equation} which is the biconfluent Heun-type equation (BHE) \cite{64}, where $\alpha ,\beta ,\gamma $ and $\delta $ are arbitrary parameters. The polynomial solutions of this equation (c.f., e.g., \cite{62,63,64,65,66}) are \begin{equation} u\left( r\right) =H_{B}\left( \alpha ,\beta ,\gamma ,\delta ;r\right) \end{equation} where $H_{B}\left( \alpha ,\beta ,\gamma ,\delta ;r\right) $ are the Heun polynomials of degree $n_{\rho }$ such that \begin{equation} \gamma -2-\alpha =2n_{\rho },\quad where\quad n_{\rho }=0,1,2,...,\quad and\quad a_{n_{\rho }+1}=0~. \end{equation} Here, $n_{\rho }$ is the radial quantum number and $a_{n_{\rho }+1}$ is a polynomial of degree $n_{\rho }+1$ defined by a recurrence relation (see \cite{64,65,66} for more details).
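As a quick check of the asymptotics underlying (28) (our addition, not part of the original derivation): for $r\rightarrow \infty $ equation (26) reduces to
\[
R^{\prime \prime }\left( r\right) -r^{2}R\left( r\right) \approx 0,
\]
whose decaying solution behaves as $R\sim e^{-r^{2}/2}$, while for $r\rightarrow 0$ it reduces to
\[
R^{\prime \prime }\left( r\right) +\frac{\left( 1-\alpha ^{2}\right) }{4r^{2}}R\left( r\right) \approx 0,
\]
whose regular solution behaves as $R\sim r^{\left( 1+\alpha \right) /2}$. The ansatz (28) factors out precisely these two behaviors (together with the subleading $e^{-\beta r/2}$ factor), leaving $u\left( r\right) $ to satisfy the polynomial-friendly equation (29).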
By substituting $\left( 27\right) $ into $\left( 31\right) $, we get the exact eigenvalues \begin{equation} \varepsilon _{n_{\rho },m}=\frac{\left( 2QB_{\circ }\right) ^{3/2}}{\eta }\left[ 1+n_{\rho }-m+\sqrt{m^{2}+1/16}+\frac{k_{z}^{2}}{2QB_{\circ }}\right] ^{1/2} \end{equation} where the cyclotron frequency is $\omega =\frac{\left( 2QB_{\circ }\right) ^{3/2}}{\eta },$ and the exact normalized eigenfunctions are \begin{equation} R\left( \rho \right) =N\rho ^{\left\vert \widetilde{\ell }\right\vert +1/2}e^{-\left( \frac{Q^{2}B_{\circ }^{2}\rho ^{2}-\eta \varepsilon \rho }{2QB_{\circ }}\right) }H_{B}\left( \alpha ,\beta ,\gamma ,0;\sqrt{QB_{\circ }}\rho \right) \end{equation} where $N$ is the normalization constant, $\left\vert \widetilde{\ell }\right\vert =\sqrt{m^{2}+1/16}$, and $\alpha ,\beta ~and~\gamma $ are defined in (27). Compared with \cite{31}, the eigenvalues are changed due to the effect of the PDM of the system: the energy spectrum (32) is proportional to $n_{\rho }^{1/2}$ and removes the degeneracies (associated with the magnetic quantum number), in contrast to the energy levels reported in \cite{31}, which are proportional to $n$. Furthermore, the frequency is also modified.
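As a consistency check (our reconstruction; the intermediate steps are not spelled out above), the spectrum (32) indeed follows from (27) and (31). The first relation in (27) gives $\alpha =2\sqrt{m^{2}+1/16}$, and the last gives $\gamma =\frac{\beta ^{2}}{4}+2m-\frac{k_{z}^{2}}{QB_{\circ }}$, so that the termination condition $\gamma -2-\alpha =2n_{\rho }$ becomes
\[
\frac{\beta ^{2}}{4}=2\left[ 1+n_{\rho }-m+\sqrt{m^{2}+1/16}+\frac{k_{z}^{2}}{2QB_{\circ }}\right] .
\]
Using $\beta ^{2}=\eta ^{2}\varepsilon ^{2}/\left( QB_{\circ }\right) ^{3}$ and solving for $\varepsilon $ reproduces $\varepsilon _{n_{\rho },m}$ in (32).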
\subsection{\protect\bigskip Model-II: A harmonic-type PDM $g\left( \protect\rho \right) =\protect\eta \protect\rho ^{2}$:} A PDM neutral particle with $g\left( \rho \right) =\eta \rho ^{2},$ and an electric quadrupole moment interacting with the external magnetic field (17), would imply that equation (23) be rewritten as \begin{equation} R^{\prime \prime }\left( \rho \right) -\frac{1}{\rho }R^{\prime }\left( \rho \right) +\left[ \frac{-\left( m^{2}-3/4\right) }{\rho ^{2}}-\left( Q^{2}B_{\circ }^{2}-\eta \varepsilon \right) \rho ^{2}+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0 \end{equation} To determine the radial part $R\left( \rho \right) $ of the wave function and the energy spectrum, we follow the same analysis as Gasiorowicz \cite{68} and start by using the change of variable $x=\left( Q^{2}B_{\circ }^{2}-\eta \varepsilon \right) ^{1/4}\rho $ in (34) to obtain \begin{equation} R^{\prime \prime }\left( x\right) -\frac{1}{x}R^{\prime }\left( x\right) -\frac{L^{2}}{x^{2}}R\left( x\right) -x^{2}R\left( x\right) +\mu R\left( x\right) =0 \end{equation} where \begin{equation} L^{2}=m^{2}-3/4\quad and\quad \mu =\frac{2QB_{\circ }m-k_{z}^{2}}{\left( Q^{2}B_{\circ }^{2}-\eta \varepsilon \right) ^{1/2}} \end{equation} Next, we consider the asymptotic solutions $\left( x\rightarrow 0~and~x\rightarrow \infty \right) $ of the radial wavefunction $R\left( x\right) $ to come out with \begin{equation} R\left( x\right) =x^{1+\left\vert \widetilde{\ell }\right\vert }e^{-x^{2}/2}G\left( x\right) \end{equation} with \begin{equation} \widetilde{\ell }^{2}=L^{2}+1\Longrightarrow ~\left\vert \widetilde{\ell }\right\vert =\sqrt{m^{2}+1/4},\quad with\ \ \widetilde{\ell }>0.
\end{equation} Substituting (37) in (35) would imply \begin{equation} G^{\prime \prime }\left( x\right) +\left( \frac{1+2\left\vert \widetilde{\ell }\right\vert }{x}-2x\right) G^{\prime }\left( x\right) +\left( \mu -2-2\left\vert \widetilde{\ell }\right\vert \right) G\left( x\right) =0 \end{equation} which, in turn, with $y=x^{2}$ yields \begin{equation} yG^{\prime \prime }\left( y\right) +\left( 1+\left\vert \widetilde{\ell }\right\vert -y\right) G^{\prime }\left( y\right) +\left( \frac{\mu }{4}-\frac{\left\vert \widetilde{\ell }\right\vert }{2}-\frac{1}{2}\right) G\left( y\right) =0 \end{equation} This equation is the confluent hypergeometric equation, the series of which reduces to a polynomial of degree $n_{\rho }$ (finite everywhere) when \begin{equation} n_{\rho }=\frac{\mu }{4}-\frac{\left\vert \widetilde{\ell }\right\vert }{2}-\frac{1}{2}~. \end{equation} Consequently, (36) and (38) would give the eigenvalues as \begin{equation} \varepsilon _{n_{\rho },m}=\frac{Q^{2}B_{\circ }^{2}}{\eta }\left[ 1-\left( \frac{\frac{k_{z}^{2}}{2QB_{\circ }}-m}{1+2n_{\rho }+\sqrt{m^{2}+1/4}}\right) ^{2}\right] \end{equation} and the eigenfunctions as \begin{equation} R\left( \rho \right) =N\rho ^{\left\vert \widetilde{\ell }\right\vert +1}e^{-\frac{\sqrt{Q^{2}B_{\circ }^{2}-\eta \varepsilon }}{2}\rho ^{2}}{}_{1}F_{1}\left( -n_{\rho };\left\vert \widetilde{\ell }\right\vert +1;\sqrt{Q^{2}B_{\circ }^{2}-\eta \varepsilon }\rho ^{2}\right) . \end{equation} In this case, the effect of the PDM setting produces a new contribution to the non-degenerate energy levels (42), where they are proportional to $n^{-2}$ and the frequency is modified as $\varpi =\frac{Q^{2}B_{\circ }^{2}}{\eta }.$ \section{PDM-particles with an electric quadrupole moment and electromagnetic fields:} In this section, we study PDM particles with an electric quadrupole moment and electromagnetic fields.
However, we focus on analyzing the effect of the electric field on a PDM particle with an electric quadrupole moment in the presence of a magnetic field (i.e., on the Landau-type system reported in the previous section). For this purpose, we choose a sample of radial electric fields (cf., e.g., \cite{14,15,16,24,31}), in the following illustrative examples. \subsection{\protect\bigskip The influence of a Coulomb-type electric field on the Landau-type system:} Consider a radial electric field in the form of \begin{equation} \overrightarrow{E}=\frac{\lambda }{\rho }\widehat{\rho } \end{equation} where $\lambda $ is a constant \cite{31}. Thus, we can see that the interaction between the electric quadrupole moment (18) and the electric field (44) leads to an effective scalar potential \begin{equation} V_{eff}\left( \rho \right) =-\overrightarrow{Q}.\overrightarrow{E}=\frac{Q\lambda }{\rho ^{2}} \end{equation} which plays the role of a scalar potential in the PDM-Schr\"{o}dinger equation (22) to imply \begin{eqnarray} &&R^{\prime \prime }\left( \rho \right) -\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }-\frac{1}{\rho }\right) R^{\prime }\left( \rho \right) +\left[ -\frac{1}{4}\left( \frac{g^{\prime \prime }\left( \rho \right) }{g\left( \rho \right) }+\frac{g^{\prime }\left( \rho \right) }{\rho g\left( \rho \right) }\right) +\frac{7}{16}\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }\right) ^{2}\right. \notag \\ &&\qquad ~~+\left. g\left( \rho \right) \varepsilon -g\left( \rho \right) \frac{Q\lambda }{\rho ^{2}}-\frac{m^{2}}{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0. 
\end{eqnarray} Hereby, we again consider the same examples used in the previous section to find exact solutions of equation (46): \subsubsection{Model-I : $g\left( \protect\rho \right) =\protect\eta \protect\rho $ :} The PDM radial Schr\"{o}dinger equation (46) with $g\left( \rho \right) =\eta \rho $ reads \begin{equation} R^{\prime \prime }\left( \rho \right) +\left[ \frac{-\left( m^{2}-3/16\right) }{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+\eta \rho \varepsilon -\frac{\eta \lambda Q}{\rho }+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0 \end{equation} With the change of variable $r=\sqrt{QB_{\circ }}\rho ,$ equation (47) becomes \begin{equation} R^{\prime \prime }\left( r\right) +\left[ \frac{-\left( m^{2}-3/16\right) }{r^{2}}-r^{2}+\frac{\eta \varepsilon }{\left( QB_{\circ }\right) ^{3/2}}r-\frac{\eta \lambda Q}{\left( QB_{\circ }\right) ^{1/2}r}+\frac{2QB_{\circ }m-k_{z}^{2}}{QB_{\circ }}\right] R\left( r\right) =0 \end{equation} To find its solutions, we define the parameters \begin{equation} \frac{1}{4}\left( 1-\alpha ^{2}\right) =3/16-m^{2},~\beta =\frac{-\eta \varepsilon }{\left( QB_{\circ }\right) ^{3/2}},~\frac{\delta }{2}=\frac{\eta \lambda Q}{\left( QB_{\circ }\right) ^{1/2}},\quad \gamma -\frac{\beta ^{2}}{4}=\frac{2QB_{\circ }m-k_{z}^{2}}{QB_{\circ }}, \end{equation} and follow the same steps as in (28) to (31). 
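As a brief consistency check (a step reconstructed here from (47) and (49), and not spelled out in the derivation above): near $\rho =0$, equation (47) admits $R\left( \rho \right) \sim \rho ^{s}$ with the indicial equation
\[
s\left( s-1\right) =m^{2}-3/16\Longrightarrow s=\frac{1}{2}+\sqrt{m^{2}+1/16}=\frac{1+\alpha }{2},
\]
since (49) gives $\alpha ^{2}=4m^{2}+1/4$. This identifies the small-$\rho $ prefactor $\rho ^{\left\vert \widetilde{\ell }\right\vert +1/2}$, with $\left\vert \widetilde{\ell }\right\vert =\sqrt{m^{2}+1/16}$, that appears in the radial eigenfunction (51). 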
Thus the exact eigenvalues are \begin{equation} \varepsilon _{n_{\rho },m}=\frac{\left( 2QB_{\circ }\right) ^{3/2}}{\eta }\left[ \left( 1+n_{\rho }-m+\sqrt{m^{2}+1/16}\right) +\frac{k_{z}^{2}}{2QB_{\circ }}\right] ^{1/2} \end{equation} and the exact eigenfunctions are \begin{equation} R\left( \rho \right) =N\rho ^{\left\vert \widetilde{\ell }\right\vert +1/2}e^{-\left( \frac{Q^{2}B_{\circ }^{2}\rho ^{2}-\eta \varepsilon \rho }{2QB_{\circ }}\right) }H_{B}\left( \alpha ,\beta ,\gamma ,\delta ;\sqrt{QB_{\circ }}\rho \right) \end{equation} \smallskip It is obvious that these eigenvalues (50) are the same as the eigenvalues in the absence of an electric field for the same PDM setting given in (32), but with different eigenfunctions. Thus, the effective potential with the PDM setting does not affect the eigenvalues of the system. \subsubsection{Model-II: $g\left( \protect\rho \right) =\protect\eta \protect\rho ^{2}$ :} The substitution of $g\left( \rho \right) =\eta \rho ^{2}$ in the PDM radial Schr\"{o}dinger equation (46) would yield \begin{equation} R^{\prime \prime }\left( \rho \right) -\frac{1}{\rho }R^{\prime }\left( \rho \right) +\left[ \frac{-\left( m^{2}-3/4\right) }{\rho ^{2}}-\left( Q^{2}B_{\circ }^{2}-\eta \varepsilon \right) \rho ^{2}-\eta \lambda Q+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0 \end{equation} We repeat the same procedure as in the previous section and immediately write the corresponding eigenvalues and radial wave functions, respectively, as \begin{equation} \varepsilon _{n_{\rho },m}=\frac{Q^{2}B_{\circ }^{2}}{\eta }\left[ 1-\left( \frac{\frac{\eta \lambda Q+k_{z}^{2}}{2QB_{\circ }}-m}{1+2n_{\rho }+\sqrt{m^{2}+1/4}}\right) ^{2}\right] \end{equation} and \begin{equation} R\left( \rho \right) =N\rho ^{\left\vert \widetilde{\ell }\right\vert +1}e^{-\frac{\sqrt{Q^{2}B_{\circ }^{2}-\eta \varepsilon }}{2}\rho ^{2}}{}_{1}F_{1}\left( -n_{\rho };\left\vert \widetilde{\ell }\right\vert +1;\sqrt{Q^{2}B_{\circ }^{2}-\eta \varepsilon }\rho 
^{2}\right) \end{equation} where $\left\vert \widetilde{\ell }\right\vert =\sqrt{m^{2}+1/4},$ with $\widetilde{\ell }>0.$ In this case, the influence of the PDM effective potential appears as a shift in the energy levels of (42), thereby producing the new eigenvalues (53). \subsection{The influence of a linear-type electric field on the Landau-type system:} Now, let us consider another radial electric field (e.g., \cite{14,15,16,24}) as \begin{equation} \overrightarrow{E}=\frac{\lambda \rho }{2}~\widehat{\rho } \end{equation} Using the same components of the electric quadrupole moment tensor defined in (16), the effective scalar potential given in the PDM-Schr\"{o}dinger equation (22) becomes \begin{equation} V_{eff}\left( \rho \right) =-\overrightarrow{Q}.\overrightarrow{E}=-\frac{Q\lambda }{2} \end{equation} Hence, equation (22) reads \begin{eqnarray} &&R^{\prime \prime }\left( \rho \right) -\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }-\frac{1}{\rho }\right) R^{\prime }\left( \rho \right) +\left[ -\frac{1}{4}\left( \frac{g^{\prime \prime }\left( \rho \right) }{g\left( \rho \right) }+\frac{g^{\prime }\left( \rho \right) }{\rho g\left( \rho \right) }\right) +\frac{7}{16}\left( \frac{g^{\prime }\left( \rho \right) }{g\left( \rho \right) }\right) ^{2}\right. \notag \\ &&\qquad ~~+\left. g\left( \rho \right) \varepsilon +g\left( \rho \right) \frac{Q\lambda }{2}-\frac{m^{2}}{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0. 
\end{eqnarray} In the two examples below, we investigate the influence of this effective potential using the same PDM settings that were used in the previous sections: \subsubsection{\protect\bigskip Model-I: $g\left( \protect\rho \right) =\protect\eta \protect\rho :$} With $g\left( \rho \right) =\eta \rho $ in (57), we obtain \begin{equation} R^{\prime \prime }\left( \rho \right) +\left[ \frac{-\left( m^{2}-3/16\right) }{\rho ^{2}}-Q^{2}B_{\circ }^{2}\rho ^{2}+\left( \eta \varepsilon +\frac{\eta \lambda Q}{2}\right) \rho +2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0 \end{equation} \smallskip We use the previous technique for the linear-type PDM to obtain the exact solutions for this case. Hence, this corresponds to the exact eigenvalues and eigenfunctions given, respectively, by \begin{equation} \varepsilon _{n_{\rho },m}=\frac{\left( 2QB_{\circ }\right) ^{3/2}}{\eta }\left[ 1+n_{\rho }-m+\sqrt{m^{2}+1/16}+\frac{k_{z}^{2}}{2QB_{\circ }}\right] ^{1/2}-\frac{\lambda Q}{2} \end{equation} \begin{equation} R\left( \rho \right) =N\rho ^{\left\vert \widetilde{\ell }\right\vert +1/2}e^{-\frac{Q^{2}B_{\circ }^{2}\rho ^{2}-\left( \eta \varepsilon +\frac{\eta \lambda Q}{2}\right) \rho }{2QB_{\circ }}}H_{B}\left( \alpha ,\beta ,\gamma ,0;\sqrt{QB_{\circ }}\rho \right) \end{equation} where $\left\vert \widetilde{\ell }\right\vert =\sqrt{m^{2}+1/16}$ and the parameters $\alpha ,\beta ,\gamma ~and~\delta $ are defined as \begin{equation} \frac{1}{4}\left( 1-\alpha ^{2}\right) =3/16-m^{2},\ \ \beta =\frac{-\left( \eta \varepsilon +\frac{\eta \lambda Q}{2}\right) }{\left( QB_{\circ }\right) ^{3/2}},\ \ \gamma -\frac{\beta ^{2}}{4}=\frac{2QB_{\circ }m-k_{z}^{2}}{QB_{\circ }},\ \ \frac{\delta }{2}=0. 
\end{equation} \subsubsection{Model-II: $g\left( \protect\rho \right) =\protect\eta \protect\rho ^{2}$ :} Considering this radial cylindrical PDM would imply that equation (57) be rewritten as \begin{equation} R^{\prime \prime }\left( \rho \right) -\frac{1}{\rho }R^{\prime }\left( \rho \right) +\left[ \frac{-\left( m^{2}-3/4\right) }{\rho ^{2}}-\left( Q^{2}B_{\circ }^{2}-\frac{\eta \lambda Q}{2}-\eta \varepsilon \right) \rho ^{2}+2QB_{\circ }m-k_{z}^{2}\right] R\left( \rho \right) =0 \end{equation} Equation (62) is again of the same form as equation (34) and admits the exact solution of the eigenvalues and the corresponding radial eigenfunctions as \begin{equation} \varepsilon _{n_{\rho },m}=\frac{Q^{2}B_{\circ }^{2}}{\eta }\left[ 1-\left( \frac{\frac{k_{z}^{2}}{2QB_{\circ }}-m}{1+2n_{\rho }+\sqrt{m^{2}+1/4}}\right) ^{2}\right] -\frac{\lambda Q}{2} \end{equation} and \begin{equation} R\left( \rho \right) =N\rho ^{\left\vert \widetilde{\ell }\right\vert +1}e^{-\frac{\sqrt{Q^{2}B_{\circ }^{2}-\frac{\eta \lambda Q}{2}-\eta \varepsilon }}{2}\rho ^{2}}{}_{1}F_{1}\left( -n_{\rho };\left\vert \widetilde{\ell }\right\vert +1;\sqrt{Q^{2}B_{\circ }^{2}-\frac{\eta \lambda Q}{2}-\eta \varepsilon }\rho ^{2}\right) \end{equation} where $\left\vert \widetilde{\ell }\right\vert =\sqrt{m^{2}+1/4},$ with $\widetilde{\ell }>0.$ It is shown that the effective potential generated by the interaction between the electric quadrupole moment and the radial electric field given by (55) produces a constant potential ($-\frac{Q\lambda }{2}$). Thus, the effect of the mass settings on the effective potential, through the term $g\left( \rho \right) \overrightarrow{Q}.\overrightarrow{E}$, yields only a constant shift in the energy levels given in (32) and (42), creating a new set of energies given in (59) and (63). 
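The spectra above can be sanity-checked numerically. The sketch below is a cross-check added here, not part of the original derivation; the function names and parameter values are illustrative. It evaluates the Model-II eigenvalues (42), verifies that they reproduce the quantization condition (41) through the definition of $\mu $ in (36), and checks that (63) is (42) rigidly shifted by $-\lambda Q/2$.

```python
import math

def eps_model2(Q, B0, eta, m, kz, n_rho, lam=0.0):
    """Model-II eigenvalues: eq. (42); with lam != 0 this is eq. (63)."""
    ell = math.sqrt(m**2 + 0.25)          # |l~| = sqrt(m^2 + 1/4), eq. (38)
    D = 1 + 2*n_rho + ell                 # denominator in eq. (42)
    return (Q**2 * B0**2 / eta) * (1 - ((kz**2 / (2*Q*B0) - m) / D)**2) - lam*Q/2

def n_rho_from_eps(Q, B0, eta, m, kz, eps):
    """Recover the radial quantum number via mu of eq. (36) and eq. (41)."""
    mu = (2*Q*B0*m - kz**2) / math.sqrt(Q**2 * B0**2 - eta*eps)
    return mu/4 - math.sqrt(m**2 + 0.25)/2 - 0.5

# Illustrative parameters (units suppressed); m > kz^2/(2*Q*B0) keeps mu > 0.
Q, B0, eta, m, kz, lam = 1.0, 1.0, 1.0, 2, 1.0, 0.4
for n_rho in range(5):
    eps = eps_model2(Q, B0, eta, m, kz, n_rho)
    # round-trip: eq. (42) -> mu of eq. (36) -> n_rho of eq. (41)
    assert abs(n_rho_from_eps(Q, B0, eta, m, kz, eps) - n_rho) < 1e-9
    # eq. (63) is eq. (42) shifted by -lam*Q/2
    assert abs(eps_model2(Q, B0, eta, m, kz, n_rho, lam) - (eps - lam*Q/2)) < 1e-12
```

Note that the round-trip only closes when $\mu >0$, i.e., for $m>k_{z}^{2}/(2QB_{\circ })$; outside this range the square root in (36) still exists but the termination condition (41) cannot yield a non-negative $n_{\rho }$.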
\section{Concluding Remarks} In this paper, we have started with a quantum system of a constant-mass particle with an electric quadrupole moment interacting with magnetic and electric fields, as has been previously reported in the literature \cite{18,31}. Next, we have extended this procedure to study PDM systems by using the recent results of \cite{58,59} for the PDM minimal-coupling and the PDM momentum operator given by (9) and (10), respectively. We have discussed the possibility of achieving the Landau quantization following \cite{18} and modified this discussion to include the PDM case. Thus, we have recollected the most important and essential relations (equations (15)-(20) above) that have been reported in \cite{18}. Moreover, we have studied this problem within the context of cylindrical coordinates and investigated the exact solvability of the PDM radial Schr\"{o}dinger equation of a neutral particle possessing an electric quadrupole moment interacting with external fields, where we have considered PDM settings $m\left( \overrightarrow{r}\right) =g\left( \rho \right) $, along with the field configurations (documented in (17) for the magnetic field $\overrightarrow{B}\left( \rho \right) $ and in (44) and (55) for the electric fields $\overrightarrow{E}\left( \rho \right) $), which exhibit a purely radial cylindrical dependence. We have shown that the Landau quantization is produced by the interaction between the magnetic field and the electric quadrupole moment given in (17) and (18), respectively, which has yielded the PDM radial Schr\"{o}dinger equation of this system in the absence of an electric field (documented in (23) above). However, compared with \cite{18}, the energy levels of the Landau quantization are modified because of the influence of the spatial dependence of the mass, and we have obtained different eigenvalues for the two examples of PDM settings (i.e. 
$g\left( \rho \right) =\eta \rho $ and $g\left( \rho \right) =\eta \rho ^{2}$) given in (32) and (42), respectively. Furthermore, we have analyzed the effect of the interaction of radial electric fields with the electric quadrupole moment of a PDM neutral particle by choosing two particular cases of the electric field ($\overrightarrow{E}=\frac{\lambda }{\rho }\widehat{\rho }$ and $\overrightarrow{E}=\frac{\lambda \rho }{2}\widehat{\rho }$), which have produced the effective potentials (45) and (56). It is shown that the structure of the PDM effective potential term plays the role of a scalar potential in the PDM radial Schr\"{o}dinger equation (22), producing either the same energy levels or a constant shift relative to the systems with the same mass settings in the absence of the effective potential. We have observed that the difference in energy levels depends on the mass structure: the more complex the chosen mass, the larger this difference over the Landau levels becomes. However, the exact eigenvalues and eigenfunctions for all these cases are obtained. Finally, although the presence of an effective uniform magnetic field produces the Landau quantization, the influence of the spatial dependence of the mass of the system yields a new contribution to the energy levels, creating a set of new eigenvalues. This study has investigated, for the first time, a PDM quantum particle that possesses multipole moments under the influence of external fields. Thus, this work opens new discussions regarding the position-dependent mass concept and provides a good starting point for future research.\newpage
O.~Francisco$^{2}$, M.~Frank$^{38}$, C.~Frei$^{38}$, M.~Frosini$^{17,g}$, J.~Fu$^{21,38}$, E.~Furfaro$^{24,l}$, A.~Gallas~Torreira$^{37}$, D.~Galli$^{14,d}$, S.~Gallorini$^{22,38}$, S.~Gambetta$^{19,j}$, M.~Gandelman$^{2}$, P.~Gandini$^{59}$, Y.~Gao$^{3}$, J.~Garc\'{i}a~Pardi\~{n}as$^{37}$, J.~Garofoli$^{59}$, J.~Garra~Tico$^{47}$, L.~Garrido$^{36}$, C.~Gaspar$^{38}$, R.~Gauld$^{55}$, L.~Gavardi$^{9}$, G.~Gavrilov$^{30}$, A.~Geraci$^{21,v}$, E.~Gersabeck$^{11}$, M.~Gersabeck$^{54}$, T.~Gershon$^{48}$, Ph.~Ghez$^{4}$, A.~Gianelle$^{22}$, S.~Gian\`{i}$^{39}$, V.~Gibson$^{47}$, L.~Giubega$^{29}$, V.V.~Gligorov$^{38}$, C.~G\"{o}bel$^{60}$, D.~Golubkov$^{31}$, A.~Golutvin$^{53,31,38}$, A.~Gomes$^{1,a}$, C.~Gotti$^{20}$, M.~Grabalosa~G\'{a}ndara$^{5}$, R.~Graciani~Diaz$^{36}$, L.A.~Granado~Cardoso$^{38}$, E.~Graug\'{e}s$^{36}$, G.~Graziani$^{17}$, A.~Grecu$^{29}$, E.~Greening$^{55}$, S.~Gregson$^{47}$, P.~Griffith$^{45}$, L.~Grillo$^{11}$, O.~Gr\"{u}nberg$^{62}$, B.~Gui$^{59}$, E.~Gushchin$^{33}$, Yu.~Guz$^{35,38}$, T.~Gys$^{38}$, C.~Hadjivasiliou$^{59}$, G.~Haefeli$^{39}$, C.~Haen$^{38}$, S.C.~Haines$^{47}$, S.~Hall$^{53}$, B.~Hamilton$^{58}$, T.~Hampson$^{46}$, X.~Han$^{11}$, S.~Hansmann-Menzemer$^{11}$, N.~Harnew$^{55}$, S.T.~Harnew$^{46}$, J.~Harrison$^{54}$, J.~He$^{38}$, T.~Head$^{38}$, V.~Heijne$^{41}$, K.~Hennessy$^{52}$, P.~Henrard$^{5}$, L.~Henry$^{8}$, J.A.~Hernando~Morata$^{37}$, E.~van~Herwijnen$^{38}$, M.~He\ss$^{62}$, A.~Hicheur$^{2}$, D.~Hill$^{55}$, M.~Hoballah$^{5}$, C.~Hombach$^{54}$, W.~Hulsbergen$^{41}$, P.~Hunt$^{55}$, N.~Hussain$^{55}$, D.~Hutchcroft$^{52}$, D.~Hynds$^{51}$, M.~Idzik$^{27}$, P.~Ilten$^{56}$, R.~Jacobsson$^{38}$, A.~Jaeger$^{11}$, J.~Jalocha$^{55}$, E.~Jans$^{41}$, P.~Jaton$^{39}$, A.~Jawahery$^{58}$, F.~Jing$^{3}$, M.~John$^{55}$, D.~Johnson$^{38}$, C.R.~Jones$^{47}$, C.~Joram$^{38}$, B.~Jost$^{38}$, N.~Jurik$^{59}$, M.~Kaballo$^{9}$, S.~Kandybei$^{43}$, W.~Kanso$^{6}$, M.~Karacson$^{38}$, T.M.~Karbach$^{38}$, S.~Karodia$^{51}$, 
M.~Kelsey$^{59}$, I.R.~Kenyon$^{45}$, T.~Ketel$^{42}$, B.~Khanji$^{20,38}$, C.~Khurewathanakul$^{39}$, S.~Klaver$^{54}$, K.~Klimaszewski$^{28}$, O.~Kochebina$^{7}$, M.~Kolpin$^{11}$, I.~Komarov$^{39}$, R.F.~Koopman$^{42}$, P.~Koppenburg$^{41,38}$, M.~Korolev$^{32}$, A.~Kozlinskiy$^{41}$, L.~Kravchuk$^{33}$, K.~Kreplin$^{11}$, M.~Kreps$^{48}$, G.~Krocker$^{11}$, P.~Krokovny$^{34}$, F.~Kruse$^{9}$, W.~Kucewicz$^{26,o}$, M.~Kucharczyk$^{20,26,k}$, V.~Kudryavtsev$^{34}$, K.~Kurek$^{28}$, T.~Kvaratskheliya$^{31}$, V.N.~La~Thi$^{39}$, D.~Lacarrere$^{38}$, G.~Lafferty$^{54}$, A.~Lai$^{15}$, D.~Lambert$^{50}$, R.W.~Lambert$^{42}$, G.~Lanfranchi$^{18}$, C.~Langenbruch$^{48}$, B.~Langhans$^{38}$, T.~Latham$^{48}$, C.~Lazzeroni$^{45}$, R.~Le~Gac$^{6}$, J.~van~Leerdam$^{41}$, J.-P.~Lees$^{4}$, R.~Lef\`{e}vre$^{5}$, A.~Leflat$^{32}$, J.~Lefran\c{c}ois$^{7}$, S.~Leo$^{23}$, O.~Leroy$^{6}$, T.~Lesiak$^{26}$, B.~Leverington$^{11}$, Y.~Li$^{3}$, T.~Likhomanenko$^{63}$, M.~Liles$^{52}$, R.~Lindner$^{38}$, C.~Linn$^{38}$, F.~Lionetto$^{40}$, B.~Liu$^{15}$, S.~Lohn$^{38}$, I.~Longstaff$^{51}$, J.H.~Lopes$^{2}$, N.~Lopez-March$^{39}$, P.~Lowdon$^{40}$, D.~Lucchesi$^{22,r}$, H.~Luo$^{50}$, A.~Lupato$^{22}$, E.~Luppi$^{16,f}$, O.~Lupton$^{55}$, F.~Machefert$^{7}$, I.V.~Machikhiliyan$^{31}$, F.~Maciuc$^{29}$, O.~Maev$^{30}$, S.~Malde$^{55}$, A.~Malinin$^{63}$, G.~Manca$^{15,e}$, G.~Mancinelli$^{6}$, A.~Mapelli$^{38}$, J.~Maratas$^{5}$, J.F.~Marchand$^{4}$, U.~Marconi$^{14}$, C.~Marin~Benito$^{36}$, P.~Marino$^{23,t}$, R.~M\"{a}rki$^{39}$, J.~Marks$^{11}$, G.~Martellotti$^{25}$, A.~Mart\'{i}n~S\'{a}nchez$^{7}$, M.~Martinelli$^{39}$, D.~Martinez~Santos$^{42,38}$, F.~Martinez~Vidal$^{64}$, D.~Martins~Tostes$^{2}$, A.~Massafferri$^{1}$, R.~Matev$^{38}$, Z.~Mathe$^{38}$, C.~Matteuzzi$^{20}$, A.~Mazurov$^{45}$, M.~McCann$^{53}$, J.~McCarthy$^{45}$, A.~McNab$^{54}$, R.~McNulty$^{12}$, B.~McSkelly$^{52}$, B.~Meadows$^{57}$, F.~Meier$^{9}$, M.~Meissner$^{11}$, M.~Merk$^{41}$, D.A.~Milanes$^{8}$, 
M.-N.~Minard$^{4}$, N.~Moggi$^{14}$, J.~Molina~Rodriguez$^{60}$, S.~Monteil$^{5}$, M.~Morandin$^{22}$, P.~Morawski$^{27}$, A.~Mord\`{a}$^{6}$, M.J.~Morello$^{23,t}$, J.~Moron$^{27}$, A.-B.~Morris$^{50}$, R.~Mountain$^{59}$, F.~Muheim$^{50}$, K.~M\"{u}ller$^{40}$, M.~Mussini$^{14}$, B.~Muster$^{39}$, P.~Naik$^{46}$, T.~Nakada$^{39}$, R.~Nandakumar$^{49}$, I.~Nasteva$^{2}$, M.~Needham$^{50}$, N.~Neri$^{21}$, S.~Neubert$^{38}$, N.~Neufeld$^{38}$, M.~Neuner$^{11}$, A.D.~Nguyen$^{39}$, T.D.~Nguyen$^{39}$, C.~Nguyen-Mau$^{39,q}$, M.~Nicol$^{7}$, V.~Niess$^{5}$, R.~Niet$^{9}$, N.~Nikitin$^{32}$, T.~Nikodem$^{11}$, A.~Novoselov$^{35}$, D.P.~O'Hanlon$^{48}$, A.~Oblakowska-Mucha$^{27,38}$, V.~Obraztsov$^{35}$, S.~Oggero$^{41}$, S.~Ogilvy$^{51}$, O.~Okhrimenko$^{44}$, R.~Oldeman$^{15,e}$, G.~Onderwater$^{65}$, M.~Orlandea$^{29}$, J.M.~Otalora~Goicochea$^{2}$, A.~Otto$^{38}$, P.~Owen$^{53}$, A.~Oyanguren$^{64}$, B.K.~Pal$^{59}$, A.~Palano$^{13,c}$, F.~Palombo$^{21,u}$, M.~Palutan$^{18}$, J.~Panman$^{38}$, A.~Papanestis$^{49,38}$, M.~Pappagallo$^{51}$, L.L.~Pappalardo$^{16,f}$, C.~Parkes$^{54}$, C.J.~Parkinson$^{9,45}$, G.~Passaleva$^{17}$, G.D.~Patel$^{52}$, M.~Patel$^{53}$, C.~Patrignani$^{19,j}$, A.~Pazos~Alvarez$^{37}$, A.~Pearce$^{54}$, A.~Pellegrino$^{41}$, M.~Pepe~Altarelli$^{38}$, S.~Perazzini$^{14,d}$, E.~Perez~Trigo$^{37}$, P.~Perret$^{5}$, M.~Perrin-Terrin$^{6}$, L.~Pescatore$^{45}$, E.~Pesen$^{66}$, K.~Petridis$^{53}$, A.~Petrolini$^{19,j}$, E.~Picatoste~Olloqui$^{36}$, B.~Pietrzyk$^{4}$, T.~Pila\v{r}$^{48}$, D.~Pinci$^{25}$, A.~Pistone$^{19}$, S.~Playfer$^{50}$, M.~Plo~Casasus$^{37}$, F.~Polci$^{8}$, A.~Poluektov$^{48,34}$, E.~Polycarpo$^{2}$, A.~Popov$^{35}$, D.~Popov$^{10}$, B.~Popovici$^{29}$, C.~Potterat$^{2}$, E.~Price$^{46}$, J.D.~Price$^{52}$, J.~Prisciandaro$^{39}$, A.~Pritchard$^{52}$, C.~Prouve$^{46}$, V.~Pugatch$^{44}$, A.~Puig~Navarro$^{39}$, G.~Punzi$^{23,s}$, W.~Qian$^{4}$, B.~Rachwal$^{26}$, J.H.~Rademacker$^{46}$, B.~Rakotomiaramanana$^{39}$, 
M.~Rama$^{18}$, M.S.~Rangel$^{2}$, I.~Raniuk$^{43}$, N.~Rauschmayr$^{38}$, G.~Raven$^{42}$, F.~Redi$^{53}$, S.~Reichert$^{54}$, M.M.~Reid$^{48}$, A.C.~dos~Reis$^{1}$, S.~Ricciardi$^{49}$, S.~Richards$^{46}$, M.~Rihl$^{38}$, K.~Rinnert$^{52}$, V.~Rives~Molina$^{36}$, P.~Robbe$^{7}$, A.B.~Rodrigues$^{1}$, E.~Rodrigues$^{54}$, P.~Rodriguez~Perez$^{54}$, S.~Roiser$^{38}$, V.~Romanovsky$^{35}$, A.~Romero~Vidal$^{37}$, M.~Rotondo$^{22}$, J.~Rouvinet$^{39}$, T.~Ruf$^{38}$, H.~Ruiz$^{36}$, P.~Ruiz~Valls$^{64}$, J.J.~Saborido~Silva$^{37}$, N.~Sagidova$^{30}$, P.~Sail$^{51}$, B.~Saitta$^{15,e}$, V.~Salustino~Guimaraes$^{2}$, C.~Sanchez~Mayordomo$^{64}$, B.~Sanmartin~Sedes$^{37}$, R.~Santacesaria$^{25}$, C.~Santamarina~Rios$^{37}$, E.~Santovetti$^{24,l}$, A.~Sarti$^{18,m}$, C.~Satriano$^{25,n}$, A.~Satta$^{24}$, D.M.~Saunders$^{46}$, M.~Savrie$^{16,f}$, D.~Savrina$^{31,32}$, M.~Schiller$^{42}$, H.~Schindler$^{38}$, M.~Schlupp$^{9}$, M.~Schmelling$^{10}$, B.~Schmidt$^{38}$, O.~Schneider$^{39}$, A.~Schopper$^{38}$, M.-H.~Schune$^{7}$, R.~Schwemmer$^{38}$, B.~Sciascia$^{18}$, A.~Sciubba$^{25}$, M.~Seco$^{37}$, A.~Semennikov$^{31}$, I.~Sepp$^{53}$, N.~Serra$^{40}$, J.~Serrano$^{6}$, L.~Sestini$^{22}$, P.~Seyfert$^{11}$, M.~Shapkin$^{35}$, I.~Shapoval$^{16,43,f}$, Y.~Shcheglov$^{30}$, T.~Shears$^{52}$, L.~Shekhtman$^{34}$, V.~Shevchenko$^{63}$, A.~Shires$^{9}$, R.~Silva~Coutinho$^{48}$, G.~Simi$^{22}$, M.~Sirendi$^{47}$, N.~Skidmore$^{46}$, I.~Skillicorn$^{51}$, T.~Skwarnicki$^{59}$, N.A.~Smith$^{52}$, E.~Smith$^{55,49}$, E.~Smith$^{53}$, J.~Smith$^{47}$, M.~Smith$^{54}$, H.~Snoek$^{41}$, M.D.~Sokoloff$^{57}$, F.J.P.~Soler$^{51}$, F.~Soomro$^{39}$, D.~Souza$^{46}$, B.~Souza~De~Paula$^{2}$, B.~Spaan$^{9}$, P.~Spradlin$^{51}$, S.~Sridharan$^{38}$, F.~Stagni$^{38}$, M.~Stahl$^{11}$, S.~Stahl$^{11}$, O.~Steinkamp$^{40}$, O.~Stenyakin$^{35}$, S.~Stevenson$^{55}$, S.~Stoica$^{29}$, S.~Stone$^{59}$, B.~Storaci$^{40}$, S.~Stracka$^{23}$, M.~Straticiuc$^{29}$, U.~Straumann$^{40}$, 
R.~Stroili$^{22}$, V.K.~Subbiah$^{38}$, L.~Sun$^{57}$, W.~Sutcliffe$^{53}$, K.~Swientek$^{27}$, S.~Swientek$^{9}$, V.~Syropoulos$^{42}$, M.~Szczekowski$^{28}$, P.~Szczypka$^{39,38}$, D.~Szilard$^{2}$, T.~Szumlak$^{27}$, S.~T'Jampens$^{4}$, M.~Teklishyn$^{7}$, G.~Tellarini$^{16,f}$, F.~Teubert$^{38}$, C.~Thomas$^{55}$, E.~Thomas$^{38}$, J.~van~Tilburg$^{41}$, V.~Tisserand$^{4}$, M.~Tobin$^{39}$, J.~Todd$^{57}$, S.~Tolk$^{42}$, L.~Tomassetti$^{16,f}$, D.~Tonelli$^{38}$, S.~Topp-Joergensen$^{55}$, N.~Torr$^{55}$, E.~Tournefier$^{4}$, S.~Tourneur$^{39}$, M.T.~Tran$^{39}$, M.~Tresch$^{40}$, A.~Tsaregorodtsev$^{6}$, P.~Tsopelas$^{41}$, N.~Tuning$^{41}$, M.~Ubeda~Garcia$^{38}$, A.~Ukleja$^{28}$, A.~Ustyuzhanin$^{63}$, U.~Uwer$^{11}$, C.~Vacca$^{15}$, V.~Vagnoni$^{14}$, G.~Valenti$^{14}$, A.~Vallier$^{7}$, R.~Vazquez~Gomez$^{18}$, P.~Vazquez~Regueiro$^{37}$, C.~V\'{a}zquez~Sierra$^{37}$, S.~Vecchi$^{16}$, J.J.~Velthuis$^{46}$, M.~Veltri$^{17,h}$, G.~Veneziano$^{39}$, M.~Vesterinen$^{11}$, B.~Viaud$^{7}$, D.~Vieira$^{2}$, M.~Vieites~Diaz$^{37}$, X.~Vilasis-Cardona$^{36,p}$, A.~Vollhardt$^{40}$, D.~Volyanskyy$^{10}$, D.~Voong$^{46}$, A.~Vorobyev$^{30}$, V.~Vorobyev$^{34}$, C.~Vo\ss$^{62}$, H.~Voss$^{10}$, J.A.~de~Vries$^{41}$, R.~Waldi$^{62}$, C.~Wallace$^{48}$, R.~Wallace$^{12}$, J.~Walsh$^{23}$, S.~Wandernoth$^{11}$, J.~Wang$^{59}$, D.R.~Ward$^{47}$, N.K.~Watson$^{45}$, D.~Websdale$^{53}$, M.~Whitehead$^{48}$, J.~Wicht$^{38}$, D.~Wiedner$^{11}$, G.~Wilkinson$^{55,38}$, M.P.~Williams$^{45}$, M.~Williams$^{56}$, H.W.~Wilschut$^{65}$, F.F.~Wilson$^{49}$, J.~Wimberley$^{58}$, J.~Wishahi$^{9}$, W.~Wislicki$^{28}$, M.~Witek$^{26}$, G.~Wormser$^{7}$, S.A.~Wotton$^{47}$, S.~Wright$^{47}$, K.~Wyllie$^{38}$, Y.~Xie$^{61}$, Z.~Xing$^{59}$, Z.~Xu$^{39}$, Z.~Yang$^{3}$, X.~Yuan$^{3}$, O.~Yushchenko$^{35}$, M.~Zangoli$^{14}$, M.~Zavertyaev$^{10,b}$, L.~Zhang$^{59}$, W.C.~Zhang$^{12}$, Y.~Zhang$^{3}$, A.~Zhelezov$^{11}$, A.~Zhokhov$^{31}$, L.~Zhong$^{3}$.\bigskip {\footnotesize \it $ 
^{1}$Centro Brasileiro de Pesquisas F\'{i}sicas (CBPF), Rio de Janeiro, Brazil\\ $ ^{2}$Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil\\ $ ^{3}$Center for High Energy Physics, Tsinghua University, Beijing, China\\ $ ^{4}$LAPP, Universit\'{e} de Savoie, CNRS/IN2P3, Annecy-Le-Vieux, France\\ $ ^{5}$Clermont Universit\'{e}, Universit\'{e} Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France\\ $ ^{6}$CPPM, Aix-Marseille Universit\'{e}, CNRS/IN2P3, Marseille, France\\ $ ^{7}$LAL, Universit\'{e} Paris-Sud, CNRS/IN2P3, Orsay, France\\ $ ^{8}$LPNHE, Universit\'{e} Pierre et Marie Curie, Universit\'{e} Paris Diderot, CNRS/IN2P3, Paris, France\\ $ ^{9}$Fakult\"{a}t Physik, Technische Universit\"{a}t Dortmund, Dortmund, Germany\\ $ ^{10}$Max-Planck-Institut f\"{u}r Kernphysik (MPIK), Heidelberg, Germany\\ $ ^{11}$Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany\\ $ ^{12}$School of Physics, University College Dublin, Dublin, Ireland\\ $ ^{13}$Sezione INFN di Bari, Bari, Italy\\ $ ^{14}$Sezione INFN di Bologna, Bologna, Italy\\ $ ^{15}$Sezione INFN di Cagliari, Cagliari, Italy\\ $ ^{16}$Sezione INFN di Ferrara, Ferrara, Italy\\ $ ^{17}$Sezione INFN di Firenze, Firenze, Italy\\ $ ^{18}$Laboratori Nazionali dell'INFN di Frascati, Frascati, Italy\\ $ ^{19}$Sezione INFN di Genova, Genova, Italy\\ $ ^{20}$Sezione INFN di Milano Bicocca, Milano, Italy\\ $ ^{21}$Sezione INFN di Milano, Milano, Italy\\ $ ^{22}$Sezione INFN di Padova, Padova, Italy\\ $ ^{23}$Sezione INFN di Pisa, Pisa, Italy\\ $ ^{24}$Sezione INFN di Roma Tor Vergata, Roma, Italy\\ $ ^{25}$Sezione INFN di Roma La Sapienza, Roma, Italy\\ $ ^{26}$Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Krak\'{o}w, Poland\\ $ ^{27}$AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Krak\'{o}w, Poland\\ $ ^{28}$National Center for Nuclear Research (NCBJ), Warsaw, Poland\\ $ ^{29}$Horia Hulubei 
National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania\\ $ ^{30}$Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia\\ $ ^{31}$Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia\\ $ ^{32}$Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia\\ $ ^{33}$Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia\\ $ ^{34}$Budker Institute of Nuclear Physics (SB RAS) and Novosibirsk State University, Novosibirsk, Russia\\ $ ^{35}$Institute for High Energy Physics (IHEP), Protvino, Russia\\ $ ^{36}$Universitat de Barcelona, Barcelona, Spain\\ $ ^{37}$Universidad de Santiago de Compostela, Santiago de Compostela, Spain\\ $ ^{38}$European Organization for Nuclear Research (CERN), Geneva, Switzerland\\ $ ^{39}$Ecole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), Lausanne, Switzerland\\ $ ^{40}$Physik-Institut, Universit\"{a}t Z\"{u}rich, Z\"{u}rich, Switzerland\\ $ ^{41}$Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands\\ $ ^{42}$Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, The Netherlands\\ $ ^{43}$NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine\\ $ ^{44}$Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine\\ $ ^{45}$University of Birmingham, Birmingham, United Kingdom\\ $ ^{46}$H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom\\ $ ^{47}$Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom\\ $ ^{48}$Department of Physics, University of Warwick, Coventry, United Kingdom\\ $ ^{49}$STFC Rutherford Appleton Laboratory, Didcot, United Kingdom\\ $ ^{50}$School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom\\ $ ^{51}$School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom\\ $ ^{52}$Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom\\ $ ^{53}$Imperial College London, London, United Kingdom\\ $ ^{54}$School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom\\ $ ^{55}$Department of Physics, University of Oxford, Oxford, United Kingdom\\ $ ^{56}$Massachusetts Institute of Technology, Cambridge, MA, United States\\ $ ^{57}$University of Cincinnati, Cincinnati, OH, United States\\ $ ^{58}$University of Maryland, College Park, MD, United States\\ $ ^{59}$Syracuse University, Syracuse, NY, United States\\ $ ^{60}$Pontif\'{i}cia Universidade Cat\'{o}lica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to $^{2}$\\ $ ^{61}$Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to $^{3}$\\ $ ^{62}$Institut f\"{u}r Physik, Universit\"{a}t Rostock, Rostock, Germany, associated to $^{11}$\\ $ ^{63}$National Research Centre Kurchatov Institute, Moscow, Russia, associated to $^{31}$\\ $ ^{64}$Instituto de Fisica Corpuscular (IFIC), Universitat de Valencia-CSIC, Valencia, Spain, associated to $^{36}$\\ $ ^{65}$KVI - University of Groningen, Groningen, The Netherlands, associated to $^{41}$\\ $ ^{66}$Celal Bayar University, Manisa, Turkey, associated to $^{38}$\\ \bigskip $ ^{a}$Universidade Federal do Tri\^{a}ngulo Mineiro (UFTM), Uberaba-MG, Brazil\\ $ ^{b}$P.N. 
Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia\\ $ ^{c}$Universit\`{a} di Bari, Bari, Italy\\ $ ^{d}$Universit\`{a} di Bologna, Bologna, Italy\\ $ ^{e}$Universit\`{a} di Cagliari, Cagliari, Italy\\ $ ^{f}$Universit\`{a} di Ferrara, Ferrara, Italy\\ $ ^{g}$Universit\`{a} di Firenze, Firenze, Italy\\ $ ^{h}$Universit\`{a} di Urbino, Urbino, Italy\\ $ ^{i}$Universit\`{a} di Modena e Reggio Emilia, Modena, Italy\\ $ ^{j}$Universit\`{a} di Genova, Genova, Italy\\ $ ^{k}$Universit\`{a} di Milano Bicocca, Milano, Italy\\ $ ^{l}$Universit\`{a} di Roma Tor Vergata, Roma, Italy\\ $ ^{m}$Universit\`{a} di Roma La Sapienza, Roma, Italy\\ $ ^{n}$Universit\`{a} della Basilicata, Potenza, Italy\\ $ ^{o}$AGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Krak\'{o}w, Poland\\ $ ^{p}$LIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain\\ $ ^{q}$Hanoi University of Science, Hanoi, Viet Nam\\ $ ^{r}$Universit\`{a} di Padova, Padova, Italy\\ $ ^{s}$Universit\`{a} di Pisa, Pisa, Italy\\ $ ^{t}$Scuola Normale Superiore, Pisa, Italy\\ $ ^{u}$Universit\`{a} degli Studi di Milano, Milano, Italy\\ $ ^{v}$Politecnico di Milano, Milano, Italy\\ } \end{flushleft} \end{document}
\section{INTRODUCTION} With improvements in sensing technologies, recent intelligent vehicles have been equipped with advanced driver assistance systems (ADASs) as standard features \cite{broggi2016intelligent, zhu2017overview}. For example, in the United States, forward collision warning systems will be standard on passenger vehicles by 2022 \cite{NHTSA2016a}. While these warnings have been proven to reduce critical accidents, reports show that some users turn off these functions \cite{IIHS2016}. A major reason for this behavior is that the warnings are generated based only on surrounding traffic conditions and the driver's steering and pedal {\it operations}, but are oblivious to the driver's {\it perceptions} and {\it decisions}. This makes the warnings redundant, as the system warns the driver even when the driver is already aware of the danger. To avoid such redundant warnings, the ADAS needs to be aware of the driver's awareness. In this work, we aim to sense and monitor drivers' awareness status. While some studies and systems use only drivers' gaze movement as a proxy for their perception status \cite{kapitaniak2015application, topolvsek2016examination}, we tackle the more challenging problem of sensing drivers' memory status, which is required for assessing drivers' awareness. However, it is challenging to objectively assess whether a driver is aware of a road hazard. One of the top challenges is that no reliable computational model of drivers' memory has been proposed that generalizes across various driving scenarios \cite{gugerty2011situation}. This topic is referred to as situational awareness (SA) \cite{endsley1988design, endsley1995toward}. It has been studied mainly in military applications, and recently the concept has been applied to automobile applications \cite{endsley2016designing}.
Eye tracking is one of the viable options for SA estimation, as it can be applied in real time without interrupting the ongoing task \cite{moore2010development}. Furthermore, according to the eye-mind hypothesis \cite{just1980theory}, there is a close relationship between what the eyes are gazing at and what the mind is engaged with. However, the prediction performance of previous results is not satisfactory, because the previous study considered only aggregated gaze movement information \cite{kim2020toward}. The contribution of this work is improved driver situational awareness prediction performance, achieved through the following approaches. \begin{enumerate} \item We combined object properties (prior information about the objects) with the driver gaze behavioral features used in previous work. \item We extended the gaze behavioral features to account for the characteristics of the human visual sensory system. We demonstrated that drivers' gaze behavior with both foveal and peripheral vision was more effective for situational awareness prediction. \item We introduced a strategy to adjust driver awareness scores based on the psychological theory of human short-term memory capacity. \end{enumerate} The paper is organized as follows. We cover related work in Section II, followed by the data collection in Section III. Section IV describes the methods and features we utilized in the prediction models. We analyze the prediction results of the proposed methods in Section V. We then clarify our research contributions through a discussion in Section VI. Finally, we conclude our work in Section VII. \section{RELATED WORK} Recent advancements in machine-aided cognition and decision support systems have revealed the difficulty of collaboration between humans and machines \cite{hancock2013human}. Situational awareness (SA) has been attracting increasing interest from research activities \cite{endsley2016designing}.
Endsley \cite{endsley1995toward} defined SA as the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status. This concept has been applied widely to diverse applications, such as aviation, air traffic control, nuclear power plants, and vehicle operations \cite{moore2010development, nguyen2019review, gugerty2011situation}. \begin{figure*}[!t] \centering \includegraphics[width=1.55\columnwidth]{figures/example_pause.png} \caption{An example of a driving video (top, before a pause) and an empty scene (bottom, after the pause) presented on the wall projection screen to assess driver awareness of road hazards. The labels (e.g., PED1) are shown here for demonstration purposes and were not shown to the participants.} \label{fig:example_pause} \vspace{-3mm} \end{figure*} Numerous methods have been proposed to measure SA in human subject studies \cite{gugerty2011situation, kaber2012effects, hofbauer2020measuring}. Among these methods, the Situation Awareness Global Assessment Technique (SAGAT) \cite{endsley1988situation, endsley1995measurement, endsley2000direct} and the Situation Present Assessment Method (SPAM) \cite{durso1998situation} are commonly used. We used SAGAT to measure drivers' situation awareness in this study because of its effectiveness and efficiency. This offline, query-based technique provides an objective and direct measure of SA. In the SAGAT procedure, subjects are asked questions about a situation while the scenario is paused and the objects are hidden from the scene. Their answers, together with confidence levels, are recorded. Since it is an offline approach, SAGAT cannot predict SA in real time. The goal of this work is to estimate the SAGAT questionnaire result from gaze and object information, working towards real-time estimation of driver SA. Previous studies have shown that SA is related to gaze behavior \cite{martin2018dynamics, kim2020toward}.
However, further investigation of the relationship between gaze behavior and moving objects of interest (OOIs), such as vehicles and pedestrians, is necessary. We believe there are three major gaps that have not been considered in previous studies. The first gap is the consideration of prior information about object properties, since different objects have different perceptual difficulties. For example, drivers are more likely to detect a pedestrian dressed in white than one dressed in dark clothing. The driving context also impacts this difficulty. For instance, drivers focus more on their future trajectory; thus, traffic participants within those areas are more likely to be looked at and noticed. The second gap is that previous studies considered only the gaze positions captured by the eye tracker \cite{lu2017much, sharma2016eye}. However, studies show that humans use not only their central vision but also their peripheral vision to detect and track objects \cite{iwasaki1986relation, foley2019sensation}. The third gap concerns the difficulty levels of the perception and projection tasks. Existing studies have shown that human short-term memory is limited \cite{miller1956magical, broadbent1975magic, chase1973perception}. Thus, we explored methods to approximate this short-term memory mechanism and to account for the detection difficulty of objects. In addition, the awareness of certain objects can be retained given their relative importance compared to other objects in the scene. \section{DATA COLLECTION} To estimate drivers' situation awareness and develop SA prediction models, we utilized a dataset collected in a human-subject experiment in a previous study \cite{kim2020toward}. This experiment utilized a video-based driving simulation scenario and SAGAT to collect drivers' SA answers.
In general, all participants in this experiment went through the same procedure, in which they were asked to follow pre-defined routes and to answer questions about the situation at an intersection when the driving scene paused. Each pause in the experimental sessions presented a specific road intersection scene to participants (an example is shown in Figure \ref{fig:example_pause}). The top figure in Figure \ref{fig:example_pause} shows the last frame of the driving video before the pause, and the bottom figure shows the empty scene (i.e., the same scene as at the pause, without any pedestrians or vehicles). Participants were asked to perceptually match the locations of objects to the numbers presented in the bottom figure. Since this was a driving simulation scenario, all participants were asked to control the steering wheel and pedals to mimic the driving prescribed in the video. Different types of data were collected from all sessions. Specifically, we focused on eye movement data \footnote{We performed automatic gaze registration as detailed in \cite{martin2018dynamics} to project the gaze onto the fixed scene perspective.} recorded by Tobii Pro Glasses 2 at 60 Hz and drivers' situation awareness answers collected via SAGAT \cite{kim2019assessing}. In total, we had 44 participants, and each participant experienced 8 pauses with 28 different target objects, where each participant's response corresponded to awareness or no awareness of each object. Thus, we have $44\times28=1232$ data points. In addition to the target objects, the scenes contain non-target objects; the total number of objects per scene is 7.5 on average, with a standard deviation of 3.0. \section{FEATURES USED} A 10-second analysis window before each pause was used to capture the dynamic characteristics of objects and the driver's perceptual engagement with those objects.
We collected information, including object properties and participants' gaze movements, in this analysis window and aggregated them into object and gaze features. \subsection{Gaze point-based features} The first set of features is gaze point-based features, which utilize the positional relationship between target objects and participants' gaze center coordinates. These features were verified as highly correlated with drivers' awareness in a previous study \cite{kim2020toward}. Specifically, gaze coordinates were projected into the scene video, and ``gaze distance'' is defined as the distance between a gaze center and the nearest edge of the bounding box of a target object \footnote{We manually annotated the bounding box positions using VATIC \cite{vondrick2013efficiently}. All gaze-related features are calculated automatically based on the spatial relationship between the projected gaze point and the object bounding box.}. Three features, measured in degrees, are extracted as follows: \begin{itemize} \item $G_{pause}$ - gaze distance to an object at a pause (right before the SAGAT questions). \item $G_{min}$ - minimum gaze distance to an object within the analysis window before a pause. \item $G_{average}$ - average gaze distance to an object during an analysis window. \end{itemize} \subsection{Human visual sensory dependent features}\label{sec:fovea-feature} Studies have pointed out that while the peripheral vision's sensitivity in object detection is low (less than 20\% of the acuity of foveal vision), it is good at recognizing well-known structures and detecting motion \cite{nelson1974motion}. In our task, we presume that drivers use non-foveal vision for tracking detected objects and detecting moving objects.
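As an illustration, the gaze distance defined above, which also underlies the visual-sensory features, can be sketched as the planar distance from a gaze point to the nearest edge of a bounding box. The function and variable names below are hypothetical, not taken from the actual pipeline:

```python
import math

def gaze_distance(gaze, box):
    """Distance (degrees) from a gaze point to the nearest edge of a
    bounding box; zero when the gaze falls inside the box.
    gaze: (x, y); box: (xmin, ymin, xmax, ymax), in degrees of visual angle."""
    x, y = gaze
    xmin, ymin, xmax, ymax = box
    dx = max(xmin - x, 0.0, x - xmax)  # horizontal gap to the box, 0 if inside
    dy = max(ymin - y, 0.0, y - ymax)  # vertical gap to the box, 0 if inside
    return math.hypot(dx, dy)

def gaze_point_features(gazes, boxes):
    """G_pause, G_min and G_average over an analysis window.
    gazes and boxes are per-frame and aligned; the last frame is the pause."""
    dists = [gaze_distance(g, b) for g, b in zip(gazes, boxes)]
    return {
        "G_pause": dists[-1],
        "G_min": min(dists),
        "G_average": sum(dists) / len(dists),
    }
```

The same `gaze_distance` values, thresholded at a radius $\theta_n$, would yield the dwell- and elapse-type features described next.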
Based on the categorization used in clinical research \cite{quinn2019clinical}, we used the fovea ($\theta_1 = 2.5$ degree radius), parafovea ($\theta_2 = 4.1$ degrees), perifovea ($\theta_3 = 9.1$ degrees) and macula ($\theta_4 = 15.0$ degrees) and designed the following features. Specifically, the following three features are extracted for each of the above four radius ranges: \begin{itemize} \item $HV^{\theta_n}_{elapse}$ - time, measured in seconds, from the last frame in which the gaze distance was less than the radius $\theta_n$ until the pause. A small value, for example less than 1 second, means the driver perceived the object right before the pause and is likely aware of it. \item $HV^{\theta_n}_{dwell}$ - total dwell time, measured in seconds, during which the gaze distance was less than the radius $\theta_n$. \item $HV^{\theta_n}_{average}$ - average gaze distance, in degrees, while the distance was within the radius $\theta_n$. If the gaze distance was always larger than this threshold, the average distance over the whole analysis window was used. \end{itemize} \subsection{Object spatial-based features} Object properties have also been considered in our approaches, since differences in objects' positional information in a scene may impact drivers' awareness. Four features are extracted based on objects' annotated bounding boxes within the analysis time window: \begin{itemize} \item $OS_{proximity}$ - distance between the object and the vanishing point of the scene, measured in degrees. \item $OS_{duration}$ - cumulative time that the object is visible in the scene within the analysis time window before a pause, measured in seconds. \item $OS_{size}$ - relative size of the object, also considered as the distance from the object to the ego-car, approximated by the relative height of the object. Specifically, we normalized the heights of the objects based on the object type, either vehicle or pedestrian.
\item $OS_{density}$ - density of the scene, measured by the total number of objects in the scene. \end{itemize} \subsection{Object property-based features} Besides objects' spatial-based features, property-based features also play an important role in assessing drivers' situation awareness. For example, from the driver's view, a pedestrian crossing the driving lane has higher significance and visual salience than a vehicle parked at the roadside. For objects with high significance and salience, drivers can easily observe the object and take further action to prevent potential accidents. Thus, the following six property-based features are extracted\footnote{Three annotators separately annotated these values based on an annotation guideline. In case of discrepancies, they discussed until reaching consensus.}: \begin{itemize} \item $OP_{type}$ - object type, a binary feature representing whether the object is a pedestrian or a vehicle. \item $OP_{relevance}$ - a binary indicator of whether an object's trajectory intersects with the ego-vehicle's trajectory within the analysis time window. \item $OP_{light}$ - traffic light condition for each object, a binary variable representing green versus red (or unknown). \item $OP_{contrast}$ - static salience of the object, i.e., its contrast against the background, an ordinal variable representing the visual difference between the object and the background. \item $OP_{movement}$ - a manually annotated feature representing the dynamic motion saliency of a target object at a pause. Each object is annotated as one of four categories: static (not moving), slow (regular walking speed), medium (regular vehicle speed), and high (fast vehicle speed). \item $OP_{change}$ - an area-change feature that captures potential projection errors for target objects by indicating whether the object crosses the border of the target area within the last second before the pause.
This feature is necessary because the driver may answer incorrectly due to such crossing trajectories. \end{itemize} \subsection{Human short-term memory-based features} Previous studies did not consider humans' short-term memory capacity. Psychology research such as Atkinson and Shiffrin's model \cite{atkinson1968human} suggested that human memory has three stores: a sensory register (1/4-1/2 second duration), short-term memory (0-18 second duration) and long-term memory (unlimited duration). We believe driver situational awareness is correlated with and significantly affected by short-term memory capacity. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/Method3.png} \caption{Flow of memory-based feature generation and its application to the classification task} \label{fig:propsed3} \vspace{-3mm} \end{figure} We used one of the most famous findings in psychology -- The Magical Number Seven, Plus or Minus Two (a.k.a. Miller's law) \cite{miller1956magical} -- to approximate drivers' short-term memory capacity. This approach is expected to boost the probability of awareness of the top $N$ objects and suppress the others (even if they are salient). While the scene complexity feature implicitly incorporates this aspect, that feature boosts or suppresses the scores of all objects in the scene. The SAGAT data contain only a single type of target object (either vehicles or pedestrians) per scene. However, a pause may include other non-target objects as well (e.g., Figure \ref{fig:example_pause} is a pause about pedestrians, but the scene contains vehicles as well), and the driver's memory for such non-target objects would otherwise not be taken into account. Thus, this feature is realized by a two-classifier structure, listed as follows and illustrated in Figure \ref{fig:propsed3}.
\begin{enumerate} \item Apply a classifier trained with all the features explained above and calculate SA scores for all objects in the dataset (i.e., both target and non-target\footnote{We calculated and annotated features for non-target objects in addition to target objects.} objects). \item Sort the objects in a scene based on their SA scores to obtain a ranking $R_n$ for each object. \item The memory score for object $n$ ($M_n$) is calculated as \begin{equation} M_n = \tanh(R_n - N) \end{equation} Here, $N$ is a parameter that approximates the size of human short-term memory. \end{enumerate} Note that the classifier for memory score calculation is trained only on the data in the training set; the method does not use the SA labels of the data in the test set. \section{EXPERIMENTAL EVALUATION} \subsection{Data preparation} We evaluated the performance of our methods using the data collected by SAGAT. The data set consists of 8 pauses with 1,232 (= 44 participants x 28 objects) data samples in total with SA labels. Due to the limited data size, we evaluated SA prediction performance using cross validation. We split the data into 8 portions based on the 8 pauses experienced by each participant (equivalent to leave-one-pause-out cross validation), because SA is strongly correlated at the object level but only weakly correlated at the driver level \cite{kim2019toward}. Among these 8 pauses, we used the data from 7 pauses as the training data and the remaining pause as the test data. We set class weights inversely proportional to class frequencies in the training data for our methods. Also, all features are normalized using MinMaxScaler so that each feature value ranges between 0 and 1. When using principal component analysis (PCA) for feature dimension reduction, we empirically set the number of PCA components to maximize the classification accuracy, i.e., the prediction success rate on the test sets.
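The data preparation steps above (leave-one-pause-out splitting, inverse-frequency class weights, and min-max normalization) can be sketched in plain Python; sample and field names are hypothetical:

```python
def minmax_scale(column):
    """Rescale one feature column to the [0, 1] range (MinMax scaling)."""
    lo, hi = min(column), max(column)
    span = (hi - lo) or 1.0          # guard against constant features
    return [(v - lo) / span for v in column]

def pause_out_splits(samples):
    """Leave-one-pause-out cross validation: each of the pauses serves
    once as the test set, the rest as the training set."""
    pauses = sorted({s["pause"] for s in samples})
    for p in pauses:
        train = [s for s in samples if s["pause"] != p]
        test = [s for s in samples if s["pause"] == p]
        yield train, test

def class_weights(labels):
    """Weights inversely proportional to class frequencies."""
    n = len(labels)
    return {c: n / (2 * labels.count(c)) for c in set(labels)}
```
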
Note that the chance rate of this classification task, where all samples are classified as the negative (not aware) class, is 55.8\%. For baseline and method evaluations, we focused on the receiver operating characteristic (ROC) curve and the classification accuracy. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/baselines.png} \caption{Evaluation baselines} \label{fig:baselines} \vspace{-3mm} \end{figure} \subsection{Baseline Methods} \subsubsection{Baseline 1: Rule-based baseline} The primary assumption of this baseline is that drivers' eye-glance fixations directly correspond to their situation awareness. We defined that the driver fixated on an object if the gaze stayed within 2.5 degrees of the object for more than 120 milliseconds, based on the finding that humans can recognize information in the fovea (2.5 deg \cite{quinn2019clinical, nelson1980functional}) within 120 ms \cite{rayner2007eye}. We draw ROC curves by varying 1) the 2.5 degree threshold over a 0.1-30 degree range and 2) the 120 ms threshold over a 10-3000 ms range. As shown in Figure \ref{fig:baselines}, the area under the curve (AUC) values of these two ROC curves are 0.713 and 0.706. \subsubsection{Baseline 2: Learning-based baseline} The fundamental assumption of this baseline is also that drivers' fixations correspond to their situation awareness. Compared to Baseline 1, this baseline adopts a learning-based approach. Using human annotations of fixations based on observation of the gaze trajectory, we utilized an SVM model that predicts whether the driver fixated on the object or not. The details of this approach are described in \cite{martin2018object}. We directly used the parameters in \cite{martin2018object} and did not fine-tune them on our data. The AUC value, obtained by varying the SVM score (distance from the hyperplane), is 0.668, and the classification accuracy on the test sets was 64.4\%.
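Baseline 1's fixation rule can be sketched as follows; we assume gaze distances sampled at a fixed interval and interpret the 120 ms criterion as a contiguous run within the radius (both the sampling assumption and the function name are ours):

```python
def fixated(gaze_dists, dt_ms, radius=2.5, min_dur_ms=120):
    """Rule-based baseline: the driver is assumed aware of an object if
    the gaze stayed within `radius` degrees of it for more than
    `min_dur_ms` milliseconds (contiguously) in the analysis window.
    `gaze_dists` are gaze distances sampled every `dt_ms` milliseconds."""
    run = 0.0
    for d in gaze_dists:
        run = run + dt_ms if d <= radius else 0.0
        if run > min_dur_ms:
            return True
    return False
```

ROC curves as in the text would then be obtained by sweeping `radius` over 0.1-30 degrees and `min_dur_ms` over 10-3000 ms.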
\subsubsection{Baseline 3: Method based on the positional relationship between gaze center point and object positions} This baseline uses gaze point-based features and object spatial-based features. However, the human visual sensory dependent features, i.e., the radius-based information described in Section \ref{sec:fovea-feature}, are not used for this baseline. Rather than using these features directly, a PCA module with five components has been applied to them to improve the baseline performance. The success rate in SA prediction by this baseline was 62.0\% (we set the classification threshold to maximize the success rate on the training set) and the AUC value was 0.681, as shown in Figure \ref{fig:baselines}. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{figures/method_123.png} \caption{Model performance of Methods 1, 2, and 3} \label{fig:performance} \vspace{-3mm} \end{figure} \subsection{Proposed method 1: Effect of object property-based features} The first proposed method incorporates object property-based features into the estimation of drivers' SA. As mentioned in the Features Section, the main reason for considering these features is that we believe these properties can significantly affect the difficulty of detecting objects. Method 1 was trained and tested with all Baseline 3 features plus the object property-based features, using a linear SVM model. A PCA module has been applied to the features to reduce the feature dimension to 11 (PCA has also been applied in the following evaluations). As shown in Figure \ref{fig:performance}, the performance of Method 1 is represented by the blue ROC curve, whose AUC value is 0.759. This performance clearly surpasses all three baselines. This result indicates that object property-based features have an impact on drivers' SA and can benefit SA prediction models.
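The parenthetical threshold choice above (maximizing the success rate on the training set) can be sketched as a simple search over candidate thresholds; names and the tie-breaking choice are ours:

```python
def best_threshold(scores, labels):
    """Pick the decision threshold on classifier scores that maximizes
    accuracy on the training set. `labels` are 0/1; a sample is
    predicted positive when its score is >= the threshold."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(set(scores)):
        acc = sum((s >= t) == bool(y)
                  for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:               # keep the first best threshold
            best_t, best_acc = t, acc
    return best_t, best_acc
```
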
\subsection{Proposed method 2: Effect of human visual sensory dependent features} Recognizing that different foveal ranges in the human visual system have different characteristics for perceiving objects with varying properties, the second proposed method incorporates the human visual sensory dependent features (fovea-based features). To evaluate the proposed features separately, we first combined the sensory-based features with the features in Baseline 3 and developed a linear SVM model for SA prediction. However, the performance improvement of the prediction model with this feature combination is limited compared to Baseline 3, as shown in the ROC curves in Figure \ref{fig:performance} and the accuracy rates in Table \ref{tab:SA_accuracy}. We then added the object property-based features to the prediction model; in other words, we combined the sensory-based features with the features used in Method 1. With all object and fovea features, the prediction performance outperforms all three baselines and Method 1, as shown by the green ROC curve with an AUC of $0.768$ in Figure \ref{fig:performance} and the prediction accuracy of $71.5\%$ listed in Table \ref{tab:SA_accuracy}.
\begin{table}[t] \caption{SA prediction accuracy} \begin{center} \begin{tabular}{m{60mm}P{16mm}} \hline \centering Method & Accuracy (\%) \\ \hline Chance rate (Percentage of majority class) & 55.8 \\ \hline Baseline 1 (Fixation rule) & 54.9 \\ \hline Baseline 2 (Fixation SVM) & 64.4 \\ \hline Baseline 3 (Gaze+Object spatial feature) & 62.0 \\ \hline Method 1 (Baseline 3+Object property features) & 69.9 \\ \hline Method 2 (Baseline 3+Sensory-based features) & 62.3 \\ \hline Method 1+2 (Method 1+Sensory-based features) & 71.5 \\ \hline Method 1+2+3 (Method 1, 2+Memory feature) & 72.4 \\ \hline \end{tabular} \end{center} \label{tab:SA_accuracy} \vspace{-3mm} \end{table} \subsection{Proposed method 3: Effect of memory-oriented re-ranking features} The third proposed method approximates the human memory system by implementing memory-oriented re-ranking features. As explained in the Features Section, we developed a two-classifier model structure, in which the first classifier provides re-ranking features and the second classifier simulates the memory system by suppressing objects exceeding human short-term memory capacity and then estimates drivers' SA. For the first classifier, we utilized the combination of proposed methods 1 and 2, which contains all object and fovea features processed by PCA. Then, we used the probability that the driver is aware of each object as part of the input to the second classifier. For the second classifier, a logistic regression model, we tried multiple $N$ values as the short-term memory capacity, and, interestingly, the performance was best when $N=7$ was used. This result aligns with the psychological finding of Miller's law \cite{miller1956magical}.
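The re-ranking step of the two-classifier structure can be sketched as follows, assuming the first classifier outputs an SA score per object in a scene (the function name is hypothetical; the ranking convention, 1 = highest score, follows the text):

```python
import math

def memory_scores(sa_scores, capacity=7):
    """Memory feature from the first-stage classifier's SA scores for
    all objects in one scene: rank objects by score (rank 1 = highest),
    then apply M_n = tanh(R_n - N), where N approximates short-term
    memory capacity (N = 7 performed best in the text)."""
    order = sorted(range(len(sa_scores)), key=lambda i: -sa_scores[i])
    ranks = [0] * len(sa_scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return [math.tanh(r - capacity) for r in ranks]
```

Objects ranked well beyond the capacity receive memory scores near $+1$ (suppressed as likely forgotten), while the top-ranked objects receive scores near $-1$.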
\begin{table*}[ht] \caption{An example of a detailed PCA analysis and interpretation -- coefficients of the top six PCA components with the
highest contribution percentages (absolute values of coefficients over 0.2 were in bold font)} \centering \begin{tabular}{P{2mm}r|P{11mm}P{11mm}P{11mm}P{11mm}P{11mm}P{11mm}P{11mm}P{11mm}P{11mm}P{11mm}} \hline \multicolumn{2}{c}{\# Contribution} & $G_{pause}$ & $G_{min}$ & $G_{ave}$ & $OS_{proximity}$ & $OS_{duration}$ & $OP_{relevance}$ & $OP_{light}$ & $OS_{size}$ & $OS_{density}$ & $OP_{type}$ \\ \hline 1 & 25.7 & -0.04 & -0.06 & -0.02 & -0.09 & 0.09 & \textbf{0.26} & \textbf{0.39} & 0.09 & -0.02 & \textbf{-0.43} \\ \hline 2 & 17.2 & 0.09 & 0.16 & 0.07 & \textbf{0.23} & \textbf{-0.29} & 0.17 & 0.06 & -0.02 & 0.19 & -0.01 \\ \hline 3 & 11.7 & -0.04 & -0.08 & -0.03 & -0.12 & 0.18 & \textbf{0.42} & -0.10 & 0.11 & \textbf{-0.25} & 0.16 \\ \hline 4 & 10.0 & -0.03 & -0.11 & -0.04 & -0.08 & \textbf{0.22} & -0.18 & -0.02 & \textbf{-0.23} & \textbf{0.20} & \textbf{-0.25} \\ \hline 5 & 8.4 & -0.05 & -0.11 & -0.03 & -0.12 & -0.19 & \textbf{-0.24} & \textbf{0.35} & -0.10 & 0.16 & 0.16 \\ \hline 6 & 6.9 & 0.05 & -0.03 & 0.02 & 0.15 & \textbf{0.21} & \textbf{-0.35} & 0.19 & 0.17 & \textbf{-0.25} & \textbf{0.28} \\ \hline & & $OP_{contrast}^{low}$ & $OP_{contrast}^{med}$ & $OP_{contrast}^{high}$ & $OP_{movement}^{static}$ & $OP_{movement}^{slow}$ & $OP_{movement}^{med}$ & $OP_{movement}^{high}$ & $OP_{change}$ & $HV^{\theta_n=2.5}_{elapse}$ & $HV^{\theta_n=2.5}_{dwell}$ \\ \hline 1 & & \textbf{-0.31} & -0.09 & \textbf{0.40} & \textbf{-0.31} & -0.03 & 0.13 & \textbf{0.21} & \textbf{0.29} & -0.07 & 0.09 \\ \hline 2 & & \textbf{-0.20} & \textbf{0.25} & -0.06 & 0.05 & \textbf{-0.45} & \textbf{0.25} & 0.15 & 0.18 & -0.03 & -0.12 \\ \hline 3 & & 0.18 & -0.12 & -0.05 & \textbf{0.43} & \textbf{-0.55} & 0.05 & 0.08 & 0.15 & 0.04 & 0.08 \\ \hline 4 & & -0.01 & \textbf{0.51} & \textbf{-0.49} & -0.05 & -0.08 & -0.01 & 0.14 & \textbf{0.29} & 0.13 & 0.01 \\ \hline 5 & & \textbf{0.20} & \textbf{-0.23} & 0.04 & 0.14 & -0.08 & \textbf{-0.47} & \textbf{0.41} & \textbf{0.25} & -0.01 & -0.13 \\ 
\hline 6 & & \textbf{-0.52} & \textbf{0.26} & \textbf{0.26} & \textbf{0.26} & -0.07 & -0.16 & -0.03 & -0.01 & 0.15 & -0.01 \\ \hline & & $HV^{\theta_n=2.5}_{average}$ & $HV^{\theta_n=4.1}_{elapse}$ & $HV^{\theta_n=4.1}_{dwell}$ & $HV^{\theta_n=4.1}_{average}$ & $HV^{\theta_n=9.1}_{elapse}$ & $HV^{\theta_n=9.1}_{dwell}$ & $HV^{\theta_n=9.1}_{average}$ & $HV^{\theta_n=15.0}_{elapse}$ & $HV^{\theta_n=15.0}_{dwell}$ & $HV^{\theta_n=15.0}_{average}$ \\ \hline 1 & & -0.02 & -0.07 & 0.10 & -0.06 & -0.06 & 0.12 & -0.06 & -0.05 & 0.10 & -0.06 \\ \hline 2 & & 0.08 & 0.00 & -0.17 & \textbf{0.20} & 0.04 & \textbf{-0.28} & 0.17 & 0.09 & \textbf{-0.32} & 0.15 \\ \hline 3 & & -0.03 & 0.03 & 0.10 & -0.09 & 0.00 & 0.14 & -0.07 & -0.06 & 0.19 & -0.06 \\ \hline 4 & & -0.04 & 0.11 & 0.04 & -0.13 & 0.04 & 0.14 & -0.10 & -0.01 & \textbf{0.21} & -0.08 \\ \hline 5 & & -0.04 & -0.03 & -0.14 & -0.12 & -0.05 & -0.14 & -0.11 & -0.10 & -0.14 & -0.10 \\ \hline 6 & & 0.00 & 0.15 & -0.04 & -0.01 & 0.14 & -0.01 & -0.01 & 0.05 & 0.13 & 0.02 \\ \hline \end{tabular} \label{tab:PCA} \end{table*} The performance of this method is represented by the red ROC curve in Figure \ref{fig:performance}, with an AUC value of 0.762. Compared to Method 1+2, the prediction accuracy improved from 71.5\% to 72.4\%, the highest among all baselines and proposed methods. We also tested proposed method 3 alone (i.e., using the Baseline 3 classifier to obtain the memory-based features), but the classification performance did not improve. We believe the main reason is that the quality of the memory-based feature relies on the performance of the first classifier, and the ranking produced by Baseline 3 was not as good as the ranking produced by the Method 1+2 classifier.
\section{DISCUSSION} To better understand how the features were used in the proposed methods, we analyzed the meaning of the PCA components\footnote{Note that this analysis does not address how the components affect the classification.}. Table \ref{tab:PCA} summarizes the weights of all features for the top 6 PCA components with the highest contributions. We analyzed the top 6 components because their cumulative explained variance ratio was 80\%. It can be observed that the weights of the spatial-relationship based features ($OS_x$) are generally low, which suggests that these features contribute more noise than important information. Together with the result that proposed method 2 (using fixation information) outperforms Baseline 3, this indicates that using knowledge of the human vision system is more useful than using raw gaze movement information. The first component has large weights on many of the object property-based features, including contrast and motion as well as the importance of the objects, such as object type, relevance to the ego-car's future trajectory, and the traffic signal. We think this component summarizes features that represent the saliency of objects (how easy the objects are to detect). The second and third components have larger weights on total fixation time and movement, as well as state changes of the objects. Interestingly, the weights of the second component are larger for wider fields of view: the weights for the 2.5, 4.1, 9.1 and 15.0 degree radii are -0.122, -0.173, -0.278 and -0.319, respectively. We believe these components characterize how people track (already detected) objects, and that people's peripheral vision is related to this tracking behavior. In addition to the PCA modules, another important part of our methods is the two-classifier structure for approximating human short-term memory capacity. In our model, we used the hyperbolic tangent function to model the memory-based feature, i.e., $M_n=\tanh(R_n - N)$.
We compared our definition with alternatives using the Heaviside step function (i.e., $M_n=1$ if $R_n>N$, $0$ if $R_n\leq N$) and a linear function (i.e., $M_n=R_n$). We confirmed that the score using the hyperbolic tangent was the best among these. We believe the hyperbolic tangent outperforms the step function because it is more robust against differences between scenes and between individuals. The fact that our model outperforms the linear model indicates that memory capacity is used roughly equally for each object, rather than major objects taking up the majority of the memory capacity. The first major limitation of our work comes from the limited quantity of data that we were able to collect. Although our test scenes (road intersections) involve typical situations where most accidents happen\footnote{Approximately 40\% of accidents happen at intersections according to Federal Highway Administration reports \cite{NHTSA2016b}.}, it is unclear how our results generalize to other sites, traffic conditions, and weather conditions. Thus, the applicability of the proposed methods to other situations needs to be explored further. We are also aware of the limitations due to collecting data with a driving simulator. The simulator has a limited field of view of the traffic scenes, and depth perception is absent (due to the flat projector screen). While we assumed that participants in the data collection process behaved similarly to drivers in real-world driving scenarios, their risk perception of the traffic scenes could differ. Nevertheless, we believe the proposed approaches and findings of this paper provide essential and necessary steps towards developing human- and situation-aware user assistance systems. \section{CONCLUSION} Understanding drivers' situation awareness can significantly benefit advanced driver assistance systems for intelligent vehicles in warning drivers about important traffic participants that could potentially cause accidents.
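The three candidate memory functions compared above could look like this in a hypothetical re-implementation (rank $R_n$ and capacity $N$ as in the text; function names are ours):

```python
import math

def memory_tanh(rank, capacity=7):
    """Hyperbolic tangent variant: M_n = tanh(R_n - N)."""
    return math.tanh(rank - capacity)

def memory_step(rank, capacity=7):
    """Heaviside variant: M_n = 1 if R_n > N, else 0."""
    return 1.0 if rank > capacity else 0.0

def memory_linear(rank, capacity=7):
    """Linear variant: M_n = R_n (capacity unused)."""
    return float(rank)
```

The tanh variant changes smoothly around the capacity $N$, which is consistent with the robustness argument above: small ranking disagreements between scenes or individuals shift the score only slightly, whereas the step function flips it completely.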
In this work, we developed prediction models for estimating drivers' situation awareness by utilizing their gaze behavior, object spatial- and property-based features, and models of human visual sensory and short-term memory mechanisms. The prediction accuracy of our models exceeds 70\% and outperforms baselines established from previous studies. While increasing the fidelity of the experimental set-up, as described in the limitations paragraph, is important future work, we would also like to test the effectiveness of our concept in practice. That is, we would like to implement an adaptive warning system by combining a real-time SA tracking system with object-importance estimation systems (such as \cite{martin2018object}) to locate important objects that drivers are not aware of and provide corresponding warnings. It is also important future work to investigate the best strategy regarding when to issue a warning (or not) to maximize drivers' safety. \section*{ACKNOWLEDGMENT} We gratefully acknowledge Dr. Hyungil Kim and Dr. Joe Gabbard for their assistance with data collection and annotation. \bibliographystyle{ieeetran}
\section{Introduction} \label{sec:intro} Asteroid families are a powerful tool to investigate the collisional and dynamical evolution of the asteroid belt. If they are correctly identified, they make it possible to describe the properties of the parent body, the collisional event(s) generating the family, and the subsequent evolution due to chaotic dynamics, non-gravitational perturbations, and secondary collisions. However, asteroid families are statistical entities. They can be detected by density contrast with respect to a background in the phase space of appropriate parameters, but their reliability strongly depends upon the number of members, the accuracy of the parameters, and the method used for the classification. The exact membership cannot be determined, because the portion of phase space filled by the family may already have been occupied by some background asteroids; thus every family can, and usually does, have interlopers \cite{Migliorinietal95}. Moreover, the exact boundary of the region occupied by the family corresponds to marginal density contrast and thus cannot be sharply defined. The problem is that the purpose of a classification is not just to get as many families as possible, but to obtain families with enough members, with a large enough spread in size, and with accurate enough data to derive quantities such as the number of family-generating collisional events, an age for each of them, the sizes of the parent bodies, the size distribution of the fragments (taking observational bias into account), the composition, and the flow from the family to Near Earth Asteroid (NEA) and/or cometary type orbits. This has three important implications. First, the quality of a family classification heavily depends on the number of asteroids used, and upon the accuracy of the parameters used in the process. The number of asteroids is especially critical, because small number statistics masks all the information content in small families.
Moreover, there is no way to exclude that families currently with few members are statistical flukes, even if they are found to be significant by statistical tests. Second, different kinds of parameters have very different value in a family classification. This can be measured by the information content, which is, for each parameter, the base 2 logarithm of the inverse of the relative accuracy, summed over all asteroids for which such data are available; see Section~\ref{sec:dataset}. By using this measure, it is easy to show that the dynamical parameters, such as the proper elements\footnote{As an alternative the corresponding frequencies $n,g,s$ can be used; the results should be the same if the appropriate metric is used \cite{propfreq}.} $a,e,I$, have a much larger information content than the physical observations, because the latter are either available only for much smaller catalogs of asteroids, or with worse relative accuracy, or both. As an example, absolute magnitudes are available for all asteroids with good orbits and accurate proper elements, but they are computed by averaging over inhomogeneous data with poorly controlled quality. There are catalogs with albedos, such as WISE, described in \cite{WISE, NEOWISE}, and color indexes, such as the Sloan Digital Sky Survey (SDSS), described in \cite{sdss}, but the number of asteroids included is smaller by factors of 3 to 5 with respect to the proper elements catalogs, and this already includes many objects observed with marginal S/N, thus with poor relative accuracy. Third, catalogs with asteroid information grow with time at a rapid and accelerating pace: e.g., we have used for this paper a catalog, distributed from AstDyS\footnote{http://hamilton.dm.unipi.it/astdys2/index.php?pc=5}, with $336\,219$ numbered asteroids with synthetic proper elements computed up to November 2012. By April 2013 the same catalog had grown to $354\,467$ objects, i.e., by $5.4\%$ in 5 months.
If asteroid numbering were to continue at this rate, a pessimistic assumption, the catalog would double in less than 8 years. Catalogs of physical observations are also likely to grow, although in a less regular and predictable way. Thus the question is whether the growth of the catalogs will either confirm or contradict the conclusions we draw at present; or better, the goal should be to obtain robust conclusions which will pass the test of time. As a consequence, our purpose in this paper is not to produce ``yet another asteroid family classification'', but to set up a classification which can be automatically updated every time the dataset is increased. We are going to use the proper elements first, that is, defining ``dynamical families'', and then use information from absolute magnitudes, albedos and generalized color indexes, when available and useful, as either confirmation or rejection, e.g., identification of interlopers and possibly of intersections of different collisional families. We will continue to update the classification by running at short intervals (a few months) a fully automated ``classification'' computer program which attaches newly numbered objects to existing families. This will not remove the need to work on the identification of new families and/or on the analysis of the properties of the already known families. Every scientist will be able to do this at will, since for our dataset and our classification we follow a strict open data policy, which was already in place when this paper was submitted: all the data on our family classification are already available on AstDyS, and in fact they have already been updated with respect to the version described in this paper. We have also recently made operational a new graphic interface on AstDyS providing figures very much like the examples shown in this paper\footnote{http://hamilton.dm.unipi.it/astdys2/Plot/index.html}.
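The information content measure introduced above (the base 2 logarithm of the inverse relative accuracy, summed over all asteroids for which data are available) could be computed as follows; the exclusion of measurements with relative accuracy worse than 1 is our assumption, not a rule stated in the text:

```python
import math

def information_content(values, sigmas):
    """Information content of one catalog parameter: sum over objects of
    log2(1 / relative accuracy), where relative accuracy = sigma/|value|.
    Only measurements whose value exceeds its sigma contribute
    (an assumption for this sketch)."""
    total = 0.0
    for v, s in zip(values, sigmas):
        if s > 0 and abs(v) > s:
            total += math.log2(abs(v) / s)
    return total
```
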
The senior authors of this paper have learned much from the books Asteroids I and Asteroids II, which contained Tabulation sections with large printed tables of data. This was abandoned in Asteroids III, because by the year 2000 the provision of data in electronic form, of which AstDyS is just one example, was the only service practically in use. It is now time to reduce the number of figures, especially in color, and replace them with ``graphic servers'' providing figures based upon the electronically available data and composed upon request from the users: this is what the AstDyS graphic tool is proposing, as an experiment which may be followed by others. Readers are warmly invited to take advantage of this tool, although the most important statements we make in the discussion of our results are already supported by a (forcibly limited) number of explanatory figures. The main purpose of this paper is to describe and make available some large data sets; only some of the interpretations are given, mostly as examples to illustrate how the data could be used. We would like to anticipate one major conceptual tool, which will be presented in the paper. This concerns the meaning of the word ``family'', which has become ambiguous because of usage by many authors with very different backgrounds and intents. We shall use two different terms: since we believe the proper elements contain most of the information, we perform a family classification based only upon them, thus defining \textit{dynamical families}. A different term, \textit{collisional families}, shall be used to indicate a set of asteroids in the catalog which have been formed at once by a single collision, be it either a catastrophic fragmentation or a cratering event. Note that there is no reason to think that the correspondence between dynamical families and collisional families should be one to one.
A dynamical family may correspond to multiple collisional events, both on the same and on different parent bodies. A collisional family may appear as two dynamical families because of a gap due to dynamical instability. A collisional family might have dissipated long ago, leaving no dynamical family. A small dynamical family might be a statistical fluke, with no common origin for its members. In this paper we shall give several examples of these non-correspondences. \section{Dataset} \label{sec:dataset} In this section we list all the datasets used in our classification and in the analysis contained in this paper. \subsection{Proper elements} Proper elements $a, e, \sin{I}$ are three very accurate parameters, and we have computed, over many years up to November 2012, synthetic proper elements \cite{synthpro1,synthpro2} for $336\,219$ asteroids. We made a special effort to recompute synthetic proper elements for asteroids missing from our database for different reasons: being close to strong resonances, suffering close encounters with major planets, or having high eccentricities and inclinations. In particular we aimed at increasing the number of objects in the high $e,I$ regions, in order to improve upon the sparse statistics and the reliability of small families therein. We thus integrated the orbits of a total of $2\,222$ asteroids, and the proper elements computation failed for only $62$ of them; the rest are now included in the AstDyS synthetic proper elements files. This file is updated a few times per year. For each individual parameter in this dataset both the standard deviation and the maximum excursion, obtained from the analysis of the values computed in sub-intervals, are available \cite{synthpro1}. If an asteroid has large instabilities in the proper elements, as happens for high proper $e, \sin{I}$, then the family classification can be dubious.
The same catalog also contains absolute magnitudes and estimates of Lyapounov Characteristic Exponents, discussed in the following subsections. \subsection{Absolute magnitudes} Another piece of information is the set of absolute magnitudes available for all numbered asteroids, computed by fitting the apparent magnitudes obtained incidentally with the astrometry and thus stored in the Minor Planet Center (MPC) astrometric database. The range of values for all numbered asteroids is $15.7$; the accuracy is difficult to estimate because the incidental photometry is very inhomogeneous in quality, and the information on both S/N and reduction method is not available. The sources of error in the absolute magnitude data are not just the measurement errors, which are dominant only for dim asteroids, but also star catalog local magnitude biases, errors in the conversion from star-calibrated magnitudes to the assumed V magnitude used in absolute magnitudes, and the lightcurve effect. Moreover, the brightness of an asteroid changes at different apparitions as an effect of shape and spin axis orientation, so strictly speaking the absolute magnitude is not a real constant. Last but not least, the absolute magnitude is defined at zero phase angle (ideal solar opposition), thus some extrapolation from observations obtained in different illumination conditions is always needed, and this introduces other significant errors, especially for high phase angles. The standard deviation of the apparent magnitude residuals has a distribution peaked at $0.5$ mag: since numbered asteroids have in general many tens of magnitude observations, the formal error in the absolute magnitude, which is just a corrected average, is generally very small, but biases can be very significant.
Thus we do not have a reliable estimate of the accuracy for the large dataset of $336\,219$ absolute magnitudes computed by AstDyS; we can just guess that the standard deviation should be in the range $0.3\div 0.5$ magnitudes. The more optimistic estimate gives a large information content (see Table~\ref{tab:infocount}), which would be very significant, but there are many quality control problems. Other databases of photometric information with better and more consistent data are available, but the number of objects included is much smaller, e.g., $583$ asteroids with accuracy better than $0.21$ magnitudes \cite{pravec2012}: these authors also document the existence of serious systematic size-dependent biases in the H values. \subsection{WISE and SDSS catalogs of physical observations} The WISE catalog of albedos\footnote{Available at URL http://wise2.ipac.caltech.edu/staff/bauer/NEOWISE\_pass1} has information on $94\,632$ numbered asteroids with synthetic proper elements, but the relative accuracy is moderate: the average standard deviation (STD) is $0.045$, but the average ratio between STD and value is $0.28$; e.g., $26\%$ of the asteroids in the WISE catalog have a measured albedo $<3$ times the STD (we are going to use a measured value $>3$ times the STD as the criterion for using WISE data in Section~\ref{sec:use_physical}). This is due to the fact that WISE albedos, in particular for small asteroids, have been derived from a \textit{measured} infrared flux and \textit{an estimated} visible light flux derived from an adopted nominal value of absolute magnitude. Both terms are affected by random noise increasing for small objects, and by systematics, in particular in the visible, as outlined in the previous subsection. In principle one should use a value of absolute magnitude which is not only accurate, but also corresponds to the same observation circumstances as the thermal IR observations, which is seldom the case.
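The significance criterion quoted above can be expressed as a one-line filter; the function name and interface are our own illustration, not part of the AstDyS or WISE software:

```python
def albedo_is_significant(albedo, albedo_std):
    # Keep a WISE albedo only when the measured value exceeds 3 times
    # its quoted standard deviation (the criterion adopted in the text
    # for using WISE data in the family analysis)
    return albedo > 3.0 * albedo_std
```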
For comparatively large asteroids, the albedo can be constrained also from the ratios between different infrared channels of WISE, thus the result may be less affected by the uncertainty of the absolute magnitude. The 4th release of the SDSS MOC\footnote{Available at URL http://www.astro.washington.edu/users/ivezic/sdssmoc/} contains data for 471\,569 moving objects, derived from images taken in five filters, $u$, $g$, $r$, $i$, and $z$, ranging from 0.3 to 1.0 $\mu$m. Of those, 123\,590 records refer to asteroids present in our catalog of proper elements. As many of these objects have more than one record in the SDSS catalog, the total number of asteroids present in both catalogs is 61\,041. The latter number was then further decreased by removing objects having an error in magnitude larger than 0.2 mag in at least one band (excluding the $u$-band, which was not used in our analysis). This non-conservative limit is used to remove only obviously degenerate cases. Thus, we used the SDSS MOC 4 data for 59\,975 numbered asteroids. It is well known that spectrophotometric data are of limited accuracy, and should not be used to determine colors of single objects. Still, if properly used, the SDSS data can be useful in some situations, e.g., to detect the presence of more than one collisional family inside a dynamical family, or to trace objects escaping from the families. Following \cite{parker2008} we have used $a^{*}$, the first principal component\footnote{According to \cite{sdss} the first principal component in the $r-i$ vs $g-r$ plane is defined as $a^{*} = 0.89(g-r) + 0.45(r-i) - 0.57$.} in the $r-i$ versus $g-r$ color-color plane, to discriminate between $C$-type asteroids ($a^{*}<0$) and $S$-type asteroids ($a^{*}\geq0$). Then, among the objects having $a^{*}\geq0$, the $i-z$ values can be used to further discriminate between $S$- and $V$-type, with the latter characterized by $i-z < -0.15$.
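As a sketch, the color-based discrimination just described can be coded as follows; the function name is ours, while the $a^*$ definition and the thresholds are those quoted in the text and footnote:

```python
def sdss_class(g, r, i, z):
    """Taxonomic discrimination from SDSS colors, following the text:
    a* = 0.89(g-r) + 0.45(r-i) - 0.57 (first principal component);
    a* < 0 -> C type; a* >= 0 -> S type, unless i-z < -0.15 -> V type."""
    a_star = 0.89 * (g - r) + 0.45 * (r - i) - 0.57
    if a_star < 0:
        return a_star, 'C'
    if i - z < -0.15:
        return a_star, 'V'
    return a_star, 'S'
```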
The average standard deviation of the data we used is $0.04$ for $a^*$ and $0.08$ for $i-z$. \subsection{Resonance identification} Another source of information available as an output of the numerical integrations used to compute synthetic proper elements is an estimate of the maximum Lyapounov Characteristic Exponent (LCE). The main use of this is to identify asteroids affected by chaos over comparatively short time scales (much shorter than the ages of the families)\footnote{Every asteroid is affected by chaotic effects over timescales comparable to the age of the solar system, but this does not matter for family classification.}. These are mostly due to mean motion resonances with major planets (only resonances with Jupiter, Saturn and Mars affect the Main Belt at levels which are significant for family classification). Thus, as the criterion to detect these ``resonant/chaotic'' asteroids, we use the occurrence of at least one of the following: an LCE $> 50$ per Million years (that is, a Lyapounov time $<20\,000$ years) or $STD(a)>3\times 10^{-4}$ au. Note that the numerical integrations done to compute proper elements use a dynamical model not including any asteroid as perturber. This is done for numerical stability reasons, because all asteroids undergo mutual close approaches and these would need to be handled accurately, which is difficult while integrating hundreds of asteroids simultaneously. Another reason for this choice is that we wish to identify specifically the chaos which is due to mean motion resonances with the major planets. As shown by \cite{laskar1}, if even a few of the largest asteroids are included with their masses in the dynamical model, then all asteroids are chaotic with Lyapounov times of a few $10\,000$ years. However, the long term effect of such chaos endogenous to the asteroid belt is less important than the chaos generated by the planetary perturbations \cite{laskar2}.
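The two-part detection criterion above translates directly into code; this is a minimal sketch with names of our own choosing:

```python
def is_resonant_chaotic(lce_per_myr, std_a_au):
    # An asteroid is flagged as "resonant/chaotic" if at least one holds:
    # LCE > 50 per Myr (i.e., Lyapounov time < 20,000 y), or
    # standard deviation of proper a larger than 3e-4 au
    return lce_per_myr > 50.0 or std_a_au > 3.0e-4
```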
The asteroid perturbers introduce many new frequencies, resulting in an enormous increase of the Arnold web of resonances, to the point of leaving almost no space for conditionally periodic orbits, and the Lyapounov times are short because the chaotic effects are driven by mean motion resonances. However, these resonances are extremely weak, and they do not result in large scale instability, not even over time spans of many thousands of Lyapounov times: the so called ``stable chaos'' phenomenon \cite{helga}. In particular, locking in a stable resonance with another asteroid is almost impossible, the only known exception being the $1/1$ resonance with Ceres; see the discussion about the Hoffmeister family in Section~\ref{sec:haloproblem} and Figure~\ref{fig:Hoffmeister_aI}. This implies that the (size-dependent) Yarkovsky effect, which accumulates secularly in time in semimajor axis, cannot have secular effects in eccentricity and inclination, as happens when there is capture in resonance. We have developed a sensitive detector of mean motion resonances with the major planets, but we would also like to know which resonance is responsible, that is, which integer combination of mean motions forms the ``small divisor''. For this we use the catalog of asteroids in mean motion resonances by \cite{smirnov}, which has also been provided to us by the authors in an updated and computer readable form. This catalog will continue to be updated, and the information will be presented through the AstDyS site. Asteroid families are also affected by secular resonances, with a ``divisor'' formed by an integer combination of frequencies appearing in the secular evolution of perihelia and nodes, namely $g, g_5, g_6$ for the perihelia of the asteroid, Jupiter and Saturn, and $s, s_6$ for the nodes of the asteroid and Saturn\footnote{In the Hungaria region even some resonances involving the frequencies $g_3,g_4, s_4$ for Earth and Mars can be significant \cite{hungaria}.}.
The data on the asteroids affected by secular resonances can be found with the analytic proper elements, computed by us with methods developed in the 90s \cite{propel1, propel2, propel3}. In these algorithms, the small divisors associated with secular resonances appear as obstructions to the convergence of the iterative algorithm used to compute proper elements, thus error codes corresponding to the secular resonances are reported with the proper elements\footnote{We must admit these codes are not user friendly, although a table of conversion from the error codes to the small divisors is given in \cite[table 5.1]{propel1}. We shall try to improve the AstDyS user interface on this.}. Note that we have not used analytic proper elements as a primary set of parameters for family classification, since they are significantly less accurate (that is, less stable in time over millions of years) than the synthetic ones, by a factor of about 3 in the low $e$ and $I$ portion of the main belt. The accuracy becomes even worse for high $e,I$, to the point that for $\sqrt{e^2+\sin^2{I}}>0.3$ the analytic elements are not even computed \cite{compareproper}; they are also especially degraded in the outer portion of the main belt, for $a>3.2$ au, due to too many mean motion resonances. On the other hand, analytic proper elements are available for multiopposition asteroids, e.g., for $98\,926$ of them in November 2012, but these would be most useful in the regions where the number density of numbered asteroids is low, which coincide with the regions of degradation: high $e,I$ and $a>3.2$ au. It is also possible to use proper elements for multiopposition asteroids to confirm and extend the results of family classification, but for this purpose it is, for the moment, recommended to use ad hoc catalogs of synthetic proper elements extended to multiopposition asteroids, as we have done in \cite{hungaria,bojan_highi}.
\subsection{Amount of information} For the purpose of computing the information content of each entry of the catalogs mentioned in this section, we define the inverse relative accuracy as the ratio of two quantities: (1) for each parameter, the useful range, within which most ($> 99\%$) of the values are contained; (2) the standard deviation, as declared in each catalog, for each individual computed parameter. Then the content in bits of each individual parameter is the base 2 logarithm of this ratio. These values are added for each asteroid in the catalog, thus forming a total information content, reported in the last column of Table~\ref{tab:infocount} in Megabits. For statistical purposes we also give the average number of bits per asteroid in the catalog. \begin{table}[h] \footnotesize \centering \caption{An estimate of the information content of catalogs. The columns give: parameters contained in the catalogs, minimum and maximum values and range of the parameters, average information content in bits for a single entry in the catalog, number of entries in the catalog and total information content.} \label{tab:infocount} \medskip \begin{tabular}{lrrrrrr} \hline parameter & min & max & range & av.bits & number & tot Mb \\ \hline \\ a (au) & 1.80 & 4.00 & 2.20 & 16.7 &336219 & 5.63\\ e & 0.00 & 0.40 & 0.40 & 10.7 &336219 & 3.59\\ sin I & 0.00 & 0.55 & 0.55 & 15.1 &336219 &5.08\\ total& & & & & & 14.39 \\ \\ \hline \\ H & 3.30 & 19.10 & 15.8 & 5.7 &336219& 1.92 \\ \\ \hline \\ albedo & 0.00 & 0.60 & 0.60 & 4.5 &94632& 0.43\\ \\ \hline \\ a* & -0.30 & 0.40 & 0.70 & 4.4 &59975 & 0.26\\ i-z & -0.60 & 0.50 & 1.10 & 4.0 &59975 & 0.24\\ total & & & & & & 0.50\\ \\ \hline \end{tabular} \end{table} Note that for the absolute magnitude we have assumed a standard deviation of $0.3$ magnitudes for all asteroids, although this might be optimistic.
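The per-parameter information content just defined can be reproduced in a few lines; e.g., the absolute magnitude row of Table~\ref{tab:infocount} (range 15.8 mag, assumed standard deviation 0.3 mag) gives about 5.7 bits per asteroid:

```python
import math

def info_bits(useful_range, sigma):
    # information content of one parameter, in bits: the base 2
    # logarithm of (useful range / standard deviation)
    return math.log2(useful_range / sigma)

# absolute magnitude row of the table: range 15.8 mag, sigma 0.3 mag
h_bits = info_bits(15.8, 0.3)   # about 5.7 bits per asteroid
```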
With the numbers in Table~\ref{tab:infocount} we can estimate the information content of our synthetic proper elements catalog to be about $14$ Megabits; the absolute magnitudes provide almost $2$ Megabits under somewhat optimistic assumptions, and the physical data catalogs WISE and SDSS together provide $1$ Megabit. \section{Method for large dataset classification} \label{sec:method} Our adopted procedure for family identification is largely based on applications of the classical Hierarchical Clustering Method (HCM) already adopted in previous family searches performed since the pioneering work by \cite{zapetal90}, and later improved in a number of papers \cite{zapetal94, zapetal95, hungaria, bojan_highi}. Since the method has already been extensively explained in the above papers, here we limit ourselves to a very quick and basic description. We have divided the asteroid belt into zones, corresponding to different intervals of heliocentric distance, delimited by the most important mean-motion resonances with Jupiter. These resonances correspond to Kirkwood gaps wide enough to exclude family classification across the boundaries.
\begin{table}[h] \footnotesize \centering \caption{Summary of the relevant parameters for application of the HCM.} \label{tab:zones} \medskip \begin{tabular}{lccrrrrr} \hline \\ Zone & $\sin{I}$ & range $a$ & $N$ (total) & $H_{completeness}$ & $N(H_{completeness})$ & $N_{min}$ & \multicolumn{1}{c}{QRL} \\ & & & & \multicolumn{1}{c}{(when used)} & \multicolumn{1}{c}{(when used)} & & \multicolumn{1}{c}{(m/s)} \\ \\ \hline \\ 1 & $>0.3$ & $1.600\div 2.000$ & 4249 & & & 15 & 70 \\ 2 & $<0.3$ & $2.065\div 2.501$ & 115004 & 15.0& \multicolumn{1}{c}{15888} & 17 & 70 \\ 2 & $>0.3$ & $2.065\div 2.501$ & 2627 & & & 11 & 130 \\ 3 & $<0.3$ & $2.501\div 2.825$ & 114510 & 14.5& \multicolumn{1}{c}{16158} & 19 & 90 \\ 3 & $>0.3$ & $2.501\div 2.825$ & 3994 & & & 9 & 140 \\ 4 & $<0.3$ & $2.825\div 3.278$ & 85221 & 14.0& \multicolumn{1}{c}{14234} & 17 & 100 \\ 4 & $>0.3$ & $2.825\div 3.278$ & 7954 & & & 12 & 80 \\ 5 & all & $3.278\div 3.700$ & 991 & & & 10 & 120 \\ 6 & all & $3.700\div 4.000$ & 1420 & & & 15 & 60 \\ \\ \hline \end{tabular} \end{table} As shown in Table~\ref{tab:zones}, our ``zone 1'' includes objects having proper semi-major axes between 1.6 and 2 au. In this region, only the so-called Hungaria objects at high inclination ($\sin I \ge 0.3$) are dynamically stable \cite{hungaria}. Our ``zone 2'' includes objects with proper semi-major axes between 2.065 and 2.501 au. ``Zone 3'' is located between 2.501 and 2.825 au, and ``zone 4'' between 2.825 and 3.278 au. Zones 2, 3 and 4 were already used in several previous analyses by our team. In addition, we also use a ``zone 5'', corresponding to the interval between 3.278 and 3.7 au, and a ``zone 6'', extending between 3.7 and 4.0 au (Hilda zone). Moreover, some semi-major axis zones have also been split by the value of proper $\sin I$, between a moderate inclination region $\sin I < 0.3$ and a high inclination region $\sin I > 0.3$.
In some zones the boundary value corresponds to a gap due to secular resonances and/or stability boundaries. E.g., in zone 1 the moderate inclination region is almost empty and contains very chaotic objects (interacting strongly with Mars). In zone 2 the $g - g_6$ secular resonance clears a gap below the Phocaea region. In zones 3 and 4 there is no natural dynamical boundary which could be defined by inclination only, and indeed we have problems with families found in both regions. In zones 5 and 6 there are few high inclination asteroids, and a much smaller population. A metric function has been defined to compute the mutual distances of objects in the proper element space. Here, we have adopted the so-called ``standard metric'' $d$ proposed by \cite{zapetal90}, and since then adopted in all HCM-based family investigations. We recall that with this metric the distances between objects in the proper element space correspond to differences of velocity and are given in m/s. Having adopted a metric function, the HCM algorithm allows one to identify all existing groups of objects which, at any given value of mutual distance $d$, are linked, in the sense that for each member of a group there is at least one other member of the same group which is closer than $d$. The basic idea is therefore to identify groups which are sufficiently populous, dense and compact (i.e., which include large numbers of members down to small values of mutual distance) to be reasonably confident, in a statistical sense, that their existence cannot be a consequence of random fluctuations of the distribution of the objects in the proper element space. In this kind of analysis, which is eminently statistical, a few parameters play a key role. The most important ones are the minimum number of objects $N_{min}$ required for groups to be considered as candidate families, and the critical level of distance adopted for family identification.
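For reference, the standard metric is commonly written as $d = na\sqrt{\frac{5}{4}(\delta a/a)^2 + 2(\delta e)^2 + 2(\delta \sin I)^2}$, with $na$ the heliocentric orbital velocity; the coefficients quoted here should be checked against \cite{zapetal90}. A minimal sketch, with our own function names and the conventional values of the constants:

```python
import math

AU = 1.495978707e11          # astronomical unit in meters
GM_SUN = 1.32712440018e20    # heliocentric gravitational parameter, m^3/s^2

def standard_metric(el1, el2):
    """Distance in m/s between two points in proper element space
    (a in au, e, sin I), using the commonly quoted form of the standard
    metric of Zappala et al. (1990):
    d = n*a * sqrt(5/4*(da/a)^2 + 2*de^2 + 2*dsinI^2)."""
    a1, e1, s1 = el1
    a2, e2, s2 = el2
    a_mean = 0.5 * (a1 + a2)                 # mean semimajor axis, au
    na = math.sqrt(GM_SUN / (a_mean * AU))   # orbital velocity n*a, m/s
    da = (a2 - a1) / a_mean                  # relative difference in a
    return na * math.sqrt(1.25 * da * da
                          + 2.0 * (e2 - e1) ** 2
                          + 2.0 * (s2 - s1) ** 2)
```

For example, two orbits at $a = 2.5$ au differing only by $0.001$ in proper eccentricity are separated by a few tens of m/s, comparable to the QRL values in Table~\ref{tab:zones}.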
As for $N_{min}$, it is evident that its choice depends on the total number of objects present in a given region of the phase space. At the epoch of the pioneering study by \cite{zapetal90}, when the total inventory of asteroids with computed proper elements included only about 4,000 objects in the whole main belt, its value was chosen to be 5. Since then, in subsequent analyses considering increasingly large datasets, the adopted values of $N_{min}$ were scaled as the square root of the ratio between the number of objects in the present dataset and the one in some older sample in the same volume of the proper element space. We follow this procedure also in the present paper, so we chose the new $N_{min}$ values by scaling with respect to the $N_{min}$ adopted by \cite{zapetal95} for the low-I portions of zones 2, 3, and 4, and by \cite{bojan_highi} for the high-I portions of the same zones. Zones 5 and 6 were analyzed for the first time; they contain relatively low numbers of objects, and for them we adopted $N_{min}$ values close to $1\%$ of the sample, after checking that this choice did not severely affect the results. As for the critical distance level, it is derived by generating synthetic populations (``Quasi-Random Populations'') of fictitious objects occupying the same volume of proper element space, generated in such a way as to have {\em separately} identical distributions of $a$, $e$ and $\sin I$ as the real asteroids present in the same volume. An HCM analysis of these fictitious populations is then performed, and this makes it possible to identify some critical values of mutual distance below which it is not possible to find groupings of at least $N_{min}$ quasi-random objects. All groups of real objects which include $N_{min}$ members at distance levels below the critical threshold are then considered as dynamical families.
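The square-root scaling rule for $N_{min}$ can be written as follows (illustrative function of our own; rounding conventions may differ from the ones actually used):

```python
import math

def scaled_nmin(nmin_old, n_old, n_new):
    # N_min scales as the square root of the ratio between the sizes
    # of the new and the old sample in the same volume of proper
    # element space
    return round(nmin_old * math.sqrt(n_new / n_old))

# e.g., quadrupling the sample doubles N_min: 5 -> 10
```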
Note also that we always looked at groupings found at discrete distance levels separated by steps of 10 m/s, starting from a minimum value of 10 m/s. As for the practical application of the method described above, one might paradoxically say that this is a rare case, in the domain of astrophysical disciplines, in which the abundance of data, and not their scarcity, starts to produce technical problems. The reason is that the inventory of asteroids for which orbital proper elements are available is today so large that difficult problems of overlapping between different groupings must be faced. In other words, a simple application of the usual HCM procedures developed in the past to deal with much smaller asteroid samples would now be highly problematic in many respects, especially because of the phenomenon of \emph{chaining}, by which obviously independent families get attached together by a thin chain of asteroids. For these reasons, when necessary (i.e., in the most populous zones of the asteroid belt) we have adopted in this paper a new, multistep procedure, allowing us to deal at each step with manageable numbers of objects, and developing appropriate procedures to link the results obtained in different steps. \subsection{Step 1: Core families} \label{sec:hcm} In order to cope with the challenge posed by the need to analyze very large samples of objects in the most populous regions of the belt, namely the low-inclination portions of zones 2, 3 and 4, as a first step we look for the cores of the most important families present in these regions. In doing this, we take into account that small asteroids, below one or a few km in size, are subject to comparatively fast drifts in semi-major axis over time as a consequence of the Yarkovsky effect.
Due to this and other effects (including low-energy collisions, see \cite{Delloroetal12}) the cloud of smallest members of a newly born family tends to expand in the proper element space and the family borders tend to ``blur'' as a function of time. For this reason, we first removed the small asteroids from our HCM analysis of the most populous regions. In particular, we removed all objects having absolute magnitudes $H$ fainter than a threshold roughly corresponding to the completeness limit in each of the low-I portions of zones 2, 3 and 4. These completeness limits, listed in Table~\ref{tab:zones}, were derived from the cumulative distributions of $H$; for the purposes of our analysis, the choice of this threshold value is not critical. Having removed the objects with $H$ fainter than the threshold value, we were left with much more manageable numbers of asteroids, see Table~\ref{tab:zones}. To these samples, we applied the classical HCM analysis procedures. As a preliminary step we considered in each zone samples of $N$ completely random synthetic objects ($N$ being the number of real objects present in the zone), in order to determine a distance value $RL$ at which these completely random populations could still produce groups of $N_{min}$ members. Following \cite{zapetal95}, in order to smooth a little the quasi-random populations to be created in each region, groups of the real population having more than $10\%$ of the total population at $RL$ were removed and substituted by an equal number of fully-random objects. The reason for this preliminary operation is to prevent a few very big and dense groups in the real population from being exceedingly dominant and affecting too strongly the distributions of proper elements in the zone, which would oblige some bins of the $a, e, \sin{I}$ distribution, from which the QRL population is built (see below), to be over-represented.
This could affect the generation of Quasi-Random objects, producing an exceedingly deep (low) Quasi-Random level ($QRL$) of distance, leading to a too severe criterion for family acceptance. In zones 3 and 4 a few groups (only one group in zone 3 and two groups in zone 4) were first removed and substituted by equal numbers of randomly generated clones. In zone 2, the $RL$ turned out to be exceedingly high: 160 m/s, corresponding to a distance level at which practically the whole real population merges into a unique group. Removing real objects at that level would have meant substituting nearly the whole real population with fully-random objects. So this substitution was \textit{not} done in zone 2. After that, we ran the classical generations of quasi-random populations. In each zone, the distributions of proper $a$, $e$, and $\sin I$ to be mimicked by the quasi-random populations were subdivided into a number of bins, as already done in previous papers. In each zone, we determined the minimum level of distance for which groupings of $N_{min}$ objects could be found. We considered as the critical Quasi-Random level $QRL$ in each zone the minimum value obtained in ten generations. The $QRL$ values adopted in each zone are also listed in Table~\ref{tab:zones}. Then we ran the HCM algorithm on the real proper elements. Families were defined as the groups having at least $N_{min}$ members at distance levels 10 m/s lower than $QRL$, or just reaching $QRL$ but with a number of members $N \ge N_{min} + 2 \sqrt{N_{min}}$. The families obtained in this first step of our procedure include only a small subset, corresponding to the largest objects, of the asteroids present in the low-I portions of zones 2, 3 and 4. For this reason, we call them ``core families'': they represent the inner ``skeletons'' of larger families present in these zones, whose additional members are then identified by the following steps of the procedure (see below Figure~\ref{fig:20_vshapea}).
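The two-branch acceptance rule for core families can be summarized in a short function; the interface (group sizes at the two distance levels passed explicitly) is our own sketch:

```python
import math

def accept_family(n_below_qrl, n_at_qrl, nmin):
    """A group is accepted as a (core) family if it has at least nmin
    members at a distance level 10 m/s below QRL (n_below_qrl), or if,
    just reaching QRL, it has N >= nmin + 2*sqrt(nmin) members."""
    return (n_below_qrl >= nmin
            or n_at_qrl >= nmin + 2.0 * math.sqrt(nmin))
```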
In the case of the high-I portions of zones 2, 3 and 4, and the entire zones 1, 5 and 6, the number of asteroids is not extremely large, and we identified families within them by applying the classical HCM procedure, without the multistep procedure. For each family, the members were simply all the objects found to belong to it at the resulting $QRL$ value of the zone. In other words, we did not adopt a case-by-case approach based on looking at the details of the varying numbers of objects found within each group at different distance levels, as was done by \cite{bojan_highi} to derive memberships among high-inclination families. \subsection{Step 2: Attaching smaller asteroids to core families} The second step of the procedure in the low-I portions of zones 2, 3 and 4 was to \textit{classify} individual asteroids which had not been used in the core classification, by attaching some of them to the established family cores. For this we used the same $QRL$ distance levels used in the identification of the family cores, but we allowed only single links for attachment, because otherwise we would get chaining, with the result of merging most families together. In other words, in step 2 we attribute to the core families the asteroids having a distance from at least one member (of the same core family) not larger than the $QRL$ critical distance. The result is that the families are extended in the absolute magnitude/size dimension, not much in proper elements space, and especially not in proper $a$ (see Figure~\ref{fig:20_vshapea}). Since this procedure has to be used many times (see Section~\ref{automatic}), it is important to use an efficient algorithm. In principle, if the distance has to be $d<QRL$, we need to compute all distances, e.g., with $M$ proper elements we should compute $M\cdot (M-1)/2$ distances, then select the ones $<QRL$.
The computational load can be reduced by the partition into regions, but with zones containing $>100\,000$ asteroids with proper elements it is anyway a time consuming procedure. This problem has a well known solution, although it may not have been used previously in asteroid family classification. Indeed, the problem is similar to that of comparing the computed ephemerides of a catalog of asteroids to the available observations from a large survey \cite[Section 11.3]{orbdet}. We select one particular dimension in the proper elements space, e.g., $\sin{I}$, and sort the catalog by the values of this proper element. Then, given the value of $\sin{I_0}$ for a given asteroid, we find its position in the sorted list, then scan the list starting from this position up and down, until the values $\sin{I_0}+QRL$ and $\sin{I_0}-QRL$, respectively, are exceeded. In this way the computational complexity for the computation of the distances $<QRL$ for $M$ asteroids is of the order of $M\log_2 M$, instead of $M^2$. The large distances are not computed, even less stored in the computer memory. \subsection{Step 3: Hierarchical clustering of intermediate background} As an input to the third step in the low-I portions of zones 2, 3, and 4, we use the ``intermediate background'' asteroids, defined as the set of all the objects not attributed to any family in steps 1 and 2. The HCM procedure was then applied to these objects, separately in each of the three zones. The numbers of objects left for step 3 of our analysis were 99399 in zone 2, 94787 in zone 3 and 57422 in zone 4. The corresponding values of $N_{min}$ were 42, 46 and 34, respectively, adopting the same criterion \cite{zapetal95} already used for core families. The same HCM procedures were applied, with only a few differences.
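The sorted-window search described in step 2 can be sketched with a binary search (Python's bisect module); the function name and the explicit conversion of the QRL into the units of the sorted coordinate are our own illustration:

```python
import bisect

def candidate_neighbors(sorted_sinI, ids, sinI0, qrl_scaled):
    """Return the ids of asteroids whose sin I lies within qrl_scaled of
    sinI0. sorted_sinI must be sorted ascending, with ids aligned;
    qrl_scaled is the QRL converted into sin I units via the metric.
    The full metric distance is then computed only for these candidates,
    giving O(M log M) behaviour overall instead of O(M^2)."""
    lo = bisect.bisect_left(sorted_sinI, sinI0 - qrl_scaled)
    hi = bisect.bisect_right(sorted_sinI, sinI0 + qrl_scaled)
    return ids[lo:hi]
```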
In computing the critical $QRL$ distance threshold, we did not apply any preliminary removal of large groupings of real objects, because {\em a priori} we were not afraid to derive in this step of our procedure rather low values of the $QRL$ distance threshold. The reason is that, dealing with very large numbers of small asteroids, we adopted quite strict criteria for family acceptance, in order to minimize the possible number of false groupings, and to reduce the chance of spurious family mergers. By generating in each of the three zones 10 synthetic quasi-random populations, and looking for the deepest groups of $N_{min}$ objects to determine the $QRL$, we obtained the following $QRL$ values: 50, 60 and 60 m/s for zones 2, 3 and 4, respectively. Following the same criteria used for core families, step 3 families would need to be found as groupings having at least $N_{min}$ members at 40 m/s in zone 2, and 50 m/s in zones 3 and 4. On the other hand, as mentioned above, in identifying step 3 families we are forced to be quite conservative. This is due to the known problems of overlapping between different families as a consequence of the intrinsic mobility of their smallest members in the proper element space, due primarily to Yarkovsky as well as to low-energy collisions. For this reason, we adopted a value of 40 m/s for step 3 family identification in all three zones. We also checked that adopting a distance level of 50 m/s in zones 3 and 4 would tend to produce an excessive effect of chaining, which would suggest merging of independent families. If collisional processes produce overlapping of members of different families in the proper element space, we can reduce this effect on our family classification only at the cost of being somewhat conservative in the identification of family memberships.
Families identified at this step are formed by the population of asteroids left after removing from the proper elements dataset the family members identified in steps 1 and 2 of our procedure. There are therefore essentially two possible cases: ``step 3'' families can either be fully independent, new families having no relation with the families identified previously, or they may be found to overlap ``step 1+2'' families, and to form ``haloes'' of smaller objects surrounding some family cores. The procedure adopted to distinguish these two cases is described in the following. \subsection{Step 4: Attaching background asteroids to all families} After adding the step 3 families to the list of core families of step 1, we repeat the procedure of attribution with the same algorithm as in step 2. The control value of distance $d$ for attribution to a given family is the same used in the HCM method as $QRL$ for the same family; thus the values are actually different for step 1 and step 3 families, even in the same zone. If a particular asteroid is found to be attributed to more than one family, it can be considered as part of an intersection. A small number of these asteroids with double classification is unavoidable, as a consequence of the statistical nature of the procedure. However, the concentration of multiple intersections between particular families requires some explanation. One possible explanation is the presence of families at the boundaries between the high and low inclination regions in zones 3 and 4, where there is no gap between them. Indeed, the classification has been done for proper $\sin{I}>0.29$ for the high inclination regions, and for $\sin{I}<0.3$ for the low inclination ones. This implies that in the overlap strip $0.29< \sin{I}<0.30$ some families are found twice, e.g., family 729. In other cases two families are found with intersections in the overlap strip.
This is obviously an artifact of our decomposition in zones and needs to be corrected by merging the intersecting families. \subsection{Step 5: Merging halo families with core families} Another case of family intersections, created by step 4, is that of the ``halo families''. This is the case when a new family appears as an extension of an already existing family identified at steps 1 and 2, with intersections near the boundary. In general, as a criterion for the merging of two families we require multiple intersections. Visual inspection of the three planar projections of the proper elements of the intersecting families is used as a check, and sometimes allows us to assess ambiguous cases. Of the 77 families generated by HCM in step 3, we have considered 34 to be haloes. Even 2 core families have been found to be haloes of other core families and have thus been merged. There are of course dubious cases, with too few intersections, as discussed in Section~\ref{sec:haloproblem}. Still, the number of asteroids belonging to intersections decreases sharply: e.g., in the two runs of single-step linkage before and after the step 5 mergers, the number of asteroids with double classification decreased from $1\,042$ to $29$. Note that the notion of ``halo'' we are using in this paper does not correspond to other definitions found in the literature, e.g., \cite{brozmorby,carruba}. The main difference is that we have on purpose set up a procedure to attach ``halo families'' formed on average by much smaller asteroids; to this end we have used the absolute magnitude information to decompose the most densely populated regions. This is consistent with the idea that the smaller asteroids are more affected by the Yarkovsky effect, which is inversely proportional to the diameter: it spreads the proper elements in $a$ by the direct secular effect, and in $e, \sin{I}$ by the interaction with resonances encountered during the drift in $a$.
This has very important consequences on the family classification precisely because we are using a very large proper elements catalog, which contains a large proportion of newly discovered smaller asteroids. On the other hand, the other 43 families resulting from step 3 have been left in our classification as independent families, consisting mostly of smaller asteroids. As discussed in Sections~\ref{sec:mediumfam} and \ref{sec:smallfam}, some of them are quite numerous and statistically very significant, some are not large and may require confirmation, but overall the step 3 families give an important contribution. \subsection{Automatic update} \label{automatic} The rapid growth of the proper elements database, due to the fast rate of asteroid discoveries and follow-up using modern observing facilities and to the efficiency of the computation of proper elements, results in any family classification becoming quickly outdated. Thus we devised a procedure for an automatic update of this family classification, to be performed periodically, e.g., two to three times per year. The procedure consists of repeating the attribution of asteroids to the existing families every time the catalog of synthetic proper elements is updated. What we repeat is actually step 4; thus the lists of core family members (found in step 1) and of members of smaller families (from step 3) are the same, and also the list of mergers (from step 5) is unchanged. Thus newly discovered asteroids, after they have been numbered and have proper elements, are automatically added to the already established families when this is appropriate. There is a step which we do not think can be automated, and that is step 5: in principle, as the list of asteroids attached to established families grows, the intersections can increase. As an example, with the last update of the proper elements catalog with $18\,149$ additional records, we have added $3\,586$ new ``step 4'' family members.
Then the number of intersections, that is of members of two families, grows from 29 to 36. In some cases the new intersections support a merger previously not implemented because it was considered dubious; others open new problems. In any case, adding a new merger is a delicate decision which we think should not be automated. As time goes by, there will be other changes, resulting from the confirmation/contradiction of some of our interpretations as a result of new data: as an example, some small families will be confirmed by the attribution of new members and some will not. At some point we may be able to conclude that some small families are flukes and decide to remove them from the classification, but this is a complicated decision based on an assessment of the statistical significance of the lack of increase. In conclusion, we can only promise we will keep an eye on the situation as the classification is updated and try to perform non-automated changes whenever we believe there is enough evidence to justify them. The purpose of both automated and non-automated upgrades of the classification is to maintain the validity of the information made public for many years, without the need for repeating the entire procedure from scratch. This is not only to save our effort, but also to avoid confusing the users with the need of continuously resetting their perception of the state of the art. \subsection{Some methodological remarks} As should be clear after the description of our adopted techniques, our approach to asteroid family identification is based on procedures which differ in some relevant details from other possible approaches previously adopted by other authors. In particular, we do not use any systematic family classification in $>3$ dimensional spaces, such as the ones based either upon proper elements and albedo, or proper elements and color indexes, or all three datasets.
We make use in our procedure, when dealing with very populous zones, of the absolute magnitude, but only as a way to decompose into steps the HCM procedure, as discussed in Subsection~\ref{sec:hcm}. Any other available data are used only \emph{a posteriori}, as verification and/or improvement, after a purely dynamical classification has been built. The reasons for this are explained in Table~\ref{tab:infocount}: fewer objects, each with a set of $4\div 6$ parameters, actually contain less information. We acknowledge that other approaches can be meaningful and give complementary results. The specific advantage of our datasets and of our methods is in the capability of handling large numbers of small asteroids. This allows us to discover, or at least to measure better, different important phenomena. The best example of this is the radical change in perception about the cratering families, which have been more recently discovered and are difficult to appreciate in any approach biased against the use of information provided by small asteroids, as is the case for classifications which require the availability of physical data (see Section~\ref{sec:craters}). Moreover, we do not make use of naked-eye taxonomy, that is of purely visual perception as the main criterion to compose families. This is not because we disagree on the fact that the human eye is an extremely powerful instrument, but because we want to provide the users with data affected as little as possible by subjective judgments. We have no objection to the use of visual appreciation of our proposed families by the users of our data, as shown by the provision of a dedicated public graphic tool. But this stage of the work needs to be kept separate, after the classification has been computed by objective methods.
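The two core operations of the procedure described above, namely single-linkage grouping of a zone at a given cutoff (steps 1 and 3) and attribution of background objects to an existing family when within the control distance of some member (steps 2 and 4), can be sketched as follows. This is a toy illustration, not the actual classification code; \texttt{dist} stands for the metric $d$, and the data structures are hypothetical.

```python
def single_linkage_groups(objs, dist, cutoff, n_min):
    """HCM-style single linkage: objects joined by chains of mutual
    distances below `cutoff` form one group; only groups with at least
    n_min members are returned (O(N^2) toy version)."""
    parent = list(range(len(objs)))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            if dist(objs[i], objs[j]) < cutoff:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(objs)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= n_min]


def attribute_background(background, families, dist):
    """Attach each background object to every family having a member
    closer than that family's own control distance (its QRL); an
    object attached to more than one family is an 'intersection'.
    `families` maps a family name to (member list, qrl)."""
    attribution = {}
    for k, obj in enumerate(background):
        hits = [name for name, (members, qrl) in families.items()
                if any(dist(obj, m) < qrl for m in members)]
        if hits:
            attribution[k] = hits
    return attribution
```

Note that, as in the procedure above, each family carries its own control distance, so step 1 and step 3 families are handled with different thresholds even inside the same zone.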
\section{Results from dynamical classification} \label{sec:result_dyn} \subsection{Large families} \label{sec:bigfam} By ``large families'' we mean those having, in the classification presented in this paper, $> 1000$ members. There are $19$ such families, and some of their properties are listed in Table~\ref{tab:bigfam}. \begin{table}[ht] \footnotesize \centering \caption{Large families with $> 1000$ members sorted by \# tot. The columns give: family, zone, QRL distance (m/s), number of family members classified in steps 1, 3, 2+4 and the total number of members, family boundaries in terms of proper $a$, $e$ and $\sin{I}$. } \label{tab:bigfam} \medskip \begin{tabular}{lcrrrrrcccccc} \hline family&zone& QRL & 1 & 3 & 2+4 & tot& $a_{min}$ & $a_{max}$ & $e_{min}$& $e_{max}$& $sI_{min}$& $sI_{max}$\\ \hline \\ 135 & 2& 70& 1141& 5001& 5286\phantom{0}& 11428& 2.288& 2.478& 0.134& 0.206& 0.032& 0.059\\ 221 & 4& 100& 3060& 310& 6966\phantom{0}& 10336& 2.950& 3.146& 0.022& 0.133& 0.148& 0.212\\ 4 & 2& 70& 1599& 925& 5341\phantom{0}& 7865& 2.256& 2.482& 0.080& 0.127& 0.100& 0.132\\ 15 & 3& 90& 2713& 0& 4132\phantom{0}& 6845& 2.521& 2.731& 0.117& 0.181& 0.203& 0.256\\ 158 & 4& 100& 930& 0& 4671\phantom{0}& 5601& 2.816& 2.985& 0.016& 0.101& 0.029& 0.047\\ 20 & 2& 70& 86& 3546& 1126\phantom{0}& 4758& 2.335& 2.474& 0.145& 0.175& 0.019& 0.033\\ 24 & 4& 100& 1208& 0& 2742\phantom{0}& 3950& 3.062& 3.240& 0.114& 0.192& 0.009& 0.048\\ 10 & 4& 100& 511& 50& 1841\phantom{0}& 2402& 3.067& 3.241& 0.100& 0.166& 0.073& 0.105\\ 5 & 3& 90& 27& 1743& 350\phantom{0}& 2120& 2.552& 2.610& 0.146& 0.236& 0.054& 0.095\\ 847 & 3& 90& 176& 175& 1682\phantom{0}& 2033& 2.713& 2.819& 0.063& 0.083& 0.056& 0.076\\ 170 & 3& 90& 785& 0& 1245\phantom{0}& 2030& 2.523& 2.673& 0.067& 0.128& 0.231& 0.269\\ 93 & 3& 90& 641& 0& 1192\phantom{0}& 1833& 2.720& 2.816& 0.115& 0.155& 0.147& 0.169\\ 145 & 3& 90& 327& 0& 1072\phantom{0}& 1399& 2.573& 2.714& 0.153& 0.181& 0.193& 0.213\\ 1726& 3& 90& 84& 159& 
1072\phantom{0}& 1315& 2.754& 2.818& 0.041& 0.053& 0.066& 0.088\\ 2076& 2& 70& 140& 528& 477\phantom{0}& 1145& 2.254& 2.323& 0.130& 0.153& 0.088& 0.106\\ 490 & 4& 100& 187& 46& 903\phantom{0}& 1136& 3.143& 3.196& 0.049& 0.079& 0.151& 0.172\\ 434 & 1& 70& 662& 0& 455\phantom{0}& 1117& 1.883& 1.988& 0.051& 0.097& 0.344& 0.378\\ 668 & 3& 90& 259& 0& 842\phantom{0}& 1101& 2.744& 2.811& 0.188& 0.204& 0.129& 0.143\\ 1040& 4& 100& 226& 0& 870\phantom{0}& 1096& 3.083& 3.174& 0.176& 0.217& 0.279& 0.298\\ \\ \hline \end{tabular} \end{table} The possibility of finding families with such large numbers of members results from our methods, explained in the previous section, of attaching to the \textit{core families} either families formed by smaller asteroids or individual asteroids which are suitably close to the core. The main effect of attaching individual asteroids is to extend the family to asteroids with higher $H$, that is smaller ones. The main effect of attaching families formed by smaller objects is to extend the families in proper $a$, which can be understood in terms of the Yarkovsky effect, which generates a drift $da/dt$ inversely proportional to the diameter $D$. The most spectacular increase in family size is that of (5) Astraea: its core family is very small, with only 27 members, but it grows to $2\,120$ members with steps 2--5; almost all the family members are small, i.e., $H>14$. Among the largest families, the ones with namesakes (135) Hertha and (4) Vesta have been increased significantly in both ways, by attaching families of smaller asteroids on the low $a$ side (the high $a$ side being eaten up, in both cases, by the $3/1$ resonance with Jupiter) and by attaching smaller individual asteroids to the core. In both cases the shape of the family, especially when projected on the proper $a, e$ plane, clearly indicates a complex structure, which would be difficult to model with a single collisional event.
There are two different reasons why these families contain so many asteroids: family 135 actually contains the outcome of the disruption of two different parents (see Section~\ref{sec:use_physical}); family 4 is the product of two or more cratering events, but on the same parent body (see Section~\ref{sec:absol_mag}). \begin{figure}[h!] \figfig{12cm}{massalia_ae}{The family of Massalia as it appears in the $a - e$ plane. Red dots indicate objects belonging to the family core. Green dots refer to objects added in steps 2 and 4 of the classification procedure, while yellow points refer to objects linked at step 3 (see text). Black dots are not recognized as nominal family members, although some of them might be, while others are background objects. Blue dots are chaotic objects, affected by the $1/2$ resonance with Mars.} \end{figure} Another spectacular growth with respect to the core family is the one shown by the family of (20) Massalia, which is also due to a cratering event. The region of proper elements space occupied by the family has been significantly expanded by the attribution of the ``halo families'', in yellow in Figure~\ref{fig:massalia_ae}, on both the low $a$ and the high $a$ side. The shape is somewhat bilobate, and this, as already reported by \cite{vokyorp}, is due to the $1/2$ resonance with Mars, which is clearly indicated by chaotic orbits (marked in blue) and also by the obvious line of diffusion along the resonance. The border of the family on the high proper $a$ side is very close to the $3/1$ resonance with Jupiter; the instability effects due to this extremely strong resonance may be responsible for the difficulties in attributing to the family a number of objects currently classified as background (marked in black). We can anyway suggest that Massalia can be a significant source of NEAs and meteorites through a chaotic path passing through the $3/1$. On the contrary, there is no resonance affecting the border on the low $a$ side.
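The size dependence of the Yarkovsky-driven spread in proper $a$, invoked above to explain why halo families of small asteroids extend well beyond the cores, can be illustrated with a back-of-the-envelope scaling. The calibration below (a drift of $2\times 10^{-4}$ au/My for a $D = 1$ km body) is an assumed order-of-magnitude value, since the true rate also depends on obliquity, density and thermal properties.

```python
DADT_1KM = 2.0e-4  # assumed |da/dt| for D = 1 km, in au/My (order of magnitude)


def yarkovsky_rate(D_km):
    """Maximal Yarkovsky drift rate |da/dt| in au/My, scaled as 1/D
    from the assumed 1 km calibration value."""
    return DADT_1KM / D_km


def spread_after(D_km, age_My):
    """Full width 2*|da/dt|*t (au) of a family in proper a for members
    of diameter D_km after age_My (prograde plus retrograde rotators)."""
    return 2.0 * yarkovsky_rate(D_km) * age_My
```

With these numbers a 1 km fragment drifts about $0.02$ au in 100 My, while a 10 km one moves only $0.002$ au, so the extent of a family in proper $a$ grows as smaller and smaller members are included.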
The families of (221) Eos and (15) Eunomia have also been increased significantly by our procedure, although the core families were already big. In both cases there is a complex structure, which makes it difficult to properly define the family boundaries as well as to model the collisional event(s). On the contrary, the family of (158) Koronis was produced by an impact leading to complete disruption of the original parent body, since this family does not show any dominant largest member. Family 158 has no halo: this is due to its being sandwiched between the $5/2$ and the $7/3$ resonances with Jupiter. The same lack of halo occurs for the family of (24) Themis: the $2/1$ resonance with Jupiter explains the lack of halo on the high $a$ side, while the $11/5$ resonance has some influence on the low $a$ boundary. There has been some discussion in the past on the family 490, but as pointed out already in \cite{chaosclock} the asteroid (490) Veritas is in a very chaotic orbit resulting in transport along a resonance (later identified as $5J-2S-2A$); thus it currently appears to be far away in proper $e$ from the center of the family, but it can still be interpreted as the parent body. A significant fraction of the family members are locked in the same resonance, thus giving the strange shape which can be seen in the lower right portion of Figure~\ref{fig:eos_aI_otherfam}. We note that in our analysis we do not identify a family associated with (8) Flora. A Flora family was found in some previous family searches, but it always exhibited a complicated splitting behavior which made the real membership appear quite uncertain \cite{zapetal95}. We find (8) Flora to belong to a step 1 grouping which is present at a distance level of 110 m/s, much higher than the adopted QRL for this zone (70 m/s). This grouping merges with both (4) and (20) at 120 m/s, which is obviously meaningless.
In a rigorous analysis, the QRL cannot be increased arbitrarily just to accept as a family groupings like this one, which do not comply with our criteria. \subsubsection{Halo problems} \label{sec:haloproblem} We are not claiming that our method of attaching ``halo'' families to larger ones can be applied automatically and/or provide an absolute truth. There are necessarily dubious cases, most of which can be handled only by suspending our judgment. Here we discuss all the cases in which we have found problems, resulting in a total of 29 asteroids belonging to family intersections. For the family of (15) Eunomia, the $3/1$ resonance with Jupiter opens a wide gap on the low $a$ side. The $8/3$ resonance appears to control the high $a$ margin, but there is a possible appendix beyond the resonance, which is classified as the family of (173) Ino: we have found four intersections 15--173. A problem would occur if we were to merge the two families, because the proper $a=2.743$ au of the large asteroid (173) appears incompatible with the dynamical evolution of a fragment from (15). The only solution could be to join to family 15 only the smaller members of 173, but we do not think that such a merge could be considered reliable at the current level of information. \begin{figure}[h!] \figfig{12cm}{eos_aI_otherfam}{The region surrounding the family of (221) Eos in the proper $a$, proper $\sin{I}$ plane. Boxes are used to mark the location of families, some of which overlap family 221 in this projection but not in others, such as $a, e$. (This figure has been generated with the new AstDyS graphics server.)} \end{figure} The family of (221) Eos appears to end on the lower $a$ side at the $7/3$ resonance with Jupiter, but the high $a$ boundary is less clear. There are two families, 507 and 31811, having a small number (six) of intersections with 221: they could be interpreted as a continuation of the family, which would then have a more complex shape.
However, for the moment we do not think there is enough information to draw this conclusion. Other families in the same region have no intersections and appear separate: the $a, \sin{I}$ projection of Figure~\ref{fig:eos_aI_otherfam} shows well the separation of core families 179, 490, 845, and small families 1189 and 8737, while 283 is seen to be well separated by using an $e, \sin{I}$ projection. The family of (135) Hertha has few intersections (a total of four) with the small families 6138 (48 members), 6769 (45 members) and 7220 (49 members). All three are unlikely to be separate collisional families, but we have not yet merged them with 135 because of too little evidence. The family of (2076) Levin appears to be limited on the low $a$ side by the $7/2$ resonance with Jupiter. It has few (three) intersections with families 298 and 883. Family 883 is at lower $a$ than the $7/2$ resonance, and could be interpreted as a halo, with lower density due to the ejection of family members by the resonance. Although this is an interesting interpretation which we could agree with, we do not feel it can be considered proven, thus we have not merged 2076--883. As for the family of (298) Baptistina, again the merge with 2076 could be correct, but the family shape would become quite complex, thus we have not implemented it for now. Note that a halo family, with lowest numbered member (4375), has been merged with 2076 because of 38 intersections, resulting in a much larger family. The family of (1040) Klumpkea has an upper bound of the proper $\sin{I}$ very close to $0.3$, that is to the boundary between the moderate inclination and the high inclination zones, to which the HCM has been applied separately. This boundary also corresponds to a sharp drop in the number density of main belt asteroids (only $5.3\%$ have proper $\sin{I}>0.3$), which is one reason to run the HCM procedure separately, because of a very different level of expected background density.
The small family of (3667) Anne-Marie has been found in a separate HCM run for high proper $\sin{I}$, but there are ten intersections with family 1040. The two families could have a common origin, but if we were to merge them the shape of the family would be bizarre, with a sharp drop in number density inside the family. This could be explained either by a stability boundary or by an observational bias boundary. However, this would need to be proven by a very detailed analysis, thus we have not implemented this merge. The family of (10) Hygiea has two intersections with family 1298, but the two are well separated in the $e,\sin{I}$ plane, thus they have not been merged. \begin{figure}[h!] \figfig{12cm}{Hoffmeister_aI}{The strange shape of the family of (1726) Hoffmeister is shown in the proper $a,\sin{I}$ projection. Some gravitational perturbations affect the low $a$ portion of the family, including the halo family 14970. The family of (110) Lydia is nearby, but there is no intersection.} \end{figure} The family of (1726) Hoffmeister has twenty intersections with the family 14970, formed by much smaller asteroids. Given such an overlap, merging the two families appears fully consistent with our procedure as defined in Section 3. However, the merged family has a strange shape, see Figure~\ref{fig:Hoffmeister_aI}, in particular with a protuberance in the $\sin{I}$ direction which would not be easy to reconcile with a standard model of collisional disruption followed by the Yarkovsky effect. Moreover, the strange shape already occurs in the core family, that is for the few largest asteroids, and thus should be explained by perturbations not depending upon the size, that is gravitational ones.
Indeed, by consulting the database of analytic proper elements, it is possible to find that (14970) has a ``secular resonance flag'' $10$, which can be traced to the effect of the secular resonance $g+s-g_6-s_6$, see also \cite[Figure 7]{propel3}; the same flag is $0$ for (1726). Indeed, the value of the ``divisor'' $g+s-g_6-s_6$ computed from the synthetic proper elements is $0.1$ arcsec/y for (14970), and $0.65$ arcsec/y for (1726). On top of that, the proper semimajor axis of (1) Ceres is $2.7671$ au, which is right in the range of $a$ of the protuberance in proper $\sin{I}$; thus it is clear that close approaches and the $1/1$ resonance with Ceres can play a significant role in destabilizing the proper elements \cite{laskar2}. We do not have a rigorous answer fully explaining the shape of the family, but we have decided to apply this merger because the number of intersections is significant and the strange shape did not appear as a result of the procedure to enlarge the core family. From these examples, we can appreciate that it is not possible to define some algorithmic criterion, like a fixed minimum number of intersections, to automate the process. All of the above cases can be considered as still open problems, to be more reliably solved by acquiring more information. \subsection{Medium families} \label{sec:mediumfam} By ``medium families'' we mean families we have found to have more than $100$ and no more than $1\,000$ members; the properties of the 41 families in this group are given in Table~\ref{tab:mediumfam}.
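The small divisor mentioned above is straightforward to evaluate once the proper frequencies $g$ and $s$ of an asteroid are known. The sketch below uses the commonly quoted values of the planetary frequencies $g_6$ and $s_6$; the asteroid frequencies in the example are made-up placeholders, not the actual values for (14970) or (1726).

```python
# Commonly quoted secular frequencies of the Saturn-dominated modes,
# in arcsec/yr (values from standard secular theory)
G6 = 28.2455
S6 = -26.3450


def resonance_divisor(g, s):
    """Divisor g + s - g6 - s6 (arcsec/yr) of the nonlinear secular
    resonance; a value close to zero flags a resonant asteroid."""
    return g + s - G6 - S6
```

For an asteroid with, say, $g = 30.0$ and $s = -27.0$ arcsec/yr (placeholder numbers), the divisor is about $1.1$ arcsec/yr, i.e., far from resonance; values of order $0.1$ arcsec/yr, as for (14970), indicate proximity to the resonance.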
\begin{table}[p] \footnotesize \centering \caption{The same as in Table~\ref{tab:bigfam} but for medium families with $100< \# \leq 1000$ members.} \label{tab:mediumfam} \medskip \begin{tabular}{lcrrrrrcccccc} \hline family&zone& QRL & 1 & 3 & 2+4 & tot& $a_{min}$ & $a_{max}$ & $e_{min}$& $e_{max}$& $sI_{min}$& $sI_{max}$\\ \hline \\ 31 & 4& 80& 968& 0& 0\phantom{00}& 968& 3.082& 3.225& 0.150& 0.231& 0.431& 0.459\\ 25 & 2& 130& 944& 0& 0\phantom{00}& 944& 2.261& 2.415& 0.160& 0.265& 0.366& 0.425\\ 480 & 3& 140& 839& 0& 0\phantom{00}& 839& 2.538& 2.721& 0.008& 0.101& 0.364& 0.385\\ 808 & 3& 90& 72& 166& 567\phantom{00}& 805& 2.705& 2.805& 0.125& 0.143& 0.080& 0.093\\ 3 & 3& 90& 45& 257& 462\phantom{00}& 764& 2.623& 2.700& 0.228& 0.244& 0.225& 0.239\\ 110 & 3& 90& 168& 0& 561\phantom{00}& 729& 2.696& 2.779& 0.026& 0.061& 0.084& 0.106\\ 3827 & 3& 90& 29& 310& 332\phantom{00}& 671& 2.705& 2.768& 0.082& 0.096& 0.080& 0.094\\ 3330 & 4& 100& 63& 0& 537\phantom{00}& 600& 3.123& 3.174& 0.184& 0.212& 0.171& 0.184\\ 1658 & 3& 90& 98& 172& 288\phantom{00}& 558& 2.546& 2.626& 0.165& 0.185& 0.123& 0.142\\ 375 & 4& 100& 229& 0& 273\phantom{00}& 502& 3.096& 3.241& 0.059& 0.130& 0.264& 0.299\\ 293 & 4& 100& 38& 0& 405\phantom{00}& 443& 2.832& 2.872& 0.119& 0.133& 0.256& 0.264\\ 10955 & 3& 40& 0& 428& 0\phantom{00}& 428& 2.671& 2.739& 0.005& 0.026& 0.100& 0.113\\ 163 & 2& 40& 0& 392& 0\phantom{00}& 392& 2.332& 2.374& 0.200& 0.218& 0.081& 0.098\\ 569 & 3& 40& 0& 389& 0\phantom{00}& 389& 2.623& 2.693& 0.169& 0.183& 0.035& 0.045\\ 1128 & 3& 40& 0& 389& 0\phantom{00}& 389& 2.754& 2.817& 0.045& 0.053& 0.008& 0.018\\ 283 & 4& 100& 49& 0& 320\phantom{00}& 369& 3.029& 3.084& 0.107& 0.124& 0.155& 0.166\\ 179 & 4& 100& 60& 0& 306\phantom{00}& 366& 2.955& 3.015& 0.053& 0.080& 0.148& 0.159\\ 5026 & 2& 40& 0& 346& 0\phantom{00}& 346& 2.368& 2.415& 0.200& 0.217& 0.082& 0.096\\ 3815 & 3& 40& 0& 283& 0\phantom{00}& 283& 2.563& 2.583& 0.138& 0.143& 0.145& 0.164\\ 1911 & 6& 60& 280& 0& 0\phantom{00}& 
280& 3.964& 3.967& 0.159& 0.222& 0.041& 0.055\\ 845 & 4& 100& 29& 0& 224\phantom{00}& 253& 2.917& 2.953& 0.029& 0.041& 0.205& 0.209\\ 194 & 3& 140& 235& 0& 17\phantom{00}& 252& 2.522& 2.691& 0.154& 0.196& 0.293& 0.315\\ 396 & 3& 40& 0& 242& 0\phantom{00}& 242& 2.731& 2.750& 0.164& 0.170& 0.057& 0.062\\ 12739 & 3& 40& 0& 240& 0\phantom{00}& 240& 2.682& 2.746& 0.047& 0.060& 0.031& 0.041\\ 778 & 4& 100& 29& 0& 200\phantom{00}& 229& 3.158& 3.191& 0.240& 0.261& 0.243& 0.253\\ 945 & 3& 140& 219& 0& 0\phantom{00}& 219& 2.599& 2.659& 0.190& 0.289& 0.506& 0.521\\ 1303 & 4& 80& 179& 0& 0\phantom{00}& 179& 3.193& 3.236& 0.106& 0.144& 0.310& 0.337\\ 752 & 2& 70& 27& 90& 41\phantom{00}& 158& 2.421& 2.484& 0.084& 0.095& 0.085& 0.092\\ 18466 & 3& 40& 0& 155& 0\phantom{00}& 155& 2.763& 2.804& 0.171& 0.182& 0.229& 0.236\\ 173 & 3& 90& 29& 0& 125\phantom{00}& 154& 2.708& 2.770& 0.159& 0.180& 0.229& 0.239\\ 606 & 3& 40& 0& 153& 0\phantom{00}& 153& 2.573& 2.594& 0.179& 0.183& 0.166& 0.168\\ 507 & 4& 100& 38& 0& 111\phantom{00}& 149& 3.124& 3.207& 0.049& 0.075& 0.181& 0.198\\ 13314 & 3& 40& 0& 146& 0\phantom{00}& 146& 2.756& 2.801& 0.170& 0.183& 0.069& 0.078\\ 302 & 2& 40& 0& 143& 0\phantom{00}& 143& 2.385& 2.418& 0.104& 0.111& 0.056& 0.060\\ 1298 & 4& 100& 69& 0& 74\phantom{00}& 143& 3.088& 3.220& 0.105& 0.123& 0.104& 0.123\\ 87 & 5& 120& 119& 0& 20\phantom{00}& 139& 3.459& 3.564& 0.046& 0.073& 0.162& 0.179\\ 883 & 2& 70& 46& 0& 86\phantom{00}& 132& 2.213& 2.259& 0.140& 0.151& 0.092& 0.102\\ 298 & 2& 70& 43& 0& 88\phantom{00}& 131& 2.261& 2.288& 0.146& 0.161& 0.100& 0.114\\ 19466 & 3& 40& 0& 125& 0\phantom{00}& 125& 2.724& 2.761& 0.007& 0.020& 0.103& 0.111\\ 1547 & 3& 40& 0& 108& 0\phantom{00}& 108& 2.641& 2.650& 0.267& 0.270& 0.211& 0.212\\ 1338 & 2& 70& 38& 0& 66\phantom{00}& 104& 2.259& 2.302& 0.119& 0.130& 0.075& 0.091\\ \\ \hline \end{tabular} \end{table} Of course the exact boundaries $100$ and $1\,000$ are chosen just for convenience: still the distinction between families 
based on these boundaries has some meaning. The medium families are such that the amount of data may not be enough for detailed studies of the family structure, including family age determination, size distribution, detection of internal structures and outliers. However, they are unlikely to be statistical flukes and represent some real phenomenon, but some caution needs to be used in describing it. In this range of sizes it is necessary to analyze each individual family to find out what can actually be done with the information they provide. As for the ones near the lower boundary for the number of members, they are expected to grow as the family classification procedure is applied automatically by the AstDyS information system to larger and larger proper elements datasets. As a result, they should over a time span of several years grow to the point that more information on the collisional and dynamical evolution process is available. Another expected outcome is that some of them can become much stronger candidates for merging with big families (e.g., family 507 cited above as a possible appendix to 221). If some others were not to grow at all, even over a timespan in which there has been a significant increase in number density (e.g., $50\%$) in the region, this would indicate a serious problem in the classification and would need to be investigated. Note that 14 of the medium families have been generated in step 3, that is they are formed from the intermediate background after removal of step 1 and 2 family members, roughly speaking from ``smaller'' asteroids. \subsubsection{Some remarkable medium families} The families of (434) Hungaria, (25) Phocaea, (31) Euphrosyne, and (480) Hansa, clustered around the $1\,000$ members boundary, are the largest high inclination families, one for each semimajor axis zone\footnote{Hungaria is included in our list of large families. Zones 5 and 6 have essentially no stable high inclination asteroids.}.
Given the lower number density for proper $\sin{I}>0.3$, the numbers of members are remarkably high, and suggest that it may be possible to obtain information on the collisional processes which can occur at higher relative velocities. These four families have been known for some time \cite{hungaria,bojan_highi,synthpro2,phocea_carruba}, but now it has become possible to investigate their structure. For family 480 the proper $e$ can be very small: this results in a difficulty in computing proper elements (especially $e$ and the proper frequency $g$) due to ``paradoxical libration''. We will try to fix our algorithm to avoid this, but we have checked that it has no influence on the family membership, because $e=0$ is not a boundary. The largest family in zone 5 is the one of (87) Sylvia, which is well defined, but with a big central gap corresponding to the $9/5$ resonance with Jupiter. This family has in (87) such a dominant largest member that it can be understood as the result of a cratering event, even if we do not have a good idea of how many fragments have been removed by the $9/5$ and other resonances\footnote{This family is interesting also because (87) Sylvia has been the first recognized triple asteroid system, formed by a large primary and two small satellites \cite{Marchis2005}.}. The largest family in zone 6, that is among the Hilda asteroids locked in the $3/2$ resonance with Jupiter, is the one of (1911) Schubart. Note that proper elements for Hildas, taking into account the resonance, have been computed by \cite{schubart}, but we have used as input to the HCM (step 1) procedure the synthetic proper elements computed without taking into account the resonance, thus averaging over the libration in the critical argument.
This is due to the need to use the largest dataset of proper elements, and is a legitimate approximation because the contribution of even the maximum libration amplitude to the metrics similar to $d$, to be used for resonant asteroids, is more than an order of magnitude smaller than the one due to eccentricity. \subsection{Small families} \label{sec:smallfam} The families we rate as ``small'' are those in the range between $30$ and $100$ members; data for 43 such families are in Table~\ref{tab:smallfam}. \begin{table}[p] \footnotesize \centering \caption{The same as in Table~\ref{tab:bigfam} but for small families with $30< \#\leq 100$ members.} \label{tab:smallfam} \medskip \begin{tabular}{lcrrrrrcccccc} \hline family&zone& QRL & 1 & 3 & 2+4 & tot& $a_{min}$ & $a_{max}$ & $e_{min}$& $e_{max}$& $sI_{min}$& $sI_{max}$\\ \hline \\ 96 & 4& 100& 38& 0& 62\phantom{00}& 100& 3.036& 3.070& 0.176& 0.189& 0.280& 0.289\\ 148 & 3& 140& 95& 0& 0\phantom{00}& 95& 2.712& 2.812& 0.116& 0.150& 0.420& 0.430\\ 410 & 3& 90& 55& 0& 38\phantom{00}& 93& 2.713& 2.761& 0.238& 0.265& 0.146& 0.160\\ 2782 & 3& 90& 21& 0& 71\phantom{00}& 92& 2.657& 2.701& 0.185& 0.197& 0.061& 0.072\\ 31811& 4& 40& 0& 89& 1\phantom{00}& 90& 3.096& 3.138& 0.060& 0.075& 0.178& 0.188\\ 3110 & 3& 40& 0& 86& 0\phantom{00}& 86& 2.554& 2.592& 0.134& 0.145& 0.049& 0.065\\ 18405& 4& 40& 0& 85& 0\phantom{00}& 85& 2.832& 2.858& 0.103& 0.110& 0.158& 0.162\\ 7744 & 3& 40& 0& 78& 0\phantom{00}& 78& 2.635& 2.670& 0.069& 0.075& 0.042& 0.049\\ 1118 & 4& 100& 47& 0& 30\phantom{00}& 77& 3.145& 3.246& 0.035& 0.059& 0.252& 0.266\\ 729 & 3& 90& 73& 0& 2\phantom{00}& 75& 2.720& 2.814& 0.110& 0.144& 0.294& 0.305\\ 17392& 3& 40& 0& 75& 0\phantom{00}& 75& 2.645& 2.679& 0.059& 0.070& 0.036& 0.042\\ 4945 & 3& 40& 0& 71& 0\phantom{00}& 71& 2.570& 2.596& 0.235& 0.244& 0.087& 0.096\\ 63 & 2& 40& 0& 70& 0\phantom{00}& 70& 2.383& 2.401& 0.118& 0.127& 0.107& 0.118\\ 16286& 4& 40& 0& 68& 0\phantom{00}& 68& 2.846& 2.879& 0.038& 0.047& 0.102& 
0.111\\ 1222 & 3& 140& 68& 0& 0\phantom{00}& 68& 2.769& 2.803& 0.068& 0.113& 0.350& 0.359\\ 11882& 3& 40& 0& 66& 0\phantom{00}& 66& 2.683& 2.708& 0.059& 0.066& 0.031& 0.040\\ 21344& 3& 40& 0& 62& 0\phantom{00}& 62& 2.709& 2.741& 0.150& 0.159& 0.046& 0.050\\ 3489 & 2& 40& 0& 57& 0\phantom{00}& 57& 2.390& 2.413& 0.090& 0.096& 0.103& 0.109\\ 6124 & 6& 60& 57& 0& 0\phantom{00}& 57& 3.966& 3.967& 0.186& 0.212& 0.146& 0.159\\ 29841& 3& 40& 0& 53& 0\phantom{00}& 53& 2.639& 2.668& 0.052& 0.059& 0.033& 0.040\\ 25315& 3& 40& 0& 53& 0\phantom{00}& 53& 2.575& 2.596& 0.243& 0.251& 0.090& 0.096\\ 3460 & 4& 100& 28& 0& 24\phantom{00}& 52& 3.159& 3.218& 0.187& 0.209& 0.016& 0.028\\ 2967 & 4& 80& 52& 0& 0\phantom{00}& 52& 3.150& 3.224& 0.092& 0.124& 0.295& 0.303\\ 8905 & 3& 40& 0& 49& 0\phantom{00}& 49& 2.599& 2.620& 0.181& 0.190& 0.084& 0.091\\ 7220 & 2& 40& 0& 48& 1\phantom{00}& 49& 2.418& 2.424& 0.183& 0.195& 0.026& 0.036\\ 3811 & 3& 40& 0& 49& 0\phantom{00}& 49& 2.547& 2.579& 0.101& 0.110& 0.185& 0.190\\ 6138 & 2& 40& 0& 46& 2\phantom{00}& 48& 2.343& 2.357& 0.204& 0.215& 0.039& 0.045\\ 32418& 3& 40& 0& 48& 0\phantom{00}& 48& 2.763& 2.795& 0.255& 0.261& 0.152& 0.156\\ 53546& 3& 40& 0& 47& 0\phantom{00}& 47& 2.709& 2.735& 0.170& 0.174& 0.247& 0.251\\ 43176& 4& 40& 0& 47& 0\phantom{00}& 47& 3.109& 3.152& 0.065& 0.074& 0.174& 0.183\\ 618 & 4& 40& 0& 46& 0\phantom{00}& 46& 3.177& 3.200& 0.056& 0.059& 0.270& 0.277\\ 28804& 3& 40& 0& 46& 0\phantom{00}& 46& 2.589& 2.601& 0.146& 0.156& 0.063& 0.070\\ 7468 & 4& 40& 0& 45& 0\phantom{00}& 45& 3.031& 3.075& 0.087& 0.091& 0.060& 0.061\\ 6769 & 2& 40& 0& 44& 1\phantom{00}& 45& 2.398& 2.431& 0.148& 0.155& 0.051& 0.056\\ 159 & 4& 40& 0& 45& 0\phantom{00}& 45& 3.091& 3.131& 0.111& 0.117& 0.084& 0.090\\ 5651 & 4& 100& 20& 0& 22\phantom{00}& 42& 3.097& 3.166& 0.112& 0.128& 0.231& 0.241\\ 21885& 4& 40& 0& 42& 0\phantom{00}& 42& 3.079& 3.112& 0.026& 0.035& 0.184& 0.188\\ 780 & 4& 80& 41& 0& 0\phantom{00}& 41& 3.085& 3.129& 0.060& 0.074& 0.310& 
0.314\\ 22241& 4& 40& 0& 40& 0\phantom{00}& 40& 3.082& 3.096& 0.126& 0.133& 0.087& 0.096\\ 2 & 3& 140& 38& 0& 0\phantom{00}& 38& 2.756& 2.791& 0.254& 0.283& 0.531& 0.550\\ 1189 & 4& 40& 0& 38& 0\phantom{00}& 38& 2.904& 2.936& 0.071& 0.075& 0.192& 0.194\\ 8737 & 4& 40& 0& 37& 0\phantom{00}& 37& 3.116& 3.141& 0.112& 0.121& 0.207& 0.211\\ 3438 & 4& 100& 20& 0& 14\phantom{00}& 34& 3.036& 3.067& 0.176& 0.186& 0.249& 0.255\\ \hline \end{tabular} \end{table} Note that 29 out of 41 of these ``small families'' have been added in step 3, and have not been absorbed as halo families. The families in this category have been selected on the basis of statistical tests, which indicate that they are unlikely to be statistical flukes. Nevertheless, most of them need some confirmation, which may not be available due to small number statistics. Thus most of these proposed families await confirmation, which may require more observational data. The possible outcomes of this process, which requires a few years, are as follows: (i) the family is confirmed by growing in membership, as a result of automatic attachment of new numbered asteroids; (ii) the family is confirmed by the use of physical observations and/or modeling; (iii) the family grows and becomes attached as a halo to a larger family; (iv) the family is found not to exist as a collisional family because it does not increase with new smaller members; (v) the family is found not to exist as a collisional family because of enough physical observations showing incompatible composition. In other words, the tables published in this paper are to be used as a reference and compared, at each enlargement of the proper elements database, with the automatically generated new table based on more asteroids, to see which family is growing\footnote{The need for fixed lists to be cited over many years as a comparison with respect to the ``current'' number of members explains why these tables need to be published in printed form.
The current family table can be downloaded from AstDyS at http://hamilton.dm.unipi.it/\~{ }astdys2/propsynth/numb.famtab}. However, there are cases in which some of these outcomes appear more likely, and we shall comment on a few of them. \subsubsection{Small but convincing families} The family of (729) Watsonia has been obtained by joining a high $I$ and a low $I$ family. We are convinced that it is robust, but it may grow unevenly in the high $I$ and in the low $I$ portions because of the drop in number density, whatever its cause. Other results on this family are given in Section~\ref{sec:use_physical}. The family of (2) Pallas has only 38 members, but it is separated, in proper $a$, by a gap from the tiny family 14916. The gap can be explained as the effect of resonances, the main one being the $3J -1S -1A$ 3-body resonance. Given the large escape velocity from Pallas, families 2 and 14916 together would be within the range of proper elements obtained from the ejection of the fragments following the cratering event. That is, the distribution of proper elements could be due to the initial spread of velocities rather than to Yarkovsky, implying that they would contain no evidence for the age of the family. However, we have not merged these two families because this argument, although we believe it is correct, arises from a model, not from the data as such. \subsubsection{Small families which could belong to the halo of large families} On top of the small families already listed in Subsection~\ref{sec:haloproblem}, which are considered as possible halos because of intersections, there are other cases in which small families are very close to large ones, and thus could become candidates for merging as the size of the proper elements catalog increases.
To identify these cases we have used for each family the ``box'' having sides corresponding to the ranges in proper $a, e, \sin{I}$ listed in Tables~\ref{tab:bigfam}--\ref{tab:tinyfam}, and we analyzed all the overlaps between them. The parameter we use as an alert of proximity between two families is the ratio of the volume of the intersection to the volume of the box for the smaller family. If this ratio is $100\%$ then the smaller family is fully included within the box of the larger one; we have found 12 such cases. We found another 17 cases with ratio $>20\%$. By removing the cases with intersections, or anyway already discussed in Sections~\ref{sec:bigfam} and \ref{sec:mediumfam}, we are left with $17$ cases to be analyzed. One of these overlapping-box cases involves two medium families, namely family 10955 (with 428 members the largest of the step 3 families) and family 19466, which has $40\%$ of its box contained in the box of 10955. The possibility of a future merger cannot be excluded. Among small/tiny families with boxes overlapping larger ones, we have found 10 cases we think could evolve into mergers with more data: 4-3489, 5-8905, 5-28804, 10-159, 10-22241, 221-31811, 221-41386, 375-2967, 480-34052, 1040-29185. In two of the above cases there is already, from the first automatic update, supporting evidence: of the 7 new intersections, one is 10-22241 and another is 375-2967. We are not claiming this is enough evidence for a merge, but it shows how the automatic upgrade of the classification works. In three cases we do not think there could be future mergers: 15-145, 15-53546, 221-21885. In another three cases the situation is too complicated to allow us to make any guess: 24-3460, 31-895, 4-63. The conclusion from this discussion is clear: a significant fraction of the small families of Table~\ref{tab:smallfam}, and a few from Table~\ref{tab:tinyfam}, could be in the future included in the halo of larger families.
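The box-overlap alert described above amounts to a simple intersection computation in the three-dimensional $(a, e, \sin{I})$ space. The following is a minimal sketch; the box values used in the demonstration are illustrative, not data for actual families.

```python
# Sketch of the box-overlap alert: the ratio of the volume of the
# intersection of two family "boxes" in (a, e, sin I) space to the
# volume of the smaller family's box.  The boxes below are
# illustrative values, not actual family data.

def box_volume(box):
    """box = ((a_min, a_max), (e_min, e_max), (sI_min, sI_max))."""
    v = 1.0
    for lo, hi in box:
        v *= hi - lo
    return v

def overlap_ratio(box_big, box_small):
    """Fraction of the smaller box's volume inside the intersection."""
    inter = 1.0
    for (lo1, hi1), (lo2, hi2) in zip(box_big, box_small):
        side = min(hi1, hi2) - max(lo1, lo2)
        if side <= 0.0:          # boxes are disjoint along this axis
            return 0.0
        inter *= side
    return inter / box_volume(box_small)

big    = ((3.03, 3.25), (0.03, 0.06), (0.25, 0.27))
inside = ((3.10, 3.15), (0.04, 0.05), (0.255, 0.260))  # fully contained
apart  = ((2.50, 2.60), (0.04, 0.05), (0.255, 0.260))  # disjoint in a

print(overlap_ratio(big, inside))  # 1.0: a candidate for merging
print(overlap_ratio(big, apart))   # 0.0
```

A ratio of $1.0$ corresponds to the "fully included" case discussed above, while the $>20\%$ threshold flags the remaining candidates.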
Others could be confirmed as independent families, and some could have to be dismissed. \subsection{Tiny families} \label{sec:tinyfam} The ``tiny families'' are the ones with $<30$ members; of course their number is critically dependent upon the caution with which the small clusters have been accepted as proposed families. In Table~\ref{tab:tinyfam} we are presenting a set of $25$ such families. \begin{table}[h] \footnotesize \centering \caption{The same as in Table~\ref{tab:bigfam} but for tiny families with $< 30$ members.} \label{tab:tinyfam} \medskip \begin{tabular}{lcrrrrrcccccc} \hline family&zone& QRL & 1 & 3 & 2+4 & tot& $a_{min}$ & $a_{max}$ & $e_{min}$& $e_{max}$& $sI_{min}$& $sI_{max}$\\ \hline \\ 3667 & 4& 80& 25& 0& 3\phantom{00,}& 28& 3.087& 3.125& 0.184& 0.197& 0.294& 0.301\\ 895 & 4& 80& 25& 0& 0\phantom{00,}& 25& 3.202& 3.225& 0.169& 0.183& 0.438& 0.445\\ 909 & 5& 120& 23& 0& 1\phantom{00,}& 24& 3.524& 3.568& 0.043& 0.058& 0.306& 0.309\\ 29185 & 4& 80& 23& 0& 0\phantom{00,}& 23& 3.087& 3.116& 0.196& 0.209& 0.295& 0.304\\ 4203 & 3& 140& 22& 0& 0\phantom{00,}& 22& 2.590& 2.648& 0.124& 0.135& 0.473& 0.486\\ 34052 & 3& 140& 21& 0& 0\phantom{00,}& 21& 2.641& 2.687& 0.073& 0.087& 0.368& 0.377\\ 5931 & 4& 80& 19& 0& 0\phantom{00,}& 19& 3.174& 3.215& 0.160& 0.172& 0.302& 0.313\\ 22805 & 4& 80& 17& 0& 0\phantom{00,}& 17& 3.136& 3.159& 0.165& 0.175& 0.301& 0.308\\ 1101 & 4& 80& 17& 0& 0\phantom{00,}& 17& 3.229& 3.251& 0.030& 0.037& 0.363& 0.375\\ 10369 & 3& 140& 17& 0& 0\phantom{00,}& 17& 2.551& 2.609& 0.105& 0.118& 0.470& 0.482\\ 3025 & 4& 80& 16& 0& 0\phantom{00,}& 16& 3.192& 3.221& 0.059& 0.066& 0.368& 0.378\\ 14916 & 3& 140& 16& 0& 0\phantom{00,}& 16& 2.710& 2.761& 0.270& 0.282& 0.537& 0.542\\ 3561 & 6& 60& 15& 0& 0\phantom{00,}& 15& 3.962& 3.962& 0.127& 0.133& 0.149& 0.156\\ 45637 & 5& 120& 14& 0& 1\phantom{00,}& 15& 3.344& 3.369& 0.103& 0.123& 0.142& 0.151\\ 260 & 5& 120& 11& 0& 4\phantom{00,}& 15& 3.410& 3.464& 0.081& 0.088& 0.100& 0.108\\ 58892 & 4& 
80& 14& 0& 0\phantom{00,}& 14& 3.121& 3.154& 0.153& 0.162& 0.300& 0.308\\ 6355 & 4& 80& 13& 0& 0\phantom{00,}& 13& 3.188& 3.217& 0.088& 0.097& 0.374& 0.378\\ 40134 & 3& 140& 13& 0& 0\phantom{00,}& 13& 2.715& 2.744& 0.223& 0.235& 0.429& 0.438\\ 116763& 3& 140& 13& 0& 0\phantom{00,}& 13& 2.621& 2.652& 0.236& 0.246& 0.463& 0.468\\ 10654 & 4& 80& 13& 0& 0\phantom{00,}& 13& 3.207& 3.244& 0.051& 0.056& 0.368& 0.374\\ 10000 & 3& 140& 13& 0& 0\phantom{00,}& 13& 2.562& 2.623& 0.260& 0.273& 0.316& 0.325\\ 7605 & 4& 80& 12& 0& 0\phantom{00,}& 12& 3.144& 3.153& 0.065& 0.073& 0.447& 0.453\\ 69559 & 4& 80& 12& 0& 0\phantom{00,}& 12& 3.202& 3.219& 0.196& 0.201& 0.299& 0.305\\ 20494 & 3& 140& 12& 0& 0\phantom{00,}& 12& 2.653& 2.690& 0.119& 0.132& 0.470& 0.480\\ 23255 & 3& 140& 10& 0& 0\phantom{00,}& 10& 2.655& 2.688& 0.095& 0.113& 0.460& 0.469\\ \\ \hline \end{tabular} \end{table} Given the cautionary statements made above about the ``small families'', what is the value of the ``tiny'' ones? To understand this, it is useful to check the zones/regions where these have been found: 3 tiny families belong to zone 5, 1 to zone 6, 12 to zone 4 high inclination, 9 to zone 3 high inclination. Indeed, groupings with such small numbers can be statistically significant only in the regions where the number density is very low. These families satisfy the requirements to be considered statistically reliable according to the standard HCM procedure adopted in the above zones. It should be noted that, due to the low total number of objects present in these regions, the adopted minimum number $N_{min}$ of required members to form a family turns out to be fairly low, and its choice can be more important than in more densely populated regions. In the case of high-$I$ asteroids, \cite{bojan_highi} included in their analysis a large number of unnumbered objects which we are not considering in the present paper.
The nominal application of the HCM procedure leads us to accept the groups listed in Table~\ref{tab:tinyfam} as families, but it is clear that their reliability will have to be tested in the future, when larger numbers of objects become available in these zones. Thus, each one of these groups is only a proposed family, in need of confirmation. There is an important difference from most of the small families listed in Table~\ref{tab:smallfam}: there are two reasons why the number densities are much lower in these regions, one being the lower number density of actual asteroids, for the same diameters; the other being the presence of strong observational biases which favor discovering only objects of comparatively large size. In the case of the high inclination asteroids the observational bias is due to most asteroid surveys looking more often near the ecliptic, because more asteroids can be found there with the same observational effort. For the more distant asteroids the apparent magnitude for the same diameter is fainter because of both larger distance and lower average albedo. If a family is small because of observational bias, it grows very slowly in membership unless the observational bias is removed, which means more telescope time allocated for asteroid search/recovery at high ecliptic latitude and more powerful instruments devoted to finding more distant/dark asteroids. Unfortunately, there is no way to be sure that these resources will be available, thus some ``tiny'' families may remain tiny for quite some time. In conclusion, the list of Table~\ref{tab:tinyfam} is like a bet to be adjudicated in a comparatively distant future. Still, we can already confirm that many of these tiny families are slowly increasing in numbers. As already mentioned, while this paper was being completed, the proper elements catalog was updated and the automatic step 4 was completed, resulting in a new classification with a $4\%$ increase in family membership.
In this upgrade $14$ out of $25$ tiny families have increased their membership, although in most cases by just $1$--$2$ members. \section{Use of absolute magnitude data} \label{sec:absol_mag} For most asteroids a direct measurement of the size is not available, whereas all the asteroids with an accurate orbit have some estimated value of absolute magnitude. To convert absolute magnitude data into diameter $D$, we need the albedo, thus $D$ can be accurately estimated only for the objects for which physical observations, such as WISE data or polarimetric data, are available. However, it is known that families are generally found to be quite homogeneous in terms of albedo and spectral reflectance properties \cite{CellinoAstIII}. Therefore, by assuming an average albedo value to be assigned to all the members of a given family, we can derive the size of each object from its value of absolute magnitude. This requires that a family has one or more members with a known albedo, and that we have reasons to exclude that they are interlopers. There are three main applications of the statistical information on the diameter $D$ of family members: estimation of the total volume of the family, of the age of a collisional family, and of the size distribution. \subsection{The volume of the families} \label{sec:volume} In the case of a total fragmentation, the total volume of a collisional family can be used to give a lower bound for the size of the parent body. For cratering, the volume computed without considering the parent body can be used to constrain from below the size of the corresponding crater. In cases of dubious origin, the total volume can be used to discard some possible sources if they are too small. As an example let us choose the very large family of (4) Vesta.
The albedo of Vesta has been measured as $0.423$ \cite{IRAS}, but more recently an albedo of $0.37$ has been reported by \cite{shevted06}, while a value around $0.30$ is found by the most recent polarimetric investigations \cite{ELSXI}\footnote{By using WISE albedos, it can be shown that the most frequent albedos for members of family 4 are in the range spanning these measurements, see Figure~\ref{fig:vesta_back_albedo_hist}}. Before computing the volume of fragments we need to remove the biggest interlopers, because they could seriously contaminate the result: asteroids (556) and (1145) are found to be interlopers because they are too big for their position with respect to the parent body, as discussed in the next subsection. If we assume albedo $0.423$ for all family 4 members, we can compute the volume of all the family members as $32\,500$ km$^3$. The volume would be $54\,500$ km$^3$ with albedo $0.3$, thus the volume of the known family can be estimated to be in this range. On Vesta there are two very large craters, Rheasilvia and Veneneia, with volumes of $>1$ million km$^3$. Thus it is possible to find some source crater. Another example of cratering on a large asteroid is the family of (10) Hygiea. After removing Hygiea and interlopers with $D>40$ km, which should be too large for being ejected from a crater, and assuming a common albedo equal to the IRAS measure for (10), namely $0.072$, we get a total volume of the family of $550\,000$ km$^3$. This implies that on the surface of (10) Hygiea there should be a crater with a volume at least as large as for Rheasilvia. Still, the known family corresponds to only $1.3\%$ of the volume of (10) Hygiea. \subsection{Family Ages} \label{sec:ages} The computation of family ages is a high priority goal. As a matter of principle it can be achieved by using V-shape plots such as Figure~\ref{fig:20_vshapea}, for the families old enough to have the Yarkovsky effect dominating the spread of proper $a$.
The basic procedure is as follows: as in the previous section, by assuming a common geometric albedo $p_v$, from the absolute magnitudes $H$ we can compute\footnote{$D=1\,329\times 10^{-H/5}\, \times 1/\sqrt{p_v}$, with $D$ in km.} the diameters $D$. The Yarkovsky secular effect in proper $a$ is $da/dt=c\;\cos(\phi)/D$, with $\phi$ the obliquity (the angle between the spin axis and the normal to the orbit plane), and $c$ a calibration depending upon density, thermal conductivity and spin rate. As a matter of fact $c$ is weakly dependent upon $D$, but this cannot be handled by a general formula since the dependence of $c$ on thermal conductivity is highly nonlinear \cite[Figure 1]{vok2000}. Thus, as shown in \cite[Appendix A]{chesley_bennu}, the power law expressing the dependence of $c$ upon $D$ changes from case to case. For cases in which we have poor information on the thermal properties (almost all cases) we are forced to use just the $1/D$ dependence. \begin{figure}[h!] \figfig{12cm}{20_vshapea}{V-shape of the family 20 in the proper $a$ vs. $1/D$ plane. The black lines are the best fit on the two sides; the black circles indicate the outliers.} \end{figure} Then, in a plot showing proper $a$ vs. $1/D$, asteroids formed by the same collisional event and having the same $\phi$ fall on straight lines. We can try to fit to the data two straight lines representing the prograde spin and retrograde spin states ($\phi=0^\circ$ and $\phi=180^\circ$). The slopes of these lines contain information on the family age. Note that this procedure can give accurate results only if the family members cover a sufficient interval of $D$, which is now true for a large set of dynamical families thanks to the inclusion of many smaller objects (represented by green and yellow points in all the figures).
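The two ingredients of the procedure just described, the diameter from absolute magnitude and the obliquity-dependent Yarkovsky drift, can be sketched as follows. The calibration value $c$ used in the demonstration is a placeholder, not a measured quantity.

```python
import math

# Diameter from absolute magnitude H and assumed geometric albedo p_v
# (the footnote formula), and the drift da/dt = c * cos(phi) / D.
# The default value of c below is a placeholder, not a real calibration.

def diameter_km(H, p_v):
    """D = 1329 * 10**(-H/5) / sqrt(p_v), with D in km."""
    return 1329.0 * 10.0 ** (-H / 5.0) / math.sqrt(p_v)

def yarkovsky_drift(D_km, phi_deg, c=1.0e-4):
    """Secular da/dt for obliquity phi; the calibration c actually
    depends on density, thermal conductivity and spin rate."""
    return c * math.cos(math.radians(phi_deg)) / D_km

D = diameter_km(H=15.0, p_v=0.21)   # a ~3 km body at a Massalia-like albedo
print(D)
# Prograde (phi = 0) and retrograde (phi = 180) spins drift in opposite
# directions, producing the two sides of the V-shape:
print(yarkovsky_drift(D, 0.0) > 0 > yarkovsky_drift(D, 180.0))  # True
```

The $1/D$ factor is what makes the smaller members spread faster in proper $a$, so that in the $(1/D, a)$ plane a single collisional event opens into a V.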
As an example, in Figure~\ref{fig:20_vshapea} we show two such lines for the Massalia family on both the low proper $a$ and the high proper $a$ side, representing the above-mentioned retrograde and prograde spin states, respectively. This is what we call \emph{V-shape}, which has to be analyzed to obtain an age estimate. A method of age computation based on the V-shape has already been used to compute the age of the Hungaria family \cite[Figure 20]{hungaria}. In principle, a similar method could be applied to all the large (and several medium) families. However, a procedure capable of handling a number of cases with different properties needs to be more robust, taking into account the following problems. \begin{figure}[h!] \figfig{12cm}{847_vshapea}{V-shape of the family 847 in the proper $a$ vs. $1/D$ plane. The most striking feature is the presence of a much denser substructure, which exhibits its own V-shape, shown in Figure~\ref{fig:3395_vshapea}.} \end{figure} \begin{itemize} \item The method assumes all the family members have the same age, that is, it assumes the coincidence of the dynamical family with the collisional family. If this is not the case, the procedure is much more complicated: see Figure~\ref{fig:847_vshapea}, which shows two superimposed V-shapes (the outer one is marked with a fit for the boundary) for the Agnia family, indicating at least two collisional events with different ages. Thus a careful examination of the family shape, not just in the $a,1/D$ plot but also in all projections, is required to first decide on the minimum number of collisional events generating each dynamical family. If substructures are found, with a shape such that interpretation as the outcome of a separate collisional event is possible, their ages may in some cases be computed.
\item To compute the age we use the inverse slope $\Delta a(D)/(1/D)$, with $D$ in km, of one of the two lines forming the V-shape, which is the same as the value of $\Delta a$ for a hypothetical asteroid with $D=1$ km along the same line. This is divided by the value of the secular drift $da/dt$ for the same hypothetical asteroid, giving the estimated age. However, the number of main belt asteroids for which we have a measured value of secular $da/dt$ is zero. There are $>20$ Near Earth Asteroids for which the Yarkovsky drift has been reliably measured (with $S/N>3$) from the orbit determination \cite[Table 2]{yarko_all}. It is indeed possible to estimate the calibration $c$, and thus the expected value of $da/dt$ for an asteroid with a given $D, a, e, \phi$, by scaling the result for another asteroid, in practice a Near Earth one, with different $D, a, e, \phi$. However, deriving a suitable error model for this scaling is very complicated; see e.g. \cite{farnocchia_apophis} for a full-fledged Monte Carlo computation. \item The data points $(1/D, a)$ in the V-shape are not to be taken as exact measurements. The proper $a$ coordinate is quite accurate, with the chaotic diffusion due to asteroid-asteroid interaction below $0.001$ au \cite{laskar2}, and anyway below the Yarkovsky secular contribution for $D<19$ km; the error in the proper elements computation (with the synthetic method) gives an even smaller contribution. On the contrary, the value of $D$ is quite inaccurate. Thus a point in the $1/D, a$ plane has to be considered as a measurement with a significant error, especially in $1/D$, and the V-shape needs to be determined by a least squares fit, allowing also for outlier rejection. \item Most families are bounded, on the low-$a$ side and/or on the high-$a$ side, by resonances strong enough to eject the family members reaching the resonant value of $a$ by Yarkovsky into unstable orbits (at least most of them).
Thus the V-shape is cut by vertical lines at the resonant values of proper $a$. The computation of $\Delta a$ must therefore be done at values of $1/D$ below the intersection of one of the slanted sides of the V and the vertical line at the resonant value of $a$. For several families this significantly restricts the range of $1/D$ for which the V-shape can be measured. \item The dynamical families always contain interlopers, which should be few in number, but not necessarily representing a small fraction of the mass (the size distribution of the background asteroids is less steep). The removal of large interlopers is necessary in order not to spoil the computation of the slopes, and also of the centers of mass. \end{itemize} As a consequence of the above arguments, we have decided to develop a new method, which is more objective than the previous one we have used, because the slope of the two sides of the V-shape is computed in a fully automated way as a least squares fit, with equally automatic outlier rejection. The following points explain the main features of this new method. \begin{enumerate} \item For each family we set up the minimum and maximum values of the proper semimajor axis and of the diameter for the fit. We may use different values for the inner and the outer side of the V, taking into account the possibility that they measure different ages. Note this is the only ``manual'' intervention. \item We divide the $1/D$-axis into bins, which are created in such a way as to contain about the same number of objects. Hence: \begin{itemize} \item the bins corresponding to small values of $1/D$ are bigger than the ones corresponding to large values of $1/D$; \item the inner side and the outer side of the family may have different bins. \end{itemize} \item We implement a linear regression for both sides. The method is iterative. For each iteration we calculate the residuals, the outliers and the kurtosis of the distribution of residuals.
A point is an outlier if its residual is greater than $3 \sigma$. Then we remove the outliers and repeat the linear regression. Note that the outliers for the fit can be interlopers in the family, but also family members with low accuracy diameters. \item We say that the method converges if the kurtosis of the residuals is $3 \pm 0.3$ or if there exists an iteration without additional outliers. \item The two straight lines on the sides of the V-shape are computed independently, thus they do not need to cross in a point on the horizontal axis. We compute the \emph{V-base} as the difference in the $a$ coordinate of the intersection with the $a$ axis of the outer side and of the inner side. This quantity has an interpretation we shall discuss in Section~\ref{sec:collmodel}. \end{enumerate} \begin{table}[h!] \footnotesize \centering \caption{Results of the fit for the low $a$ (IN) and high $a$ (OUT) sides for each considered family: number of iterations, minimum diameter $D$ (in km) used in the fit, number of bins, number of outliers, value of the kurtosis and standard deviation of the residuals in $1/D$, and the value of the inverse of the slope (in au).} \label{tab:tableslopes} \medskip \begin{tabular}{lrrrrcccc} \hline family & &\# iter. & min. 
D fit & \# bins & \# outliers & kurtosis & RMS(resid) & 1/slope\\ \hline \\ 20 & IN& 7& 1.0& 20& 69& 2.98& 0.0371& -0.063\\ 20 & OUT& 8& 1.1& 12& 48& 3.12& 0.0118& \phantom{-}0.058\\ \\ 4 & IN& 2& 2.0& 23& 9& 3.27& 0.0364& -0.267\\ 4 & OUT& 2& 3.6& 6& 2& 1.49& 0.0060& \phantom{-}0.537\\ \\ 15 & IN& 2& 5.0& 11& 1& 3.22& 0.0021& -0.659\\ 15 & OUT& 2& 6.7& 16& 1& 2.84& 0.0027& \phantom{-}0.502\\ \\ 158 & IN& 2& 10.0& 9& 2& 3.03& 0.0013& -0.442\\ 158 & OUT& 2& 5.9& 10& 0& 1.99& 0.0024& \phantom{-}0.606\\ \\ 847 & IN& 2& 6.7& 5& 0& 1.25& 0.0004& -0.428\\ 847 & OUT& 2& 9.1& 7& 0& 1.65& 0.0003& \phantom{-}0.431\\ \\ 3395& IN& 10& 1.8& 7& 43& 2.92& 0.0041& -0.045\\ 3395& OUT& 9& 2.2& 8& 49& 3.20& 0.0053& \phantom{-}0.045\\ \\ \hline \end{tabular} \end{table} \begin{table}[h!] \footnotesize \centering \caption{V-base and center of the V-base.} \label{tab:tableVbase} \medskip \begin{tabular}{lcc} \hline family & V-base & center of V-base\\ \hline \\ 20 & \phantom{-}0.004& 2.4117\\ 4 & -0.025& \\ 15 & \phantom{-}0.009& 2.6389\\ 158 & \phantom{-}0.021& 2.8774\\ 847 & -0.016& 2.7734\\ 3395& -0.008& 2.7900\\ \\ \hline \end{tabular} \end{table} Let us emphasize that our main goal in this paper is to introduce methods which are objective and take as thoroughly as possible into account all the problems described above. Also we pay a special attention to the computation of quantities like $\Delta a$ and the slopes, used to estimate the family age, but the determination of the ages themselves, involving the complicated calibration of the Yarkovsky effect, is performed only as an example, to demonstrate what we believe should be a rigorous procedure. \subsubsection{Massalia} One of the best examples of dynamical family for which the computation of a single age for a crater is possible is the one of (20) Massalia, see Figure~\ref{fig:massalia_ae}. 
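Before turning to the individual families, the iterative fit described in the enumerated steps above can be illustrated with a minimal sketch on synthetic data: each side of the V is a noisy straight line in the $(1/D, a)$ plane, with a few injected outliers, and with inverse slopes loosely modeled on those of family 20 in Table~\ref{tab:tableslopes}. The equal-count binning is omitted for brevity, and the numbers are not the actual catalog data.

```python
import random
import statistics

# Minimal sketch of the iterative V-shape side fit: least squares,
# 3-sigma outlier rejection, convergence when the kurtosis of the
# residuals is within 3 +/- 0.3 or no new outliers are found.
# Synthetic data only; equal-count binning is omitted.

def fit_line(xs, ys):
    """Least-squares fit a = m * (1/D) + q."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return m, my - m * mx

def kurtosis(res):
    mu = statistics.fmean(res)
    s2 = statistics.fmean([(r - mu) ** 2 for r in res])
    return statistics.fmean([(r - mu) ** 4 for r in res]) / s2 ** 2

def side_fit(points, max_iter=20):
    """Iterate the fit, rejecting residuals beyond 3 sigma."""
    for _ in range(max_iter):
        xs, ys = zip(*points)
        m, q = fit_line(xs, ys)
        res = [y - (m * x + q) for x, y in points]
        sigma = statistics.pstdev(res)
        kept = [p for p, r in zip(points, res) if abs(r) <= 3 * sigma]
        if abs(kurtosis(res) - 3.0) <= 0.3 or len(kept) == len(points):
            return m, q
        points = kept
    return m, q

random.seed(4)
xs = [0.1 + 0.015 * i for i in range(60)]                 # 1/D in 1/km
inner = [(x, 2.400 - 0.063 * x + random.gauss(0, 0.001)) for x in xs]
outer = [(x, 2.404 + 0.058 * x + random.gauss(0, 0.001)) for x in xs]
inner += [(0.5, 2.45), (0.8, 2.32)]                       # injected outliers

m_in, q_in = side_fit(inner)
m_out, q_out = side_fit(outer)
print(m_in < 0 < m_out)        # True: the two sides of the V
print(q_out - q_in)            # V-base: difference of the a-axis intercepts
```

The intercepts at $1/D=0$ of the two fitted lines give the V-base defined in step 5; the injected outliers play the role of interlopers or low-accuracy diameters and are removed by the $3\sigma$ rejection.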
Massalia has an albedo $0.21$ measured by IRAS\footnote{Massalia does not have a WISE albedo, but it is possible to use WISE data to confirm that $65\%$ of the members of family 20 have albedo between $0.17$ and $0.32$.}. The two slopes of the inner and outer side of the V-shape (Figure~\ref{fig:20_vshapea}) have of course opposite signs, with absolute values differing by $9\%$, see Table~\ref{tab:tableslopes}. Taking into account that there is some dependence of the calibration on $a$, this indicates an accurate determination of the slope. This is due to the fact that the fit can be pushed down to comparatively small diameters, around $1$ km, because the family is not cut by a resonance on the low $a$ side, and is affected by the $3/1$ resonance with Jupiter on the high $a$ side, but only for $D<1$ km. The V-base is small and positive (Table~\ref{tab:tableVbase}). The internal structure of family 20 is further discussed in Section~\ref{sec:massalia}. \subsubsection{Vesta} \begin{figure}[h!] \figfig{10cm}{4_vshapea}{V-shape of the family 4. The lines identified by the fit have different slopes on the two sides; for the explanation see Section~\ref{sec:vesta}.} \end{figure} For the V-shape and slope fit (Figure~\ref{fig:4_vshapea}) we have used as common albedo $0.423$. The Vesta family has a complex structure, which is discussed in Section~\ref{sec:vesta}: thus the presence of two different slopes on the two sides (Table~\ref{tab:tableslopes}) indicates that we are measuring the ages of two different collisional events, the one forming the high $a$ boundary of the family being older. In theory two additional slopes exist, for the outer boundary of the inner subfamily and for the inner boundary of the outer subfamily, but they cannot be measured because of the significant overlap of the two substructures. Thus the negative V-base appearing in the figure has no meaning.
The family is cut sharply by the $3/1$ resonance with Jupiter on the high $a$ side and somewhat less affected by the $7/2$ on the low $a$ side. As a result the outer side slope fit is somewhat less robust, because the range of sizes is not as large. The fact that the slope is lower (the age is older) on the high $a$ side is a reliable conclusion, but the ratio is not estimated accurately. The calibration constant is not well known, but should be similar for the two subfamilies with the same composition (with only a small difference due to the relative difference in $a$), thus the ratio of the ages is $\sim 2/1$. For the computation of the barycenter (Table~\ref{tab:tablebar_crat}) it is important to remove the interlopers (556) and (1145), which clearly stick out from the V-shape on the outer side, although (1145) is not rejected automatically by the fit\footnote{By using the smaller WISE albedos, these two are even larger than shown in Figure~\ref{fig:4_vshapea}.}. \subsubsection{Eunomia} \begin{figure}[h!] \figfig{10cm}{15_vshapea}{V-shape of the family 15 exhibiting two rather different slopes; for the explanation see Section~\ref{sec:eunomia}.} \end{figure} For the V-shape plot of Figure~\ref{fig:15_vshapea} we have used as common albedo the IRAS value for Eunomia which is $0.209$. The inner and outer slopes of the V-shape for the Eunomia family are different by $31\%$. The slope on the outer side is affected by the $8/3$ resonance with Jupiter, forcing us to cut the fit already at $D=6.7$ km, thus the value may be somewhat less accurate. On the contrary the inner slope appears well defined by using $D>5$ km, although the $3/1$ resonance with Jupiter is eating up the family at lower diameters. The V-base is small and positive (Table~\ref{tab:tableVbase}). The possibility of an internal structure, affecting the interpretation of the slopes and ages, is discussed in Section~\ref{sec:eunomia}. 
For the computation of the barycenter (Table~\ref{tab:tablebar_crat}) it is important to remove the interlopers (85) and (258) which stick out from the V-shape, on the right and on the left, respectively, with the largest diameters after (15), see Figure~\ref{fig:15_vshapea}\footnote{Moreover, the albedo of (85) Io is well known (both from IRAS and from WISE) to be incompatible with (15) Eunomia as parent body.}. \begin{table}[h!] \footnotesize \centering \caption{Cratering families: family, proper $a$, $e$ and $\sin{I}$ of the barycenter, position of the barycenter with respect to the parent body, escape velocity from the parent body. The barycenter is computed by removing the parent body, the interlopers and the outliers.} \label{tab:tablebar_crat} \medskip \begin{tabular}{lccccccc} \hline family & a$_b$ & e$_b$ & $\sin{I}_b$ & a$_b-$a$_0$ & e$_b-$e$_0$ & $\sin{I}_b$ & $v_e$ (m$/$s)\\ & & & & & & $-\sin{I}_0$ &\\ \hline \\ 20 & 2.4061& 0.1622& 0.0252& -0.0025& \phantom{-}0.0004& \phantom{-}0.0004& 102\\ 4 & 2.3637& 0.1000& 0.1153& \phantom{-}0.0022& \phantom{-}0.0012& \phantom{-}0.0040& 363\\ 4 (N$\ne$1145) & 2.3621& 0.0993& 0.1153& \phantom{-}0.0006& \phantom{-}0.0005& \phantom{-}0.0040& \\ 4 low $e$ & 2.3435& 0.0936& 0.1169& -0.0180& -0.0052& \phantom{-}0.0056& \\ 4 high $e$ & 2.3951& 0.1094& 0.1124& \phantom{-}0.0336& \phantom{-}0.0106& \phantom{-}0.0011& \\ 15 & 2.6346& 0.1528& 0.2276& -0.0091& \phantom{-}0.0042& \phantom{-}0.0010& 176\\ 15 (N$\ne$85, 258)& 2.6286& 0.1495& 0.2282& -0.0168& \phantom{-}0.0010& \phantom{-}0.0016& \\ 15 low $a$ & 2.6090& 0.1494& 0.2294& -0.0347& \phantom{-}0.0008& \phantom{-}0.0028& \\ 15 high $a$ & 2.6808& 0.1501& 0.2246& \phantom{-}0.0371& \phantom{-}0.0015& -0.0021& \\ \\ \hline \end{tabular} \end{table} \subsubsection{Koronis} \begin{figure}[h!] \figfig{10cm}{158_vshapea}{V-shape of the family 158. 
The Karin subfamily is clearly visible at about $a=2.865$ au.} \end{figure} The Koronis family has a V-shape sharply cut by the $5/2$ resonance with Jupiter on the low $a$ side and by the $7/3$ on the high $a$ side (Figure~\ref{fig:158_vshapea}). This results in a short range of diameters usable to compute the slope, especially on the low $a$ side, where we have been forced to cut the fit at $D=10$ km. This could be the consequence of an already well known phenomenon, by which Yarkovsky-driven leakage from family 158 into the $5/2$ resonance occurs even for comparatively large objects \cite{vysh1,vysh2}. This implies a less accurate slope estimate on the inner side. It could also explain the $37\%$ discrepancy between the two slopes, since we have no evidence for substructures which could affect the V-shape\footnote{There is a well known substructure, the Karin subfamily, which is perfectly visible in Figure~\ref{fig:158_vshapea}, but does not affect the two sides of the V-shape.}. Anyway, we recommend using the outer slope for the age estimation. \subsubsection{Agnia} The Agnia family, as shown by Figure~\ref{fig:847_vshapea}, has a prominent subfamily forming a V-shape inside the wider V-shape of the entire family. We call this structure the subfamily of (3395) Jitka. \begin{figure}[h!] \figfig{10cm}{3395_vshapea}{V-shape of the subfamily of (3395) Jitka. Most of the outliers, marked by a black circle, are members of the larger Agnia family, but not of the subfamily.} \end{figure} For the entire family, almost identical values of the slopes on the two sides (Table~\ref{tab:tableslopes}) appear to correspond to the much older age of a wider and less dense family. The two slopes of the Jitka subfamily are also nearly identical to each other, but with inverse slopes lower by a factor $>9$. The V-base is negative in both cases (Table~\ref{tab:tableVbase}).
The Jitka subfamily shows in the V-shape plot (Figure~\ref{fig:3395_vshapea}) a depletion of the central portion, which should correspond to obliquities $\gamma$ not far from $90^\circ$. This can be interpreted as a signature of the YORP effect, in that most members with $\gamma\sim 90^\circ$ would have had their spin axes evolved by YORP towards one of the two stable states, $\gamma=0^\circ, 180^\circ$. If the two collisional families belong to parent bodies with similar composition, then the ratio of the inverse slopes corresponds to the ratio of the ages, independently of the calibration. Thus Jitka could be the outcome of a catastrophic fragmentation of a fragment from another fragmentation $9$ times older. However, there are some problems if we use the WISE albedo data for family 847 members; there are only $114$ albedos with $S/N>3$, which introduces some risk of small number statistics. Anyway, they indicate that the two subgroups, the 3395 subfamily and the rest of the 847 dynamical family, have a similar distribution of albedos, including dark interlopers. However, the albedo of (847) Agnia, $0.147\pm 0.012$, is lower than that of most family members, while (3395) Jitka has $0.313\pm 0.045$, which is more compatible with the family. Thus it is not clear whether (847) Agnia is the largest remnant or an interloper, and whether the parent body of the Jitka subfamily belonged to the first generation family. \begin{table}[h!] \footnotesize \centering \caption{Fragmentation families: family, proper $a$, $e$ and $\sin{I}$ of the barycenter.
The barycenter is computed by removing the outliers.} \label{tab:tablebar_frag} \medskip \begin{tabular}{lccc} \hline family & a$_b$ & e$_b$ & $\sin{I}_b$ \\ \hline \\ 158 & 2.8807& 0.0488& 0.0371\\ 847 & 2.7799& 0.0715& 0.0664\\ 847 (w/o 3395)& 2.7462& 0.0725& 0.0654\\ 3395 & 2.7911& 0.0728& 0.0669\\ \\ \hline \end{tabular} \end{table} \subsubsection{Yarkovsky effect calibration and family age estimation} Since there is not a single measurement of the Yarkovsky effect for main belt asteroids, hence none for family members, we can perform the necessary calibration only by using the available measurements for the Near Earth Asteroids. Thus, here the age estimation is obtained by scaling the results for the asteroid with the best Yarkovsky effect determination \cite{yarko_all}, namely the low-albedo asteroid (101955) Bennu, with the scaling taking into account the different values of $D, a, e, \rho$ and $A$, where $\rho$ is the density and $A$ is the Bond albedo. The $da/dt$ value for (101955) Bennu has a $S/N=197.7$, thus a relative uncertainty $<1\%$. The scaling formula we have used is: \[ \frac{da}{dt} = \left.\frac{da}{dt}\right|_{Bennu} \frac{\sqrt{a_{Bennu}}\,(1-e^2_{Bennu})}{\sqrt{a}\,(1-e^2)} \frac{D_{Bennu}}{D}\,\frac{\rho_{Bennu}}{\rho}\, \frac{\cos(\phi)}{\cos(\phi_{Bennu})}\,\frac{1-A}{1-A_{Bennu}} \] where $D=1$ km and $\cos(\phi)=\pm 1$ are used in this scaling formula not as the properties of an actual asteroid, but because of the use of the inverse slope, as explained in the description of the method above. It may appear that the use of the Yarkovsky effect measurements for asteroids more similar in composition to the considered families than Bennu would be more appropriate. So, for example, the asteroid (2062) Aten has the best determined $da/dt$ value of all S-type asteroids. It has a $S/N=6.3$, thus a relative uncertainty of $0.16$, which has to be taken into account in the calibration error.
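The scaling above can be sketched in code. The Bennu reference values below are illustrative placeholders (this section quotes only the $S/N$ of the measurement, not the numbers themselves), and the function name and interface are ours:

```python
# Sketch of the Yarkovsky calibration by scaling from (101955) Bennu.
# The BENNU reference values are illustrative assumptions, not taken
# from this paper; substitute the measured values before use.
from math import sqrt

# assumed reference values for Bennu (hypothetical placeholders)
BENNU = dict(dadt=-19.0e-10,   # au/y, measured semi-major axis drift
             a=1.126, e=0.204, # orbit (au, -)
             D=0.49,           # diameter (km)
             rho=1.26,         # bulk density (g/cm^3)
             A=0.017,          # Bond albedo
             cos_phi=-1.0)     # retrograde spin

def scaled_dadt(a, e, rho, A, D=1.0, cos_phi=1.0, bennu=BENNU):
    """Scale Bennu's measured da/dt to another body.

    Follows the scaling formula of the text: da/dt proportional to
    1/(sqrt(a)(1-e^2)), to 1/D, to 1/rho, to cos(phi) and to (1-A).
    D = 1 km and cos_phi = +-1 are the values used with the V-shape
    inverse slopes.
    """
    f_orbit = sqrt(bennu['a']) * (1 - bennu['e']**2) / (sqrt(a) * (1 - e**2))
    f_size = bennu['D'] / D
    f_dens = bennu['rho'] / rho
    f_spin = cos_phi / bennu['cos_phi']
    f_albedo = (1 - A) / (1 - bennu['A'])
    return bennu['dadt'] * f_orbit * f_size * f_dens * f_spin * f_albedo
```

By construction the function returns the reference drift when fed Bennu's own parameters, and the drift decreases as $1/D$ for larger bodies.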
As for the scaling formula above, it introduces additional uncertainty, especially since there is no scaling term accounting for the different thermal properties. Thus, using an S-class asteroid for scaling may not result in a better calibration, because the S-type asteroids are not all the same, e.g., densities and thermal properties may be different. In the case of (2062) Aten there is an error term due to the lack of knowledge on the obliquity $\phi$, which can contribute an additional relative error up to $0.2$. The other two S-type asteroids with measured Yarkovsky are (1685) Toro and (1620) Geographos, but with $S/N= 3.7$ and $3.0$, respectively. In the same way, for the family of (4) Vesta one would expect that the use of the Yarkovsky measurement for asteroid (3908) Nyx, presumably of V-type \cite{binzel}, should represent a natural choice for calibration. In fact, the same authors warn that this asteroid belongs to a small group of objects with ``sufficiently unusual or relatively low $S/N$ spectra'', thus the taxonomic class may be different from nominal. This suspicion is further strengthened by the value of geometric albedo of only $0.16 \pm 0.06$ reported by \cite{benner}, which is significantly lower than the typical value ($\sim 0.35$) for a Vestoid. (3908) Nyx is apparently of extremely low density (Farnocchia and Chesley, private communication), thus it has too many properties inconsistent with Vestoids. This is why we have decided to use (101955) Bennu as benchmark to be scaled for the Yarkovsky calibration of all families, because it is the known case with both the best estimate of Yarkovsky and best known properties, including obliquity, density, and size. \begin{table}[h!] 
\footnotesize \centering \caption{Family age estimation: family, $da/dt$ for $D=1$ km obtained using (101955) Bennu for the calibration, for the two sides of the V-shape, and corresponding family age estimation.} \label{tab:tableage} \medskip \begin{tabular}{lrcc} \hline family & & da/dt & $\Delta$t \\ & & ($10^{-10}$ au/y)& (Gy)\\ \hline \\ 20 & IN& -3.64& 0.173\\ 20 & OUT& \phantom{-}3.55& 0.163\\ \\ 4 & IN& -2.65& 1.010\\ 4 & OUT& \phantom{-}2.57& 2.090\\ \\ 15 & IN& -3.49& 1.890\\ 15 & OUT& \phantom{-}3.39& 1.480\\ \\ 158 & IN& -3.13& 1.410\\ 158 & OUT& \phantom{-}3.08& 1.970\\ \\ 847 & IN& -3.37& 1.270\\ 847 & OUT& \phantom{-}3.33& 1.300\\ \\ 3395& IN& -3.35& 0.134\\ 3395& OUT& \phantom{-}3.33& 0.135\\ \\ \hline \end{tabular} \end{table} The results of our age computation for the considered families are given in Table~\ref{tab:tableage}, for the two sides of the V-shape. As one can appreciate from these data, the estimates of the age of the Massalia family from the two slopes differ by only a small amount, and they are also in good agreement with results obtained with a quite different method by \cite{vokyorp}. The Vesta family case is particularly interesting, as the lower age appears to be compatible with the estimated age of one of the two largest craters on Vesta, Rheasilvia ($\sim 1$ Gy) \cite{dawn_marchi}. An age for the other big crater, Veneneia, has not been estimated, although it must be older because this crater lies below the other. Our estimated $\sim 2/1$ ratio for the ages of the two collisional families is an interesting result, although it should not be considered as a proof that the sources are the two known largest craters. The difference between the inner and outer slopes of the V-shape for the Eunomia family could be interpreted as reflecting the ages of two different events, see Section~\ref{sec:eunomia}.
There is no previous estimate of the age of Eunomia we are aware of, a ``young age'' being suggested on the basis of simulations of size distribution evolution by \cite{michel}. The estimate of the age of the Koronis family as inferred from the longer outer side of the V-shape is consistent with the age ($\leq 2$ Gy) reported previously by \cite{marzari}, based on the observed size distribution of larger members, and by \cite{chapman}, based on the crater count on the surface of the Koronis family member (243) Ida. \cite{bottke2001} give $2.5\pm 1$ Gy, which is also consistent. The age estimate for the Agnia family of $<140$ My, provided by \cite{agnia_vok}, is in very good agreement with our result for the Jitka subfamily; the older age for the entire Agnia family has not been found previously because the low $a$ component identified by us was not included in the family. The two youngest according to our estimates, family 20 and subfamily 3395, have in common the presence of a lower density central region of the V-shape, more pronounced for Jitka, barely visible for Massalia. This suggests the following: the time scale for the YORP effect to reorient the rotation axis towards either full spin up or full spin down is much smaller than the time scale for randomization of a significant fraction of the spin states, which would fill the central gap. We need to stress that the main uncertainty in the age computation is not due to the estimate of the slope (apart from the ``bad'' case of the inner slope of Koronis). The main error term is due to the calibration; we do not yet have enough information to derive a formal standard deviation estimate, but the relative uncertainty of the age should be of the order of $0.2$--$0.3$.
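For reference, the final step of the age computation reduces to a single division. The sketch below is our reconstruction of that step; the function name and the inverse-slope value in the example, which is not quoted in this section, are illustrative:

```python
# Minimal sketch of the final age computation step: a member of diameter D
# drifts by (da/dt)_{1 km} * t / D, so the fitted inverse slope S (in au km)
# of one V-shape side, divided by the calibrated drift rate at D = 1 km,
# gives the family age.
def family_age_gy(inverse_slope_au_km, dadt_1km_au_per_y):
    """Age in Gy from a V-shape inverse slope and the calibrated drift."""
    return inverse_slope_au_km / abs(dadt_1km_au_per_y) / 1e9

# With da/dt = 3.55e-10 au/y (family 20, OUT side of Table above), an
# assumed inverse slope of ~5.79e-2 au km reproduces the quoted 0.163 Gy.
```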
\subsubsection{Collisional Models and the interpretation of V-shapes} \label{sec:collmodel} The method we have proposed for the computation of family ages has the advantage of using an objective measurement of the family V-shape, rather than using a line placed ``by eye'' on a plot. However, because two parameters are fit for each boundary line, that is the slope and the intersection with a-axis, whenever both sides are accessible the output is a set of four parameters: the inverse slopes on both sides (Table~\ref{tab:tableslopes}), the V-base and the center of V-base (Table~\ref{tab:tableVbase}). To force the lines to pass from a single vertex on the horizontal axis would remove one fit parameter, to assign also the proper $a$ of this vertex (e.g., at the value of some barycenter) would remove two parameters: by doing this we would bias the results and contradict the claimed objectivity of the procedure, which has to be defined by the family membership only. On the other hand, our procedure does not use the V-base and its center to estimate the family age. Only the slopes of the leading edge of family members (for either high or low $a$) are used for the ages. This leads to two questions: which information is contained in the two parameters we are not using, and is it appropriate to obtain, e.g., a negative V-base, or is this an indication of a poor fit of at least one of the two lines? The interpretation of the V--shape plots is not straightforward, because they are the outcome of a game involving three major players, each one producing its own effect. These players are: (1) the collisional history of a family, including the possible presence of overlapping multi-generation events; (2) the Yarkovsky effect, which in turn is influenced by the YORP effect; (3) the original field of fragment ejection velocities at the epoch of family formation. In addition, also the possible presence of strong nearby resonances plays an important role. 
Note also that in the present list of families several have been created by cratering and not by catastrophic disruption. As for the effect (3), the existence of a correlation between the size and the dispersion in semi-major axis of family members has been known for several years. In the past, in the pre--Yarkovsky era (and with most of the recognized families resulting from catastrophic events), this correlation was assumed to be a direct consequence of the distribution of original ejection velocities, with smaller fragments being ejected at higher speeds. The ejection velocities derived from observed proper element differences, however, turned out to be too high to be consistent with the experiments, since they implied collisional energies sufficient to thoroughly pulverize the parent bodies. Later, the knowledge of the Yarkovsky effect and the availability of more detailed hydrodynamic simulations of catastrophic fragmentation family--forming events (see, for instance, ~\cite{michel}) suggested a different scenario: most family members would be reaccumulated conglomerates, issued from the merging of many fragments ejected at moderate velocities. In this scenario, the original ejection velocities give a moderate contribution to the observed dispersion of proper elements. Then the V--shape plots discussed in the previous subsections would be essentially a consequence of such Yarkovsky--driven evolution (see~\cite{bottke2002} for a general reference). The extension of the above scenario to families formed by craterization events is not obvious, nor --at the present time-- supported by numerical simulations, which are not yet capable of reaching the required resolution~\cite{jutzi}. However, the interpretation of the V--shape as a consequence of the Yarkovsky effect should hold also for them.
Unfortunately, a fully satisfactory interpretation of the observed V--shape plots can hardly be achieved in such a purely Yarkovsky--dominated scenario: the original ejection velocities of fragments cannot be totally disregarded. For the Eos family, \cite{brozmorby} and \cite{eos_vok} assume, for bodies of a size of $5$ km, average asymptotic relative velocities $v_\infty$ of about $90$ m/s. This is even more true for the families formed by cratering events on very large asteroids, since ejection velocities $v_0$ must be $>v_e$ (escape velocity) so as to overcome the gravitational well of the parent body, and the $v_\infty$ of the family members are both large and widely dispersed (see Section~\ref{sec:craters}). Due to the original dispersion of the family members, we cannot expect that the two sides of any given V--plot exactly intersect on the horizontal axis, as one might expect for a ``pure'' Yarkovsky model. The original extension of the family depends on the ejection velocities $v_\infty$ of the bodies, while the Yarkovsky effect on every body of a given size depends on the orientation of the spin vector. If velocities and spin vectors are not correlated, the two terms should combine as independent distributions. If the Yarkovsky term is assumed to be the dominant signal, the original velocities provide a noise term; the noise/signal value is certainly significant for large objects, thus the two lines of the ``V'' should not intersect at $1/D=0$, but in the halfplane $D<0$. The ``V--base'' has therefore to be positive. Yet, this is not the case in $3$ out of $6$ examples presented in this paper. How can this possibly be explained? A more physical explanation may be tentatively suggested, based on an argument which has been previously discussed in the literature \cite{laspina}, \cite{paolicchi2008} but not yet fully explored.
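The sign argument can be put in a schematic form (our notation, not used elsewhere in the paper: $\Delta a_v>0$ is an effective displacement due to the original velocities, $c$ the calibrated drift rate at $D=1$ km). If the boundary members at diameter $D$ combine the two terms, the two sides of the V are approximately
\[
\Delta a_\pm\left(\frac{1}{D}\right) \simeq \pm\left(\Delta a_v + \frac{c\,\Delta t}{D}\right),
\]
so each line crosses $\Delta a=0$ at $1/D = -\Delta a_v/(c\,\Delta t)<0$, formally in the halfplane $D<0$, and the V-base $\Delta a_+(0)-\Delta a_-(0)=2\,\Delta a_v$ is positive.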
According to the results of some laboratory fragmentation experiments \cite{fuji}, \cite{holsapple}, the fragments ejected from a catastrophic disruption rotate, and the sense of spin is related to the ejection geometry: the fragments rotate away from the side of higher ejection velocity. Such behavior is clearly represented in \cite[Fig. 1]{fuji}. This experimental evidence was used in developing the so-called semi-empirical model \cite{paolicchi1989,paolicchi1996}, assuming that fragment rotations are produced by the anisotropy of the ejection velocity field. \begin{figure}[h!] \figfig{9cm}{Fig_fam}{The possible spin--ejection velocity correlation for a radial impact from the interior of the Solar System. This is a projection on the orbital plane. For a cratering impact the ejecta are in the same hemisphere as the impact point, while for a catastrophic disruption the crater zone is pulverized and most sizable fragments are ejected from the antipodal hemisphere. In both cases the ejection velocity decreases with the angular distance from the impact point. If the rotation is connected to the velocity shear, the fragments with a positive along-track velocity (top of the figure) have a retrograde (clockwise) rotation, and vice versa. This is true both for the front side ejecta (cratering) and for the rear side fragments (disruption). In this case the correlation between the initial $\Delta a$ and $\cos\gamma$ is negative, and the Yarkovsky effect tends initially to shrink the family in $a$.} \end{figure} In this scenario, the rotation of fragments created in a catastrophic process can be strongly correlated with the ejection velocity. As far as cratering events are concerned, to our knowledge there is no experimental evidence in this respect mentioned in the literature.
However, also in cratering events the ejection velocity field is strongly anisotropic (see, for instance, the popular Z--model by \cite{maxvell}), and a similar correlation between ejection velocity and spin rate can be expected for the fragments. It is not obvious how significantly the reaccumulation of ejecta (a process which certainly is very important after catastrophic events) can affect this correlation. There are very few simulations recorded in the literature taking into account the rotation of fragments \cite{richardson}, \cite{michel}; they are all about fragmentations, and their results do not solve the present question. However, if the fragments which stick together were ejected from nearby regions of the parent body, an original correlation might be preserved. If this is the case, different impact geometries will result in different evolutions of the semi-major axis spread of the family. To model the geometry of the impact, let us call \emph{crater radiant} the normal $\hat n$ to the smoothed terrain before the crater is excavated (at the impact point). What matters are the angles between $\hat n$ and the directions $\hat v$ of the orbital velocity of the parent body, and $\hat s$ towards the Sun (both at the epoch of the impact). If $\hat n\cdot \hat s>0$ (impact on the inner side) with $\hat n\cdot \hat v\simeq 0$ (impact radiant close to normal to the velocity) there are preferentially retrograde fragments on the side where the ejection velocity adds up with the orbital velocity, thus giving rise to fragments at larger $a$, and preferentially prograde fragments on the opposite, lower $a$ side. This implies that the spread in proper $a$ of the family initially decreases (ejection velocity and Yarkovsky term act in the opposite sense), then increases again, and the V-base is negative. If $\hat n\cdot \hat s<0$ (impact on the outer side) with $\hat n\cdot \hat v\simeq 0$ there are preferentially prograde fragments at larger $a$, and preferentially retrograde ones at lower $a$.
This results in a large spread, even after a short time, of the family in proper $a$ (ejection velocity and Yarkovsky term add), and the V-base is positive. Finally, in case of negligible $\hat n\cdot \hat s$, the original ejection velocities and Yarkovsky drift add up as noise terms, the latter dominating in the long run; the V-base is positive but small. Note that, as shown by Figure~\ref{fig:Fig_fam}, this argument applies equally to cratering and to fragmentation cases. Thus, in principle, the properties of the V--base and of the family barycenter (Tables~\ref{tab:tableVbase}, \ref{tab:tablebar_crat}, and \ref{tab:tablebar_frag}) contain information on the impact geometry and on the original distribution of $v_\infty$. However, the interpretation of these data is not easy. A quantitative model of the ejection of fragments, describing the distribution of $v_\infty$, the direction of $v_\infty$, $\cos\gamma$, and $D$, taking into account all the correlations, is simply not available. We have just shown that some of these correlations (between direction and $\cos\gamma$) are not negligible at all, but all the variables can be correlated. We have even less information on shapes, which are known to be critical for the YORP effect. This does introduce error terms in our age estimates. The main problem is the dependence of the Yarkovsky drift in $a$, averaged over very long times, on $D$. According to the basic YORP theories (see~\cite{bottke2006} for a general reference) the bodies should preferentially align their rotation axes close to the normal to the orbital plane (both prograde and retrograde), with a timescale strongly dependent on the size. This result is also supported by the recent statistical work on the spin vector catalog~\cite{paolicchi2012}.
Consequently, there should be a substantial fraction of the small bodies moving towards the borders of the V--plot, especially after times long with respect to the time scale for the YORP-driven evolution to the spin up/spin down stable states. Using a different database, \cite{vokyorp} have found for most families a number density distribution in accordance with this idea. However, the maxima are not at the edges, but somewhere in between the symmetry axis of the V-shape and the edges: e.g., see Figure~\ref{fig:3395_vshapea}. It is not easy to draw general conclusions from this kind of data, because in most families the portions near the extreme values of proper $a$ are affected by resonances and/or by the merging of step 3 families as haloes. There are many models proposed in the literature to account for a form of randomization of the spin state, resulting in something like a Brownian motion along the $a$ axis over very long time scales; e.g., \cite{statler} and \cite{cotto} show that the YORP effect can be suddenly altered. Thus after a long enough time span, most family members may be random-walking between the two sides of the V-shape, and the central area is never emptied. However, what we are measuring is not the evolution in $a$ of the majority of family members, but the evolution of the members fastest in changing $a$. Our method, indeed any method using only the low and high $a$ boundaries of the family, should be insensitive to this effect for large enough families. Even in the presence of such random effects, a portion of the family with a spin state remaining stable at $\cos\gamma\simeq \pm 1$ will be maintained for a very long time, and this portion is the one used in the V-shape fit.
Our method is mathematically rigorous in extracting from the family data two components of the evolution of proper $a$ after the family formation: a term which is constant in time (from the original distribution of velocities) and independent of $D$, and a term which is proportional to $1/D$ and to the time elapsed. If the situation is much more complicated, with a larger number of terms with different dependence on both $D$ and $t$, we doubt that the current dataset is capable of providing information on all of them, independently of the method used. Moreover, some terms may not be discriminated at all, such as a $1/D$ dependency not due to a pure Yarkovsky term $\Delta t/D$. \subsection{Size distributions} \begin{figure}[h!] \figfig{11cm}{massalia_sizefit}{Size distribution of family 20 using the range $1.5<D<5$ km.} \end{figure} Another use of the diameters deduced from absolute magnitudes assuming uniform albedo is the possibility of computing a size distribution. This is a very delicate computation, depending strongly upon the range of diameters used. The numbers of members at too small diameters are affected by incompleteness, while too large diameters are affected by small number statistics, especially for cratering events. The utmost caution should be used in these estimates for families less numerous and/or with a more complex structure. In Figure~\ref{fig:massalia_sizefit} we show the result of a size distribution power law fit for family 20, using the range from $D=1.5$ to $5$ km, thus excluding (20) and the two outliers identified above as well as two others above $5$ km. The resulting best fit differential power law is proportional to $1/D^{5}$, that is, the cumulative distribution is proportional to $1/D^{4}$; this value suggests that the fragments are not yet in collisional equilibrium, thus supporting a comparatively young age for the family.
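A fit of this kind can be sketched as follows. The text does not specify the fitting recipe, so the log-log least squares on a binned histogram below, and the synthetic diameters standing in for the family data, are only illustrative assumptions:

```python
# Sketch of a differential size-distribution power-law fit (assumed
# procedure: linear least squares on the log-log histogram).
import math
import random

def fit_power_law_exponent(diams, d_min, d_max, nbins=12):
    """Fit n(D) ~ D**(-alpha) on [d_min, d_max]; return alpha."""
    sel = [d for d in diams if d_min <= d <= d_max]
    # bins of equal width in log D
    edges = [d_min * (d_max / d_min) ** (i / nbins) for i in range(nbins + 1)]
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n = sum(1 for d in sel if lo <= d < hi)
        if n > 0:
            # differential density = counts / bin width, at geometric center
            xs.append(math.log((lo * hi) ** 0.5))
            ys.append(math.log(n / (hi - lo)))
    # least-squares slope of log n(D) vs log D is -alpha
    mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# synthetic family with cumulative distribution ~ 1/D**4 above 1.5 km,
# i.e. differential ~ 1/D**5, as found for family 20
random.seed(7)
diams = [1.5 * (1.0 - random.random()) ** (-1 / 4.0) for _ in range(20000)]
alpha = fit_power_law_exponent(diams, 1.5, 5.0)  # expected close to 5
```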
The results are somewhat dependent upon the range of diameters considered, as is clear from the much lower values for diameters $D< 1$ km with respect to the fit line (in green): this is a clear signature of observational incompleteness. \begin{figure}[h!] \figfig{11cm}{vesta_sizefit}{Size distribution of family 4 using the range $1.5<D<8$ km.} \end{figure} In Figure~\ref{fig:vesta_sizefit} we show the size distribution power law fit for family 4; to improve the estimates of the diameters, we have used as a common albedo $0.35$ because it is more representative of the small members of the family, see Figure~\ref{fig:vesta_back_albedo_hist}. We have used the range from $D=1.5$ to $8$ km, thus excluding (4) and the interlopers (556) and (1145) as well as another member marginally larger than $D=8$ km. The differential power law is $1/D^{4.5}$, that is, the cumulative is $1/D^{3.5}$; this value also suggests that collisional equilibrium has not been reached. The distribution of this family appears somewhat less steep, which should mean it is older, in agreement with the estimates from the previous subsection. Because of the larger span in $D$, the figure shows, besides the incompleteness of family members with diameter $D<2$ km, an additional phenomenon, namely a deficiency of members with respect to the power law at comparatively large diameters $D>5$ km. However, a complete interpretation of the size distribution should not be attempted without taking into account the results of Section~\ref{sec:vesta}, proposing a complex structure for the family. The fact that the size distribution of families formed in cratering events usually exhibits this kind of concavity is in agreement with available fragmentation models \cite{Tangaetal99, Durdaetal2007}.
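For reference, the relation between the differential and cumulative exponents used in these two fits follows from a one-line integration: if the differential distribution is $n(D)\,dD\propto D^{-\alpha}\,dD$, then for $D\ll D_{max}$
\[
N(>D)=\int_D^{D_{max}} n(D')\,dD' \propto D^{-(\alpha-1)},
\]
so the differential exponent $5$ of family 20 corresponds to the cumulative exponent $4$, and the differential exponent $4.5$ of family 4 corresponds to the cumulative exponent $3.5$.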
The concavity in the size distribution tends to disappear when the parent body to largest member size ratio decreases, and this can be qualitatively explained in terms of the available volume for the production of larger fragments \cite{Tangaetal99}. Another constraint on the low $D$ side comes from the fact that a slope larger than $4$ for the differential size distribution corresponds to an infinite total volume of all the fragments of diameter in the interval $D_{max}>D>0$: the total volume is proportional to $\int_0^{D_{max}} D^3\, n(D)\, dD$, which diverges at the lower limit for $n(D)\propto D^{-\alpha}$ with $\alpha\geq 4$. This implies that, even if there were no observational bias, the slope must decrease below some critical value of $D$. The problem is that we do not know what this critical size is; thus we cannot discriminate between observational bias and the possible detection of a real change in slope. \section{Refinement with physical data} \label{sec:use_physical} The data from physical observations of asteroids, especially if they are available in large and consistent catalogs like WISE and SDSS, are very useful to solve some of the problems left open by purely dynamical classifications such as the one discussed in the previous sections. This happens when there are enough consistent and quality controlled data, and when the albedo and/or colors can discriminate, either between subsets inside a family, or between a family and the local background, or between nearby families. (See also examples in Section~\ref{sec:craters}.) \subsection{The Hertha--Polana--Burdett complex family} The most illustrative example of discrimination inside a dynamical family is the case of the family with lowest numbered asteroid (135) Hertha; when defined by purely dynamical arguments, it is the largest family with $11\,428$ members. Its shape is very regular in the $a,\sin I$ proper element projection, but has a peculiar $>$-shape in the $a,e$ projection (Figure~\ref{fig:hertha_ae_wise}), which has been strongly enhanced by the addition of the smaller asteroids of the halo (in yellow). \begin{figure}[h!]
\figfig{12cm}{hertha_ae_wise}{The Hertha dynamical family in the proper $a, e$ plane. Bright objects (magenta stars) and dark objects (cyan stars) forming a characteristic $>$-shape indicate two partly overlapping collisional families. } \end{figure} Already by using the absolute magnitude information some suspicion arises from the V-shape plot, from which it does not appear possible to derive a consistent ``slope'' either from the inner or from the outer edge. Problems are already well known to arise in this family from the very top, that is, (142) Polana, which is dark (WISE albedo $0.045$) with diameter $D\simeq 60$ km, and (135) Hertha, which is of intermediate albedo $0.152$ and $D\simeq 80$ km, also known to be an M type asteroid, but exhibiting the 3 $\mu$m spectral feature of hydrated silicates \cite{Rivkinetal2000}. \begin{figure}[h!] \figfig{12cm}{hertha_hist_alb}{The distribution of WISE albedos for the 135 dynamical family with the locations of the three namesakes indicated by red lines. The distribution is clearly bimodal, supporting the scenario with two collisional families.} \end{figure} By systematically using the WISE albedos, limited to the asteroids for which the albedo uncertainty is less than $1/3$ of the nominal value ($1\,247$ such data points in the 135 dynamical family), we find the sharply bimodal distribution of Figure~\ref{fig:hertha_hist_alb}. (142) Polana is by far the largest of the ``dark'' population (for the purpose of this discussion defined as albedo $<0.09$, $611$ asteroids) as well as the lowest numbered. The ``bright'' population (albedo $>0.16$, $568$ asteroids) does not have a dominant large member, the largest being (3583) Burdett (albedo $0.186\pm 0.02$, $D\simeq 7.6$ km)\footnote{The asteroid (878) Mildred was previously cited as the namesake of a family in the same region: Mildred is very likely to be ``bright'', although the WISE data are not conclusive (albedo $=0.40\pm 0.22$), but is very small ($D\simeq 2.5$ km).
The fact that (878) was imprudently numbered in 1926 after the discovery and then lost is a curious historical accident explaining a low numbered asteroid which is anomalously small. Thus we are going to use Burdett as the namesake of the ``bright'' component.} In Figure ~\ref{fig:hertha_ae_wise} we have plotted with magenta stars the ``bright'', with cyan stars the ``dark'', and it is clear that they are distributed in such a way that the $>$-shape in the proper $a,e$ plane can be explained by the presence of two separate collisional families, the Polana family and the Burdett family, with a significant overlap in the high $a$, low $e$ portion of the Hertha dynamical family. Because the WISE dataset is smaller than the proper elements dataset, we cannot split the list of members of the 135 dynamical family into Polana and Burdett, because such a list would contain an overwhelming majority of ``don't know''. Erosion of the original clouds of fragments by the $3/1$ resonance with Jupiter must have been considerable, thus we can see only a portion of each of the two clouds of fragments. Based on the total volume of the objects for which there are good albedo data, the parent body of Polana must have had $D> 76$ km, the one of Burdett $D> 30$ km. Note that we could get the same conclusion by using the $a^*$ parameter of the SDSS survey: among the $1\,019$ asteroids in the 135 dynamical family with SDSS colors and $a^*$ uncertainty less than $1/3$ of the nominal value, $184$ have $-0.3<a^*<-0.05$ and $835$ have $+0.05< a^*< 0.3$, thus there is also a bimodal distribution, which corresponds to the same two regions marked in magenta and cyan in Figure~\ref{fig:hertha_ae_wise}, with negative $a^*$ corresponding to low albedo and positive $a^*$ corresponding to high albedo, as expected \cite[Figure 3]{parker2008}. 
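The split used here amounts to a quality cut plus two thresholds; a minimal sketch follows (the function and the sample records are ours; only the cut at $1/3$ relative uncertainty and the $0.09$/$0.16$ albedo thresholds come from the text):

```python
# Sketch of the albedo-based split of a dynamical family into "dark" and
# "bright" populations, with the quality cut used in the text (relative
# albedo uncertainty < 1/3). Records are (albedo, uncertainty) pairs.
def split_by_albedo(records, max_rel_unc=1/3, dark_max=0.09, bright_min=0.16):
    """Return (dark, bright, intermediate) albedo lists after the cut."""
    dark, bright, middle = [], [], []
    for pv, sigma in records:
        if pv <= 0 or sigma / pv >= max_rel_unc:
            continue  # reject low-quality albedo determinations
        if pv < dark_max:
            dark.append(pv)
        elif pv > bright_min:
            bright.append(pv)
        else:
            middle.append(pv)
    return dark, bright, middle

# Polana-like (0.045) and Burdett-like (0.186 +- 0.020) values pass the
# cut; a Mildred-like noisy determination (0.40 +- 0.22) is rejected.
d, b, m = split_by_albedo([(0.045, 0.004), (0.186, 0.020), (0.40, 0.22)])
```

The same thresholding applied to the SDSS $a^*$ parameter (negative for dark, positive for bright) would reproduce the second bimodal split described above.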
The lower fraction of ``dark'' objects contained in the SDSS catalog, with respect to the WISE catalog, is an observational selection effect: dark objects are less observable in visible light but well observable in the infrared. Because of its very different composition, (135) Hertha can be presumed to belong to neither of the two collisional families, although strictly speaking this conclusion cannot be proven from the data we are using, listed in Section 2, but requires some additional information (e.g., the taxonomic classification of Hertha) and suitable modeling (e.g., excluding that a metallic asteroid can be the core of a parent body with an ordinary chondritic mantle). All these conclusions are a confirmation, based on statistically very significant information, of the results obtained by \cite{cellino2001} on the basis of a much more limited dataset (spectra of just 20 asteroids). Other authors, such as \cite{masiero2013}, have first split the asteroids by albedo and then formed families by proper elements; they reach the same conclusion on two overlapping families, but their total number of family members is lower by a factor $\sim 3$. \subsection{The Eos family boundaries} Unfortunately, in many other cases in which the dynamical classification raises doubts about the family structure, the physical observation databases are not sufficient to solve the problem. An example is the family of (221) Eos, which on the basis of the proper elements only cannot be neatly separated from the two smaller families of (507) Laodica and (31811) 1999 NA41 (there are a few members in common), as discussed in Section~\ref{sec:result_dyn}.
The Eos family mostly contains intermediate albedo asteroids, including (221) Eos (WISE albedo $=0.165\pm 0.034$) belonging to the unusual K taxonomic class, but the surrounding region predominantly contains low albedo objects, including (31811) (albedo $0.063\pm 0.014$) and the majority of the members of the 507 family for which WISE data are available; (507) Laodica itself has an albedo $=0.133\pm 0.009$ which is compatible with the Eos family. From this we can only draw a negative conclusion: attaching families 507 and 31811 to 221 would not be supported by the physical observations, but leaving them separate is not much better supported either. \subsection{Watsonia and the Barbarians} The family of (729) Watsonia had already been identified in the past by \cite{bojan_highi}, who adopted a proper element database including also a significant number of still unnumbered, high-inclination asteroids, not considered in our present analysis. This family is interesting because it includes objects called ``Barbarians'', see \cite{Barbara}, which are known to exhibit unusual polarization properties. Two of us (AC and BN) have recently obtained VLT polarimetric observations \cite{barbarians} showing that seven out of nine observed members of the Watsonia family exhibit the Barbarian behavior. This result strongly confirms a common origin of the members of the Watsonia family. On the other hand, the presence of another large (around 100 km in size) Barbarian, (387) Aquitania, which has proper semi-major axis and inclination within the limits of the Watsonia family, but shows a difference in proper eccentricity of about $0.1$, too large to include it in the family, indicates that the situation can be fairly complex and raises interpretation problems, including a variety of possible scenarios which are beyond the scope of the present paper.
\section{Cratering families} \label{sec:craters} As a result of the availability of accurate proper elements for smaller asteroids, our classification contains a large fraction of families formed by cratering events. Modeling of the formation of cratering families needs to take into account the escape velocity $v_e$ from the parent body, which results in the parent body not being at the center of the family as seen in proper elements space. This is due to the fact that fragments which do not fall back on the parent body need to have an initial relative velocity $v_0>v_e$, and because of the formula giving the final relative velocity $v_\infty=\sqrt{v_0^2-v_e^2}$ the values of $v_\infty$ have a wide distribution even for a distribution of $v_0$ peaking just above $v_e$. The mean value of $v_\infty$ is expected to be smaller than $v_e$, at most of the same order. Thus immediately after the cratering event, the family appears in the proper elements space as a region similar to an ellipsoid, centered at a distance $d$ of the order of $v_e$ from the parent body. Of course this effect is most significant for the very largest parent bodies. Moreover, it is important not to forget that cratering events typically occur multiple times over the age of the solar system, since the target keeps the same impact cross section. The outcomes can appear either as separate dynamical families or as structures inside a single one. We use as criterion for the identification of a cratering family that the fragments should add up to $\leq 10\%$ of the parent body volume; we have tested only the large and medium families, and used the common albedo hypothesis to compare volumes. In this way 13 cratering families have been identified, with the asteroids (2), (3), (4), (5), (10), (15), (20), (31), (87), (96), (110), (179), and (283) as parent bodies. Other large asteroids do not appear to belong to families. We will discuss some interesting examples.
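The stretching of the $v_\infty$ distribution implied by $v_\infty=\sqrt{v_0^2-v_e^2}$ is easy to appreciate with a few numbers. The following minimal sketch uses the escape velocity $v_e=363$ m/s quoted for Vesta later in this section; the $v_0$ values are purely illustrative:

```python
import math

def v_infinity(v0, ve):
    """Relative velocity far from the parent body, from energy
    conservation, for an ejection velocity v0 above the escape
    velocity ve."""
    return math.sqrt(v0**2 - ve**2)

# Escape velocity from Vesta's surface, as quoted in the text.
ve = 363.0
# Illustrative ejection velocities just above ve.
for v0 in (365.0, 380.0, 400.0, 450.0):
    print(f"v0 = {v0:5.0f} m/s  ->  v_inf = {v_infinity(v0, ve):6.1f} m/s")
```

Ejection velocities less than $1\%$ to $10\%$ above $v_e$ already map to $v_\infty$ ranging from a few tens to well over a hundred m/s, which is why the family occupies a wide region in proper elements space even when the $v_0$ distribution peaks just above $v_e$.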
\subsection{The Massalia family} \label{sec:massalia} Although the V-shape plot (Figure~\ref{fig:20_vshapea}) does not suggest any internal structure for the family 20, the inspection of the shape of the family in the space of all three proper elements suggests otherwise. The distribution of semimajor axes is roughly symmetrical with respect to (20) Massalia, while those of eccentricity and inclination are, on the contrary, rather asymmetrical. The eccentricity distribution is skewed towards higher eccentricities (positive third moment); this is apparent from Figure~\ref{fig:massalia_ae} as a decrease of number density for $e<0.157$. The inclination distribution is skewed towards lower inclinations (negative third moment). Thus the barycenter of the ejected objects appears quite close to the parent body (20), see Table~\ref{tab:tablebar_crat}: if the differences in $e, \sin{I}$ are scaled by the orbital velocity they correspond to about $7$ m/s, which is much smaller than the escape velocity. Even if the distributions are skewed in number density, fragments appear to have been launched in all directions, and this is not possible for a single cratering event. These arguments lead us to suspect a multiple collision origin of the dynamical family. At $e<0.157$ there seems to be a portion of a family with fewer members which does not overlap the other, denser collisional family. The denser family has been ejected in a direction such that $e$ increases and $\sin{I}$ decreases, the other in a direction with roughly the opposite effect. However, the presence of the low $e$ subfamily does not affect the age computation, which only applies to the high $e$ subfamily, due to the fact that the extreme values of $a$ are reached in the high $e$ region. Thus there are two concordant values for the slopes on the two sides, and a single value of the age can be computed, which refers only to the larger, high $e$ subfamily.
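The scaling used above, from proper element differences to a velocity, can be sketched as follows. The circular-orbit approximation of the heliocentric velocity is standard; the offsets $\delta e$, $\delta\sin{I}$ below are hypothetical placeholders of the right order of magnitude, since the actual barycenter offsets are in Table~\ref{tab:tablebar_crat} and not reproduced here:

```python
import math

AU_KM = 1.495978707e8        # km per au
GM_SUN = 1.32712440018e11    # km^3/s^2

def orbital_velocity(a_au):
    """Heliocentric orbital velocity, circular-orbit approximation."""
    return math.sqrt(GM_SUN / (a_au * AU_KM))  # km/s

# Proper semimajor axis of (20) Massalia, approximately 2.41 au.
v_orb = orbital_velocity(2.41)

# Hypothetical offsets of the family barycenter in proper e and sin(I).
de, dsinI = 2.6e-4, -2.4e-4
dv = v_orb * 1000.0 * math.hypot(de, dsinI)  # m/s
print(f"v_orb = {v_orb:.2f} km/s, scaled barycenter offset ~ {dv:.1f} m/s")
```

With offsets of a few times $10^{-4}$ in $e$ and $\sin{I}$, the scaled velocity is of the order of $7$ m/s, as quoted in the text, indeed far below the escape velocity from Massalia.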
\subsection{The Vesta family substructures} \label{sec:vesta} \begin{figure}[h!] \figfig{12cm}{4_ae}{The family 4 shown in the proper $a,e$ plane. The halo families merged in step 5 of our classification procedure (yellow dots) extend the family closer to the $3/1$ resonance with Jupiter on the right and to the $7/2$ resonance on the left. The position of (4) Vesta is indicated by the cyan cross, showing that the parent body is at the center of neither of the two concentrations of members, at lower $e$ and at higher $e$. This is because of the strongly anisotropic distribution of velocities $v_\infty$ for a cratering event.} \end{figure} The Vesta family has a curious shape in the proper $a,e$ plane, see Figure~\ref{fig:4_ae}, which is even more curious if we consider the position of (4) Vesta in that plane. In proper $a$, the family 4 is bounded by the $3/1$ resonance with Jupiter on the outside and by the $7/2$ on the inside. Closer inspection reveals the role of another resonance at $a\simeq 2.417$ au, which is the $1/2$ with Mars. Indeed, the low $e$ portion of the family has its outside boundary at the $1/2$ resonance with Mars. By marking the position of Vesta, as we have done with the cyan cross in Figure~\ref{fig:4_ae}, we can appreciate the existence of a group of roughly oval shape with proper $e$ lower than, or only slightly above, that of Vesta (which is $0.099$). This can be confirmed by a histogram of the proper $a$ for family 4 members, showing for $2.3<a<2.395$ a denser core containing about $2/3$ of the family members. We can define a subgroup as the family 4 members with $a<2.417$ and $e<0.102$, conditions satisfied by $5\,324$ members. We shall refer to this group by the non-committal name of ``low $e$ subfamily''. By assuming for the sake of simplicity that albedo and density are the same for all members, we can compute the center of mass, which is located at $a=2.3435$ and $e=0.0936$.
To get to such values the relative velocity components after escape from Vesta should have been $-76$ and $-98$ m/s, respectively\footnote{The negative sign indicates a direction opposite to the orbital velocity for $a$, and a direction, depending upon the true anomaly at the collision, resulting in a decrease of $e$.}. Since the escape velocity from Vesta's surface is $\sim 363$ m/s, this is compatible with the formation of the low $e$ subfamily from a single cratering event, followed by a Yarkovsky evolution significant for all members, since no member has $D>8$ km. What is then the interpretation of the rest of the family 4? We shall call ``high $e$ subfamily'' all the members not belonging to the low $e$ portion defined above, excluding also asteroids (556) and (1145) which have been found to be interlopers. This leaves $2\,538$ members, again with size $D<8$ km. It is also possible to compute a center of mass: the necessary relative velocities after escape are larger by a factor $\sim 2$, still comparable to the escape velocity from Vesta, although this estimate is contaminated by the possible inclusion of low $e$, low $a$ members in the low $e$ subfamily. Anyway, the shape of this subfamily is not as simple as that of the other one, thus there could have been multiple cratering events to generate it. This decomposition provides an interpretation of the results from Section~\ref{sec:ages}, in which there was a large discrepancy, by a factor $\sim 2$, between the ages computed from the low $a$ side and from the high $a$ side of the V-shape in $a, 1/D$. Indeed, if the low $e$ subfamily ends at $a<2.417$, while the high $e$ subfamily ends at $a\sim 2.482$, then the right side of the V-shape belongs to the high $e$ subfamily. From Figure~\ref{fig:4_ae} we see that the low $a$ side of the family appears to be dominated by the low $e$ subfamily. As a consequence of this model, the two discordant ages computed in Section~\ref{sec:ages} belong to two different cratering events.
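As an order-of-magnitude check of the velocity components quoted above, one can invert the Gauss equation for the semimajor axis in the circular-orbit approximation. In this sketch the barycenter values are taken from the text, while the proper semimajor axis of (4) Vesta itself ($\simeq 2.3615$ au) is an assumption:

```python
import math

AU_KM = 1.495978707e8        # km per au
GM_SUN = 1.32712440018e11    # km^3/s^2

# Proper elements: barycenter values from the text; the proper a and e
# of Vesta itself are assumptions of this sketch.
a_vesta, e_vesta = 2.3615, 0.099
a_bar, e_bar = 2.3435, 0.0936

# Heliocentric orbital velocity of Vesta, circular approximation (m/s).
v_orb = math.sqrt(GM_SUN / (a_vesta * AU_KM)) * 1000.0

# Gauss equation for a circular orbit: da/a = 2 dv_t / v_orb,
# hence dv_t = (v_orb / 2) * (da / a).
dv_t = 0.5 * v_orb * (a_bar - a_vesta) / a_vesta
# For e the conversion depends on the true anomaly at impact; as an
# order-of-magnitude scaling we use |dv| ~ v_orb * |de|.
dv_e = v_orb * (e_bar - e_vesta)

print(f"dv_t ~ {dv_t:.0f} m/s, dv_e ~ {dv_e:.0f} m/s")
```

The tangential component comes out close to the quoted $-76$ m/s, while the eccentricity scaling reproduces only the order of magnitude of the quoted $-98$ m/s, as expected given the dependence on the anomaly at impact.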
This interpretation is consistent with the expectation that cratering events, large enough to generate an observable family, occur multiple times on the same target. As for the uncertainties of these ages, they are dominated by the poor a priori knowledge of the Yarkovsky calibration constant $c$ for the Vesta family. Still, the conclusion that the two ages should differ by a factor $\sim 2$ appears robust. From the DAWN images, the age of the crater Rheasilvia on Vesta has been estimated at about $1$ Gy \cite{dawn_marchi}, while the underlying crater Veneneia must be older, its age being weakly constrained. Thus both the younger age and the ratio of the ages we have estimated in Section~\ref{sec:ages} are compatible with the hypothesis that the low $e$ subfamily corresponds to Rheasilvia, and the high $e$ subfamily (or at least most of it) corresponds to Veneneia. We are not claiming we have a proof of this identification. Unfortunately, for now there are no data to disentangle the portions of the two collisional families which overlap in the proper elements space. Thus we can compute only with low accuracy the barycenters of the two separate collisional families, and modeling the initial distributions of velocities would be too difficult. However, there are some indications \cite{bus_vesta} that discrimination of the two subfamilies by physical observations may be possible. In conclusion, the current family of Vesta has to be the outcome of at least two, and possibly more, cratering events on Vesta, not counting even older events which would not have left visible remnants in the family region as we see it today. \subsection{Vesta Interlopers and lost Vestoids} \label{sec:lost} \begin{figure}[h!] \figfig{12cm}{vesta_back_albedo_hist}{Histogram of albedo measured by WISE with $S/N>3$: above for the asteroids belonging to family 4; below for the background asteroids with $2.2 <$ proper $a<2.5$ au.
The uneven distribution of ``dark'' asteroids (albedo $<0.1$) is apparent. The asteroids with $0.27 <$ albedo $<0.45$, corresponding to the bulk of the family 4, are present, but as a smaller fraction, in the background population.} \end{figure} Another possible procedure of family analysis is to find interlopers, that is asteroids classified as members of a dynamical family but not belonging to the same collisional family, because of discordant physical properties; see as an example of this procedure Figure 25 and Table 3 of \cite{hungaria}. In the dynamical family of (4) Vesta there are $695$ asteroids with reasonable WISE albedo data (in the sense defined above). We find the following 10 asteroids with albedo $<0.1$: (556), (11056), (12691), (13411), (13109), (17703), (92804), (96672), (247818), (253684); the first is too large ($D\simeq 41$ km) and was already excluded, and the next 3 are larger than $7.5$ km, that is marginally too large for typical Vestoids; in Section~\ref{sec:volume} we had also excluded (1145), which has an intermediate albedo but is also too large ($D\simeq 23$ km). We think these $11$ are reliably identified as interlopers, of which $10$ belong to the C-complex. By scaling to the total number $7\,865$ of dynamical family members, we would expect a total number of interlopers belonging to the C-complex $\simeq 120$. The problem is how to identify the interlopers belonging to the S-complex, which would be expected to be more numerous. For this task the WISE albedo data are not enough, as shown by Figure~\ref{fig:vesta_back_albedo_hist}. The albedos of most family members are in the range between $0.16$ and $0.5$, which overlaps the range expected for the S-complex, but there is no ostensible bimodality in this range.
The background asteroids, with $2.2<a<2.5$ au, for which significant WISE albedos are available, clearly have a dark component, $34\%$ of them with albedo $<0.1$, but the majority have albedos compatible with the S-complex, a large fraction also compatible with V-type. The estimated value of the albedo is derived from an assumed absolute magnitude, which typically has an error of $0.3$ magnitudes (or worse). This propagates to a relative error of $0.3$ in the albedo. Thus the values of albedo for S and V types are mixed up as a result of the measurement errors, both in the infrared and in the visible photometry. The only objects which are clearly identified from the albedo data are the dark C-complex ones, because the main errors in the albedo are relative ones, thus an albedo estimated at $<0.1$ cannot correspond to an S-type, even less to a V-type. In conclusion, by using only the available albedo data there is no way to count the interlopers in the Vesta family belonging to the S-complex; it is also not possible to identify ``lost Vestoids'', originated from Vesta but not included in the dynamical family. \begin{figure}[h!] \figfig{12cm}{vesta_ae_SDSS}{Asteroids complying with the \cite{parker2008} criterion for V-type. Red points: members of family 4; green: members of other families; black: background asteroids. The background asteroids apparently matching the color properties of Vestoids are, among objects with significant SDSS data, at least twice as numerous as the family 4 members with the same colors.} \end{figure} The question arises whether it would be possible to use the SDSS data to solve these two problems. According to \cite{parker2008} the V-type objects should correspond to the region with $a^*>0$ and $i-z<-0.15$ in the plane of these two photometric parameters. However, as is clear from \cite[Figure 3]{parker2008}, these lines are not sharp boundaries, but just probabilistic ones.
Thus this criterion is not suitable to reliably identify either family 4 interlopers or lost Vestoids. On the other hand, the Parker et al. criterion can be used to estimate the V-type population in a statistical sense. To select the asteroids which have a large probability of being V-type we require $a^*-2\,STD(a^*)>0$ and $i-z +2\,STD(i-z)<-0.15$; we find $1\,758$ such asteroids, of which $55$ with $a>2.5$ au; they are plotted on the proper $a,e$ plane in Figure~\ref{fig:vesta_ae_SDSS}. The number of asteroids of V-type beyond the $3/1$ resonance with Jupiter should be very small; anyway, $55$ is an upper bound on the number of false positives for the V-type criterion in that region. Of the V-types with $a<2.5$ au, $504$ are members of the dynamical family 4 and $1\,199$ are not. In conclusion, even taking into account the possible number of false positives, there are at least twice as many V-types in the inner belt outside of the dynamical family as inside it. Conversely, if we define ``non-V type'' by the criterion either $a^*+2\,STD(a^*)<0$ or $i-z -2\,STD(i-z)>-0.15$, we find in the inner belt ($a<2.5$ au) as many as $8\,558$ non-V, out of which only $42$ belong to the dynamical family 4, which means the number of S-type interlopers is too small to be accurately estimated, given the possibility of ``false negatives'' in the V-type test. This gives an answer to another open question: where are the ``lost Vestoids'', remnants of cratering events on Vesta which occurred billions of years ago? The answer is: everywhere, as shown by Figure~\ref{fig:vesta_ae_SDSS}, although much more in the inner belt than in the outer belt, because the $3/1$ barrier deflects most of the Vestoids into planet crossing orbits, from which most end up in the Sun, in impacts on the terrestrial planets, etc. Still there is no portion of the asteroid main belt which cannot be reached, under the combined effect of billions of years of Yarkovsky effect and chaotic diffusion.
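The two $2$-sigma selections defined above can be written down explicitly. This is a minimal sketch of the tests, and the photometric records used in the example are hypothetical:

```python
def vtype_candidate(a_star, std_a_star, i_z, std_i_z):
    """Conservative V-type test after Parker et al.: the 2-sigma interval
    must lie entirely inside the V-type region a* > 0, i-z < -0.15."""
    return (a_star - 2.0 * std_a_star > 0.0) and (i_z + 2.0 * std_i_z < -0.15)

def non_vtype(a_star, std_a_star, i_z, std_i_z):
    """Conservative non-V test: the 2-sigma interval lies entirely
    outside the V-type region in at least one of the two parameters."""
    return (a_star + 2.0 * std_a_star < 0.0) or (i_z - 2.0 * std_i_z > -0.15)

# Hypothetical SDSS records (a*, STD(a*), i-z, STD(i-z)):
print(vtype_candidate(0.12, 0.03, -0.30, 0.05))  # safely V-type
print(vtype_candidate(0.12, 0.08, -0.30, 0.05))  # a* not significant enough
print(non_vtype(-0.15, 0.04, 0.05, 0.06))        # safely non-V
```

Note that an object can fail both tests, which is exactly why the counts of V-types and non-V types are statistical estimates rather than a partition of the catalog.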
We should not even try to find families composed of them, because they are too widely dispersed. The products of all but the last two (possibly three) family-forming cratering events have completely disappeared from the Vesta family. \subsection{The Eunomia Family} \label{sec:eunomia} The number frequency distributions of the family members' proper elements indicate that a multiple-collision interpretation is plausible: the distribution of semimajor axes exhibits a gap around $a=2.66$~au, close to where Eunomia itself is located\footnote{A narrow resonance occurs at about the location of the gap, but it does not appear strong enough to explain it.}. The distribution of family members on all sides of the parent body for all three proper elements, and the fact that the barycenter of the family (not including Eunomia) is very close to (15), are discordant with the anisotropic distribution of velocities expected for a single cratering event. All these pieces of evidence indicate that a single collisional event is not enough to explain the shape of the dynamical family 15. Then the discrepancy in the slopes on the two sides could be interpreted as the presence of two collisional families with different ages. Since the subfamily with proper $a>2.66$ dominates the outer edge of the V-shape, while the inner edge is made only from the rest of the family, we could adopt the younger age as that of the high $a$ subfamily, the older as the age of the low $a$ subfamily. However, the limited range of diameters, starting only from $D>6.7$ km on the outer edge, and a ratio of ages too close to $1$ result in a difference of ages which is poorly constrained. Still, the most likely interpretation is that the Eunomia dynamical family was generated by two cratering events, with roughly opposite crater radiants, such that one of the two collisional families has its barycenter at $a>a(15)$, the other at $a<a(15)$, see Table~\ref{tab:tablebar_crat}.
The WISE albedo distributions of the two subfamilies are practically the same, which helps in excluding more complex interpretations in which one of the two subfamilies has a different parent body. In conclusion, the interpretation we are proposing is similar to that of the Vesta family. \subsection{The missing Ceres family} \label{sec:ceres} In our dynamical classification (1) Ceres does not belong to any family; still, there could be a family originating from Ceres. The escape velocity from Ceres is $v_e\sim 510$ m/s, while the QRL velocity used to form families in zone 3 was $90$ m/s. An ejection velocity $v_0$ just above $v_e$ would result in a velocity at infinity larger than $90$ m/s: $v_0=518$ m/s is enough. Thus every family moderately distant from Ceres, such that the relative velocity needed to change the proper elements is $<v_e$, is a candidate for a family from Ceres. Family 93 is one such candidate\footnote{Other authors have proposed family classifications in which (1) is a member of a family largely overlapping with our family 93.}. By computing the distance $d$ between the proper element set of (1) and all the family 93 members, we find the minimum possible $d=153$ m/s (in terms of the standard metric), attained between (1) and (28911). Although the relationship between $d$ and $v_\infty$ is not a simple one (it depends upon the true anomaly at the impact), $v_\infty$ would be of the same order as $d$, thus corresponding approximately to $v_0=532$ m/s. \begin{figure}[h!] \figfig{11cm}{minerva_albhist}{Histogram of the albedos measured by WISE with $S/N>3$ among the members of the family 93. There is an obvious ``dark'' subgroup with albedo $<0.1$ and a large spread of higher estimated albedos. Most members have intermediate albedos typical of the S-complex.} \end{figure} This is a hypothesis, for which we seek confirmation by using absolute magnitudes and other physical observations, and here comes the problem.
The albedo of (1) is $0.090\pm 0.0033$ according to \cite{Lietal2006}. The surface of Ceres has albedo inhomogeneities, but according to the HST data reported by \cite{Lietal2006} the differences do not exceed $8\%$ of the value. The WISE albedos of the family 93 (again accepting only the 403 data with $S/N>3$) are much brighter than that of Ceres, apart from a small minority: only 37, that is $9\%$, have albedo $<0.1$. (93) has albedo $0.073$ from IRAS, but we see no way to eject a $D \sim 150$ km asteroid in one piece from a crater; also (255) belongs to the dark minority, and is too large to be crater ejecta. No other family member, for which there are good WISE data, has diameter $D \geq 20$ km. Actually, as shown in Figure~\ref{fig:minerva_albhist}, the albedo of Ceres falls at a minimum of the histogram. By using the SDSS data we also get a large majority of family 93 members in the S-complex region. \begin{figure}[h!] \figfig{11cm}{93_wise_aI}{The members of the dynamical family 93 for which significant WISE data are available, plotted in the proper $a, \sin{I}$ plane. Red: albedo $>0.2$; green: albedo between $0.1$ and $0.2$; black: albedo $<0.1$.} \end{figure} We cannot use V-shape diagrams composed with the same method used in Section~\ref{sec:ages}, because the assumption of uniform albedo is completely wrong, as shown by Figure~\ref{fig:minerva_albhist}. As an alternative, to study possible internal structures we use only the 403 objects with good WISE data, and use a color code to distinguish low, high and intermediate albedo. Figure~\ref{fig:93_wise_aI} in the proper $a,\sin{I}$ plane, and the analogous plot in the proper $a,e$ plane, show no concentration of objects with low, or even intermediate, albedo. Thus there appears to be no family originating from Ceres, but only a family of bright and (possibly) intermediate albedo asteroids, related neither to (1), nor to (93), nor to (255).
Thus the family 93 is the only one suitable, for its position in proper elements space, to be a cratering family from Ceres, after removing the two large outliers. However, physical observations (albedo and colors) contradict this origin for an overwhelming majority of family members. Should we accept the idea that the bright/intermediate component of family 93 is the result of a catastrophic fragmentation of some S-complex asteroid, and that the fact of being very near Ceres is just a coincidence? Moreover, why would Ceres not have any family? This could not be explained by assuming that Ceres has been bombarded only by much smaller projectiles than those impacting on (2), (4), (5) and (10): Ceres has a cross section comparable to the sum of those of all these four others. ``How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?''\footnote{Sherlock Holmes to Dr. Watson, in: A.\ C.\ Doyle, \emph{The Sign of the Four}, 1890, page 111.}. Following the advice above, we cannot discard the ``coincidence'' that a family of bright asteroids is near Ceres, but then all the dark family members have to be interlopers. This means we can assign $366$ ``bright and intermediate'' (with albedo $>0.1$ measured by WISE with $S/N>3$) members of family 93 to the Gerantina collisional family, named after the lowest numbered member, (1433) Gerantina, which has albedo $0.191\pm 0.017$, corresponding to $D\simeq 15$ km. The volume of these $366$ is estimated at $49\, 000$ km$^3$, equivalent to a sphere with $D\simeq 45$ km. This can be interpreted as a catastrophic fragmentation of some S-complex asteroid with a diameter $D>50$ km, given that a good fraction of the fragments has disappeared in the $5/2$ resonance. As for the missing family from Ceres, before attempting an explanation let us find out how significant our family classification data are, that is whether our classification could just have missed a Ceres family.
This leads to a question, to which we can give only a low accuracy answer: which is the largest family from Ceres which could have escaped our classification? This question has to be answered in two steps: first, how large could a family resulting from cratering on Ceres be, if superimposed on the family 93? The $37$ dark interlopers out of $403$ with good WISE data can be roughly extrapolated to $168$ out of $1\,833$ members of family 93\footnote{This is an upper bound, since some dark interlopers from the background are expected to be there: in the range of proper semimajor axis $2.66< a <2.86$ au we find that $59\%$ of background asteroids have albedo $<0.1$.}. Second, how large could a Ceres family be, separate from 93 and not detected by our classification procedure? In the low proper $I$ zone 3, the three smallest families in our classification have $93$, $92$ and $75$ members. Although we do not have a formal derivation of this limit, we think that a family with about $100$ members could not be missed. By combining these two answers, families from Ceres cratering with up to about $100$ members could have been missed. The comparison with the family of (10) Hygiea, with $2\,402$ members, and the two subfamilies of Vesta, the smallest with $>2\,538$ members, suggests that the loss in efficiency in the generation of family members in the specific case of Ceres is at least a factor $20$, possibly much more. Thus, as an explanation for the missing Ceres family, the only possibility would be that there is some physical mechanism preventing most fragments which would be observable, currently those with $D>2$ km, either from being created by a cratering event, or from being ejected with $v_0>510$ m/s, or from surviving as asteroids. We are aware of only two possible models.
One model can be found in \cite[section 6.3]{Lietal2006}: ``The lack of a dynamical family of small asteroids associated with Ceres, unlike Vesta's Vestoids, is consistent with an icy crust that would not produce such a family.'' We have some doubts about the definition of an ``icy'' crust with albedo $<0.1$, thus we generalize the model from \cite{Lietal2006} by assuming a comparatively thin crust effectively shielding the mantle volatiles, with whatever composition is necessary to achieve the measured albedo. When Ceres is impacted by a large asteroid, $20<D<50$ km, the crust is pierced and a much deeper crater is excavated in the icy mantle. Thus the efficiency in the generation of family asteroids (with albedo similar to the crust) is decreased by a factor close to the ratio between average crater depth and crust thickness. The material ejected from the mantle forms a family of main belt comets; if they have enough water content, they quickly dissipate by sublimation and splitting. This would be one of the most spectacular events of solar system history; unfortunately, it is such a rare event that we have no significant chance of seeing it\footnote{The exceptional presence of some $10\,000$ comets could perhaps be detected in an extrasolar planetary system, but since we know only of the order of $1\,000$ extrasolar systems, this too is a low probability event.}. Thus a crust thickness of a few km would be enough to explain a loss of efficiency by a factor $>20$, and the possible Ceres family could be too small to be found. A second possible model is that there is a critical value for the ejection velocity $v_0$ beyond which asteroids with $D>2$ km cannot be launched without breaking into pieces. If this critical velocity $v_c$ is $>363$ m/s for V-type asteroids, but is $<510$ m/s for the composition of Ceres ejecta (presumably with much lower material strength), then Vesta can have a family but Ceres cannot.
Of course it is also possible that the number of large ejecta with $v_0>510$ m/s is not zero but very small. Thus even a very large impact on Ceres would generate few observable objects, no matter whether they are asteroids or comets, leading to a family too small to be detected. However, if the crater is deeper than the crust, Ceres itself behaves like a main belt comet for some time, until the crater is ``plugged'' by dirt thick enough to stop sublimation. This would be spectacular too. The fact is, little is known of the composition and geological structure of Ceres. This situation is going to change abruptly in 2015, with the visit by the DAWN spacecraft. What, then, would these two models predict for the DAWN data? With both models, larger impacts would leave only a scar, resulting from the plugging of the mantle portion of the crater (because Ceres does not have large active spots). What such a big scar would look like is difficult to predict: the scars could be shallow if the mantle restores the equilibrium shape, but still be observable as albedo/color variations. If there is a thin crust, then craters should have a low maximum depth and a moderate maximum diameter. At larger diameters only scars would be seen. If the family generation is limited by the maximum ejection velocity, the crust could be thicker: there would anyway be craters and scars, but the craters could be larger and the scars would be left only by the very large impact basins. \section{Binaries and couples} \label{binaries} \begin{table}[h] \centering \caption{Binary (or multiple) systems belonging to some family identified in the present paper.
Confirmed binary systems are listed in bold.} \label{tab:binaries} \medskip \begin{tabular}{rlr} \hline \multicolumn{2}{c}{Binary asteroid} & Family\\ \hline \\ \textbf{(87)} & \textbf{Sylvia} & \textbf{87} \\ \textbf{(90)} & \textbf{Antiope} & \textbf{24} \\ \textbf{(93)} & \textbf{Minerva} & \textbf{93} \\ \textbf{(243)} & \textbf{Ida} & \textbf{158} \\ \textbf{(283)} & \textbf{Emma} & \textbf{283} \\ \textbf{(379)} & \textbf{Huenna} & \textbf{24} \\ \textbf{(1338)} & \textbf{Duponta} & \textbf{1338} \\ (3703) & Volkonskaya & 4 \\ (3782) & Celle & 4 \\ \textbf{(5477)} & \textbf{Holmes} & \textbf{434} \\ (5481) & Kiuchi & 4 \\ (10208) & Germanicus & 883 \\ (11264) & Claudiomaccone & 5 \\ (15268) & Wendelinefroger & 135 \\ \textbf{(17246)} & \textbf{2000 GL74} & \textbf{158} \\ \textbf{(22899)} & \textbf{1999 TO14} & \textbf{158} \\ \textbf{(76818)} & \textbf{2000 RG79} & \textbf{434} \\ \\ \hline \end{tabular} \end{table} Having defined a list of asteroid families, an interesting investigation is to look at the list of currently known binary or multiple asteroid systems, to see whether such systems tend to be frequent among family members. This is because a possible origin of binary systems is related to collisional events. This is true especially for objects above some size limit, for which rotational fission mechanisms related to the YORP effect become less likely. By limiting our attention to main belt asteroids, the list of binary systems currently includes a total of $88$ asteroids\footnote{A list of identified binary systems is maintained by R. Johnston at the URL http://johnstonsarchive.net/astro/asteroidmoons.html}, including $34$ systems which are not definitively confirmed. Among them, $17$ (including $6$ binaries still needing confirmation) are family members according to our results; see Table \ref{tab:binaries}.
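As a trivial numerical check of the counts just quoted (17 family members among the 88 known main-belt binaries; removing the 34 unconfirmed systems and the 6 unconfirmed family members leaves 11 of 54):

```python
# Family-membership fractions among known main-belt binary systems,
# using the counts quoted in the text.
binaries_total = 88           # all known main-belt binaries
binaries_in_families = 17     # of which family members (6 unconfirmed)
confirmed_total = 88 - 34     # definitively confirmed binaries: 54
confirmed_in_families = 17 - 6  # confirmed family members: 11

print(round(binaries_in_families / binaries_total, 2))    # prints 0.19
print(round(confirmed_in_families / confirmed_total, 2))  # prints 0.2
```

Both samples give a relative abundance of the order of 20\%, as stated in the text.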
The data set of definitively confirmed binaries is still fairly limited, but it is interesting to note that $17$ out of $88$, or $11$ out of $54$ objects if we consider only the certain binary systems, turn out to be family members. The relative abundance is of the order of $20$\%, so we cannot conclude that binary asteroids are particularly frequent among families. Looking more in detail at the data shown in Table \ref{tab:binaries}, we find that binaries tend to be more abundant in the Koronis, Hungaria and Themis families, while the situation is not so clear in the case of Vesta, due to the still uncertain binary nature of some objects. We also note that binary asteroids tend to be found among the biggest members of some families, including Sylvia, Minerva and Duponta. \subsection{Couples} \label{sec:couples} One step of our family classification procedure is the computation of the distance in proper elements space between each couple of asteroids; the result is stored only if the distance is less than some control $d_{min}$. If the value of $d_{min}$ is chosen to be much smaller than the QRL values used in the family classification, a new phenomenon appears, namely the \emph{very close couples}, with differences in proper elements corresponding to a few m/s. A hypothesis for the interpretation of asteroid couples very close in proper elements has been proposed long ago, see \cite[p. 166-167]{trojan}. The idea, which was proposed by the late P.\ Farinella, is the following: the pairs could be obtained after \emph{an intermediate stage as a binary}, terminated by a low velocity escape through the so-called fuzzy boundary, generated by the heteroclinic tangle at the collinear Lagrangian points. \begin{table}[h!] \footnotesize \centering \caption{Very close couples among the numbered asteroids: the distance $d$ corresponds to $<0.5$ m/s.
} \medskip \begin{tabular}{rrrrrrrr} \hline name & H &name&H& $d$ & $\delta a_p/a_p$ & $\delta e_p$ & $\delta \sin I_p$\\ \hline 92652 & 15.11 & 194083 & 16.62 & 0.1557213 & 0.0000059 & -0.0000011 & 0.0000011\\ 27265 & 14.72 & 306069 & 16.75 & 0.2272011 & 0.0000076 & 0.0000021 & -0.0000029\\ 88259 & 14.86 & 337181 & 16.99 & 0.2622311 & -0.0000091 & 0.0000019 & -0.0000011\\ 180906 & 17.41 & 217266 & 17.44 & 0.2649047 & 0.0000069 & -0.0000049 & -0.0000013\\ 60677 & 15.68 & 142131 & 16.05 & 0.3064294 & 0.0000090 & 0.0000021 & 0.0000052\\ 165389 & 16.31 & 282206 & 16.85 & 0.3286134 & 0.0000019 & 0.0000080 & 0.0000022\\ 188754 & 16.29 & 188782 & 16.90 & 0.3384864 & -0.0000059 & -0.0000019 & 0.0000087\\ 21436 & 15.05 & 334916 & 18.14 & 0.3483815 & -0.0000041 & -0.0000081 & 0.0000016\\ 145516 & 15.39 & 146704 & 15.57 & 0.4131796 & -0.0000142 & -0.0000062 & 0.0000046\\ 76111 & 14.55 & 354652 & 16.55 & 0.4137084 & 0.0000174 & 0.0000033 & -0.0000011\\ 67620 & 15.35 & 335688 & 16.91 & 0.4225815 & 0.0000017 & 0.0000027 & 0.0000105\\ 52009 & 15.15 & 326187 & 17.22 & 0.4459464 & 0.0000079 & 0.0000115 & -0.0000020\\ 39991 & 14.15 & 340225 & 17.90 & 0.4495984 & 0.0000003 & -0.0000101 & -0.0000061\\ 64165 & 15.14 & 79035 & 14.54 & 0.4501940 & -0.0000018 & 0.0000010 & 0.0000127\\ 57202 & 15.34 & 276353 & 17.45 & 0.4543033 & -0.0000027 & 0.0000103 & 0.0000052\\ 180255 & 16.85 & 209570 & 17.08 & 0.4686305 & 0.0000185 & -0.0000026 & 0.0000003\\ 39991 & 14.15 & 349730 & 17.35 & 0.4686824 & 0.0000182 & -0.0000020 & -0.0000043\\ 56285 & 14.98 & 273138 & 16.72 & 0.4958970 & 0.0000199 & -0.0000061 & 0.0000025\\ \hline \end{tabular}\label{tab:couples} \end{table} The procedure to actually prove that a given couple is indeed the product of the split of a binary is complex, typically involving a sequence of filtering steps, followed by numerical integrations (with a differential Yarkovsky effect, given the differences in size) to find an epoch for the split and confirm that the relative 
velocity was indeed very small. Many authors have worked on this, including ourselves \cite{hungaria} and \cite{Delloroetal12}, who analyzed the efficiency of non-disruptive collisional events in leading to binary ``evaporation''. Our goal in this paper is not to confirm a large number of split couples, but just to offer the data for confirmation by other authors, in the form of a very large list of couples with very similar proper elements. Currently we are offering a dataset of $14\,627$ couples with distance $<10$ m/s, available from AstDyS\footnote{http://hamilton.dm.unipi.it/~astdys2/propsynth/numb.near}. A small sample of these couples is given in Table~\ref{tab:couples}, for $d<0.5$ m/s. To assess the probability of finding real couples in this large sample, it is enough to draw a histogram of the distance $d$. It shows the superposition of two components, one growing quadratically with $d$ and one growing linearly. Since the incremental growth of volume is quadratic in $d$, the quadratic component should correspond to the fraction of couples which are random, and the linear one to couples which result from a very different phenomenon, such as a split followed by drift due to differential Yarkovsky. It turns out that from the histogram it is possible to compute that, out of $14\,627$ couples, about half belong to the ``random'' sub-population and half to the linear growth component. \section{Conclusions and Future Work} \label{sec:conclusion} By performing an asteroid family classification with a much enlarged dataset, the results are not just ``more families'': there are interesting qualitative changes. These are due to the large number statistics, but also to the larger fraction of smaller objects contained among recently numbered asteroids and to the accuracy allowing us to see many structures inside the families\footnote{Note that these three are reasons to discard the usage of physical observation data as primary parameters for classification.
Consistent catalogs of physical observations are available for fewer asteroids; they are also observationally biased against smaller asteroids, which are either absent or present only with very low accuracy data.}. Another remarkable change is that we intend to keep this classification up to date, by the (partially automated) procedures we have introduced. In this section we would like to summarize some of these changes, and also to identify the research efforts which can give the most important contribution to this field. Note that we do not necessarily intend to do all this research ourselves; given our completely open data policy, this is not necessary. \subsection{How to use HCM} The size increase of the dataset of proper elements has had a negative effect on the perception of the HCM method in the scientific community, because of the chaining effect which tends to join even obviously distinct families. Thus some authors have either reduced the dataset by requiring that some physical observations be available, or used QRL values variable for each family, or even reverted to visual methods. We believe that this paper shows that there is no need to abandon the HCM method, provided a more complex multistep procedure is adopted. In short, our procedure amounts to using, for the larger members belonging to the core family, a truncation QRL different from the one used for the smaller members. This is justified from the statistical point of view, because the smaller asteroids have a larger number density. We are convinced that our method is effective in adding many smaller asteroids to the core families, without expanding the families with larger members. As a result we have a large number of families with very well defined V-shapes, thus with a good possibility of age estimation. We have also succeeded in identifying many families formed only by small asteroids, or at most with very few large ones, as expected for cratering.
The portion of the procedure on which we intend to work more is step 5, merging, which is still quite subjective (indeed, even visual inspection plays a significant role). This procedure is cumbersome and not automated at all, and may become even more difficult in the future with the expected large increase of the dataset size; thus further improvements are needed. \subsection{Stability of the classification} We have established a procedure to maintain our family classification up to date, by adding the newly discovered asteroids as soon as their orbital elements are stable enough, having been determined with many observations. First the proper elements catalogs are updated, then we attribute some of the new entries as new members to the already known families. This last step 6 (see Section~\ref{sec:method}) is performed in a fully automated way, just by running a single computer code (with CPU times of the order of 1 hour), which is actually the same code used for steps 2 and 4. We have already done this for the proper elements catalog update of April 2013, and we continue with periodic updates. The results are available, as soon as computed, on the AstDyS information system. The classification is meant to be methodologically stable but frequently updated in the dataset. In this way, the users of our classification can download the current version. They can find a full description of the methods in this paper, even if specific results (e.g., the number and list of members of some family) should not be taken from the paper but from the updated web site. Some changes in the classification, such as mergers, are not automatic; thus we are committed to applying them periodically, until we are able to automate this too. \subsection{Magnitudes} As shown in Table~\ref{tab:infocount}, the absolute magnitudes computed from the incidental photometry could contribute a significant fraction of the information used in family classification and analysis.
However, the accuracy is poor, and in most cases not estimated; better data are available for smaller samples, not enough for statistical purposes such as the analysis of Section~\ref{sec:absol_mag}. This does not affect the classification, but has a very negative impact on the attempt to compute ages, size distributions, and volumes. It also affects the accuracy of the albedos, and has serious implications for the population models. The question is what should be done to improve the situation. One possibility would be a statistically rigorous debiasing and weighting of the photometry collected with the astrometry. The problem is, the errors in the photometry, in particular the ones due to differences in filters and star/asteroid colors, are too complex for a simple statistical model, given that we have no access to complete information on the photometric reduction process. Thus we think that the only reliable solution is to have an optimally designed photometric survey, with state of the art data processing, including the new models of the phase effect \cite{muinonen}. This requires a large number (of the order of $100$) of photometric measurements per asteroid per opposition, with a wide filter, and with enough S/N for most numbered main belt asteroids. These are tough requirements, and a dedicated survey does not appear to be a realistic proposal. However, it turns out that these requirements are the same as those needed to collect enough astrometry for a wide NEO survey, aiming at discovering asteroids/comets on the occasion of close approaches to the Earth. Thus a ``magnitude survey'' could be a byproduct of a NEO discovery survey, provided the use of many different filters is avoided. \subsection{Yarkovsky effect and ages} One of our main results is that for most families large enough for a statistically significant analysis of the shape, the V-shape is clearly visible.
We have developed our own method to compute ages, which we believe is better than those used previously (including by ourselves, \cite{hungaria}) because it is more objective and takes into account the error, which is substantial, introduced in the estimate of the diameter by the common albedo hypothesis. We believe our new method tackles in an appropriate way all the difficulties of the age estimation discussed in Section~\ref{sec:ages}, but for one: the calibration. The difficulty in estimating the Yarkovsky calibration, due to the need to extrapolate from NEAs with measured $da/dt$ to main belt asteroids, is in most cases the main limitation to the accuracy of the age estimation. Thus the research effort which could most contribute to the improvement of age estimation (for a large set of families) would be either the direct measurement of the Yarkovsky effect for some family members (with known obliquity), or the measurement of the most important unknown quantities affecting the scaling from NEAs, such as thermal conductivity and/or density. These are ambitious goals, but they may become feasible by using advanced observation techniques, in particular from space, e.g., GAIA astrometry, Kuiper infrared observations, and radar from the ground. \subsection{Use of physical observations} In this paper we have made the choice of using the dynamical data first, to build the family classification, then using all the available physical data to check and refine the dynamical families. This is best illustrated by the example of the Hertha/Polana/Burdett complex dynamical family, in which the identification of the two collisional families Polana and Burdett can be obtained only with the physical data. This leaves us with more understanding, but with an incomplete classification, because for the majority of the members of the Hertha dynamical family there are no physical data.
If we had a much larger database of accurate albedos and/or colors, we would be very glad to use the separation by physical properties as part of the primary family classification. The same argument could apply to the separation of the two collisional families of Vesta. In other words, the use of dynamical parameters first is not an ideological choice, but is dictated by the availability and accuracy of the data. We would very much welcome larger catalogs of physical data, including much smaller asteroids and with improved S/N for those already included. However, we are aware that this would require larger aperture telescopes. \subsection{Cratering vs. Fragmentation} \label{sec:crat_frag} Our procedure, being very efficient in the inclusion of small asteroids in the haloes of core families, has allowed us to identify new cratering families and very large increases of membership for the already known ones. Because of the observational selection effect operating against cratering families, which contain predominantly small asteroids (e.g., $D<5$ km), in the past the cratering events have been less studied than the catastrophic fragmentations. On the contrary, elementary logic suggests that there should be more cratering than fragmentation families, because the target of a cratering event remains available for successive impacts, with the same cross section. The number of recognized cratering families is reduced because small asteroids from cratering events have a shorter lifetime; thus the observable cratering families have a limited age. As the observational bias against small asteroids is progressively mitigated, we expect that the results on cratering will become more and more important. This argument also implies that multiple cratering collisional families should be the rule rather than the exception. They necessarily intersect because of the common origin, but do not overlap completely because of the different crater radiants.
Although we have studied only some of the cratering families (listed in Section~\ref{sec:craters}), all the examples we have analyzed, namely the families of (4), (20) and (15), show a complex internal structure. For the catastrophic fragmentation families, the two examples we have analyzed, (158) and (847), appear to contain significant substructures (named after Karin and Jitka). The general argument could be used that fragments are smaller, thus should have collisional lifetimes shorter than that of the parent body of the original fragmentation family. Thus we should expect that, as fragmentation families become larger in membership and include smaller asteroids, substructures could emerge in most of the families. A full fledged collisional model for the ejection velocities and rotations of fragments from both cratering and fragmentation needs to be developed, possibly along the lines of the qualitative model of Section~\ref{sec:collmodel}. This will contribute both to the age determination and to the understanding of the family formation process. As for future research, there is the need to search for substructures, and to classify families as cratering/fragmentation, for all the big families of Table~\ref{tab:bigfam} and many of the medium families of Table~\ref{tab:mediumfam}. \subsection{Comparison with space mission data} When on-site data from a spacecraft, such as DAWN, become available for some big asteroid, a family classification should match the evidence, in particular from craters on the surface. For Vesta the main problem is the relationship between the dynamical family 4 and the two main craters Rheasilvia and Veneneia. The solution we are suggesting is that the two subfamilies found from the internal structure of family 4 correspond to the two main craters. We have found no contradiction with this hypothesis in the data we can extract from the family, including ages.
However, to prove this identification of the source craters requires more work: more accurate age estimates for both subfamilies and craters, and more sophisticated models of how the ejecta from a large crater are partitioned between reaccumulated ejecta, fragments in independent orbits but too small to be detected, and detected family members. Because the problem of the missing Ceres family is difficult, due to apparently discordant data, we have tried to build a consistent model, with an interpretation of the dynamical family 93 (without the namesake) and two possible physical explanations of the inefficiency of Ceres in generating families. We are not claiming these are the only possible explanations, but they appear plausible. The data from DAWN in 2015 should sharply decrease the possibilities and should lead to a well constrained solution. \section*{Acknowledgments} The authors have been supported for this research by the Ministry of Science and Technological Development of Serbia, under the project 176011 (Z.K. and B.N.). We thank S. Marchi, M. Micheli and S. Bus for discussions which have contributed to some of the ideas of this paper. \section*{References}
\section{Introduction} As the most abundant element in the Universe, hydrogen plays a vital role in the processes of stellar and galaxy evolution. With its easily-traceable 21-cm hyperfine transition line, the neutral hydrogen (HI) component in galaxies provides a fundamental tool to unveil the phase transitions in the interstellar medium (ISM), which are crucial for star formation activities, galaxy kinematics, as well as the large-scale cosmic structures (e.g., see \citealt{Morganti2018} and references therein). Since the strength of an HI absorption feature against a radio continuum background depends only on the column density of the absorber, as well as the intrinsic properties of the background source, and is largely distance-independent (e.g., see \citealt{Carilli1998}, \citealt{Chengalur2000}, \citealt{Kanekar2003}, \citealt{Curran2006}, \citealt{Srianand2008}, \citealt{Darling2011}, \citealt{Allison2012}, \citealt{Morganti2015}, \citealt{Wu2015}, \citealt{Allison2016}, \citealt{Maccagni2017} and Song et al. 2021, in preparation), HI absorbers, in contrast with flux-limited and telescope sensitivity-restricted emission line observations, provide a chance to uncover the HI content of the high redshift universe (e.g., see \citealt{Kanekar2004}, \citealt{Morganti2015}, \citealt{Allison2016}, \citealt{Curran2019}, and references therein), with samples of $z>3$ absorbers already detected (\citealt{Uson1991}, \citealt{Kanekar2007}). Also, such HI absorption lines can bring insights into the continuum sources themselves, usually active galactic nuclei (e.g., see \citealt{Maccagni2017}, \citealt{Morganti2018} and references therein), thus revealing possible interactions between star formation and central supermassive black hole activities.
However, only chance alignments between radio-loud active galactic nuclei (AGNs) and foreground or associated neutral hydrogen gas can give rise to HI absorption lines \citep{Morganti2018}, and a sufficiently high column density or large mass of absorbing HI is required for the production of prominent absorption lines \citep{Darling2011}, because of the low Einstein coefficient of the HI transition. Thus, theoretically speaking, due to the occasional nature of HI absorption lines, extensive observations along numerous lines of sight are required in order to accumulate a larger sample of HI absorbing systems \citep{Darling2011}. A more practical way to search for HI absorption features is to perform radio observations of damped Lyman-$\alpha$ (DLA) systems, which feature high HI column densities. However, since the comoving density of low-redshift ($z \leqslant$ 0.1) DLA systems is relatively low \citep{Zwaan2005}, such efforts have yielded limited results (e.g., see \citealt{Kanekar2009}). On the other hand, the absorption line search from AT20G compact radio galaxies using the Australia Telescope Compact Array (ATCA, \citealt{Allison2012}), the 21 cm Spectral Line Observations of Neutral Gas with the EVLA (21-SPONGE) survey \citep{Murray2015}, the Westerbork Synthesis Radio Telescope (WSRT) radio galaxy surveys (\citealt{Gereb2014}, \citealt{Gereb2015}, \citealt{Maccagni2017}), as well as searches by the Australian SKA Pathfinder (ASKAP, \citealt{Glowacki2019}, \citealt{Sadler2020}), all targeting known quasars with radio interferometers, have identified dozens of extragalactic HI absorption lines. Similar attempts have been made with the Millennium Arecibo 21 cm Absorption-Line Survey (\citealt{Heiles2003a}, \citealt{Heiles2003b}), although using a single-dish telescope.
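The column density requirement mentioned above can be made quantitative with the standard 21-cm absorption relation, $N_{\rm HI} = 1.823\times10^{18}\,(T_{\rm s}/f)\int\tau\,dv$ (with $v$ in km s$^{-1}$); the spin temperature, covering factor and line parameters in this sketch are illustrative assumptions:

```python
# Standard 21-cm absorption relation (velocity-integrated optical
# depth to HI column density):
#   N_HI [cm^-2] = 1.823e18 * (T_spin / f) * Int tau(v) dv  [v in km/s]
# where T_spin is the spin temperature and f the covering factor.

def hi_column_density(integrated_tau_kms, t_spin_K=100.0, covering_factor=1.0):
    """HI column density (cm^-2) from velocity-integrated optical depth."""
    return 1.823e18 * (t_spin_K / covering_factor) * integrated_tau_kms

# Illustrative assumption: a weak line, tau ~ 0.05 over ~50 km/s,
# with an assumed T_spin = 100 K and unit covering factor:
print(f"{hi_column_density(0.05 * 50.0):.2e}")  # prints 4.56e+20
```

Even a weak absorption line thus implies a column density of several $10^{20}$ cm$^{-2}$, which illustrates why only sufficiently dense absorbers produce prominent features.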
Such targeted observations are mainly focused on extragalactic radio sources with sufficiently high fluxes, and thus can only lead to continuum flux-biased HI absorber samples, with galaxies exhibiting weak radio emission but high HI column densities, which can also give rise to noticeable absorption features, largely missing (e.g., see \citealt{Sadler2007}). ``Blind'' searches through HI surveys performed by large aperture single-dish telescopes provide chances to uncover more HI absorption systems. Although such surveys usually utilise the non-tracking, drift scan observing strategy, with the integration time for each individual source limited, the collecting areas of the participating telescopes can still achieve considerable sensitivities. \cite{Allison2014} identified 4 HI absorbers, including one previously unknown source, with the archival data of the HI Parkes All-Sky Survey (HIPASS, see \citealt{Barnes2001}), while \cite{Darling2011}, \cite{Wu2015}, as well as Song et al. (2021, in preparation) have made attempts to perform ``blind'' searches for absorption features using part of the Arecibo Legacy Fast Arecibo L-Band Feed Array (ALFA) Survey (ALFALFA, see \citealt{Giovanelli2005}, \citealt{Haynes2018}) data, with ten sources identified in total, including 3 sources that have remained undetected by other instruments so far, namely UGC 00613, CGCG 049-033 and PGC 070403, thus proving the feasibility of searching for new HI absorbers with massive blind sky surveys. Compared with radio interferometers, large single-dish telescopes such as Arecibo can provide better sensitivities, and usually higher spectral resolution, which are all crucial to reveal the characteristics of extragalactic HI lines.
And although the spatial resolution of such instruments is quite limited compared with interferometers, the chance of having two or more HI absorbing systems with similar redshifts lying within the beam width is quite low, thus making confusion in source identification unlikely. However, it should be noted that due to the temporal variations in spectral baseline commonly seen during drift scans, follow-up observations are needed for a reliable characterisation of the newly identified absorbers, especially the weak ones. Also, interferometric mappings are usually required to discern possible fine structures within each absorbing system. The newly commissioned Five-hundred-meter Aperture Spherical radio Telescope (FAST) \citep{Nan2006, Nan2011, Jiang2019} is the largest filled-aperture single-dish radio antenna in the world, with its sensitivity and observable sky coverage both surpassing those of the Arecibo dish. Thus, it is natural to expect better HI absorption observations to be obtained with FAST. Compared with the small number of HI absorber discoveries made with existing blind surveys by \cite{Wu2015}, Song et al. (2021, in preparation), and \cite{Allison2014}, \cite{Wu2015} predicted that the number of absorbing systems detected by FAST should reach an order of magnitude of at least $10^2$ with the upcoming Commensal Radio Astronomy FasT Survey (CRAFTS, see \citealt{Li2018}), and \cite{Yu2017} gave an expectation of over 1,000 extragalactic HI absorption line detections through blind searches in FAST drift scan data. This means that the number of known HI absorbers could be increased by 10 times with FAST, considering that currently only $\geqslant 100$ such systems with redshift $z<1$ exist \citep{Chowdhury2020}.
The FAST observations will also complement the ongoing HI absorption surveys performed by the next generation arrays, including the MeerKAT Absorption Line Survey (MALS, see \citealt{Gupta2016}), the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY, see \citealt{Koribalski2020}), the First Large Absorption Survey in HI (FLASH) by ASKAP (\citealt{Allison2020}), the Search for HI Absorption with AperTIF (SHARP) survey (\citealt{Adams2018}), as well as the uGMRT Absorption Line Survey (\citealt{Gupta2021}), with FAST providing a more complete Northern Sky coverage, better spectral resolution than all these surveys, and a similar noise level obtained within much shorter integration times. In this paper, we report extragalactic HI absorption line observations conducted with FAST on five galaxies identified by \cite{Wu2015} and Song et al. (2021, in preparation), as a pilot study for massive HI absorption line observations by FAST in the near future. The content of this paper is organised as follows. In Section \ref{sec:2}, we introduce the galaxy samples, the FAST observation settings, as well as the data reduction process in detail. The related results are presented and summarised in Section \ref{sec:3}. In Section \ref{sec:4} we give discussions, and conclusions are drawn in Section \ref{sec:5}. Here, we adopt the $\Lambda$CDM cosmology, assuming the cosmological parameters $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M = 0.3$, and $\Omega_{\Lambda} = 0.7$. In accordance with the ALFALFA survey \citep{Haynes2018}, the optical definition of line velocity ($\delta \lambda/\lambda$) rather than the radio one ($\delta \nu/\nu$) is applied throughout our data reduction process. \section{Sample selection and FAST observations}\label{sec:2} \subsection{Sample selection} As a pilot study, we tested various observing modes and adopted different backend set-ups to observe the HI spectra during the FAST commissioning phase.
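For reference, the optical ($\delta\lambda/\lambda$) and radio ($\delta\nu/\nu$) velocity definitions adopted above can be sketched as follows; the example frequency is an arbitrary value near the observed bands, used only for illustration:

```python
# Optical vs radio velocity conventions for a line with rest
# frequency f0 observed at frequency f:
#   optical: v = c * (f0 - f) / f    (i.e. c * dlambda / lambda)
#   radio:   v = c * (f0 - f) / f0   (i.e. c * dnu / nu)
C_KMS = 299792.458      # speed of light, km/s
F0_HI = 1420.405751768  # HI rest frequency, MHz

def v_optical(f_mhz):
    return C_KMS * (F0_HI - f_mhz) / f_mhz

def v_radio(f_mhz):
    return C_KMS * (F0_HI - f_mhz) / F0_HI

# At frequencies near our observed bands the two definitions already
# differ by several hundred km/s, so the choice matters:
f = 1357.0  # MHz, illustrative
print(round(v_optical(f)), round(v_radio(f)))
```

This is why mixing the two conventions when comparing catalogues would introduce systematic velocity offsets far larger than the channel width.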
The HI absorption systems are particularly suitable for such a purpose, as they contain both continuum sources and spectral lines with known fluxes, thus providing good opportunities for checking the telescope pointing accuracy and testing our calibration methods. We selected 5 absorbing systems first identified from 40\% of the ALFALFA data by \cite{Wu2015}, including UGC 00613, ASK 378291.0, CGCG 049-033, J1534+2513, and PGC 070403. Among them, ASK 378291.0 and J1534+2513 were also reported by the WSRT survey for radio galaxies described by \cite{Maccagni2017}, with J1534+2513 and PGC 070403 already marked as possible absorptions by \cite{Haynes2011}. For each system in our sample, the continuum background is provided by a previously-detected radio source, while the absorbing HI gas lies within the source itself. The basic information on these sources is listed in Table \ref{tab1}. \begin{table*} \centering \renewcommand\arraystretch{1.5} \caption{Basic characteristics of the 5 targets observed by FAST.} \label{tab1} \begin{tabular}{llccccc} \hline Source name & AGC no.$^{\left(a\right)}$ & RA (J2000)$^{\left(b\right)}$ & Dec (J2000)$^{\left(b\right)}$ & $cz$ (km s$^{-1}$)$^{\left(c\right)}$ & $S_{1.4{\rm GHz, NVSS}}$ (mJy)$^{\left(d\right)}$ & $S_{1.4{\rm GHz, FIRST}}$ (mJy/beam)$^{\left(e\right)}$\\ \hline UGC 00613 & 613 & 00h 59m 24.42s & +27d 03m 32.6s & $13770.07 \pm 23.08$ & $112.6 \pm 4.0$ & - \\ ASK 378291.0 & 712921 & 10h 25m 44.23s & +10d 22m 30.5s & $13732.00 \pm 26.98$ & $76.6 \pm 2.3$ & $73.85 \pm 0.145$ \\ CGCG 049-033 & 250239 & 15h 11m 31.38s & +07d 15m 07.1s & $13383.94 \pm 35.98$ & $108.8 \pm 3.8$ & $79.50 \pm 0.151$ \\ J1534+2513 & 727172 & 15h 34m 37.62s & +25d 13m 11.4s & $10180.65 \pm 2.698$ & $50.1\pm 1.6$ & $44.58 \pm 0.146$ \\ PGC 070403 & 331756 & 23h 04m 28.24s & +27d 21m 26.5s & $7520.894 \pm 118.4$ &
$116.6\pm 3.5$ & - \\ \hline \end{tabular} $^{\left(a\right)}$ Serial number in the Arecibo General Catalogue (AGC), see \cite{Haynes2018}. $^{\left(b\right)}$ Equatorial coordinates of the corresponding optical counterparts retrieved from the NASA/IPAC Extragalactic Database (NED). $^{\left(c\right)}$ Heliocentric redshift data retrieved from NED. $^{\left(d\right)}$ 1.4 GHz integrated flux data retrieved from the NRAO Very Large Array (VLA) Sky Survey (NVSS) catalogue \citep{Condon1998}. $^{\left(e\right)}$ 1.4 GHz peak flux data retrieved from the VLA Faint Images of the Sky at Twenty-Centimeters (FIRST) survey \citep{Becker1995}. (For comparison only, in order to keep consistency with \citealt{Wu2015} and Song et al. 2021, in preparation; not used in data reduction.) \end{table*} \subsection{Observation set-ups} Four of the five sources were observed with the tracking mode during the commissioning phase of the FAST telescope. Two of these, ASK 378291.0 and UGC 00613, were observed on October 29 and November 5 of 2017, respectively, with the wide band receiver covering the $270 - 1620$ MHz frequency range. Since the spectral backend for FAST was still in development at that time, an off-the-shelf N9020A MXA spectrum analyser produced by Keysight Technologies, Inc. was adopted as the primary spectrometer, with an integration time of $\sim 6$ s for each spectrum, and 1001 frequency channels over a 6 MHz band, thus achieving a spectral resolution of $\sim 1.2$ km s$^{-1}$, a significant improvement over ALFALFA's 5.3 km s$^{-1}$ \citep{Giovanelli2005} or the WSRT survey's 16 km s$^{-1}$ \citep{Maccagni2017}. Data from each source were recorded for 1200 s, with signals from only one polarisation being recorded. According to the optical redshifts of the sources, the frequency coverage of the spectrum analyser was chosen as $1354.0 - 1360.0$ MHz for ASK 378291.0, and $1354.3 - 1360.3$ MHz for UGC 00613.
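The quoted velocity resolutions follow from simple arithmetic on the backend parameters; a minimal sketch (the reference frequencies below are assumptions, so the results agree with the quoted $\sim 1.2$ and $\sim 1.5$ km s$^{-1}$ only approximately):

```python
# Velocity width of one spectral channel at a given observed frequency.
C_KMS = 299792.458  # speed of light, km/s

def velocity_resolution_kms(band_mhz, n_channels, center_freq_mhz):
    """Velocity resolution (km/s) of one channel at the observed frequency."""
    channel_width_mhz = band_mhz / n_channels
    return C_KMS * channel_width_mhz / center_freq_mhz

# Keysight spectrum analyser: 1001 channels over a 6 MHz band near 1357 MHz
print(round(velocity_resolution_kms(6.0, 1001, 1357.0), 1))     # prints 1.3
# 19-beam backend: 65536 channels over the 1000-1500 MHz band, near 1400 MHz
print(round(velocity_resolution_kms(500.0, 65536, 1400.0), 1))  # prints 1.6
```

Both backends thus resolve the narrow absorption components far better than the 5.3 km s$^{-1}$ channels of ALFALFA.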
\begin{figure} \includegraphics[width=\columnwidth]{ASK_sample.eps}\\ \includegraphics[width=\columnwidth]{ASK_sample_calibrated.eps} \caption{Top: The original, unprocessed on-source (blue) and off-source (green) data sample of ASK 378291.0. Bottom: Noise diode-calibrated on-source (blue) and off-source (green) data of ASK 378291.0. It can be seen that the flux levels and standing wave behaviours of the two spectra differ from each other significantly, even for the calibrated data. Besides, the off-source data exhibits a higher background level, which is possibly due to temporal instrumental baseline fluctuations, as well as broad-band RFI contamination. The red dotted line marks the frequency of the desired absorption feature. Observed frequencies shown in this figure are not heliocentric-corrected.} \label{fig1} \end{figure} With the 19-beam L-band receiver covering the band of $1050 - 1450$ MHz \citep{Smith2017} and its spectral backend inaugurated in the summer of 2018, we were able to observe ASK 378291.0, CGCG 049-033, J1534+2513, and PGC 070403 with this new set of instruments. The first three of these four sources were observed using the tracking mode, with data from the central beam (Beam 01) only being recorded. The 19-beam spectral backend can take data from both polarisations with 65,536 spectral channels covering the $1000-1500$ MHz band (equivalent to $\sim 1.5$ km s$^{-1}$ spectral resolution), and the typical integration time for each spectrum is $\sim 1$ s \citep{Jiang2020}. ASK 378291.0 was observed for 382.5 seconds on September 2, 2018, while data for CGCG 049-033 and J1534+2513 were recorded for 1200 s each on October 31, 2018. An extra round of tracking observation for CGCG 049-033 was performed on December 28, 2018, also lasting 1200 s. The last source, PGC 070403, with a relatively weak absorption feature as reported by \cite{Wu2015} and Song et al.
(2021, in preparation), was observed as a by-product of the drift scan conducted on September 14, 2020. One of the aims of this observation, during which the footprint of Beam 01 passed directly over PGC 070403 at the transit time of this target, was to verify the feasibility of finding extragalactic HI absorbing systems with the upcoming CRAFTS survey. The details of each observing session are listed in Table \ref{setup}. \begin{table*} \centering \renewcommand\arraystretch{1.5} \caption{FAST observation setups for the 5 targets.} \label{setup} \begin{tabular}{llcccclcc} \hline Source name & Obs. Mode & Obs. Session & Receiver$^{\left(a\right)}$ & Backend$^{\left(b\right)}$ & Band Coverage & Channel No. & Obs. Date & Duration \\ & & & & & (MHz) & & & (s) \\\hline UGC 00613 & Tracking & 1 & W & A & $1354.3 - 1360.3$ & 1,001 & 2017 Nov. 05 & 1200\\ \hline ASK 378291.0 & Tracking & 1 & W & A & $1354 - 1360$ & 1,001 & 2017 Oct. 29 & 1200\\ & Tracking & 2 & 19 & S & $1000-1500$ & 65,536 & 2018 Sep. 02 & 382.5\\ \hline CGCG 049-033 & Tracking & 1 & 19 & S & $1000-1500$ & 65,536 & 2018 Oct. 31 & 1200\\ & Tracking & 2 & 19 & S & $1000-1500$ & 65,536 & 2018 Dec. 28 & 1200\\ \hline J1534+2513 & Tracking & 1 & 19 & S & $1000-1500$ & 65,536 & 2018 Oct. 31 & 1200\\ \hline PGC 070403 & Drifting & 1 & 19 & S & $1000-1500$ & 65,536 & 2020 Sep. 14 & $\sim 12$\\ \hline \hline \end{tabular} $^{\left(a\right)}$ ``W'' for the wide band receiver; ``19'' for the 19-beam receiver. $^{\left(b\right)}$ ``A'' for Keysight spectrum analyser; ``S'' for spectral line backend. \end{table*} In order to reduce the effects of baseline fluctuations, we chose an ``off-source'' position in the sky for each source observed with the tracking mode, located several arcminutes away from the primary target, as our reference for background. However, it should be noted that as seen in Fig.
\ref{fig1}, due to phase-shifting behaviours of standing waves and instabilities of instruments commonly seen during the FAST commissioning phase, as well as time-dependent broad-band radio frequency interference (RFI), the continuum flux level obtained by comparing on- and off-source positions fluctuates significantly between different observing sessions, even for the same source. Cases exhibiting higher off-source baseline flux than on-source spectra, as well as curved bandpasses, also exist, which makes a direct measurement of the continuum level by FAST unreliable. Thus, in the following data reduction process, we mainly extract the background ``baseline'' by directly fitting the background continuum with on-source data, rather than utilising the off-source observations. The details of our data reduction process are described in the following subsection. \subsection{Flux calibration and baseline subtraction} The 19-beam receiver of FAST is equipped with a built-in noise diode for signal calibration. As illustrated in \cite{Jiang2020}, the diode can be operated in two modes, with characteristic noise temperatures $T_{noise}$ of $\sim 1.2$ K and $\sim 12$ K, respectively. For the ASK 378291.0, CGCG 049-033, J1534+2513, and PGC 070403 observations, we adopted the high-temperature mode of the noise diode, and the calibration signal was injected periodically during each observing session. Let ${\rm OFF}$ be the original instrument reading without noise injection, ${\rm ON}$ the reading with noise injected (note that throughout this work, we adopt capitalised ON and OFF to describe the noise diode status, while the lower-case on/off denote the on-/off-source position), and $T_{noise}$ the pre-determined noise temperature measured with hot loads, as shown in \cite{Jiang2020}; the system temperature $T_{sys}$ can then be computed as \begin{equation} T_{sys} = \frac{{\rm OFF}}{{\rm ON} - {\rm OFF}} \times T_{noise}.
\label{eq:1} \end{equation} For each target observed with the 19-beam receiver, we calculated the averaged $T_{sys}$ with Eq. \ref{eq:1} using averages of ${\rm OFF}$ and ${\rm ON}$ during each observing session. As seen in Fig. \ref{fig2}, the phase of the instrumental standing waves changes with each activation of the high-temperature noise, and the characteristic frequency spacing of the standing-wave ripples induced by the FAST receiving system can be approximated as $c/2f \sim 1.1$ MHz (where $c$ is the speed of light, and $f \sim 138$ m the focal length of FAST), corresponding to $\sim 140-150$ spectral channels; we therefore performed a several-hundred-channel Gaussian smoothing on each set of ${\rm ON} - {\rm OFF}$ values, to minimise the phase-shifting effects in calibrations. Then $T_{sys}$ from both polarisations were manually checked, and if they were consistent with each other (which was the case for all four samples observed by the 19-beam receiver), each pair of polarisation A \& B temperature readings was averaged. Finally, the polarisation-averaged $T_{sys}$ was converted to flux densities with the pre-measured antenna gain ($\sim 15.7 - 16.5$ K/Jy for Beam 01, depending on frequency, see Table 5 in \citealt{Jiang2020} for details). \begin{figure} \includegraphics[width=\columnwidth]{ASK_on_noisecompare_part.eps} \caption{Averaged on-source spectra of ASK 378291.0 with noise ON (blue, offset in y-axis added for convenience) and OFF (green). The absorption feature can be found near $\sim 1359$ MHz, indicated by the red dotted line. It can be seen that the phase of the standing waves imposed on the background continuum flips between the ON and OFF states. Frequencies shown in this figure are not heliocentric-corrected.} \label{fig2} \end{figure} However, during the ``early-science'' stage of FAST's commissioning phase, when the wide band receiver was still in use, no reliable noise diode was available for flux calibrations.
Worse still, the instrument reading level of the N9020A MXA spectrum analyser was often relatively unstable, even within one observing session, which poses significant difficulties in calibrating the UGC 00613 data. Fortunately, the absorption feature of ASK 378291.0 was detected with both the wide band receiver and the 19-beam array. Thus, we adopted the measured flux density of ASK 378291.0 from the latter instrument as a reference to re-scale and calibrate the wide band receiver observations. Since only one polarisation was available during this period, we assumed that the single set of flux densities applies to both polarisations. Although such operations could considerably compromise the accuracy of our final products, this is the only sensible method that could produce reasonable results generally compatible with the ALFALFA observations, as provided by \cite{Wu2015} and Song et al. (2021, in preparation). More details of our results can be found in Section 3.1.1. Once the flux densities were calculated via calibration, the observed frequencies were Doppler-corrected to the heliocentric frame. Since the fully automatic data reduction pipeline for FAST HI observations \citep{Zhang2019} is still in development, all baseline subtraction and RFI flagging work involved in this paper was performed manually. For every source, the overall shape of the background continuum (``baseline'') curve was determined by calculating the median value for every channel over each entire session, similar to the process utilised by the HI Parkes All Sky Survey (HIPASS, see \citealt{Barnes2001}). For the baseline reading in the frequency channels in which the desired HI absorption line resides, a cubic spline interpolation was performed. Once the baseline was subtracted, strong RFI was marked by visual inspections.
Finally, 1- or 2-component Gaussian fitting was applied to data not contaminated by RFI, depending on the line profiles, to get the spectral parameters. Considering the baseline instabilities between on- and off-source positions described in Subsection 2.2, continuum flux densities measured by the NVSS survey (rather than direct measurements by FAST) were adopted for the calculation of the optical depth and other line characteristics of each HI absorbing feature. It should be noted that since the NVSS survey was performed more than 2 decades ago, it is quite possible that the continuum flux level of the observed sources has varied over this period. In fact, as shown in Table \ref{tab1}, the continuum flux of CGCG 049-033 does show a difference as large as 27\% between the NVSS and the earlier VLA Faint Images of the Sky at Twenty-Centimeters (FIRST) survey \citep{Becker1995} results, which may be due to possible source variability, or the existence of the extended jet in this source \citep{Bagchi2007}, combined with the varied VLA configurations adopted by different surveys. (For ASK 378291.0 and J1534+2513, which have also been observed by both NVSS and FIRST, measurements obtained by these two surveys are generally compatible, with flux variations of less than $\sim10$\%.) Also, the beam size difference between the NVSS survey and FAST, along with the unfilled $u-v$ coverage of the VLA, may cause the NVSS flux to be somewhat underestimated for this work. Thus, as mentioned in Section 2.1, our strategy to calculate absorption parameters (including the HI column density $N_{HI}$ and optical depth $\tau$) based on existing survey results is only a compromise to the not-so-stable behaviours of our newly commissioned instrument, and is adopted in reference to the solution utilised by various single-dish absorption line observations, such as \cite{Darling2011}, \cite{Grasha2019}, as well as \cite{Zheng2020}.
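The diode calibration of Eq. \ref{eq:1} and the median-plus-spline baseline extraction described above can be sketched in Python as follows. This is a simplified illustration on synthetic arrays; the function names and the smoothing length are ours, not part of the actual FAST pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import CubicSpline

def system_temperature(on, off, t_noise, smooth_channels=300):
    """Eq. (1): T_sys = OFF / (ON - OFF) * T_noise.
    The ON - OFF difference is Gaussian-smoothed over several hundred
    channels to suppress the phase-shifting standing-wave ripples."""
    diff = gaussian_filter1d(on - off, smooth_channels)
    return off / diff * t_noise

def subtract_baseline(freq, spectra, line_mask):
    """Per-channel median over the session gives the continuum
    'baseline'; channels inside the absorption line (line_mask True)
    are bridged by a cubic-spline interpolation across the line."""
    baseline = np.median(spectra, axis=0)          # per-channel median
    spline = CubicSpline(freq[~line_mask], baseline[~line_mask])
    baseline[line_mask] = spline(freq[line_mask])  # bridge line region
    return spectra.mean(axis=0) - baseline
```

On flat synthetic data, the residual after baseline subtraction is zero everywhere, as expected for a line-free spectrum.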
\section{Observing results}\label{sec:3} We successfully detected HI absorption features in all of the 5 sources first identified with 40\% of the ALFALFA data by \cite{Wu2015} and Song et al. (2021, in preparation). The properties of each detected source estimated using FAST are summarised in Table \ref{tab2}. The HI column densities towards the absorption lines, $N_{HI}$, in this table are calculated with the following equation \begin{equation} N_{HI} = 1.823 \times 10^{18} \frac{T_s}{f} \int \tau {\rm d} v \text{ cm}^{-2}. \label{eq:2} \end{equation} Here $T_s$ denotes the HI spin temperature, and $f$ the covering factor of the radio continuum source, with a typical value of unity assumed (e.g., see \citealt{Maccagni2017}). Since no source observed in this work is associated with a known DLA system \citep{Wu2015}, which usually exhibits a higher $T_s$ of the order of $\sim 10^3$ K (e.g., see \citealt{Chengalur2000}, \citealt{Darling2011}, and \citealt{Curran2019}), a typical value of $T_s\sim 100$ K is assumed. The optical depth $\tau$ of the HI absorber can be calculated as \begin{equation} \tau = -\ln\left(1+\frac{S_{HI}}{S_{1.4 \text{ GHz}}} \right), \label{eq:3} \end{equation} where $S_{HI}$ is the depth of the absorption line (a negative value), and $S_{1.4 \text{ GHz}}$ the 1.4 GHz flux of the continuum source as provided by the NVSS survey.
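Eqs. \ref{eq:2} and \ref{eq:3} translate directly into code. A minimal sketch, assuming $f = 1$ and $T_s = 100$ K as adopted in the text (function names are ours; the integral is evaluated with a simple trapezoidal rule):

```python
import numpy as np

def optical_depth(s_hi, s_cont):
    """Eq. (3): tau = -ln(1 + S_HI / S_cont); S_HI is negative."""
    return -np.log(1.0 + s_hi / s_cont)

def column_density(tau, velocity, t_spin=100.0, covering=1.0):
    """Eq. (2): N_HI = 1.823e18 * (T_s / f) * integral(tau dv) in cm^-2,
    with velocity in km/s (trapezoidal integration over the line)."""
    integral = np.sum(0.5 * (tau[1:] + tau[:-1]) * np.diff(velocity))
    return 1.823e18 * t_spin / covering * integral
```

For instance, the CGCG 049-033 values in Tables \ref{tab1} and \ref{tab2} ($S_{HI,peak} = -5.851$ mJy against $108.8$ mJy) give $\tau_{peak} \approx 0.055$, matching the tabulated value.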
All errors listed in Table \ref{tab2} have been evaluated with the spectral resolution and RMS level of each observing session, the calibration accuracy of the 19-beam receiving system ($\sigma_{cal} \sim 0.8-1.8$\%, as noted by \citealt{Jiang2020}), the $\sigma_{S_{1.4{\rm GHz}}}$ value of the NVSS or VLA FIRST measurements, as well as the line profiles, in reference to the method adopted by \cite{Koribalski2004}, as follows \begin{equation} \sigma_{ cz_{peak} } = 3\frac{\sqrt{P \Delta_v}}{{\rm S/N}}, \end{equation} \begin{equation} \sigma_{{\rm FWHM}} = 2 \sigma_{cz_{peak}}, \end{equation} \begin{equation} \sigma_{S_{HI,peak}} = \sqrt{ {\rm RMS}^2 + \sigma_{cal}^2\times S_{HI,peak}^2}, \end{equation} where $S_{HI,peak}$ is the maximum line depth, $P = 0.5\times({\rm FW}20-{\rm FWHM})$ the slope of the HI line profile, ${\rm FW}20$ the line width measured at 20\% of the peak flux value, FWHM the full width at half maximum of the line, $\Delta_v$ the spectral resolution in km s$^{-1}$, ${\rm S/N} = -S_{HI,peak} / {\rm RMS}$ the signal-to-noise ratio, $cz_{peak}$ the corresponding redshift of the line peak, and RMS the root mean square of the background noise extracted from both sides of the absorption line. Thus, the error of the optical depth at the line peak can be evaluated with Eq. \ref{eq:3}, using error propagation, as \begin{equation} \sigma_{\tau_{peak}} = \sqrt{\left(\frac{\partial \tau}{\partial S_{HI,peak}} \sigma_{S_{HI,peak}}\right)^2 + \left(\frac{\partial \tau}{\partial S_{1.4 \text{ GHz}}} \sigma_{S_{1.4 \text{ GHz}}} \right)^2 }.
\end{equation} Considering that $N_{HI} \approx 1.823 \times 10^{18} \left(T_s/100 {\rm \ K}\right) \times 1.064 \cdot {\rm FWHM} \cdot \tau_{peak}$ cm$^{-2}$ for Gaussian line profiles, as shown by \cite{Darling2011, Murray2015}, the error of the HI column density $\sigma_{ N_{HI} }$ can be estimated as \begin{equation} \sigma_{ N_{HI} } = \frac{ 1.940\times 10^{18}\, T_s}{100 {\rm \ K}} \sqrt{ \left(\tau_{peak} \cdot \sigma_{{\rm FWHM}} \right)^2 + \left( {\rm FWHM} \cdot \sigma_{\tau_{peak}}\right)^2} . \end{equation} \begin{table*}[th] \caption{FAST observations of the five extragalactic HI absorbers first identified with ALFALFA data. All parameters are listed in the rest frame of the absorbers. All background RMS data are measured with the original velocity bin. Parameters including $cz_{peak}$, FWHM, $S_{HI,peak}$, $\tau_{peak}$, and $\int \tau {\rm d} v$ are calculated with the fitted line profiles.}\label{tab2} \begin{tabular}{llccccccc} \hline Source name & RMS & Gaussian & $cz_{peak}$ & FWHM & $S_{HI,peak}$ &$\tau_{peak}$ & $\int \tau {\rm d} v$ & $N_{HI}$\\ &(mJy)& Component&(km s$^{-1}$)& (km s$^{-1}$)& (mJy) & & (km s$^{-1}$) & ($10^{20}$ cm$^{-2}$)\\\hline UGC 00613 & 6.49 & A & $13956.5\pm 0.9$ & $26.36\pm1.74$ & $-64.28\pm 6.61$ & $0.801\pm 0.145$ & $21.24\pm 4.10$ & $38.72 \pm 7.95\left( \frac{T_s}{100\text{ K}} \right)$ \\ \hline ASK 378291.0 & 0.89 & A & $13672.3\pm0.2$ & $14.06\pm 0.36$ & $-42.24\pm 1.22$ & $0.607 \pm 0.051$ & $9.095 \pm 0.753$ & $16.58 \pm 1.46 \left(\frac{T_s}{100\text{ K}} \right)$\\ & & B & $13702.2\pm0.1$ & $14.09\pm 0.30$ & $-45.93 \pm 1.28$ & $0.683 \pm 0.061$ & $9.724 \pm 0.887$ & $17.72 \pm 1.72\left(\frac{T_s}{100\text{ K}} \right)$\\ & & Total &$13687.3\pm0.1$& $44.00\pm 0.30$ & & & $18.83\pm 1.16$ & $34.33 \pm 2.26\left( \frac{T_s}{100\text{ K}} \right)$\\ \hline CGCG 049-033 & 0.45 & A & $13454.4\pm 1.0$ & $52.71\pm2.07$ & $-5.851 \pm 0.468$ & $0.055\pm 0.005$ & $3.069 \pm 0.285$ & $5.595 \pm 0.553\left(\frac{T_s}{100\text{ K}} \right)$\\ \hline J1534+2513 & 0.37 & A & $10089.0 \pm 0.1$ & $5.148 \pm 0.240$ & $-14.97 \pm 0.48$ & $0.355 \pm 0.019$ & $2.291 \pm 0.130$ & $4.176 \pm 0.253 \left( \frac{T_s}{100\text{ K}} \right)$\\ & & B & $10107.9 \pm 0.2$& $13.76 \pm 0.33$ & $-15.32 \pm 0.48$ & $0.365 \pm 0.020$ & $4.713 \pm 0.296$ & $8.593 \pm 0.575\left(\frac{T_s}{100\text{ K}} \right)$\\ & &Total&$10098.5\pm 0.1$& $27.53 \pm 0.29$ & & & $7.011 \pm 0.324$ & $12.78 \pm 0.63\left(\frac{T_s}{100\text{ K}} \right)$\\ \hline PGC 070403 & 2.53 & A & $7694.2\pm 4.2$ & $74.54\pm 8.37$ & $-9.602\pm 2.539$ & $0.086\pm 0.024$ & $6.643 \pm 2.314$ & $12.11 \pm 4.49\left(\frac{T_s}{100\text{ K}}\right)$\\ \hline \end{tabular} \end{table*} \subsection{Notes on individual sources and comparisons with Arecibo/WSRT observations} \subsubsection{ASK 378291.0} ASK 378291.0, at redshift $z= 0.04580$, has been classified as a newly-defined Fanaroff-Riley type 0 \citep{Baldi2018} galaxy by \cite{Cheng2018}, with a matched NVSS source exhibiting a $\sim 76.6$ mJy flux at the 1.4 GHz band, and a well-aligned jet and counterjet \citep{Cheng2018} at its position. Its absorption line was detected by both the wide band receiver and the 19-beam array of FAST. As shown in Fig. \ref{figASK}, the frequency and the double-horned profile in the data from FAST and Arecibo are in rough agreement with each other, confirming the robustness of the FAST detection. However, the low-frequency peak in the FAST data shows a significantly larger depth than ALFALFA's results. As can be seen in Fig. \ref{figASK_WSRT}, this discrepancy exists even if the FAST data are binned to a resolution similar to that of ALFALFA.
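Resolution matching in comparisons like the one above (binning the $\sim 1.5$ km s$^{-1}$ FAST channels by a factor of $\sim 4$ to approach ALFALFA's 5.3 km s$^{-1}$, or by $\sim 10$ for WSRT's 16 km s$^{-1}$) amounts to averaging groups of consecutive channels. A minimal sketch (the function name is ours):

```python
import numpy as np

def bin_spectrum(freq, flux, factor):
    """Average consecutive channels in groups of `factor`, degrading
    the velocity resolution by the same factor; trailing channels
    that do not fill a complete group are dropped."""
    n = (len(flux) // factor) * factor
    binned_freq = freq[:n].reshape(-1, factor).mean(axis=1)
    binned_flux = flux[:n].reshape(-1, factor).mean(axis=1)
    return binned_freq, binned_flux
```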
Considering the fact that spectra from both the 19-beam and the wide band receiver of FAST unveiled similar line profiles, such a discrepancy between data acquired by the two telescopes could be due to footprint positions in the ALFALFA survey, since the source might not pass through the exact centre of the receiver beam during those drift scans, thus leading to off-axis measurements, as well as compromised flux and line profile estimations. Besides, the same peak in the 19-beam data shows a clear spike at $\sim 1358.3$ MHz, which cannot be identified in the spectrum taken by the wide band receiver. Since multiple RFI features exist within the desired frequency range, it is reasonable to assume that such a spike is due to undesired interference overlaid on the double-horned absorption line. Thus, this spike was excluded from the fitting procedure of the line shape, as well as the optical depth calculation. Compared with the results listed in Song et al. (2021, in preparation), which adopted a single-Gaussian function for line profile fitting, yielding a line depth of $S_{HI,peak} \sim -23.5$ mJy, a full width at half maximum (FWHM) value of $\sim 56.3$ km s$^{-1}$, and $\int \tau {\rm d} v \approx 21.72$ km s$^{-1}$, the FAST observations of ASK 378291.0 show a slightly smaller integrated optical depth, along with a narrower FWHM. However, it should be noted that the ALFALFA fitting result is based on the continuum flux provided by the VLA Faint Images of the Sky at Twenty-Centimeters (FIRST) survey \citep{Becker1995}, which is $73.85$ mJy. When adopting this lower flux level, the FAST observations lead to a slightly larger integrated optical depth than ALFALFA's, namely $\int \tau {\rm d} v \approx 25.53 \pm 0.94$ km s$^{-1}$. \begin{figure} \includegraphics[width=\columnwidth]{ASK_flux_bsl_normalized_velocity_tau.eps}\\ \includegraphics[width=\columnwidth]{ASK_average_diff1.eps} \caption{Top: HI absorption feature ASK 378291.0.
The black line shows observations taken by the FAST 19-beam receiver with its spectral line backend on Sep. 2, 2018, while the green curve denotes data taken from the ALFALFA survey. The magenta dotted line is the fitting result with dual-Gaussian functions, while the cyan curve shows the fitting residual with a 15 mJy offset for clarity. The regions shaded with light yellow are manually-flagged RFI, which have all been excluded from the fitting procedure and RMS computations. The red dotted line shows the optical redshift, while the blue line marks the zero flux level. Bottom: The same target observed by the wide band receiver along with the N9020A MXA spectrum analyser on Oct. 29, 2017, with the original, uncalibrated instrumental readings measured in mW shown along the y-axis. This set of data marks the first successful detection of an extragalactic HI absorption line by the FAST telescope.} \label{figASK} \end{figure} It should also be noted that this absorption feature has been detected by the WSRT absorption line survey as well, with an FWHM of $\sim 66.95$ km s$^{-1}$, and $\int \tau {\rm d} v \approx 9.62$ km s$^{-1}$ \citep{Maccagni2017}. Fig. \ref{figASK_WSRT} shows a comparison between the FAST and WSRT data. It can be seen that the WSRT data only exhibit a single-peaked structure, rather than a double-horned one as shown by FAST and ALFALFA. The maximum absorption measured by WSRT is only $-13.55$ mJy, which is nearly 3.4 times shallower than the measurements performed by FAST. When channel-binned to a velocity resolution equivalent to WSRT's, a single-peaked absorption line emerges in the FAST data, although with a lopsided profile, and a deeper feature of $-25.08$ mJy.
Thus, even adopting the higher continuum flux value from \cite{Maccagni2017} ($S_{1400 \text{ MHz}} \sim 92.83$ mJy) compared with the NVSS or VLA FIRST catalogues, the FAST data still lead to $\int \tau {\rm d} v \approx 17.22$ km s$^{-1}$, which is nearly 1.8 times higher than the WSRT integrated optical depth. Such a clear difference could be attributed to the incomplete $u-v$ coverage of a radio telescope array, which may result in compromised flux measurements. \begin{figure} \includegraphics[width=\columnwidth]{ASK_flux_bsl_normalized_velocity_tau_WSRT.eps} \caption{Comparison between FAST 19-beam, WSRT and ALFALFA observations of ASK 378291.0. The black line shows data taken by FAST, while the greenish-grey line is observations from WSRT, as shown in \citealt{Maccagni2017}, and the green line the ALFALFA data. The dotted cyan line denotes binned FAST data, with a final resolution similar to WSRT's 16 km s$^{-1}$, while the dotted magenta line denotes the 4-channel binned FAST data with a spectral resolution comparable to that of ALFALFA. The red line denotes the optical redshift, and the blue line marks the zero level. } \label{figASK_WSRT} \end{figure} \subsubsection{UGC 00613} The absorption feature in UGC 00613, a flat spectrum radio galaxy with an extremely diffuse envelope and faint lobes at $z = 0.04593$ \citep{Condon1991}, was first reported by \cite{Wu2015} and Song et al. (2021, in preparation), with a line depth of $S_{HI,peak} \sim -65.0$ mJy, an FWHM of $\sim 33.6$ km s$^{-1}$, and $\int \tau {\rm d} v \approx 31.14$ km s$^{-1}$. The FAST telescope performed tracking observations of this source during its early-science phase with the wide band receiver. Due to the lack of a reliable noise diode in this period, the flux of UGC 00613 obtained by the N9020A MXA spectrum analyser was estimated in reference to the observations of ASK 378291.0 shown in the upper panel of Fig. \ref{figASK}.
Firstly, the baseline-subtracted wide-band-receiver data of ASK 378291.0 (lower panel of Fig. \ref{figASK}) was re-scaled according to the flux level of the same source measured by the 19-beam receiver; then, the same scaling factor for conversion from the N9020A MXA's instrumental reading to flux density (measured in mJy) was applied to the UGC 00613 observations. Although such a calibration process is not strictly accurate, the resulting line depth, $\sim -64.34$ mJy, is in good agreement with the Arecibo data. The discrepancy between the line width estimations of the two telescopes may arise from the higher spectral resolution of FAST, which could lead to a better estimation of the line profile, as well as the relatively high RMS level in the N9020A MXA data ($\sim 6.49$ mJy, an order of magnitude higher than that of the 19-beam data), which brings more uncertainty to the identification of the wing structure, as can be seen from Fig. \ref{figUGC}. \begin{figure} \includegraphics[width=\columnwidth]{UGC00613_flux_bsl_normalized_velocity_tau.eps} \caption{HI absorption feature of UGC 00613 obtained by the FAST wide band receiver. The black line shows observations taken by FAST, while the grey curve is data from the ALFALFA survey. The magenta dotted line is the fitting result with a single Gaussian function, while the cyan line shows the fitting residual with a 35 mJy offset. It can be seen that the structure of this absorption feature observed by the two telescopes shows good agreement, although the Arecibo data show a slightly broader wing component at the lower frequency end.} \label{figUGC} \end{figure} It should be noted that, as can be seen in Fig. \ref{figUGC}, and also noted by Song et al. (2021, in preparation), the centre of the UGC 00613 absorption feature is redshifted from the optical redshift by as much as $\sim 186$ km s$^{-1}$ ($\sim 188$ km s$^{-1}$ for the Arecibo data).
According to \cite{Wegner1999}, although the signal-to-noise ratio of the optical data used for the $z$ estimation is not high enough to accurately measure the velocity dispersion of the spectral lines, the redshift of this galaxy can be reliably determined as $cz \sim 13770 \pm 23$ km s$^{-1}$. Thus, it is unlikely that such a large offset (which is at the $>8 \sigma$ level of the optical measurement) between the HI line and optical velocities arises from observational errors. Rather, this discrepancy may indicate the existence of unsettled infalling gas clouds or other similar structures around this AGN (e.g., see \citealt{Maccagni2017} and references therein), and further interferometric spectral line observation is required to reveal the details of the absorbing gas in this galaxy. \subsubsection{CGCG 049-033} The $z=0.04464$ elliptical galaxy CGCG 049-033 in the Abell 2040 cluster possesses a highly asymmetric Fanaroff-Riley type II AGN, comprising one of the largest radio jets yet discovered, a $> 10^9 M_{\odot}$ central black hole, and intense polarised synchrotron radiation extending to a distance of $\sim 440$ kpc away from the galactic centre \citep{Bagchi2007}. The HI absorption feature with a maximum line depth $S_{HI,peak} \sim -4.3$ mJy, an FWHM $\sim 123.9$ km s$^{-1}$, and $\int \tau {\rm d} v \approx 7.23$ km s$^{-1}$ reported by Song et al. (2021, in preparation) is consistent with the general trend that early-type galaxies lack abundant gaseous content. \begin{figure} \includegraphics[width=\columnwidth]{CGCG049-033_flux_bsl_normalized_velocity_tau.eps} \caption{HI absorption feature of CGCG 049-033 obtained by the FAST 19-beam receiver on October 31, 2018. Legends are the same as the upper panel of Fig.
\ref{figASK}, except that the line profile is fitted with a single Gaussian function, and the offset for the fitting residual is 6 mJy.} \label{figCGCG} \end{figure} The FAST telescope conducted targeted observations twice towards CGCG 049-033, on October 31st and December 28th, 2018, respectively, each comprising 1200 s of on-source exposure time. Although weak, the expected absorption feature did appear at the right frequency channels in both sets of data, thus confirming the correctness of our detection. However, since the December observations suffered stronger RFI contamination, they have not been invoked in our analysis to obtain the line profile, flux, optical depth, and other characteristics of this HI absorber. As can be seen in Fig. \ref{figCGCG}, the HI absorption line from CGCG 049-033 is the weakest among all five sources detected by Arecibo and confirmed by FAST, with a maximum depth of only $5.851$ mJy, as measured by FAST. Moreover, the integrated optical depth, $3.069$ km s$^{-1}$, is less than half of that of the ALFALFA results. Even if the lower continuum flux provided by VLA FIRST, as adopted by Song et al. (2021, in preparation), is applied to the FAST data, the resulting $\int \tau {\rm d} \nu$ is still smaller than ALFALFA's. This discrepancy should be the result of the much narrower line structure observed by FAST, and the spike-like RFI right on the low-frequency part of the absorption line further affects the fitting of the line profile, resulting in an FWHM less than half of Arecibo's result. Since the signal-to-noise ratio for this source in the ALFALFA data is quite low, considering that the RMS level of $\sim 2.2$ mJy is almost half of the peak estimate by Song et al. (2021, in preparation), the ALFALFA line profile itself exhibits considerable uncertainties. Thus, the low-noise FAST data should be considered more reliable, thanks to the better sensitivity and spectral resolution of the FAST telescope.
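The 1- or 2-component Gaussian fitting used throughout this section can be sketched as follows, on synthetic data; the amplitudes, centroids and widths below are illustrative placeholders, not fitted values from this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    """Sum of two Gaussian components (negative amplitudes for
    absorption); s is the Gaussian sigma, FWHM = 2*sqrt(2 ln 2)*s."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

# Synthetic double-horned absorption profile standing in for real data
v = np.linspace(-100.0, 100.0, 400)
truth = (-42.0, -15.0, 6.0, -46.0, 15.0, 6.0)
rng = np.random.default_rng(0)
obs = two_gaussians(v, *truth) + rng.normal(0.0, 1.0, v.size)

# Least-squares fit; p0 is a rough initial guess for the 6 parameters
popt, _ = curve_fit(two_gaussians, v, obs, p0=(-30, -20, 5, -30, 20, 5))
```

A single-component fit works the same way with a 3-parameter model; the recovered centroids and widths then feed the error formulas above.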
\subsubsection{J1534+2513} The existence of an HI absorption feature in the radio source J1534+2513 at a redshift of $z=0.03396$ was first proposed by \cite{Haynes2011}, and later confirmed by \cite{Wu2015} and Song et al. (2021, in preparation), with a line depth of $S_{HI,peak} \sim -23.0$ mJy, FWHM $\sim 26.4$ km s$^{-1}$, and $\int \tau {\rm d} v \approx 18.31$ km s$^{-1}$. This absorber has also been detected by the WSRT survey, with a wider line width (FWHM $\sim 40.88$ km s$^{-1}$), and an integrated optical depth of $10.92$ km s$^{-1}$, against a $43.39$ mJy background \citep{Maccagni2017}. \begin{figure} \includegraphics[width=\columnwidth]{J1534+2513_flux_bsl_normalized_velocity_tau.eps} \caption{HI absorption feature of J1534+2513 obtained by the FAST 19-beam receiver. Legends are the same as the upper panel of Fig. \ref{figASK}, with the offset for the fitting residual set as 8 mJy.} \label{figJ1534} \end{figure} As can be seen in Fig. \ref{figJ1534}, the most prominent feature in the FAST spectrum is the double-horned structure over a relatively narrow $\sim 27.53$ km s$^{-1}$ frequency range, which is only barely seen at the $\sim5.3$ km s$^{-1}$ resolution of the ALFALFA data (Song et al. 2021, in preparation), and in the $\sim 16$ km s$^{-1}$-resolution WSRT survey (\citealt{Maccagni2017}, see Fig. \ref{figJ1534_WSRT}), thus demonstrating the necessity of performing observations with finer spectral resolution. The centroid of this absorption line again deviates significantly from the optical redshift; in this case, the radio line is blueshifted by $\sim 82.15$ km s$^{-1}$ relative to the optical $z$, although the extent of the discrepancy is smaller than that of UGC 00613. Considering that the error in the $cz$ measurement for J1534+2513 is as small as $<2.7$ km s$^{-1}$ (the smallest among all five sources discussed in this work), such a line velocity difference must originate from the absorbing HI content itself.
Again, this may indicate the existence of HI outflows along the line-of-sight, and interferometer observations should help unveil the nature of the absorbing gas. \begin{figure} \includegraphics[width=\columnwidth]{J1534+2513_flux_bsl_normalized_velocity_tau_WSRT.eps} \caption{Comparison between FAST 19-beam, WSRT and ALFALFA observations of J1534+2513. The legends are the same as Fig. \ref{figASK_WSRT}. The red line denotes the optical redshift, while the blue line marks the zero level.} \label{figJ1534_WSRT} \end{figure} Compared with the Arecibo and WSRT data, the spectral parameters of J1534+2513 obtained with FAST exhibit a narrower width, a shallower peak depth, as well as a smaller optical depth. Although the FAST line profile is significantly deeper than WSRT's $<10$ mJy (which still holds true if these data are binned to the WSRT resolution), the line depth measured by the ALFALFA survey is even larger. Besides, the distance between the two spectral peaks in the WSRT data is much larger than in FAST's and Arecibo's results. In short, the three existing data sets do not coincide with each other, even when taking spectral resolution into consideration. A possible explanation for such an inconsistency between different instruments could be the errors induced by a relatively high RMS level compared with the line depth for ALFALFA (which has a baseline fluctuation as large as nearly 10 mJy at frequencies around the J1534+2513 absorption feature, as illustrated in Figs. \ref{figJ1534} and \ref{figJ1534_WSRT}) and WSRT, which can make their results less accurate. Also, the interferometric nature of the WSRT observations could further compromise the flux estimations. \subsubsection{PGC 070403}\label{sec:PGC} \cite{Haynes2011} put forward the first indication of the existence of the $z= 0.0251$ absorption feature in PGC 070403, and the same structure was later tentatively identified by \cite{Wu2015} and Song et al.
(2021, in preparation) from the $\alpha.40$ data, with an estimated line depth of $S_{HI,peak} \sim -12.9$ mJy, FWHM $\sim 88.9$ km s$^{-1}$, and $\int \tau {\rm d} v \approx 11.11$ km s$^{-1}$. As noted by Song et al. (2021, in preparation), similar to the case of UGC 00613, this absorption line is also redshifted from the optical $cz$ by as much as $\sim 191.5$ km s$^{-1}$. However, it is also noticed that the error of the optical $cz$ measurement is as large as $118.4$ km s$^{-1}$, which means that such a large discrepancy may be due to observational uncertainties. \begin{figure} \includegraphics[width=\columnwidth]{PGC070403_T_on_A_compare.eps}\\ \includegraphics[width=\columnwidth]{PGC070403_T_on_B_compare.eps} \caption{Drift scan data of PGC 070403 from both polarisations. The y-axis is shown in the original instrumental reading. The black line shows the average of $\sim 12$ s of original data taken around the transit time of the targeted source, Gaussian-smoothed by 4 spectral channels to highlight possible line features. The blue and green lines are 4-channel Gaussian-smoothed, $\sim 12$ s-averaged data taken before and after source transit, respectively, with $\sim +10$\% adjustments to the y-axis readings for clarity. It can be seen that a ``dip'' appears at $\sim 1384.9$ MHz around the transit time on both polarisations. The red dotted line denotes the $cz_{peak}$ of the expected absorption feature as measured by Arecibo, which is almost coincident with the position of the transit ``dip''.} \label{figPGC_original} \end{figure} Among all of the 5 HI absorbing sources observed by the FAST telescope and mentioned in this paper, PGC 070403 is the one with the second-weakest line depth, and the only one observed in drift scan mode by the FAST telescope. With a beam width of $\sim 2'.9$ in the $\sim 1400$ MHz band, the central feed of the 19-beam receiver can observe each source for $\sim 12$ s during a one-pass scan. As can be seen in Fig.
\ref{figPGC_original}, the absorption ``dip'' only appears during the $\sim 12$ s period around the transit time of PGC 070403 in both polarisations, thus making the FAST detection reliable. Considering the spectral baseline fluctuations appearing during the entire drift scan session, only the $\sim 12$ s of observations around the target transit time are utilised in our data reduction process. The final spectrum is an average of the $\sim 12$ s transit data, and the background continuum is subtracted with a smoothing and fitting method similar to that used for the tracking data of the sources described above. \begin{figure} \includegraphics[width=\columnwidth]{PGC070403_flux_bsl_normalized_velocity_tau.eps}\\ \includegraphics[width=\columnwidth]{PGC070403_flux_bsl_normalized_velocity_tau_smoothed.eps} \caption{Top: The HI absorption feature of PGC 070403, as observed by the 19-beam receiver of FAST. $\sim 12$ s of drift scan data around the transit time have been added up to obtain the line profile. Legends are the same as the upper panel of Fig. \ref{figCGCG}. The offset for the fitting residual is chosen as 12 mJy. Bottom: The same data, binned to a spectral resolution of $\sim 6$ km s$^{-1}$.} \label{figPGC} \end{figure} The line centre of PGC 070403 determined by FAST, which is $\sim 7694$ km s$^{-1}$, is slightly blueshifted compared with ALFALFA's $7712$ km s$^{-1}$. Also, our line width is $\sim16$\% narrower than the Arecibo result, and the peak depth of the PGC 070403 absorbing feature obtained by FAST is more than $25$\% shallower than ALFALFA's value. Besides, the deepest structure in the Arecibo data, corresponding to a velocity of $7740$ km s$^{-1}$, does not show up in the FAST observations. Combining all of the factors mentioned above, the integrated optical depth obtained with FAST is $\sim 4.47$ km s$^{-1}$ smaller than ALFALFA's 11.11 km s$^{-1}$. And as shown in the bottom panel of Fig.
\ref{figPGC}, with a 4-channel bin applied to the original FAST data (which leads to a spectral resolution of $\sim 6$ km s$^{-1}$, compared to ALFALFA's $\sim 5.3$ km s$^{-1}$), an absorbing feature is detected with a line centre at $7689.07 \pm 2.14$ km s$^{-1}$, a narrower FWHM $\approx 64.37 \pm 4.29$ km s$^{-1}$, a shallower $S_{HI,peak} \sim -8.525 \pm 1.217$ mJy, and a smaller $\int \tau {\rm d} v \approx 4.783 \pm 0.809$ km s$^{-1}$, corresponding to an HI column density $N_{HI}\sim \left(8.719 \pm 1.570\right)\times 10^{20}\left(T_s/100\text{ K}\right)$ cm$^{-2}$, imposed on a background RMS level of $\sim 1.21$ mJy. Such a set of parameters exhibits noticeable differences from the ones acquired with the original, unbinned data. As for the discrepancy in the low-frequency part of the line profiles between the FAST and Arecibo observations, we suggest that polarisation-dependent weak RFI or baseline fluctuations, which are quite common for FAST data (e.g., see Fig. 26 in \citealt{Jiang2020}), could be a possible explanation. As can be seen in Fig. \ref{figPGC_original}, compared with polarisation A, the uncalibrated spectrum in polarisation B shows a broader profile, as well as a greater depth in the low frequency regime, and is more similar to the ALFALFA result. When the polarisations are combined, the $\sim 7740$ km s$^{-1}$ dip is largely lost due to such polarisation-dependent behaviour. Such an effect has a greater influence on weak sources, especially when the integration time is limited. Thus, due to complications induced by the high background noise level of drift scan observations, targeted long exposures with FAST or other sensitive radio telescopes are needed to pin down the characteristics of PGC 070403. Nevertheless, the current results still demonstrate the feasibility of identifying weak HI absorption sources via FAST drift scan observations, and more similar detections can be expected with the upcoming CRAFTS survey.
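The column densities quoted in this section follow the standard conversion between integrated 21 cm optical depth and $N_{HI}$; a minimal numerical sketch (assuming, as above, the $T_s/100$ K spin temperature scaling and a covering factor of unity):

```python
# Standard conversion between integrated 21 cm optical depth and HI column
# density (assuming the absorber fully covers the background source):
#   N_HI [cm^-2] = 1.823e18 * T_s [K] * integral(tau dv) [km/s]

def hi_column_density(int_tau_dv, T_s=100.0):
    """HI column density in cm^-2, for an integrated optical depth in km/s."""
    return 1.823e18 * T_s * int_tau_dv

# Binned FAST values for PGC 070403 quoted above:
print(f"N_HI = {hi_column_density(4.783):.3e} (T_s/100 K) cm^-2")
```

For the binned FAST value of PGC 070403 ($\int \tau {\rm d} v \approx 4.783$ km s$^{-1}$), this reproduces the quoted $N_{HI} \sim 8.7 \times 10^{20} (T_s/100\text{ K})$ cm$^{-2}$.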
\section{Discussions}\label{sec:4} \subsection{Note on ALFALFA detection rate} In our pilot study of extragalactic HI absorbers with the FAST telescope during its commissioning phase, all of the 5 sources first discovered with 40\% of the ALFALFA data have been confirmed. Combined with the 5 known sources identified by \cite{Wu2015} and Song et al. (2021, in preparation), the detection rate for HI absorption features in $\alpha.40$, which is $\sim 5.5$\%, still holds, assuming the radio luminosity function of radio-loud AGNs to be in the form of that from \cite{Mauch2007}. Such a rate is higher than the result estimated by \cite{Darling2011} with 7.4\% of the ALFALFA data, which is $\sim 3$\%, as well as a later prediction by \cite{Allison2014} ($\sim 1.6$--$4.4$\%). However, as noted in \cite{Saintonge2007} and \cite{Haynes2011}, the performance of the matched-filtering algorithm adopted by the source finding process drops significantly below an S/N value of $\sim 6.5$, which means that weak sources embedded in the data cannot be completely identified, while \cite{Wu2015} detected all the 10 absorbers based on the same method described by \cite{Saintonge2007}. Since the sources CGCG 049-033 and PGC 070403, as well as 3 other known sources identified with $\alpha.40$, exhibit flux levels below the signal-to-noise ratio threshold, the detection rate of 5.5\% can only be considered a lower limit when estimating the total number of extragalactic HI absorbers. However, since the rate for efficient detection (${\rm S}/{\rm N} > 6.5$) is 2.75\%, only half of the 5.5\% value, it is safe to say that at least 12 or 13 HI absorbing systems can be identified with the $\alpha.100$ data (\citealt{Haynes2018}), while the total number of detections is hard to predict due to algorithm limitations.
\subsection{Implications on the prospect of HI absorption line detections with FAST} Based on the different methods we have tested during the pilot observations, we find that a single-pass scan can already resolve weak absorption, while a 2-pass drift scan mode similar to that of ALFALFA \citep{Giovanelli2005} would be a more efficient strategy to detect a large number of HI absorbers, since in this case, more time-variable RFI and other fluctuations can be excluded. In fact, the 2-pass strategy is the observing mode already adopted by the FAST extragalactic HI survey, which is one of the key projects undertaken at FAST, thus we can make good use of its large data sets in the near future. Since it takes a single source $\sim 12$ s to pass through one beam during each scan, and a rotation angle of $23^{\circ}.4$ would be applied to the 19-beam receiver \citep{Li2018} to ensure maximum Declination coverage and non-overlap between all feed horns during sky surveys, the total integration time for each source with all scans finished should be $\sim 24$ s. We calculate the mean RMS value for a 24 s integration duration with all of the 19-beam data analysed in this paper, leading to an average noise level of $\sim 3$ mJy, which is comparable to the sensitivity of the FLASH survey ($\sim 3.2$ mJy beam$^{-1}$) achieved with 2 hr of integration time by ASKAP (\citealt{Allison2020}). Such a noise level can be further suppressed to $\sim 1.5$ mJy with an extra 4-channel bin along the frequency (velocity) direction, which is slightly lower than the noise level of WALLABY ($\sim 1.6$ mJy beam$^{-1}$ with $8\times 2$ hr ASKAP integration time, see \citealt{Koribalski2020}). Thus, in the following evaluations, we calculate the prospect of HI absorption detections for FAST sky surveys with 2 sensitivity levels, $\sim 3$ and $\sim 1.5$ mJy, respectively. Taking the averaged optical depth of the 10 HI absorbers (including 5 previously-known samples) presented in Song et al.
(2021, in preparation), $\tau \sim 0.352$, as a typical value for extragalactic HI absorption features, the corresponding normalised line depth should be $\sim 0.3$ times the continuum flux. Supposing that a reliable identification requires a $\geqslant 5 \sigma$ detection, a continuum source (such as an AGN) serving as the background for HI absorption would, on average, be required to have a minimum flux of $\sim 38$ mJy for detection with the original unsmoothed observations, and $\sim 25$ mJy for the 4-channel binned data. The maximum zenith angle that can be attained by FAST is $\sim 40^{\circ}$ (\citealt{Nan2011, Li2018, Jiang2020}). Given the telescope's geographic latitude of $25^{\circ} 39' 10''.6$ N, the observable Declination extent of FAST should be $\sim -14^{\circ}.35<\delta<65^{\circ}.65$. Thus, the full FAST observable sky covers an area of $\sim 23800 $ deg$^2$, which is more than 3 times the mapped region of the ALFALFA survey (\citealt{Giovanelli2005, Haynes2018}). The 19-beam receiver, the primary workhorse for sky surveys, operates over an HI redshift range of $\sim -6000 < cz < 105000$ km s$^{-1}$, compared to ALFALFA's $-2000$ to $18000$ km s$^{-1}$ \citep{Haynes2018}. Combining both factors, the observable comoving volume of FAST should be $\sim4.2$ Gpc$^3$, nearly $300$ times ALFALFA's total coverage. Following similar procedures to previous works such as \cite{Allison2014} and \cite{Wu2015}, we adopt the local luminosity function for radio-loud AGNs proposed by \cite{Mauch2007}. Ignoring the dependence of AGN distributions on redshift, we predict a total number of $\sim 43,000$ AGNs with fluxes above the $5 \sigma$ limit in the complete FAST sky survey coverage with a $\sim 1.5$ km s$^{-1}$ spectral resolution, and $\sim 49,000$ AGNs for the 4-channel smoothed data. Applying the detection rate of HI absorbing systems calculated by Song et al.
(2021, in preparation), $\sim 2,300$ extragalactic HI absorption lines should be identified with the complete set of original FAST sky survey data, and $\sim 2,600$ such sources could be detected with the 4-channel binned spectra. Of course, the calculations above only provide the most optimistic expectations, and the reduced antenna gain of FAST at zenith angles larger than $\sim 26^{\circ}.4$ (\citealt{Li2018, Jiang2020}) was not taken into account. Besides, a number of frequencies in the $1050-1450$ MHz band are often heavily contaminated by RFI generated by distance measuring equipment for aviation, or by emitters on board navigation satellites. Thus, the real number of HI absorption line detections could be largely reduced. However, even neglecting the frequency band of $\sim 1150-1300$ MHz, which is most severely affected by RFI from satellites, $\sim 1,500$ HI absorption systems can still be expected from the unbinned data, and $\sim 1,700$ from the 4-channel binned data, which is consistent with the forecast made by \cite{Yu2017}. Another approach to predicting the FAST detection prospects for HI absorbers is via the NVSS (\citealt{Condon1998}) source count. $\sim 1.2 \times 10^5$ continuum sources with $S_{1.4 \text{ GHz}}>38$ mJy exist within the FAST observable sky, and the number of sources with $S_{1.4 \text{ GHz}}>25$ mJy is $\sim 1.8 \times 10^5$. With a detection rate of $\sim5.5$\% applied, these numbers lead to $\sim 6000$ extragalactic HI absorbers within the original data, and $\sim 9000$ such systems for the 4-channel binned spectra. Still, since the NVSS catalogue does not provide associated redshift information for each individual source, and the chance of alignment between a high-$z$ continuum source and a low-$z$, HI-bearing galaxy is low, it is almost certain that a significant fraction of the HI absorbers associated with NVSS sources lie beyond the frequency coverage of FAST.
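The NVSS-based estimate above is a simple product of source counts and detection rate; a sketch of the arithmetic (note the quoted $\sim 6000$ and $\sim 9000$ correspond to a rate of roughly 5--5.5\%, so the exact rounding adopted in the text may differ slightly):

```python
# Order-of-magnitude forecast: expected absorbers =
# (background continuum sources above the 5-sigma flux limit) x (detection rate).
# The NVSS counts are those quoted in the text; the ~5.5% rate is the
# alpha.40 value discussed earlier.

def expected_absorbers(n_background, detection_rate=0.055):
    return n_background * detection_rate

n_unbinned = 1.2e5  # NVSS sources with S_1.4GHz > 38 mJy in the FAST sky
n_binned = 1.8e5    # NVSS sources with S_1.4GHz > 25 mJy

for n in (n_unbinned, n_binned):
    print(f"{expected_absorbers(n):.0f} absorbers at a 5.5% detection rate")
```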
Thus, such an estimation can only be considered a rough upper limit for the CRAFTS survey. \section{Conclusions}\label{sec:5} In this paper, we report FAST observations of 5 extragalactic HI absorption sources first identified from the ALFALFA survey data. We confirm the existence of all absorption features from these sources. However, the line widths, optical depths, as well as HI column densities of the detected HI absorbers as derived from the FAST data show noticeable discrepancies with the results previously obtained by Arecibo and WSRT, due to various factors. Since the FAST data have much higher S/N and finer spectral resolution compared with existing sky surveys, more features of the HI absorption lines, such as the double-horned structure of J1534+2513, can be revealed with high confidence. The HI absorption line of PGC 070403, which exhibits the second-shallowest line depth among the 5, was successfully detected within a $\sim 12$ s integration time using the drift scan mode, which will be the observing mode adopted for the upcoming extragalactic HI surveys. These observations, which can be considered the first batch of extragalactic absorption line detections performed by FAST, demonstrate the capability of this telescope for HI absorption studies. It is expected that with a larger sky coverage and higher sensitivity than Arecibo, over 1,500 extragalactic HI absorbers could be unveiled with the entire set of FAST extragalactic sky survey data. \section*{Acknowledgements} This work is supported by the National Key R\&D Program of China (Grant No. 2017YFA0402600), the National Natural Science Foundation of China (Grant Nos. 11903056, 11763002), the Joint Research Fund in Astronomy (Grant Nos.
U1731125, U1531246, U1931203) under the cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), the Cultivation Project for FAST Scientific Payoff and Research Achievement of CAMS-CAS, as well as the Open Project Program of the Key Laboratory of FAST, National Astronomical Observatories, Chinese Academy of Sciences (NAOC). The authors thank Prof. Martha P. Haynes for providing the ALFALFA HI absorption spectra, and Cheng Cheng, You-Gang Wang, Hong-Liang Yan, as well as Si-Cheng Yu for helpful discussions. Also, the authors would like to thank the anonymous referee for valuable comments. \section*{Data Availability} The data analysed in this work are from the FAST project Nos. 3017 (UGC 00613, ASK 378291.0, CGCG 049-033, J1534+2513) and N2020\_3 (PGC 070403), and can be accessed by sending a request to the FAST Data Centre or to the corresponding authors of this paper. \bibliographystyle{mnras}
\section{Introduction} The current concordance ($\Lambda$CDM) cosmological model, consisting of 25\% cold dark matter and 70\% dark energy, agrees very well with Planck CMB and large-scale structure observations~\cite{Planck2018}. However, at scales smaller than about 1~Mpc, the cold dark matter paradigm runs into a number of problems such as the core/cusp problem, the missing satellites problem (although see~\cite{Peter}), the too-big-to-fail problem, the satellite plane problem, etc.\ (see Refs.~\cite{Bullock,Weinberg,Tulin} for recent reviews on this subject). We also note that some of these problems can be ameliorated using various baryonic physics effects~\cite{Oh,Martizzi,Mori,Keres}. At a more fundamental level, another issue with the $\Lambda$CDM model is that there is no laboratory evidence for any cold dark matter candidate~\cite{Merritt}. Furthermore, the LHC and other particle physics experiments have yet to find experimental evidence for theories beyond the Standard Model of Particle Physics, which predict such cold dark matter candidates~\cite{Merritt}. Therefore, a large number of theoretical alternatives to the $\Lambda$CDM model have been proposed, and a variety of observational tests devised to test these myriad alternatives. An intriguing observational result discovered more than a decade ago is that the dark matter halo surface density is constant for a wide variety of systems spanning over 18 orders in blue magnitude, for a diverse suite of galaxies such as spiral galaxies, low surface brightness galaxies, dwarf spheroidal satellites of the Milky Way~\cite{Kormendy,Donato,Gentile,Walker,Salucci,Hartwick,Kormendy14,Chiba,Burkert15,Salucci19}, etc. See, however, Refs.~\cite{Boyarsky,Napolitano,Cardone10,DelPopolo12,Saburova,DelPopolo17,DelPopolo20} and references therein, which dispute these claims and argue for a mild dependence of the dark matter surface density on halo mass and other galaxy properties.
These results for a constant dark matter surface density were obtained by fitting the dark matter distribution in these systems to a cored profile: either the Burkert~\cite{Burkert95} or pseudo-isothermal profile~\cite{Kormendy}, or a simple isothermal sphere~\cite{Spano}. All these cored profiles can be parameterized by a central density ($\rho_c$) and core radius ($r_c$); and the halo surface density is defined as the product of $\rho_c$ and $r_c$. The existence of a constant dark matter surface density was found to be independent of which cored profile was used~\cite{Donato}. Alternately, some groups have also calculated a variant of the above dark matter halo surface density, which has been referred to as the dark matter column density~\cite{Boyarsky,DelPopolo12}~\footnote{See Eq. 1 of Ref.~\cite{Boyarsky} for the definition of dark matter column density.}, whose value remains roughly invariant with respect to the choice of the dark matter profile. This column density is equivalent to the product of $\rho_c$ and $r_c$ for a Burkert profile~\cite{DelPopolo12}, and provides a more precise value of the surface density for non-cored profiles, such as the widely used NFW profile~\cite{NFW}. The best-fit value of the dark matter surface density for single-galaxy systems using the latest observational data is given by $\log \left[\rho_c r_c/(M_{\odot}\, {\rm pc}^{-2})\right]=2.15 \pm 0.2$~\cite{Salucci19}. A large number of theoretical explanations have been proposed to explain the constancy of the dark matter halo surface density. Within the standard $\Lambda$CDM model, some explanations include: transformation of cusps to cores due to dynamical feedback processes~\cite{Ogiya}, the self-similar secondary infall model~\cite{Delpopolo09,Boyarsky,DelPopolo12,DelPopolo17}, dark matter-baryon interactions~\cite{Famaey}, and non-violent relaxation of galactic haloes~\cite{Baushev}.
Some explanations beyond $\Lambda$CDM include ultralight scalar dark matter~\cite{Matos}, super-fluid dark matter~\cite{Berezhiani}, self-interacting dark matter~\cite{Loeb,Kaplinghat,Bondarenko,Tulin}, MOND~\cite{Milgrom}, etc. A constant halo surface density is also in tension with fuzzy dark matter models~\cite{Burkert20}. It behooves us to test the same relation for galaxy clusters. Galaxy clusters are the most massive collapsed objects in the universe and are a wonderful laboratory for a wide range of topics from cosmology to galaxy evolution~\cite{Mantz,Vikhlininrev}. In the last two decades, a large number of new galaxy clusters have been discovered through dedicated optical, X-ray, and SZ surveys, which have provided a wealth of information on astrophysics and cosmology. However, tests of the constancy of the dark matter surface density for galaxy clusters have been very few. The first such study for galaxy clusters was done by Boyarsky et al~\cite{Boyarsky09}, who used the dark matter profiles from the literature for 130 galaxy clusters and showed that the dark matter column density ($S$) goes as $S \propto M_{200}^{0.21}$, where $M_{200}$ is the mass within the radius at which the density contrast reaches $\Delta=200$~\cite{White}, with the density contrast ($\Delta$) defined with respect to the critical density. Hartwick~\cite{Hartwick} used the generalized NFW profile~\cite{NFW} fits in Ref.~\cite{Newman} (using strong and weak lensing data) for the Abell 611 cluster, and found that $\rho_c r_c = 2350\, M_{\odot}\, {\rm pc}^{-2}$. This is about twenty times larger than the corresponding value obtained for galaxies~\cite{Salucci19}. Lin and Loeb~\cite{Loeb} estimated $\rho_c r_c \approx 1.1 \times 10^3\, M_{\odot}\, {\rm pc}^{-2}$ for the Phoenix cluster, using multi-wavelength data obtained by the SPT collaboration~\cite{SPT2012}.
Using a model for self-interacting dark matter including annihilations, they also predicted the following relation between the surface density and $M_{200}$~\cite{Loeb} \begin{equation} \rho_c r_c = 41 M_{\odot} pc^{-2} \times \left(\frac{M_{200}}{10^{10}M_{\odot}}\right)^{0.18} \label{loebeq} \end{equation} Del Popolo et al also predicted~\cite{DelPopolo12,DelPopolo17} a similar relation between the dark matter column density and $M_{200}$, within the context of a secondary infall model~\cite{Delpopolo09} valid for cluster scale haloes \begin{equation} \log (S)= 0.16 \log \left(\frac{M_{200}}{10^{12}M_{\odot}}\right)+2.23 \label{delpopoloeq} \end{equation} The first systematic study of the correlation between $\rho_c$ and $r_c$ for an X-ray selected cluster sample, without explicitly assuming any dark matter density model, was done by Chan~\cite{Chan} (C14, hereafter). C14 first considered the X-ray selected HIFLUGCS cluster catalog consisting of ASCA and ROSAT observations~\cite{Chen07}, and selected 106 relaxed clusters from this catalog. From the hydrostatic equilibrium equation and parametric models for the gas density and temperature profiles, the total mass ($M(r)$) was obtained as a function of radius. The total density as a function of radius ($\rho(r)$) was then obtained from the total mass, assuming spherical symmetry. One premise in C14 is that the total mass is dominated by the dark matter contribution, while the stellar and gas mass can be ignored. $\rho_c$ was obtained by extrapolating the dark matter density distribution to $r=0$. The core radius was obtained by finding the radius ($r$) at which $\rho (r)=\rho_c/4$. This emulates the definition of $r_c$ in the Burkert profile~\cite{Burkert95}. Therefore, the estimates of the core density and radius were obtained in C14 without explicitly positing any dark matter profile.
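For orientation, Eqs.~\ref{loebeq} and \ref{delpopoloeq} can be evaluated directly; a short sketch for an illustrative cluster mass of $M_{200}=10^{15} M_{\odot}$ (the mass value here is an assumption chosen purely for illustration):

```python
import math

def surface_density_loeb(M200):
    """Lin & Loeb relation: rho_c * r_c in Msun/pc^2, with M200 in Msun."""
    return 41.0 * (M200 / 1e10) ** 0.18

def surface_density_delpopolo(M200):
    """Del Popolo relation: column density S in Msun/pc^2, with M200 in Msun."""
    return 10 ** (0.16 * math.log10(M200 / 1e12) + 2.23)

M200 = 1e15  # illustrative cluster mass in Msun
print(f"Lin & Loeb: {surface_density_loeb(M200):.0f} Msun/pc^2")       # ~326
print(f"Del Popolo: {surface_density_delpopolo(M200):.0f} Msun/pc^2")  # ~513
```

Both scalings predict cluster surface densities of a few hundred $M_{\odot}\,{\rm pc}^{-2}$, well above the single-galaxy value quoted in the Introduction.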
We note that from weak and strong lensing observations, galaxy clusters are estimated to have cored, or shallower than cuspy NFW, dark matter profiles~\cite{Newman12,DelPopolo14}. However, these results have been disputed~\cite{Schaller}, and some works have also found evidence for cuspy haloes in clusters~\cite{Caminha}. Therefore, there is no consensus on this issue~\cite{Andrade}. Nevertheless, no explicit assumptions about the dark matter profile were made in C14 while obtaining the dark matter core density and radius, although the gas density models used for deducing these were initially motivated by assuming isothermality and a King profile for the total mass distribution in galaxy clusters. We also note that in some models, for example the cusp-to-core transformation model~\cite{Ogiya} or self-interacting dark matter with annihilations~\cite{Loeb}, the product of the core density and core radius for the cored profile is the same as the product of the scale density and scale radius of cuspy NFW-like profiles. In their analysis, C14 used two different density profiles (the single-$\beta$ and double-$\beta$ models) for the gas density. They also did separate fits for both the cool-core and the non cool-core clusters. Using the double-$\beta$ model, they obtained $\rho_c \propto r_c^{-1.46 \pm 0.16}$ for the HIFLUGCS sample. Results from fits with other profiles for the same sample, as well as other samples, can be found in C14. Therefore, their result shows that the dark matter surface density is not constant for clusters. C14 also carried out a similar analysis on the LOCUSS cluster sample analyzed in Shan et al~\cite{Shan} and found that $\rho_c \propto r_c^{-1.64 \pm 0.10}$. Therefore, these results indicate that, unlike in single-galaxy systems, the dark matter surface density is not constant for galaxy clusters, and is about an order of magnitude larger than for single-galaxy systems.
We now implement the procedure recommended in C14 to determine $\rho_c$ and $r_c$ for a catalog of 12 galaxy clusters, selected using pointed X-ray and archival ROSAT observations by Vikhlinin et al~\cite{2006ApJ...640..691V} (V06, hereafter). Detailed parametric profiles for the gas density and temperature have been compiled by V06. This cluster sample has been used to constrain a plethora of modified gravity theories and also to test non-standard alternatives to the $\Lambda$CDM model~\cite{2014PhRvD..89j4011R,Khoury17,Bernal,Ng,Hodson17}. We have also previously used this sample to constrain the graviton mass~\cite{Gupta1} as well as to assess the importance of relativistic corrections to hydrostatic mass estimates~\cite{Gupta2}. Our work improves upon C14 in that we account for the baryonic mass distribution while estimating the dark matter halo properties. The outline of this manuscript is as follows. We describe the V06 cluster sample and associated models for the density and temperature profiles in Sect.~\ref{sec:data}. Our analysis and results for the relation between core radius and density can be found in Sect.~\ref{sec:analysis}. Comparison with various theoretical scenarios is discussed in Sect.~\ref{sec:theory}. We also test for a dependence on $M_{200}$ in Sect.~\ref{sec:m200}. We conclude in Sect.~\ref{sec:conclusions}. \section{Details of Chandra X-ray sample} \label{sec:data} V06 (see also Ref.~\cite{Vikhlinin05}) derived density and temperature profiles for a total of 13 nearby relaxed galaxy clusters (A133, A262, A383, A478, A907, A1413, A1795, A1991, A2029, A2390, MKW4, RX J1159+5531, USGC 2152) using measurements obtained from pointed observations with the Chandra X-ray satellite. The redshifts of these clusters range approximately up to $z=0.2$. These measurements extended up to very large radii of about $r_{500}$ for some of the clusters.
For lower redshift clusters, because of Chandra's limited field of view, archival ROSAT observations between 0.7-2.0 keV were used as a second independent data set to model the gas density distribution at large radii. The list of clusters for which ROSAT archival observations were used in conjunction with pointed Chandra observations includes A133, A262, A478, A1795, A1991, A2029, and MKW 4. The typical Chandra exposure times ranged from 30-130 ks. The temperatures span the range between 1 and 10 keV, and the masses from $(0.5-10) \times 10^{14} M_{\odot}$. V06 provided analytical estimates for the 3-D gas density and temperature profiles used to reconstruct the masses. The accuracy of the mass reconstruction, tested with simulations, was estimated to be within a few percent. From this sample of 13 clusters, we excluded USGC 2152 from our analysis, as not all the relevant data were available to us. More details about this cluster sample can be found in V06. \iffalse \begin{table} \caption{\label{tab:table1}% Cluster Sample } \begin{ruledtabular} \begin{tabular}{lc} \textrm{Cluster}& \textrm{z} \\ \colrule A133 & 0.0569 \\ A383 & 0.1883 \\ A478 & 0.0881 \\ A907 & 0.1603 \\ A1413 & 0.1429 \\ A1795 & 0.0622 \\ A1991 & 0.0592 \\ A2029 & 0.0779 \\ A2390 & 0.2302 \\ RX J1159+5531 & 0.0810 \\ MKW 4 & 0.0199 \\ \end{tabular} \end{ruledtabular} \end{table} \medskip \fi We now describe the three-dimensional models proposed by V06 for the gas density and temperature projected along the line of sight. These models can fit the observed X-ray surface brightness and projected temperature profiles. These parametric models are then used to derive the total gravitating mass of the clusters.
\subsection{Gas Density Model} The analytic expression used for the three-dimensional gas density distribution is a modified version of the single-$\beta$-model~\citep{1978A&A....70..677C}. These modifications were introduced to account for some additional features in the observed X-ray emission, such as a power-law cusp at the center and a steepening of the X-ray brightness for $r>0.3 r_{200}$. To model the core region, a second $\beta$-profile was added to increase the modeling freedom. This modified emission $\beta$-profile is then defined as, \begin{eqnarray} n_pn_e=n_0^2\frac{(r/r_c)^{-\alpha}}{(1+r^2/r_c^2)^{3\beta-\alpha/2}}\frac{1}{(1+r^\gamma/r_s^\gamma)^{\epsilon/\gamma}}\nonumber\\+\frac{n_{02}^2}{(1+r^2/r_{c2}^2)^{3\beta_2}} \label{eq:eq3} \end{eqnarray} where $n_p$ and $n_e$ denote the number densities of protons and electrons, respectively. The model in Eq.~\ref{eq:eq3} can independently fit the inner and outer cluster regions, and all the clusters in the V06 sample were adequately fit with a fixed value of $\gamma$ equal to three. A detailed description of all the other parameters in Eq.~\ref{eq:eq3}, as well as their values, can be found in V06. The density- and radius-related constants used to parametrize Eq.~\ref{eq:eq3} scale with the dimensionless Hubble parameter $h$ as $h^{1/2}$ and $h^{-1}$, respectively. V06 used $H_0$=72 km/sec/Mpc. We have used the same values for the best-fit parameters of all the terms in Eq.~\ref{eq:eq3} as in V06, where they can be found in Table 2. V06, however, notes that the parameters in Eq.~\ref{eq:eq3} are correlated, leading to degeneracies. However, the analytical expression in Eq.~\ref{eq:eq3} can adequately model the X-ray brightness for all clusters. No errors for the individual fit parameters have been provided in V06. The statistical uncertainties have been estimated using Monte Carlo simulations and found to be less than 9\%~\cite{2006ApJ...640..691V}.
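Eq.~\ref{eq:eq3} can be transcribed directly for anyone wishing to reproduce the profiles; the sketch below uses placeholder parameter values (the actual per-cluster best-fit values are those listed in Table 2 of V06):

```python
# Sketch of the modified beta-model of Eq. (eq3) for the emission measure
# profile n_p * n_e. Parameter values below are placeholders for illustration,
# NOT the V06 best fits (those are in Table 2 of V06).

def npne(r, n0, rc, alpha, beta, rs, eps, n02, rc2, beta2, gamma=3.0):
    """Emission measure n_p*n_e in cm^-6; r, rc, rs, rc2 in the same units."""
    term1 = (n0**2 * (r / rc) ** (-alpha)
             / (1 + r**2 / rc**2) ** (3 * beta - alpha / 2)
             / (1 + r**gamma / rs**gamma) ** (eps / gamma))
    term2 = n02**2 / (1 + r**2 / rc2**2) ** (3 * beta2)
    return term1 + term2

# Illustrative (not V06) parameters, radii in Mpc:
val = npne(r=0.1, n0=1e-2, rc=0.1, alpha=0.5, beta=0.6,
           rs=1.0, eps=3.0, n02=1e-3, rc2=0.03, beta2=1.0)
print(f"n_p n_e(0.1 Mpc) = {val:.3e} cm^-6")
```

Note that $\gamma$ is fixed to three here, matching the choice made for all clusters in V06.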
From the gas particle number density profile given by Eq.~\ref{eq:eq3}, the gas mass density can be obtained by assuming a cosmic plasma with primordial helium abundance and abundances of heavier elements $Z=0.2Z_\odot$ as, \begin{equation} \rho_g=1.624m_p(n_pn_e)^{1/2} \label{eq:eq4} \end{equation} \subsection{Temperature Profile Model} To calculate the total dynamical mass, we need the three-dimensional radial temperature profile, whereas X-ray observations can only constrain the projected two-dimensional profile. The reconstructed temperature profile in V06 consists of two different functions, one to model the central part and another to model the region outside the central cooling zone. A broken power law is used to model the temperature outside the central cooling region and is given as, \begin{equation} t(r)=\frac{(r/r_t)^{-a}}{\big[1+(r/r_t)^b\big]^{c/b}} \label{eq:eq5} \end{equation} The temperature decline in the central region can be expressed as~\citep{2001MNRAS.328L..37A}, \begin{equation} t_{cool}(r)=\frac{(x+T_{min}/T_0)}{x+1}, \quad x=\bigg(\frac{r}{r_{cool}}\bigg)^{a_{cool}} \label{eq:eq6} \end{equation} The three-dimensional temperature profile of the cluster is then given by: \begin{equation} T_{3D}(r)=T_0t_{cool}(r)t(r) \label{eq:eq7} \end{equation} Due to the large number (nine) of free parameters in the model, Eq.~\ref{eq:eq7} can adequately describe any smooth trend in the temperature profile. However, there are also degeneracies between the parameters. Therefore, again no errors were provided in V06 for the individual parameters describing the temperature profile. When performing the fits, the temperature data below an inner cutoff radius (usually between 10-40 kpc, with exact values tabulated in V06) were excluded, because the intracluster medium is expected to be multiphase at such small radii~\cite{2006ApJ...640..691V}. The best-fit parameters for this model can be found in Table 3 of V06.
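Eqs.~\ref{eq:eq5}--\ref{eq:eq7} can likewise be transcribed as follows; the parameter values here are again placeholders rather than the V06 best fits (those are in Table 3 of V06):

```python
# Sketch of the 3-D temperature model of Eqs. (eq5)-(eq7). All parameter
# values used below are placeholders for illustration, NOT V06 best fits.

def t_outer(r, rt, a, b, c):
    """Broken power law outside the central cooling region, Eq. (eq5)."""
    return (r / rt) ** (-a) / (1 + (r / rt) ** b) ** (c / b)

def t_cool(r, r_cool, a_cool, tmin_over_t0):
    """Central temperature decline, Eq. (eq6)."""
    x = (r / r_cool) ** a_cool
    return (x + tmin_over_t0) / (x + 1)

def T3D(r, T0, rt, a, b, c, r_cool, a_cool, tmin_over_t0):
    """Three-dimensional temperature (keV), Eq. (eq7); radii in Mpc."""
    return T0 * t_cool(r, r_cool, a_cool, tmin_over_t0) * t_outer(r, rt, a, b, c)

# For r >> r_cool the cooling factor tends to 1, so T3D -> T0 * t(r):
T = T3D(r=0.5, T0=8.0, rt=1.5, a=0.0, b=2.0, c=1.0,
        r_cool=0.05, a_cool=2.0, tmin_over_t0=0.4)
print(f"T3D(0.5 Mpc) = {T:.2f} keV")
```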
As can be seen in Figs 3-14 of V06, the analytical temperature profile agrees very well with the projected temperatures for all clusters at all radii greater than the inner cutoff radius. \iffalse \begin{table*} \caption{\label{tab:table3} Best fit parameters for the temperature profile} \begin{ruledtabular} \begin{tabular}{ccccccccc} Cluster & $T_0$ & $r_{t}$ & a & b & c & $T_{min}/T_0$ & $r_{cool}$ & $a_{cool}$ \\ & (keV) & (Mpc) & & & & & (kpc) & \\\hline A133 & 3.61 & 1.42 & 0.12 & 5.00 & 10.0 & 0.27 & 57 & 3.88\\ A383 & 8.78 & 3.03 & -0.14 & 1.44 & 8.0 & 0.75 & 81 & 6.17 \\ A478 & 11.06 & 0.27 & 0.02 & 5.00 & 0.4 & 0.38 & 129 & 1.60\\ A907 & 10.19 & 0.24 & 0.16 & 5.00 & 0.4 & 0.32 & 208 & 1.48\\ A1413 & 7.58 & 1.84 & 0.08 & 4.68 & 10.0 & 0.23 & 30 & 0.75\\ A1795 & 9.68 & 0.55 & 0.00 & 1.63 & 0.9 & 0.10 & 77 & 1.03\\ A1991 & 2.83 & 0.86 & 0.04 & 2.87 & 4.7 & 0.48 & 42 & 2.12 \\ A2029 & 16.19 & 3.04 & -0.03 & 1.57 & 5.9 & 0.10 & 93 & 0.48 \\ A2390 & 19.34 & 2.46 & -0.10 & 5.00 & 10.0 & 0.12 & 214 & 0.08 \\ RX J1159+5531 & 3.74 & 0.10 & 0.09 & 0.77 & 0.4 & 0.13 & 22 & 1.68\\ MKW 4 & 2.26 & 0.10 & -0.07 & 5.00 & 0.5 & 0.85 & 16 & 9.62 \\ \end{tabular} \end{ruledtabular} \end{table*} \fi \subsection{Mass and Density Profile in Clusters} \label{sec:procedure} The total mass of the galaxy cluster can be derived through the hydrostatic equilibrium equation, given the temperature and gas density models~\cite{Mantz}, \begin{equation} M(r)=-\frac{kT(r)r}{G\mu m_p}\bigg(\frac{d \ln \rho_g}{d \ln r} + \frac{d \ln T}{d \ln r}\bigg) \label{eq:eq8} \end{equation} where $M(r)$ is the mass within radius $r$; $T$ and $\rho_g$ denote the gas temperature and density; $\mu$ is the mean molecular weight, equal to 0.5954 as in V06; and $m_p$ is the mass of the proton. \par We can estimate the total dark matter mass distribution by subtracting the gas and stellar mass from the total mass given by Eq.~\ref{eq:eq8}.
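Eq.~\ref{eq:eq8} is easy to evaluate numerically once $T(r)$ and $\rho_g(r)$ are tabulated on a radial grid. The sketch below, which uses finite-difference logarithmic derivatives, is a minimal illustration; the inputs are assumed tabulated profiles, not the V06 models.

```python
import numpy as np

# Numerical sketch of the hydrostatic-equilibrium mass, Eq. (8).  T is in keV
# and rho_g in arbitrary units (only its logarithmic slope enters the
# formula).  cgs constants; mu = 0.5954 as in V06.
G = 6.674e-8       # cm^3 g^-1 s^-2
M_P = 1.6726e-24   # g
KEV = 1.6022e-9    # erg
KPC = 3.0857e21    # cm
MU = 0.5954        # mean molecular weight

def hydrostatic_mass(r_kpc, T_keV, rho_g):
    """M(<r) in grams from Eq. (8), with np.gradient for the log-slopes."""
    lnr = np.log(r_kpc)
    dln_rho = np.gradient(np.log(rho_g), lnr)
    dln_T = np.gradient(np.log(T_keV), lnr)
    return -(T_keV * KEV) * (r_kpc * KPC) / (G * MU * M_P) * (dln_rho + dln_T)
```

As a sanity check, for an isothermal gas with $\rho_g \propto r^{-2}$ the formula reduces to the singular isothermal sphere result $M(r)=2kTr/(G\mu m_p)$, which the numerical log-derivatives recover.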
The gas mass can be simply obtained by assuming spherical symmetry and integrating the gas density profile ($\rho_g(r)$ from Eq.~\ref{eq:eq4}) \begin{equation} M_{gas}=\int 4\pi r^2 \rho_g(r)dr \label{eq:eq9} \end{equation} The gas mass and total mass can hence be robustly determined for each cluster using Eqs.~\ref{eq:eq9} and \ref{eq:eq8}. V06, however, cautions that A2390 reveals X-ray cavities at small radii due to AGN activity~\cite{Allen01}. Therefore, its gas mass will be overestimated and the total mass underestimated. We do not account for this in our estimates of the dark matter core radius and density for this cluster. To calculate the stellar mass at any radius ($M_{star}(r)$), we first estimate the star mass at $r=r_{500}$, where $r_{500}$ is the radius at which the overdensity is equal to 500, with the overdensity again defined relative to the critical density for an Einstein-DeSitter universe at the cluster redshift $z$. This star mass at $r_{500}$ was estimated using the empirical relation proposed in Ref.~\cite{ytlin}, assuming $H_0$=71 km/sec/Mpc \begin{equation} \frac{M_{star}(r=r_{500})}{10^{12}M_\odot}\approx (1.8 \pm 0.1) \bigg(\frac{M_{500}}{10^{14}M_\odot}\bigg)^{0.71 \pm 0.04 } \label{eq:eq10} \end{equation} The above empirical relation was determined by comparing $M_{500}$ and the star mass for a sample of about 100 clusters in the redshift range $z=0.1-0.6$. $M_{500}$ was determined using the $Y_X$-$M_{500}$ scaling relation from Ref.~\cite{Kravtsov}. The stellar mass was estimated from the WISE or 2MASS (depending on the redshift range) luminosity and the Bruzual-Charlot stellar population synthesis models~\cite{Charlot}. Lin et al also find no redshift evolution of this empirical relation~\cite{ytlin}. To estimate the star mass at $r_{500}$, we used the $r_{500}$ values tabulated for each cluster in V06.
$r_{500}$ was estimated for every cluster using the cosmological parameters used in V06: $\Omega_M=0.3,\Omega_{\Lambda}=0.7$ and $h=0.72$. From Eq.~\ref{eq:eq10}, one can estimate the star mass at any radius by assuming an isothermal profile~\cite{2014PhRvD..89j4011R} \begin{equation} M_{star} (r)=\left(\frac{r}{r_{500}}\right)M_{star}(r=r_{500}) \end{equation} Alternatively, the stellar mass can also be estimated using the stellar-to-gas mass relation obtained in Chiu et al~\cite{Chiu18}, as done in Ref.~\cite{Tian}. However, since the stellar mass contribution to the total mass is negligible, this will not make a large difference to the final result. Therefore, once we estimate the star and gas mass, we can determine the total dark matter mass distribution ($M_{DM}(r)$) at any radius ($r$) by subtracting the gas and star mass from the total mass distribution ($M(r)$) calculated in Eq.~\ref{eq:eq8}: \begin{equation} M_{DM} (r)=M (r)-M_{gas} (r)-M_{star} (r) \label{eq:eq11} \end{equation} From Eq.~\ref{eq:eq11}, the density profile of the dark matter halo can be easily calculated by assuming spherical symmetry: \begin{equation} \rho_{DM} (r) =\frac{1}{4\pi r^2}\frac{dM_{DM}}{dr} \label{eq:eq12} \end{equation} \par To obtain $\rho_c$ and $r_c$ from $\rho_{DM}(r)$, we follow the same prescription as in C14, which we now describe. To recap, $\rho_c$ is estimated from the dark matter density at the centre of the cluster. Therefore, similar to C14, we extrapolated our dark matter density profile $\rho_{DM}(r)$ (obtained from Eq.~\ref{eq:eq12}) to $r=0$ in order to calculate $\rho_c$. The core radius $r_c$ was estimated by determining the radius at which the local dark matter density (defined in Eq.~\ref{eq:eq12}) reaches a quarter of its central value. As mentioned earlier, this is how $r_c$ is defined in the Burkert profile~\citep{Burkert95,burkert2000structure,Gentile}.
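The steps of Eqs.~\ref{eq:eq9}-\ref{eq:eq12} and the core extraction described above can be sketched on a discrete radial grid as follows. This is a toy illustration: the central density is approximated by the innermost grid point rather than a formal extrapolation to $r=0$, and all inputs (including the Burkert test profile in the usage check) are hypothetical, not the V06 measurements.

```python
import numpy as np

# Sketch of Eqs. (9)-(12) plus the C14-style core extraction: rho_c from the
# innermost density, r_c where rho_DM falls to a quarter of rho_c (the
# Burkert convention).  All numbers are illustrative placeholders.
def cumulative_mass(r, rho):
    """M(<r) = int 4 pi r'^2 rho dr' (trapezoidal), as in Eq. (9)."""
    f = 4.0 * np.pi * r**2 * rho
    return np.concatenate(([0.0],
                           np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r))))

def star_mass(r, r500, M500):
    """Central values of Eq. (10), scaled isothermally (M_star ~ r)."""
    mstar500 = 1.8e12 * (M500 / 1e14)**0.71   # M_sun
    return (r / r500) * mstar500

def core_parameters(r, M_dm):
    """rho_c and r_c from the dark matter mass profile via Eq. (12)."""
    rho = np.gradient(M_dm, r) / (4.0 * np.pi * r**2)
    rho_c = rho[0]                # innermost point stands in for r -> 0
    # rho decreases with r, so reverse both arrays for np.interp
    r_c = np.interp(0.25 * rho_c, rho[::-1], r[::-1])
    return rho_c, r_c
```

Applied to a synthetic Burkert halo, the recovered $r_c$ coincides with the Burkert scale radius, since the Burkert density is exactly a quarter of its central value there.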
In this work, similar to C14, we also estimate $\rho_c$ and $r_c$ in a model-independent way, without explicitly positing any dark matter profile. However, we should point out that the single-$\beta$ profile (used in C14 and also the starting point for Eq.~\ref{eq:eq3}) for the gas distribution was originally derived from the equation of hydrostatic equilibrium for an isothermal gas, assuming that the total cluster matter distribution follows a King profile~\cite{1978A&A....70..677C}. Both C14 and this work have used an augmented version of this (double and modified $\beta$-profile) to fit the X-ray observations. Nevertheless, the same $\beta$-profile is also usually used in testing modified gravity theories or alternatives to $\Lambda$CDM with galaxy cluster observations~\cite{2014PhRvD..89j4011R,Gupta2,Hodson17,Khoury17}. However, a truly {\it ab-initio} determination of the dark matter halo surface density is not possible. We note that we have assumed spherical symmetry in the calculation of the gas and total mass. Although we expect galaxy clusters to be triaxial~\cite{Jing,Limousin,Battaglia}, one usually resorts to spherically averaged measurements of clusters, including in V06 and C14, and also for all other tests of modified gravity using relaxed clusters. This is done for simplicity, and under the assumption that errors due to non-spherical effects are small. Furthermore, the intrinsic shapes and orientations of clusters are not directly observable, and reconstructing them from X-ray images, which are inherently 2-D in nature, can be very challenging~\cite{Limousin,Khatri}. Nevertheless, a large number of groups have investigated the errors in the determination of the total mass due to the spherical symmetry assumption~\cite{Piffaretti,Gavazzi,Clowe,Battaglia,Buote1,Buote2,Limousin} (and references therein). The actual error depends on the intrinsic shape of the cluster and its orientation along the line of sight.
All these groups find that spherical averaging causes errors $\leq 5\%$ for all different intrinsic shapes and viewing orientations~\cite{Piffaretti,Gavazzi,Clowe,Battaglia,Buote1,Buote2,Limousin}. We therefore expect the same for the clusters in the V06 sample. We now use Eq.~\ref{eq:eq12} to determine $\rho_c$ and $r_c$ for the 12 clusters in V06. Errors in the observed gas temperature at a fixed number of radii have also been provided in V06 and made available to us (A. Vikhlinin, private communication). These were used to propagate the errors in the values of $\rho_c$ and $r_c$, by varying the temperatures within the $1\sigma$ error bars. As no errors in the parameters of the gas density and temperature profiles have been provided in V06, these were not used to calculate the errors in $\rho_c$ and $r_c$. \section{Analysis and Results} \label{sec:analysis} \subsection{Determination of scaling relations} The resulting values of $\rho_c$ and $r_c$, along with $1\sigma$ error bars, for each of the 12 clusters estimated using the procedure outlined in Sect.~\ref{sec:procedure} can be found in Table \ref{tab:table4}. We note that our values for $\rho_c$ and $r_c$ are of the same order of magnitude as for other galaxy cluster systems estimated in C14. Our estimated dark matter surface density is about an order of magnitude larger than that found for galaxy systems~\cite{Donato,Salucci19}. Figure \ref{fig:f3} shows the log $\rho_c$ versus log $r_c$ plot, and we observe a tight scaling relation between the two. We also find that $\rho_c$ is inversely proportional to $r_c$, in agreement with C14. To determine the scaling relation between the two, we perform a linear regression ($y=mx+c$) in log-log space. Here, $y=\ln \rho_c$ and $x=\ln r_c$. Unlike C14, we also allow for an intrinsic scatter ($\sigma_{int}$) in the linear fit. This intrinsic scatter is treated as a free parameter and is added in quadrature to the observational uncertainties ($\sigma_y$ and $\sigma_x$).
It can be determined along with the slope and intercept by maximizing the log-likelihood~\cite{Tian,Hoekstra}. The log-likelihood function ($\ln L$) can be written as, \begin{eqnarray} -2\ln L &=& \large{\sum_i} \ln 2\pi\sigma_i^2 + \large{\sum_i} \frac{[y_i-(mx_i+c)]^2}{\sigma_i^2} \label{eq:eq13} \\ \sigma_i^2 &=& \sigma_{y_i}^2+m^2\sigma_{x_i}^2+\sigma_{int}^2 \end{eqnarray} The maximization of the log-likelihood was done using the {\tt emcee} MCMC sampler~\cite{emcee} with uniform priors. Our best-fit value for the scaling relation is as follows: \begin{eqnarray} \ln\bigg(\frac{\rho_c}{M_{\odot} pc^{-3}}\bigg)=(-1.08^{+0.06}_{-0.05}) \ln\bigg(\frac{r_c}{kpc}\bigg)\nonumber\\ +(0.4^{+0.24}_{-0.25}) \label{eq:eq14} \end{eqnarray} with an intrinsic scatter $\sigma_{int}=18.5^{+4.4}_{-6.5} \%$. Therefore, we obtain a much shallower slope for the core density-radius scaling relation compared to the slope obtained in C14, who found $\rho_c \sim r_c^{-1.46}$ for the HIFLUGCS sample, after assuming a double-$\beta$ profile. Our results show deviations from a constant dark matter surface density at only about 1.3-1.4$\sigma$. A comparison of our result with the previous fits carried out in C14 can be found in Table~\ref{tab:summary}. As we can see, all the fits done in C14 show a much steeper slope than our result.
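The maximization of Eq.~\ref{eq:eq13} can be illustrated on synthetic data. The sketch below uses a brute-force grid search in place of the {\tt emcee} sampler used for the actual fit; the data points, error bars, grids and "true" parameters are all hypothetical.

```python
import numpy as np

# Toy maximization of the likelihood of Eq. (13) with intrinsic scatter,
# using a coarse grid search instead of MCMC.  The synthetic (x, y) points
# stand in for (ln r_c, ln rho_c); the true slope of -1.1 is arbitrary.
def minus_two_lnL(m, c, s_int, x, y, sx, sy):
    s2 = sy**2 + m**2 * sx**2 + s_int**2      # variance per point
    return np.sum(np.log(2.0 * np.pi * s2) + (y - (m * x + c))**2 / s2)

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 6.0, 30)
y = -1.1 * x + 0.4 + rng.normal(0.0, 0.2, size=30)   # 20% intrinsic scatter
sx = np.full(30, 0.05)
sy = np.full(30, 0.05)

# minimize -2 ln L over a (slope, intercept, scatter) grid
best = min(((minus_two_lnL(m, c, s, x, y, sx, sy), m, c, s)
            for m in np.linspace(-2.0, 0.0, 41)
            for c in np.linspace(-1.0, 2.0, 31)
            for s in np.linspace(0.02, 0.5, 25)))
_, m_hat, c_hat, s_hat = best
```

The grid search recovers the injected slope, intercept and intrinsic scatter to within the grid spacing; an MCMC sampler additionally yields the asymmetric credible intervals quoted in Eq.~\ref{eq:eq14}.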
\begin{table}[h] \begin{ruledtabular} \begin{tabular}{ccc} Cluster & $\rho_c$ & $r_c$ \\ & $10^{-3}M_{\odot} pc^{-3}$ & kpc \\\hline A133 & 11.68$\substack{ +0.02 \\ -0.02 }$ & 102.01$\substack{+0.08 \\ -0.11 }$ \\ A262 & 5.17$\substack{ +0.87\\ -0.89}$ & 136.36$\substack{+5.40 \\-5.49 }$\\ A383 & 9.63$\substack{+0.62 \\ -0.78}$ & 121.45$\substack{ +3.95\\-4.94 }$\\ A478 & 3.39$\substack{+0.72 \\ -0.84}$ & 286.14$\substack{+30.41 \\-35.62 }$ \\ A907 & 4.15$\substack{ +0.42\\ -0.51 }$ & 208.96$\substack{+10.66 \\-12.98 }$\\ A1413 & 6.27$\substack{ +0.49\\ -0.53 }$ & 154.68$\substack{+6.06 \\ -6.61 }$\\ A1795 & 7.15$\substack{ +0.68 \\ -0.79 }$ & 131.89$\substack{+6.32 \\ -7.33}$\\ A1991 & 111.22$\substack{ +0.83\\ -0.92 }$ & 11.15$\substack{+0.04 \\-0.04 }$\\ A2029 & 9.39$\substack{ +0.66\\ -0.76 }$ & 134.31$\substack{+4.72 \\-5.45 }$ \\ A2390 & 5.83$\substack{ +0.22\\ -0.23 }$ & 137.18$\substack{+2.60 \\ -2.81}$\\ RX J1159+5531 & 41.06$\substack{ +1.33\\-1.19 }$ & 34.07$\substack{+0.55 \\-0.49 }$\\ MKW 4 & 102.4$\substack{ +0.92\\-0.98 }$ & 10.31$\substack{+0.04 \\ -0.04}$\\ \end{tabular} \end{ruledtabular} \caption{\label{tab:table4} Estimated values for the core density ($\rho_c$) and the core radius ($r_c$) for the V06 cluster sample.} \end{table} \begin{table}[h] \begin{ruledtabular} \begin{tabular}{ccc} Slope & Intercept & Cluster Sample \\ \hline $-1.47\pm 0.04$ & $0.75 \pm 0.08$ & ROSAT (single-$\beta$ profile) \\ $-1.46\pm 0.16$ & $0.88 \pm 0.33$ & ROSAT (double-$\beta$ profile) \\ $-1.30\pm 0.07$ & $0.6 \pm 0.11$ & ROSAT (cool-core clusters) \\ $-1.50\pm 0.24$ & $0.96 \pm 0.54$ & ROSAT (non cool-core clusters) \\ $-1.64\pm 0.1$ & $1.58 \pm 0.21$ & LOCUSS \\ $-1.08^{+0.06}_{-0.05}$ & $0.4^{+0.24}_{-0.25}$ & Chandra (this work) \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:summary}Summary of results for a linear regression of $\ln(\rho_c)$ versus $\ln (r_c)$ from different cluster samples. 
All the other results are from C14.} \end{table} \begin{figure*} \includegraphics{sc_rel.pdf} \caption{$\ln \rho_c$ versus $\ln r_c$ from the V06 cluster sample~\cite{2006ApJ...640..691V}. The units for $\rho_c$ and $r_c$ are $M_\odot pc^{-3}$ and kpc respectively. The black line represents the fitted line of our analysis ($\rho_c \propto r_c^{-1.08}$), and can be compared with the dashed magenta line, which has a slope equal to -1. The blue and orange lines show the slope determined by Chan~\cite{Chan} for the ROSAT catalog and the Shan et al~\cite{Shan} sample respectively. A summary of all results in the literature can be found in Table~\ref{tab:summary}.} \label{fig:f3} \end{figure*} \subsection{Comparison with C14} \label{sec:C14comparison} Given the large discrepancies in the slope of our scaling relation with respect to C14, we investigate the reason for these differences. As mentioned earlier, C14 assumed that the dark matter mass in clusters is the same as the total mass. This assumption is not correct, as the gas mass fraction in the Chandra cluster sample ranges from (10-15)\%~\cite{2006ApJ...640..691V}. Furthermore, C14 did not account for any intrinsic scatter in the scaling relation. Instead, the slope and intercept were determined using the BCES method~\cite{BCES}. \rthis{Among the multiple implementations of the BCES technique~\cite{BCES}, C14 used the bisector method.} To reconcile the differences between our result and C14, we now redo our analysis in exactly the same way as C14, and also try a few variants to understand the impact of each of the different assumptions between our methods. Therefore, for this set of relaxed clusters, we also carried out the same analysis with the BCES\rthis{-bisector} method, both without and with subtracting the gas and stellar contributions.
We implemented the BCES\rthis{-bisector} technique using the {\tt bces} module~\cite{BCES,2012Sci...338.1445N} in Python~\footnote{\url{https://github.com/rsnemmen/BCES}}. We note that the BCES method~\cite{BCES} does not account for intrinsic scatter when the abscissa contains errors~\cite{Ascenso}. \rthis{Furthermore, the authors of the {\tt bces} module have also cautioned against using the bisector method in BCES, as this method is self-inconsistent~\cite{Hogg}.} Finally, we also did a fit with the same likelihood as in Eq.~\ref{eq:eq13}, without accounting for intrinsic scatter but by subtracting the gas and star mass, and also vice-versa. \rthis{Again, we used the {\tt emcee}~\cite{emcee} module for carrying out the maximization of the log likelihood.} A summary of the resulting slopes using these different combinations of assumptions can be found in Table~\ref{tab:table3i}. When we implement the BCES method and do not subtract the gas and star mass (which exactly duplicates the C14 procedure), our value for the slope $(-1.44 \pm 0.18)$ is consistent with the value found by C14 for the ROSAT sample. Even when we maximize the log-likelihood, we get a value for the slope $(-1.37 \pm 0.02)$ closer to that found in C14. With both the BCES method and the maximization of the log likelihood (without intrinsic scatter), we get a near constant value for the dark matter surface density only when we subtract the gas and star mass. Finally, if we do not subtract the gas and star mass, but do a fit using Eq.~\ref{eq:eq13} to allow for intrinsic scatter, we get a slope \rthis{consistent with} -1 (within $1\sigma$). Therefore, we conclude that we can reproduce nearly the same value for the slope of the regression relation between $\ln (\rho_c)$ and $\ln (r_c)$ as C14, when we use the BCES\rthis{-bisector} method (which does not fit for an intrinsic scatter), and also assume that the total dark matter mass is the same as the total cluster mass.
However, to correctly estimate the scaling relations, one must subtract the gas and star mass estimates and also allow for an intrinsic scatter, as we have done. \begin{table}[h] \begin{ruledtabular} \begin{tabular}{ccc} Method & Gas/Star Mass & Slope \\ &Subtracted?&\\ \hline BCES Method & No & $-1.44\pm0.18$ \\ BCES Method & Yes & $-1.09\pm0.02$ \\ W/o Intrinsic Scatter & No & $-1.37\pm0.02$ \\ W/o Intrinsic Scatter & Yes & $-0.99\pm 0.003$ \\ With Intrinsic Scatter & No & $-1.11\pm 0.29$ \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:table3i}Summary of results for a linear regression of $\ln\rho_c$ versus $\ln r_c$ using the Chandra X-ray sample by choosing combinations of different fitting methods (BCES\rthis{-bisector} vs maximizing Eq.~\ref{eq:eq13}) as well as other assumptions related to intrinsic scatter and subtraction of gas/star mass. When we use the BCES method and do not exclude gas/star mass, our result for the slope agrees with C14 for the ROSAT cluster sample (cf. Table~\ref{tab:summary}).} \end{table} \section{Comparison with theoretical models} \label{sec:theory} Our results from the previous section show that we see deviations from a constant halo surface density only at about 1.4$\sigma$. However, the resulting value of our halo surface density is about ten times larger than for galaxies. We briefly discuss some theoretical models which are consistent with such a scenario. The problems with $\Lambda$CDM at small scales can be solved by self-interacting dark matter with a velocity-dependent cross-section ranging from $\sigma/m \approx 2 cm^2/ g$ on galaxy scales to $\sigma/m \approx 0.1 cm^2/ g$ on cluster scales~\cite{Kaplinghat15,Tulin}. C14 had argued that such velocity-dependent self-interacting cross-sections for dark matter (SIDM)~\cite{Vogelsberger,Weiner,Rocha,Kaplinghat15} are ruled out, if the observed core is produced due to dark matter self-interactions.
The reason for this is that their scaling relations using the ROSAT sample showed that the halo surface density scales with $r_c$ as $r_c^{-0.46}$. However, combining the $r_c-V_{max}$ and $\rho_c-V_{max}$ scaling relations in ~\cite{Rocha}, one gets that the halo surface density scales with $r_c^{0.6}$, showing the opposite trend. However, as we showed in the previous section, C14's scaling relation was obtained using incorrect assumptions. Furthermore, one cannot rule out SIDM, by comparing only cluster-based halo scaling relations with simulations, since the dynamic range for $V_{max}$ using only clusters is not large enough to test the scaling relations over a wide range of values. Using cosmological simulations of self-interacting dark matter, Rocha et al~\cite{Rocha} showed that $\frac{\sigma}{m} \approx 0.1 cm^2/g$ correctly predicts cluster scale halo surface densities comparable to the observed ones in this work. Kamada et al~\cite{Kaplinghat} pointed out that for $\frac{\sigma}{m}$ of 3~$cm^2/g$, the halo surface density for galaxies scales as $V_{max}^{0.7}$, and assuming $V_{max}$ in the range of 20-100 km/sec, one gets values for the halo surface density for galaxies in the same ballpark as found by observations~\cite{Donato,Salucci19}. For galaxy clusters $V_{max}$ is usually about one to two orders of magnitude larger than for galaxies. Therefore, a simple extrapolation (along with a gradual decrease in cross-section as a function of velocity) to cluster scales should provide the correct order of magnitude for the halo surface density on cluster scales. Of course, a more detailed comparison is beyond the scope of this work and one would need to calculate the surface density using mock catalogs from the latest SIDM simulations. 
Lin and Loeb~\cite{Loeb} proposed another semi-analytical model of self-interacting dark matter involving dark matter annihilations, wherein they provided an empirical formula for the dark matter core halo surface density as a function of halo mass (Eq.~\ref{loebeq}). They showed that this halo surface density agrees with galaxy scale observations~\cite{Donato, Salucci19} as well as with the data for one galaxy cluster (Phoenix)~\cite{SPT2012}. Since our halo surface densities are approximately the same as that for the Phoenix cluster, we argue that the model proposed in Ref.~\cite{Loeb} correctly predicts the one order of magnitude increase in the observed surface density, as we go from galaxy-scale to cluster-scale haloes. This model also predicts a mild dependence on $M_{200}$, which we test in the next section. Del Popolo et al have argued in a series of works~\cite{DelPopolo12,DelPopolo14,DelPopolo17,DelPopolo20} that Eq.~\ref{delpopoloeq}, which can be explained within the context of the secondary infall model~\cite{Delpopolo09}, agrees with the halo surface density of galaxies as well as clusters, by comparing with the data for galaxies in Refs.~\cite{Donato,Gentile} along with additional data sets~\cite{Napolitano,DelPopolo12,Saburova} (which they analyzed themselves), and also with the galaxy cluster data compiled in Ref.~\cite{Boyarsky}. They however fit the column density instead of the surface density, and find that the column density shows a mild dependence on the galaxy mass and galaxy magnitude, which can be easily explained using their model. We also note that Eq.~\ref{delpopoloeq} gives the right order of magnitude for the halo surface density, which we obtain for the Chandra cluster sample. For other models within and beyond $\Lambda$CDM, which explain a constant halo surface density on galactic scales~\cite{Ogiya,Famaey,Baushev,Matos,Berezhiani,Milgrom}, we could not find any definitive predictions therein for cluster scale halo surface densities.
Finally, we note that most recently, Burkert pointed out that for fuzzy dark matter models, the halo surface density scales inversely with the cube of the core radius and is therefore in complete disagreement with observations~\cite{Burkert20}. Hence, the constant dark matter surface density observations~\cite{Donato,Salucci19} are in conflict with fuzzy dark matter scenarios. Therefore, we conclude that the results in this work in conjunction with the constant halo surface density values on galaxy scales, are consistent with velocity-dependent self-interacting dark matter models as well as the secondary infall model (within $\Lambda$CDM), but are in tension with fuzzy dark matter. For a more definitive test with various theoretical models one needs to extend this analysis to galaxy groups, which bridge the mass gap between galaxies and clusters, and then compare with predictions from simulations. \section{Dependence on $M_{200}$} \label{sec:m200} We now use our results for $\rho_c$ and $r_c$ to check for correlation with $M_{200}$, as suggested in some works~\cite{DelPopolo12,Loeb}. The first step in doing this is to estimate $M_{200}$ from $M_{500}$. In V06, the masses ($M_{500}$) and concentration parameters ($c_{500}$) for the overdensity (with respect to the critical density) level $\Delta=500$ and its corresponding radius ($r_{500}$) have already been derived, where $c_{500}$ was obtained using the mass-concentration relations in ~\cite{Dolag}. 
We have determined the $M_{200}$ values using the same prescription as in Ref.~\cite{2017ApJ...844..101A}, which assumes an NFW profile \begin{equation} M_{200}=M_{500}\frac{f(c_{200})}{f(c_{500})} \label{eq:eq15} \end{equation} where $f(c)$ is a function of the concentration $c$ and the over-density ($\Delta$), and is given by~\cite{2017ApJ...844..101A} $$f(c_\Delta)=\ln(1+c_\Delta)-\frac{c_\Delta}{1+c_\Delta}$$ The concentration at $\Delta=200$, $c_{200}$, was obtained by solving the following equation \begin{equation} \frac{c_{200}^3}{\ln(1+c_{200})-\frac{c_{200}}{1+c_{200}}}=\frac{3\rho_s}{200\rho_{c,z}} \label{eq:eq16a} \end{equation} where the NFW density scale parameter ($\rho_s$) and scale radius were fixed for each cluster and are given by $$\rho_s=\frac{M_{500}}{4\pi r_s^3} \quad \rm{with} \quad r_s=\frac{r_{500}}{c_{500}}$$ and $\rho_{c,z}$ is determined from $$\rho_{c,z}=\frac{3M_{500}}{2000\pi r_{500}^3}$$ The $M_{500}$ values for the clusters A262 and MKW4 were unavailable in V06. Hence, we calculated them by extrapolating the mass profiles to the corresponding $r_{500}$ values. To calculate the error in $M_{200}$, we propagated the errors in $M_{500}$ and $c_{500}$ provided in V06. The relation between the dark matter column density, $S=\rho_c r_c$, and $M_{200}$ in logarithmic space is shown in Fig \ref{fig:f4}. We have again done a linear regression with $y= \ln (\rho_c r_c)$ and $x=\ln M_{200}$, and maximized the log-likelihood function (same as Eq.~\ref{eq:eq13}) using the {\tt emcee} MCMC sampler with uniform priors. The best-fit parameters thus obtained are, \begin{eqnarray} \ln\bigg(\frac{\rho_c r_c}{M_{\odot} pc^{-2}}\bigg)=(-0.07^{+0.05}_{-0.06}) \ln\bigg(\frac{M_{200}}{M_\odot}\bigg)\nonumber\\ +(9.41^{+2.07}_{-1.80}) \label{eq:eq16} \end{eqnarray} The intrinsic scatter for this fit is $17^{+4.0}_{-6.0} \%$.
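The conversion of Eqs.~\ref{eq:eq15}-\ref{eq:eq16a} can be sketched as follows. The left-hand side of Eq.~\ref{eq:eq16a} is monotonic in the concentration, so a simple bisection suffices; the numerical inputs in the usage check are made up for illustration, not the V06 values.

```python
import math

# Sketch of the M500 -> M200 conversion of Eqs. (15)-(16a) for an NFW halo.
def f_nfw(c):
    """f(c) = ln(1+c) - c/(1+c), as in Eq. (15)."""
    return math.log(1.0 + c) - c / (1.0 + c)

def solve_c200(rhs, lo=0.05, hi=100.0, iters=100):
    """Root of c^3/f_nfw(c) = rhs (Eq. 16a); the left side is monotonic."""
    g = lambda c: c**3 / f_nfw(c) - rhs
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid          # root lies in [lo, mid]
        else:
            lo = mid          # root lies in [mid, hi]
    return 0.5 * (lo + hi)

def m200_from_m500(M500, c500, c200):
    """Eq. (15): rescale the mass by the ratio of f(c) values."""
    return M500 * f_nfw(c200) / f_nfw(c500)
```

Since $r_{200}>r_{500}$ for the same halo, $c_{200}>c_{500}$ and, because $f(c)$ is increasing, Eq.~\ref{eq:eq15} always yields $M_{200}>M_{500}$.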
We now fit this data to two scaling relations predicted by two independent theoretical scenarios proposed in the literature for the dark matter halo surface density. Del Popolo et al~\cite{DelPopolo14} found, after applying the secondary infall model proposed in Ref.~\cite{Delpopolo09} to cluster data~\cite{Boyarsky09}, that $S \propto M_{200}^{0.16}$, where $S$ is the dark matter column density. Lin and Loeb deduced from numerical simulations of self-interacting dark matter with annihilations that $S \propto M_{200}^{0.18}$, where $S$ is the product of the halo core density and radius~\cite{Loeb}. However, we should caution that although the definition of core density in Ref.~\cite{Loeb} is the same as ours, the core radius defined in Ref.~\cite{Loeb} could differ from the definition in the Burkert profile, depending on the initial scale radius in the dark matter halo profile~\cite{Loeb}. Similarly, $S$ in Eq.~\ref{delpopoloeq} is the dark matter column density, whereas what we have estimated is the dark matter halo surface density. Although both give about the same values for both cuspy and cored profiles~\cite{DelPopolo12}, they are not exactly the same quantities. Nevertheless, we would like to test for the correlation with $M_{200}$ as predicted in both these works. We note that in the Lin and Loeb model~\cite{Loeb}, there is also a slight dependence of the surface density on redshift (see Fig. 2 of Ref.~\cite{Loeb}). However, since no analytic formula for the variation with redshift is provided, we do not account for this. We further point out that these two relations are not exhaustive, and other proposed scaling relations for the dark matter surface density as a function of halo mass are discussed in Ref.~\cite{DelPopolo12}. However, when we compare our estimated surface density with $M_{200}$, we find a slight decrease in the dark matter surface density with $M_{200}$, although the decrease is not significant (cf. Table~\ref{tab:summary2}).
Therefore, at face value, our results would not be consistent with these predictions. To carry out a more definitive test, we now try to fit our data to these relations, by using the same slopes (0.18 and 0.16) as predicted by these models, with only the intercept and intrinsic scatter as free parameters. We then do a model comparison with our best fits using the AIC and BIC information criteria~\cite{Liddle07}. AIC and BIC are defined as follows~\cite{Liddle07}: \begin{eqnarray} \rm{AIC} &=& -2\ln L_{max} + 2p + \frac{2p(p+1)}{N-p-1} \\ \rm{BIC} &=& -2\ln L_{max} + p \ln N, \end{eqnarray} \noindent where $N$ is the total number of data points, $p$ is the number of free parameters in each model, and $L_{max}$ is the maximum likelihood. When comparing two models, the model with the smaller value of AIC and BIC is considered the favored one. The best-fit results for these two scaling relations can be found in Table~\ref{tab:summary2}. We find that the $\Delta$AIC and $\Delta$BIC between our best fit and those for the other scaling relations are between 6-7, wherein our fit has the lowest value, indicating strong evidence for our fit as compared to the relations proposed in Refs.~\cite{DelPopolo12,Loeb}. We note that we would obtain poorer fits for other scaling relations, which predict a steeper dependence on halo mass, e.g.~\cite{Boyarsky09}. We should, however, caution that the dynamical range in mass for our cluster sample is not large, and for a more definitive test, lower mass samples should be included. A comparison of our best fit along with a fit to the theoretical relation in Lin and Loeb~\cite{Loeb} can be found in Fig.~\ref{fig:f4}. In the same figure, we also show for comparison the constant value for the surface density obtained for single galaxy systems using the latest data~\cite{Salucci19}.
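The information criteria defined above are simple to evaluate; a minimal sketch, with $\ln L_{max}$, $p$ and $N$ as the inputs named in the text:

```python
import math

# The corrected AIC and the BIC exactly as defined above: lnL_max is the
# maximum log-likelihood, p the number of free parameters, N the sample size.
def aic_c(lnL_max, p, N):
    return -2.0 * lnL_max + 2.0 * p + 2.0 * p * (p + 1.0) / (N - p - 1.0)

def bic(lnL_max, p, N):
    return -2.0 * lnL_max + p * math.log(N)
```

At equal maximum likelihood, both criteria penalize the model with more free parameters, which is why fixing the slope in the comparison fits changes $p$ and hence the penalty term.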
\begin{table}[h] \begin{ruledtabular} \begin{tabular}{ccccc} Model & Slope & $\sigma_{int}$ & AIC & BIC \\ \hline Lin \& Loeb~\cite{Loeb} & 0.18 & 28\% & 12.7 & 14.7 \\ Del Popolo et al~\cite{DelPopolo12}& 0.16 & 26\% & 11.6 & 13.6 \\ This work & $-0.07^{+0.05}_{-0.06}$ & 17\% & 5.1 & 8.0 \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:summary2}Summary of results for a linear regression of $\ln(\rho_c r_c)$ versus $\ln (M_{200})$ from different models and their comparison using AIC and BIC. Our best fit (Eq.~\ref{eq:eq16}) has the smallest values of AIC and BIC, and the difference with respect to the other two scalings is between 6-7, indicating a strong preference for our model compared to the other two.} \end{table} \begin{figure*} \includegraphics{sc_mass.pdf} \caption{$\ln (\rho_c r_c)$ versus $\ln M_{200}$ from the V06 cluster sample~\cite{2006ApJ...640..691V}. The units for $\rho_c r_c$ and $M_{200}$ are $M_\odot pc^{-2}$ and $M_\odot$ respectively. The black line represents the fitted line of our analysis, whereas the cyan line represents the model from Lin \& Loeb~\cite{Loeb}. We get a similar fit for the scaling relation predicted by Del Popolo et al~\cite{DelPopolo12}, which we have omitted from the plot for brevity. The red dashed line indicates the constant surface density obtained from single galaxy systems of various types~\cite{Salucci19}, while the orange shaded region represents the 1$\sigma$ error. Note that the mass range for these systems is much lower than for clusters.
Note that the range of values 2.4-6.4 has been culled from the Y-axis in this plot, given the large difference in surface density between single galaxies and clusters.} \label{fig:f4} \end{figure*} \section{Conclusions} \label{sec:conclusions} A large number of studies in the past decade have found that the dark matter surface density, given by the product of the dark matter core radius ($r_c$) and core density ($\rho_c$), is constant for a wide range of galaxy systems, from dwarf galaxies to giant galaxies, over 18 orders in blue magnitude. This cannot be trivially predicted by the vanilla $\Lambda$CDM model, but it can be easily accommodated in various alternatives to $\Lambda$CDM or by invoking various feedback mechanisms in $\Lambda$CDM. However, there have been very few tests of this {\it ansatz} for galaxy clusters. The first systematic study of this relation for a large X-ray selected cluster sample was done by C14 using the ROSAT sample studied in Chen et al~\cite{Chen07}. They considered a sample of galaxy clusters in hydrostatic equilibrium and, using parametric models for the gas density and temperature, obtained the total mass density profile. They assumed that this is a proxy for the total dark matter density distribution. For this sample, $\rho_c$ was obtained by extrapolating the dark matter density distribution to the center of the cluster, whereas $r_c$ was obtained by determining the radius at which the core density drops by a factor of four. This emulates the definition of the core radius in the Burkert cored dark matter profile~\cite{Burkert95}. Therefore, this analysis was done without positing any specific dark matter density distribution. C14 did not find a constant dark matter surface density, but found a tight scaling relation between $\rho_c$ and $r_c$, given by $\rho_c \propto r_c^{-1.46 \pm 0.16}$.
We then carried out a similar analysis to C14 for a Chandra X-ray sample of 12 relaxed clusters, for which detailed 3-D gas density and temperature profiles were made available by Vikhlinin et al.~\cite{2006ApJ...640..691V}. One improvement over the analysis in C14 is that we also subtracted the gas and stellar mass while non-parametrically reconstructing the dark matter density profile. Furthermore, while determining the scaling relation between the core density and radius, we also accounted for the intrinsic scatter. Our results for the dark matter core density and radius can be found in Table~\ref{tab:table4}. They are of the same order of magnitude as previous estimates for galaxy clusters~\cite{Chan}, and are about an order of magnitude larger than for isolated galaxy systems. The halo surface densities for the cluster-scale haloes are in the right ballpark for the secondary infall model~\cite{Delpopolo09}, as well as for velocity-dependent self-interacting dark matter scenarios~\cite{Tulin}. We find that $\rho_c \propto r_c^{-1.08^{+0.06}_{-0.05}}$. The intrinsic scatter of this relation is about 18\%. Therefore, we find only a marginal deviation from a reciprocal relation between the dark matter core density and radius, in contrast to C14, who found a steeper dependence of $\rho_c$ on $r_c$. Our estimated dark matter surface density is inconsistent with being constant at only $1.4\sigma$. A comparison of our result with previous scaling relations found for galaxy clusters can be found in Table~\ref{tab:summary}. We also checked that the discrepancy between our results and C14 arises because C14 did not subtract the gas and stellar mass or account for an intrinsic scatter. If we replicate exactly the same procedure as C14, we can reproduce their scaling relation using the Chandra cluster sample. 
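Scaling-relation fits with intrinsic scatter, as quoted above, can be sketched by maximizing a Gaussian likelihood whose variance adds the measurement error and a free scatter term in quadrature. The snippet below is a generic sketch of this estimator on synthetic data with a true slope of $-1.1$, not the actual cluster measurements or our exact pipeline:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, x, y, yerr):
    """Negative log-likelihood for y = a + b*x with intrinsic scatter sig_int."""
    a, b, ln_sig = theta
    var = yerr**2 + np.exp(2.0 * ln_sig)   # measurement error and scatter in quadrature
    resid = y - (a + b * x)
    return 0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

# Synthetic stand-in for (ln r_c, ln rho_c) measurements of 12 clusters.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, 12)
yerr = np.full_like(x, 0.1)
y = 1.0 - 1.1 * x + rng.normal(0.0, 0.2, x.size)

res = minimize(neg_log_like, x0=[0.0, -1.0, np.log(0.1)],
               args=(x, y, yerr), method="Nelder-Mead")
slope, sig_int = res.x[1], np.exp(res.x[2])
print(slope, sig_int)   # recovered slope near -1.1, scatter below the 0.2 input
```

The same likelihood, evaluated at its maximum, is what enters the AIC and BIC comparison of Table~\ref{tab:summary2}.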
We also checked for any dependence of the dark matter surface density on $M_{200}$ to test some of the predictions in the literature~\cite{DelPopolo12,Loeb}, although the exact definition in these works is not the same as the product of the dark matter core radius and density, which we calculated. We find that the dark matter surface density ($S$) scales with $M_{200}$ as $S \propto M_{200}^{-0.07^{+0.05}_{-0.06}}$, which is in mild disagreement with the weak logarithmic increase with $M_{200}$ predicted in Refs.~\cite{DelPopolo12,Loeb}. However, a more definitive test will only be possible with a larger sample covering a wide dynamic range in mass, by extending this test to galaxy groups. Further stringent tests of this relation for clusters should soon be possible, thanks to the recent launch of the e-ROSITA satellite and the expected discovery of about 100,000 clusters~\cite{Hofmann}. \section*{Acknowledgements} We are grateful to Man-Ho Chan and Antonio Del Popolo for useful correspondence, and to Alexey Vikhlinin for providing us with the data in V06. We are also thankful to the anonymous referee for several constructive comments on our manuscript.
\section{Introduction} \label{sec:intro} Novel approaches to photometric supernova (SN) classification are in high demand in the astronomical community. The next generation of survey telescopes, such as the Dark Energy Survey (DES; \citealt{des}) and the Large Synoptic Survey Telescope (LSST; \citealt{ivez2008}), are expected to observe light curves for a few hundred thousand supernovae (SNe), far surpassing the resources available to spectroscopically confirm the type of each. To fully exploit these large samples, it is necessary to develop methods that can accurately and automatically classify large samples of SNe based only on their photometric light curves. In order to use Type Ia supernovae as cosmological probes, it is imperative that pure and efficient Type Ia samples are constructed. Yet, classifying SNe from their light curves is a challenging problem. The light flux measurements are often noisy, nonuniform in time, and incomplete. In particular, it is difficult to discern the light curves of Type Ia SNe from those of Type Ib or Ic supernovae, explosive events which result from the core collapse of massive stars. This difficulty can have dire effects on the subsequent cosmological inferences: if the sample of SNe Ia used to make cosmological inferences is impure, then the cosmological parameter estimates can be significantly biased (\citealt{home2005}). On the other hand, if the classification of Type Ia SNe is not efficient, then cosmologists do not fully utilize the sample of supernovae on hand, resulting in a loss of precision. In the last decade, several supernova photometric classification methods have been introduced. These include the methods of \citet{pozn2002,sull2006,john2006,pozn2007,kuzn2007,kunz2007,rodn2009,gong2010}, and \citet{falc2010}. 
Each of these approaches uses some version of template fitting, where each observed supernova's data are compared to the data from well-established SNe of different types and the subsequent classification is estimated as the SN type whose template fits best (usually judged by maximum likelihood or maximum posterior probability). Usually, the sets of templates are constructed using the observed spectra of sets of well-studied, high signal-to-noise SNe (see \citealt{nuge2002}) or spectral energy distribution models of SNe. The principal drawback of using template methods for classification is that they depend heavily on the templates used for each of the different classes. If there are errors in the template creation, these will propagate into the estimated classifications. Furthermore, template fitting often assumes that each observed SN can be well modeled by one of the templates, an assumption that may be unreasonable, especially for large data sets. Additionally, these methods require that all relevant parameters (such as redshift and extinction) in the light curve model be simultaneously fit or otherwise estimated, which can pose computational challenges and cause catastrophic errors when the estimates are poor. A viable alternative to template fitting for SN classification is {\it semi-supervised learning}, a class of methods that are able to learn the low dimensional structure in all available data and exploit that structure in classification; as more data are obtained, these methods are able to utilize that additional information to better classify {\bf all} of the SNe. 
Template fitting methods, on the other hand, do not automatically learn as new data are collected \new{(for example, \citealt{2011arXiv1107.5106S} extract only 8 core-collapse SN templates from over 10,000 observed supernova candidates).} Adverse effects caused by incorrectly classified templates, under-sampled regions of template parameter space, or unrealistic templates will not go away as the number of SNe increases. Indeed, the magnitude of these biases relative to the statistical errors in the estimates will only increase. Well constructed semi-supervised approaches will reduce both bias and variance as the sample sizes grow. Our main contribution in this paper is to introduce a method for SN photometric classification that does not rely on template fitting. Our proposed method uses semi-supervised learning on a database of SNe: we first use all of the light curves to simultaneously estimate an appropriate, low dimensional representation of each SN, and then we employ a set of labeled (spectroscopically confirmed) examples to build a classification model in this reduced space, which we subsequently use to estimate the type of each unknown SN. An advantage of our semi-supervised approach is that it learns from the set of unlabeled SNe. Typically there are many more unlabeled than labeled supernovae, meaning that a method that learns from all the data is an obvious improvement over methods that only learn from the labeled SNe. Another advantage is that our method gives an accurate prediction of the class of each supernova without having to simultaneously estimate nuisance parameters such as redshift, stretch or reddening. This result arises because variations in these parameters appear as gradual variations in the low dimensional representation of the light curves when the observed data (labeled plus unlabeled SNe) are collected densely enough over the nuisance parameters. 
In the low dimensional representation, the variability due to supernova type is greater than the variability due to the nuisance parameters, allowing us to build an accurate classifier in this space. Until recently, there had been no standard testing procedure for the various SN classification methods. To assess the performance of these methods, \citet{kess2010} held a public ``SN Photometric Classification Challenge'' in which they released a blended mix of simulated supernovae (Ia, Ib, Ic, II) to be classified. As part of the Challenge, a spectroscopically confirmed subset of SNe was included on which the Challenge participants could train or tune their algorithms. The results of that Challenge (\citealt{kess2010a}) showed that various machine learning classification algorithms were competitive with classical template fitting classifiers. Apart from our entry (InCA\footnote{International Computational Astrostatistics Group, \url{http://www.incagroup.org}}), none of the other entries to the Classification Challenge used semi-supervised learning. In this paper we will further explore the viability of semi-supervised classifiers for SN classification. Recently, \citet{newl2010} released a paper detailing their entries into the SN Challenge. They argue that their methods are comparable to the best template fitting techniques, but that they require training samples that are representative of the full (photometric) sample. Our findings are similar, and we carry out a detailed sensitivity analysis to determine how the accuracy of predicted SN types depends on characteristics of the training set. Like \citet{newl2010}, we perform our analysis both with and without photometric redshift estimates; we introduce a novel and effective way of using photo-$z$ estimates in finding a low dimensional embedding of SN data. 
Based on our sensitivity analysis, we conclude that magnitude-limited spectroscopic follow-up strategies with deep limits (25th mag) produce the best training sets for our supernova classification method, at no extra observing cost. Though these deeper observing strategies result in fewer supernovae than shallower samples, they produce training sets that more closely resemble the entire population of SNe under study, leading to substantial improvements in supernova classification. We strongly recommend that spectroscopic SN follow-up be performed with faint magnitude limits. The layout of the paper is as follows. In {\S}\ref{sec:meth}, we describe our semi-supervised classification method, which is based on the diffusion map method for nonlinear dimensionality reduction and the random forest classification technique. We couch our description in general terms since elements of our methodology can in principle be applied to any set of light curves, not just those from supernovae. In {\S\S}\ref{sec:snapp}-\ref{sec:snapp1}, we describe the application of our methodology to the problem of classifying supernovae, fully detailing the steps in our analysis. These sections are divided into the unsupervised (\S\ref{sec:snapp}) and supervised (\S\ref{sec:snapp1}) parts of the analysis. We assess the performance of our methods in \S\ref{sec:results}, using the 21,319 light curves simulated for the SN Photometric Classification Challenge (\citealt{kess2010}). We examine the sensitivity of our results to the composition of the training datasets, and to the treatment of host-galaxy redshift (by either using redshifts to alter the diffusion map construction or using them as covariates in random forest classification). In {\S}\ref{sec:summary}, we offer a summary and our conclusions. We provide further information on classification trees and the random forest algorithm in Appendices \ref{ss:classificationTrees} and \ref{ss:randomForests}. 
\section{Methodology: Semi-Supervised Classification} \label{sec:meth} Suppose we have observed photometric light curves for $N$ objects. For each, the observed data are flux measurements at irregularly-spaced time points in each of a number of different filters (e.g., {\it griz}), plus associated measurement errors. Call the data for object $i$, ${\bf X}_i = \{t_{ik}^b, F^b_{ik},\sigma_{ik}^b\}$, where $k=1,...,p_i^b$, $b$ indexes the filter, and $p_i^b$ is the number of flux measurements in filter $b$ for object $i$. Here, $\mathbf{t}_{i}^b$ is the time grid, and $\mathbf{F}_i^b$ and $\mathbf{\sigma}_i^b$ are the flux measurements and errors, respectively, in filter $b$. Suppose, without loss of generality, that the first $n$ objects, ${\bf X}_1,...,{\bf X}_n$, have known types $y_1,...,y_n$. Our goal is to predict the type of each of the remaining $N-n$ objects. To perform this classification, we take a \emph{semi-supervised} approach (see \citealt{chap2006} for an introduction to semi-supervised learning methods). The basic idea is to use the data from all $N$ objects (both labeled and unlabeled) to find a low dimensional representation of the data (the unsupervised part) and then to use just the $n$ labeled objects to train a classifier (the supervised part). Our proposed procedure is as follows: \begin{enumerate} \item Do relevant preprocessing to the data. Because this is application specific, we defer details on our preprocessing of SN light curves to the next section. Here, it suffices to state that the result of this preprocessing for one object is a set of light curve function estimates interpolated over a fine time grid. \item Use the preprocessed light curves for all $N$ objects to learn a low dimensional, data driven embedding of $\{{\bf X}_1,...,{\bf X}_N\}$. We use the diffusion map method for nonlinear dimensionality reduction (\S\ref{ss:dmap}). 
\item Train a classification model on the $n$ labeled (spectroscopically confirmed) examples that predicts class as a function of the diffusion map coordinates of each object. We use the random forest method (\S\ref{ss:rf}) as our classifier. \item Use the estimated classification model to predict the type of each of the $N-n$ unlabeled objects. \end{enumerate} The advantage of the semi-supervised approach is that it uses all of the observed data to estimate the lower dimensional structure of the object set. Generally, we have many more objects without classification labels than with (e.g., the SN Photometric Classification Challenge provided $14$ unlabeled SNe per labeled SN). If we have continuously varying parameters that affect the shapes and colours of the light curves (e.g., redshift, extinction, etc.), then it is imperative that we use as many data points as possible to capture these variations when learning a low dimensional representation. We then use the examples for which we know the object type to estimate the classification model, which is finally employed to predict the type of each unlabeled object. \subsection{Diffusion Map} \label{ss:dmap} In this section, we review the basics of the diffusion map approach to spectral connectivity analysis (SCA) for data parametrization. We detail our approach of using diffusion map to uncover structure in databases of astronomical objects. For more specifics on the diffusion map technique, we refer the reader to \citet{coif2006} and \citet{lafo2006}. For examples of the application of diffusion map to problems in astrophysics, see \citet{rich2009a}, \citet{rich2009b}, and \citet{free2009}. In \citet{rich2009a}, the authors compare and contrast the use of diffusion maps, which are non-linear, with a more commonly utilized linear technique, principal components analysis (PCA), and demonstrate the superiority of diffusion maps in predicting spectroscopic redshifts of SDSS data. 
The basic idea of diffusion map is as follows. To make statistical prediction (e.g., of SN type) tractable, one seeks a simpler parameterization of the observed data, which is often complicated and high-dimensional. The most common method for data parameterization is PCA, where the data are projected onto a lower-dimensional hyperplane. For complex situations, however, the assumption of linearity may lead to suboptimal predictions because a linear model pays very little attention to the natural geometry and variations of the system. The top plot in Figure 1 of \citet{rich2009a} illustrates this by showing a data set that forms a one-dimensional noisy spiral in two-dimensional Euclidean space. Ideally, we would like to find a coordinate system that reflects variations along the spiral direction, which is indicated by the dashed line. It is obvious that any projection of the data onto a line would be unsatisfactory. If, instead, one imagines a random walk starting at ${\bf x}$ that only steps to immediately adjacent points, it is clear that the number of steps it takes for that walk to reach ${\bf y}$ reflects the distance between ${\bf x}$ and ${\bf y}$ along the spiral direction. This is the driving idea behind the diffusion map, in which the ``connectivity'' of the data, in the context of a fictive diffusion process, is retained in a low-dimensional parametrization. This simple, non-linear parametrization of the data is useful for uncovering simple relations with the quantity of interest (e.g., supernova type) \new{and is robust to random noise in the data}. We make this more concrete below. 
Diffusion map begins by creating a weighted, undirected graph on our observed photometric data $\{{\bf X}_1,...,{\bf X}_N\}$, where each data point is a node in the graph and the pairwise weights between nodes are defined as \begin{equation} \label{eqn:dmap1} w({\bf X}_i,{\bf X}_j) = \exp\left(-\frac{s({\bf X}_i,{\bf X}_j)}{\epsilon}\right) \end{equation} where $\epsilon >0$ is a tuning parameter and $s(\cdot,\cdot)$ is a user-defined pairwise distance between objects. Here, $s$ is a \emph{local} distance measure, meaning that it should be small only if ${\bf X}_i$ and ${\bf X}_j$ are similar (in \S\ref{ss:lcdist} we define the distance measure we use for SN light curve data). In this construction, the probability of stepping from ${\bf X}_i$ to ${\bf X}_j$ in one step of a diffusion process is $p_1({\bf X}_i, {\bf X}_j) = w({\bf X}_i,{\bf X}_j)/ \sum_k w({\bf X}_i, {\bf X}_k)$. We store the one step probabilities between all $N$ data points in the $N \times N$ matrix $\P$; then, by the theory of Markov chains, for any positive integer $t$, the element $p_t({\bf X}_i, {\bf X}_j)$ of the matrix power $\P^t$ gives the probability of going from ${\bf X}_i$ to ${\bf X}_j$ in $t$ steps. See, e.g., Chapter 6 in \citet{grim2001} for an introduction to Markov chains. We define the diffusion map at scale $t$ as \begin{equation} \label{eqn:dmap2} \mathbf{\Psi}^t : {\bf X} \mapsto \left[ \lambda_1^t \mathbf{\Psi}_1({\bf X}), \lambda_2^t \mathbf{\Psi}_2({\bf X}),..., \lambda_m^t \mathbf{\Psi}_m({\bf X})\right] \end{equation} where $\mathbf{\Psi}_j$ and $\lambda_j$ are the right eigenvectors and eigenvalues of $\P$, respectively, in a biorthogonal spectral decomposition and $m$ is the number of diffusion map coordinates chosen to represent the data. \new{The diffusion map coordinates are ordered such that $\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_m$, so that the top $m$ coordinates retain the most information about $\mathbf{P}^t$. 
Though there are $N$ total eigenvectors of $\mathbf{P}$, only $m \ll N$ are required to capture most of the variability of the system.} The Euclidean distance between any two points in the $m$-dimensional space described by equation (\ref{eqn:dmap2}) approximates the diffusion distance, a distance measure that captures the intrinsic geometry of the data set by simultaneously considering all possible paths between any two data points in the $t$-step Markov random walk constructed above. \new{Because it averages over all possible paths between data points in the random walk, the diffusion distance is robust to noise in the observed data.} The choice of the parameters $\epsilon$ (in equation \ref{eqn:dmap1}) and $m$ gives the map a great deal of flexibility, and it is feasible to vary these parameters in an effort to obtain the best classifier via cross-validation, as described in \S\ref{ss:classtune}. We note that in the random forest classifier used to predict object type (see \S\ref{ss:rf}), the scale of each coordinate of $\mathbf{\Psi}^t({\bf X})$ does not influence the method because each classification tree is constructed by splitting one coordinate at a time. Therefore, the parameter $t$, whose role in the diffusion map (\ref{eqn:dmap2}) is to rescale each coordinate, has no influence on our analyses. We choose to fix $t$ to have the value 1. For the remainder of this paper, we will use $\mathbf{\Psi}_i$ to stand for the $m$-dimensional vector of diffusion map coordinates, $\mathbf{\Psi}^1({\bf X}_i)$. \subsection{Classification: Random Forest} \label{ss:rf} In finding the diffusion map representation of each object, the idea is that this parameterization will hopefully obey a simple relationship with respect to object type. Then, simple modeling can be performed to build a class-predictive model from the diffusion map coordinates. 
That is, once we have the $m$-dimensional diffusion map representation of each object's light curve (equation \ref{eqn:dmap2}), we construct a model to predict the type, $y_i$, of the $i^{th}$ object as a function of its diffusion map representation, $\mathbf{\Psi}_i$. In other words, using the set of $n$ SNe with known classification labels, we estimate the underlying function $h$ that relates each $m$-dimensional diffusion map representation with a classification label. We will ultimately use this estimate, $\widehat{h}$, to predict the classification, $\widehat{y}_j = \widehat{h}(\mathbf{\Psi}_j)$, for each unlabeled supernova $j=n+1,...,N$. Any classification procedure that can handle more than two classes could be applied to the diffusion map representation of the SNe. By adapting the predictors to the underlying structure in the data, standard classification procedures should be able to separate the SNe of different types. \new{There are many classification tools available in the statistics and machine learning literature, ranging from simple $K$ nearest-neighbor averaging to more sophisticated ensemble methods (see \citealt{2011arXiv1104.3142B} for an overview of some of the classification methods that have been used in time-domain astronomy).} We choose to use the \emph{random forest} method of \citet{brei2001} due to its observed success in many multiclass classification settings, including in astrophysics (\citealt{2003sca..book..243B,rich2011,2011arXiv1101.2406D}). The basic idea of the random forest is to build a large collection of decorrelated classification tree estimates and then to average these estimates to obtain the final predictor, $\widehat{h}$. This approach usually works well because classification tree estimates are notoriously noisy, but tend to be unbiased. By averaging decorrelated tree estimates, the random forest produces classification estimates that are both unbiased and have small variance with respect to the choice of training set. 
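The two-stage procedure of \S\ref{ss:dmap}--\ref{ss:rf} can be sketched end-to-end: build the weight matrix of equation (\ref{eqn:dmap1}), row-normalize it into a Markov matrix, take the top right eigenvectors as coordinates per equation (\ref{eqn:dmap2}), and fit a forest on the labeled rows. The sketch below uses a squared Euclidean distance as the pairwise measure $s$ and synthetic two-class data, with scikit-learn standing in for the random forest implementation:

```python
import numpy as np
from scipy.linalg import eig
from sklearn.ensemble import RandomForestClassifier

def diffusion_coords(X, epsilon, m):
    """Diffusion map at t=1: weights -> Markov matrix -> top m coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(axis=-1)  # pairwise s(.,.)
    W = np.exp(-d2 / epsilon)                               # eq. (1) weights
    P = W / W.sum(axis=1, keepdims=True)                    # one-step probabilities
    vals, vecs = eig(P)                                     # right eigenvectors
    order = np.argsort(-vals.real)[1:m + 1]                 # drop trivial lambda=1
    return vals.real[order] * vecs.real[:, order]           # eq. (2) with t=1

# Synthetic stand-in: N objects in two noisy classes, first n labeled.
rng = np.random.default_rng(1)
N, n = 300, 60
labels = rng.integers(0, 2, N)
X = rng.normal(0.0, 0.3, (N, 4)) + labels[:, None]

Psi = diffusion_coords(X, epsilon=1.0, m=3)         # unsupervised step, all N
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(Psi[:n], labels[:n])                         # supervised step, n labeled
print((clf.predict(Psi[n:]) == labels[n:]).mean())   # accuracy on unlabeled set
```

Note that the diffusion coordinates are computed from the full sample, so the unlabeled objects shape the embedding in which the classifier is trained.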
See Appendices A and B for a brief overview of classification trees (\S\ref{ss:classificationTrees}) and random forests (\S\ref{ss:randomForests}). \section{Semi-Supervised Supernova Classification: The Unsupervised Part} \label{sec:snapp} The first step of our analysis is to use the entire dataset of supernova light curves to learn an appropriate representation of each supernova using diffusion map. This is the \emph{unsupervised} part of our semi-supervised approach. \subsection{Data} \label{ss:data} We apply the methods described in {\S}\ref{sec:meth} to the {\tt SNPhotCC+HOSTZ} dataset of the SN Photometric Classification Challenge (\citealt{kess2010}). We use data from the ``Updated Simulations"\footnote{Data can be downloaded at \url{http://sdssdp62.fnal.gov/sdsssn/SIMGEN_PUBLIC/SIMGEN_PUBLIC_DES.tar.gz}}, as described in \S6 of \citet{kess2010a}. These data were simulated to mimic the observing conditions of the DES. Note that these data are significantly different from the data used in the Challenge, with several bug fixes and improvements to make the simulations more realistic. For instance, the ratio of photometric to spectroscopic SNe was 13:1 in the Challenge data, but 18:1 in the Updated Simulations. Therefore, we refrain from comparing our current results directly to the results in the Challenge.\footnote{Specifically, we find that our methods perform 40\% better on the Challenge data than on the data used in this paper in terms of the photometric Ia figure of merit.} We denote the updated Challenge data as $\mathcal{D}$. There are a total of $N$ = 20,895 SNe in $\mathcal{D}$\footnote{After removal of the 424 SNe simulated from the SDSS 2007nr II-P template, whose peak luminosities are anomalously dim by several magnitudes.} and for each SN we have {\it griz} photometric light curves. These light curves are simulated by the Challenge organizers so as to mimic the observing conditions of the Dark Energy Survey (DES; \citealt{bern2009}). 
The light curve for each SN is measured on anywhere between 16 and 160 epochs (between 4 and 40 epochs in each filter), with a median value of 101 epochs. The maximum $r$-band signal-to-noise ratio for the light curves ranges between 0.75 and 56, with a median of 11. The data set $\mathcal{D}$ originally comprises two subsets. One that we dub $\mathcal{S}$ contains 1,103 spectroscopically confirmed SN light curves, while the other, containing 19,792 photometrically observed SNe, is dubbed $\mathcal{P}$. \subsection{Data preprocessing} \label{ss:preprocessing} In order to utilize diffusion map in our classification scheme, we first need to formulate a distance metric, $s$, between observed SN light curves (see equation \ref{eqn:dmap1}). Our distance measure should capture differences in the shapes of the light curves and the colours of the SNe. Phrased differently, because the measure $s$ is only considered on small scales (controlled by $\epsilon$ in equation \ref{eqn:dmap1}), we wish to construct a measure that is small only if \emph{both} the shapes of the light curves and the colours of two SNe are very similar. \subsubsection{Non-Parametric Light Curve Shape Estimation} Each SN is sampled on an irregular time grid that differs from filter to filter and from SN to SN. To circumvent this difficulty, we find a nonparametric estimate of the shape of the observed light curve, ${\bf X}_i = \{t_{ik}^b, F^b_{ik},\sigma_{ik}^b\}$, for each SN. With this estimate, we can shift from using the irregularly sampled observed data to using fluxes on a uniform time grid when computing the distances between light curves. We independently fit a natural cubic regression spline to the data from each filter, $b$, and each SN, $i$ (see, e.g., \citealt{rupp2003} and \citealt{wass2006}). We utilize regression splines because they are particularly useful for estimating smooth, continuous functions that can have complicated behavior such as rapid increases. 
In doing so, we avoid assuming an overly restrictive template model for the shape of each light curve. We also leave the number of spline knots as a free parameter, allowing the model to adapt to the complexity of the true SN light curve shape. Our cubic spline estimator is \begin{equation} \label{eqn:spline} \widehat{F}_{ik}^b \equiv \widehat{F}_i^b(\widehat t_{k}) = \sum_{j=1}^{\nu + 4} \widehat{\beta}_{ij}^b B_j(\widehat t_{k}) \end{equation} where $B_j$ is the $j^{th}$ natural cubic spline basis, $\widehat t_{k}$ are points along our uniform time grid, and $\nu$ is the number of knots. The $\widehat{\beta}_{ij}^b$ are estimators of the coefficients $\beta_{ij}^b$ and are fit from the observed ${\bf X}_i$ by minimizing weighted least squares against the observed fluxes $F_{ik}^b$ with weights $(1/\sigma_{ik}^b)^2$. By fiat, we choose a grid of 1 measurement per MJD, noting that we have found denser grids to have negligible effect on our final results while incurring increased computing time. Other implementations could use a sparser grid, resulting in faster computing times. When fitting regression splines, we must choose the quantity, $\nu$, and locations of the knots, which correspond to points of discontinuity of the third derivative of the spline function. We follow convention and place the knots uniformly over the observed time points. To choose $\nu$, we minimize the generalized cross-validation (GCV) score, defined as \begin{equation} \label{eqn:gcv} {\rm GCV}_{i,b}(\nu) = \frac{1}{p_i^b}\sum_{k=1}^{p_i^b}\left(\frac{F_{ik}^b-\widehat{F}_{ik}^b}{\sigma_{ik}^b(1-\nu/p_i^b)}\right)^2 \end{equation} where $\widehat{F}_{ik}^b$ is the fitted value of the spline with $\nu$ knots at $\widehat t_{k}$ computed using eq.~(\ref{eqn:spline}). Minimizing eq. (\ref{eqn:gcv}) over $\nu$ balances the bias and variance (bumpiness) of the fit. 
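The fit-and-select loop per band can be sketched with a weighted B-spline least-squares fit (a stand-in for the natural cubic spline basis of equation \ref{eqn:spline}) scored by the GCV criterion of equation (\ref{eqn:gcv}); the light curve below is a toy Gaussian pulse, not real SN data:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_band_gcv(t, F, sigma, max_knots=10):
    """Weighted cubic-spline fit per band; choose the number of knots by GCV."""
    best_gcv, best_spl = np.inf, None
    for nu in range(1, max_knots + 1):
        knots = np.linspace(t[0], t[-1], nu + 2)[1:-1]  # uniform interior knots
        try:
            spl = LSQUnivariateSpline(t, F, knots, w=1.0 / sigma, k=3)
        except ValueError:        # knot placement invalid for these epochs
            continue
        gcv = np.mean(((F - spl(t)) / (sigma * (1.0 - nu / t.size)))**2)
        if gcv < best_gcv:
            best_gcv, best_spl = gcv, spl
    return best_spl

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 60.0, 40))          # irregular observation epochs
F = np.exp(-0.5 * ((t - 20.0) / 8.0)**2) + rng.normal(0.0, 0.05, t.size)
spl = fit_band_gcv(t, F, np.full(t.size, 0.05))
grid = np.arange(t[0], t[-1], 1.0)               # one point per MJD, as in the text
print(spl(grid).max())                           # fitted peak flux, close to 1
```

Note that scipy's weight convention multiplies the residual before squaring, so passing $1/\sigma$ reproduces the $(1/\sigma)^2$ weights of the weighted least squares described above.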
Note that the observational uncertainties in the measured fluxes, $\sigma_{ik}^b$, are used to compute both the LC flux estimates, $\widehat{F}_{ik}^b$, and the model uncertainty in those estimates, $\widehat{\sigma}_{ik}^b$. In \S\ref{ss:lcdist} we will use these entities to construct a distance metric used to compute the diffusion map representation of each SN. We have applied the spline fitting routine to a set of simulated light curves, and the method produces reasonable fits. For an example of the fit for a single supernova's {\it griz} light curves, see Figure~\ref{fig:LC}. For a few SNe, minimizing the GCV led to too many knots (the estimated light curve was deemed too bumpy), so the parameter $\nu$ is restricted to be no greater than 10 to ensure that the estimated curve for each band is physically plausible. \begin{figure} \includegraphics[angle=0,width=3.5in]{figure1.eps} \caption{Spline fit to the data of a single Type Ia supernova. A regression spline was fit independently to the light curve data in each band using GCV (equation \ref{eqn:gcv}). The spline fit (solid red line) and model errors in that fit (blue dashed lines) are used to compute a pairwise supernova light curve distance (\ref{eqn:distband}), which is used to find a diffusion map representation of each SN.} \label{fig:LC} \end{figure} Once we estimate a SN's raw light curve function, we normalize the flux to mitigate the effect that the observed brightness of each SN has on the pairwise distance estimates. The normalized flux of SN $i$ is \begin{equation} \widetilde{F}_{ik}^b \equiv \frac{\widehat{F}_{ik}^b}{\sum_{b \in griz} \max_k \{\widehat{F}_{ik}^b\}} \,. \end{equation} Similarly, we normalize the model error, $\widehat{\sigma}_{ik}^b$, associated with each spline-estimated flux $\widehat{F}_{ik}^b$; call this $\widetilde{{\sigma}}_{ik}^b$. Because the same divisor is used for each photometric band, the colour of each supernova is preserved. 
A distance measure constructed from $\{\widetilde{{F}}_i^b,\widetilde{{\sigma}}_i^b\}$ will capture both the shape and colour of the light curve. \subsubsection{Zero-Point Time Estimation} \new{In order to accurately measure the similarities and differences in the shapes of the observed light curves, we must ensure that they are aligned on the time axis. To this end, we define the \emph{zero-point time} of a SN light curve as its time, in MJD, of maximum observed {\it r}-band flux, $t_{o,i} = \widehat t_{k^\ast}$, where $k^\ast = \arg\max_k \widehat{F}_{ik}^r$. Shifting all the light curves to this common frame by setting $\widetilde{t}_i = \widehat{t}_i - t_{o,i}$ enables us to construct a pair-wise distance measure that captures differences in observed LC shapes and colours.} \new{We estimate the zero-point time for a SN whose {\it r}-band maximum occurs at a time endpoint---either the first or last epoch of the observed light curve---on a case-by-case basis: if it is being compared to a light curve whose maximum {\it r}-band flux occurs either at the same endpoint or at neither endpoint, we use cross-correlation to estimate $t_o$; if it is being matched with a SN whose maximum {\it r}-band flux occurs at the opposite endpoint, we abort the zero-point estimation process and set the distance to a large reference quantity since the two light curves are, by definition, dissimilar.} In our SN database, 2908 (14\%) of the supernovae are observed post $r$-band peak while 1489 (7\%) are observed pre $r$-band peak. \new{Our use of {\it r}-band data to establish a zero-point time is motivated by simulations of Type Ia SNe data. Let $\widehat{t}_o$ denote our estimator of $t_o$. Using 1,000 SNeIa simulated with {\tt SNANA} (\citealt{Kessler09}), which are generated using the default settings for the DES, we can characterize this estimator for each photometric band because the true time zero-point, $t_o$, of the simulations is known. 
Note that for the SNANA simulations, the true time of peak flux is computed in each band separately and then the results are combined using a filter-weighted average; thus, we are not actually comparing our estimator, $\widehat{t}_o$, to the true time of {\it r}-band maximum, but rather to a slightly different, redshift-dependent quantity.} See Figure \ref{fig:dfilt} and Table \ref{tab:dfilt} for results from the 600 SNe in the sample with peaks (i.e., the regression spline maximum is not at an endpoint). The $x$-axis of Figure \ref{fig:dfilt} shows $\Delta t = \widehat{t}_o - t_o$, while the second and third columns of Table \ref{tab:dfilt} show the estimated mean and standard deviation for $\Delta t$. (The fourth column indicates the estimated correlation between $\Delta t$ and redshift $z$.) We find that using the time of the $r$-band peak flux as our estimator $\widehat{t}_o$ is the proper choice: it is nearly unbiased and has the smallest variance of the four bands. \begin{figure} \includegraphics[angle=-90,width=3.5in]{figure2.ps} \caption{ The difference $\Delta t = \widehat{t}_o - t_o$ (in days) versus redshift for 600 SNeIa simulated with {\tt SNANA} using its default settings for the Dark Energy Survey. The 600 SNe are the peaked SNe from a simulated sample of 1,000 SNe. The top row shows $g$- and $r$-band data while the bottom row shows $i$- and $z$-band data. Overlaid on the $r$-band data is a linear regression fit to the data in the range $-5 < \Delta t < 5$ that is meant to be purely illustrative. } \label{fig:dfilt} \end{figure} \begin{table} \centering \caption{Time Normalization Estimator: Different Filters.
} \begin{tabular}{@{}lccc@{}} \hline Filter & $\widehat{\mu}$ (days) & $\widehat{\sigma}$ (days) & $\widehat{\rho}$\\ \hline g & 13.86 & 27.26 & 0.66 \\ r & -0.27 & 2.17 & -0.67 \\ i & 2.21 & 2.65 & -0.27 \\ z & 3.25 & 2.84 & 0.30\\ \hline \label{tab:dfilt} \end{tabular} \end{table} In Figure \ref{fig:dsne} and Table \ref{tab:dsne}, we characterize $\widehat{t}_o$, the {\it r}-band max estimator of $t_o$, given 1,000 examples each of different SN types, each observed in the $r$ band. We conclude that if we corrected $\widehat{t}_o$ using host photo-$z$ estimates, the effect on the other SN types would be minimal, judging by the standard deviations shown in Table \ref{tab:dsne}. \begin{figure} \includegraphics[angle=0,width=3.5in]{figure3.ps} \caption{ The difference $\Delta t = \widehat{t}_o - t_o$ (in days) versus redshift for peaked SNe simulated with {\tt SNANA} using its default settings for the Dark Energy Survey. The top row shows $r$-band data for SN Types Ia, Ibc, and IIn while the bottom row shows $r$-band data for SN Types II-P and II-L. Overlaid on the SNeIa results is a linear regression fit to the data in the range $-5 < \Delta t < 5$ that is meant to be purely illustrative. } \label{fig:dsne} \end{figure} \begin{table} \centering \caption{Time Normalization Estimator: Different SN Types\label{tab:dsne}} \begin{tabular}{@{}lccc@{}} \hline SN Type & $\widehat{\mu}$ (days) & $\widehat{\sigma}$ (days) & $\widehat{\rho}$\\ \hline Ia & -0.27 & 2.17 & -0.67 \\ Ibc & 2.93 & 11.36 & -0.24 \\ IIn & 1.97 & 9.57 & -0.03 \\ II-P & 32.99 & 31.76 & -0.102 \\ II-L & -2.19 & 12.64 & -0.297\\ \hline \end{tabular} \end{table} Simulated SN light curves without a peak in the {\it r} band are treated differently from those with peaks. To estimate $t_o$ for a given unpeaked light curve, we cross-correlate it with each SN that has an $r$-band peak.
This produces a sequence of estimates of $t_o$, $(\widehat{t}_{o,j})_{j=1}^M$, where $M$ is the number of peaked SNe. Finally, we return \begin{equation} \widehat{t}_o = \frac{1}{M} \sum_{j=1}^M \widehat{t}_{o,j} \label{eqn:nopeak} \end{equation} as the estimate of $t_o$ for that unpeaked light curve. To examine our use of cross-correlation, we return to the SNANA simulated SNe described above. In Figure \ref{fig:nopeak} we plot $\Delta t$ for each of the 400 unpeaked SNe, estimating $\widehat{t}_o$ first from the endpoint and then via cross-correlation. Without cross-correlation, $(\widehat{\mu},\widehat{\sigma})$ = $(-5.02,4.76)$; with cross-correlation, the values are $(0.87,5.63)$, i.e., a small increase in estimator standard deviation is offset by a marked decrease in bias. Thus we conclude that cross-correlation is an effective approach. \begin{figure} \includegraphics[angle=-90,width=3.5in]{figure4.ps} \caption{ The difference $\Delta t = \widehat{t}_o - t_o$ (in days) versus redshift for 400 SNeIa simulated with {\tt SNANA} using its default settings for the Dark Energy Survey. The 400 SNe are the unpeaked SNe from a simulated sample of 1,000 SNe. The left panel shows the distribution of $\Delta t$ values assuming the time of the first $r$-band datum for $\widehat{t}_o$, while the right panel shows the same distribution except that $\widehat{t}_o$ is estimated by cross-correlating the unpeaked light curve with all peaked light curves in the sample and taking the mean (see eq.~\ref{eqn:nopeak}). } \label{fig:nopeak} \end{figure} \subsection{Light Curve Distance Metric} \label{ss:lcdist} Let $\{\widetilde{t}_{ik}, \widetilde{F}_{ik}^b, \widetilde{\sigma}_{ik}^b\}$ denote the set of normalized light curves for supernova $i$.
For each pair of supernovae, ${\bf X}_i$ and ${\bf X}_j$, we define the {\it b}-band distance between them, where {\it b} ranges over {\it griz}, as \begin{equation} \label{eqn:distb} \label{eqn:distband} s_b({\bf X}_i,{\bf X}_j) = \frac{1}{t_u - t_l} \sqrt{\sum_{k:\, t_l \le \widetilde{t}_{k} \le t_u} \frac{\left(\widetilde{F}_{ik}^b-\widetilde{F}_{jk}^b\right)^2}{(\widetilde{\sigma}^b_{ik})^2+(\widetilde{\sigma}^b_{jk})^2}} \end{equation} \new{where $k$ indexes the time grid of the smoothed supernova light curves; a time binning of 1 day is used.} Hence, $s_b({\bf X}_i,{\bf X}_j)$ is the weighted Euclidean distance between light curves, per overlapping time bin. The quantities $t_l=\max(\widetilde{t}_{i1},\widetilde{t}_{j1})$ and $t_u=\min(\widetilde{t}_{ip_i},\widetilde{t}_{jp_j})$ define the lower and upper time bounds, respectively, of the overlapping regions of the two light curves. If two normalized light curves have no overlap, then their distance is set to a large reference value. Finally, we define the \emph{total} distance between two light curves as $s({\bf X}_i,{\bf X}_j) = \sum_b s_b({\bf X}_i,{\bf X}_j)$, the sum of the distances in eq.~(\ref{eqn:distb}), across bands. This distance is used in eq.~(\ref{eqn:dmap1}) to build the weighted graph which we use to compute the diffusion map parametrization, $\{\mathbf{\Psi}_1,...,\mathbf{\Psi}_N\}$, of our data (equation \ref{eqn:dmap2}). Each $\mathbf{\Psi}_i$ gives the coordinates of the $i^{\rm th}$ SN in the estimated diffusion space. \section{Semi-Supervised Supernova Classification: The Supervised Part} \label{sec:snapp1} Once we have a parametrization, $\mathbf{\Psi}$, for each supernova, the next step is to use a training set of SNe of known, spectroscopically confirmed type to learn a classification model to predict the type of each supernova from its representation $\mathbf{\Psi}$. This is the \emph{supervised} part of our semi-supervised methodology.
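To make the supervised step concrete, the following sketch mimics the train-on-$\mathcal{S}$, predict-on-$\mathcal{P}$ flow with synthetic two-dimensional diffusion coordinates; for brevity it substitutes a nearest-centroid rule for the random forest we actually use, and all coordinates and labels are illustrative:

```python
import numpy as np

# Toy stand-in for the supervised step: a classifier is trained on the
# diffusion-map coordinates Psi of "spectroscopically confirmed" SNe
# and then applied to "photometric" SNe.  The paper uses a random
# forest; a nearest-centroid rule is used here purely for brevity.
rng = np.random.default_rng(0)
psi_train = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),   # 'Ia' cluster
                       rng.normal(2.0, 0.3, size=(20, 2))])  # 'II' cluster
labels = np.array(["Ia"] * 20 + ["II"] * 20)

# "Training": one centroid per class in diffusion space.
centroids = {c: psi_train[labels == c].mean(axis=0) for c in ("Ia", "II")}

def predict(psi):
    """Assign each row of psi to the class with the nearest centroid."""
    d_ia = np.linalg.norm(psi - centroids["Ia"], axis=1)
    d_ii = np.linalg.norm(psi - centroids["II"], axis=1)
    return np.where(d_ia < d_ii, "Ia", "II")

# Two unlabelled "photometric" SNe to classify.
psi_phot = np.array([[0.1, -0.1], [2.1, 1.9]])
print(predict(psi_phot))
```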
\subsection{Constructing a Training Set} \label{ss:trainset} In the Supernova Challenge, the training set, $\mathcal{S}$, was generated assuming a fixed amount of spectroscopic follow-up on each of a 4m- and an 8m-class telescope. The magnitude limits were assumed to be 21.5 ($r$-band) for the 4m and 23.5 ($i$-band) for the 8m telescope (\citealt{kess2010a}). Using $\mathcal{S}$ as a training set is problematic for at least two reasons. First, $\mathcal{S}$ consists of SNe that have much lower host-galaxy $z$ and higher observed brightness than those in $\mathcal{P}$ (see Fig. 2 in \citealt{kess2010a}). Second, $\mathcal{S}$ oversamples Type Ia SNe relative to their fraction in the entire data set, $\mathcal{D}$ (see Table \ref{tab:datasetsComposition}). These distributional mismatches induce inadequate modeling in those parameter subspaces undersampled by $\mathcal{S}$ and can cause model selection procedures to choose poor models for classification in $\mathcal{P}$. Both of these issues hinder attempts to generalize models fit on $\mathcal{S}$ to classify supernovae in $\mathcal{P}$. We study the dependence of the classification accuracy on the particular training set employed (or more precisely, on the specific procedure used to perform spectroscopic follow-up). In this section, we propose a variety of procedures to procure spectroscopically confirmed labels; in \S\ref{sec:results} we will analyse the sensitivity of our classification results to the training set used and determine the optimal strategy for spectroscopic SN follow-up. We phrase the problem in the following way: assuming that we have a fixed number of hours for spectroscopic follow-up, what is the optimal way to use that time?
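This budget question can be made concrete with a small sketch: given per-target integration times and a fixed total budget, a magnitude-limited strategy randomly follows up targets brighter than a cut until the time is spent. The lookup table, magnitudes, and helper names below are illustrative assumptions patterned on the follow-up scenario described in this section, not actual survey values:

```python
import random

# Simplified r-band magnitude -> integration time (minutes) lookup,
# patterned on the exposure-time table in this section (assumed values).
INTEG_MIN = {20: 1, 21: 2, 22: 5, 23: 20, 24: 100, 25: 600}

def magnitude_limited_followup(mags, mag_cut, budget_min, seed=1):
    """Randomly follow up SNe brighter than mag_cut until the
    spectroscopic budget (in minutes of integration time) is spent."""
    rng = random.Random(seed)
    pool = [m for m in mags if m <= mag_cut]
    rng.shuffle(pool)
    chosen, spent = [], 0
    for m in pool:
        cost = INTEG_MIN[round(m)]
        if spent + cost > budget_min:
            break
        chosen.append(m)
        spent += cost
    return chosen, spent

# 200 synthetic peak magnitudes between 20 and 25.5.
rng = random.Random(0)
mags = [round(rng.uniform(20.0, 25.5), 1) for _ in range(200)]
sample, minutes_used = magnitude_limited_followup(
    mags, mag_cut=24.0, budget_min=24000)
```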
To simplify our study, we assume that all follow-up is performed on an 8m telescope with spectroscopic integration times given in Table \ref{tab:spectimes}, based on simplified simulations under average conditions using the FOcal Reducer and low dispersion Spectrograph for the Very Large Telescope (VLT FORS) exposure time calculator\footnote{\url{http://www.eso.org/observing/etc/bin/gen/form?INS.NAME=FORS2+INS.MODE=spectro}}. The amount of spectroscopic follow-up time necessary for each target is determined by the integration time (Table \ref{tab:spectimes}) corresponding to its $r$-band maximum.\footnote{For non-integer magnitudes, the integration times are determined by a quadratic interpolation function.} These integration times are approximate figures intended only as a means of constructing simulated spectroscopic training sets. \begin{table} \centering \caption{Spectroscopic integration times assumed for follow-up observations of supernovae, based on average conditions at the VLT using the FORS instrument.} \begin{tabular}{@{}cc@{}} \hline {$r$-band mag} & {integration time (minutes)}\\ \hline 20 & 1\\ 21 & 2\\ 22 & 5\\ 23 & 20\\ 24 & 100\\ 25 & 600\\ 25.5 & 1500\\ \hline \label{tab:spectimes} \end{tabular} \end{table} Under this scenario, the spectroscopic data in $\mathcal{S}$ require a total of 24,000 minutes (400 hours) of integration time. Assuming this amount of follow-up time\footnote{Note that we fix the total amount of \emph{integration} time needed, ignoring the time lost to overheads, weather, and other effects.}, we construct alternate training sets using each of the following procedures: \begin{enumerate} \item Observe SNe in order of decreasing brightness. This strategy observes only the brightest objects and allows us to obtain spectra for the maximal number of supernovae. Call this $\mathcal{S}_B$. \item Perform an ($r$-band) magnitude-limited survey, down to a prespecified magnitude cutoff.
Here, we randomly sample objects brighter than the magnitude cut, until the observing time is filled. We experiment with four different cutoffs: magnitudes 23.5, 24, 24.5, and 25. Call these $\mathcal{S}_{m,23.5},\mathcal{S}_{m,24},\mathcal{S}_{m,24.5}$, and $\mathcal{S}_{m,25}$. \item Perform a redshift-limited survey. We try two different redshift cutoffs: $z=0.4$ and $z=0.6$. Call these $\mathcal{S}_{z,0.4}$ and $\mathcal{S}_{z,0.6}$. \end{enumerate} The magnitude- and redshift-limited surveys both have a random component. To quantify the effects that this randomness has on the samples and the ultimate supernova typing, we construct 15 data sets from each spectroscopic ``survey''. In Table \ref{tab:datasets}, we display the median number of SNe from each of $\mathcal{S}$ and $\mathcal{P}$ contained in each spectroscopic training set. Note that as the limits of the survey get fainter (and extend to higher $z$), the ratio of elements from $\mathcal{P}$ to $\mathcal{S}$ increases, as the total number of SNe decreases. Table \ref{tab:datasetsComposition} shows the median class composition of each training set. Here, the deeper training sets more closely resemble the class distribution in $\mathcal{D}$. We will return to these training sets in \S\ref{sec:results}. \begin{table*} \centering \caption{Composition of the training datasets, broken down by the SN Challenge spectroscopic/photometric designation. In the first two rows, each cell's entry shows the number of elements in the set that is the intersection of $\mathcal{S}$ or $\mathcal{P}$ with the respective training set.
In the third row, the total number of objects in each training set is given.} \begin{tabular}{@{}c|rrrrrrrrr@{}} \hline {Set} & {$\mathcal{S}$} & {$\mathcal{S}_{B}$} & {$\mathcal{S}_{m,23.5}$}& {$\mathcal{S}_{m,24}$} & {$\mathcal{S}_{m,24.5}$} & {$\mathcal{S}_{m,25}$} & {$\mathcal{S}_{z,0.4}$} & {$\mathcal{S}_{z,0.6}$} & {$\mathcal{D}$} \\ \hline $\mathcal{S}$ & 1103 & 686 & 294 & 73 & 26 & 11 & 44 & 15 & 1103\\ $\mathcal{P}$ & 0 & 1765 & 979 & 508 & 272 & 155 & 240 & 117 & 19792\\ Total & 1103 & 2451 & 1273 & 587 & 302 & 165 & 284 & 135 & 20895\\ \hline \label{tab:datasets} \end{tabular} Median values are displayed over 15 training sets. Only for $\mathcal{S}$ and $\mathcal{S}_B$ are the training sets identical on each iteration. \end{table*} \begin{table*} \centering \caption{Composition of the training datasets, broken down by SN type.} \begin{tabular}{@{}c|rrrrrrrrr@{}} \hline {SN Type} & {$\mathcal{S}$} & {$\mathcal{S}_{B}$} & {$\mathcal{S}_{m,23.5}$}& {$\mathcal{S}_{m,24}$} & {$\mathcal{S}_{m,24.5}$} & {$\mathcal{S}_{m,25}$} & {$\mathcal{S}_{z,0.4}$} & {$\mathcal{S}_{z,0.6}$} & {$\mathcal{D}$} \\ \hline Ia & 559 & 1313 & 557 & 178 & 75 & 39 & 45 & 22 & 5088\\ Ib/c & 15 & 18 & 13 & 5 & 4 & 2 & 7 & 1 & 259\\ Ib & 71 & 168 & 79 & 32 & 15 & 8 & 30 & 10 & 1438\\ Ic & 58 & 88 & 50 & 26 & 13 & 8 & 36 & 14 & 1104\\ IIn & 63 & 131 & 84 & 42 & 24 & 14 & 10 & 5 & 1939\\ II-P & 326 & 707 & 480 & 295 & 161 & 90 & 159 & 74 & 10642\\ II-L & 11 & 26 & 12 & 5 & 3 & 2 & 17 & 6 & 425\\ \hline \label{tab:datasetsComposition} \end{tabular} Median values are displayed over 15 training sets. Only for $\mathcal{S}$ and $\mathcal{S}_B$ are the training sets identical on each iteration. \end{table*} \subsection{Tuning the Classifier} \label{ss:classtune} We build a random forest classifier, $\widehat{h}$, by training on the diffusion map representation and known type of a set of spectroscopically confirmed SNe.
This classifier then allows us to predict the class of each newly observed SN light curve. In constructing such a classifier, there are three tuning parameters that must be chosen: \begin{enumerate} \item $\epsilon$, the diffusion map bandwidth in eq.~(\ref{eqn:dmap1}), \item $m$, the number of diffusion map coordinates used in the classifier, and \item $\gamma_{\rm Ia}$, the minimum proportion of random forest trees predicting that a SN is of Type Ia necessary for us to decide that the SN is Type Ia. \end{enumerate} In this section we describe how to choose these parameters in a statistically rigorous way. The SN Classification Challenge was based on correctly predicting the Type Ia supernovae. The Challenge used the Type Ia Figure of Merit (FoM) \begin{equation} \label{eqn:fom} \widehat{f}_{\rm Ia} = \frac{1}{N_{\rm Ia}^{\rm Total}} \frac{(N_{\rm Ia}^{\rm true})^2}{N_{\rm Ia}^{\rm true} + WN_{\rm Ia}^{\rm false}} \end{equation} where $N_{\rm Ia}^{\rm Total}$ is the total number of Type Ia SNe in the sample, $N_{\rm Ia}^{\rm true}$ is the number of Type Ia SNe correctly predicted, and $N_{\rm Ia}^{\rm false}$ is the number of non-Ia SNe incorrectly predicted to be Type Ia. The factor $W$ controls the relative penalty on false positives over false negatives. For the SN Photometric Classification Challenge, $W\equiv 3$. This penalty on Type Ia purity means that we need to be conservative in calling a SN a Type Ia: we are penalized three times more by calling a non-Ia a Type Ia than by calling a Type Ia a non-Ia. Since the Challenge gives an explicit criterion (equation \ref{eqn:fom}), we choose the tuning parameters that maximize $\widehat{f}_{\rm Ia}$. To avoid overfitting to the training set, we maximize a 10-fold cross-validation estimate of $\widehat{f}_{\rm Ia}$ and call this maximum value $f^*_{\rm Ia}$.
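Since the tuning targets eq.~(\ref{eqn:fom}) directly, it is worth seeing how little code the criterion requires; \texttt{fom\_ia} is a hypothetical helper and the counts below are invented:

```python
def fom_ia(n_total, n_true, n_false, W=3.0):
    """Type Ia figure of merit: the efficiency N_true/N_total times
    the W-penalized pseudo-purity N_true/(N_true + W*N_false).
    W = 3 for the SN Photometric Classification Challenge."""
    return n_true**2 / (n_total * (n_true + W * n_false))

# Toy classifier output (made-up counts): 100 Type Ia SNe in the
# sample, 80 recovered, 10 non-Ia SNe misclassified as Ia.
print(fom_ia(100, 80, 10))
```

Note how the $W=3$ penalty rewards conservative Ia calls: each false positive costs three times as much as a missed Ia.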
We find that $f^*_{\rm Ia}$ is insensitive to the value of $m$ for a large enough $m$, as the random forest largely ignores irrelevant components, so for the rest of the analysis we fix $m = 120$. To find the optimal model, $(\epsilon^*,\gamma_{\rm Ia}^*)$, we perform a two-dimensional grid search over $(\epsilon,\gamma_{\rm Ia})$. Once this optimal model is discovered by maximizing the cross-validated $\widehat{f}_{\rm Ia}$, it is applied to the photometric sample to predict the class of each supernova. \subsection{Incorporating Redshift Information} \label{ss:redshiftDescription} In addition to observed light curves, host-galaxy (photometric) redshift estimates, $z$, are often available. If this information is known, it should be included in the supernova analysis. We may incorporate redshift information in one of two ways: \begin{itemize} \item directly in the calculation of the pairwise distance metric, $s$; or \item as an additional covariate for the classification model, $h$. \end{itemize} In the former approach, we artificially inflate the distance measure between two SNe, $i$ and $j$, if \begin{equation} \label{eqn:hostz} \frac{\vert z_i - z_j \vert}{\sqrt{u_i^2 + u_j^2}} > n_s \,, \end{equation} where $u$ denotes the estimated redshift uncertainty, and $n_s$ is a fixed constant (e.g., 3). Using eq. (\ref{eqn:hostz}), we effectively deem two supernovae `dissimilar' if their redshifts are greater than $n_s$ standard deviations apart, even if their light curve distance, $s$, is small. This approach can help break redshift degeneracies in light curve shapes and colours. In the latter approach, we simply use $z$ as a covariate in the classification model in addition to the diffusion map coordinates. This treatment of redshift allows the classification model to directly learn the dependencies of supernova type on redshift.
However, this method relies on the assumption that the training SN distribution resembles that of the set to be classified (and, crucially, that the entire range of redshifts is captured by the training data). The first approach does not rely on such assumptions because the redshifts are employed, in an unsupervised manner, to alter the SN representation prior to learning a classification model. In Tables \ref{tab:zclassIa} and \ref{tab:zclassIIp}, we show the classification results, on the SN Challenge data, of incorporating redshift into our analysis using each of the two methods. The strategy of using redshifts to directly influence the diffusion map coordinates performs better than including the redshifts as a covariate in the classification model, and both methods of incorporating redshift information provide improved classifications (see \S\ref{ss:zres}). \subsection{Computational Resources} Though fitting the above-described method can take substantial computational resources, the splines, spectral decomposition, and random forest have stable, efficient, and highly optimized implementations that are readily available. The dominant computational cost is computing the distance matrix, with the bottleneck being the cross-correlation approach to estimating the time of maximum flux of an unpeaked SN light curve. \section{Results} \label{sec:results} In this section, we analyse 20,895 supernovae provided in the Supernova Photometric Classification Challenge using the methods introduced in \S\S \ref{sec:meth}-\ref{sec:snapp1}. \subsection{Diffusion Map Representation of SNe} The first step in this analysis is to compute the diffusion map representation of each SN. In Figure \ref{fig:impnoz} we plot diffusion map coordinates, coloured by true class, for the entire SN set, $\mathcal{D}$ (top) and spectroscopic set, $\mathcal{S}$ (bottom). These coordinates were computed without using host photometric redshifts.
Large discrepancies between the diffusion map distributions of these two sets are obvious. The left panels of Figure \ref{fig:impnoz} show the first two diffusion coordinates, where the separation between the Type I and Type II SNe is clear. The right panels plot the two most important coordinates (3 \& 7), as estimated by a random forest classifier trained only on the spectroscopic data. With respect to these two coordinates, there is separation between the Type Ia and Ibc supernovae in the spectroscopic set. \begin{figure} \includegraphics[angle=-90,width=3.5in]{figure5a.ps}\\ \includegraphics[angle=-90,width=3.5in]{figure5b.ps} \caption{ Top: Diffusion coordinates for all (spectroscopic+photometric) SNe, of Type Ia (red), IIn+II-P+II-L (green), and Ibc+Ib+Ic (blue). Bottom: Diffusion coordinates for only the spectroscopic SN sample. Left: In the first two coordinates, there is clear separation between the Type II and Type I SNe. Right: Plotted are the two most important diffusion coordinates for SN classification, as determined by a random forest classifier built on the training set. Clear separations between SN types in the spectroscopic set are not present in the combined set.} \label{fig:impnoz} \end{figure} In Fig. \ref{fig:lcdmap}, we plot the average 4-band light curves of the SNe within each of four regions in diffusion space. Stepping across this coordinate system (with respect to coordinates 3 and 7), we see that the relative strength of the $g$-band emission increases and the light curves become more plateaued. This corresponds to a gradual transition from Type Ia to Type II SNe. \begin{figure} \includegraphics[angle=0,width=3.5in]{figure6.ps}\\ \caption{ Top: Supernovae in the spectroscopic sample $\mathcal{S}$ show a separation between the various supernova types in diffusion coordinates 3 and 7.
Bottom: The average $griz$ light curves in each of the four boxes B1-B4 are plotted, revealing a gradual flattening of the light curves and an incremental increase in the relative strength of the $g$-band emission. } \label{fig:lcdmap} \end{figure} We also explore the behaviour, in diffusion space, of SN redshift. Fig. \ref{fig:zdmap} plots the first two diffusion coordinates of the 5088 Ia SNe, coloured by their true redshift. Even though we did not use any redshift information in the computation of these coordinates, we see a gradual trend in redshift with respect to this coordinate system, with an increase in $z$ as one moves diagonally from bottom right to top left. This means that the distance measure we constructed for the diffusion map captures the incremental changes with respect to redshift that occur in the light curves. Note that using the entire set of data to estimate the diffusion map coordinates has provided a fine enough sampling in redshift; this would generally not be the case if we were to employ only the spectroscopic SNe to build the diffusion map. This is a critical result because it shows that the main source of variability in diffusion space is due to SN type, and not redshift. Hence, we can use the diffusion map representation of the SNe directly, without having to estimate the redshift of each SN and correct for its effects on the light curves. In \S\ref{ss:zres} we will compare this result with using host-galaxy photo-$z$ estimates to directly alter the diffusion map coordinates. \begin{figure} \includegraphics[angle=-90,width=3.5in]{figure7.eps} \caption{ Redshift of all 5088 Type Ia supernovae, plotted in the first two diffusion map coordinates.
The true redshift varies gradually across diffusion space, even though this information was not used to construct the diffusion map.} \label{fig:zdmap} \end{figure} \subsection{Classification of Type Ia SNe} Given the diffusion map representation of the entire set of SNe, $\mathcal{D}$ (Fig. \ref{fig:impnoz}, top), and a training set (see \S\ref{ss:trainset}), we train a classification model to predict the types of the remaining supernovae. To achieve this, we tune the classifier on our training set, as in \S\ref{ss:classtune}, and then apply this classifier to $\mathcal{P}$. In Table \ref{tab:classIa} we display the results of Type Ia SN classification with the eight training datasets described in \S\ref{ss:trainset}. For each training set, we report the optimal training set Type-Ia FoM, $f^*_{\rm Ia}$, and tuning parameters, $(\epsilon^*,\gamma_{\rm Ia}^*)$, and the Type-Ia FoM, $\widehat{f}_{\rm Ia, pred}$, of the predictions for $\mathcal{P}$.\footnote{For each training set, we evaluate $\widehat{f}_{\rm Ia, pred}$ for only those points in $\mathcal{P}$ not included in the training set.} Additionally, we show the purity and efficiency of Type-Ia classification, defined as \begin{eqnarray} \widehat{p}_{\rm Ia} = \frac{N_{\rm Ia}^{\rm true}}{N_{\rm Ia}^{\rm true} + N_{\rm Ia}^{\rm false}} \end{eqnarray} and \begin{eqnarray} \widehat{e}_{\rm Ia} = \frac{N_{\rm Ia}^{\rm true}}{N_{\rm Ia}^{\rm Total}}. \end{eqnarray} Note that all entries in Table \ref{tab:classIa} are median values over 15 repetitions of each training set.\footnote{Training sets $\mathcal{S}$ and $\mathcal{S}_B$ are the same for each iteration, but results differ slightly on each iteration due to randomness in the random forest classifier.} Results of this experiment show that the deeper magnitude-limited follow-up strategies perform the best, achieving an $\widehat{f}_{\rm Ia, pred}$ of 0.305, a value 2.4 times the FoM achieved by the classifier trained on $\mathcal{S}$.
For the $\mathcal{S}_{m,25}$ training procedure, $(\widehat{p}_{\rm Ia, pred},\widehat{e}_{\rm Ia, pred})=(0.72,0.65)$. For each training set, the cross-validated Type Ia figure of merit estimated on the training data is significantly larger than the figure of merit achieved on the photometric data, showing that none of the training sets completely resembles $\mathcal{P}$. Notably, on the Challenge training set, $\mathcal{S}$, our method achieves a cross-validated Type Ia purity/efficiency of 95\%/87\%, but this transfers to a purity/efficiency of 50\%/50\% on the data in $\mathcal{P}$. Figure \ref{fig:Iaphot} displays the distribution of Type Ia FoM, purity, and efficiency for each of the spectroscopic follow-up procedures. A few observations: \begin{itemize} \item The deeper magnitude-limited surveys perform the best in terms of FoM. The obvious trend is that deeper surveys perform better, even though their training sets are much smaller. For instance, the 23.5th magnitude-limited survey contains, on average, 7.7 times the number of training SNe as the 25th magnitude-limited survey, but attains a predicted FoM only 19\% as large. \item Compared to the Challenge training set, $\mathcal{S}$, the 25th magnitude-limited survey attains a 44\% increase in purity and 30\% increase in efficiency of Type Ia SNe. \item The worst strategy is to follow up only the brightest SNe. Though this maximizes the number of labelled supernovae, it produces the smallest figure of merit. \item Redshift-limited surveys are inferior to magnitude-limited surveys. Though this strategy results in Type Ia samples with high purity, the efficiency of these samples is very low. A redshift-limited survey is not ideal for our approach because we have not directly modeled the redshift dependence of SN light curves. Without the ability to capture high-$z$ SNe, our model becomes overly conservative, resulting in low efficiency.
\item Though there can be large variability in the actual samples produced by the magnitude-limited surveys, the Type Ia FoM is very stable, with FoM interquartile range typically smaller than 0.05. \end{itemize} Based on this study, we recommend that deeper magnitude-limited follow-up strategies be used to attain SN training samples. Using a 25th magnitude-limited follow-up procedure yields a Type Ia FoM 2.4 times that of the shallower procedure used in the SN Challenge. \begin{table} \centering \caption{Results of Classifying Type Ia Supernovae using training sets from \S\ref{ss:trainset}.\label{tab:classIa}} \begin{tabular}{@{}l|cccccc@{}} \hline Training Set & $\epsilon^*$ & $\gamma_{\rm Ia}^*$ & $f^*_{\rm Ia}$ & $\widehat{f}_{\rm Ia, pred}$ & $\widehat{p}_{\rm Ia, pred}$ & $\widehat{e}_{\rm Ia, pred}$ \\ \hline $\mathcal{S}$ & 1.4 & 0.58 & 0.757 & 0.126 & 0.503 & 0.497\\ $\mathcal{S}_B$ & 1.4 & 0.65 & 0.840 & 0.011 & 0.240 & 0.125\\ $\mathcal{S}_{m,23.5}$ & 1.4 & 0.54 & 0.796 & 0.058 & 0.404 & 0.316\\ $\mathcal{S}_{m,24}$ & 1.0 & 0.46 & 0.728 & 0.155 & 0.605 & 0.459\\ $\mathcal{S}_{m,24.5}$ & 1.0 & 0.45 & 0.610 & 0.250 & 0.730 & 0.501\\ $\mathcal{S}_{m,25}$ & 1.0 & 0.37 & 0.494 & {\bf 0.305} & 0.724 & 0.654\\ $\mathcal{S}_{z,0.4}$ & 1.4 & 0.41 & 0.664 & 0.061 & 0.896 & 0.085\\ $\mathcal{S}_{z,0.6}$ & 1.2 & 0.29 & 0.600 & 0.112 & 0.772 & 0.249\\ \hline \end{tabular} $f^*_{\rm Ia}$ is computed on training set via 10-fold cross-validation.\\ $\widehat{f}_{\rm Ia, pred}$, $\widehat{p}_{\rm Ia, pred}$, and $\widehat{e}_{\rm Ia, pred}$ are evaluated on all data in the photometric set $\mathcal{P}$ not in the training set. \end{table} \begin{figure} \includegraphics[angle=0,width=3.25in]{figure8.eps} \caption{ Performance on Type Ia SNe in $\mathcal{P}$ of classifiers trained on spectroscopically confirmed supernovae from 8 different follow-up procedures. Top: Type Ia Figure of Merit, $\widehat{f}_{\rm Ia, pred}$. Bottom: Type Ia purity and efficiency.
Boxplots show the distribution of each metric over 15 training sets. The obvious winner is the deeper magnitude-limited survey, which achieves median purity and efficiency between 65 and 75\%.} \label{fig:Iaphot} \end{figure} \subsection{Type II-P Classification} Type II-P supernovae are also useful for cosmological studies because they can be used as standard candles (\citealt{pozn2010}). Here, we classify Type II-P supernovae in the SN Challenge data set using the above methods and an analogous Type II-P figure of merit. In Table \ref{tab:classIIp} we display the results of Type II-P supernova classification with each of the 8 training sets, and in Fig. \ref{fig:IIpphot} we plot the distribution of the Type II-P FoM, purity, and efficiency with respect to each spectroscopic follow-up strategy. Much like the Ia study, we find that the deeper magnitude-limited surveys perform the best. We find that in terms of Type II-P figure of merit, a 24.5th magnitude-limited survey performs the best, achieving $\widehat{f}_{\rm IIP, pred}=0.586$, with purity and efficiency at 87\% and 84\%, respectively. Qualitatively, the Type II-P figures of merit resemble those of the Type Ia study. For each training set, the purity of the classifications is quite high, above 80\%, while the efficiency differs greatly, from a minimum of 32\% for $\mathcal{S}$ to a maximum of 86\% for $\mathcal{S}_{m,25}$. Compared to the training set, $\mathcal{S}$, used in the SN Challenge, a 24.5th magnitude-limited survey achieves only a slightly better II-P purity, but a 2.6-fold increase in II-P efficiency.
\begin{table} \centering \caption{Results of Classifying Type II-P Supernovae using training sets from \S\ref{ss:trainset}.\label{tab:classIIp}} \begin{tabular}{@{}l|cccccc@{}} \hline Training Set & $\epsilon^*$ & $\gamma^*_{\rm IIP}$ & $f^*_{\rm IIP}$ & $\widehat{f}_{\rm IIP, pred}$ & $\widehat{p}_{\rm IIP, pred}$ & $\widehat{e}_{\rm IIP, pred}$ \\ \hline $\mathcal{S}$ & 1.6 & 0.55 & 0.834 & 0.218 & 0.866 & 0.319\\ $\mathcal{S}_B$ & 1.6 & 0.49 & 0.835 & 0.203 & 0.820 & 0.337\\ $\mathcal{S}_{m,23.5}$ & 1.6 & 0.54 & 0.826 & 0.286 & 0.865 & 0.430\\ $\mathcal{S}_{m,24}$ & 1.2 & 0.59 & 0.791 & 0.491 & 0.896 & 0.648\\ $\mathcal{S}_{m,24.5}$ & 1.0 & 0.52 & 0.747 & {\bf 0.586} & 0.868 & 0.835\\ $\mathcal{S}_{m,25}$ & 1.0 & 0.48 & 0.593 & 0.532 & 0.845 & 0.862\\ $\mathcal{S}_{z,0.4}$ & 1.4 & 0.57 & 0.745 & 0.289 & 0.844 & 0.456\\ $\mathcal{S}_{z,0.6}$ & 1.2 & 0.55 & 0.660 & 0.383 & 0.838 & 0.673\\ \hline \end{tabular} $f^*_{\rm IIP}$ is computed on training set via 10-fold cross-validation.\\ $\widehat{f}_{\rm IIP, pred}$, $\widehat{p}_{\rm IIP, pred}$, and $\widehat{e}_{\rm IIP, pred}$ are evaluated on all data in the photometric set $\mathcal{P}$ not in the training set. \end{table} \begin{figure} \includegraphics[angle=0,width=3.25in]{figure9.eps} \caption{ Same as Figure \ref{fig:Iaphot} for Type II-P classification. Here, a 24.5th magnitude-limited survey attains the maximal II-P figure of merit. Each spectroscopic training strategy results in a high II-P purity, but very different classification efficiencies. } \label{fig:IIpphot} \end{figure} \subsection{Incorporating Host Redshift} \label{ss:zres} Finally, we study the performance of the two methods of incorporating host-galaxy redshifts (\S\ref{ss:redshiftDescription}). In Tables \ref{tab:zclassIa} and \ref{tab:zclassIIp} we show the results of classifying Type Ia and II-P SNe, respectively, using each of the two strategies for incorporating redshifts. Results are shown for each of the 8 training sets, where the optimal redshift cutoff, $n_s$ in eq.
(\ref{eqn:hostz}), was chosen by maximizing the cross-validated training set FoM over a grid of integer values from 2 to 6. There is a clear improvement to the FoMs from including host-galaxy redshifts. Compared to the non-redshift results in Tables \ref{tab:classIa}-\ref{tab:classIIp}, the FoM values increase for every training set when redshift is used to alter the diffusion map coordinates. Using redshifts to alter the diffusion map coordinates consistently performs better than using redshift as a covariate. Overall, the best strategy for including host-galaxy redshifts for Type Ia classification is to use eq. (\ref{eqn:hostz}) with $n_s=2$ on the training set $\mathcal{S}_{m,25}$. This prescription yields a Type Ia FoM of 0.355, an improvement of 16\% over the best FoM without redshift. For Type II-P classification, the best strategy is to use $n_s=4$, resulting in a FoM of 0.612, which represents an improvement of 4.4\%. We plot the Type Ia FoM, purity, and efficiency as a function of redshift in Figure \ref{fig:fomz} for the analysis without using photometric redshifts, and in Figure \ref{fig:fomz4} for the analysis incorporating photo-z's with $n_s=2$. Within each of 7 equally sized redshift bins, the median performance measures are plotted. By incorporating host redshifts, the magnitude-limited training set achieves improved performance at both the lowest and highest redshifts, where the purity of the Type Ia sample increases significantly. This highlights the tremendous value that host redshifts have in Type Ia classification, especially in breaking degeneracies between supernova type and redshift. \begin{figure} \includegraphics[angle=0,width=3.25in]{figure10.eps} \caption{ Performance of Type Ia classification as a function of redshift, for classification without using host-galaxy photometric redshifts.
In 7 equal sized redshift bins, the median FoM (top), purity (bottom left), and efficiency (bottom right) are plotted for three different training sets. All three training sets result in poor estimates for high redshift SNe. The magnitude-limited training set $\mathcal{S}_{m,25}$ (solid line) performs poorly for low redshifts, but the best for redshifts $> \sim 0.5$. The Challenge training set ($\mathcal{S}$, dashed line) performs poorly for all redshifts, while the redshift-limited training set $\mathcal{S}_{z,0.6}$ (dotted line) performs the best for low redshifts but declines sharply to zero after reaching its redshift limit. \label{fig:fomz} \end{figure} \begin{figure} \includegraphics[angle=0,width=3.25in]{figure11.eps} \caption{ Same as Figure \ref{fig:fomz}, using photometric redshifts to alter the diffusion map coordinates, with $n_s=2$. The magnitude-limited training set $\mathcal{S}_{m,25}$ (solid line) performs similarly across all redshifts except the highest redshift bin, besting the figure of merit for the Challenge training set ($\mathcal{S}$, dashed line) for six of the seven redshift bins. Performance for the redshift-limited training set $\mathcal{S}_{z,0.6}$ (dotted line) declines sharply after reaching the redshift limit.} \label{fig:fomz4} \end{figure} \begin{table} \centering \caption{Results of Classifying Type Ia Supernovae incorporating host redshifts. \label{tab:zclassIa}} \begin{tabular}{@{}l|ccccccc@{}} \hline Tr. 
Set & $n^*_s$ & $\epsilon^*$ & $t_{\rm Ia}^*$ & $f^*_{\rm Ia}$ & $\widehat{f}_{\rm Ia, pred}$ & $\widehat{p}_{\rm Ia, pred}$ & $\widehat{e}_{\rm Ia, pred}$ \\ \hline $\mathcal{S}$ &2 & 1.1 & 0.53 & 0.871 & 0.249 & 0.539 & 0.9\\ &-- & 1.2 & 0.59 & 0.84 & 0.131 & 0.446 & 0.584\\ $\mathcal{S}_B$ &6 & 1.2 & 0.55 & 0.903 & 0.029 & 0.24 & 0.308\\ &-- & 1.6 & 0.58 & 0.899 & 0.014 & 0.18 & 0.213\\ $\mathcal{S}_{m,23.5}$ &6 & 1.4 & 0.53 & 0.873 & 0.108 & 0.463 & 0.482\\ &-- & 1.4 & 0.57 & 0.848 & 0.06 & 0.369 & 0.37\\ $\mathcal{S}_{m,24}$ &4 & 1 & 0.47 & 0.799 & 0.252 & 0.726 & 0.564\\ &-- & 1.4 & 0.51 & 0.733 & 0.153 & 0.633 & 0.463\\ $\mathcal{S}_{m,24.5}$ &5 & 1.2 & 0.44 & 0.689 & 0.315 & 0.769 & 0.594\\ &-- & 1 & 0.43 & 0.649 & 0.232 & 0.716 & 0.503\\ $\mathcal{S}_{m,25}$ &2 & 1.2 & 0.39 & 0.615 & {\bf 0.355} & 0.758 & 0.741\\ &-- & 1 & 0.39 & 0.54 & 0.308 & 0.732 & 0.688\\ $\mathcal{S}_{z,0.4}$ &4 & 1.1 & 0.45 & 0.649 & 0.065 & 0.923 & 0.078\\ &-- & 1.6 & 0.41 & 0.671 & 0.058 & 0.87 & 0.084\\ $\mathcal{S}_{z,0.6}$ &4 & 1.4 & 0.39 & 0.562 & 0.104 & 0.831 & 0.206\\ &-- & 1.6 & 0.32 & 0.591 & 0.116 & 0.761 & 0.257\\ \hline \end{tabular} $n_s=$ -- indicates that host-galaxy redshift is used as a covariate in the Random Forest classifier, and not to construct diffusion map.\\ $f^*_{\rm Ia}$ is computed on training set via 10-fold cross-validation.\\ $\widehat{f}_{\rm Ia, pred}$, $\widehat{p}_{\rm Ia, pred}$, and $\widehat{e}_{\rm Ia, pred}$ are evaluated on all data in the photometric set $\mathcal{P}$ not in the training set. \end{table} \begin{table} \centering \caption{Results of Classifying Type II-P Supernovae incorporating host redshifts. \label{tab:zclassIIp}} \begin{tabular}{@{}l|ccccccc@{}} \hline Tr. 
Set & $n^*_s$ & $\epsilon^*$ & $t^*_{\rm IIP}$ & $f^*_{\rm IIP}$ & $\widehat{f}_{\rm IIP, pred}$ & $\widehat{p}_{\rm IIP, pred}$ & $\widehat{e}_{\rm IIP, pred}$\\ \hline $\mathcal{S}$ &6 & 1.6 & 0.55 & 0.829 & 0.221 & 0.944 & 0.261\\ &-- & 1.6 & 0.55 & 0.862 & 0.219 & 0.919 & 0.275\\ $\mathcal{S}_B$ &6 & 1.4 & 0.57 & 0.795 & 0.189 & 0.946 & 0.221\\ &-- & 1.8 & 0.52 & 0.85 & 0.174 & 0.889 & 0.235\\ $\mathcal{S}_{m,23.5}$ &4 & 1.1 & 0.56 & 0.821 & 0.272 & 0.913 & 0.355\\ &-- & 1.8 & 0.55 & 0.849 & 0.294 & 0.882 & 0.408\\ $\mathcal{S}_{m,24}$ &6 & 1.6 & 0.57 & 0.818 & 0.584 & 0.923 & 0.735\\ &-- & 1.6 & 0.57 & 0.787 & 0.488 & 0.891 & 0.674\\ $\mathcal{S}_{m,24.5}$ &4 & 1 & 0.55 & 0.755 & {\bf 0.612} & 0.904 & 0.807\\ &-- & 1.4 & 0.54 & 0.727 & 0.569 & 0.881 & 0.789\\ $\mathcal{S}_{m,25}$ &4 & 1.1 & 0.51 & 0.694 & 0.582 & 0.889 & 0.835\\ &-- & 1.2 & 0.51 & 0.6 & 0.543 & 0.856 & 0.845\\ $\mathcal{S}_{z,0.4}$ &6 & 1.4 & 0.59 & 0.715 & 0.245 & 0.724 & 0.552\\ &-- & 1.6 & 0.57 & 0.754 & 0.315 & 0.836 & 0.493\\ $\mathcal{S}_{z,0.6}$ &4 & 1 & 0.55 & 0.671 & 0.298 & 0.779 & 0.606\\ &-- & 1.4 & 0.57 & 0.658 & 0.375 & 0.846 & 0.67\\ \hline \end{tabular} $n_s=$ -- indicates that host-galaxy redshift is used as a covariate in the Random Forest classifier, and not to construct diffusion map.\\ $f^*_{\rm IIP}$ is computed on training set via 10-fold cross-validation.\\ $\widehat{f}_{\rm IIP, pred}$, $\widehat{p}_{\rm IIP, pred}$, and $\widehat{e}_{\rm IIP, pred}$ are evaluated on all data in the photometric set $\mathcal{P}$ not in the training set. \end{table} \section{Summary and Conclusions} \label{sec:summary} In this paper, we introduce the first use of semi-supervised classification for supernova typing. Most of the previous methods have relied on template fitting. Only recently, due in large part to the Supernova Classification Challenge, have other statistics and machine learning methods been used for SN typing.
Our semi-supervised approach makes efficient use of the data by using {\bf all} photometrically observed supernovae to find an appropriate representation for their classification. We also show that the complex variation in SN light curves as a function of redshift is captured by this representation. In this manner, classification accuracy will improve both as the number of observed SNe grows and as parameters such as redshift, stretch, and reddening are sampled more densely. It is not clear how this adaptation can occur for existing methods, where either a fixed set of templates is used, or sets of summary statistics are extracted from each light curve independently. Another advantage of our approach is the flexibility in the choice of distance measure, $s$, in the diffusion map construction. In our analysis, we used only the shapes of the light curves and colours of the SN to define $s$ (eq.~\ref{eqn:distband}) and also showed how this distance can be modified if host-galaxy redshifts are available (eq. \ref{eqn:hostz}). When using a diffusion map for SN classification, each astronomer is free to use their own choice of $s$, and presumably more sophisticated distance measures, such as ones that include more context information or more accurately capture the inter-class differences in SN light curves, will perform better. In applying our semi-supervised classification approach to data from the SN Classification Challenge, we find that results are highly sensitive to the training set used. We proposed a few spectroscopic follow-up strategies, discovering that deeper magnitude-limited surveys obtain the best classification results--both for Type Ia and II-P SNe--despite accruing labels for a smaller number of supernovae. Results show that our methods are competitive with the entrants of the SN Challenge, as we obtain Type Ia purity/efficiency of 72\%/65\% on the photometric sample without using host redshifts, and 76\%/74\% using host redshifts.
We hesitate to compare directly with results from that challenge due to large differences in the corrected data set studied in this paper. Throughout this study, we attempted to avoid all use of SED templates. However, there is some physical knowledge that is impervious to SED template problems, such as cosmological time dilation. Indeed, our method of incorporating host-$z$ using equation (\ref{eqn:hostz}) does account for time dilation: even if a high-$z$ type Ia SN light curve resembles a low-$z$ type II light curve, the degeneracy is broken because the two are pulled apart in diffusion space since their redshifts are different. Thus, our conclusion that deeper magnitude-limited surveys produce better training sets is not simply an artifact of neglecting to explicitly model time dilation: deeper training sets still (tremendously) outperform shallower training sets even after incorporating host redshift (see Tables \ref{tab:zclassIa}--\ref{tab:zclassIIp}). The improved performance is likely due both to training on lower S/N data and to capturing SED-dependent effects at high $z$. Though our template-free method allows us to avert classification errors that arise through use of wrong or incomplete template bases, as a trade-off we cannot extend results to redshifts outside of our training data, and thus require deeper training sets. This inability of our method to extrapolate beyond the training set is exhibited by the poor performance at high redshifts for the redshift-limited training sample (see Figures \ref{fig:fomz} and \ref{fig:fomz4}). This suggests that our methods can be improved with increased use of physical modeling; indeed, the future may lie in a combination of both semi-supervised learning and template methods.
In a future work, we plan to investigate such \emph{hybrid} models, which combine the semi-supervised approach presented in this paper with physical SN models, in the hope of decreasing the dependence of our results on the specific training set employed and improving the overall classification accuracy. One possible approach is to include, along with all of the observed SNe, sets of supernovae simulated from different templates with different values of the relevant parameters. This would allow the observed supernova light curves, together with the templates, to determine the optimal diffusion map representation of the SNe. Then, in the supervised part of the algorithm, the classification model would be able to learn from both the spectroscopic data and the simulated supernovae, allowing the model to extend more widely (e.g., to higher redshifts than those sampled by the training set). \section*{Acknowledgments} J.W.R. acknowledges the generous support of a Cyber-Enabled Discovery and Innovation (CDI) grant (\#0941742) from the National Science Foundation. Part of this work was performed in the CDI-sponsored Center for Time Domain Informatics (\url{http://cftd.info}). P.E.F. and C.M.S. acknowledge NSF grant 0707059 and NASA AISR grant NNX09AK59G. D.P. is supported by an Einstein Fellowship.
\section{Introduction}\label{sec:intro} Extended defects play an important role in many materials applications, affecting electronic, functional and mechanical properties. The simplest two-dimensional extended defect is a surface created upon cleaving two halves of a crystal \cite{dross2007stress,sweet2016controlled,shim2018controlled}. The two halves of a crystal can also be rigidly sheared relative to each other, as occurs in many layered battery materials that undergo stacking sequence phase transformations upon intercalation \cite{radin2017role,kaufman2019understanding,van2020rechargeable}. Plastic deformation of crystals is mediated by the passage of dislocations that glide along a slip plane. While a dislocation is an extended one-dimensional defect, it often dissociates into a pair of partial dislocations that bound an extended two-dimensional stacking fault \cite{hull2011introduction}. Two-dimensional extended defects are also present in heterostructures of 2D materials \cite{mounet2018two}. 2D materials exhibit unique electronic properties that are absent in their three-dimensional counterparts. Furthermore, new electronic behavior can emerge when a pair of two-dimensional building blocks are twisted relative to each other \cite{shallcross2008quantum,mele2010commensuration,shallcross2010electronic}, as was recently demonstrated for graphene \cite{cao2018correlated,cao2018unconventional}. The twisting of a pair of stacked two-dimensional materials by an angle $\theta$ around an axis that is perpendicular to the sheets breaks the underlying translational symmetry of the 2D building blocks. A new field of ``twistronics'' has emerged that seeks to understand and exploit the electronic properties that arise upon twisting a pair of two-dimensional materials \cite{carr2020electronic}.
Here we describe a generalized framework and accompanying software package, called \mush{}\cite{mushgithub}, to facilitate the study of the thermodynamic and electronic properties of periodic two-dimensional defects in crystalline solids from first principles. We introduce the concept of a generalized cohesive zone model that simultaneously encapsulates the energy of decohesion and rigid glide of two halves of a crystal. We then detail how super cells and crystallographic models can be constructed to accommodate a pair of two-dimensional slabs that have been twisted relative to each other by an angle $\theta$. \mush{}\cite{mushgithub} automates the construction of crystallographic models of extended two-dimensional defects and of two-dimensional layered materials to enable (i) the study of surfaces, (ii) the calculation of cohesive zone models \cite{jarvis2001effects,friak2003ab,van2004thermodynamics,hayes2004universal,enrique2014solute,enrique2017traction,olsson2015role,olsson2016first,olsson2017intergranular} used as constitutive laws for fracture studies \cite{deshpande2002discrete, xie2006discrete,sills2013effect}, (iii) the calculation of generalized stacking fault energy surfaces needed as input for phase field \cite{koslowski2002phase,shen2003phase,shen2004incorporation,hunter2011influence,feng2018shearing} and Peierls-Nabarro \cite{bulatov1997semidiscrete,juan1996generalized,lu2000peierls,lu2000generalized,lu2001dislocation,lu2001hydrogen,lu2005peierls} models of dislocations and (iv) the construction of models of twist grain boundaries \cite{sutton1995} and twisted 2D materials \cite{carr2020electronic}. The crystallographic models generated by \mush{} can then be fed into first-principles electronic structure codes to calculate a range of thermodynamic and electronic properties. \mush{} consists of C++ and Python routines and draws on crystallographic libraries of the CASM software package \cite{thomas2013finite,puchala2013thermodynamics,van2018first}. 
\section{Mathematical descriptions}\label{sec:math} \subsection{Degrees of freedom} We consider displacements of two half crystals relative to a particular crystallographic plane {\bf P} as illustrated in \Cref{fig:schematics}. The half crystals could be two-dimensional materials such as a pair of graphene sheets or two-dimensional slabs of MoS$_2$. They could also be the bottom and top half of a macroscopic crystal that has a stacking fault or that is undergoing cleavage. There are several ways in which the two half crystals can be displaced. As schematically illustrated in \Cref{fig:shiftschematic}, they may be separated by a distance $d$ along a direction perpendicular to the plane {\bf P} and they can be made to glide relative to each other by a translation vector $\vec{\tau}$ parallel to the plane {\bf P}. One half can also be twisted relative to the other by an angle $\theta$ around a rotation axis $\vec{r}$ that is perpendicular to the plane {\bf P} as illustrated in \Cref{fig:twistschematic}. It is rare that uniform deformations across an infinite plane {\bf P} as illustrated in \Cref{fig:schematics} occur in actual materials processes. Nevertheless, the energy and electronic structure associated with such idealized deformations are crucial ingredients for a wide variety of mesoscale models of plastic deformation and fracture \cite{deshpande2002discrete, xie2006discrete,sills2013effect,koslowski2002phase,shen2003phase,shen2004incorporation,hunter2011influence,feng2018shearing,lu2000peierls,lu2000generalized,lu2001dislocation,lu2001hydrogen,lu2005peierls} and help understand emergent electronic properties in two-dimensional materials \cite{carr2020electronic}.
\begin{figure} \centering \subfloat[]{ \includegraphics[width=0.87\linewidth]{fig/shifted_blocks.pdf} \label{fig:shiftschematic} } \hfill \subfloat[]{ \includegraphics[width=0.7\linewidth]{fig/twisted_blocks.pdf} \label{fig:twistschematic} } \caption{(a) Two halves of a crystal separated by a distance $d$ can glide by a vector $\vec{\tau}$ parallel to the plane P. (b) The two halves of a crystal can also be twisted by an angle $\theta$ around a twist axis $\vec{r}$ perpendicular to the plane $P$.} \label{fig:schematics} \end{figure} \subsection{Mathematical expression for a generalized cohesive zone model} A generalized cohesive zone model describes the energy of a bicrystal as the two half crystals are rigidly separated by $d$ and translated relative to each other by $\vec{\tau}$. A convenient reference state for the energy scale is the bicrystal at infinite separation (i.e. $d\rightarrow{} \infty$). In this state, the energy of the bicrystal is independent of separation $d$ and translation $\vec{\tau}$. There are well-known and well tested functional forms for the dependence of the energy on separation \cite{rose1981universal,rose1983universal,rose1984universal,enrique2017traction,enrique2017decohesion}. A general relationship takes the form \cite{enrique2014solute,enrique2017traction} \begin{equation} u_{cz}(d,\vec{\tau})=-2\gamma\left[ 1+\frac{\delta}{\lambda}+\sum_{n=2}^{n_{max}} \alpha_{n}\left(\frac{\delta}{\lambda}\right)^{n}\right]e^{-\delta/\lambda} \label{eq:xuber} \end{equation} where $\delta = d-d_0$ measures the degree of separation relative to an equilibrium separation of $d_0$ and where the energy $u_{cz}$ is per unit area of the plane {\bf P}. The dependence of the energy on the translation vector $\vec{\tau}$ can be built into \Cref{eq:xuber} by making the parameters $\gamma$, $\lambda$ and $\alpha_{n}$ each functions of the translation vector $\vec{\tau}$: i.e. 
$\gamma\left(\vec{\tau}\right)$, $\lambda\left(\vec{\tau}\right)$ and $\alpha_{n}\left(\vec{\tau}\right)$. The equilibrium separation $d_{0}$ is also a function of $\vec{\tau}$ and corresponds to the minimum of $u_{cz}(\delta,\vec{\tau})$ as a function of $d$ at fixed $\vec{\tau}$. The dependence of $u_{cz}\left(\delta,\vec{\tau}\right)$ on $\vec{\tau}$ at fixed $\delta$ or $d$ is periodic and has the same 2-dimensional periodicity as that of the crystallographic plane {\bf P}. This means that the parameters that appear in \Cref{eq:xuber} are also periodic functions of $\vec{\tau}$. When all the $\alpha_{n}$ are set to zero, we recover the universal binding energy relation (UBER) that is able to describe the cohesive properties of metals with remarkable accuracy \cite{rose1981universal,rose1983universal,enrique2017decohesion}. The additional parameters $\alpha_{n}$ are necessary to capture deviations from the UBER form due to deviations from purely metallic bonding \cite{enrique2017traction}. The coefficient $2\gamma$ is related to the energy of cleaving a crystal into two bicrystals \begin{equation} 2\gamma\left(\vec{\tau}\right)=\frac{E_{cleaved}-E_{bulk}\left(\vec{\tau}\right)}{A_{surface}} \label{eq:surface_energy} \end{equation} where $E_{bulk}\left(\vec{\tau}\right)$ is the energy of the bicrystal at separation $d$=$d_0\left(\vec{\tau}\right)$ and translation $\vec{\tau}$, $E_{cleaved}$ is the energy of the bicrystals at infinite separation and $A_{surface}$ is the area of the exposed surfaces. $2\gamma\left(\vec{\tau}\right)$ corresponds to the minimum of $u_{cz}\left(\delta,\vec{\tau}\right)$ at each $\vec{\tau}$ with respect to interslab separation $\delta$ and is referred to as the generalized stacking fault energy (GSFE), also known as the $\gamma$ surface. 
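The xUBER form of \Cref{eq:xuber} is straightforward to evaluate numerically. A minimal Python sketch (illustrative names, not part of any released code) that recovers the UBER limit when all $\alpha_n$ vanish:

```python
import numpy as np

def u_cz(d, gamma, lam, d0, alphas=()):
    """Generalized cohesive zone energy per unit area, Eq. (xuber).

    gamma, lam, d0 are the tau-dependent parameters at a fixed translation;
    alphas holds (alpha_2, alpha_3, ...); an empty tuple recovers the UBER.
    """
    x = (d - d0) / lam                      # scaled separation delta/lambda
    poly = 1.0 + x
    for n, a_n in enumerate(alphas, start=2):
        poly += a_n * x**n
    return -2.0 * gamma * poly * np.exp(-x)

# At d = d0 the bracketed polynomial is 1, so u_cz = -2*gamma (the UBER
# minimum), and u_cz -> 0 as d -> infinity, consistent with the reference
# state of the bicrystal at infinite separation.
```

Fitting this function to first-principles energies at a grid of separations $d$ (for example with a least-squares routine) yields the parameters $d_0$, $\gamma$, $\lambda$ and $\alpha_n$ at each translation $\vec{\tau}$.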
The GSFE is an essential ingredient of mesoscale simulation techniques such as phase field models \cite{koslowski2002phase,shen2003phase,shen2004incorporation,hunter2011influence,feng2018shearing} and Peierls-Nabarro \cite{lu2000peierls,lu2000generalized,lu2001dislocation,lu2001hydrogen,lu2005peierls} models of dislocations. When cleaving a single crystal across a plane {\bf P} (as opposed to a bicrystal consisting of two different materials), it is common to set the origin of translations $\vec{\tau}$ at the shift coinciding with a perfect crystal (i.e. no stacking fault). In that case $\gamma_{0} = \gamma\left(\vec{\tau}=0\right)$ becomes equal to the surface energy for the crystallographic plane {\bf P} in the absence of any surface reconstructions \cite{thomas2010systematic}. The generalized cohesive zone model can serve as a constitutive law to describe the response of a solid ahead of a transgranular crack tip \cite{deshpande2002discrete, xie2006discrete,sills2013effect}. The elastic constant, $C$, along the direction of separation is a function of the parameters of \Cref{eq:xuber} according to \cite{enrique2017traction} \begin{equation} C=2d_{0}\frac{2\gamma}{\lambda^2}\left(\frac{1}{2}-\alpha_2\right) \label{eq:elastic_constant} \end{equation} Since the parameters $d_0$, $\gamma$, $\lambda$ and $\alpha_{n}$ of \Cref{eq:xuber} are periodic functions of $\vec{\tau}$, they can each be expressed as a Fourier series. For example, the $\vec{\tau}$ dependence of $\gamma$ can be expressed as \begin{equation} \gamma\left(\vec{\tau}\right)=\sum_{\vec{K}}\tilde{\gamma}_{\vec{K}}e^{-i\vec{K}\cdot\vec{\tau}} \label{eq:fourier} \end{equation} where the sum extends over $\vec{K}$ vectors of the two-dimensional reciprocal lattice of the two-dimensional unit cell of the crystallographic plane {\bf P}. The expansion coefficients, $\tilde{\gamma}_{\vec{K}}$, are the Fourier transform of $\gamma$.
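Given $\gamma$ evaluated on a uniform grid of translations within the $\vec{A}$-$\vec{B}$ cell, the Fourier series of \Cref{eq:fourier} can be evaluated at an arbitrary $\vec{\tau}$ with a discrete Fourier transform. A sketch in fractional coordinates (illustrative; numpy's FFT sign conventions are used, with any difference in exponent convention absorbed into the coefficients):

```python
import numpy as np

def fourier_interpolate(gamma_grid, fa, fb):
    """Evaluate the Fourier series of gamma at fractional translation (fa, fb).

    gamma_grid holds gamma sampled on an n1 x n2 uniform grid over the
    two-dimensional unit cell, so K.tau reduces to 2*pi*(k1*fa + k2*fb).
    """
    n1, n2 = gamma_grid.shape
    coeffs = np.fft.fft2(gamma_grid) / (n1 * n2)  # coefficients gamma~_K
    k1 = np.fft.fftfreq(n1, d=1.0 / n1)           # signed integer wave numbers
    k2 = np.fft.fftfreq(n2, d=1.0 / n2)
    phase = np.exp(2j * np.pi * np.add.outer(k1 * fa, k2 * fb))
    return np.real(np.sum(coeffs * phase))
```

Because the signed (smallest-magnitude) wave numbers are used, the interpolant is smooth and reproduces the grid values exactly; the same routine can interpolate $d_0$, $\lambda$ and the $\alpha_n$.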
\subsection{Extensions to account for twist}\label{sec:twist} The twisting of two halves of a crystal or of a pair of two-dimensional materials will generally break any translational symmetry that may have existed before. It is only for a subset of special twist angles $\theta$ that a super cell translational symmetry is preserved \cite{shallcross2008quantum,shallcross2010electronic,mele2010commensuration,silva2020exploring}. The energy of the bicrystal will not only depend on the relative separation $d$ and translation $\vec{\tau}$, but also on the choice of rotation axis $\vec{r}$ and twist angle $\theta$ \cite{silva2020exploring}. This dependence can be formulated generally as \begin{equation} u\left(d,\vec{\tau},\vec{r},\theta\right)= u_{cz}\left(d,\vec{\tau}\right)+u_{t}\left(d,\vec{\tau},\vec{r},\theta\right) \end{equation} where $u_{cz}$ is the reference energy in the absence of a twist, and can be expressed using a form such as Eq \ref{eq:xuber}. The function $u_{t}\left(d,\vec{\tau},\vec{r},\theta\right)$ accounts for the twist energy and is equal to zero when $\theta$=0. While the $\theta$ dependence of $u_{t}\left(d,\vec{\tau},\vec{r},\theta\right)$ could be represented as a Fourier series, as it is periodic in $\theta$, it may exhibit cusps and therefore not be a smooth function of $\theta$ \cite{sutton1987overview,sutton1995}. The expansion coefficients of such a Fourier series would be a function of $d$, $\vec{\tau}$ and $\vec{r}$. \section{Creating slab models}\label{sec:slab} Most electronic structure methods impose periodic boundary conditions on the crystallographic model. In this section, we describe how crystallographic models can be constructed to realize crystal separation, glide and twist within super cells that have periodic boundary conditions. 
\subsection{Constructing slab geometries to parameterize cohesive zone models}\label{sec:uber} A cohesive zone model such as \Cref{eq:xuber} can be parameterized by fitting to training data as calculated with a first-principles electronic structure method. It is possible to accommodate the periodic boundary conditions that these methods impose using a slab geometry as illustrated in \Cref{fig:slab}. The unit cell then consists of a slab of crystal with its periodic images separated by layers of vacuum parallel to the plane {\bf P}. For bulk crystals, the crystal slab must be sufficiently thick to avoid interactions between periodic images of the surfaces adjacent to the vacuum layer. \begin{figure} \centering \subfloat[]{ \includegraphics[width=0.7\linewidth]{fig/stack_intersection.png} \label{fig:slab} } \hfill \subfloat[]{ \includegraphics[width=\linewidth]{fig/shift_symmetry.pdf} \label{fig:shiftsym} } \caption{(a) A slab of fcc that has been cleaved along the (1,1,1) plane. The unit cell of the slab has had 3\AA{} of vacuum inserted above the exposed plane. The conventional unit cell of fcc is also shown for reference. (b) Symmetric equivalence of translation vectors parallel to the (1,1,1) plane of fcc. Spots with the same color correspond to translation vectors that generate symmetrically equivalent structures.} \label{fig:slabmodel} \end{figure} To construct a crystallographic model consisting of slabs parallel to a plane {\bf P}, it is necessary to first identify a super cell of the primitive cell vectors, $\vec{a}$, $\vec{b}$ and $\vec{c}$, such that two super cell vectors, $\vec{A}$ and $\vec{B}$ span the plane {\bf P}. The vectors $\vec{A}$ and $\vec{B}$ can be determined by connecting the origin to the closest non-collinear lattice points that lie on a plane parallel to {\bf P} that also passes through the origin. 
A third vector, $\vec{s}$, can then be chosen as the shortest translation vector of the parent crystal structure that is not in the plane {\bf P}. The vector $\vec{s}$ determines the smallest possible thickness of the slab. The thickness of the slab can be adjusted by multiplying $\vec{s}$ by an integer, $l$. It is usually desirable to translate the resulting vector $l\vec{s}$ parallel to {\bf P} (by an integer linear combination of $\vec{A}$ and $\vec{B}$) until its projection onto the plane {\bf P} falls within the unit cell spanned by $\vec{A}$ and $\vec{B}$. This new vector, $\vec{C}$, is then the third super cell vector of the slab model. The next task is to sample different values of slab separations $d$ and relative translations $\vec{\tau}$. Due to the translational periodicity of the crystal, only translation vectors $\vec{\tau}$ within a two-dimensional unit cell spanned by the $\vec{A}$ and $\vec{B}$ vectors of the super cell need to be considered. One approach is to generate a uniform grid of possible translation vectors $\vec{\tau}$ within the unit cell, as is illustrated in \Cref{fig:shiftsym} for an fcc crystal in which one half is sheared relative to the other along a (111) plane. The two half crystals often have additional symmetries that make a subset of the translations $\vec{\tau}$ equivalent to each other. Symmetric equivalence between two different translation vectors $\vec{\tau}_1$ and $\vec{\tau}_2$ can be ascertained by mapping the resultant crystals onto each other with a robust crystal mapping algorithm \cite{mapping_paper}. \Cref{fig:shiftsym} shows the orbits of equivalent translation vectors $\vec{\tau}$ for fcc by assigning the same color to all translation vectors that are equivalent by symmetry. The choice of super cell may break some symmetries of the underlying crystal, making symmetric equivalence of translation vectors $\vec{\tau}$ dependent on the particular super cell.
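The uniform grid of translation vectors can be generated directly from the super cell vectors $\vec{A}$ and $\vec{B}$. A minimal sketch (illustrative names; the reduction of the grid into symmetry orbits requires the space group operations and a crystal mapping step, which are omitted here):

```python
import numpy as np

def translation_grid(A, B, n):
    """Return an (n*n, 3) array of Cartesian translations tau = (i/n)A + (j/n)B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    fracs = np.arange(n) / n
    taus = [fa * A + fb * B for fa in fracs for fb in fracs]
    return np.array(taus)

# Example: a 3x3 grid over the (111) plane cell of fcc with a = 4.0 Angstrom.
# A and B are differences of fcc primitive vectors and lie in the (111) plane.
a = 4.0
A = np.array([0.0, a / 2, a / 2]) - np.array([a / 2, 0.0, a / 2])
B = np.array([0.0, a / 2, a / 2]) - np.array([a / 2, a / 2, 0.0])
grid = translation_grid(A, B, 3)   # 9 translations, including tau = 0
```

Each translation in the grid would then be applied to the upper slab before computing the energy versus separation curve at that $\vec{\tau}$.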
For each symmetrically distinct translation vector $\vec{\tau}$ it is possible to generate a grid of separations $d$ over increments of $\Delta d$. This can be realized by adding $\Delta d \mathbf{\hat{n}}$ to $\vec{C}$ while keeping the Cartesian coordinates of the atoms within the unit cell unchanged. The vector $\mathbf{\hat{n}}$ is a unit vector normal to the $\vec{A}$-$\vec{B}$ plane. The parameterization of a generalized cohesive zone model can occur in two steps. The first step is to calculate the energy of separation over a discrete set of separations $d$ for each symmetrically distinct translation vector $\vec{\tau}$. The resultant energy versus separation $d$ relation can then be fit to an xUBER relation, \Cref{eq:xuber}. The parameterization of an xUBER relation over all symmetrically distinct translation vectors $\vec{\tau}$ will generate numerical values for the adjustable parameters, $d_0$, $\gamma$, $\lambda$ and $\alpha_{n}$, of the xUBER, \Cref{eq:xuber}, over a uniform grid of translation vectors $\vec{\tau}$. The dependence of the adjustable parameters on $\vec{\tau}$ can then be represented with a Fourier series such as \Cref{eq:fourier}, thereby allowing for accurate interpolation at any translation vector $\vec{\tau}$. \subsection{Crystallographic models for twisted bicrystals}\label{sec:bicrystals} \begin{figure} \centering \subfloat[]{ \includegraphics[width=\linewidth]{fig/rotate5_moire_zoom.pdf} \label{fig:moire5} } \centering \subfloat[]{ \includegraphics[width=0.9\linewidth]{fig/rot21.79_real_new.pdf} \label{fig:rot21.79real} } \hfill \subfloat[]{ \includegraphics[width=0.9\linewidth]{fig/rot21.79_recip_new.pdf} \label{fig:rot21.79recip} } \caption{(a) Superposition of two triangular lattices in which one (green) has been rotated 5 degrees relative to the other one (brown). The resulting interference pattern forms the Moir\'e lattice. (b) The reciprocal lattices of the two triangular lattices of (a).
(c) The reciprocal lattice of the Moir\'e lattice (black vectors) can be constructed by taking the difference between the reciprocal lattice vectors of the rotated lattices (shown as gray vectors).} \label{fig:moireconstruct} \end{figure} In addition to separation and glide, it is also possible to twist the two halves of a bicrystal by rotating one half relative to the other around a rotation axis, $\vec{r}$, that is perpendicular to {\bf P}. It is increasingly recognized that interesting electronic properties can emerge when a pair of two-dimensional materials are rotated relative to each other in this manner \cite{cao2018correlated,cao2018unconventional,burch2018magnetism,carr2020electronic}. In bulk materials, special grain boundaries, referred to as twist boundaries, can be generated by twisting the top half of a crystal around an axis that is perpendicular to the grain boundary \cite{sutton1995}. A challenge for electronic structure calculations is to identify a super cell that is able to accommodate the twisted half crystals. A Moir\'e pattern emerges when a periodic two-dimensional lattice is rotated relative to a second periodic lattice. Figure \ref{fig:moire5}, for example, shows the emergence of a Moir\'e pattern after a pair of two-dimensional triangular lattices (brown and green) have been rotated relative to each other by 5$^{\circ}$. The Moir\'e pattern is itself periodic, but has lattice vectors that are usually much larger than those of the two-dimensional lattices that have been rotated. Furthermore, the lattice of the Moir\'e pattern is rarely commensurate with the lattices of the twisted half crystals. This is evident in Figure \ref{fig:moire5}, where the lattice points of the Moir\'e pattern, shown as the intersections of the grey lines, do not exactly overlap with sites of the twisted triangular lattices shown in brown and green. 
Only a subset of particular twist angles $\theta$ produce Moir\'e patterns that are commensurate with the twisted two-dimensional lattices \cite{shallcross2008quantum,shallcross2010electronic,mele2010commensuration,carr2020electronic}. The Moir\'e lattice can, nevertheless, serve as a guide to identify a super cell that can accommodate the twisted half crystals for first-principles electronic structure calculations. But since the Moir\'e lattices for most twist angles $\theta$ only approximately coincide with sites of the twisted lattices, both half crystals must usually be deformed slightly such that they can be accommodated within a common super cell. The lattice of the Moir\'e pattern can be determined by working in reciprocal space. Consider a three dimensional unit cell with lattice vectors $\vec{A}$, $\vec{B}$ and $\vec{C}$. Assume that the vectors $\vec{A}$ and $\vec{B}$ form the two-dimensional lattice parallel to the plane {\bf P} (e.g. a two-dimensional triangular lattice) and that the third vector $\vec{C}$ is perpendicular to {\bf P}. The rotation axis $\vec{r}$ is therefore parallel to $\vec{C}$. It is convenient to work with a $3\times3$ matrix $\mat{L} = [\vec{A}, \vec{B}, \vec{C}]$ where each lattice vector appears as a column of $\mat{L}$. The reciprocal lattice vectors of the lattice $\mat{L}$ are then the column vectors of the matrix $\mat{K}$ defined by \begin{equation} \mat{K}=2\pi \left( \mat{L}^{-1} \right)^\intercal \label{eq:reciprocal} \end{equation} The application of a rotation $\theta$ to a lattice $\mat{L}$ around a rotation axis that is parallel to $\vec{C}$ produces a lattice $\mat{L}_{\theta}$. The lattice vectors of the two-dimensional Moir\'e pattern, $\mat{M}$, will have a reciprocal lattice represented by the matrix $\mat{K}_{M}$ in which the upper left $2\times2$ block is equal to the corresponding difference between the reciprocal lattices of $\mat{L}$ and $\mat{L}_{\theta}$ (i.e. 
$\left[\mat{K}_{M}\right]_{i,j} = \left[\mat{K}\right]_{i,j} - \left[\mat{K}_{\theta}\right]_{i,j}$ for $i,j = 1,2$) and the third axis, which is unaffected by the rotation, is the same as that of $\mat{K}$ and $\mat{K}_{\theta}$ (i.e. $\left[\mat{K}_{M}\right]_{3,3}=\left[\mat{K}\right]_{3,3}=\left[\mat{K}_{\theta}\right]_{3,3}$). The Moir\'e lattice is then \begin{equation} \mat{M}=2\pi\left( \mat{K}_{M}^{-1}\right)^{\intercal} \label{eq:moire} \end{equation} This is illustrated in \Cref{fig:rot21.79real} and \Cref{fig:rot21.79recip} for a pair of triangular lattices. The green triangular lattice of \Cref{fig:rot21.79real} has been rotated by an angle $\theta$ relative to the brown triangular lattice. The reciprocal lattice vectors of the green and brown triangular lattices are shown in \Cref{fig:rot21.79recip}. The two grey vectors in \Cref{fig:rot21.79recip} are the differences of the reciprocal lattice vectors of the rotated green lattice and those of the fixed brown lattice. The grey vectors, when translated to the origin of reciprocal space, span the unit cell of the reciprocal lattice of the Moir\'e pattern. This is illustrated by the thick black arrows in \Cref{fig:rot21.79recip} with the grid representing the reciprocal lattice points of the Moir\'e lattice. (For large rotation angles $\theta$, it is possible that one of the reciprocal lattice vectors of $\mat{K}_{M}$ falls outside of the Wigner-Seitz cell of either $\mat{K}$ or $\mat{K}_{\theta}$. In these situations, the offending reciprocal lattice vector of $\mat{K}_{M}$ must be translated back into the Wigner-Seitz cell of either $\mat{K}$ or $\mat{K}_{\theta}$.) The example illustrated by \Cref{fig:rot21.79real} and \Cref{fig:rot21.79recip} is for a special angle $\theta$ for which the Moir\'e lattice is commensurate with the two rotated lattices. 
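The constructions of Eqs. \ref{eq:reciprocal} and \ref{eq:moire} are straightforward to reproduce numerically. The following sketch (plain numpy, not part of \mush{}; it assumes $\vec{C}$ is perpendicular to the $\vec{A}$-$\vec{B}$ plane so that $\mat{L}$ is block diagonal, and the triangular lattice at the end is a toy example) builds the Moir\'e lattice of a pair of rotated lattices:

```python
import numpy as np

def reciprocal(L):
    """Eq. (reciprocal): the columns of K are the reciprocal lattice vectors."""
    return 2.0 * np.pi * np.linalg.inv(L).T

def rot_z(theta):
    """Rotation by theta (radians) about the C (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def moire_lattice(L, theta):
    """Eq. (moire): the upper-left 2x2 block of K_M is K - K_theta and the
    third axis is carried over unchanged from K."""
    K = reciprocal(L)
    K_theta = reciprocal(rot_z(theta) @ L)
    K_M = np.zeros((3, 3))
    K_M[:2, :2] = K[:2, :2] - K_theta[:2, :2]
    K_M[2, 2] = K[2, 2]          # the C* axis is unaffected by the rotation
    return 2.0 * np.pi * np.linalg.inv(K_M).T

# Triangular lattice with a = 1 and an arbitrary out-of-plane period c = 10.
a, c = 1.0, 10.0
L = np.array([[a, a / 2.0, 0.0],
              [0.0, a * np.sqrt(3.0) / 2.0, 0.0],
              [0.0, 0.0, c]])
M = moire_lattice(L, np.radians(5.0))
```

For two identical lattices rotated by $\theta$, the in-plane Moir\'e lattice parameter works out to $a/(2\sin(\theta/2))$, roughly $11.5a$ at $5^{\circ}$, consistent with the large Moir\'e cells visible in \Cref{fig:moire5}.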
For these special angles, the reciprocal lattice vectors $\mat{K}$ and $\mat{K}_{\theta}$ (brown and green vectors in \Cref{fig:rot21.79recip}) coincide with sites of the reciprocal lattice of the Moir\'e lattice $\mat{K}_{M}$. Only a subset of twist angles produce Moir\'e lattices that are commensurate with the twisted lattices \cite{carr2020electronic}. In general, the Moir\'e lattice points do not exactly coincide with sites of either the $\mat{L}$ or $\mat{L}_{\theta}$ lattices, as illustrated by the example of \Cref{fig:moire5} for a pair of triangular lattices that have been rotated by 5$^{\circ}$. Nevertheless, the Moir\'e lattice can guide the search for a super cell that can simultaneously accommodate the twisted pair of two-dimensional lattices. The first task is to identify the super cells of $\mat{L}$ and $\mat{L}_{\theta}$ that are close to that of the Moir\'e lattice. A super cell of a lattice can be generated as an integer linear combination of the lattice vectors $\vec{A}$, $\vec{B}$, $\vec{C}$ according to \begin{equation} \mat{S}=\mat{L}\mat{T} \label{eq:transfmat} \end{equation} where $\mat{T}$ is a $3\times3$ integer matrix and where the columns of $\mat{S}$ contain the super cell lattice vectors. The sought-after integer matrix $\mat{T}$ is one that generates a super cell $\mat{S}$ that is closest to that of the Moir\'e pattern $\mat{M}$. This can be obtained by rounding each element of the matrix $\mat{L}^{-1}\mat{M}$ to the nearest integer. A similar matrix $\mat{T}_{\theta}$ must be determined by rounding the elements of $\mat{L}_{\theta}^{-1}\mat{M}$ to the nearest integer. The resulting super cells, $\mat{S}=\mat{L}\mat{T}$ and $\mat{S}_{\theta}=\mat{L}_{\theta}\mat{T}_{\theta}$, will usually not coincide exactly. 
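The rounding step just described amounts to a single line of linear algebra per half crystal. A minimal sketch (plain numpy; the cubic lattice and sheared target cell below are toy placeholders rather than output of \mush{}):

```python
import numpy as np

def match_supercell(L, M):
    """Integer matrix T of Eq. (transfmat) such that S = L @ T is the
    super cell of the lattice L that lies closest to the target cell M."""
    T = np.rint(np.linalg.inv(L) @ M).astype(int)
    return T, L @ T

# Toy example: a cubic lattice and a slightly sheared target cell.
L = np.eye(3)
M_target = np.array([[3.0, 1.1, 0.0],
                     [-0.9, 3.0, 0.0],
                     [0.0, 0.0, 1.0]])
T, S = match_supercell(L, M_target)
```

Applying the same step to $\mat{L}_{\theta}$ yields $\mat{T}_{\theta}$ and $\mat{S}_{\theta}$; the two resulting super cells generally differ slightly.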
However, they can both be strained and twisted slightly to a common super cell $\mat{\bar{S}}$ defined as the average of $\mat{S}$ and $\mat{S}_{\theta}$ according to \begin{equation} \mat{\bar{S}}=\frac{1}{2}\left(\mat{S}+\mat{S}_{\theta}\right) \end{equation} This super cell can be used to accommodate the twisted bicrystal. The amount of strain and twist needed to fit both halves of the bicrystal in $\mat{\bar{S}}$ can be calculated as follows. The bottom half of the bicrystal, with super cell $\mat{S}$, will need to be deformed according to \begin{equation} \mat{F}\mat{S}=\mat{\bar{S}} \end{equation} where $\mat{F}$, the deformation gradient, is a $3\times 3$ matrix. The deformation gradient $\mat{F}$ can be factored into a product of a symmetric stretch matrix $\mat{U}$ and a rotation matrix $\mat{R}$ according to $\mat{F}=\mat{R}\mat{U}$. The stretch matrix $\mat{U}$ describes the deformation of the crystal, while the rotation matrix $\mat{R}$ in this situation corresponds to a rotation around $\vec{C}$. A similar deformation gradient exists for the top half of the bicrystal with $\mat{F}_{\theta}\mat{S}_{\theta}=\mat{\bar{S}}$ and $\mat{F}_{\theta}=\mat{R}_{\theta}\mat{U}_{\theta}$. If the rotation angles of $\mat{R}$ and $\mat{R}_{\theta}$ are $\phi_{1}$ and $\phi_2$, respectively, then the two halves need to undergo an additional relative twist of $\Delta \theta = \phi_2-\phi_1$ to fit into $\mat{\bar{S}}$. The actual rotation angle when describing the twisted bicrystal with a periodic super cell $\mat{\bar{S}}$ is then $\theta +\Delta \theta$. Hence, it is generally not possible to realize the target twist angle of $\theta$ when using a periodic super cell to accommodate the twisted halves. Information about the strains is embedded in the stretch matrices $\mat{U}$ for the bottom half and $\mat{U}_{\theta}$ for the top half. 
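The decomposition $\mat{F}=\mat{R}\mat{U}$ and the rotation angles $\phi$ can be computed from a singular value decomposition. A sketch in plain numpy (it assumes $\det\mat{F}>0$, which holds for the small deformations relevant here; the deformation at the end is a toy example):

```python
import numpy as np

def polar_decompose(F):
    """Right polar decomposition F = R @ U (rotation times symmetric
    stretch), built from the SVD F = W diag(s) V^T.  Assumes det(F) > 0."""
    W, s, Vt = np.linalg.svd(F)
    R = W @ Vt
    U = Vt.T @ np.diag(s) @ Vt
    return R, U

def twist_angle(R):
    """Rotation angle about the C (z) axis encoded in a rotation matrix."""
    return np.arctan2(R[1, 0], R[0, 0])

# Toy deformation: a small in-plane stretch followed by a 2 degree rotation.
phi = np.radians(2.0)
Rz = np.array([[np.cos(phi), -np.sin(phi), 0.0],
               [np.sin(phi), np.cos(phi), 0.0],
               [0.0, 0.0, 1.0]])
U0 = np.diag([1.02, 0.99, 1.0])
F = Rz @ U0
R, U = polar_decompose(F)
```

Applied to $\mat{F}$ and $\mat{F}_{\theta}$, this recovers $\phi_1$ and $\phi_2$ and hence the twist correction $\Delta\theta = \phi_2 - \phi_1$.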
We use the Biot strain defined as $\mat{E}=\mat{U}-\mat{I}$ where $\mat{I}$ is the identity matrix \cite{thomas2017exploration}. The strain is restricted to the two-dimensional space parallel to the twist plane. We assume that this plane is parallel to the $\hat{x}$-$\hat{y}$ plane of the Cartesian coordinate system used to represent the lattice vectors. A convenient metric of the degree to which the two half crystals are strained is the square root of the sum of the squares of the eigenvalues of the strain matrices $\mat{E}$ and $\mat{E}_{\theta}$ (i.e. $\sqrt{\lambda_1^{2}+\lambda_{2}^{2}}$ where $\lambda_1$ and $\lambda_{2}$ are the non-zero eigenvalues of the strain matrices). \begin{figure} \centering \includegraphics[width=\linewidth]{fig/supermoire15.pdf} \caption{The Moir\'e lattice (gray) that emerges when a pair of triangular lattices have been rotated by 15 degrees, and one of its possible super cells (black). By using a super cell of the Moir\'e lattice, it is possible to generate a crystallographic model to accommodate a twisted bicrystal that requires less deformation.} \label{fig:supermoire} \end{figure} A refinement of the above approach can be used to lower the error in the target angle $\Delta \theta$ and the incurred strains. However, the improvement comes at the cost of requiring larger super cells. Instead of identifying super cells $\mat{S}$ and $\mat{S}_{\theta}$ of the lattices $\mat{L}$ and $\mat{L}_{\theta}$ that are as close as possible to the Moir\'e lattice $\mat{M}$, the super cells $\mat{S}$ and $\mat{S}_{\theta}$ can be matched to {\it super cells} of $\mat{M}$. This is illustrated in \Cref{fig:supermoire} for a pair of triangular lattices that are being rotated by a target angle of $\theta = 15^{\circ}$. When matching super cells $\mat{S}$ and $\mat{S}_{\theta}$ to the Moir\'e lattice $\mat{M}$ itself, as described above, the actual twist angle is $12.117^{\circ}$. 
In contrast, when matching the super cells $\mat{S}$ and $\mat{S}_{\theta}$ to a $\sqrt{3}a\times\sqrt{3}a$ super cell of the Moir\'e lattice $\mat{M}$, the actual rotation angle becomes 14.911$^{\circ}$, which is much closer to the target angle of 15$^{\circ}$. However, the common super cell that accommodates the twisted bicrystal is three times larger. There are special twist angles for which super cells of their Moir\'e lattice can be found that are commensurate with the underlying lattices of the bicrystals. For these special twist angles, $\Delta \theta$ and the strain order parameters will be zero. \Cref{fig:totallatticesites1000} plots the number of triangular lattice unit cells that are needed in the super cells of a subset of commensurate twist angles. It is clear that very large super cells are necessary for most commensurate twist angles. \begin{figure} \centering \includegraphics[width=\linewidth]{fig/total_lattice_sites_1000.pdf} \caption{Number of triangular lattice sites within the super cells of bilayers with a commensurate twist angle.} \label{fig:totallatticesites1000} \end{figure} \section{\mush{}}\label{sec:mush} The \mush{} software package constructs crystallographic models of extended two-dimensional defects that separate a pair of bicrystals as described in the previous section. This includes crystallographic models to parameterize generalized cohesive zone models such as Eq. \ref{eq:xuber} and crystallographic models of twisted bicrystals. \Cref{fig:flowchart} illustrates a schematic flowchart of \mush{}. The first step of the \mush{} workflow is to construct the slab building blocks. The user provides the primitive cell of a crystal ({\it prim} in \Cref{fig:flowchart}) and the Miller indices of the crystallographic plane {\bf P}. 
In the {\it slice} step, \mush{} constructs the thinnest super cell of {\it prim} with lattice vectors $\vec{A}$ and $\vec{B}$ parallel to the plane {\bf P} and the shortest vector $\vec{C}$ that is not in the plane {\bf P}. The resulting super cell contains the {\it slab unit} (\Cref{fig:flowchart}) that constitutes the fundamental building block of subsequent crystallographic models. The slab unit must next be made thicker when modeling three dimensional materials or stacked when modeling the twist of two-dimensional materials. The slab thickness is increased based on user input. This occurs with the {\it stack} step. Depending on the basis of the crystal, an additional {\it translate} step may be required, where the basis atoms in the slab unit are rigidly shifted, changing the position through which the plane {\bf P} penetrates the crystal. For example, when exploring the glide of CoO$_2$ sheets in a layered battery material such as $\mathrm{LiCoO_2}$, {\bf P} should extend between the oxide layers, not through them. At this point, the \mush{} workflow diverges into two tracks. The first (right arrows) generates crystallographic models of crystal cleavage and glide with respect to a plane {\bf P}. A list of separations $d$ is specified ({\it cleave}) and for each of these separations a second grid of glide vectors $\vec{\tau}$ may be enumerated ({\it shift}). Symmetrically equivalent translation grid points are tagged as such. Directories with input files for the VASP\cite{kresse1993ab,kresse1994ab,kresse1996efficiency,kresse1996efficient} first-principles electronic structure software package are then generated. Upon completion of static electronic structure calculations, a list of first-principles energies is available to parameterize a generalized cohesive zone model. An alternative to the imposed regular grid of translation vectors is to use the {\it mutate} step. 
With this approach, a single custom structure that has been shifted by an arbitrary vector $\vec{\tau}$ and separated by a custom value $d$ from its periodic image is created. A second track in the \mush{} workflow (left arrows) generates crystallographic models of twisted bicrystals. Here the user provides a target twist angle $\theta$ and a maximum number of two-dimensional unit cells for the super cell that will accommodate the twisted bicrystals. \mush{} next determines the Moir\'e lattice and then identifies the super cells of two bicrystals that best match a super cell of the Moir\'e lattice. A crystallographic model is output along with the actual twist angle $\theta+\Delta \theta$ and values of the strain order parameters $\eta_1$, $\eta_2$ and $\eta_3$. When twisting a pair of bicrystals within a unit cell constrained by periodic boundary conditions (including the $\vec{C}$ axis) two interfaces are necessarily introduced. One is the twist interface of interest, while the other is usually separated by a large slab of vacuum. For two-dimensional materials, the vacuum is not necessarily a drawback. For twist grain boundaries, however, the thicknesses of the twisted slabs should be sufficiently large such that the free surfaces in contact with vacuum do not affect the energy and electronic structure of the twist boundary. 
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{fig/flowchart.pdf} \caption{Flowchart of the possible \mush{} commands} \label{fig:flowchart} \end{figure} \section{Examples}\label{sec:examples} \begin{figure} \centering \subfloat[]{ \includegraphics[width=0.8\columnwidth]{fig/Al-FCC.uberstack.pdf} \label{fig:uberstack} } \hfill \subfloat[]{ \includegraphics[width=0.9\linewidth]{fig/al_surface_interpolated.pdf} \label{fig:al.gammasurf} } \hfill \subfloat[]{ \includegraphics[width=0.9\linewidth]{fig/al_d0_interpolated.pdf} \label{fig:al.d0surf} } \caption{(a) Energy versus separation of fcc Al as the crystal is separated between a pair of adjacent (111) planes as calculated using the super cell shown in \Cref{fig:slab}. The dashed lines represent UBER fits through the first-principles (DFT) data points. Separation curves for perfect fcc, a stacking fault, and a translation that places a pair of adjacent (111) planes directly on top of each other are shown in blue, orange, and green, respectively. (b) Energy of the same Al super cell as a function of glide parallel to the (111) plane evaluated at the $d_0$ separation for each glide vector. (c) The value of $d_0$ as a function of glide (relative to the equilibrium separation of fcc Al).} \label{fig:al_surfaces} \end{figure} \subsection{Cohesive zone models of simple metals} As an illustration of a generalized cohesive zone model, we consider cleavage and glide with respect to a (111) plane of fcc Al. \Cref{fig:uberstack} shows the energy of an Al crystal as it is cleaved along a pair of neighboring (111) planes for three different translational shifts $\vec{\tau}$. The points were calculated with density functional theory (DFT) within the generalized gradient approximation (GGA-PBE) using the VASP plane wave code \cite{kresse1993ab,kresse1994ab,kresse1996efficiency,kresse1996efficient,blochl1994projector,kresse1999ultrasoft}. 
The projector augmented wave (PAW) method \cite{blochl1994projector} was used with a plane-wave energy cutoff of 520 eV. K-points were generated using a fully automatic mesh with a length parameter $R_k=50$ \AA{}$^{-1}$. The UBER form of Eq. \ref{eq:xuber} was fit through the DFT points and is shown as dashed lines in \Cref{fig:uberstack}. As is clear from \Cref{fig:uberstack}, the UBER curve is able to fit the DFT data very well. The blue points in \Cref{fig:uberstack} reside on the energy versus separation curve for perfect fcc corresponding to $\vec{\tau}=0$. The difference in energy at $d_0$, where the curve has a minimum, and the energy for large separations corresponds to the surface energy $2\gamma$. The energy versus separation curve when separating pairs of (111) planes that form a stacking fault corresponding to $\vec{\tau} = 1/3\vec{A}+1/3\vec{B}$ is very similar, as is evident by the orange points in \Cref{fig:uberstack}, although the minimum is not as deep and the equilibrium spacing $d_0$ is slightly shifted to a larger distance. The energies of separation for a translation that puts atoms of adjacent (111) planes directly on top of each other (green points in \Cref{fig:uberstack}) are also very well described with an UBER curve. The minimum is at a much higher energy compared to other translations. By performing similar DFT calculations over a uniform grid of symmetrically distinct translation vectors, it becomes possible to express the $\vec{\tau}$ dependence of the adjustable parameters of the UBER curve as a Fourier series. \Cref{fig:al.gammasurf}, for example, shows the dependence of $2\gamma$ on the translation $\vec{\tau}$. It exhibits the periodicity of the (111) plane of fcc Al, with the global minima corresponding to perfect fcc and the other minima corresponding to stacking faults. \Cref{fig:al.d0surf} shows the dependence of the equilibrium separation, $d_0$, between two half crystals of fcc Al as a function of $\vec{\tau}$. 
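The qualitative features just described, a well of depth $2\gamma$ at $d_0$ and a decay to the separated limit, are built into the classic UBER relation, of which the xUBER of Eq. \ref{eq:xuber} is an extension (the $\alpha_{n}$ terms add flexibility to the polynomial prefactor). A minimal sketch of the classic form, with illustrative parameter values only (not the fitted Al values):

```python
import numpy as np

def uber(d, d0, gamma, lam):
    """Classic UBER: E(d) = -2*gamma*(1 + x)*exp(-x) with x = (d - d0)/lam.
    The minimum E(d0) = -2*gamma lies twice the surface energy below the
    fully separated limit E(d -> inf) = 0."""
    x = (d - d0) / lam
    return -2.0 * gamma * (1.0 + x) * np.exp(-x)

# Illustrative values only.
d0, gamma, lam = 2.3, 0.05, 0.6
d = np.linspace(d0 - 0.4, d0 + 10.0, 2000)
E = uber(d, d0, gamma, lam)
```

Fitting $d_0$, $\gamma$ and $\lambda$ to the DFT points for each symmetrically distinct $\vec{\tau}$ is what produces surfaces like \Cref{fig:al.gammasurf} and \Cref{fig:al.d0surf}.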
The $d_0$ surface looks very similar to the $2\gamma$ surface of \Cref{fig:al.gammasurf}, with the minimum in $d_0$ coinciding with the fcc stacking and the maximum in $d_0$ coinciding with a stacking for which a pair of adjacent (111) planes are directly on top of each other. \begin{figure} \centering \subfloat[]{ \includegraphics[width=0.7\linewidth]{fig/hcp.pdf} \label{fig:hcp_slip_planes} } \hfill \subfloat[]{ \includegraphics[width=0.9\linewidth]{fig/mg_pyramidal1_surface_interpolated.pdf} \label{fig:pyramidal.gammasurf} } \hfill \subfloat[]{ \includegraphics[width=0.9\linewidth]{fig/mg_prismatic_surface_interpolated.pdf} \label{fig:prismatic.gammasurf} } \caption{(a) Different slip planes of an hcp crystal. (b) The energy of hcp Mg as a function of glide parallel to the pyramidal plane evaluated at $d_0$ for each glide vector. (c) The energy of hcp Mg as a function of glide parallel to the prismatic plane evaluated at $d_0$.} \label{fig:Mg_surfaces} \end{figure} While slip in fcc predominantly occurs on (111) planes, there are often multiple slip planes in more complex crystal structures such as hcp. \Cref{fig:hcp_slip_planes} schematically shows three slip planes in hcp: basal slip, prismatic slip and pyramidal slip. Each plane has a different periodicity. \Cref{fig:pyramidal.gammasurf} and \cref{fig:prismatic.gammasurf} show the energy at fixed spacing $d$ for the pyramidal and prismatic slip planes of hcp Mg. The DFT method to calculate these energy surfaces was the same as that used for Al in \Cref{fig:al_surfaces}, except that a plane wave cutoff of 650 eV was used. \subsection{Crystallographic models of twisted two-dimensional materials: triangular lattices and honeycombs}\label{sec:honey} \mush{} facilitates the construction of crystallographic models of twisted two-dimensional materials. As described in \cref{sec:bicrystals}, most twist angles $\theta$ do not produce structures that have periodic boundary conditions. 
The imposition of periodic boundary conditions, therefore, requires an adjustment of the target twist angle $\theta$ by $\Delta \theta$ and some degree of strain within the twisted two-dimensional building blocks. We explore the variation in the error $\Delta \theta$ in the target twist angle and the strain within the twisted building blocks due to the imposition of periodic boundary conditions for a pair of triangular lattices. This is of relevance for two-dimensional materials such as MoS$_2$ and graphene since their two-dimensional lattices are triangular. Of interest is the variation of $\Delta \theta$ and strain with twist angle $\theta$ and maximum super cell size. \mush{} determines the best super cell for a twisted bicrystal with target twist angle $\theta$ that is smaller than a user specified maximum. The ``best'' super cell is defined as the super cell that minimizes $\sqrt{\lambda_1^2+\lambda_2^2}$, where $\lambda_1$ and $\lambda_2$ are the non-zero eigenvalues of the strain matrix. This strain metric is equal to zero for super cells that are commensurate with the lattices of the twisted bicrystal. The size of a super cell is measured in terms of the number of primitive two-dimensional unit cells of the fundamental slab building blocks. \Cref{fig:degreeerror} and \Cref{fig:strainerror} show $\Delta \theta$ and $\sqrt{\lambda_1^2+\lambda_2^2}$ as a function of $\theta$ for two scenarios. The error $\Delta \theta$ versus $\theta$ of \Cref{fig:angleerror0} is for super cells that were generated using the primitive Moir\'e lattice only. For small angles close to zero, the absolute error is small; for larger angles, however, the error $\Delta \theta$ can be very large. The black circles correspond to angles for which there is a commensurate super cell that can accommodate the twisted pair of bicrystals. 
As is evident in \Cref{fig:angleerror0}, the primitive Moir\'e lattice is not sufficiently large to identify the commensurate super cell for those special angles, as a large fraction of them have large errors $\Delta \theta$. Similarly, the strain for super cells generated using the primitive Moir\'e lattice is also large, as shown in \Cref{fig:strainerror0}. \Cref{fig:angleerror1000} plots $\Delta \theta$ versus $\theta$ as determined by considering {\it super cells} of the Moir\'e lattice to identify the optimal super cell of the twisted bicrystals. A cap of 1000 primitive unit cells was used for \Cref{fig:angleerror1000} and \Cref{fig:strainerror1000}. The error in the target angle $\Delta \theta$ is dramatically reduced and almost all special angles for which commensurate super cells exist now have an error $\Delta \theta$ equal to zero, indicating that nearly all commensurate super cells have been found. \begin{figure} \centering \subfloat[]{ \includegraphics[width=\linewidth]{fig/angle_error_0.pdf} \label{fig:angleerror0} } \subfloat[]{ \includegraphics[width=\linewidth]{fig/angle_error_1000.pdf} \label{fig:angleerror1000} } \caption{(a) The error $\Delta \theta$ in the target rotation angle when using the Moir\'e lattice to determine the cell of the twisted bicrystal. (b) By considering {\it super cells} of the Moir\'e lattice to identify the cell of the twisted bicrystal, it is possible to dramatically reduce the error $\Delta \theta$. A cap of 1000 triangular lattice sites was imposed when enumerating super cells of the Moir\'e lattice. 
The black circles correspond to twist angles for which commensurate super cells exist.} \label{fig:degreeerror} \end{figure} \begin{figure} \centering \subfloat[]{ \includegraphics[width=\linewidth]{fig/eigen_error_0.pdf} \label{fig:strainerror0} } \hfill \subfloat[]{ \includegraphics[width=\linewidth]{fig/eigen_error_1000.pdf} \label{fig:strainerror1000} } \caption{(a) The strain (as measured with $\sqrt{\lambda_{1}^2+\lambda_{2}^2}$, where $\lambda_1$ and $\lambda_2$ are the non-zero eigenvalues of the strain matrix) as a function of the target twist angle $\theta$ when using the Moir\'e lattice to generate the cell of the twisted pair of triangular lattices. (b) The strain when using {\it super cells} of the Moir\'e lattice to generate the cell of the twisted triangular lattice (a cap of 1000 triangular lattice sites was imposed). The black circles correspond to twist angles for which commensurate super cells exist.} \label{fig:strainerror} \end{figure} \section{Discussion and Conclusion}\label{sec:discuss} Two-dimensional defects in bulk crystals play an outsized role in determining the properties of many crystalline materials. The energies of two-dimensional extended defects are a crucial ingredient to a variety of meso-scale models of fracture and dislocations \cite{deshpande2002discrete, xie2006discrete,sills2013effect,koslowski2002phase,shen2003phase,shen2004incorporation,hunter2011influence,feng2018shearing,lu2000peierls,lu2005peierls}. The information of relevance for such models can be encapsulated in a generalized cohesive zone model that describes the energy of a pair of bicrystals as a function of the perpendicular separation and parallel glide of a pair of crystallographic planes. In this paper, we have described how such a generalized cohesive zone model can be formulated and how crystallographic models can be constructed for the first-principles electronic structure calculations that are needed to parameterize the cohesive zone model. 
The \mush{} code automates the construction of periodic crystallographic models of decohesion and glide. It can also be used to generate crystallographic models for surface calculations. However, with the exception of the simplest materials, most surfaces undergo surface reconstructions to eliminate dangling bonds. This requires an additional enumeration step \cite{thomas2010systematic}. Twisted two-dimensional materials are currently attracting much attention due to the promise of emergent electronic properties in such structures \cite{carr2020electronic}. We have described an approach to construct crystallographic models of twisted bilayers within periodic unit cells. Only a subset of twist angles generate bilayer structures that are periodic. For all other angles, the imposition of periodic boundary conditions results in an error in the target twist angle and some degree of strain within the constituent bilayers. \mush{} constructs these crystallographic models and quantifies the twist angle error and the strain. In the construction of cohesive zone models, a question often arises as to whether to allow for relaxations or not. When including relaxations, it is important to carefully extract the energy of homogeneous elastic relaxations of the adjacent slabs since the cohesive zone model should only describe the energy between the cleaved planes. These subtleties are discussed in great detail in \cite{enrique2017decohesion,van2004thermodynamics}. For simple metals, empirical evidence suggests that relaxations do not need to be explicitly taken into consideration when parameterizing a cohesive zone model for decohesion \cite{enrique2017decohesion,van2004thermodynamics}. Many applications require a generalized cohesive zone model for a multi-component solid. The cohesive zone model will then not only depend on the local concentration, but also on the local arrangement of different chemical species. 
This dependence can be accounted for with the cluster expansion approach \cite{sanchez1984generalized,van2018first}, as has been done in the context of hydrogen embrittlement \cite{van2004thermodynamics} and fracture in Li-ion battery electrode materials \cite{qi2012chemically}. Often cohesive zones must be treated as open systems to which mobile species can segregate and thereby alter cohesive properties. In these circumstances it is convenient to formulate cohesive zone models at constant chemical potential as opposed to constant concentration \cite{van2004thermodynamics,enrique2014solute,olsson2017intergranular}. A cluster expansion approach has also been used to describe the dependence of the $\gamma$ surface on composition and short-range ordering in refractory alloys \cite{natarajan2020}. In summary, \mush{} is a code that automates the construction of crystallographic models within periodic unit cells to enable the construction of cohesive zone models and the study of twisted bilayers of two-dimensional materials with first-principles electronic structure methods. \mush{} can also be used to construct models with highly distorted local environments that are representative of dislocations, grain boundaries and surfaces for the purposes of training machine-learned inter-atomic potentials. \section{Acknowledgement} The scientific work in this paper was supported by the National Science Foundation DMREF grant: DMR-1729166 “DMREF/GOALI: Integrated Computational Framework for Designing Dynamically Controlled Alloy-Oxide Heterostructures”. Software development was supported by the National Science Foundation, Grant No. OAC-1642433. Computational resources provided by the National Energy Research Scientific Computing Center (NERSC), supported by the Office of Science and US Department of Energy under Contract No. DE-AC02-05CH11231, are gratefully acknowledged, in addition to support from the Center for Scientific Computing from the CNSI, MRL, and NSF MRSEC (No. 
DMR-1720256). \section{Data Availability} The data used as an example for the application of \mush{} cannot be shared at this time due to time limitations.
\section{Introduction} The goal of explainable AI is to move beyond an exclusive focus on the outputs of neural networks, an approach which risks treating such networks as `black boxes' which, though reasonably well-understood at the level of their macro-architecture, are hard to explain because of their complex, non-linear structure \cite{Samek2018}. The broad goal of the present paper is to seek a better explanation of the behaviour of neural image caption generators \cite{Bernardi2016}. Such generators typically consist of a neural language model that is conditioned on the features extracted from an image using a convolutional neural network, with several possibilities available on how to do the conditioning \cite{Tanti2018}. The main question we address is how sensitive such generators actually are to the visual input, that is, to what extent the string generated by these models varies as a function of variation in the visual features extracted from the image. We address this using a sensitivity analysis \cite{Samek2018} and an analysis based on foils \cite{Shekhar2017}. In addressing this question, we hope to achieve a better understanding of the extent to which caption generation architectures succeed in grounding linguistic symbols in visual features\footnote{Code is available on \url{https://github.com/mtanti/quantifing-visual-information}.}. \section{Background} It is known that not all words in a sentence are given equal importance by a neural language model of the kind image captioning systems use \cite{Kadar2017}. Rather than measuring the importance of words, as was done in \cite{Kadar2017}, we would like to measure how important the image is at conditioning the language model to emit the different words of a caption during the generation process. This can shed light on the extent to which the generator is grounded in visual data and help to explain some of the model's output decisions. 
One way of making neural architectures more explainable is to examine their sensitivity with respect to their inputs \cite{Samek2018}. Such sensitivity analysis can be done by measuring the gradient of the output with respect to different parts of the input. In this paper, we conduct such an analysis, measuring the gradient of the output with respect to the input image feature vectors. A related approach is to compare the outcomes produced by a network when parts of the input are altered, or replaced by foils. This has been done in the image captioning domain, and has yielded datasets such as FOIL-COCO \cite{Shekhar2017}. In \cite{Shekhar2017}, the visual sensitivity of models was tested by replacing words in captions with foils and checking if models are able to detect whether a caption contains an incorrect word. The results showed that this is a hard task for many vision-language models, despite being trivial for humans. However, this task does not directly quantify the visual sensitivity of such models with respect to different parts of a caption. This is what we attempt to do in the second part of our analysis. \section{Data and methods} Our goal is to measure how much visual information is used in order to predict a particular word in a caption generator. To do this we make use of the data and models from \cite{Tanti2018}\footnote{See: \url{https://github.com/mtanti/where-image2}}, which examines four different neural caption generator architectures which are often found in the literature. These are illustrated in Fig. \ref{fig:architectures}. 
They differ mainly in the way the language model is conditioned on image features, as follows: \begin{itemize} \item {\em init-inject}: the image features are used as an initial hidden state vector of the RNN; \item {\em pre-inject}: the image features are used as the first input to the RNN; \item {\em par-inject}: the image features are included together with every word input to the RNN; and \item {\em merge}: the image features are concatenated with the RNN hidden state vector and fed to the softmax layer. \end{itemize} \begin{figure} \centering % \subfloat[ \label{fig:architecture_init} Init-inject ]{ \includegraphics[scale=0.5]{img/architecture_init.png} } % \qquad % \subfloat[ \label{fig:architecture_pre} Pre-inject ]{ \includegraphics[scale=0.5]{img/architecture_pre.png} } \subfloat[ \label{fig:architecture_par} Par-inject ]{ \includegraphics[scale=0.5]{img/architecture_par.png} } % \qquad % \subfloat[ \label{fig:architecture_merge} Merge ]{ \includegraphics[scale=0.5]{img/architecture_merge.png} } % \caption{ \label{fig:architectures} \small Different neural image captioning architectures. } \end{figure} In our experiments, each architecture uses a GRU as an RNN. The architectures were trained on MSCOCO \cite{Lin2014} which was obtained from the distribution provided by \cite{Karpathy2015}\footnote{See: \url{http://cs.stanford.edu/people/karpathy/deepimagesent/}}. The distributed datasets come with the images already converted into feature vectors using the penultimate layer of the VGG-16 CNN \cite{Simonyan2014}. The vocabulary consists of all words that occur at least five times in the training set. We run two sets of experiments to see how much visual information is retained by each architecture: sensitivity analysis and omission scoring, both of which are explained in detail below.
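The difference between these conditioning methods can be made concrete with a small sketch. The following toy NumPy model is our own illustration, not the paper's TensorFlow implementation; all dimensions, parameter names and the random initialization are invented. It implements a single-layer GRU and contrasts init-inject, where the image determines the initial hidden state, with merge, where the RNN never sees the image and the two modalities are only combined before the softmax.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, EMB, HID, VOCAB = 8, 5, 6, 10  # toy sizes, chosen arbitrarily

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Random, untrained GRU parameters (illustration only).
gru = {k: rng.normal(scale=0.3, size=(HID, EMB)) for k in ("Wz", "Wr", "Wh")}
gru.update({k: rng.normal(scale=0.3, size=(HID, HID)) for k in ("Uz", "Ur", "Uh")})

def gru_step(h, x):
    z = sigmoid(gru["Wz"] @ x + gru["Uz"] @ h)            # update gate
    r = sigmoid(gru["Wr"] @ x + gru["Ur"] @ h)            # reset gate
    cand = np.tanh(gru["Wh"] @ x + gru["Uh"] @ (r * h))   # candidate state
    return (1 - z) * h + z * cand

W_init = rng.normal(size=(HID, IMG))             # image -> initial state
W_out_inject = rng.normal(size=(VOCAB, HID))     # softmax weights, init-inject
W_out_merge = rng.normal(size=(VOCAB, HID + IMG))  # softmax weights, merge

prefix = [rng.normal(size=EMB) for _ in range(4)]  # embeddings of a caption prefix

def init_inject(img):
    h = np.tanh(W_init @ img)        # the image conditions the RNN itself
    for w in prefix:
        h = gru_step(h, w)
    return h, softmax(W_out_inject @ h)

def merge(img):
    h = np.zeros(HID)                # the RNN never sees the image
    for w in prefix:
        h = gru_step(h, w)
    multimodal = np.concatenate([h, img])  # image joins only at the output
    return h, softmax(W_out_merge @ multimodal)

img_a, img_b = rng.normal(size=IMG), rng.normal(size=IMG)
```

A direct consequence of the merge design is that its hidden state is identical for any two images, so all visual information must enter through the final concatenation layer, whereas in init-inject the hidden state itself carries (and can forget) visual information.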
\subsection{Sensitivity analysis} Sensitivity analysis involves measuring the gradient of a model's output with respect to its input in order to see how sensitive the current output is to different parts of the input \cite{Samek2018}. The more sensitive, the more important that part of the input is to produce the output. We use this technique to measure how sensitive the softmax layer of the caption generator is to the image at different time steps in the generation process. We do this by computing the partial derivative of the softmax output with respect to the input image vector. It is important to note that even though the image might only be input once as an initial state to the RNN, its influence on the output will not be the same at every time step. As we implemented our neural networks in TensorFlow, which does not currently allow for computing full Jacobian matrices efficiently, instead of finding the gradient of the whole softmax we only take the gradient of the probability of the next word in the caption. Measuring the gradient of this word allows us to infer what contribution the image made to the selection of this word during generation. Although the probability of the next word is a single number, its gradient has one partial derivative for every element of the image feature vector. We aggregate these partial derivatives by taking the mean of their absolute values. We take captions that were already generated by the same caption generator being analyzed. Each caption is fed back to the caption generator that generated it to re-predict the probability of the next word for every prefix of increasing length in the caption. We report the mean gradient for each time step aggregated over all corresponding time steps in captions of the same length. We also compare these gradients to the gradient with respect to the last word in the prefix (i.e. with respect to the preceding word, but not the image) in order to compare the model's sensitivity to linguistic context with its sensitivity to visual features.
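As a concrete illustration of the quantity being measured, the sketch below (our simplification, not the paper's model) uses a linear-softmax ``captioner'' so that the gradient of the next word's probability with respect to the image vector has a closed form, and aggregates it exactly as described above: the mean of the absolute partial derivatives. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
IMG, EMB, VOCAB = 6, 4, 8
A = rng.normal(size=(VOCAB, IMG))   # image -> logits (stand-in weights)
B = rng.normal(size=(VOCAB, EMB))   # previous word -> logits

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def next_word_probs(img, word):
    return softmax(A @ img + B @ word)

def image_sensitivity(img, word, t):
    """Mean |d p[t] / d img_i|: the aggregation described in the text.

    For p = softmax(z), dp_t/dz_j = p_t (delta_tj - p_j), so with
    z = A img + B word the gradient w.r.t. img is p_t (A[t] - p @ A).
    """
    p = next_word_probs(img, word)
    grad = p[t] * (A[t] - p @ A)    # analytic softmax gradient
    return np.mean(np.abs(grad)), grad

img = rng.normal(size=IMG)          # toy image feature vector
word = rng.normal(size=EMB)         # toy previous-word embedding
score, grad = image_sensitivity(img, word, t=3)
```

In the paper's setting the same score is computed through TensorFlow's automatic differentiation on the full model rather than through a closed form; only the aggregation step is identical.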
\subsection{Omission scoring} Omission scoring \cite{Kadar2017} measures changes in the model's output or hidden layers as some part of an input is removed. The more similar the hidden layer representation, the less important the removed input is. We use a similar technique to measure how important the image is to the representation. Of course, the image cannot be omitted from a caption generator, but it can be replaced by a different image, known as a foil. In image caption generators, excluding the merge architecture, the RNN hidden state vector at each time-step consists of a mixture of visual information and the preceding caption prefix. In the case of merge architectures, the same mixture is found in the layer that concatenates the image vector to the RNN hidden state vector. We call these mixed image-prefix vectors `multimodal vectors'. We take the multimodal vector of a caption generator and measure by how much it changes when a caption prefix is input together with a distractor (foil) image, as opposed to when the correct image is used with that same prefix. This is done after each time step in order to measure whether the distractor image affects the representation less and less over time. We take captions that were already generated by the caption generator. Each caption is fed back to the caption generator that generated it to re-predict the probability of the next word at every time-step. This is repeated with a distractor image in place of the correct one. We then compute the cosine distance between the multimodal vectors resulting from the correct and the distractor images. We report the mean cosine distance for each time step aggregated over all corresponding time steps in captions of the same length. In addition to multimodal vectors, we also compare the softmax layers at each time step with the test image and the foil, to assess the impact of the image change on the output probabilities. 
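The distance computations involved in these comparisons can be sketched generically as follows; the function names are ours, and the multimodal and softmax vectors being compared would come from a trained model.

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2) between probability vectors."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0            # 0 * log(0) is taken to be 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def pick_distractor(target, pool):
    """Foil image: the pool feature vector farthest (in cosine) from target."""
    return max(pool, key=lambda v: cosine_distance(target, v))
```

A comparison of a correct-image vector with its foil counterpart then reduces to `cosine_distance(multimodal_correct, multimodal_foil)` for hidden representations, and to either distance for the softmax outputs.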
Comparison of the softmax layer is done using both cosine distance and Jensen-Shannon divergence. To identify distractor images, we compare each image in the test set to the others, finding the one whose feature vector is furthest (in terms of cosine) from the correct one. \section{Results} The lengths of generated captions vary between 6 and 15 words. For the sake of brevity, we only report the results on captions of length 9, which is the most common length. The results follow the same pattern for captions of other lengths as well. Table~\ref{tbl:tags} shows the distribution of parts of speech at each of the 9 word positions in the captions; this sheds light on which words cause spikes in visual information usage. \begin{table}[h] \centering \begin{small} { \setlength{\tabcolsep}{0.25em} \begin{tabular}{c|cccccccccc} word & ADJ & ADP & ADV & CONJ & DET & NOUN & NUM & PRON & PRT & VERB \\ \hline 0 & & & & & \textbf{99.8\%} & & 0.4\% & & & \\ 1 & 22.8\% & & 0.2\% & & & \textbf{77.1\%} & & & & 0.1\% \\ 2 & 1.5\% & \textbf{34.1\%} & 0.1\% & 7.3\% & 0.7\% & 26.2\% & & & & 30.4\% \\ 3 & 7.5\% & \textbf{30.6\%} & 0.1\% & 0.5\% & 9.9\% & 27.5\% & 0.1\% & 0.1\% & 1.3\% & 22.6\% \\ 4 & 3.8\% & 13.6\% & 0.1\% & 1.3\% & \textbf{33.6\%} & 23.0\% & 0.1\% & 0.2\% & 1.1\% & 23.3\% \\ 5 & 11.8\% & 16.1\% & 0.1\% & 0.5\% & 6.7\% & \textbf{52.0\%} & 0.1\% & & 2.2\% & 10.5\% \\ 6 & 0.5\% & \textbf{51.4\%} & & 4.5\% & 17.0\% & 9.3\% & 0.1\% & 0.1\% & 13.7\% & 3.5\% \\ 7 & 5.7\% & 7.1\% & & 1.0\% & \textbf{68.8\%} & 13.3\% & & 2.4\% & 0.1\% & 1.7\% \\ 8 & 8.5\% & 0.1\% & 0.1\% & & & \textbf{88.7\%} & & 2.5\% & & 0.2\% \\ \end{tabular} } \end{small} \caption{ \small Part of speech tags found at different positions in all 9 word captions. Since different architectures generate different captions, the percentages are averaged over all the architectures. Maximum probability per word position is in bold. 
} \label{tbl:tags} \end{table} \begin{figure} \centering \subfloat[ \label{fig:results_sensitivity_image} Sensitivity of the next word's probability with respect to the image. ]{ \includegraphics[scale=0.6]{img/results_sensitivity_image.png} } \quad \subfloat[ \label{fig:results_sensitivity_prevword} Sensitivity of the next word's probability with respect to the previous word. ]{ \includegraphics[scale=0.6]{img/results_sensitivity_prevword.png} } \caption{ \label{fig:results_sensitivity} \small Sensitivity analysis of 9-word captions (plus END token). Note that the previous word for position 0 is the START token and position 9 is the END token. } \end{figure} The results for the sensitivity analysis are shown in Fig.~\ref{fig:results_sensitivity}. It is clear that certain word positions are more sensitive to the image than others. Irrespective of architecture, there are substantial peaks in Fig.~\ref{fig:results_sensitivity_image} at positions 1 and 8, both of which are predominantly nouns. It could be argued that the gradient can be used to detect visual words in captions, that is, nouns referring to objects in the pictures. Par-inject has a consistently low gradient throughout the caption, which probably reflects the network's tendency not to retain excessive visual information, since the same image is input at every time step. Turning to Fig.~\ref{fig:results_sensitivity_prevword}, the output is much more sensitive to the previous word than to the image, by an order of magnitude. The merge architecture has an upward trend in sensitivity to the previous word as the caption prefix gets longer, whilst the other architectures are somewhat more stable. This could be because in the merge architecture, which does not mix visual features directly in the RNN, there is more memory allocated in the RNN to focus on the previous word. Although nouns are more frequent at position 8 than at position 1, there is less sensitivity to the image there, across all architectures.
This happens even in the merge architecture, which does not include image features in the RNN hidden state. Hence, this downward trend in gradient is likely due to the caption prefix becoming more predictive as it gets longer, progressively reducing the importance of the image. Although more sensitive than par-inject, init-inject has a much steeper decline than merge and pre-inject, suggesting that something else is at play apart from prefix information content. One possibility is that image information is being lost by the RNN as the prefix gets longer. To investigate this, we can look at the results for the omission scores, which are shown in Fig.~\ref{fig:results_convergence}. \begin{figure} \centering \subfloat[ \label{fig:results_convergence_multimodal_min} Omission scores: multimodal vector (cosine distance). ]{ \includegraphics[scale=0.5]{img/results_convergence_multimodal_min.png} } \subfloat[ \label{fig:results_convergence_output_min} Omission scores: softmax layer (cosine distance). ]{ \includegraphics[scale=0.5]{img/results_convergence_output_min.png} } \quad \subfloat[ \label{fig:results_convergence_outputjsd_min} Omission scores: softmax layer (Jensen-Shannon divergence). ]{ \includegraphics[scale=0.5]{img/results_convergence_outputjsd_min.png} } \caption{ \label{fig:results_convergence} \small Results for omission scoring of all 9-word long captions (plus the END token). Note that the previous word for position 0 is the START token and position 9 is the END token. } \end{figure} Again, we see peaks at word positions predominantly occupied by `visual' words (nouns). The multimodal vector of the merge architecture seems to be an exception. This is because merge's multimodal vector concatenates separate image and prefix vectors, meaning that the image representation is unaffected by the prefix.
The other architectures mix the image and prefix together in the RNN's hidden state vector, requiring the RNN to use memory for both image and prefix, thereby causing visual information to be degraded. Note that greater distance between multimodal vectors is unexpected in merge: since image features are concatenated with the RNN hidden state, the RNN part of the multimodal vector is identical in both foil and correct multimodal vectors, which should make the two vectors more similar, not less. The softmax, on the other hand, changes very similarly for all architectures, and this is reflected both in cosine distance and Jensen-Shannon divergence (compare Fig.~\ref{fig:results_convergence_output_min} and \ref{fig:results_convergence_outputjsd_min}). Merge is slightly more influenced by the image when determining the last noun in the caption, with par-inject being the least influenced throughout. The fact that the merge architecture has a very different multimodal vector distance from the other architectures but then ends up with a similar output distance merits further investigation. We investigate the discrepancy in the results for the merge architecture -- a relatively flat curve for the multimodal vector versus peaks at the output layer -- by repeating the analysis for the logits vector, that is, the output layer prior to applying the softmax activation function. As shown in Fig.~\ref{fig:results_stats_convergence_logits_min}, this results in curves that are similar to those shown in Fig. \ref{fig:results_convergence_multimodal_min}. \begin{figure} \centering \subfloat[ \label{fig:results_stats_convergence_logits_min} \small Omission scores: logits (cosine distance). ]{ \includegraphics[scale=0.5]{img/results_convergence_logits_min.png} } \quad \subfloat[ \label{fig:results_stats_numneglogits} \small Number of negative numbers in the logits vector.
]{ \includegraphics[scale=0.5]{img/results_stats_numneglogits.png} } \quad \caption{ \label{fig:results_stats} \small Further analysis of logits (output layer without softmax) of all 9-word long captions (plus the END token). Note that the previous word for position 0 is the START token and position 9 is the END token. } \end{figure} The logits vectors resulting from the original and foil images are much more similar to each other than the multimodal vector for all architectures (peaking at around 0.15 instead of 0.7), but the merge architecture still evinces higher distance between test and foil conditions, and greater stability. The fact that logits are similarly affected by the test-foil discrepancy as the multimodal vectors (compare Fig. \ref{fig:results_convergence_multimodal_min} and \ref{fig:results_stats_convergence_logits_min}) suggests that the peaks observed at the output layer (Fig. \ref{fig:results_convergence_output_min} and \ref{fig:results_convergence_outputjsd_min}) arise from the softmax function itself. One drastic change that the softmax function performs on the logits vector is the replacement of negative numbers with very small positive numbers. In fact we have found that merge uses fewer negative numbers in the logits vector than other architectures (about 94\% rather than 97\%\footnote{Most logits will be negative since most words will have small probabilities in the softmax.}) as is shown in Figure~\ref{fig:results_stats_numneglogits}, which means that the extra cosine distance between the logits vectors resulting from the original and foil images is probably due to a larger number of elements with opposite signs which, after softmax is applied, become positive and hence more similar. 
This, coupled with the fact that the output probabilities of any trained caption generator should be similar (otherwise they would not be describing the same image similarly) gives at least a partial explanation for why the outputs in all architectures change similarly when a test image is replaced with the same foil image. \section{Conclusion} Caption generators use visual information less and less as the caption is generated, although the amount of visual sensitivity is highly dependent on the part of speech of the word to generate and on the length of the prefix generated so far. This has two implications. First, as a caption gets longer, linguistic context becomes increasingly predictive of the next word in the caption, and the image is less crucial. An additional factor, in the case of inject architectures, is that the RNN hidden state stores both visual and linguistic features, making it harder to remember visual features as captions get longer. The evidence for this is that the multimodal vector and logits of the merge architecture change more when the original image to a caption is replaced with a different image, compared to inject architectures. Second, the peaks observed with nouns in the sensitivity analysis show that image features as currently used in standard captioning models are highly tuned to objects, but far less so to relational predicates such as prepositions or verbs. For future work we would like to attempt to extract visual words from captions based on how sensitive to the image a trained caption generator is at different word positions in the caption. We would also like to use these techniques to analyze state of the art caption generators, including those with attention, in an effort to deepen our explanation of what makes a good caption generator. \section*{Acknowledgments} {\small The research in this paper is partially funded by the Endeavour Scholarship Scheme (Malta). 
Scholarships are part-financed by the European Union - European Social Fund (ESF) - Operational Programme II – Cohesion Policy 2014-2020 “Investing in human capital to create more opportunities and promote the well-being of society”.} \clearpage \bibliographystyle{plain}
\section{Introduction}\label{sec:intro} Many problems in number theory are related to central values of L-functions associated to modular forms, and central values of twisted L-functions are tools used to make progress on these problems. In this paper we focus our attention on paramodular forms of level $N$ and the spin L-function associated to them. In the 1980s a conjecture was formulated by B\"ocherer \cite{Bocherer} that relates the coefficients of a Siegel modular form $F$ of degree 2 and the central value of the spin L-function associated to $F$. One fixes a discriminant $D$ and, roughly speaking, adds up all the coefficients of $F$ indexed by quadratic forms of discriminant $D$. One also computes the central value of the spin L-function twisted by the quadratic character $\chi_D$. The conjecture asserts that the central value, up to a constant that depends only on $F$ (and not on $D$) and a suitable normalization, is the square of the sum of coefficients. In B\"ocherer's original paper \cite{Bocherer} it was proved for $F$ that are Saito-Kurokawa lifts and later B\"ocherer and Schulze-Pillot \cite{BochererSchulzePillot} proved the conjecture in the case when $F$ is a Yoshida lift. Kohnen and Kuss \cite{KohnenKuss} gave numerical evidence in the case when $F$ is of level one, degree 2 and is not a Saito-Kurokawa lift (these computations have recently been extended by Raum \cite{Raum}). A much more general approach to the conjecture has been pursued by Furusawa, Martin and Shalika \cite{Furusawa,FurusawaMartin,FurusawaShalika2,FurusawaShalika3,FurusawaShalika1}. In \cite{RyanTornaria} we investigated an analogous conjecture in the setting of paramodular forms and our goal here is to state some generalizations of the conjecture and to point out some subtleties in the statement of the original conjecture. 
Paramodular forms have been gaining a great deal of attention because, for example, the most explicit analogue of Taniyama-Shimura for abelian surfaces has been formulated by Brumer and Kramer~\cite{BrumerKramer} and been verified computationally by Poor and Yuen~\cite{PoorYuen}. We summarize the main results in \cite{RyanTornaria}. Fix a paramodular eigenform $F$ of level $N$ with Fourier coefficient $a(T;F)$ for each positive semidefinite quadratic form $T$. One defines \[ A(D) = A_F(D) :=\frac{1}{2}\sum_{\{T>0\;:\;\disc T=D\}/\Gamma_0(N)}\frac{a(T;F)}{\varepsilon(T)} \] where $\varepsilon(T):=\# \{U\in\Gamma_0(N):T[U]=T\}$. We provided evidence for a conjecture, which can be considered a generalization of Waldspurger's theorem \cite{Waldspurger}. We state a different version of that conjecture now: \begin{conja}[Paramodular B\"{o}cherer's Conjecture] Let $p$ be prime and let $F\in S^k(\para{p})^+$ be a paramodular Hecke eigenform of even weight $k$. Then, for fundamental discriminants $D<0$ we have \begin{equation}\label{eq:bocherer} A_F(D)^2 = \alpha_D\, C_F \, L(F,1/2,\chi_D)\,\abs{D}^{k-1} \end{equation} where $C_F$ is a nonnegative constant that depends only on $F$, and where $\alpha_D=1+\kro{D}{p}$. Moreover, when $F$ is a Gritsenko lift we have $C_F>0$, and when $F$ is not a lift, we have $C_F=0$ if and only if $L(F,1/2)=0$. \end{conja} This conjecture corrects a defect of the corresponding conjecture in \cite{RyanTornaria}, the defect being that it might be wrong in the case of nonlifts since the conjecture in \cite{RyanTornaria} requires the constant be positive. It is a theorem in \cite{RyanTornaria} that $C_F>0$ when $F$ is a Gritsenko lift and we know that $C_F>0$ in all the examples of nonlifts that we computed. A minor difference is that the formula here uses the factor $\alpha_D$ instead of the factor $\star$ in the previous version of the conjecture. 
Though $\alpha_D$ here can vanish (while $\star$ could not), this does not affect the conjecture since $\kro{D}{p}=-1$ implies $A(D)$ is an empty sum and $L(F,1/2,\chi_D)=0$ because in this case the functional equation has sign $-1$. However, this factor $\alpha_D$ will be essential in the case of composite levels. \begin{remark}We make the simple observation that if Conjecture~A is true and if $F$ is a nonlift for which $L(F,1/2)=0$, then the average $A(D)$ of its Fourier coefficients would be zero for all $D$. This is a step in characterizing the kinds of forms that might violate the conjecture as stated in \cite{RyanTornaria}. \end{remark} \begin{remark}Our Conjecture~A has a noteworthy difference from the original version of the conjecture first stated by B\"ocherer in \cite{Bocherer}. Namely, in his conjecture the constant $C_F$ is required to be positive while we only require that ours be nonnegative, though we do characterize exactly when the constant is zero. This was subsequently addressed in a limited way in \cite{BochererSchulzePillot}, where the conjecture for Siegel modular forms of level $N$ was considered. \end{remark} \begin{remark} In order to verify Conjecture~A in a case where $F$ is a nonlift and $L(F,1/2)=0$, one would have to compute the Fourier coefficients of a paramodular form whose L-function vanishes to even order greater that zero. Here we note that if the Paramodular Conjecture holds, there should be such a paramodular of, for example, levels 3319, 3391, 3571, 4021, 4673, 5113, 5209, 5449, 5501, 5599 since there is an hyperelliptic curve for each of these conductors whose Hasse-Weil L-function vanishes to even order at least two~\cite{Stoll}. This last assertion about the order of vanishing was verified by directly computing the central value of these L-functions using $\text{lcalc}$ \cite{lcalc}. 
\end{remark} \subsection{Two surprises} After carrying out the computations used to verify the conjecture in \cite{RyanTornaria}, we made two observations that we describe now. We asked what happens if one does not restrict the computations to forms in the plus space. To do this, we first noticed that the two sides of \eqref{eq:bocherer} do indeed make sense. We also carried out a simple computation \cite[Section 4]{RyanTornaria} that shows the averages $A(D)$ vanish when a form is in the minus space. Undaunted, we carried out the computations and tabulated the following data for $F_{587}^{-}$ (see Table~\ref{tbl:egs} for a list of all the examples considered in this paper). \begin{center} \begin{tabular}{c|ccc ccc ccc ccc} $D$ & -4 & -7 & -31 & -40 & -43 & -47 \\\hline $L_D/L_{-3}$ & 1.0 & 1.0 & 4.0 & 9.0 & 144.0 & 1.0 \end{tabular} \end{center} Here $L_D := L(F_{587}^{-},1/2,\chi_D)\cdot\abs{D}$ and the table shows fundamental discriminants for which $\kro{D}{587} = -1$. The obvious thing to notice is that the numbers in the table appear to be squares, and so the natural question to ask is: squares of what? This first experiment was a natural extension of our previous work in \cite{RyanTornaria}, as we had a paramodular form with which to compute the right-hand side of \eqref{eq:bocherer} and both sides of the equation make sense. Emboldened by the results of the first experiment, we decided to change another hypothesis in the conjecture: we decided to look at the case when $D>0$. This is somewhat unnatural as the sum $A(D)$ is an empty sum in this case. Nevertheless we get the following data for $F_{277}$. \begin{center} \begin{tabular}{c|ccc ccc ccc ccc} $D$ & 12 & 13 & 21 & 28 & 29 & 40 \\\hline $L_D/L_{1}$ & 225.0 & 225.0 & 225.0 & 225.0 & 2025.0 & 900.0 \end{tabular} \end{center} Here $L_D := L(F_{277},1/2,\chi_D)\cdot\abs{D}$ and $\kro{D}{277}=+1$. Again, these seem to be squares, but squares of what?
(Also, the observant reader may have noticed that all these squares are divisible by $15^2$. See Section~\ref{sec:torsion} for more about this.) In Section~\ref{sec:conj} we will show how to define, for an auxiliary discriminant $\ell$, a twisted average $B_\ell(D)$. When $\ell$ is properly chosen, the squares of $B_\ell(D)$ are exactly the squares we see in the previous two tables. Given a discriminant $\Delta$, we put \[ \alpha_\Delta := \prod_{p\mid N} \left(1+\kro{\Delta_0}{p}\right) \;, \] where $\Delta_0$ is the fundamental discriminant associated to $\Delta$. \begin{conjb} Let $N$ be squarefree. Suppose $F\in S^k(\para{N})$ is a Hecke eigenform and not a Gritsenko lift. Let $\ell$ and $D$ be fundamental discriminants such that $\ell D<0$. Then \begin{equation*} B_{\ell,F}(D)^2= \alpha_{\ell D} \, C_{\ell,F} \, L(F,1/2,\chi_D) \, \abs{D}^{k-1} \;, \end{equation*} where $C_{\ell,F}$ is a constant independent of $D$. Moreover, $C_{\ell,F}=0$ if and only if $L(F,1/2,\chi_\ell)=0$. \end{conjb} The notation in this conjecture is further explained in Section~\ref{sec:conj} but the analogy with Conjecture~A will be made clear now. First, note that $B_\ell(D)$ is a twisted average of the Fourier coefficients of $F$ indexed by quadratic forms of discriminant $D$. Essentially, if $k$ is even, $N$ is prime, and $\ell=1$ then $B_\ell(D)=\abs{A(D)}$, so we recover Conjecture~A. In a later section, we are interested in verifying Conjecture~B in the case of nonlifts. To do this, we attempt to understand the constant $C_{\ell,F}$ a little better. In Conjecture~B, one can think of the discriminant $\ell$ as being fixed. In this next conjecture, we think of it as a parameter. \begin{conjc} Let $N$ be squarefree. Suppose $F\in S^k(\para{N})$ is a Hecke eigenform and not a Gritsenko lift. Let $\ell$ and $D$ be fundamental discriminants such that $\ell D<0$.
Then \begin{equation*} B_{\ell,F}(D)^2= \alpha_{\ell D} \, k_F \, L(F,1/2,\chi_\ell) \, L(F,1/2,\chi_D) \, \abs{D\ell}^{k-1} \end{equation*} for some positive constant $k_F$ independent of $\ell$ and $D$. \end{conjc} This gives us a very explicit statement of a conjecture for forms that are not Gritsenko lifts. It is this formula that we verify in Section~\ref{sec:nonlifts}. We observe that Conjecture~C implies Conjecture~B by letting $C_{\ell,F}=k_F \, L(F,1/2,\chi_\ell)\,\abs{\ell}^{k-1}$. We note that when $F$ is a Gritsenko lift the formula of Conjecture~B is valid in the case $\ell=1$ with $C_{\ell,F}>0$, as shown in Theorem~\ref{thm:grit} below; the formula of Conjecture~C is valid provided $\ell\neq 1$ and $D\neq 1$, but uninteresting, with both sides being zero for trivial reasons (see Proposition~\ref{prop:avg_lift} and Proposition~\ref{prop:centralvalue_lift}). \subsection{Notation} The main objects of study in this paper are paramodular forms of level $N$ and their L-functions. Suppose $R$ is a commutative ring with identity. The symplectic group is $\Sp{4}{R}:=\{x\in \GL{4}{R}: x' J_2 x = J_2\}$, where the transpose of matrix $x$ is denoted $x'$ and for the $n \times n$ identity matrix $I_n$ we set $J_n = \left(\begin{smallmatrix} 0 & I_n\\-I_n&0 \end{smallmatrix}\right)$. When $R\subset \R$, the group of symplectic similitudes is $\GSp{4}{R} := \{x\in\GL{4}{R}: \exists \mu\in\R_{>0}: x' J_2 x = \mu J_2\}$. The paramodular group of level $N$ is \begin{equation*} \para{N} := \Sp{4}{\mathbb{Q}}\cap \begin{pmatrix} * &* & */N &*\\ N* & * &*& *\\ N*& N*& * & N*\\ N* & * & * & * \end{pmatrix}, \text{ where $*\in\Z$.} \end{equation*} Paramodular forms of degree 2, level $N$ and weight $k$ are modular forms with respect to the group $\para{N}$. We denote the space of such modular forms by $M^k(\para{N})$ and the space of cuspforms by $S^k(\para{N})$.
The space $S^k(\para{N})$ can be split into a plus space and a minus space according to the action of the Atkin-Lehner operator $\mu_N$: in particular, $S^k(\para{N})^\pm =\{f\in S^k(\para{N}): f\mid \mu_N = \pm f\}$. Every $F\in M^k(\para{N})$ has a Fourier expansion of the form \[ F(Z) = \sum_{T=[Na,b,c]\in \mathcal{Q}_N} a(T;F)\,q^{Na}\,\zeta^b\,{q'}^{c} \] where $q := e^{2\pi i z}$, $q':=e^{2\pi i z'}$ ($z,z'\in \H_1$), $\zeta := e^{2 \pi i \tau}$ ($\tau\in\C$) and \[ \mathcal{Q}_N := \left\{[Na,b,c] \geq 0 \;:\; a,b,c\in\Z \right\}; \] here we use Gauss's notation for binary quadratic forms. We will want to decompose $\mathcal{Q}_N$ by discriminant $D< 0$ so we also define \[ \mathcal{Q}_{N,D} = \left\{ T\in \mathcal{Q}_N: \disc T = D\right\} . \] This is useful, for example, so that we can write \[ A_F(D):=\frac{1}{2}\sum_{T\in \mathcal{Q}_{N,D}/\Gamma_0(N)}\frac{a(T;F)}{\varepsilon(T)} \;. \] For $F\in S^k(\para{N})$, we have $a(T[U];F)=a(T;F)$ for every $U\in\Gamma_0(N)$, where $\Gamma_0(N)$ is the congruence subgroup of $\SL$ with lower left-hand entry congruent to 0 mod $N$, and $a(T[\smat{1&0\\0&-1}];F)=(-1)^k\,a(T;F)$. Moreover, cusp forms are supported on the positive definite matrices in $\mathcal{Q}_N$. Suppose we are given a paramodular form $F\in S^k(\para{N})$ so that for all Hecke operators $T(n)$, $F|T(n) = \lambda_{F,n}F=\lambda_n F$. Then we can define the spin L-series by the Euler product \begin{equation*}\label{eq:spin} L(F,s) := \prod_{\text{$q$ prime}} L_q\bigl(q^{-s-k+3/2}\bigr)^{-1}, \end{equation*} where, for $q\nmid N$, the local Euler factors are given by \[ L_q(X) := 1 - \lambda_q X + (\lambda_q^2-\lambda_{q^2}-q^{2k-4}) X^2 - \lambda_q q^{2k-3} X^3 + q^{4k-6} X^4 ; \] the local factors for $q\mid N$ have a similar form of lower degree. The Paramodular Conjecture \cite{BrumerKramer,PoorYuen} asserts that the L-function of a para\-modular form is the same as the L-function of an associated abelian surface.
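For a good prime $q$, the local spin factor above is determined by the two Hecke eigenvalues $\lambda_q$ and $\lambda_{q^2}$ alone. The following sketch (a helper of our own, not from the paper; the eigenvalues used in the example are arbitrary) lists its coefficients $c_0,\dots,c_4$, which satisfy the symmetry $c_{4-i}=q^{(2k-3)(2-i)}\,c_i$ forced by the functional equation of the local factor.

```python
def spin_local_factor(q, k, lam_q, lam_q2):
    """Coefficients [c0, ..., c4] of the degree-4 spin Euler factor L_q(X)
    at a prime q not dividing the level, for weight k, from the Hecke
    eigenvalues lam_q (of T(q)) and lam_q2 (of T(q^2))."""
    return [
        1,
        -lam_q,
        lam_q ** 2 - lam_q2 - q ** (2 * k - 4),
        -lam_q * q ** (2 * k - 3),
        q ** (4 * k - 6),
    ]
```

The palindromic relations $c_4=q^{2(2k-3)}c_0$ and $c_3=q^{2k-3}c_1$ hold identically in $\lambda_q$ and $\lambda_{q^2}$, which gives a quick consistency check on any table of eigenvalues.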
In all the examples we consider, the paramodular forms have corresponding abelian surfaces isogenous to Jacobians of hyperelliptic curves that can be found in tables of Stoll \cite{Stoll}; thus we compute hyperelliptic curve L-functions when we carry out our computations. A table in \cite{Dokchitser} summarizes the data that we use to write down the functional equation of the L-function of a hyperelliptic curve: \begin{equation*} L^*(F,s) = \left(\frac{\sqrt{N}}{4\pi^2}\right)^s\Gamma(s+1/2)\Gamma(s+1/2)L(F,s), \end{equation*} so that conjecturally \[ L^*(F,s) = \epsilon\, L^*(F,1-s), \] when $F\in S^2(\para{N})^\epsilon$. Let $D$ be a fundamental discriminant, and denote by $\chi_D$ the unique quadratic character of conductor $D$. For the spin L-series $L(F,s) = \sum_{n\geq 1} a(n)\,n^{-s}$ of a paramodular form $F$, we define the quadratic twist \[ L(F,s,\chi_D) := \sum_{n \geq 1} \chi_D(n)\,a(n)\,n^{-s}, \] which is conjectured to have an analytic continuation and satisfy a functional equation. Suppose $N$ is squarefree, let $N'=ND^4/\gcd(N,D)$, and define \[ L^*(F,s,\chi_D) :=\left(\frac{\sqrt{N'}}{4\pi^2}\right)^s\Gamma(s+1/2)\,\Gamma(s+1/2)\,L(F,s,\chi_D) \] so that, assuming standard conjectures, \[ L^*(F,s,\chi_D)=\epsilon'\,L^*(F,1-s,\chi_D). \] The global root number $\epsilon'$ of the functional equation for $L(F,s,\chi_D)$ is given in terms of the local root numbers $\epsilon_p$ of $L(F,s)$ by the following lemma \cite{SchmidtNotes}. \begin{lemma} Let $F\in S^k(\para{N})$ be a Hecke eigenform, with $N$ squarefree. Denote the local root numbers of, respectively, $L(F,s,\chi_D)$ and $L(F,s)$, as follows: $\epsilon'=\prod_{p\leq \infty} \epsilon_p'$, and $\epsilon=\prod_{p\leq \infty} \epsilon_p$. Then \begin{enumerate} \item At the infinite place, $\epsilon_\infty' =\epsilon_\infty = (-1)^k$. \item Assume $p\nmid N$, then $\epsilon_p'=\epsilon_p=+1$. \item Assume $p\mid N$ and $p\mid D$, then $\epsilon_p'=+1$.
\item Assume $p\mid N$ and $p\nmid D$, then $\epsilon_p'=\chi_D(p) \epsilon_p$. \end{enumerate} In particular, if $N_0=N/\gcd(N,D)$, \[ \epsilon' = \epsilon \cdot \chi_D(N_0) \cdot \prod_{p\mid\gcd(D,N)} \epsilon_p \,. \] \end{lemma} \section{Generalizations of the Paramodular B\"ocherer's Conjecture}\label{sec:conj} In this section, we motivate Conjectures~B and C. We do so by describing what happens for particular $F$ that are not Gritsenko lifts. We place particular emphasis on the transition from the hypotheses in Conjecture~A (namely, $F$ in the plus-space, of prime level and even weight) to Conjectures~B and C, which have no such hypotheses. \subsection{A simple case} For $F=F_{249}$, the unique Hecke eigenform of weight 2 and level $249=3\cdot 83$, we have $A(D)=0$ for all $D$ (see Lemma~\ref{lem:AD}, below) since its eigenvalues under the Atkin-Lehner operators $\mu_3$ and $\mu_{83}$ are $\epsilon_3=\epsilon_{83}=-1$. However, we have \begin{center} \begin{tabular}{c|ccc ccc ccc} $D$ & -7 & -8 & -20 & -31 & -35 & -40 & -47 & -56 & -71 \\\hline $L_D/L_{-4}$ & 1.0 & 1.0 & 4.0 & 1.0 & 16.0 & 4.0 & 1.0 & 4.0 & 0.0 \end{tabular} \end{center} where $L_D := L(F_{249},1/2,\chi_D)\cdot\abs{D}$ and $\kro{D}{249}=+1$. In this and the next section we will show where the squares in this table come from. Before we do that, we show that for $F_{249}$ (as well as $F_{587}^-$ and $F_{713}^-$) we really do get $A(D)=0$ for all $D$. \begin{lemma}\label{lem:AD} Let $F$ be a paramodular form of weight $k$ and level $N$. Assume $F$ is an eigenform under the Atkin-Lehner operators $\mu_p$ for every $p\mid N$, so that $F\!\mid\!\mu_p = \epsilon_p\,F$. If $\epsilon_p=-1$ for any $p\mid N$ or if $k$ is odd, then $A_F(D)=0$ for all $D$. \end{lemma} \begin{proof} For $N'\parallel N$, one can define an involution $W_{N'}$ over the set $\mathcal{Q}_N/\Gamma_0(N)$ (see \cite[p. 507]{GrossKohnenZagier}).
This involution is related to the Atkin-Lehner operators in the following way: \[ a(W_{p^i}(T);F) = a(T;F\!\mid\!\mu_p) = \epsilon_p\,a(T;F) \] where $p^i$ is the largest power of $p$ dividing $N$. Taking the sum over all classes $T\in\mathcal{Q}_{N,D}/\Gamma_0(N)$ shows that $A(D) = \epsilon_p\,A(D)$, and it follows that $A(D)=0$ if $\epsilon_p=-1$. The case of odd $k$ is similar using $a(T[\smat{1&0\\0&-1}];F)=(-1)^k\,a(T;F)$. \end{proof} We now show how to define a more refined average $B(D)$ of the coefficients of $F$, to which Lemma~\ref{lem:AD} does not apply. In order to do that, we further decompose $\mathcal{Q}_{N,D}$ as follows. Note that for any $T=[Na,b,c]\in\mathcal{Q}_{N,D}$ we have $b^2\equiv D\pmod{4N}$ and we can thus define \[ R_D := \{ \rho\mod 2N: \rho^2\equiv D\pmod{4N}\}. \] For each $\rho\in R_D$ we set \[ \mathcal{Q}_{N,D,\rho} := \{ T=[Na,b,c] \in \mathcal{Q}_{N,D} : b\equiv \rho\pmod{2N} \}. \] We observe that $\mathcal{Q}_{N,D}$ is the disjoint union of $\mathcal{Q}_{N,D,\rho}$ for $\rho\in R_D$. Now for each $\rho\in R_D$ we put \[ B(D, \rho)=B_F(D,\rho):=\sum_{T\in\mathcal{Q}_{N,D,\rho}/\Gamma_0(N)}\frac{a(T;F)}{\varepsilon(T)}. \] \begin{lemma}\label{lem:ADandBD} We note the following: \begin{enumerate} \item $A_F(D)=\frac12\,\sum_{\rho\in R_D} B_F(D,\rho)$, \item $B_F(D,-\rho) = (-1)^k \, B_F(D,\rho)$, and \item $\abs{B_F(D,\rho)}$ is independent of $\rho$. \end{enumerate} \end{lemma} \begin{proof} The first statement is obvious, and the second follows from the fact that $a(T[\smat{1&0\\0&-1}];F)=(-1)^k\,a(T;F)$. The last statement follows by noting that the Atkin-Lehner involutions $W_{N'}$ mentioned in the proof of Lemma~\ref{lem:AD} transitively permute the sets $\mathcal{Q}_{N,D,\rho}$. \end{proof} Now we can finally define the new average: \[ B(D) = B_F(D) := \frac12\,\sum_{\rho\in R_D} \abs{B(D,\rho)}. \] We observe by Lemma~\ref{lem:ADandBD} that $B(D)=\frac12\,\abs{R_D}\,\abs{B(D,\rho)}$.
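For the small levels considered here, the set $R_D$ can be enumerated by brute force; a minimal Python sketch (the modulus $N=6$ and discriminant $D=-23$ below are an illustrative choice, not one of our levels):

```python
def square_roots_mod(D, N):
    """R_D = { rho mod 2N : rho^2 = D (mod 4N) }, by direct enumeration."""
    return sorted(rho for rho in range(2 * N) if (rho * rho - D) % (4 * N) == 0)

# For N = 6 and D = -23 one finds the residues {1, 5, 7, 11} mod 12,
# a set closed under rho -> -rho mod 2N.
roots = square_roots_mod(-23, 6)
```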
We also note that when $k$ is even and $N$ is prime, we have $B(D)=\abs{A(D)}$ since $R_D=\{\pm\rho\}$ and $B(D,-\rho)=B(D,\rho)$ in this case. We return to the example $F_{249}$. We get the following table: \begin{center} \begin{tabular}{c|ccc ccc ccc} $D$ & -7 & -8 & -20 & -31 & -35 & -40 & -47 & -56 & -71 \\\hline $L_D/L_{-4}$ & 1.0 & 1.0 & 4.0 & 1.0 & 16.0 & 4.0 & 1.0 & 4.0 & 0.0 \\ $B(D)/B(-8)$ & & 1 & 2 & & 4 & & 1 & 2 & 0 \end{tabular} \end{center} where again $L_D := L(F_{249},1/2,\chi_D)\cdot\abs{D}$ and $\kro{D}{249}=+1$. When $\kro{D}{3}=\kro{D}{83}=-1$ the definition of $B(D)$ gives an empty sum; this is indicated in the table above with an empty space. In the next section we describe where the remaining squares come from. \begin{remark} Note that the value of $B(-71)=0$ is a non-trivial zero average, predicting the vanishing of the twisted L-function at the center. We will investigate such phenomena in a future paper. \end{remark} \subsection{A general case}\label{ssec:genus char} In the previous section we looked at a form $F_{249}$ of composite level for which the averages $A(D)$ vanish trivially. We introduced a refined average $B(D)$ that explained some of the data in the tables, but in the case $\kro{D}{3}=\kro{D}{83}=-1$ the sum $B(D)$ is empty, although the central values $L_D/L_{-4}$ are (nonzero) squares. Consider the form $F_{587}^-$ as described in the Introduction. It was shown in \cite[Section 4]{RyanTornaria} that for the discriminants $D$ so that $\kro{D}{587}=-1$ the sum $A(D)$ was empty and so, in particular, our new sum $B(D)$ is also empty, and cannot explain the fact that its normalized twisted central values are (nonzero) squares. Also, in the definition of $A(D)$ and of $B(D)$ we require that $D$ be negative, so neither average can make sense of the data in the Introduction related to real quadratic twists of the L-functions of $F_{277}$.
In this section, using the genus theory for $\Gamma_0(N)$-classes of quadratic forms, we fully explain these examples by defining another new average $B_\ell(D)$ weighted by a genus character. We define this now. Fix a fundamental discriminant $\ell$. Then we define a genus character $\chi_\ell$ similar to the generalized genus character defined in~\cite{GrossKohnenZagier}. Let $T=[Na,b,c]\in\mathcal{Q}_{N,\ell D}$ so that $\gcd(a,b,c,\ell)=1$ and let $g=\gcd(N,b,c,\ell)$. Define $\tilde{T}=[Na/g,b,cg]$ and note that it represents an integer $n$ relatively prime to $\ell$. Now \[ \chi_\ell(T) := \kro{\ell}{n} \prod_{p\mid g} s_p \] where \[ s_p=\begin{cases} \kro{-\ell/p}{p} & p\text{ odd}\\ \kro{2}{t} & p=2,\, t\text{ the odd part of }\ell. \end{cases} \] We note that $\chi_\ell$ has the following properties. First, it is completely multiplicative: if $\ell=\ell_1\ell_2$, then $\chi_\ell=\chi_{\ell_1}\chi_{\ell_2}$. Second, it behaves predictably with respect to $W_p$. Namely, if $p\nmid\ell$, then $\chi_\ell(W_p\,T)=\kro{\ell}{p} \chi_\ell(T)$ and otherwise $\chi_{p^\ast}(W_p\,T)=\chi_{p^\ast}(T)$ where $p^\ast=\kro{-1}{p}\,p$ for odd $p$, and $p^\ast=-4$, $8$ or $-8$ for $p=2$. Then we define, for $D$ a fundamental discriminant such that $\ell D<0$, \[ B_\ell(D,\rho)= B_{\ell,F}(D,\rho) := \sum_{T\in\mathcal{Q}_{N,\ell D,\rho}/\Gamma_0(N)}\chi_{\ell}(T)\,\frac{a(T;F)}{\varepsilon(T)} \] and \[ B_\ell(D) =B_{\ell,F}(D) := \frac12\, \sum_{\rho\in R_{\ell D}} B_\ell(D,\rho)\,. \] We note that $B_1(D)=B(D)$ as defined in the previous section. One can also prove, using quadratic reciprocity, that $B_\ell(D)=B_D(\ell)$. \begin{remark} If $\kro{\ell D}{p}=-1$ for some $p\mid N$, then $B_\ell(D)=0$, because $\mathcal{Q}_{N,\ell D}$ is empty in this case. Nevertheless, for any fundamental discriminant $D$, there exists some $\ell$ for which $B_\ell(D)$ is not an empty sum. \end{remark} We will now complete the explanation of the examples we have discussed so far. 
We start with the form $F_{249}$. In the previous section we were able to explain the case $\kro{D}{3}=\kro{D}{83}=+1$ with auxiliary discriminant $\ell=1$ (implicitly). We can explain the other case, $\kro{D}{3}=\kro{D}{83}=-1$, by choosing $\ell=5$. \begin{center} \begin{tabular}{c|ccc ccc ccc} $D$ & -7 & -8 & -20 & -31 & -35 & -40 & -47 & -56 & -71 \\\hline $L_D/L_{-4}$ & 1.0 & 1.0 & 4.0 & 1.0 & 16.0 & 4.0 & 1.0 & 4.0 & 0.0 \\ $B_1(D)/B_1(-8)$ & & 1 & 2 & & 4 & & 1 & 2 & 0 \\ $B_5(D)/B_5(-4)$ & 1 & & & 1 & & 2 & & & \end{tabular} \end{center} where $L_D := L(F_{249},1/2,\chi_D)\cdot\abs{D}$ and $\kro{D}{249}=+1$. The empty entries correspond to empty sums as noted in the remark above. In order to explain the case of the form $F_{587}^-$, in the minus space, we need to use an auxiliary discriminant $\ell>0$ such that $\kro{\ell}{587}=-1$. Using $\ell=5$: \begin{center} \begin{tabular}{c|ccc ccc ccc ccc} $D$ & -4 & -7 & -31 & -40 & -43 & -47 \\\hline $L_D/L_{-3}$ & 1.0 & 1.0 & 4.0 & 9.0 & 144.0 & 1.0 \\ $B_5(D)/B_5(-3)$ & 1 & 1 & 2 & 3 & 12 & 1 \end{tabular} \end{center} where $L_D := L(F_{587}^{-},1/2,\chi_D)\cdot\abs{D}$ and the table shows fundamental discriminants for which $\kro{D}{587} = -1$. Finally, in order to handle positive discriminants $D$, we can choose a negative discriminant $\ell$. In the example of $F_{277}$ we choose $\ell=-3$: \begin{center} \begin{tabular}{c|ccc ccc ccc ccc} $D$ & 12 & 13 & 21 & 28 & 29 & 40 \\\hline $L_D/L_{1}$ & 225.0 & 225.0 & 225.0 & 225.0 & 2025.0 & 900.0 \\ $B_{-3}(D)/B_{-3}(1)$ & 15 & 15 & 15 & 15 & 45 & 30 \end{tabular} \end{center} where $L_D := L(F_{277},1/2,\chi_D)\cdot\abs{D}$ and $\kro{D}{277}=+1$. \section{The Case of Nonlifts}\label{sec:nonlifts} In the previous section, we highlighted some tables that give evidence for our Conjectures~B and C. Now we describe how those tables were computed and how the tables in Section~\ref{sec:tables} were computed.
The Paramodular Conjecture asserts that for each rational Hecke eigenform $F$ that is not a Gritsenko lift, there is an abelian surface $\mathcal{A}$ so that the Hasse-Weil L-function of $\mathcal{A}$ and the spin L-function of $F$ are the same. Suppose we have such an $F$ and such an $\mathcal{A}$. In all our examples, $\mathcal{A}$ is isogenous to the Jacobian of a hyperelliptic curve $C$. We list a sampling of such hyperelliptic curves in Table~\ref{tbl:egs}, and more examples can be found in \cite{RT}. \begin{table} \begin{center} \begin{tabular}{l||c|r|r} $F$ & $N$ & \hfill$C$\hfill{} & $T$ \\\hline\hline $F_{249}$ & 249 & $y^2+(x^3 + 1)y=x^2 + x$ & 14 \\ $F_{277}$ & 277 & $y^2+y=x^5 - 2x^3 + 2x^2 - x$& 15\\ $F_{295}$ & 295 & $y^2+(x^3 + 1)y=-x^4 - x^3$&14\\ $F_{587}^-$ & 587 & $y^2+(x^3 + x + 1)y = -x^3 - x^2$&1\\ $F_{713}^+$ & 713 & $y^2+(x^3 + x + 1)y=-x^4$&9\\ $F_{713}^-$ & 713 & $y^2+(x^3 + x + 1)y=x^5 - x^3$&1\\ \end{tabular} \end{center} \caption{Hyperelliptic curves $C$ used to compute the L-series of the paramodular form $F$ associated to $C$ via the Paramodular Conjecture. Here $T$ denotes the torsion of the abelian surface $\text{Jac}(C)$.} \label{tbl:egs} \end{table} Consider such a $C$. Then the Euler product of $C$ can be found as in \cite[Section 3]{RyanTornaria} and the functional equation can be found, for example, in \cite{Dokchitser}, though we give it the analytic normalization. The central values were then computed using Michael Rubinstein's \texttt{lcalc} \cite{lcalc}. Now we describe how the averages are computed. The Fourier coefficients of the 6 paramodular forms whose L-functions correspond to the L-functions of the curves listed in Table~\ref{tbl:egs} were computed by Cris Poor and David Yuen. The paramodular forms of level $277$ and $587$ are publicly available and computed via the methods of \cite{PoorYuen}. 
The other four paramodular forms were computed by Poor and Yuen for us, using an as-yet unpublished method~\cite{PoorYuenPrivate}. The sum $B_\ell(D)$ is computed from these Fourier coefficients using a combination of Sage \cite{Sage} code and custom-written Python code. In particular, we implemented a class that represents binary quadratic forms modulo $\Gamma_0(N)$, one of whose methods computes the generalized genus character $\chi_{\ell}$. In Table~\ref{tbl:verification} we summarize the forms and discriminants for which we have computed both twisted averages and twisted central values. \begin{table} \begin{center} \begin{tabular}{l||c|r|l} $F$ & $k_F$ & $\Delta_{\text{min}}$ & except these $\Delta$ \\\hline $F_{249}$ & 0.831968 & $-295$ & $\varnothing$ \\ $F_{277}$ & 0.537715 & $-2435$ & $\{ -2167, -2180, -2191, -2200, -2212, -2215 \}$ \\ $F_{295}$ & 0.224744 & $-276$ & $\{ -200, -211, -231, -259 \}$ \\ $F_{587}^{-}$ & 0.002680 & $-1108$ & $\{ -927 \}$ \\ $F_{713}^{+}$ & 0.422121 & $-260$ & $\varnothing$ \\ $F_{713}^{-}$ & 0.005248 & $-260$ & $\varnothing$ \\ \end{tabular} \end{center} \captionsetup{singlelinecheck=off} \caption[table description]{ Summary of forms and discriminants for which we have computed both twisted averages $B_\ell(D)$ and the corresponding twisted central values, and for which Conjecture~C has been numerically verified. The discriminants we computed satisfy $0 > \Delta \geq \Delta_{\text{min}}$, where $\Delta=\ell D$, with the following exceptions: \begin{itemize}[topsep=0pt,itemsep=0pt,parsep=0pt,label=--] \item If a discriminant $\Delta$ is in the last column, it means that we did not have all the Fourier coefficients necessary to compute the averages. \item In the case of $F_{277}$, we have the further restriction $\abs{\ell}\leq 500$ and $\abs{D}\leq 500$ due to loss of precision in computing $L_\ell$ and $L_D$.
\end{itemize} } \label{tbl:verification} \end{table} The following theorem summarizes the cases in which Conjecture~C has been verified. \begin{theorem}\label{thm:comp} Let $F$ be one of the paramodular forms listed in Table~\ref{tbl:verification}. Let $\ell$ and $D$ be fundamental discriminants with $\ell D < 0$, satisfying the constraints described in the same table. Then \begin{equation*} B_{\ell,F}(D)^2\approx \alpha_{\ell D}\, k_F\, L(F,1/2,\chi_\ell)\, L(F,1/2,\chi_D)\, \abs{D\ell}^{k-1} \end{equation*} numerically, with $k_F$ a positive constant listed in the table. \end{theorem} In addition to the cases listed in Table~\ref{tbl:verification}, more cases and more tables can be found at \cite{RT}, providing evidence for Conjecture~C using forms and curves that are not in this table. \section{The Case of Lifts}\label{sec:lifts} A Gritsenko lift $F$ \cite{Gritsenko} is a paramodular form that comes from a Jacobi form $\phi$, which in turn corresponds to an elliptic modular form $f$. The standard reference for Jacobi forms is \cite{EichlerZagier} and we refer the reader to \cite{PoorYuen} for background on the Gritsenko lift. We will now state and prove a theorem that gives evidence for Conjecture~B in the case of lifts. \begin{theorem}\label{thm:grit} Let $N$ be squarefree. Suppose $F\in S^k(\para{N})$ is a Hecke eigenform and a Gritsenko lift. Let $D<0$ be a fundamental discriminant. Then \begin{equation*} B_{F}(D)^2= \alpha_{D} \, C_{F} \, L(F,1/2,\chi_D) \, \abs{D}^{k-1} \end{equation*} where $C_F$ is a positive constant independent of $D$. \end{theorem} Let $F=\text{Grit}(\phi)$ where \[\phi(\tau,z)=\sum_{n\geq 0}\sum_{r^2\leq 4nN} c(n,r)\,q^n\,\zeta^r\] is a Jacobi form of weight $k$ and index $N$. We note \cite[Theorem 2.2, p.
23]{EichlerZagier} that $c(n,r)$ depends only on $D=r^2-4nN$ and $r\mod{2N}$; for each $\rho\in R_D$ we let \[ c_\rho(D) := c\left(\frac{\rho^2-D}{4N}, \rho\right) \qquad \text{and} \qquad c^*(D):=\frac{1}{2}\sum_{\rho\in R_D}\abs{c_\rho(D)} \,. \] We remark that $\abs{c_\rho(D)}$ is independent of $\rho$ and that $c^*(D)$ is, up to sign, the coefficient of a weight $k-1/2$ modular form \cite[Theorem~5.6, p. 69]{EichlerZagier}. \begin{proposition} \label{prop:avg_lift} If $D<0$ is a fundamental discriminant, then \[ B_F(D)=c^{*}(D)\frac{h(D)}{w_D} \,. \] If $\ell\neq 1$ and $D\neq 1$ are fundamental discriminants, we have $B_{\ell,F}(D)=0$. \end{proposition} \begin{proof} By the definition of the Gritsenko lift, we know that $a(T ; F) = c_b(\disc T)$ for $T=[Na,b,c]\in\mathcal{Q}_{N}$, provided $T$ is primitive, which is always the case for $\disc T$ fundamental. Thus \begin{align*} \abs{B_\ell(D,\rho)} & =\abs{\sum_{T\in\mathcal{Q}_{N,\ell D,\rho}/\Gamma_0(N)} \chi_\ell(T)\,\frac{a(T;F)}{\varepsilon(T)}} \\ & = \abs{c_\rho(\ell D)}\,\abs{\sum_{T\in\mathcal{Q}_{N,\ell D,\rho}/\Gamma_0(N)} \chi_\ell(T)\,\frac{1}{\varepsilon(T)}} \,. \end{align*} When $\ell=1$ the sum in the last term is $\sum \frac{1}{\varepsilon(T)} = \frac{h(D)}{w_D}$, since $\abs{\mathcal{Q}_{N,D,\rho}/\Gamma_0(N)}=h(D)$ for fundamental $D$ and $\varepsilon(T)=w_D$. On the other hand, if $\ell\neq 1$ and $D\neq 1$, then $\chi_\ell$ is a nontrivial character on $\mathcal{Q}_{N,\ell D,\rho}/\Gamma_0(N)$, hence the sum vanishes. \end{proof} Let $f$ be the elliptic modular form corresponding to the Jacobi form $\phi$ as in \cite[Theorem 5]{SkoruppaZagier}. It is a standard fact that $L(F,s) = \zeta(s+1/2)\, \zeta(s-1/2)\, L(f,s)$ (using the analytic normalization, so that the center is at $s=1/2$). Twisting by $\chi_D$ we obtain \[ L(F,s,\chi_D) = L(s+1/2,\chi_D)\, L(s-1/2,\chi_D)\, L(f,s,\chi_D) \] valid on the region of convergence.
It follows from this that $L(F,s,\chi_D)$ has an analytic continuation (with a pole at $s=3/2$ for $D=1$) and, using Dirichlet's class number formula for the special values $L(0,\chi_D)$ and $L(1,\chi_D)$, we have \begin{proposition} \label{prop:centralvalue_lift} \[ L(F,1/2,\chi_D) = \begin{cases} \frac{4\pi^2}{w_D^2}\,\frac{h(D)^2}{\sqrt{\abs{D}}}\, L(f,1/2,\chi_D) & \text{if $D<0$,} \\ 0 & \text{if $D>1$,} \\ -\frac{1}{2} \, L'(f,1/2) & \text{if $D=1$.} \\ \end{cases} \] \end{proposition} \begin{proof}[Proof of Theorem~\ref{thm:grit}] By Waldspurger's formula \cite{Waldspurger,Kohnen}, we have \[ c^*(D)^2 = \alpha_D\,k_f\,L(f,1/2,\chi_D)\,\abs{D}^{k-3/2} \] with $k_f>0$. The theorem thus follows from Proposition~\ref{prop:avg_lift} and Proposition~\ref{prop:centralvalue_lift}, with $k_F=k_f/4\pi^2$. \end{proof} \section{Torsion}\label{sec:torsion} In the Introduction, we observed that $L(F_{277},1/2,\chi_D)\cdot\abs{D}/L(F_{277},1/2)$ is divisible by $15^2$ when $D>1$: \begin{proposition} Let $D>1$ and assume Conjecture~B. Then, the ratio of special values $L(F_{277},1/2,\chi_D)\cdot\abs{D}/L(F_{277},1/2)$ is divisible by $15^2$. \end{proposition} \begin{proof} We recall \cite[Theorem 7.3]{PoorYuen}, which asserts the following: suppose $\phi$ is the first Fourier-Jacobi coefficient of $F_{277}$ and let $G=\text{Grit}(\phi)$. Then, for all $T\in \mathcal{Q}_{N}$, \[ a(T;F)\equiv a(T;G)\pmod{15}. \] In Proposition~\ref{prop:avg_lift} we observed that $B_{\ell,G}(D) = 0$ when $\ell,D\neq 1$; hence, it follows that $B_{\ell,F}(D)\equiv 0\pmod{15}$. Finally, Conjecture~B implies that \[ L(F,1/2,\chi_D) \cdot |D| / L(F,1/2) = \star\, B_{-3}(D)^2 / B_{-3}(1)^2 \] where $B_{-3}(1)=1$ (see Table~\ref{tbl:277}) and where $\star = 1$ if $p\mid D$ and 2 if $p\nmid D$. \end{proof} We recall (see Table~\ref{tbl:egs}) that the abelian surface associated to $F_{277}$ by the Paramodular Conjecture has torsion of size 15.
In \cite{PoorYuen}, it is suggested that this phenomenon holds in general. Observe that in Tables~\ref{tbl:249}--\ref{tbl:713m} each entry (both the integers $B_\ell(D)$ and the normalized central values) is divisible by the size of the corresponding curve's torsion unless $\ell=1$ or $D=1$. This provides further (indirect) evidence that Poor and Yuen's observation holds in general.
\section{Introduction}\label{sec:intro} Superfluidity in quantum fluids is in general accompanied by the phenomenon of two sound modes, namely, first and second sound, as described by Landau-Tisza's two-fluid hydrodynamic theory \cite{Tisza1938, Landau1941}. The intriguing phenomenon of second sound was first observed in liquid helium and was described as an entropy wave based on the two-fluid hydrodynamic theory \cite{Peshkov}. Ultracold atoms have expanded the scope of this study to a wide range of trappable quantum fluids, with features such as reduced dimensionality and tunable interactions. Two sound modes were measured in a three-dimensional (3D) unitary Fermi gas \cite{Sidorenkov2013}, the BEC-BCS crossover \cite{Hoffmann2021}, and dilute 2D and 3D Bose gases \cite{Hadzibabic2021, Hilker2021}. In contrast to liquid helium, ultracold gases form a wide range of weakly interacting quantum fluids, which undergo an intriguing interplay of sound modes between hydrodynamic and non-hydrodynamic regimes \cite{Stamper1998, SinghSF, Hilker2021}. Based on the hydrodynamic theory at zero temperature, the second-sound velocity ($v_2=v_1/\sqrt{D}$) is either below the first-sound velocity $v_1$ for $D=3$ and $2$ dimensions or equal to $v_1$ for $D=1$ dimension \cite{Matveev2018}. Hydrodynamic theory does not allow for a sound velocity higher than the Bogoliubov velocity; however, such regimes were predicted at low temperatures in dilute 3D and 2D Bose gases in the weak-coupling regime \cite{Ilias, SinghSS}. Sound propagation in 2D quantum fluids is of particular interest because the superfluid density undergoes a universal jump of $4/\lambda^2$ at the Berezinskii-Kosterlitz-Thouless (BKT) transition \cite{Berezinski1972, Kosterlitz1973, Minnhagen1987}, where $\lambda$ is the thermal de Broglie wavelength.
This has attracted interest to study sound propagation in ultracold 2D quantum gases both experimentally \cite{Dalibard2018, Bohlen2020, Hadzibabic2021} and theoretically \cite{Ozawa2014, Liu2014, OtaTwo, Ota2018, Salasnich2018, SinghSS, Zhigang2020, Krzysztof, Salasnich2020, Furutani2021}. In Ref. \cite{SinghSS} we discussed and gave numerical evidence for the weak and strong coupling regimes of the sound modes. For the weak-coupling regime, we found that the non-Bogoliubov (NB) sound mode has higher velocity than the Bogoliubov (B) sound mode. We referred to this as a non-hydrodynamic regime. For the strong-coupling regime, we showed that the B sound mode velocity is higher than the NB sound mode velocity. This was consistent with a hydrodynamic scenario. Furthermore, we found that the two sound modes undergo a temperature-dependent hybridization between these two coupling regimes \cite{SinghSF}. We note that for a finite-size system the BKT transition manifests itself as a crossover \cite{Pilati2008, Matt2010, sf_2019, SinghJJ}, rather than a sudden jump. Recently, Ref. \cite{Hadzibabic2021} reported the measurement of the two sound modes in a homogeneous 2D Bose gas of $^{39}$K atoms across the BKT transition. The density response of the driven system is measured as a function of the driving frequency, allowing the detection of both sound modes. The velocity of the lower sound mode decreases with increasing temperature and vanishes above the transition temperature $T_c$, whereas the velocity of the higher sound mode is higher than the Bogoliubov velocity at zero temperature and displays a weak temperature dependence across $T_c$. The measurements are compared to the two-fluid theory in an infinite system \cite{OtaTwo}, which predicts a jump in the velocities at $T_c$ and does not describe the temperature dependence of the higher sound mode at $T/T_c<0.75$. 
In this paper, we use classical-field simulations to study the propagation of the sound modes in 2D Bose gases for the experimental parameters of Ref. \cite{Hadzibabic2021}. The dynamic structure factor (DSF) of the unperturbed cloud shows the Bogoliubov (B) and non-Bogoliubov (NB) sound modes below the transition temperature and the diffusive and normal sound modes above the transition temperature. This allows us to determine the two sound velocities across the BKT transition independently of an external probe, serving as a benchmark for the results of the density probes. We implement the experimental method to excite the sound modes by driving the system \cite{Hadzibabic2021}. We find a driving-strength-dependent density response; the excitation of two well-resolved sound peaks is observed only for strong driving. As a secondary comparison, we excite the two sound modes using a step-pulse perturbation of the density \cite{SinghSS, Hoffmann2021}. Finally, we compare the sound-velocity results of the driven response, the step-pulse perturbation and the DSF with the measurements \cite{Hadzibabic2021}. The measured higher-mode velocity agrees, within the experimental error, with the simulation results at all temperatures, even at temperatures where the hydrodynamic prediction fails. The measured lower velocity shows a shift to higher velocities compared to the results of the DSF and the step-pulse perturbation, which is captured by the simulation results of the driven response at high temperatures. This increase, due to the nonlinear response under strong driving, partially captures the experimental observations. This paper is organized as follows. In Sec. \ref{sec:method} we describe the simulation method and the excitation protocols. In Sec. \ref{sec:dsf} we calculate the dynamic structure factor to characterize the sound modes. In Sec. \ref{sec:probes} we present the results of the driven response and the step-pulse perturbation. In Sec.
\ref{sec:comp} we compare the simulation and the measurements of the two sound velocities. We conclude in Sec. \ref{sec:conclusion}. \begin{figure*}[] \includegraphics[width=1.0\linewidth]{dsf_all} \caption{Excitation spectra of a 2D Bose gas for the density $n=3 \, \mu \mathrm{m}^{-2}$ and the interaction $\tilde{g}=0.64$, which are the same as the experimental values in Ref. \cite{Hadzibabic2021}. Dynamic structure factor $S(\mathbf{k} , \omega)$ as a function of the wave vector $\mathbf{k} = k_y$ and the frequency $\omega$ is shown for various $T/T_0$ across the BKT transition, where $T_0$ is the estimate of the transition temperature. The white dashed line is the Bogoliubov dispersion determined using the numerically obtained superfluid density $n_s(T)$; see text. } \label{Fig:dsf} \end{figure*} \section{System and methodology}\label{sec:method} We simulate a bosonic cloud of $^{39}$K atoms confined to 2D motion in a box potential. This geometry was used in Ref. \cite{Hadzibabic2021}. The system is described by the Hamiltonian \begin{equation} \label{eq_hamil} \hat{H}_{0} = \int d \mathbf{r} \, \Big[\frac{\hbar^2}{2m} \nabla \hat{\psi}^\dagger({\bf r}) \cdot \nabla \hat{\psi}({\bf r}) + \frac{g}{2} \hat{\psi}^\dagger({\bf r})\hat{\psi}^\dagger({\bf r})\hat{\psi}({\bf r})\hat{\psi}({\bf r})\Big]. \end{equation} $\hat{\psi}$ ($\hat{\psi}^\dagger$) is the bosonic annihilation (creation) operator. $g = \tilde{g}\hbar^2/m$ is the 2D interaction parameter, with $\tilde{g}= \sqrt{8 \pi} a_s/\ell_z$ being the dimensionless interaction and $m$ the atomic mass. $a_s$ is the 3D scattering length and $\ell_z$ is the harmonic-oscillator length of the confining potential in the transverse direction. We use the same density $n=3\, \mu \mathrm{m}^{-2}$ and the same $\tilde{g}=0.64$, as in the experiments \cite{Hadzibabic2021}. We choose a square box of size $L_x \times L_y = 32 \times 32 \, \mu \mathrm{m}^2$ and various temperatures $T/T_0$. 
We use the temperature $T_0 = 2\pi n \hbar^2 /(m k_\mathrm{B} \mathcal{D}_c)$, with the critical phase-space density $\mathcal{D}_c= \ln(380/\tilde{g})$, as the temperature scale; see Refs. \cite{Prokofev2001, Prokofev2002}. This scale gives an estimate of the critical temperature $T_c$ of the BKT transition. For the simulations we discretize space on a lattice of size $N_x \times N_y = 64 \times 64$ with a discretization length $l=0.5\, \mu \mathrm{m}$. We note that $l$ is chosen to be smaller than the healing length $\xi= \hbar/\sqrt{2mgn}$ and the thermal de Broglie wavelength \cite{Castin2003}. We use the classical-field method of Refs. \cite{Singh2017, SinghJJ}. According to this method, we replace the operators $\hat{\psi}$ in Eq. \ref{eq_hamil} and in the equations of motion by complex numbers $\psi$. We sample the initial states from a grand-canonical ensemble with chemical potential $\mu$ and temperature $T$ via a classical Metropolis algorithm. We propagate the state using the equations of motion to obtain the many-body dynamics. To excite the sound modes we add the perturbation term \begin{align} \mathcal{H}_\text{p} = \int d \mathbf{r} V(\mathbf{r} , t) n(\mathbf{r} , t), \end{align} where $V(\mathbf{r} , t)$ is the perturbation potential that couples to the density $n(\mathbf{r} , t)$ at the location $\mathbf{r} = (x, y)$ and time $t$. Within linear response theory, the induced density fluctuation $\delta n(\mathbf{k} , \omega)$ is described in terms of the density response function $\chi_{nn} (\mathbf{k} , \omega)= \delta n(\mathbf{k} , \omega)/V(\mathbf{k} , \omega)$, where $V(\mathbf{k} , \omega)$ is the Fourier transform of $V(\mathbf{r} , t)$. This allows us to determine the collective modes of the system. We first implement the experimental method of exciting both sound modes, as in Ref. \cite{Hadzibabic2021}.
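As a consistency check, the characteristic scales just introduced can be evaluated directly from the quoted parameters $n=3\,\mu\mathrm{m}^{-2}$ and $\tilde g=0.64$; a minimal Python sketch in SI units (the $^{39}$K atomic mass and the physical constants below are our inputs, not taken from the text):

```python
import math

hbar = 1.054571817e-34       # J s
kB = 1.380649e-23            # J / K
m = 39 * 1.66053907e-27      # kg, approximate mass of a 39K atom (assumption)

n = 3e12                     # 2D density in m^-2  (3 per micron^2)
g_tilde = 0.64               # dimensionless interaction

# Healing length xi = hbar / sqrt(2 m g n) with g = g_tilde hbar^2 / m,
# which simplifies to xi = 1 / sqrt(2 g_tilde n).
xi = 1.0 / math.sqrt(2 * g_tilde * n)
k_xi = math.sqrt(2) / xi     # momentum scale sqrt(2)/xi

# BKT temperature scale T0 = 2 pi n hbar^2 / (m kB D_c), D_c = ln(380 / g_tilde).
Dc = math.log(380.0 / g_tilde)
T0 = 2 * math.pi * n * hbar**2 / (m * kB * Dc)
```

For these parameters one finds $\xi \approx 0.51\,\mu\mathrm{m}$ (so indeed $l = 0.5\,\mu\mathrm{m} < \xi$), $k_\xi \approx 2.8\,\mu\mathrm{m}^{-1}$, and $T_0$ of a few tens of nK.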
We drive the system along the $y$ direction using $V(\mathbf{r} , t)= V_0 \sin(\omega t) \times (y-L_y/2)/L_y$, where $V_0$ is the driving strength and $\omega$ is the driving frequency. This predominantly excites the longest-wavelength sound modes with the wavevector $k_L=\pi/L_y$ \cite{Navon2016}. For each $\omega$, we calculate the time evolution of the density profile $\Delta n(y, t)=n(y, t) - n$, which is averaged over the ensemble and the $x$ direction. The driving protocol results in center-of-mass oscillations of the cloud, as shown in Fig. \ref{Fig:shaking}(a). Assuming that the density fluctuation corresponds to the lowest excitation mode in the box, we fit $\Delta n(y, t)$ with the function $n(y, t) = b_0 + b(t) \sin[\pi (y-L_y/2)/L_y]$ using $b_0$ and $b(t)$ as the fitting parameters. From $b(t)$, we obtain the center-of-mass displacement $d(t)=(1/L_y) \int_0^{L_y} dy\, y n(y, t) = 2b(t)L_y/\pi^2$. To calculate the steady-state response, we choose the time evolution between $160$ and $360\, \mathrm{ms}$, which is fixed for every $\omega$. Fitting $d(t)$ with the function $f(t) = [ R(\omega) \sin(\omega t) - A(\omega) \cos(\omega t)] \exp(-\kappa t)$ enables us to determine the reactive $(R)$ and absorptive $(A)$ response, where we have included an additional fit parameter $\kappa$ as a global damping rate. Using the Fourier decomposition $V(k_L, \omega)= 4 V_0/\pi^2$, the $A(\omega)$ response yields $\mathrm{Im} \chi_{nn} (k_L, \omega) = n \pi^4 A(\omega)/(8V_0L_y)$ and thus allows us to determine the dynamic structure factor $S(k_L, \omega)$ \cite{Hadzibabic2021}. As a second method, we employ a step-pulse perturbation of the density, which is motivated by Refs. \cite{SinghSS, Hoffmann2021}. To perturb the density we use the Gaussian potential $V(\mathbf{r} , t)= V_0(t) \exp[- (y-y_0)^2/(2 \sigma^2) ]$, which is centered at $y_0= L_y/2$. $V_0(t)$ is the time-dependent strength and $\sigma$ is the width.
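For fixed $\kappa$, the fit of $d(t)$ described above is linear in $R$ and $A$; a minimal Python sketch of the undamped case ($\kappa=0$) with synthetic data (not our simulation output):

```python
import numpy as np

def fit_reactive_absorptive(t, d, omega):
    """Fit d(t) ~ R sin(omega t) - A cos(omega t) by linear least squares,
    i.e. the kappa = 0 limit of the fit function f(t) in the text."""
    M = np.column_stack([np.sin(omega * t), -np.cos(omega * t)])
    (R, A), *_ = np.linalg.lstsq(M, d, rcond=None)
    return R, A

# The absorptive part A(omega) then gives the response via
# Im chi_nn(k_L, omega) = n pi^4 A(omega) / (8 V0 Ly).
```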
We suddenly turn on and off this potential for a short perturbation time of about $0.5\, \mathrm{ms}$, which excites sound pulses as shown in Fig. \ref{Fig:step}. \begin{figure}[] \includegraphics[width=1.0\linewidth]{dsf_cut} \caption{$S(k, \omega)$ plots at $k/k_\xi= 0.35$ for $T/T_0=0.81$ (blue circles) and $1.22$ (red circles). $k_\xi$ is the wave vector below which the Bogoliubov dispersion has a linear momentum dependence. The continuous lines are the fits with the two-mode dynamic structure factor in Eq. \ref{eq:fit}. } \label{Fig:modes} \end{figure} \begin{figure*}[] \includegraphics[width=1.0\linewidth]{density} \caption{Detecting both sound modes via periodic driving of the center of mass of the cloud. (a) Time evolution of the density profile $\Delta n(y, t)$, averaged over the $x$ direction and the ensemble, for $V_0/\mu=0.8$, $\omega/\omega_\mathrm{B}=0.74$ and $T/T_0 = 0.54$. (b) Displacement $d(t)$ of the cloud's center of mass and the fit $f(t) = [R \sin(\omega t) - A \cos(\omega t)] \exp(-\kappa t)$ (continuous line) using the fitting parameters $A$, $R$ and $\kappa$. (c) $A(\omega)$ response as a function of $V_0/\mu$ and $\omega/\omega_\mathrm{B}$. (d, e) show the determined values of the mode frequency and the amplitude; see text. B and NB denote the Bogoliubov and non-Bogoliubov sound mode, respectively. } \label{Fig:shaking} \end{figure*} \section{Dynamic structure factor}\label{sec:dsf} To characterize the sound modes we calculate the dynamic structure factor (DSF) \begin{align} S(\mathbf{k} , \omega) = \langle |n(\mathbf{k} , \omega)|^2 \rangle, \end{align} where $n(\mathbf{k} , \omega)$ is the Fourier transform of the density $n(\mathbf{r} , t)$ in space and time. We define $n(\mathbf{k} , \omega)$ as \begin{align} n(\mathbf{k} , \omega) = \frac{1}{\sqrt{N_l T_s}} \sum_i \int dt \, e^{-i(\mathbf{k} \mathbf{r} _i - \omega t)} n(\mathbf{r} _i, t). 
\end{align} $N_l= N_x N_y$ is the number of lattice sites and $T_s = 160 \, \mathrm{ms}$ is the sampling time for the numerical Fourier transform. The DSF gives the overlap of the density degree of freedom with the collective modes. In Fig. \ref{Fig:dsf} we show $S(\mathbf{k} , \omega)$ as a function of the wave vector $\mathbf{k} =k_y$ and the frequency $\omega$ for various $T/T_0$. At low temperatures it primarily shows one excitation branch, while at intermediate temperatures it shows two excitation branches. Above the transition temperature, it displays the diffusive mode at low momenta and the excitation branch of the normal sound mode in a thermal gas. We compare these spectra with the Bogoliubov dispersion $\hbar \omega_k = \sqrt{\epsilon_k(\epsilon_k + 2 g n_s )}$, where $\epsilon_k = 2J[1- \cos (k_y l)]$ is the free-particle dispersion on the lattice introduced for simulations and $J= \hbar^2/(2ml^2)$ is the tunneling energy. We numerically determine the superfluid density $n_s(T)$ from the current-current correlations; see Ref. \cite{SinghSS}. We show the Bogoliubov prediction in Fig. \ref{Fig:dsf}, which agrees with the lower branch at all $k$ for all $T/T_0$ below the transition temperature. This enables us to identify the lower branch as the Bogoliubov (B) mode and the higher branch as the non-Bogoliubov (NB) mode. At low temperatures the spectral weight is on the B mode, while at intermediate temperatures both B and NB modes are visible. The broadening of the B mode increases with increasing temperature, which occurs due to Landau damping \cite{Chung2009}. Above the transition temperature, the B mode transforms into the diffusive mode and the NB mode continuously connects to the normal sound mode. In experiments, the DSF is measured via the density response $\chi_{nn} (\mathbf{k} , \omega)$ that describes the density fluctuation created by a perturbing potential. 
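A minimal numerical sketch of this definition (1D, a single ensemble member; the function name and the plane-wave test signal are illustrative, not the simulation code of the paper):

```python
import numpy as np

def dynamic_structure_factor(n_rt, dx, dt):
    """S(k, w) ~ |n(k, w)|^2 from a single density record n_rt[time, site].

    Normalized by the number of sites and the sampling time T_s as in the
    text. numpy's FFT sign convention differs from e^{-i(k r - w t)} only
    by a relabeling of one frequency axis, so we compare magnitudes below.
    """
    Nt, Nx = n_rt.shape
    Ts = Nt * dt
    n_kw = np.fft.fft2(n_rt) / np.sqrt(Nx * Ts)
    k = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
    w = 2 * np.pi * np.fft.fftfreq(Nt, d=dt)
    return k, w, np.abs(n_kw) ** 2

# A plane-wave density modulation concentrates all spectral weight at its
# wavevector and frequency (placeholder numbers)
Nx, Nt, dx, dt = 64, 128, 1.0, 1.0
k0 = 2 * np.pi * 4 / (Nx * dx)
w0 = 2 * np.pi * 8 / (Nt * dt)
x = np.arange(Nx) * dx
t = np.arange(Nt)[:, None] * dt
n_rt = np.cos(k0 * x[None, :] - w0 * t)
k, w, S = dynamic_structure_factor(n_rt, dx, dt)
it, ix = np.unravel_index(np.argmax(S), S.shape)
```

The maximum of $S$ sits at $(|k|, |\omega|) = (k_0, \omega_0)$, as expected for a single travelling wave.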
Thus, the density response is a useful tool to identify the sound modes using density probes, as we show in Sec. \ref{sec:probes}. For $k_\mathrm{B} T \gg \hbar \omega$, the density response is related to the DSF via $S(\mathbf{k} , \omega)= - k_\mathrm{B} T \mathrm{Im} \chi_{nn} (\mathbf{k} , \omega)/(\pi n \omega)$ \cite{Hohenberg1964, Griffin2009}. The two sound modes are supported by Landau-Tisza's two-fluid hydrodynamic theory, yielding the density response \cite{Griffin2009} \begin{align}\label{eq:nn} \chi_{nn} (\mathbf{k} , \omega) = \frac{n k^2}{m} \frac{\omega^2-v^2 k^2}{ (\omega^2-v_1^2 k^2) (\omega^2-v_2^2 k^2) }, \end{align} which has poles at the velocities $v_1$ and $v_2$ corresponding to the two sound modes. $v^2 = T s^2 n_s/(c_v n_n)$ denotes an additional velocity, where $s$ is the entropy, $c_v$ is the heat capacity, $n_s$ is the superfluid density, and $n_n$ is the normal fluid density. Following Eq. \ref{eq:nn} and including linear damping \cite{Hohenberg1965}, we fit the simulated $S(\mathbf{k} , \omega)$ with $S(\omega) =S_1(\omega)+S_2(\omega)$ for each $\mathbf{k} =k_y$, where \begin{align}\label{eq:fit} S_{1,2} (\omega) = \frac{x_{1,2} \omega_{1,2}^2 \Gamma_{1,2}} { (\omega^2 - \omega^2_{1,2} )^2 + ( \omega \Gamma_{1,2})^2 }. \end{align} The amplitudes $x_{1,2}$, mode frequencies $\omega_{1,2}$ and damping rates $\Gamma_{1,2}$ are the fit parameters. As an example, in Fig. \ref{Fig:modes} we show the simulated $S(\mathbf{k} , \omega)$ at $k/k_\xi=0.35$ for $T/T_0=0.81$ and $1.22$. The wave vector $k_\xi \equiv \sqrt{2}/\xi = 2.8\, \mu \mathrm{m}^{-1}$ sets the momentum scale, below which the Bogoliubov dispersion has a linear momentum dependence. We fit these spectra with Eq. \ref{eq:fit} and show their fits as the continuous lines in Fig. \ref{Fig:modes}. The two-mode feature of the DSF is captured by Eq. \ref{eq:fit}. At $T/T_0=0.81$, the lower (B) sound peak has higher spectral weight than the higher (NB) sound peak.
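The two-mode fit of Eq. \ref{eq:fit} can be sketched as follows; the spectrum and the mode parameters are synthetic placeholders, chosen only to show that the six fit parameters are recoverable from a two-peak spectrum.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_mode(w, x, w0, gamma):
    """One damped-phonon contribution, Eq. (fit) of the text."""
    return x * w0**2 * gamma / ((w**2 - w0**2)**2 + (w * gamma)**2)

def two_mode_dsf(w, x1, w1, g1, x2, w2, g2):
    """Sum of the lower (B) and higher (NB) mode contributions."""
    return damped_mode(w, x1, w1, g1) + damped_mode(w, x2, w2, g2)

# Synthetic two-peak spectrum with placeholder mode parameters
w = np.linspace(0.05, 4.0, 800)
true = (1.0, 1.0, 0.2, 0.4, 2.2, 0.3)   # (x1, w1, G1, x2, w2, G2)
S = two_mode_dsf(w, *true)

# Fit with an initial guess near each peak, as one would do per k-slice
popt, _ = curve_fit(two_mode_dsf, w, S, p0=(0.8, 0.9, 0.3, 0.3, 2.0, 0.4))
```

When only one mode dominates, the same `damped_mode` function alone serves as the single-mode fit mentioned in the text.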
Above the transition temperature at $T/T_0=1.22$, the B mode vanishes and results in the diffusive mode, while the NB mode becomes the normal sound mode. This numerical observation of the two sound peaks is consistent with the measured spectra \cite{Hadzibabic2021}, which we discuss below. To determine the sound velocities, we perform fits for various $k$ below $k_\xi$ and determine the velocities from the linear slope of $\omega_{1,2}/k$. When there is mainly one dominant mode at low and high temperatures, we fit with a single function in Eq. \ref{eq:fit}. The results of the two sound velocities are presented in Fig. \ref{Fig:vel}. \section{Excitation of density pulses}\label{sec:probes} The two sound modes that we find in the dynamic structure factor can be measured by exciting density pulses \cite{Sidorenkov2013, Hoffmann2021, Hadzibabic2021}. In the following, we present the method of periodic modulation \cite{Hadzibabic2021} and a potential quench of the local density \cite{Hoffmann2021, SinghSS}. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{response_dsf} \caption{Dimensionless $\tilde{A}(\omega)$ response (inset) and the corresponding dynamic structure factor $S(\omega)$ are shown at $T/T_0= 0.54$ (a) and $1.35$ (b). We used $V_0/\mu = 1$. The continuous line in (a) is the fit with Eq. \ref{eq:fit}, whereas the continuous line in (b) is the Lorentzian fit centered at $\omega=0$. } \label{Fig:adsf} \end{figure} \subsection{Periodic driving}\label{sec:excitation} We first present the method of exciting both sound modes via periodic driving of the center of mass of the cloud, as described in \cite{Hadzibabic2021} and Sec. \ref{sec:method}. The driving potential is directed along the $y$ direction and sinusoidally oscillates in time at frequency $\omega$. In Fig. \ref{Fig:shaking}(a) we show the resulting time evolution of the density profile $\Delta n(y, t)$ for $V_0/\mu=0.8$, $\omega/\omega_\mathrm{B}=0.74$ and $T/T_0 = 0.54$. 
$\mu = gn$ is the mean-field energy and $\omega_\mathrm{B} = v_\mathrm{B} k_L $ is the Bogoliubov frequency, which results in $\omega_\mathrm{B}/(2\pi) = 34 \, \mathrm{Hz}$ for $v_\mathrm{B}= \sqrt{gn/m}=2.25\, \mathrm{mm/s}$. The driving protocol excites center-of-mass oscillations of the cloud at $\omega/\omega_\mathrm{B}$, from which we determine the displacement $d(t)= 2b(t)L_y/\pi^2$; see Sec. \ref{sec:method}. In Fig. \ref{Fig:shaking}(b) we show $d(t)$ and the corresponding fit with the function $f(t) = [R(\omega) \sin(\omega t) - A(\omega) \cos(\omega t)] \exp(-\kappa t)$, which allows us to determine the reactive $(R)$ and absorptive $(A)$ response. We find that the damping of the oscillations, determined by $\kappa$, depends on $\omega$ and $V_0$ of the driving potential. In Fig. \ref{Fig:shaking}(c) we show the $A(\omega)$ response determined as a function of $V_0/\mu$ and $\omega/\omega_\mathrm{B}$. For low $V_0/\mu$, $A(\omega)$ primarily displays one maximum that corresponds to the B mode. This maximum lies below the zero-temperature prediction $\omega/\omega_\mathrm{B} = 1$, which is due to the thermal broadening of the phonon modes at nonzero temperatures \cite{SinghSS}. For higher $V_0/\mu$ near and above $1$, $A(\omega)$ shows two maxima corresponding to the B and NB modes. Interestingly, the separation between the peak locations in frequency space increases with increasing $V_0/\mu$, which is due to the nonlinear response of the system that sets in for larger perturbations, as we describe below. To determine the peak amplitude and the frequency, we fit $A(\omega)$ with the function $A_{1, 2} (\omega) = \omega S_{1, 2} (\omega)$, based on the relation $S(\mathbf{k} , \omega) \propto A(\omega)/\omega$, see Sec. \ref{sec:dsf} and Eq. \ref{eq:fit}. In Figs.
\ref{Fig:shaking}(d) and (e) we plot the frequency and the amplitude of each sound peak, determined via individual fitting, as a function of $V_0/\mu$. For low driving strengths up to $V_0/\mu \sim 0.6$, the B-mode frequency remains qualitatively unchanged and is about $\omega_1/\omega_\mathrm{B} \approx 0.8$. In this weak perturbation regime, the B-mode amplitude increases linearly, which is a characteristic of linear response. At higher $V_0/\mu$, the B-mode amplitude shows nonlinear behavior, and the mode frequency decreases, dropping to $\omega_1/\omega_\mathrm{B} = 0.62$ for $V_0/\mu = 1.5$. The higher (NB) sound peak is resolved only above $V_0/\mu \gtrsim 0.7$. Contrary to the B mode, the NB-mode frequency increases with increasing $V_0/\mu$, while its amplitude decreases. This reduction of the B-mode frequency and enhancement of the NB-mode frequency derive from the decreasing superfluid density and the increasing normal fluid density under stronger probing, respectively. To determine the sound velocities $v_{1,2}=\omega_{1,2}/k_L$, we use the $A(\omega)$ response in the nonlinear regime at $V_0/\mu=1$, where the two sound peaks are well resolved; this choice is also motivated by the probing regime of Ref. \cite{Hadzibabic2021}. We note that in this regime the frequencies of the modes are weakly renormalized by the probe, compared to the linear response regime. We show the results of $v_{1,2}$ for various $T/T_0$ in Fig. \ref{Fig:vel}. We now relate the $A(\omega)$ response to the dynamic structure factor (DSF) both below and above the transition temperature. For this, we calculate the $A(\omega)$ response using $V_0/\mu = 1$ at $T/T_0=0.54$ and $1.35$. In Fig. \ref{Fig:adsf}, we show the dimensionless response $ {\tilde{A} } = \pi^3 m v_\mathrm{B}^2 A/(8V_0 L_y)$ and the corresponding DSF $S = k_\mathrm{B} T {\tilde{A} }/(m v_\mathrm{B}^2 \omega)$ below and above the transition temperature.
For $T/T_0=0.54$, $ {\tilde{A} }(\omega)$ shows the two sound peaks, which are also visible in the corresponding $S(\omega)$. The two-peak structure is captured by the DSF in Eq. \ref{eq:fit} and is consistent with the simulated DSF of the unperturbed cloud in Fig. \ref{Fig:dsf}. Above the transition temperature, $ {\tilde{A} }(\omega)$ primarily shows the diffusive sound peak and the higher mode is not discernible. This absence of the higher sound peak is in contrast with the measured response \cite{Hadzibabic2021} as well as the simulated DSF in Fig. \ref{Fig:dsf} and the step-pulse excitation in Fig. \ref{Fig:step}, which show both the diffusive and the higher sound mode. The reason for this discrepancy may be the decay of the oscillations shown in Fig. \ref{Fig:shaking}(b), which is not discernible in the measurements \cite{Hadzibabic2021}. In Fig. \ref{Fig:adsf}(b) we show $S(\omega)$ corresponding to $ {\tilde{A} }(\omega)$ at $T/T_0= 1.35$, which displays the diffusive sound mode centered at $\omega=0$. We fit $S(\omega)$ with the Lorentzian $f(\omega)= x_\mathrm{T} \Gamma_\mathrm{T}/(\omega^2 + \Gamma_\mathrm{T}^2)$ using $x_\mathrm{T}$ and $\Gamma_\mathrm{T}$ as the fitting parameters. The width of the diffusive mode yields the thermal diffusivity $D_\mathrm{T}= \Gamma_\mathrm{T}/k_L^2 = (7.2 \pm 0.2) \hbar/m$, which agrees with the measured $D_\mathrm{T} = (5 \pm 2) \hbar/m$ \cite{Hadzibabic2021}. Furthermore, our result for $D_\mathrm{T}$ lies above the sound diffusivities measured in strongly interacting 2D Fermi gases \cite{Bohlen2020}. \begin{figure*}[t] \includegraphics[width=1.0\linewidth]{density_step} \caption{Excitation of two sound pulses via a step-pulse perturbation of the local density. Time evolution of the density $\Delta n(y, t)$ displays the propagation of two sound modes below the transition and one sound mode above the transition. The blue (red) arrow denotes the propagation of the slow (fast) sound mode.
The attractive potential produces a density increase at the location of the perturbation, which results in an additional excitation (white pulses) after the potential is switched off. } \label{Fig:step} \end{figure*} \subsection{Step-pulse perturbation}\label{sec:step} In this section we demonstrate the method of step-pulse perturbation, which excites sound modes by locally perturbing the density. The perturbation sequence is described in Sec. \ref{sec:method} and the perturbation potential is a Gaussian, for which we use the strength $|V_0|/\mu$ in the range $0.2-1.0$ and the width $\sigma/\xi=2.9$, where $\xi=0.51\, \mu \mathrm{m}$ is the healing length. In Fig. \ref{Fig:step} we show the time evolution of the density profile $\Delta n(y, t)$. The perturbation potential was turned on for about $0.5\, \mathrm{ms}$, which excites two (fast and slow) sound pulses below the transition temperature and one sound pulse above the transition temperature. The increased density at the location of the perturbation results in an additional pulse after the perturbation is turned off. Below $T/T_0 \sim 1$, the fast and slow pulses initially overlap and eventually separate into two pulses propagating at different velocities, as shown in Fig. \ref{Fig:step}. The fast pulse corresponds to the NB mode, whereas the slow pulse represents the B mode. In the long-time evolution the fast pulse bounces off the wall of the box and continues propagating towards the center. At low temperatures, the amplitude of the B mode is higher than that of the NB mode, whereas at the higher temperature $T/T_0=0.81$, the NB mode has a higher amplitude than the B mode. This is consistent with the spectral weights of the modes in the dynamic structure factor in Fig. \ref{Fig:dsf}. Near the transition temperature $T/T_0 \sim 1$, the B mode stops propagating and turns into the diffusive sound mode.
Above the transition temperature, the time evolution primarily shows the propagation of the normal sound mode and also diffusive dynamics at the location of the perturbation. To obtain the sound velocities, we fit the density profile with one or two Gaussians to determine the locations of one or two density pulses. The time dependence of the locations gives the sound velocities, which we show in Fig. \ref{Fig:vel}. \begin{figure}[] \includegraphics[width=1.0\linewidth]{sound_step} \caption{Temperature dependence of the two sound mode velocities across the BKT transition. Results of the dynamic structure factor (blue and red squares), the driven response with $V_0/\mu=1$ (open circles connected with a dashed line) and the step-pulse perturbation (black and red crosses) are compared with the measurements of the two sound velocities (blue and red filled circles) of Ref. \cite{Hadzibabic2021}. The continuous line is the Bogoliubov estimate $v_{\mathrm{B}, T}$ based on the numerically determined superfluid density; see text. } \label{Fig:vel} \end{figure} \section{Comparison to experiments}\label{sec:comp} In Fig. \ref{Fig:vel}, we combine our simulation results of the sound velocities and compare them with the measurements \cite{Hadzibabic2021}. The temperature dependence of the two mode velocities determined from the dynamic structure factor (DSF) of the unperturbed cloud serves as a benchmark for the results based on the density response involving external perturbations. The higher-mode velocity displays a weak temperature dependence at all temperatures and no signature of a jump at the transition. On the other hand, the lower-mode velocity decreases with increasing temperature and vanishes above the transition temperature. We compare this result with the nonzero-temperature Bogoliubov estimate $v_{\mathrm{B}, T}=\sqrt{g n_s(T)/m}$, which is obtained using the numerically determined superfluid density $n_s(T)$; see, for details, Ref. \cite{SinghSS}.
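The Gaussian pulse-tracking step described above can be sketched as follows; the profiles, the pulse speed, and the `pulse_velocity` helper are illustrative stand-ins for the simulated data, and a two-pulse version would simply fit a sum of two Gaussians.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, a, y0, sig):
    """Single density pulse of amplitude a centered at y0 with width sig."""
    return a * np.exp(-(y - y0)**2 / (2 * sig**2))

def pulse_velocity(y, profiles, times, p0):
    """Fit a Gaussian to each profile; return the speed from a linear fit
    of the fitted center position versus time."""
    centers = []
    for prof in profiles:
        popt, _ = curve_fit(gaussian, y, prof, p0=p0)
        centers.append(popt[1])
        p0 = popt                      # warm start for the next frame
    return np.polyfit(times, centers, 1)[0]

# Synthetic pulse moving at 2.25 length units per time unit (placeholders)
y = np.linspace(0.0, 30.0, 300)
times = np.linspace(0.0, 4.0, 9)
profiles = [gaussian(y, 0.5, 5.0 + 2.25 * tt, 1.2) for tt in times]
v = pulse_velocity(y, profiles, times, p0=(0.4, 5.0, 1.0))
```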
$v_{\mathrm{B}, T}$ agrees with the lower-mode velocity and shows a crossover behavior at the transition, rather than a jump, which is expected for a finite-size system \cite{Pilati2008, Matt2010, sf_2019, SinghJJ}. In Fig. \ref{Fig:vel} we now compare the DSF results with the sound velocities determined using density probes in the experiments and in our simulations. Overall, the measured higher-mode velocity agrees with the DSF higher-mode velocity. However, the measured lower velocity is higher than the DSF lower velocity at $T/T_0 < 1$, except for the measurements at $T/T_0 \geq 1$, which show agreement. We note that the measured $T_c/T_0 = 1$ is determined based on the disappearance of the lower sound peak \cite{Hadzibabic2021}. We also present the simulation velocities obtained by imitating the experimental protocol described in Sec. \ref{sec:excitation}. We used the driving strength $V_0/\mu=1$, in line with the values $V_0/\mu=0.47 - 1$ used in the experiments \cite{Hadzibabic2021}. The simulated higher-mode velocity agrees with the measured higher-mode velocity and shows deviations from the DSF higher-mode velocity at low temperatures. On the other hand, the simulated lower velocity is smaller than the measured lower velocity at $T/T_0 < 0.75$ and agrees with the measurements at $T/T_0 > 0.75$. In Fig. \ref{Fig:vel} we also show the two sound velocities determined via a step-pulse perturbation in Sec. \ref{sec:step}, which agree with the results of the DSF throughout the transition for both the higher and the lower velocity. The lower-velocity results of the DSF and step-pulse perturbation are in agreement with the Bogoliubov estimate $v_{\mathrm{B}, T}$, confirming that the sound velocity is smaller than the measurements and that a smooth crossover occurs near the transition point. The deviation near the transition temperature is reproduced by the simulations of the driven response, which yield similar velocities as in the measurements.
These simulations then systematically deviate from the measurements at intermediate and low temperatures, which seems to occur due to a variation in the driving strength used in the measurements and the corresponding change in the nonlinear response. Furthermore, the measurement uncertainty of the box length and the density can also affect the magnitude of the measured sound velocities \cite{Hadzibabic2021}. \section{Conclusions}\label{sec:conclusion} We have determined and discussed the propagation of first and second sound in homogeneous 2D Bose gases across the BKT transition using classical-field simulations for the experimental parameters of Ref. \cite{Hadzibabic2021}. We have identified the two sound modes based on the dynamic structure factor, which are the Bogoliubov (B) and non-Bogoliubov (NB) sound modes below the transition and the diffusive and normal sound modes above the transition. We have excited the sound modes using the experimental method of periodic driving \cite{Hadzibabic2021} and the method of step-pulse perturbation \cite{Hoffmann2021, SinghSS}. We have determined the sound velocities from the dynamic structure factor (DSF), the driven response and the step-pulse excitation and compared them with the measurements of Ref. \cite{Hadzibabic2021}. While the sound results of the DSF and step-pulse excitation show excellent agreement, the results of the driven response show a systematic deviation from the DSF results due to the nonlinear response. If the probing strength is small, the driven response recovers the Bogoliubov mode velocity. However, for small probing strengths, the signal of the non-Bogoliubov mode vanishes. It only becomes measurable for intermediate probing strengths. At these probing strengths, the nonlinear character of the probe influences the frequencies of the measured sound modes. Therefore, this approach gives only approximate values of the sound velocities.
Overall, the simulated higher-mode velocity is above the B-mode velocity and displays a weak temperature dependence across the transition, which is in agreement with the corresponding measurements of the higher-mode velocity. On the other hand, the measured lower-mode velocity is below the simulated velocities of the DSF and step-pulse excitation but agrees with the results of the simulated density response at high temperatures across the transition. Our results give insight into the temperature dependence of the two sound modes of dilute 2D Bose gases, and the signature of these modes in the measurement techniques of Refs. \cite{Hadzibabic2021} and \cite{Hoffmann2021}. Our results largely reproduce the measurement results of \cite{Hadzibabic2021}, while demonstrating that the strong-driving results are subject to driving-induced frequency shifts. These shifts might obscure the measurements of these frequencies. Furthermore, generating a signal for the non-Bogoliubov mode requires strong driving, making this probe technique more suitable for a qualitative investigation of the modes, in contrast to the step-pulse technique of \cite{Hoffmann2021}. For increasing interactions or densities, these modes undergo hybridization and a crossover to the strong-coupling regime occurs \cite{SinghSS, SinghSF}, which warrants further experimental investigation. The two sound modes and their coupling can affect the dynamics, such as the propagation of deterministic vortex colliders \cite{Kwon2021}. Our results enable the further study of these phenomena, as they provide an in-depth insight into the key probing techniques of the field. \section*{Acknowledgements} We thank Panagiotis Christodoulou for insightful discussions.
This work is supported by the Deutsche Forschungsgemeinschaft (DFG) in the framework of SFB 925 -- project ID 170620586, the excellence cluster `Advanced Imaging of Matter' -- EXC 2056 -- project ID 390715994, and the Cluster of Excellence `QuantumFrontiers' -- EXC 2123 -- project ID 390837967.
\section{Introduction} In the past several years, significant strides have been made in scaling up plan synthesis techniques. We now have technology to routinely generate plans with hundreds of actions. All this work, however, makes a crucial assumption--that a complete model of the domain is specified in advance. While there are domains where knowledge-engineering such detailed models is necessary and feasible (e.g., mission planning domains in NASA and factory-floor planning), it is increasingly recognized (cf. \cite{hoffmann2010sap,rao07}) that there are also many scenarios where insistence on correct and complete models renders the current planning technology unusable. What we need to handle such cases is a planning technology that can get by with partially specified domain models, and yet generate plans that are ``robust'' in the sense that they are likely to execute successfully in the real world. This paper addresses the problem of formalizing the notion of plan robustness with respect to an incomplete domain model, and connects the problem of generating a robust plan under such a model to \emph{conformant probabilistic planning}~\cite{kushmerick1995algorithm,hyafil2003conformant,bryce2006sequential,prob-ff}. Following Garland \& Lesh~\shortcite{garland2002plan}, we shall assume that although the domain modelers cannot provide complete models, often they are able to provide annotations on the partial model circumscribing the places where it is incomplete. In our framework, these annotations consist of allowing actions to have \emph{possible} preconditions and effects (in addition to the standard necessary preconditions and effects). As an example, consider a variation of the \emph{Gripper} domain, a well-known planning benchmark domain. The robot has one gripper that can be used to pick up balls, which are of two types, light and heavy, from one room and move them to another room.
The modeler suspects that the gripper may have an internal problem, but this cannot be confirmed until the robot actually executes the plan. If it actually has the problem, the execution of the \emph{pick-up} action succeeds only with balls that are \emph{not} heavy, but if it has no problem, it can always pick up all types of balls. The modeler can express this partial knowledge about the domain by annotating the action with a statement representing the possible precondition that balls should be light. Incomplete domain models with such possible preconditions/effects implicitly define an exponentially large set of complete domain models, with the semantics that the real domain model is guaranteed to be one of these. The robustness of a plan can now be formalized in terms of the cumulative probability mass of the complete domain models under which it succeeds. We propose an approach that compiles the problem of finding robust plans into the conformant probabilistic planning problem. We present experimental results showing scenarios where the approach works well, and also discuss aspects of the compilation that cause scalability issues. \section{Related Work} Although there has been some work on reducing the ``faults'' in plan execution (e.g. the work on \emph{k-fault} plans for non-deterministic planning~\cite{jensen2004fault}), it is set in the context of stochastic/non-deterministic actions rather than incompletely specified ones. The semantics of the possible preconditions/effects in our incomplete domain models differ fundamentally from non-deterministic and stochastic effects. Executing different instances of the same pick-up action in the \emph{Gripper} example above would either all fail or all succeed, since there is no randomness in the outcome; the information is simply unknown at the time the model is built. In contrast, if the pick-up action's effects are stochastic, then trying the same picking action multiple times increases the chances of success.
Garland \& Lesh~\shortcite{garland2002plan} share our objective of generating robust plans under incomplete domain models. However, their notion of robustness, which is defined in terms of four different types of risks, only has tenuous heuristic connections with the likelihood of successful execution of plans. Robertson \& Bryce~\shortcite{robertson09} focus on plan generation in the Garland \& Lesh model, but their approach still relies on the same unsatisfactory formulation of robustness. The work by Fox et al.\ (\citeyear{fox05}) also explores robustness of plans, but their focus is on temporal plans under unforeseen execution-time variations rather than on incompletely specified domains. Our work can also be categorized as one particular instance of the general model-lite planning problem, as defined in \cite{rao07}, in which the author points out a large class of applications where handling incomplete models is unavoidable due to the difficulty in getting a complete model. \section{Problem Formulation} We define an \emph{incomplete domain model} $\mathcal{\widetilde{D}}$ as $\mathcal{\widetilde{D}} = \langle F, A \rangle$, where $F=\{p_1, p_2, ..., p_n\}$ is a set of \emph{propositions} and $A$ is a set of \emph{actions} that might be incompletely specified. We denote $\mathbf{T}$ and $\mathbf{F}$ as the \emph{true} and \emph{false} truth values of propositions. A \emph{state} $s \subseteq F$ is a set of propositions. In addition to proposition sets that are known as its preconditions $Pre(a) \subseteq F$, add effects $Add(a) \subseteq F$ and delete effects $Del(a) \subseteq F$, each action $a \in A$ also contains: \begin{itemize} \item Possible precondition set $\widetilde{Pre}(a) \subseteq F$ contains propositions that action $a$ \emph{might} need as its preconditions.
\item Possible add (delete) effect set $\widetilde{Add}(a) \subseteq F$ ($\widetilde{Del}(a) \subseteq F$) contains propositions that the action $a$ \emph{might} add (delete, respectively) after its execution. \end{itemize} In addition, each possible precondition, add and delete effect $p$ of the action $a$ is associated with a weight $w^{pre}_a(p)$, $w^{add}_a(p)$ and $w^{del}_a(p)$ ($0 < w^{pre}_a(p), w^{add}_a(p), w^{del}_a(p) < 1$) representing the domain modeler's assessment of the likelihood that $p$ will actually be \emph{realized} as a precondition, add and delete effect of $a$ (respectively) during plan execution. Possible preconditions and effects whose likelihood of realization is not given are assumed to have weights of $\frac{1}{2}$. Given an incomplete domain model $\mathcal{\widetilde{D}}$, we define its {\em completion set} $\langle\!\langle \calDP \rangle\!\rangle$ as the set of {\em complete} domain models whose actions have all the necessary preconditions, adds and deletes, and a {\em subset} of the possible preconditions, possible adds and possible deletes. Since any subset of $\widetilde{Pre}(a)$, $\widetilde{Add}(a)$ and $\widetilde{Del}(a)$ can be realized as preconditions and effects of action $a$, there is an exponentially large number of possible \emph{complete} domain models $\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle = \{\mathcal D_1, \mathcal D_2, ..., \mathcal D_{2^K}\}$, where $K = \sum_{a \in A} (|\widetilde{Pre}(a)| + |\widetilde{Add}(a)| + |\widetilde{Del}(a)|)$. For each complete model $\mathcal D_i$, we denote the corresponding sets of realized preconditions and effects for each action $a$ as $\overline{Pre}_i(a)$, $\overline{Add}_i(a)$ and $\overline{Del}_i(a)$; equivalently, its complete sets of preconditions and effects are $Pre(a) \cup \overline{Pre}_i(a)$, $Add(a) \cup \overline{Add}_i(a)$ and $Del(a) \cup \overline{Del}_i(a)$.
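To make the combinatorics concrete, the following sketch enumerates the $2^K$ realizations of a single annotated action; the `IncompleteAction` container and the Gripper-style proposition names are illustrative, not part of the paper's formalism.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class IncompleteAction:
    """Known (pre, add, dele) plus the annotated possible parts."""
    pre: frozenset
    add: frozenset
    dele: frozenset
    poss_pre: frozenset = frozenset()   # \widetilde{Pre}(a)
    poss_add: frozenset = frozenset()   # \widetilde{Add}(a)
    poss_del: frozenset = frozenset()   # \widetilde{Del}(a)

def subsets(s):
    """All subsets of a finite set, as frozensets."""
    s = sorted(s)
    for r in range(len(s) + 1):
        for c in combinations(s, r):
            yield frozenset(c)

def completions(a):
    """All realized (pre, add, dele) triples of one incomplete action."""
    for pp in subsets(a.poss_pre):
        for pa in subsets(a.poss_add):
            for pd in subsets(a.poss_del):
                yield (a.pre | pp, a.add | pa, a.dele | pd)

# Gripper-style pick-up with one possible precondition: K = 1, so 2 completions
pickup = IncompleteAction(pre=frozenset({'free'}),
                          add=frozenset({'holding'}),
                          dele=frozenset({'free'}),
                          poss_pre=frozenset({'light'}))
models = list(completions(pickup))
```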
The projection of a sequence of actions $\pi$ from an initial state $I$ according to an incomplete domain model $\mathcal{\widetilde{D}}$ is defined in terms of the projections of $\pi$ from $I$ according to each complete domain model $\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle$: \begin{equation} \gamma( \pi, I , \mathcal{\widetilde{D}}) = \bigcup_{\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle} \gamma( \pi , I, \mathcal D_i ) \label{projection} \end{equation} \noindent where the projection over complete models is defined in the usual STRIPS way, with one important difference. The result of applying an action $a$ in a state $s$ where the preconditions of $a$ are not satisfied is taken to be $s$ (rather than as an undefined state).\footnote{We shall see that this change is necessary so that we can talk about increasing the robustness of a plan by adding additional actions.} A \emph{planning problem with an incomplete domain} is $\mathcal{\widetilde{P}} = \langle \mathcal{\widetilde{D}} ,I,G \rangle$, where $I \subseteq F$ is the set of propositions that are true in the \emph{initial state} and $G$ is the set of \emph{goal propositions}. An action sequence $\pi$ is considered a {\bf valid plan} for $\mathcal{\widetilde{P}}$ if $\pi$ solves the problem in at least one completion of $\langle\!\langle \calDP \rangle\!\rangle$. Specifically, $\exists_{\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle} \gamma(\pi , I , \mathcal D_i) \models G$. \medskip \noindent {\bf Modeling Issues in Annotating Incompleteness}: From the modeling point of view, the possible precondition and effect sets can be modeled at either the grounded action or action schema level (and thus applicable to all grounded actions sharing the same action schema). From a practical point of view, however, incompleteness annotations at the ground level hugely increase the burden on the domain modeler.
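The projection semantics, including the convention that an action with unsatisfied preconditions acts as a no-op, can be sketched as follows (proposition names are placeholders):

```python
def apply_action(state, action):
    """One STRIPS step with the convention of the text: if the realized
    preconditions do not hold in s, the result is s itself (a no-op),
    not an undefined state."""
    pre, add, dele = action
    if not pre <= state:
        return state
    return (state - dele) | add

def project(plan, init):
    """gamma(pi, I, D_i) for one complete model D_i; actions are given as
    already-realized (pre, add, del) triples of frozensets."""
    s = frozenset(init)
    for action in plan:
        s = apply_action(s, action)
    return s

# An action whose precondition fails simply leaves the state unchanged,
# so subsequent actions may still fire (placeholder propositions)
move = (frozenset({'at1'}), frozenset({'at2'}), frozenset({'at1'}))
bad  = (frozenset({'missing'}), frozenset({'x'}), frozenset())
final = project([bad, move], {'at1'})
```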
To offer a flexible way of modeling domain incompleteness, we allow annotations that are restricted to either specific variables or value assignments to variables of an action schema. In particular: \begin{itemize} \item \textit{Restriction on value assignment to variables}: Given variables $x_i$ with domains $X_i$, one can indicate that $p(x_{i_1},...,x_{i_k})$ is a possible precondition/effect of an action schema $a(x_1,...,x_n)$ when some variables $x_{j_1}, ..., x_{j_l}$ have values $c_{j_1} \in X_{j_1},..., c_{j_l} \in X_{j_l}$ ($\{i_1,...,i_k\},\{j_1,...,j_l\} \subseteq \{1,...,n\}$). Those possible preconditions/effects can be specified with the annotation $p(x_{i_1},...,x_{i_k}) \,\, :when \,\, (x_{j_1}=c_{j_1} \wedge ... \wedge x_{j_l} = c_{j_l})$ for the action schema $a(x_1,...,x_n)$. More generally, we allow the domain writer to express a constraint $C$ on the variables $x_{j_1}, ..., x_{j_l}$ in the $:when$ construct. The annotation $p(x_{i_1},...,x_{i_k}) \,\, :when \,\,(C)$ means that $p(c_{i_1},...,c_{i_k})$ is a possible precondition/effect of an instantiated action $a(c_1,...,c_n)$ ($c_i \in X_i$) if and only if the assignment $(x_{j_1}:=c_{j_1}, ..., x_{j_l}:=c_{j_l})$ satisfies the constraint $C$. This syntax subsumes both the annotations at the ground level when $l=n$, and at the schema level if $l=0$ (or the $:when$ construct is not specified). \item \textit{Restriction on variables}: Instead of constraints on explicit values of variables, we also allow the possible preconditions/effects $p(x_{i_1},...,x_{i_k})$ of an action schema $a(x_1,...,x_n)$ to depend on some specific variables $x_{j_1} ,..., x_{j_l}$ \emph{without any knowledge of their restricted values}. This annotation requires less knowledge of the domain incompleteness from the domain writer.
Semantically, the possible precondition/effect $p(x_{i_1},...,x_{i_k}) \,\, :depends \,\, (x_{j_1},..., x_{j_l})$ of an action schema $a(x_1,...,x_n)$ means that (1) there is at least one instantiated action $a(c_1,...,c_n)$ ($c_i \in X_i$) having $p(c_{i_1},...,c_{i_k})$ as its precondition, and (2) for any two assignments $(x_1:=c_1,...,x_n:=c_n), (x_1:=c'_1,...,x_n:=c'_n)$ such that $c_{j_t} = c'_{j_t}$ ($1 \leq t \leq l$), either both $p(c_{i_1},...,c_{i_k})$ and $p(c'_{i_1},...,c'_{i_k})$ are preconditions of the corresponding actions, or neither is. Similar to the $:when$ construct above, the $:depends$ construct also subsumes the annotations at the ground level when $l=n$, and at the schema level when $l=0$ (or the $:depends$ field is not specified). \end{itemize} Another interesting modeling issue is the correlation among possible preconditions and effects across actions. In particular, the domain writer might want to say that two actions (or action schemas) will have specific possible preconditions and effects in tandem. For example, we might say that the second action will have a particular possible precondition whenever the first one has a particular possible effect. We note that annotations at the lifted level introduce correlations among possible preconditions and effects at the ground level. Although our notion of plan robustness and approach to generating robust plans (see below) can be adapted to allow such flexible annotations and correlated incompleteness, for ease of exposition we limit our discussion to \emph{uncorrelated} possible precondition and effect annotations specified at the \emph{schema} level (i.e., without using the $:when$ and $:depends$ constructs). \section{A Robustness Measure for Plans} Given an incomplete domain planning problem $\mathcal{\widetilde{P}} = \langle \mathcal{\widetilde{D}} ,I,G \rangle$, a valid plan (by our definition above) need only succeed in at least one completion of $\mathcal{\widetilde{D}}$.
Given that $\langle\!\langle \calDP \rangle\!\rangle$ can be exponentially large in the number of possible preconditions and effects, validity is too weak a guarantee of plan quality. What we need is a notion that $\pi$ succeeds in most of the highly likely completions of $\mathcal{\widetilde{D}}$. We capture this with a robustness measure. The robustness of a plan $\pi$ for the problem $\mathcal{\widetilde{P}}$ is defined as the cumulative probability mass of the completions of $\mathcal{\widetilde{D}}$ under which $\pi$ succeeds (in achieving the goals). More formally, let ${\bf Pr}(\mathcal D_i)$ be the probability distribution representing the modeler's estimate of the probability that a given model in $\langle\!\langle \calDP \rangle\!\rangle$ is the real model of the world (such that $\sum_{\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle} {\bf Pr}(\mathcal D_i) = 1$). The robustness of $\pi$ is defined as follows: \begin{equation} \label{eqn:robust-def} R(\pi, \mathcal{\widetilde{P}}: \langle \mathcal{\widetilde{D}} ,I,G \rangle ) \stackrel{def}{\equiv} \sum_{\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle , \gamma(\pi , I , \mathcal D_i) \models G }{\bf Pr}(\mathcal D_i) \end{equation} It is easy to see that if $R(\pi, \mathcal{\widetilde{P}} ) > 0$, then $\pi$ is a valid plan for $\mathcal{\widetilde{P}}$. Note that given the uncorrelated incompleteness assumption, the probability ${\bf Pr}(\mathcal D_i)$ for a model $\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle$ can be computed as the product of the weights $w^{pre}_a(p)$, $w^{add}_a(p)$, and $w^{del}_a(p)$ for all $a \in A$ and its possible preconditions/effects $p$ if $p$ \emph{is} realized in the model (or the product of their ``complements'' $1-w^{pre}_a(p)$, $1-w^{add}_a(p)$, and $1-w^{del}_a(p)$ if $p$ is \emph{not} realized).
\begin{figure}[t] \centering \epsfig{file=realization.eps,width=3.3in} \caption{An example of different complete domain models, and the corresponding plan status. Circles with solid and dashed boundaries are propositions that are known to be $\mathbf{T}$ and may be $\mathbf{F}$ (respectively) when the plan executes. (See text.) } \vspace{-.05in} \label{fig:realization} \end{figure} \medskip \noindent {\bf Example:} Figure~\ref{fig:realization} shows an example with an incomplete domain model $\mathcal{\widetilde{D}} = \langle F, A \rangle$ with $F=\{p_1,p_2,p_3\}$ and $A=\{a_1,a_2\}$, and a solution plan $\pi=(a_1,a_2)$ for the problem $\mathcal{\widetilde{P}}= \langle \mathcal{\widetilde{D}},I=\{p_2\}, G=\{p_3\} \rangle$. The incomplete model is: $Pre(a_1) = \emptyset$, $\widetilde{Pre}(a_1) = \{p_1\}$, $Add(a_1) = \{p_2,p_3\}$, $\widetilde{Add}(a_1)=\emptyset$, $Del(a_1)=\emptyset$, $\widetilde{Del}(a_1)=\emptyset$; $Pre(a_2) = \{p_2\}$, $\widetilde{Pre}(a_2) = \emptyset$, $Add(a_2) = \emptyset$, $\widetilde{Add}(a_2)=\{p_3\}$, $Del(a_2)= \emptyset$, $\widetilde{Del}(a_2)=\{p_1\}$. Given that the total number of possible preconditions and effects is 3, the total number of completions ($| \langle\!\langle \calDP \rangle\!\rangle |$) is $2^3 = 8$, for each of which the plan $\pi$ may succeed or fail to achieve $G$, as shown in the table. The robustness value of the plan is $R(\pi) = \frac{3}{4}$ if ${\bf Pr}(\mathcal D_i)$ is the uniform distribution. However, if the domain writer thinks that $p_1$ is very likely to be a precondition of $a_1$ and provides $w^{pre}_{a_1}(p_1) = 0.9$, the robustness of $\pi$ decreases to $R(\pi) = 2 \times (0.9 \times 0.5 \times 0.5) + 4 \times (0.1 \times 0.5 \times 0.5) = 0.55$ (as, intuitively, the last four models with which $\pi$ succeeds are very unlikely to be the real one).
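The numbers in this example can be reproduced by brute-force enumeration of the $2^3$ completions, following Equation~\ref{eqn:robust-def}. The sketch below hard-codes the two actions of the running example (with the failed-action-as-no-op semantics); the variable names and the weight dictionary are ours, not part of any planner.

```python
from itertools import product

def robustness(w):
    """Robustness of the plan (a1, a2) for I = {p2}, G = {p3}:
    enumerate all realizations of the possible precondition p1 of a1,
    possible add p3 of a2, and possible delete p1 of a2, and sum the
    probabilities of the completions under which the plan succeeds."""
    total = 0.0
    for x, y, z in product([True, False], repeat=3):
        # probability of this completion (uncorrelated annotations)
        pr = ((w['pre_a1_p1'] if x else 1 - w['pre_a1_p1'])
              * (w['add_a2_p3'] if y else 1 - w['add_a2_p3'])
              * (w['del_a2_p1'] if z else 1 - w['del_a2_p1']))
        state = {'p2'}                      # initial state I
        # a1: Pre = {p1} iff realized (x); Add = {p2, p3}; no-op on failure
        if not x or 'p1' in state:
            state |= {'p2', 'p3'}
        # a2: Pre = {p2}; possible Add = {p3} (y); possible Del = {p1} (z)
        if 'p2' in state:
            if y:
                state |= {'p3'}
            if z:
                state -= {'p1'}
        if 'p3' in state:                   # goal G achieved
            total += pr
    return total

uniform = {'pre_a1_p1': 0.5, 'add_a2_p3': 0.5, 'del_a2_p1': 0.5}
skewed = {'pre_a1_p1': 0.9, 'add_a2_p3': 0.5, 'del_a2_p1': 0.5}
```

With the uniform weights this yields $3/4$, and with $w^{pre}_{a_1}(p_1)=0.9$ it yields $0.55$, matching the values above.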
Note that under the STRIPS model, where action failure causes plan failure, the plan $\pi$ would be considered to fail to achieve $G$ in the first two complete models, since $a_2$ is prevented from executing. \subsection{A Spectrum of Robust Planning Problems} Given this setup, we can now talk about a spectrum of problems related to planning under incomplete domain models: \begin{description} \item[Robustness Assessment (RA):] Given a plan $\pi$ for the problem $\mathcal{\widetilde{P}}$, assess the robustness of $\pi$. \item[Maximally Robust Plan Generation (RG$^*$):] Given a problem $\mathcal{\widetilde{P}}$, generate the maximally robust plan $\pi^*$. \item[Generating Plans with a Desired Level of Robustness (RG$^\rho$):] Given a problem $\mathcal{\widetilde{P}}$ and a robustness threshold $\rho$ ($0 < \rho \leq 1$), generate a plan $\pi$ with robustness greater than or equal to $\rho$. \item[Cost-sensitive Robust Plan Generation (RG$^*_c$):] Given a problem $\mathcal{\widetilde{P}}$ and a cost bound $c$, generate a plan $\pi$ of maximal robustness subject to the cost bound $c$ (where the cost of a plan $\pi$ is defined as the cumulative cost of the actions in $\pi$). \item[Incremental Robustification (RI$_c$):] Given a plan $\pi$ for the problem $\mathcal{\widetilde{P}}$, improve the robustness of $\pi$, subject to a cost budget $c$. \end{description} The problem of assessing the robustness of plans, RA, can be tackled by compiling it into a weighted model-counting problem. For plan synthesis problems, we can talk about either generating a maximally robust plan, RG$^*$, or finding a plan with a robustness value above a given threshold, RG$^\rho$. A related issue is the interaction between plan cost and robustness. Often, increasing robustness involves using additional (or costlier) actions to support the desired goals, and thus comes at the expense of increased plan cost. We can also talk about the cost-constrained robust plan generation problem RG$^*_c$.
Finally, in practice, we are often interested in increasing the robustness of a given plan (either during iterative search, or during mixed-initiative planning). We thus also have the incremental variant RI$_c$. In this paper, we will focus on RG$^\rho$, the problem of synthesizing a plan with robustness at least $\rho$. \section{Compilation to Conformant Probabilistic Planning} In this section, we show that the problem of generating a plan with robustness at least $\rho$, RG$^\rho$, can be compiled into an equivalent conformant probabilistic planning problem. The most robust plan can then be found with a sequence of increasing threshold values. \subsection{Conformant Probabilistic Planning} Following the formalism in \cite{prob-ff}, a domain in conformant probabilistic planning (CPP) is a tuple $\mathcal D'= \langle F', A' \rangle$, where $F'$ and $A'$ are the sets of propositions and probabilistic actions, respectively. A belief state $b: 2^{F'} \rightarrow [0,1]$ is a distribution over states $s \subseteq F'$ (we write $s \in b$ if $b(s) > 0$). Each action $a' \in A'$ is specified by a set of preconditions $Pre(a') \subseteq F'$ and conditional effects $E(a')$. For each $e=(cons(e),\mathcal{O}(e)) \in E(a')$, $cons(e) \subseteq F'$ is the condition set and $\mathcal{O}(e)$ determines the set of outcomes $\varepsilon=(Pr(\varepsilon),add(\varepsilon),del(\varepsilon))$, each of which adds and deletes the proposition sets $add(\varepsilon)$ and $del(\varepsilon)$ in the resulting state with probability $Pr(\varepsilon)$ ($0 \leq Pr(\varepsilon) \leq 1$, $\sum_{\varepsilon \in \mathcal{O}(e)} Pr(\varepsilon) = 1$). All condition sets of the effects in $E(a')$ are assumed to be mutually exclusive and exhaustive.
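Applicability and belief-state progression for such probabilistic actions (made precise in the next paragraph) can be sketched as follows, with a belief state represented as a dictionary from frozen states to probabilities. This is our illustrative reconstruction of the CPP semantics, not code from Probabilistic-FF.

```python
def apply_cpp_action(belief, pre, effects):
    """Progress a belief state {state: prob} through a probabilistic
    action. `effects` is a list of (cons, outcomes) pairs with mutually
    exclusive and exhaustive condition sets; each outcome is a
    (prob, add, delete) triple of a probability and frozensets."""
    # applicability: Pre(a') must hold in every state of the belief
    assert all(pre <= s for s in belief), "action not applicable"
    new_belief = {}
    for s, p in belief.items():
        # exactly one condition set matches, by exclusivity/exhaustiveness
        cons, outcomes = next((c, o) for c, o in effects if c <= s)
        for pr, add, delete in outcomes:
            s2 = (s - delete) | add
            new_belief[s2] = new_belief.get(s2, 0.0) + p * pr
    return new_belief
```

The probabilities of states that coincide after the update are summed, mirroring the definition of $b_{a'}(s')$.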
The action $a'$ is applicable in a belief state $b$ if $Pre(a') \subseteq s$ for all $s \in b$, and the probability of a state $s'$ in the resulting belief state is $b_{a'}(s') = \sum_{s \supseteq Pre(a')} b(s) \sum_{\varepsilon \in \mathcal{O}'(e)} Pr(\varepsilon)$, where $e \in E(a')$ is the conditional effect such that $cons(e) \subseteq s$, and $\mathcal{O}'(e) \subseteq \mathcal{O}(e)$ is the set of outcomes $\varepsilon$ such that $s' = s \cup add(\varepsilon) \setminus del(\varepsilon)$. Given the domain $\mathcal D'$, a problem $\mathcal{P}'$ is a quadruple $\mathcal{P}' = \langle \mathcal D',b_I,G',\rho' \rangle$, where $b_I$ is an initial belief state, $G'$ is a set of goal propositions and $\rho'$ is the acceptable goal satisfaction probability. A sequence of actions $\pi'=(a_1',..., a_n')$ is a solution plan for $\mathcal{P}'$ if $a_i'$ is applicable in the belief state $b_i$ (assuming $b_1 \equiv b_I$), which results in $b_{i+1}$ ($1 \leq i \leq n$), and it achieves all goal propositions with at least $\rho'$ probability. \subsection{Compilation} Given an incomplete domain model $\mathcal{\widetilde{D}}=\langle F,A \rangle$ and a planning problem $\mathcal{\widetilde{P}}=\langle \mathcal{\widetilde{D}},I,G \rangle$, we now describe a compilation that translates the problem of synthesizing a solution plan $\pi$ for $\mathcal{\widetilde{P}}$ such that $R(\pi,\mathcal{\widetilde{P}}) \geq \rho$ to a CPP problem $\mathcal{P}'$. At a high level, the realization of possible preconditions $p \in \widetilde{Pre}(a)$ and effects $q \in \widetilde{Add}(a)$, $r \in \widetilde{Del}(a)$ of an action $a \in A$ can be understood as being determined by the truth values of \emph{hidden} propositions $p_a^{pre}$, $q_a^{add}$ and $r_a^{del}$ that are certain (i.e. unchanged in any world state) but unknown. Specifically, the applicability of the action in a state $s \subseteq F$ depends on possible preconditions $p$ that are realized (i.e. 
$p_a^{pre} = \mathbf{T}$), and their truth values in $s$. Similarly, the values of $q$ and $r$ are affected by $a$ in the resulting state only if they are realized as add and delete effects of the action (i.e., $q_a^{add} = \mathbf{T}$, $r_a^{del} = \mathbf{T}$). In total, there are $2^{|\widetilde{Pre}(a)|+|\widetilde{Add}(a)|+|\widetilde{Del}(a)|}$ realizations of the action $a$, and all of them must be considered simultaneously in checking the applicability of the action and in defining the corresponding resulting states. With these observations, we use multiple conditional effects to compile away the incomplete knowledge about the preconditions and effects of the action $a$. Each conditional effect corresponds to one realization of the action, and can fire only if $p = \mathbf{T}$ whenever $p_a^{pre} = \mathbf{T}$; it adds (removes) an effect $q$ ($r$) to (from) the resulting state depending on the value of $q_a^{add}$ ($r_a^{del}$, respectively) in the realization. While the partial knowledge can thus be removed, the hidden propositions introduce uncertainty into the initial state, thereby making it a \emph{belief} state. Since the action $a$ may be applicable in some but rarely all states of a belief state, the \emph{certain} preconditions $Pre(a)$ must be modeled as conditions of all conditional effects. We are now ready to formally specify the resulting domain $\mathcal D'$ and problem $\mathcal{P}'$.
For each action $a \in A$, we introduce new propositions $p_a^{pre}$, $q_a^{add}$, $r_a^{del}$ and their negations $np_a^{pre}$, $nq_a^{add}$, $nr_a^{del}$ for each $p \in \widetilde{Pre}(a)$, $q \in \widetilde{Add}(a)$ and $r \in \widetilde{Del}(a)$ to determine whether they are realized as preconditions and effects of $a$ in the real domain.\footnote{These propositions are introduced once, and re-used for all actions sharing the same schema with $a$.} Let $F_{new}$ be the set of those new propositions, then $F' = F \cup F_{new}$ is the proposition set of $\mathcal D'$. Each action $a' \in A'$ is made from one action $a \in A$ such that $Pre(a') = \emptyset$, and $E(a')$ consists of $2^{|\widetilde{Pre}(a)|+|\widetilde{Add}(a)|+|\widetilde{Del}(a)|}$ conditional effects $e$. For each conditional effect $e$: \begin{itemize} \item $cons(e)$ is the union of the following sets: \begin{itemize} \item the certain preconditions $Pre(a)$, \item the set of possible preconditions of $a$ that are realized, and hidden propositions representing their realization: $\overline{Pre}(a) \cup \{ p_a^{pre} | p \in \overline{Pre}(a) \} \cup \{np_a^{pre} | p \in \widetilde{Pre}(a) \setminus \overline{Pre}(a) \}$, \item the set of hidden propositions corresponding to the realization of possible add (delete) effects of $a$: $\{ q_a^{add} | q \in \overline{Add}(a) \} \cup \{ nq_a^{add} | q \in \widetilde{Add}(a) \setminus \overline{Add}(a) \}$ ($\{ r_a^{del} | r \in \overline{Del}(a) \} \cup \{ nr_a^{del} | r \in \widetilde{Del}(a) \setminus \overline{Del}(a) \}$, respectively); \end{itemize} \item the \emph{single} outcome $\varepsilon$ of $e$ is defined as $add(\varepsilon) = Add(a) \cup \overline{Add}(a)$, $del(\varepsilon) = Del(a) \cup \overline{Del}(a)$, and $Pr(\varepsilon) = 1$, \end{itemize} \noindent where $\overline{Pre}(a) \subseteq \widetilde{Pre}(a)$, $\overline{Add}(a) \subseteq \widetilde{Add}(a)$ and $\overline{Del}(a) \subseteq \widetilde{Del}(a)$ represent the sets of 
realized preconditions and effects of the action. In other words, we create one conditional effect for each subset of the union of the possible precondition and effect sets of the action $a$. Note that the inclusion of the new propositions derived from $\overline{Pre}(a)$, $\overline{Add}(a)$, $\overline{Del}(a)$ and their ``complement'' sets $\widetilde{Pre}(a) \setminus \overline{Pre}(a)$, $\widetilde{Add}(a) \setminus \overline{Add}(a)$, $\widetilde{Del}(a) \setminus \overline{Del}(a)$ makes all condition sets of the action $a'$ mutually exclusive. In all other cases (including those in which some precondition in $Pre(a)$ is not satisfied), the action has no effect on the resulting state, so they can be ignored; the condition sets, therefore, are also exhaustive. The initial belief state $b_I$ consists of $2^{|F_{new}|}$ states $s' \subseteq F'$ such that $p \in s'$ iff $p \in I$ ($\forall p \in F$), each representing a complete domain model $\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle$ and having probability ${\bf Pr}(\mathcal D_i)$. The goal is $G' = G$, and the acceptable goal satisfaction probability is $\rho' = \rho$. \begin{mythem} Let $\pi = (a_1,..., a_n)$ be a plan for the problem $\mathcal{\widetilde{P}}$, and let $\pi' = (a_1', ..., a_n')$ where $a_k'$ is the compiled version of $a_k$ ($1 \leq k \leq n$) in $\mathcal{P}'$. Then $R(\pi,\mathcal{\widetilde{P}}) \geq \rho$ iff $\pi'$ achieves all goals with at least $\rho$ probability in $\mathcal{P}'$. \end{mythem} \begin{proof}[Proof (sketch)] By the compilation, there is a one-to-one mapping between each complete model $\mathcal D_i \in \langle\!\langle \calDP \rangle\!\rangle$ of $\mathcal{\widetilde{P}}$ and a (complete) state $s_{i0}' \in b_I$ of $\mathcal{P}'$. Moreover, if $\mathcal D_i$ has probability ${\bf Pr}(\mathcal D_i)$ of being the real model, then $s_{i0}'$ also has probability ${\bf Pr}(\mathcal D_i)$ in the belief state $b_I$ of $\mathcal{P}'$.
Given our projection over a complete model $\mathcal D_i$, executing $\pi$ from the state $I$ with respect to $\mathcal D_i$ results in a sequence of complete states $(s_{i1}, ..., s_{i(n+1)})$. On the other hand, executing $\pi'$ from $\{s_{i0}'\}$ in $\mathcal{P}'$ results in a sequence of belief states $(\{s_{i1}'\}, ..., \{s_{i(n+1)}'\})$. Noting that $p \in s_{i0}'$ iff $p \in I$ ($\forall p \in F$), it can be shown by induction that $p \in s_{ij}'$ iff $p \in s_{ij}$ ($\forall j \in \{1,...,n+1\}, p \in F$). Therefore, $s_{i(n+1)} \models G$ iff $s_{i(n+1)}' \models G = G'$. Since all actions $a_i'$ are deterministic and $s_{i0}'$ has probability ${\bf Pr}(\mathcal D_i)$ in the belief state $b_I$ of $\mathcal{P}'$, the probability that $\pi'$ achieves $G'$ is $\sum_{s_{i(n+1)}' \models G} {\bf Pr}(\mathcal D_i)$, which is equal to $R(\pi,\mathcal{\widetilde{P}})$ as defined in Equation~\ref{eqn:robust-def}. This proves the theorem. \end{proof} \begin{figure}[t] \centering \epsfig{file=compilation-example.eps,width=3.3in} \caption{An example of compiling the action \emph{pick-up} in an incomplete domain model (top) into the CPP domain (bottom). The hidden propositions $p_{pick-up}^{pre}$, $q_{pick-up}^{add}$ and their negations can be interpreted as whether the action requires light balls and makes balls dirty. Newly introduced and relevant propositions are marked in bold.} \label{fig:compilation-example} \end{figure} \medskip \noindent {\bf Example:} Consider the action \emph{pick-up(?b - ball,?r - room)} in the Gripper domain as described above. In addition to the possible precondition \emph{(light ?b)} on the weight of the ball \emph{?b}, we also assume that, since the modeler is unsure whether the gripper has been cleaned, she models this with a possible add effect \emph{(dirty ?b)} indicating that the action might make the ball dirty. Figure~\ref{fig:compilation-example} shows both the original and the compiled specification of the action.
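The construction of $E(a')$ above is just an enumeration over subsets of the possible precondition and effect sets, and can be sketched as follows. The tuple encoding of hidden propositions (`('pre', a, p)` for $p_a^{pre}$, `('npre', a, p)` for $np_a^{pre}$, and so on) is our own illustrative choice, not the paper's encoding.

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def compile_action(name, pre, add, delete, ppre, padd, pdel):
    """Compile an incomplete action (certain Pre/Add/Del plus possible
    sets ppre/padd/pdel) into the conditional effects of a
    precondition-free CPP action: one effect per realization."""
    effects = []
    for rp in powerset(ppre):
        for ra in powerset(padd):
            for rd in powerset(pdel):
                cons = set(pre) | set(rp)   # certain + realized preconditions
                cons |= {('pre', name, p) for p in rp}
                cons |= {('npre', name, p) for p in ppre if p not in rp}
                cons |= {('add', name, q) for q in ra}
                cons |= {('nadd', name, q) for q in padd if q not in ra}
                cons |= {('del', name, r) for r in rd}
                cons |= {('ndel', name, r) for r in pdel if r not in rd}
                # single outcome with probability 1
                outcome = (1.0, set(add) | set(ra), set(delete) | set(rd))
                effects.append((frozenset(cons), [outcome]))
    return effects
```

For the action $a_1$ of the earlier example (one possible precondition $p_1$), this produces the expected $2^1 = 2$ mutually exclusive conditional effects.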
\section{Experimental Results} We tested the compilation with Probabilistic-FF (PFF), a state-of-the-art planner, on a range of domains from the International Planning Competition (IPC). We first discuss the results on the variants of the Logistics and Satellite domains, where domain incompleteness is deliberately modeled on the preconditions and effects of actions (respectively). Our purpose here is to observe how generated plans are robustified to satisfy a given robustness threshold, and how the amount of incompleteness in the domains affects the plan generation phase. We then describe the second experimental setting, in which we randomly introduce incompleteness into IPC domains, and discuss the feasibility of our approach in this setting.\footnote{The experiments were conducted on an Intel Core2 Duo 3.16GHz machine with 4GB of RAM, with a time limit of 15 minutes.} \medskip \noindent {\bf Domains with deliberate incompleteness} \noindent\textit{Logistics}: In this domain, each of the two cities $C_1$ and $C_2$ has an airport and a downtown area. The transportation between the two distant cities can only be done by two airplanes $A_1$ and $A_2$. In the downtown area of $C_i$ ($i \in \{1,2\}$), there are three \emph{heavy} containers $P_{i1}, ..., P_{i3}$ that can be moved to the airport by a truck $T_i$. Loading those containers onto the truck in the city $C_i$, however, requires moving a team of $m$ robots $R_{i1}, ..., R_{im}$ ($m \geq 1$), initially located at the airport, to the downtown area. The source of incompleteness in this domain comes from the assumption that each pair of robots $R_{1j}$ and $R_{2j}$ ($1 \leq j \leq m$) is made by the same manufacturer $M_{j}$, and both therefore might fail to load a \emph{heavy} container.\footnote{The \emph{uncorrelated incompleteness} assumption applies to possible preconditions of action schemas specified for different manufacturers.
This should not be confused with saying that robots $R_{1j}$ and $R_{2j}$ of the same manufacturer $M_j$ can fail independently.} The actions loading containers onto trucks using robots made by a particular manufacturer (e.g., the action schema \emph{load-truck-with-robots-of-M1} using robots of manufacturer $M_1$), therefore, have a \emph{possible precondition} requiring that containers not be heavy. To simplify the discussion (see below), we assume that robots of different manufacturers may fail to load heavy containers, though independently, with the same probability of $0.7$. The goal is to transport all three containers in the city $C_1$ to $C_2$, and vice versa. For this domain, a plan to ship a container to another city involves a step of loading it onto the truck, which can be done by a robot (after moving it from the airport to the downtown area). Plans can be made more robust by using additional robots of \emph{different} manufacturers after moving them into the downtown areas, at the cost of increased plan length. \noindent\textit{Satellite}: In this domain, there are two satellites $S_1$ and $S_2$ orbiting the planet Earth, on each of which there are $m$ instruments $L_{i1}, ..., L_{im}$ ($i \in \{1,2\}$, $m \geq 1$) used to take images in modes of interest at some direction in space. For each $j \in \{1,...,m\}$, the lenses of the instruments $L_{ij}$ were made from a type of material $M_j$, which might have an error affecting the quality of the images that they take. If the material $M_j$ actually has an error, all instruments $L_{ij}$ produce mangled images. This incompleteness is modeled as a \emph{possible add effect} of the action taking images using instruments made from $M_j$ (for instance, the action schema \emph{take-image-with-instruments-M1} using instruments of type $M_1$) with a probability of $p_j$, asserting that images taken might be in a bad condition. A typical plan to take an image using an instrument, e.g.
$L_{14}$ of type $M_4$ on the satellite $S_1$, is to first switch on $L_{14}$, turn the satellite $S_1$ to a ground direction from which $L_{14}$ can be calibrated, and then take the image. Plans can be made more robust by using additional instruments, which might be on a different satellite, but should be made of a \emph{different} type of material and also able to take an image in the mode of interest at the same direction. \begin{table} {\scriptsize \begin{center} \begin{tabular}{| c || c | c | c | c | c |} \hline $\rho$ & $m=1$ & $m=2$ & $m=3$ & $m=4$ & $m=5$ \\ \hline \hline $0.1$ & $32/10.9$ & $36/26.2$ & $40/57.8$ & $44/121.8$ & $48/245.6$ \\ \hline $0.2$ & $32/10.9$ & $36/25.9$ & $40/57.8$ & $44/121.8$ & $48/245.6$ \\ \hline $0.3$ & $32/10.9$ & $36/26.2$ & $40/57.7$ & $44/122.2$ & $48/245.6$ \\ \hline $0.4$ & $\bot$ & $42/42.1$ & $50/107.9$ & $58/252.8$ & $66/551.4$ \\ \hline $0.5$ & $\bot$ & $42/42.0$ & $50/107.9$ & $58/253.1$ & $66/551.1$ \\ \hline $0.6$ & $\bot$ & $\bot$ & $50/108.2$ & $58/252.8$ & $66/551.1$ \\ \hline $0.7$ & $\bot$ & $\bot$ & $\bot$ & $58/253.1$ & $66/551.6$\\ \hline $0.8$ & $\bot$ & $\bot$ & $\bot$ & $\bot$ & $66/550.9$\\ \hline $0.9$ & $\bot$ & $\bot$ & $\bot$ & $\bot$ & $\bot$ \\ \hline \end{tabular} \caption{The results of generating robust plans in the Logistics domain (see text).} \vspace{-.1in} \label{table:logistics} \end{center} } \end{table} \begin{table} {\scriptsize \begin{center} \begin{tabular}{| c || c | c | c | c | c |} \hline $\rho$ & $m=1$ & $m=2$ & $m=3$ & $m=4$ & $m=5$ \\ \hline \hline $0.1$ & $10/0.1$ & $10/0.1$ & $10/0.2$ & $10/0.2$ & $10/0.2$ \\ \hline $0.2$ & $10/0.1$ & $10/0.1$ & $10/0.1$ & $10/0.2$ & $10/0.2$ \\ \hline $0.3$ & $\bot$ & $10/0.1$ & $10/0.1$ & $10/0.2$ & $10/0.2$ \\ \hline $0.4$ & $\bot$ & $37/17.7$ & $37/25.1$ & $10/0.2$ & $10/0.3$ \\ \hline $0.5$ & $\bot$ & $\bot$ & $37/25.5$ & $37/79.2$ & $37/199.2$ \\ \hline $0.6$ & $\bot$ & $\bot$ & $53/216.7$ & $37/94.1$ & $37/216.7$ \\ \hline $0.7$ & $\bot$ & $\bot$ &
$\bot$ & $53/462.0$ & -- \\ \hline $0.8$ & $\bot$ & $\bot$ & $\bot$ & $\bot$ & -- \\ \hline $0.9$ & $\bot$ & $\bot$ & $\bot$ & $\bot$ & $\bot$ \\ \hline \end{tabular} \caption{The results of generating robust plans in the Satellite domain (see text).} \vspace{-.15in} \label{table:satellite} \end{center} } \end{table} Tables~\ref{table:logistics} and \ref{table:satellite} show the results in the Logistics and Satellite domains, respectively, with $\rho \in \{0.1, 0.2, ..., 0.9\}$ and $m \in \{1,2,...,5\}$. The number of complete domain models in the two domains is $2^{m}$. For the Satellite domain, the probabilities $p_j$ range from $0.25$, $0.3$, ... up to $0.45$ as $m$ increases from $1$ to $5$. For each specific value of $\rho$ and $m$, we report $l/t$, where $l$ is the length of the plan and $t$ is the running time (in seconds). Cases in which no plan is found within the time limit are denoted by ``--'', and those where it is provable that no plan with the desired robustness exists are denoted by ``$\bot$''. \textit{Observations for a fixed value of $m$}: In both domains, for a fixed value of $m$, we observe that the solution plans tend to be longer for higher robustness thresholds $\rho$, and the time to synthesize plans is also larger. For instance, in Logistics with $m=5$, the returned plan has $48$ actions if $\rho=0.3$, whereas a plan of length $66$ is needed if $\rho$ increases to $0.4$. Since loading containers using the same robot multiple times does not increase the chance of success, more robots of different manufacturers need to move into the downtown area for loading containers, which causes an increase in plan length. In the Satellite domain with $m=3$, similarly, the returned plan has $37$ actions when $\rho=0.5$, but requires $53$ actions if $\rho = 0.6$---more actions are needed to calibrate instruments of different material types in order to increase the chance of obtaining a good image of the mode of interest at the same direction.
Since action costs are currently ignored in the compilation approach, we also observe that more actions than needed are used in many solution plans. In the Logistics domain, specifically, it is easy to see that the probability of successfully loading a container onto a truck using robots of $k$ ($1 \leq k \leq m$) different manufacturers is $1 - {0.7}^{k}$. Nevertheless, robots of all five manufacturers are used in a plan when $\rho=0.4$, whereas using those of three manufacturers is enough. \textit{Observations for a fixed value of $\rho$}: In both domains, we observe that the maximal robustness value of the plans that can be returned increases with the number of manufacturers (even though higher values of $m$ also mean a larger number of complete models). For instance, when $m=2$, no plan with robustness at least $\rho = 0.6$ is returned in the Logistics domain, nor with $\rho=0.5$ in the Satellite domain. Intuitively, more robots of different manufacturers offer a higher probability of successfully loading a container in the Logistics domain (and similarly for instruments of different materials in the Satellite domain). Finally, it may take longer to synthesize plans of the same length when $m$ is higher---in other words, the increasing amount of incompleteness in the domain makes the plan generation phase harder. As an example, in the Satellite domain with $\rho=0.6$, it takes $216.7$ seconds to synthesize a $37$-action plan when there are $m=5$ possible add effects at the schema level of the domain, whereas the search time is only $94.1$ seconds when $m=4$. With $\rho=0.7$, no plan is found within the time limit when $m=5$, although a plan with robustness $0.7075$ exists in the solution space. It is the increase in the branching factor and in the time spent on the satisfiability tests and weighted model counting used inside the planner that affects the search efficiency.
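The load-success probability quoted above is easy to tabulate. This small computation is ours (not part of the reported experiments) and only evaluates the closed-form expression $1-0.7^k$.

```python
def load_success(k, fail=0.7):
    """Probability that at least one of k manufacturers' robots can
    load a heavy container, when each manufacturer's robots fail
    independently with probability 0.7."""
    return 1 - fail ** k
```

For $k = 1, \ldots, 5$ this gives approximately $0.3$, $0.51$, $0.66$, $0.76$, and $0.83$.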
\medskip \noindent {\bf Domains with random incompleteness} \noindent We built a program to generate an incomplete domain model from a deterministic one by introducing $M$ new propositions into each domain (all initially $\mathbf{T}$). Some of those new propositions were randomly added to the sets of \emph{possible} preconditions/effects of actions. Some of them were also randomly made \emph{certain} add/delete effects of actions. With this strategy, each solution plan in an original deterministic domain is also a \emph{valid plan}, as defined earlier, in the corresponding incomplete domain. Our experiments with the Depots, Driverlog, Satellite and ZenoTravel domains indicate that, because the annotations are random, there are often fewer opportunities for the PFF planner to increase the robustness of a plan prefix during the search. This makes it hard to generate plans with a desired level of robustness under the given time constraint. In summary, our experiments in the two settings above suggest that the compilation approach based on the PFF planner is a reasonable method for generating robust plans in domains and problems where there are opportunities for robustifying existing action sequences in the search space. \section{Conclusion and Future Work} In this paper, we motivated the need for synthesizing robust plans under incomplete domain models. We introduced annotations for expressing domain incompleteness, formalized the notion of plan robustness, and showed an approach to compile the problem of generating robust plans into conformant probabilistic planning. We presented empirical results showing the promise of our approach. For future work, we are developing a planning approach that directly takes the incompleteness annotations into account during the search, and will compare it with our current compilation method. We also plan to consider the problem of robustifying a given plan subject to a provided cost bound.
\medskip \noindent {\bf Acknowledgement:} This research is supported in part by ONR grants N00014-09-1-0017 and N00014-07-1-1049, the NSF grant IIS-0905672, and by DARPA and the U.S. Army Research Laboratory under contract W911NF-11-C-0037. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. We thank William Cushing for several helpful discussions.
\section{Proof of Lemma \ref{l:prefix}} Let $M$ be a total separated basic MTT and $D$ a given DTA. Let $t\in{\sf dom}(\pi(q))$ be a smallest input tree of a state $q$ of $M$. The $\DeltaO$-prefix of every state $q$ of $M$ relative to $D$ can be computed in $\mathcal{O}(|t|\cdot |M|)$. \begin{proof} The proof is similar to the one of \cite[Theorem 8]{DBLP:journals/jcss/EngelfrietMS09} for top-down tree transducers. This construction can be carried over since, for the computation of $\DeltaO$-prefixes, the precise contents of the output parameters $y_j$ can be ignored. The $\DeltaO$-prefixes can be computed with the following system of in-equations over the complete lattice $\mathcal{P}_{\DeltaO}$ with unknowns $Y_{q}$, $q \in Q$: For each rule $q(f(x_1,\ldots,x_k),y_1,\ldots,y_l) \to T$ such that there is a tree $f(x_1,\ldots, x_k) \in {\sf dom}(\pi(q))$ we have the in-equation \begin{eqnarray} Y_{q} &\sqsupseteq& p[Z_1,\ldots,Z_m] \label{e:prefix} \end{eqnarray} if $T = p[T_1,\ldots,T_m]$ for some pattern $p\in\mathcal{T}_{\DeltaO\cup\{\top\}}$ and terms $T_1,\ldots, T_m$ which are either variables $y_j$ or of the form $q'(x_i, t'_1, \ldots, t'_l)$, $1 \leq i \leq k$ for some $q'\in Q$, where \begin{eqnarray*} Z_i &=& \left\{ \begin{array}{ll} \top &\text{if}\; T_i= y_j \\ Y_{q'} &\text{if}\; T_i = q'(x_i, t'_1, \ldots, t'_{l}) \end{array}\right. \label{e:prefix_cont} \end{eqnarray*} Since all right-hand sides of the in-equations \eqref{e:prefix} are monotonic in their arguments, the system has a unique \emph{least} solution. Moreover, we observe that the right-hand sides are distributive w.r.t.\ the least upper bound of patterns in the arguments, i.e., for sets of patterns $S_{i}$ with $\bar{p_i} = \bigsqcup S_i$, $p[\bar{p}_1,\ldots, \bar{p}_m] = \bigsqcup \{p[p_1,\ldots,p_m] \mid p_i \in S_i\}$. First, we show that the $\DeltaO$-prefixes ${\sf pref}_o(q)$, $q\in Q$, form a solution to the system of in-equations (\ref{e:prefix}).
Let $q(f(x_1,\ldots, x_k),y_1,\ldots,y_l) \rightarrow p[T_1,\ldots, T_m]$ be such a rule with $T_i = y_{j_i}$ or $T_i = q_i(x_{j_i},\underline{T_i})$. Then \begin{eqnarray*} {\sf pref}_o(q) &=& \bigsqcup \{\sem{q}(t,\underline{T'}) \mid t\in{\sf dom}(\pi(q)), \underline{T'}\in\mathcal{T}_{\DeltaI}^l\} \\ &\sqsupseteq& \bigsqcup \{\sem{q}(f(\underline{t}),\underline{T'}) \mid f(\underline{t}) \in {\sf dom}(\pi(q)), \underline{T'}\in\mathcal{T}_{\DeltaI}^l\} \\ &=& \bigsqcup \{p[ \sem{T_1}\underline{t}\ \underline{T'}, \ldots, \sem{T_m}\underline{t}\ \underline{T'}] \mid t_i\in{\sf dom}(\pi(q_i)), \underline{T'} \in\mathcal{T}_{\DeltaI}^l\} \\ \end{eqnarray*} By distributivity of pattern substitution, the latter pattern equals $p[\bar p_1,\ldots,\bar p_m]$ where \begin{eqnarray*} \bar p_i &=& \begin{cases} \top, & \text{whenever}\quad T_i = y_{j_i}\\ \parbox[b]{4.9cm}{$\bigsqcup \{\sem{q_i}(t,\underline{T_{j_i}}[T'_1/y_1,\ldots,T'_l/y_l])\mid t\in{\sf dom}(\pi(q_i)),T'_j\in\mathcal{T}_{\DeltaI}\}$,}& \text{if}\quad T_i = q_i(x_{j_i},\underline{T_{j_i}})\\ \end{cases}\\ &=& {\sf pref}_o(q_i) \end{eqnarray*} Therefore, the ${\sf pref}_o(q)$, $q\in Q$, satisfy all constraints. Now consider \emph{any} solution $p'_{q}$, $q\in Q$, of the given constraint system. We claim that ${\sf pref}_o(q)\sqsubseteq p'_{q}$ then holds for all $q$. Accordingly, ${\sf pref}_o(q)$ is the \emph{least} solution of the given constraint system. In order to prove the claim, we proceed by induction on the input trees $t\in {\sf dom}(\pi(q))$.
Assume that $\pi(q)(f(x_1,\ldots, x_k)) \to (r_1(x_1),\ldots,r_k(x_k))$ and that $q(f(x_1,\ldots, x_k),\underline{y}) \rightarrow p[T_1,\ldots, T_m]$ is the rule of the MTT for $q$ and $f$ where $T_i$ either equals some parameter $y_{j_i}$ or an expression $q_i(x_{j_i},\underline{S_{i}})$. In the latter case, we have by the inductive hypothesis for the $t_i\in{\sf dom}(\pi(q_i))$ that $p'_{q_i}$ is a prefix of $\sem{q_i}(t_i,\underline S)$ for \emph{every} tuple $\underline S\in\mathcal{T}_{\DeltaI}^l$, i.e., $p'_{q_i}\sqsupseteq \sem{q_i}(t_i,\underline S)$ holds. For each $i$ with $T_i = y_{j_i}$, define $p'_i$ as the pattern $\top$. Then, for $t = f(t_1,\ldots,t_k)$, $p[p'_1,\ldots,p'_m]$ is a prefix of $\sem{q}(t,\underline S)$, i.e., $p[p'_1,\ldots,p'_m]\sqsupseteq \sem{q}(t,\underline S)$ holds. Since $p'_q$ is a solution of the system of in-equations, we also have that $p'_{q} \sqsupseteq p[p'_1,\ldots,p'_m]$. Thus, the claim follows by transitivity of $\sqsupseteq$. To compute the $\DeltaO$-prefixes for every $q \in Q$ we first compute some output tree $t_q = \sem{q}(t,\underline{Y})$ where $t$ is a minimal input tree in ${\sf dom}(\pi(q))$ and $\underline{Y}$ a minimal vector of terms over $\DeltaI$. These $t_q$ can be computed in polynomial time and serve as lower bounds for ${\sf pref}_o(q)$, i.e., $t_q \sqsubseteq {\sf pref}_o(q)$. We therefore take the $t_q$ as initial values for the $Y_{q}$ in the fixpoint iteration of the constraint system. Since in each iteration at least one subtree of the current value of $Y_{q}$ has to be replaced, the fixpoint iteration terminates after a polynomial number of iterations. \qed \end{proof} \section{Proof of Lemma \ref{l:earliest}} For every pair $(M,A)$ consisting of a total deterministic separated basic MTT $M$ and axiom $A$ and a given DTA $D$, an equivalent pair $(M',A')$ can be constructed so that $M'$ is a total deterministic separated basic MTT that is $D$-earliest.
Let $t$ be an output tree of $(M,A)$ for a smallest input tree $t' \in{\sf dom}(\pi(q))$ where $q$ is the state occurring in $A$. Then the construction runs in time $\mathcal{O}(|t|\cdot|(M,A)|)$. \begin{proof} Let $M=(Q,\Sigma,\Delta,\delta)$ and $A$ be an axiom. The new transducer $M'=(Q',\Sigma,\Delta,\delta')$ is defined as follows: the set $Q'$ of states of $M'$ consists of all pairs $\angl{q,v}$ where $q\in Q$ is a state of $M$, and $v$ is the position of a $\top$-leaf in the pattern ${\sf pref}_o(q)$, while the new axiom $A'$ and the new transition relation $\delta'$ are obtained as follows. Let the axiom $A$ of $M$ be of the form $A=p[q_1(x_1,\underline{T_1}),\ldots,q_m(x_1,\underline{T_m})]$ with $\underline{T_1},\ldots,\underline{T_m}$ vectors of output parameters and for $j=1,\ldots,m$, let $t_j = {\sf pref}_o(q_j)[\angl{q_j,v_{j1}}(x_1,\underline{T_j}),\ldots, \angl{q_j,v_{jr_j}}(x_1,\underline{T_j})]$. Here $v_{j1},\ldots,v_{jr_j}$ is the sequence of positions of $\top$-leaves in ${\sf pref}_o(q_j)$. Then the new axiom is given by $A' = p[t_1,\ldots,t_m]$. Now assume that $q(f(x_1,\ldots,x_k),\underline y)\to p[u_1,\ldots,u_m]$ where for $i=1,\ldots, m$, $u_i$ either equals a parameter $y_{j_i}$ or a call $q_i(x_{j_i},\underline{T_i})$. Let $I$ denote the set of indices where the latter is the case. Let $\bar u_i$ denote the term ${\sf pref}_o(q_i)$ if $i\in I$ and $u_i$ otherwise. By construction, $p[\bar u_1,\ldots,\bar u_m]$ is a prefix of ${\sf pref}_o(q)$. Let $t'_i = {\sf pref}_o(q_i)[\angl{q_i,v_{i1}}(x_{j_i},\underline{T_i}),\ldots,\angl{q_i,v_{ir_i}}(x_{j_i},\underline{T_i})]$ if $i\in I$ ($v_{i1},\ldots,v_{ir_i}$ is the sequence of positions of $\top$-leaves in ${\sf pref}_o(q_i)$) and $y_{j_i}$ otherwise. Then ${\sf pref}_o(q)[s_1,\ldots,s_r] = p[t'_1,\ldots,t'_m]$ for some $s_1,\ldots,s_r$. Assuming that $v_1,\ldots,v_r$ are the positions of $\top$-leaves in ${\sf pref}_o(q)$, we set \[ \angl{q,v_j}(f(x_1,\ldots,x_k),\underline y) \to s_j \] for $j=1,\ldots,r$.
This ends the construction of the new transducer $M'$ with the corresponding axiom $A'$. Note that the mapping $\pi'$ from states of $M'$ to states of $D$ is given by $\pi'(\angl{q,v}) = b$ if and only if $\pi(q) = b$ with $\angl{q,v} \in Q', q \in Q, b \in B$. It follows from the construction that $M'$ is an earliest total deterministic separated basic MTT. We show that $(M, A)$ is equivalent to $(M', A')$ by induction over the structure of the input trees. The semantics of the pairs $(M,A)$ and $(M',A')$ is defined as $\sem{(M,A)}(t,\underline{T}) = p[\sem{q_1}(t,\underline{T_1}), \ldots, \sem{q_m}(t,\underline{T_m})]$ and $\sem{(M',A')}(t,\underline{T})=p[\sem{t_1},\ldots,\sem{t_m}]$ with $\sem{t_j} = {\sf pref}_o(q_j)[\sem{\angl{q_j,v_{j1}}}(t,\underline{T_j}),\ldots, \sem{\angl{q_j,v_{jr_j}}}(t,\underline{T_j})]$ for $j = 1,\ldots,m$. Therefore, it suffices to show for every state $q\in Q$, every input tree $t\in{\sf dom}(\pi(q)) = {\sf dom}(\pi'(\angl{q,v}))$ and every vector of output parameters $\underline{T}\in \mathcal{T}_{\DeltaI}^l$, that \[ \sem{q}(t, \underline{T}) = {\sf pref}_o(q)[\sem{\angl{q,v_{1}}}(t, \underline{T}),\ldots, \sem{\angl{q,v_{r}}}(t, \underline{T})] \] holds, where the $\top$-leaves of ${\sf pref}_o(q)$ occur at positions $v_1,\ldots,v_r$, respectively. By the inductive hypothesis, for $j=1,\ldots,m$ there are nodes $v_{j1},\ldots,v_{jr_j}$ in ${\sf pref}_o(q_j)$ such that for each input tree $t\in{\sf dom}(\pi(q_j))$ and vector of output parameters $\underline{T}\in\mathcal{T}_{\DeltaI}^l$, $\sem{q_j}(t,\underline{T}) = {\sf pref}_o(q_j)[\sem{\angl{q_j,v_{j1}}}(t,\underline{T}),\ldots, \sem{\angl{q_j,v_{jr_j}}}(t,\underline{T})]$. Note that in the base case, when $t=g$ and $g\in \Sigma$ has rank $0$ ($m=1$), the right-hand side of a rule of $q$ is a term in $\mathcal{T}_{\DeltaO\cup Y}$, i.e., $q(t,\underline{y})\rightarrow p[y_{i1},\ldots, y_{ik}]$.
Then for $i=1,\ldots,r$, $\angl{q,v_i}(t,\underline{y}) \rightarrow s_i$ where ${\sf pref}_o(q)[s_1,\ldots,s_r] = p[y_{i1},\ldots,y_{ik}]$. Therefore, \[ \begin{array}{lll} \sem{(M,A)}(t,\underline{T}) &=& p[\sem{q_1}(t,\underline{T_1}),\ldots, \sem{q_m}(t,\underline{T_m})] \\ &=&p[ \begin{array}[t]{l} {\sf pref}_o(q_1)[ \sem{\angl{q_1,v_{11}}}(t,\underline{T_1}),\ldots, \sem{\angl{q_1,v_{1r_1}}}(t,\underline{T_1})],\ldots, \\ {\sf pref}_o(q_m)[ \sem{\angl{q_m,v_{m1}}}(t,\underline{T_m}),\ldots, \sem{\angl{q_m,v_{mr_m}}}(t,\underline{T_m})]\;] \end{array} \\ &=& \sem{(M',A')}(t,\underline{T}) \end{array} \] with $t \in {\sf dom}(\pi(q_j))$ for $j=1,\ldots,m$. Accordingly, for all input trees $t \in {\sf dom}(\pi(q))$ and parameter vectors $\underline{T} \in \mathcal{T}_{\DeltaI}^l$, $\sem{(M,A)}(t,\underline{T}) = \sem{(M',A')}(t,\underline{T})$. Assume that $n$ is the number of states of $M$, and $d$ is the maximal size of a common $\DeltaO$-prefix. Thus, $d$ is at most the size of a right-hand side of a rule for a symbol of rank $0$. Then each state $q$ of $M$ gives rise to at most $n\cdot d$ states $\angl{q,v}$ of $M'$ where each right-hand side of $\angl{q,v}$ is at most $d+1$ times the size of the corresponding right-hand side of $q$. Thus, the size of $M'$ is polynomial in the size of $M$. Likewise, the axiom $A'$ is only a factor of $d+1$ larger than the axiom $A$. Since the $\DeltaO$-prefixes of states of $M$ can be computed in polynomial time, the overall algorithm runs in polynomial time. \qed \end{proof} \section{Separated Basic Macro Tree Transducers}\label{s:basics} Let $\Sigma$ be a ranked alphabet, i.e., every symbol of the finite set $\Sigma$ has associated with it a fixed rank $k \in \mathbb{N}$. Generally, we assume that the input alphabet $\Sigma$ is \emph{non-trivial}, i.e., $\Sigma$ has cardinality at least 2, and contains at least one symbol of rank $0$ and at least one symbol of rank $>0$.
The set $\mathcal{T}_\Sigma$ is the set of all (finite, ordered, rooted) trees over the alphabet $\Sigma$. We denote a tree as a string over $\Sigma$, parentheses and commas, i.e., $f(a,f(a,b))$ is a tree over $\Sigma$, where $f$ is of rank $2$ and $a,b$ are of rank zero. We use Dewey dotted decimal notation to refer to a node of a tree: The root node is denoted $\varepsilon$, and for a node $u$, its $i$-th child is denoted by $u.i$. For instance, in the tree $f(a,f(a,b))$ the $b$-node is at position $2.2$. A \emph{pattern} (or $k$-pattern) over $\Delta$ is a tree $p\in\mathcal{T}_{\Delta\cup\{\top\}}$ over a ranked alphabet $\Delta$ and a disjoint nullary symbol $\top$ (with exactly $k$ occurrences of the symbol $\top$). The occurrences of the dedicated symbol $\top$ serve as place holders for other patterns. Assume that $p$ is a $k$-pattern and that $p_1,\dots,p_k$ are patterns; then $p[p_1,\ldots,p_k]$ denotes the pattern obtained from $p$ by replacing, for $i=1,\ldots,k$, the $i$-th occurrence (from left-to-right) of $\top$ by the pattern $p_i$. A \emph{macro tree transducer} (\emph{MTT}) $M$ is a tuple $(Q,\Sigma,\Delta,\delta)$ where $Q$ is a ranked alphabet of states, $\Sigma$ and $\Delta$ are the ranked input and output alphabets, respectively, and $\delta$ is a finite set of rules of the form: \begin{eqnarray} q(f(x_1,\ldots,x_k),y_1,\ldots,y_l)\to T \label{e:rule} \end{eqnarray} where $q\in Q$ is a state of rank $l+1$, $l\geq 0$, $f\in\Sigma$ is an input symbol of rank $k\geq 0$, $x_1,\ldots,x_k$ and $y_1,\ldots,y_l$ are the formal input and output parameters, respectively, and $T$ is a tree built up according to the following grammar: \[ \begin{array}{lll} T &{::=}& a(T_1,\ldots,T_m) \mid q'(x_i,T_1,\ldots,T_n) \mid y_j \end{array} \] for output symbols $a\in\Delta$ of rank $m\geq 0$ and states $q'\in Q$ of rank $n+1$, input parameter $x_i$ with $1\leq i\leq k$, and output parameter $y_j$ with $1\leq j\leq l$.
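To make the tree and pattern operations concrete, the following Python sketch (our own illustrative encoding; the tuple representation and all function names are assumptions of this sketch, not part of the formal development) implements Dewey addressing and the substitution $p[p_1,\ldots,p_k]$:

```python
# Illustrative encoding (an assumption of this sketch, not fixed by the
# paper): a tree is a tuple (label, child_1, ..., child_k); the dedicated
# placeholder TOP stands for the symbol "top".
TOP = ("TOP",)

def subtree(t, dewey):
    """Select the node at a Dewey address, e.g. [2, 2] for position 2.2;
    children are stored from index 1 (index 0 holds the label)."""
    for i in dewey:
        t = t[i]
    return t

def count_tops(p):
    """k such that p is a k-pattern, i.e. the number of occurrences of TOP."""
    return 1 if p == TOP else sum(count_tops(c) for c in p[1:])

def substitute(p, fillers):
    """p[p_1, ..., p_k]: replace the i-th occurrence of TOP, counted from
    left to right, by fillers[i-1]."""
    it = iter(fillers)
    def go(t):
        if t == TOP:
            return next(it)
        return (t[0],) + tuple(go(c) for c in t[1:])
    return go(p)

t = ("f", ("a",), ("f", ("a",), ("b",)))     # the tree f(a, f(a, b))
p = ("f", TOP, TOP)                          # a 2-pattern over {f, a, b}
```

For the tree $f(a,f(a,b))$ above, `subtree(t, [2, 2])` selects the $b$-node at position $2.2$, and `substitute(p, [("a",), ("b",)])` fills the two holes of the $2$-pattern $f(\top,\top)$ from left to right.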
For simplicity, we assume that all states $q$ have the same number $l$ of parameters. Our definition of an MTT does not contain an initial state. We therefore consider an MTT always together with an axiom $A = p[q_1(x_1,\underline{T_1}),\ldots,q_m(x_1,\underline{T_m})]$ where $\underline{T_1},\ldots,\underline{T_m} \in \mathcal{T}_\Delta^l$ are vectors of output trees (of length $l$ each). Sometimes we only use an MTT $M$ without explicitly mentioning an axiom $A$; in this case, some axiom $A$ is implicitly assumed. Intuitively, the state $q$ of an MTT corresponds to a function in a functional language which is defined through pattern matching over its first argument, and which constructs tree output using tree top-concatenation only; the second to $(l+1)$-th arguments of state $q$ are its accumulating output parameters. The output produced by a state for a given input tree is determined by the right-hand side $T$ of a rule of the transducer which matches the root symbol $f$ of the current input tree. This right-hand side is built up from accumulating output parameters and calls to states for subtrees of the input and applications of output symbols from $\Delta$. In general, MTTs are nondeterministic and only partially defined. Here, however, we concentrate on total deterministic transducers. The MTT $M$ is \emph{deterministic}, if for every $(q,f)\in Q\times\Sigma$ there is at most one rule of the form~\eqref{e:rule}. The MTT $M$ is \emph{total}, if for every $(q,f)\in Q\times\Sigma$ there is at least one rule of the form~\eqref{e:rule}.
For total deterministic transducers, the semantics of a state $q\in Q$ with rules $q(f(x_1,\ldots,x_k),y_1,\ldots,y_l)\to T$ can be considered as a function \[ \sem{q}:\mathcal{T}_\Sigma\times\mathcal{T}_\Delta^l\to \mathcal{T}_{\Delta} \] which is defined inductively by: \begin{eqnarray*} \sem{q}(f(t_1,\ldots,t_k),\underline S) &=& \sem{T}\,(t_1,\ldots, t_k)\,\underline S\\ \text{where} \nonumber \\ \sem{a(T_1,\ldots,T_m)}\,\underline{t}\,\underline S &=& a(\sem{T_1}\,\underline{t}\,\underline S,\ldots,\sem{T_m}\,\underline{t}\,\underline S) \\ \sem{y_j}\,\underline{t}\,\underline S &=& S_j \\ \sem{q'(x_i,T_1,\ldots,T_l)}\,\underline{t}\,\underline S &=& \sem{q'}(t_i,\sem{T_1}\,\underline{t}\,\underline S,\ldots,\sem{T_l}\,\underline{t}\,\underline S) \end{eqnarray*} where $\underline S = (S_1,\ldots,S_l)\in\mathcal{T}_\Delta^l$ is a vector of output trees. The semantics of a pair $(M,A)$ with MTT $M$ and axiom $A = p[q_1(x_1,\underline{T_1}),\ldots,q_m(x_1,\underline{T_m})]$ is defined by $\sem{(M,A)}(t) = p[\sem{q_1}(t,\underline{T_1}),\ldots,\sem{q_m}(t,\underline{T_m})]$. Two pairs $(M_1,A_1)$, $(M_2,A_2)$ consisting of MTTs $M_1$, $M_2$ and corresponding axioms $A_1$, $A_2$ are \emph{equivalent}, $(M_1,A_1) \equiv (M_2,A_2)$, iff for all input trees $t\in\mathcal{T}_\Sigma$ and parameter values $\underline{T}\in\mathcal{T}_{\DeltaI}^l$, $\sem{(M_1,A_1)}(t,\underline{T})=\sem{(M_2,A_2)}(t,\underline{T})$. The MTT $M$ is \emph{basic}, if each argument tree $T_j$ of a subtree $q'(x_i,T_1,\ldots,T_n)$ of right-hand sides $T$ of rules \eqref{e:rule} may not contain further occurrences of states, i.e., is in $\mathcal{T}_{\Delta\cup Y}$. The MTT $M$ is \emph{separated basic}, if $M$ is basic, and $\Delta$ is the disjoint union of ranked alphabets $\DeltaO$ and $\DeltaI$ so that the argument trees $T_j$ of subtrees $q'(x_i,T_1,\ldots,T_n)$ are in $\mathcal{T}_{\DeltaI\cup Y}$, while the output symbols $a$ outside of such subtrees are from $\DeltaO$.
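The inductive definition of $\sem{q}$ can be read directly as a recursive interpreter. The following Python sketch (our own encoding; the rule representation and the example transducer are assumptions of this illustration, not the paper's formalism) evaluates a total deterministic MTT along exactly these equations:

```python
# Trees are tuples (label, child_1, ..., child_k).  Right-hand sides T of
# rules follow the grammar above and are encoded as
#   ("sym", a, [T1, ..., Tm])    output symbol  a(T1, ..., Tm)
#   ("param", j)                 output parameter y_j (1-based)
#   ("call", q2, i, [T1, ...])   state call      q2(x_i, T1, ..., Tl)
# A total deterministic rule set maps (state, input symbol) to the unique
# right-hand side for that pair.

def sem(rules, q, t, S):
    """[[q]](f(t_1, ..., t_k), S), following the inductive definition."""
    subtrees = t[1:]
    return eval_rhs(rules, rules[(q, t[0])], subtrees, S)

def eval_rhs(rules, T, subtrees, S):
    if T[0] == "sym":                      # a(T1, ..., Tm)
        return (T[1],) + tuple(eval_rhs(rules, Ti, subtrees, S)
                               for Ti in T[2])
    if T[0] == "param":                    # y_j evaluates to S_j
        return S[T[1] - 1]
    _, q2, i, args = T                     # q2(x_i, T1, ..., Tl)
    S2 = tuple(eval_rhs(rules, Ti, subtrees, S) for Ti in args)
    return sem(rules, q2, subtrees[i - 1], S2)

# Example (our own): q(f(x_1), y) -> q(x_1, s(y)) and q(a, y) -> y, a
# separated basic transducer counting the f-symbols into the parameter.
rules = {
    ("q", "f"): ("call", "q", 1, [("sym", "s", [("param", 1)])]),
    ("q", "a"): ("param", 1),
}
result = sem(rules, "q", ("f", ("f", ("a",))), (("z",),))   # s(s(z))
```

Running the example transducer on $f(f(a))$ with parameter $z$ yields $s(s(z))$: each $f$ adds one $s$ in the accumulating parameter, matching the intuition of accumulating output parameters described above.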
The same separation must hold for the axiom. Thus, letters directly produced by a state call are in $\DeltaO$ while letters produced in the parameters are in $\DeltaI$. The MTT $M_{\text{tern}}$ from the Introduction is separated basic with $\DeltaO=\{0, 1, 2, 3, *, +, \text{EXP}\}$ and $\DeltaI=\{p,s,z\}$. As separated basic MTTs are in the focus of our interests, we make the grammar for their right-hand side trees $T$ explicit: \[ \begin{array}{lll} T &{::=}& a(T_1,\ldots,T_m) \mid y_j \mid q'(x_i,T'_1,\ldots,T'_n) \\ T' &{::=}& b(T'_1,\ldots,T'_{m'}) \mid y_j \end{array} \] where $a\in\DeltaO$, $q'\in Q$, and $b\in\DeltaI$ of ranks $m$, $n+1$ and $m'$, respectively. For separated basic MTTs only axioms $A=p[q_1(x_1,\underline{T_1}),\ldots,q_m(x_1,\underline{T_m})]$ with $\underline{T_1},\ldots,\underline{T_m} \in \mathcal{T}_{\DeltaI}^l$ are considered. Note that equivalence of nondeterministic transducers is undecidable (already for very small subclasses of transductions~\cite{DBLP:journals/jacm/Griffiths68}). Therefore, we assume for the rest of the paper that all MTTs are deterministic and separated basic. We will also assume that all MTTs are total, with the exception of Section~\ref{sec:app} where we also consider partial MTTs. \begin{example}\label{ex:binary} We reconsider the example from the Introduction and adjust it to our formal definition. The transducer was given without an axiom (but with a tacitly assumed ``start state'' $q_0$). Let us now remove the state $q_0$ and add the axiom $A=q(x_1,z)$. The new $q$ rule for $g$ is: \[ q(g(x_1,x_2),y)\to +(q(x_1,y),q'(x_2,p(y))). \] To make the transducer total, we add for state $q'$ the rule \[ q'(g(x_1,x_2),y) \to +(*(0,\text{EXP}(3,y)),*(0,\text{EXP}(3,y))). \] For state $r$ we add the rules $r(\alpha(x_1,x_2),y) \to *(0,\text{EXP}(3,y))$ with $\alpha \in \{f,g\}$. The MTT is separated basic with $\DeltaO=\{0, 1, 2, 3, *, +, \text{EXP}\}$ and $\DeltaI=\{p,s,z\}$.
\qed \end{example} We restricted ourselves to \emph{total} separated basic MTTs. However, we would like to be able to decide equivalence for \emph{partial} transducers as well. For this reason we define now top-down tree automata, and will then decide equivalence of MTTs relative to some given DTA $D$. A \emph{deterministic top-down tree automaton} (\emph{DTA}) $D$ is a tuple $(B, \Sigma, b_0, \delta_D)$ where $B$ is a finite set of states, $\Sigma$ is a ranked alphabet of input symbols, $b_0\in B$ is the initial state, and $\delta_D$ is the partial transition function with rules of the form $b(f(x_1,\ldots, x_k)) \to (b_1(x_1),\ldots, b_k(x_k))$, where $b,b_1,\dots,b_k\in B$ and $f\in\Sigma$ of rank $k$. W.l.o.g.\ we always assume that all states $b$ of a DTA are productive, i.e., ${\sf dom}(b)\neq\emptyset$. If we consider an MTT $M$ relative to a DTA $D$, we implicitly assume a mapping $\pi: Q \to B$ that maps each state of $M$ to a state of $D$; we then consider for $q$ only input trees in ${\sf dom}(\pi(q))$. \section{Related Work} For several subclasses of attribute systems equivalence is known to be decidable. For instance, attributed grammars without inherited attributes are equivalent to deterministic top-down tree transducers (DT)~\cite{Engelfriet1980,DBLP:journals/tcs/CourcelleF82}. For this class equivalence was shown to be decidable by Esik~\cite{DBLP:journals/actaC/Esik81}. Later, a simplified algorithm was provided in~\cite{DBLP:journals/jcss/EngelfrietMS09}. If the tree translation of an attribute grammar is of linear size increase, then equivalence is decidable, because it is decidable for deterministic macro tree transducers (DMTT) of linear size increase. This follows from the fact that the latter class coincides with the class of (deterministic) MSO definable tree translations (DMSOTT)~\cite{DBLP:journals/siamcomp/EngelfrietM03} for which equivalence is decidable~\cite{DBLP:journals/ipl/EngelfrietM06}.
\ignore{ For deterministic MTTs that produce only monadic output trees equivalence is decidable. This follows from the fact that such transducers are equivalent (by considering monadic trees as strings) to determinisitic top-down tree-to-string transducers~\cite{DBLP:journals/acta/EngelfrietV88,DBLP:journals/iandc/EngelfrietM99} for which equivalence was recently proved to be decidable~\cite{DBLP:journals/jacm/SeidlMK18}. For the inclusions of $\text{DT}^{\text{R}}$ in $\text{MTT}_{\text{mon}}^{\text{R}}$ and of DMSOTT in $\text{MTT}_{\text{mon}}^{\text{R}}$, consider the output tree as a string (with brackets) and turn that string into a monadic tree.} Figure~\ref{fig:classes} shows a Hasse diagram of classes of translations realized by certain deterministic tree transducers. The prefixes ``l'', ``sn'', ``b'' and ``sb'' mean ``linear size increase'', ``separated non-nested'', ``basic'' and ``separated basic'', respectively. A minimal class where it is still open whether equivalence is decidable is the class of \emph{non-nested} attribute systems (nATT) which, on the macro tree transducer side, is included in the class of \emph{basic} deterministic macro tree transducers (bDMTT). 
\begin{figure}[htb] \centering \begin{tikzpicture} \node[] at (0,1em) (DMTT) {\normalsize \underline{DMTT}}; \node[below =5mm of DMTT] (ATT) {\normalsize $\text{\underline{ATT}}$}; \node[below =5mm of ATT] (nATT) {\normalsize $\text{\underline{nATT}}$}; \node[below=5mm of nATT] (snATT) {\normalsize $\text{snATT}$}; \node[below right =5mm and 7mm of DMTT] (DMSOTT) {\normalsize $\text{DMSOTT}$}; \node[below=5mm of DMSOTT] (lATT) {\normalsize $\text{lATT}$}; \node[below left=5mm and 7mm of DMTT] (bDMTT) {\normalsize $\text{\underline{bDMTT}}$}; \node[below=5mm of bDMTT] (sbDMTT) {\normalsize $\text{sbDMTT}$}; \node[below=5mm of sbDMTT] (DT) {\normalsize $\text{DT}$}; \draw (DMTT) -- (ATT); \draw (ATT) -- (nATT); \draw (nATT) -- (snATT); \draw (DMTT) -- (bDMTT); \draw (bDMTT) -- (sbDMTT); \draw (sbDMTT) -- (DT); \draw (ATT) -- (lATT); \draw (DMTT) -- (DMSOTT); \draw (DMSOTT) -- (lATT); \draw(bDMTT) -- (nATT); \draw(sbDMTT) -- (snATT); \draw(DT) -- (ATT); \end{tikzpicture} \caption{Classes with and without (underlined) known decidability of equivalence}\label{fig:classes} \end{figure} For deterministic top-down tree transducers, equivalence can be decided in EXPSPACE, and in NLOGSPACE if the transducers are total~\cite{DBLP:journals/ijfcs/Maneth15}. For the latter class of transducers, one can decide equivalence in polynomial time by transforming the transducer into a canonical normal form and then checking isomorphism of the resulting transducers~\cite{DBLP:journals/jcss/EngelfrietMS09}. In terms of hardness, we know that equivalence of deterministic top-down tree transducers is EXPTIME-hard. For linear size increase deterministic macro tree transducers the precise complexity is not known (but is at least NP-hard). More complexity results are known for other models of tree transducers such as streaming tree transducers~\cite{DBLP:journals/jacm/AlurD17}, see~\cite{DBLP:journals/ijfcs/Maneth15} for a summary. \iffalse In this paper we provide an alternative approach.
To decide the equivalence of total deterministic separated basic macro tree transducers different states with respect to their input and parameters have to be checked for equivalence. We show that transforming the transducers in a partial normalform called earliest form provides that the equivalence of corresponding states can be tested in polynomial time. The earliest form is a common technique used for normalforms and equivalence testing of different kinds of tree transducers~\cite{DBLP:journals/jcss/EngelfrietMS09,Friese2010,Laurence2011}. General macro tree transducer (MTT) are programs that transform a input tree in a top-down manner with rules of the form $q(f(x_1,\ldots, x_n),y_1,\ldots, y_l) \rightarrow T$. The $y_i$ are so called output parameters and the right-hand side $T$ of a rule is a tree over the output alphabet, calls to the parameters $y_i$ and recursive calls $q(x_i,T_1,\ldots, T_l)$ where $x_i$ is a child of the input tree and $T_i$ are of the same form as $T$. In this paper we consider two restrictions to MTTs called basic and separated. For basic macro tree transducers no recursive calls in the parameters $T_i$ are allowed and for separated MTTs the output alphabet $\Delta^{(1)}$ in the parameters is disjoint with the output alphabet $\Delta^{(0)}$ of the recursive calls. In this combination a separated basic MTT (sbMTT) can only build trees over the output alphabet $\Delta^{(1)}$ in their parameters. We can therefore visualize the output trees of sbMTTs as shown below where the white tree represents the output over alphabet $\Delta^{(0)}$ and the grey trees represents the output built in the parameters over alphabet $\Delta^{(1)}$. 
\begin{figure}[h] \begin{center} \begin{tikzpicture} \draw (0,0) -- (2,0); \draw (2,0) -- (1,2.5); \draw (1,2.5) -- (0,0); \draw (0.2,0) -- (0.2,-0.2) node[below]{}; \draw[fill=gray] (0.2,-0.2) -- (0.3,-0.5) -- (0.1,-0.5) -- (0.2,-0.2); \draw (0.6,0) -- (0.6,-0.2) node[below]{}; \draw[fill=gray] (0.6,-0.2) -- (0.7,-0.4) -- (0.5,-0.4) -- (0.6,-0.2); \draw (1.0,0) -- (1.0,-0.2) node[below]{}; \draw[fill=gray] (1.0,-0.2) -- (1.2,-0.6) -- (0.8,-0.6) -- (1.0,-0.2); \draw (1.4,0) -- (1.4,-0.2) node[below]{}; \draw[fill=gray] (1.4,-0.2) -- (1.6,-0.4) -- (1.2,-0.4) -- (1.4,-0.2); \draw (1.8,0) -- (1.8,-0.2) node[below]{}; \draw[fill=gray] (1.8,-0.2) -- (1.9,-0.5) -- (1.7,-0.5) -- (1.8,-0.2); \end{tikzpicture} \end{center} \end{figure} Consider a total deterministic separated basic macro tree transducer (sbMTT, for short). Due to the separated property we can distinguish whether an output is produced by a recursive call of a state or in a parameter. This enables us to construct for an sbMTT an equivalent earliest sbMTT. Earliest means that the output produced by recursive calls (the ``state-output'') of states is produced as early as possible. The earliest property implies that the common prefix of the trees produced by a given state is empty, or equivalently speaking, every state produces at least two output trees which differ in the label of their root node. The paper is structered as follows. In Section \ref{s:basics} we introduce our notation and the formal definition of sbMTTs. Section \ref{s:earliest} describes how an sbMMT can be transformed in the earliest form and introduces the equivalence relation that is the basis of our algorithm. Last, in Section \ref{s:poly} is shown how the equivalence relation can be computed in polynomial time. \fi \section{Conclusion} We have proved that the equivalence problem for separated non-nested attribute systems can be decided in polynomial time. 
In fact, we have shown a stronger statement, namely that equivalence of \emph{separated basic total deterministic macro tree transducers} can be decided in polynomial time. To see that the latter is a strict superclass of the former, consider the translation that takes a binary tree as input, and outputs the same tree, but under each leaf a new monadic tree is output which represents the inverse Dewey path of that node. For instance, the tree $f(f(a,a),a)$ is translated into the tree $f(f(a(1(1(e))),a(2(1(e)))),a(2(e)))$. A macro tree transducer of the desired class can easily realize this translation using a rule of the form $q(f(x_1,x_2),y)\to f(q(x_1,1(y)),q(x_2,2(y)))$. In contrast, no attribute system can realize this translation. The reason is that for every attribute system, the number of distinct output subtrees is linearly bounded by the size of the input tree. For the given translation there is no such linear bound (it is bounded by $|s|\log(|s|)$). The idea of ``separated'', i.e., of using different output alphabets, is related to the idea of transducers ``with origin''~\cite{DBLP:conf/icalp/Bojanczyk14,DBLP:journals/iandc/FiliotMRT18}. In future work we would like to define adequate notions of origin for macro tree transducers, and prove that equivalence of such (deterministic) transducers with origin is decidable. \section{Top-Down Normalization of Transducers}\label{s:earliest} In this section we show that each total deterministic separated basic MTT can be put into an ``earliest'' normal form relative to a fixed DTA $D$. Intuitively, state output (in $\DeltaO$) is produced as early as possible for a transducer in the normal form. It can then be shown that two equivalent transducers in normal form produce their state output in exactly the same way. Recall the definition of patterns as trees in $\mathcal{T}_{\Delta\cup\{\top\}}$.
Substitution of $\top$-symbols by other patterns induces a partial ordering $\sqsubseteq$ over patterns by $p\sqsubseteq p'$ if and only if $p = p'[p_1,\ldots,p_m]$ for some patterns $p_1,\ldots,p_m$. W.r.t.\ this ordering, $\top$ is the \emph{largest} element, while all patterns without occurrences of $\top$ are minimal. By adding an artificial \emph{least} element $\bot$, the resulting partial ordering is in fact a \emph{complete lattice}. Let us denote this complete lattice by $\mathcal{P}_\Delta$. Let $\Delta=\DeltaI\cup \DeltaO$. For $T\in\mathcal{T}_{\Delta\cup Y}$, we define the \emph{$\DeltaO$-prefix} of $T$ as the pattern $p\in\mathcal{T}_{\DeltaO\cup\{\top\}}$ obtained as follows. Assume that $T = a(T_1,\ldots,T_m)$. \begin{itemize} \item If $a\in\DeltaO$, then $p = a(p_1,\ldots,p_m)$ where for $j=1,\ldots,m$, $p_j$ is the $\DeltaO$-prefix of $T_j$. \item If $a\in\DeltaI\cup Y$, then $p = \top$. \end{itemize} By this definition, each tree $t\in\mathcal{T}_{\Delta\cup Y}$ can be uniquely decomposed into a $\DeltaO$-prefix $p$ and subtrees $t_1,\ldots,t_m$ whose root symbols all are contained in $\DeltaI\cup Y$ such that $t = p[t_1,\ldots,t_m]$. Let $M$ be a total separated basic MTT and $D$ a given DTA. We define the $\DeltaO$-prefix of a state $q$ of $M$ relative to $D$ as the minimal pattern $p\in\mathcal{T}_{\DeltaO\cup\{\top\}}$ so that each tree $\sem{q}(t,\underline T)$, $t\in{\sf dom}(\pi(q)),\underline T\in\mathcal{T}_\Delta^l,$ is of the form $p[T_1,\ldots,T_m]$ for some sequence of subtrees $T_1,\ldots,T_m \in \mathcal{T}_{\Delta}$. Let us denote this unique pattern $p$ by ${\sf pref}_o(q)$. If $q(f(x_1,\ldots,x_k), y_1,\ldots, y_l) \rightarrow T$ is a rule of a separated basic MTT and there is an input tree $f(t_1,\ldots, t_k) \in {\sf dom}(\pi(q))$ then $|{\sf pref}_o(q)| \leq |T|$. \begin{lemma}\label{l:prefix} Let $M$ be a total separated basic MTT and $D$ a given DTA. Let $t\in{\sf dom}(\pi(q))$ be a smallest input tree of a state $q$ of $M$.
The $\DeltaO$-prefix of every state $q$ of $M$ relative to $D$ can be computed in time $\mathcal{O}(|t|\cdot |M|)$. \end{lemma} The proof is similar to the one of \cite[Theorem 8]{DBLP:journals/jcss/EngelfrietMS09} for top-down tree transducers. This construction can be carried over as, for the computation of $\DeltaO$-prefixes, the precise contents of the output parameters $y_j$ can be ignored. The complete proof can be found in the Appendix. \begin{example}\label{ex:prefO} We compute the $\DeltaO$-prefixes of the states of the MTT $M$ from Example \ref{ex:binary}. We consider $M$ relative to the trivial DTA $D$ that consists only of one state $b$ with ${\sf dom}(b) = \mathcal{T}_\Sigma$. We therefore omit $D$ in our example. We get the following system of in-equations: from the rules of state $r$ we obtain $Y_r \sqsupseteq *(i,\text{EXP}(3,\top))$ with $i \in \{0,1,2\}$. From the rules of state $q$ we obtain $Y_q \sqsupseteq +(Y_q,Y_{q'})$, $Y_q \sqsupseteq +(Y_r,Y_{q})$ and $Y_q \sqsupseteq *(i,\text{EXP}(3,\top))$ with $i \in \{0,1,2\}$. From the rules of state $q'$ we obtain $Y_{q'} \sqsupseteq +(*(0,\text{EXP}(3,\top)),*(0,\text{EXP}(3,\top)))$, $Y_{q'} \sqsupseteq +(Y_r,Y_{q'})$ and $Y_{q'} \sqsupseteq *(i,\text{EXP}(3,\top))$ with $i \in \{0,1,2\}$. For the fixpoint iteration we initialize $Y_r^{(0)}$, $Y_q^{(0)}$, $Y_{q'}^{(0)}$ with $\bot$ each. Then $Y_r^{(1)} = *(\top,\text{EXP}(3,\top)) = Y_r^{(2)}$ and $Y_q^{(1)} = \top$, $Y_{q'}^{(1)} = \top$. Thus, the fixpoint iteration ends after two rounds with the solution ${\sf pref}_o(q) = {\sf pref}_o(q') = \top$ and ${\sf pref}_o(r) = *(\top,\text{EXP}(3,\top))$. \qed \end{example} Let $M$ be a separated basic MTT and $D$ a given DTA. Then $M$ is called $D$-earliest if for every state $q \in Q$ the $\DeltaO$-prefix with respect to $\pi(q)$ is $\top$.
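The fixpoint iteration of Example~\ref{ex:prefO} can be replayed mechanically. The following Python sketch (our own encoding; it represents the artificial least element $\bot$ explicitly and lets a right-hand side evaluate to $\bot$ while one of its unknowns is still $\bot$) performs Kleene iteration on the in-equations of the example and reproduces the solution ${\sf pref}_o(q) = {\sf pref}_o(q') = \top$ and ${\sf pref}_o(r) = *(\top,\text{EXP}(3,\top))$:

```python
# Patterns are tuples (label, children...); TOP is the placeholder symbol,
# BOT the artificial least element of the lattice P_Delta; ("VAR", x)
# marks an unknown Y_x occurring inside a right-hand side.
TOP, BOT = ("TOP",), ("BOT",)

def lub(p, q):
    """Least upper bound of two patterns in the lattice P_Delta."""
    if p == BOT:
        return q
    if q == BOT:
        return p
    if p == TOP or q == TOP or p[0] != q[0] or len(p) != len(q):
        return TOP                       # disagreement is cut off by TOP
    return (p[0],) + tuple(lub(a, b) for a, b in zip(p[1:], q[1:]))

def eval_rhs(T, env):
    """Evaluate a right-hand side; BOT as soon as an unknown is still BOT."""
    if T[0] == "VAR":
        return env[T[1]]
    kids = [eval_rhs(c, env) for c in T[1:]]
    return BOT if BOT in kids else (T[0],) + tuple(kids)

# The in-equations of the example; E is the subpattern EXP(3, TOP).
V = lambda x: ("VAR", x)
E = ("EXP", ("3",), TOP)
mul = [("*", (str(i),), E) for i in range(3)]        # *(i, EXP(3, TOP))
constraints = {
    "r": mul,
    "q": [("+", V("q"), V("q'")), ("+", V("r"), V("q"))] + mul,
    "q'": [("+", ("*", ("0",), E), ("*", ("0",), E)),
           ("+", V("r"), V("q'"))] + mul,
}

env = {x: BOT for x in constraints}
changed = True
while changed:                   # Kleene iteration up to the least fixpoint
    changed = False
    for x, rhss in constraints.items():
        new = env[x]
        for T in rhss:
            new = lub(new, eval_rhs(T, env))
        if new != env[x]:
            env[x], changed = new, True
```

The number of rounds may differ slightly from the hand computation in the example, depending on the evaluation order, but the least solution is the same.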
\begin{lemma}\label{l:earliest} For every pair $(M,A)$ consisting of a total separated basic MTT $M$ and axiom $A$ and a given DTA $D$, an equivalent pair $(M',A')$ can be constructed so that $M'$ is a total separated basic MTT that is $D$-earliest. Let $t$ be an output tree of $(M,A)$ for a smallest input tree $t' \in{\sf dom}(\pi(q))$ where $q$ is the state occurring in $A$. Then the construction runs in time $\mathcal{O}(|t|\cdot|(M,A)|)$. \end{lemma} The construction follows the same lines as the one for the earliest form of top-down tree transducers, cf.\ \cite[Theorem 11]{DBLP:journals/jcss/EngelfrietMS09}. The proof can be found in the appendix. \noindent Note that for partial separated basic MTTs the size of the $\DeltaO$-prefixes is at most exponential in the size of the transducer. However, for the total transducers that we consider here, the $\DeltaO$-prefixes are linear in the size of the transducer and can be computed in quadratic time, cf.~\cite{DBLP:journals/jcss/EngelfrietMS09}. \begin{corollary}\label{c:total} For $(M,A)$ consisting of a total deterministic separated basic MTT $M$ and axiom $A$ and the trivial DTA $D$ accepting $\mathcal{T}_\Sigma$, an equivalent pair $(M',A')$ can be constructed in quadratic time such that $M'$ is a $D$-earliest total deterministic separated basic MTT. \end{corollary} \begin{example} We construct an equivalent earliest MTT $M'$ for the transducer from Example \ref{ex:binary}. In Example \ref{ex:prefO} we already computed the $\DeltaO$-prefixes of the states $q, q', r$: ${\sf pref}_o(q) = \top$, ${\sf pref}_o(q') = \top$ and ${\sf pref}_o(r) = *(\top, \text{EXP}(3,\top))$. As there is only one occurrence of the symbol $\top$ in the $\DeltaO$-prefixes of $q$ and $q'$, we write $q$ and $q'$ for the states $\angl{q,1}$ and $\angl{q',1}$, respectively. Hence, a corresponding earliest transducer has axiom $A=q(x_1,z)$. The rules of $q$ and $q'$ for input symbol $g$ do not change.
For input symbol $f$ we obtain \[ \begin{array}{rclr} q(f(x_1, x_2), y) &\to& +(*(r(x_2,y),\text{EXP}(3,y)), q(x_1,s(y))) &\text{ and} \\ q'(f(x_1,x_2), y) &\to& +(*(r(x_1,y),\text{EXP}(3,y)), q'(x_2,p(y))). \end{array} \] As there is only one occurrence of the symbol $\top$ related to a recursive call in ${\sf pref}_o(r)$, we refer to $\angl{r,1}$ as $r$. For state $r$ we obtain new rules $r(\alpha(x_1,x_2),y) \to 0$ with $\alpha \in \{f,g\}$ and $r(i,y) \to i$ with $i \in \{0,1,2\}$. \qed \end{example} We define a family of equivalence relations by induction: for $b$ a state of a given DTA, $\cong_b\ \subseteq ((Q,\mathcal{T}_{\DeltaI}^k) \cup \mathcal{T}_{\DeltaI}) \times ((Q,\mathcal{T}_{\DeltaI}^k) \cup \mathcal{T}_{\DeltaI})$ is the intersection of the equivalence relations $\cong^{(h)}_b$, i.e., $X \cong_b Z$ if and only if for all $h \geq 0$, $X \cong^{(h)}_b Z$. We let $(q,\underline{T}) \cong^{(h+1)}_b (q',\underline{T'})$ if for all $f$ with $b(f(x_1,\ldots,x_k)) \to (b_1,\ldots, b_k)$, there is a pattern $p$ such that $q(f(x_1,\ldots, x_k),\underline{y}) \rightarrow p[t_1,\ldots, t_m]$ and $q'(f(x_1,\ldots, x_k),\underline{y'}) \rightarrow p[t'_1,\ldots,t'_m]$ with \begin{itemize} \item if $t_i$ and $t'_i$ are both recursive calls to the same subtree, i.e., $t_i = q_i(x_{j_i},\underline{T_i})$, $t'_i = q'_i(x_{j'_i},\underline{T'_i})$ and $j_i = j'_i$, then $(q_i,\underline{T_i})[\underline{T}/\underline{y}] \cong^{(h)}_{b_{j_i}} (q'_i,\underline{T'_i})[\underline{T'}/\underline{y'}]$ \item if $t_i$ and $t'_i$ are both recursive calls but on different subtrees, i.e., $t_i = q_i(x_{j_i},\underline{T_i})$, $t'_i = q'_i(x_{j'_i},\underline{T'_i})$ and $j_i \neq j'_i$, then $\hat{t} := \sem{q_i}(s,\underline{T_i})[\underline{T}/\underline{y}] = \sem{q'_i}(s,\underline{T'_i})[\underline{T'}/\underline{y'}]$ for some $s \in \Sigma^{(0)}$ and $(q_i,\underline{T_i})[\underline{T}/\underline{y}] \cong^{(h)}_{b_{j_i}} \hat{t} \cong^{(h)}_{b_{j'_i}}
(q'_i,\underline{T'_i})[\underline{T'}/\underline{y'}]$ \item if $t_i$ and $t'_i$ are both parameter calls, i.e., $t_i = y_{j_i}$ and $t'_{i} = y'_{j'_i}$, then $T_{j_i} = T'_{j'_i}$ \item if $t_i$ is a parameter call and $t'_i$ a recursive call, i.e., $t_i = y_{j_i}$ and $t'_i = q'_i(x_{j'_i},\underline{T'_i})$, then $T_{j_i} \cong^{(h)}_{b_{j'_i}} (q'_i,\underline{T'_i})[\underline{T'}/\underline{y'}]$ \item (symmetric to the latter case) if $t_i$ is a recursive call and $t'_i$ a parameter call, i.e., $t_i = q_i(x_{j_i},\underline{T_i})$ and $t'_i = y'_{j'_i}$, then $(q_i,\underline{T_i})[\underline{T}/\underline{y}] \cong^{(h)}_{b_{j_i}} T'_{j'_i}$. \end{itemize} We let $T \cong^{(h+1)}_b (q',\underline{T'})$ if for all $f$ with $b(f(x_1,\ldots, x_k))\to (b_1,\ldots, b_k)$ and $q'(f(\underline{x}),\underline{y'}) \rightarrow t'$ \begin{itemize} \item if $t' = y'_j$ then $T = T'_j$ \item if $t' = q'_1(x_i,\underline{T'_1})$ then $T \cong^{(h)}_{b_i} (q'_1,\underline{T'_1})[\underline{T'}/\underline{y'}]$. \end{itemize} Intuitively, $(q,\underline{T}) \cong^{(h)}_b (q',\underline{T'})$ if for all input trees $t \in {\sf dom}(b)$ of height at most $h$, $\sem{q}(t,\underline{T}) = \sem{q'}(t,\underline{T'})$. Then $(q,\underline{T}) \cong_b (q',\underline{T'})$ if for \emph{all} input trees $t \in {\sf dom}(b)$ (independent of the height), $\sem{q}(t,\underline{T}) = \sem{q'}(t,\underline{T'})$. \begin{theorem} For a given DTA $D$ with initial state $b$, let $M, M'$ be $D$-earliest total deterministic separated basic MTTs with axioms $A$ and $A'$, respectively.
Then $(M,A)$ is equivalent to $(M',A')$ relative to $D$ iff there is a pattern $p$ such that $A=p[q_1(x_1,\underline{T_1}),\ldots, q_m(x_1,\underline{T_m})]$ and $A'=p[q'_1(x_1,\underline{T'_1}),\ldots, q'_m(x_1,\underline{T'_m})]$ and for $j=1,\ldots,m$, $(q_j,\underline{T_j}) \cong_b (q'_j,\underline{T'_j})$, i.e., $q_j$ and $q'_j$ are equivalent on the values of output parameters $\underline T_j$ and $\underline T'_j$. \end{theorem} \begin{proof} Let $\Delta$ be the output alphabet of $M$ and $M'$. Assume that $(M,A)$ and $(M',A')$ are equivalent relative to $D$. As $M$ and $M'$ are earliest, the $\DeltaO$-prefix of $\sem{(M,A)}(t)$ and $\sem{(M',A')}(t)$, for $t \in {\sf dom}(b)$, is the same pattern $p$ and therefore $A=p[q_1(x_1,\underline{T_1}),\ldots, q_m(x_1,\underline{T_m})]$ and $A'=p[q'_1(x_1,\underline{T'_1}),\ldots, q'_m(x_1,\underline{T'_m})]$. To show that $(q_i,\underline{T_i}) \cong_{b} (q'_i,\underline{T'_i})$, let $u_i$ be the position of the $i$-th $\top$-node in the pattern $p$. For some $t\in {\sf dom}(b)$ let $t_i$ and $t'_i$ be the subtrees rooted at $u_i$ of $\sem{(M,A)}(t)$ and $\sem{(M',A')}(t)$, respectively. Then $t_i = t'_i$ and therefore $(q_i,\underline{T_i}) \cong_b (q'_i,\underline{T'_i})$. Now, assume that the axioms $A=p[q_1(x_1,\underline{T_1}),\ldots, q_m(x_1,\underline{T_m})]$ and $A'=p[q'_1(x_1,\underline{T'_1}),\ldots, q'_m(x_1,\underline{T'_m})]$ consist of the same pattern $p$ and for $i=1,\ldots,m$, $(q_i,\underline{T_i}) \cong_b (q'_i,\underline{T'_i})$. Let $t\in {\sf dom}(b)$ be an input tree; then \[ \begin{array}{lll} \sem{(M,A)}(t) &=& p[\sem{q_1}(t,\underline{T_1}),\ldots,\sem{q_m}(t,\underline{T_m})]\\ &=& p[\sem{q'_1}(t,\underline{T'_1}),\ldots,\sem{q'_m}(t,\underline{T'_m})] \\ &=& \sem{(M',A')}(t).
\end{array} \] \end{proof} \section{Polynomial Time}\label{s:poly} In this section we prove the main result of this paper, namely, that for each fixed DTA $D$, equivalence of total deterministic separated basic MTTs (relative to $D$) can be decided in polynomial time. This is achieved by taking as input two $D$-earliest such transducers, and then collecting conditions on the parameters of pairs of states of the respective transducers for their produced outputs to be equal.
\begin{example}\label{e:eq} Consider a DTA $D$ with a single state $b$ only, which accepts all inputs, and states $q,q'$ with \[ \begin{array}{lll@{\qquad}lll} q(a,y_1,y_2) &\to& g(y_1) & q'(a,y'_1,y'_2) &\to& g(y'_2) \end{array} \] Then $q$ and $q'$ can only produce identical outputs for the input $a$ (in ${\sf dom}(b)$) if parameter $y_2'$ of $q'$ contains the same output tree as parameter $y_1$ of $q$. This precondition can be formalized by the equality $y_2'\doteq y_1$. Note that in order to distinguish the output parameters of $q'$ from those of $q$ we have used primed copies $y'_i$ for $q'$. \qed \end{example} It turns out that \emph{conjunctions} of equalities such as in Example \ref{e:eq} are sufficient for proving equivalence of states. \noindent For states $q, q'$ of total separated basic MTTs $M, M'$, respectively, that are both $D$-earliest for some fixed DTA $D$, $h \geq 0$ and some fresh variable $z$, we define \[\begin{array}{lclr} \displaystyle \Psi_{b,q}^{(h)}(z) = \bigwedge_{b(f\underline{x}) \to (b_1,\ldots, b_k)} &\displaystyle\bigwedge_{q(f\underline{x},\underline{y})\rightarrow y_j}& \displaystyle(z \doteq y_j) &\wedge\\ &\displaystyle\bigwedge_{q(f\underline{x},\underline{y})\rightarrow \hat{q}(x_i,\underline{T})}& \displaystyle\Psi_{b_i,\hat{q}}^{(h-1)}(z)[\underline{T}/\underline{y}] &\wedge\\ & \displaystyle\bigwedge_{q(f\underline{x},\underline{y})\rightarrow p[\ldots] \atop p \neq \top}&\displaystyle \bot \end{array}\] where $\bot$ is the boolean value \emph{false}. We denote the output parameters in $\Psi_{b,q}^{(h)}(z)$ by $\underline{y}$; we define $\Psi_{b,q'}^{\prime(h)}(z)$ along the same lines as $\Psi_{b,q}^{(h)}(z)$, but using $\underline{y'}$ for the output parameters. To substitute the output parameters with trees $\underline{T}$, $\underline{T'}$, we therefore use $\Psi_{b,q}^{(h)}(z)[\underline{T}/\underline{y}]$ and $\Psi_{b,q'}^{\prime(h)}(z)[\underline{T'}/\underline{y'}]$.
Assume that $q$ is a state of the $D$-earliest separated basic MTT $M$. Then $\Psi_{b,q}^{(h)}(z)$, with $z$ replaced by some $T\in\mathcal{T}_{\Delta\cup Y}$, is true for ground parameter values $\underline s$ if $\sem{q}(t,\underline s) = T[\underline s/\underline y]$ for all input trees $t\in{\sf dom}(b)$ of height at most $h$. Note that, since $M$ is $D$-earliest, $T$ is necessarily in $\mathcal{T}_{\DeltaI\cup Y}$. W.l.o.g., we assume that every state $b$ of $D$ is productive, i.e., ${\sf dom}(b)\neq\emptyset$. For each state $b$ of $D$, we therefore may choose some input tree $t_b\in{\sf dom}(b)$ of minimal depth. We define $s_{b,q}$ to be the output of $q$ for the minimal input tree $t_b \in {\sf dom}(b)$ and formal parameter values $\underline{y}$ --- when considering the formal output parameters as output symbols in $\DeltaI$, i.e., $s_{b,q} = \sem{q}(t_b,\underline{y})$. \begin{example} We consider again the trivial DTA $D$ with only one state $b$ that accepts all $t \in \mathcal{T}_\Sigma$. Thus, we may choose $t_b = a$. For a state $q$ with the two rules $q(a,y_1,y_2) \rightarrow y_1$ and $q(f(x),y_1,y_2) \rightarrow q(x,h(y_2),b)$, we have $s_{b,q} = y_1$. Moreover, we obtain \begin{eqnarray*} \Psi_{b,q}^{(0)}(z) &=& z \doteq y_1 \\ \Psi_{b,q}^{(1)}(z) &=& (z \doteq y_1) \wedge (z \doteq h(y_2)) \\ \Psi_{b,q}^{(2)}(z) &=& (z \doteq y_1) \wedge (z \doteq h(y_2)) \wedge (z \doteq h(b))\\ &\equiv& (y_2 \doteq b) \wedge (y_1 \doteq h(b)) \wedge (z \doteq h(b)) \\ \Psi_{b,q}^{(3)}(z) &=& (z \doteq y_1) \wedge (b \doteq b) \wedge (h(y_2) \doteq h(b)) \wedge (z\doteq h(b))\\ &\equiv& (y_2 \doteq b) \wedge (y_1 \doteq h(b)) \wedge (z \doteq h(b)) \end{eqnarray*} We observe that $\Psi_{b,q}^{(3)}(z) \equiv \Psi_{b,q}^{(2)}(z)$ and therefore $\Psi_{b,q}^{(h)}(z) \equiv \Psi_{b,q}^{(2)}(z)$ for every $h\geq 2$.
\qed \end{example} \noindent According to our equivalence relation $\cong_b$, where $b$ is a state of the DTA $D$, we define for states $q,q'$ of $D$-earliest total deterministic separated basic MTTs $M,M'$, and $h \geq 0$, the conjunction $\Phi_{b,(q,q')}^{(h)}$ by \[ \begin{array}{llclr} &\displaystyle \bigwedge_{b(f\underline{x}) \to (b_1,\ldots,b_k) \atop{q(f\underline{x},\underline{y}) \rightarrow p[\underline{t}] \atop q'(f\underline{x},\underline{y'})\rightarrow p[\underline{t'}]}} \Big( &\displaystyle \bigwedge_{t_i = y_{j_i}, \atop t'_i = y'_{j'_i}}& \displaystyle(y_{j_i} \doteq y'_{j'_i}) &\wedge \\ & &\displaystyle \bigwedge_{t_i = y_{j_i}, \atop t'_i = q'_i(x_{j'_i}, \underline{T'})}& \displaystyle \Psi_{b_{j'_i},q'_i}^{\prime(h-1)}(y_{j_i})[\underline{T'}/\underline{y'}] &\wedge \\ & &\displaystyle \bigwedge_{t'_i = y'_{j'_i}, \atop t_i = q_i(x_{j_i},\underline{T})}& \displaystyle \Psi_{b_{j_i},q_i}^{(h-1)}(y'_{j'_i})[\underline{T}/\underline{y}] &\wedge \\ & &\displaystyle \bigwedge_{t_i = q_i(x_{j_i},\underline{T}), \atop{t'_i = q'_i(x_{j'_i},\underline{T'}) \atop j_i = j'_i}}& \displaystyle \Phi_{b_{j_i}, (q_i,q'_i)}^{(h-1)}[\underline{T}/\underline{y},\underline{T'}/\underline{y'}] &\wedge \\ & &\displaystyle \bigwedge_{t_i = q_i(x_{j_i},\underline{T}), \atop{t'_i = q'_i(x_{j'_i},\underline{T'}) \atop j_i \neq j'_i}}& \displaystyle (\Psi_{b_{j_i},q_i}^{(h-1)}(s_{b_{j_i},q_i})[\underline{T}/\underline{y}] \wedge \Psi_{b_{j'_i},q'_i}^{\prime(h-1)}(s_{b_{j_i},q_i}[\underline{T}/\underline y])[\underline{T'}/\underline{y'}])\ \Big) &\wedge \\ &\displaystyle \bigwedge_{b(f\underline{x}) \to (b_1,\ldots,b_k) \atop{p \neq p',q(f\underline{x},\underline{y}) \rightarrow p[\underline{t}] \atop q'(f\underline{x},\underline{y'})\rightarrow p'[\underline{t'}]}} & \displaystyle \bot \end{array} \] $\Phi_{b,(q,q')}^{(h)}$ is defined along the same lines as the equivalence relation $\cong^{(h)}_b$.
$\Phi_{b,(q,q')}^{(h)}[\underline{T}/\underline{y},\underline{T'}/\underline{y'}]$ is true iff $\sem{q}(t,\underline{T}) = \sem{q'}(t,\underline{T'})$ for all $t \in{\sf dom}(b)$ of height at most $h$. By induction on $h\geq 0$, we obtain: \begin{lemma}\label{l:PhiEquiv} For a given DTA $D$, states $q,q'$ of $D$-earliest total separated basic MTTs, vectors of trees $\underline{T}, \underline{T'}$ over $\DeltaI$, a state $b$ of $D$, a tree $s\in\mathcal{T}_{\DeltaI}$, and $h\geq 0$, the following two statements hold: $$(q,\underline{T}) \cong_b^{(h)} (q',\underline{T'}) \Leftrightarrow \Phi_{b,(q,q')}^{(h)}[\underline{T}/\underline{y},\underline{T'}/\underline{y'}] \equiv \mathrm{true}$$ $$s \cong_b^{(h)} (q',\underline{T'}) \Leftrightarrow \Psi_{b,q'}^{\prime(h)}(s)[\underline{T'}/\underline{y'}] \equiv \mathrm{true}$$ \qed \end{lemma} \noindent $\Phi_{b,(q,q')}^{(h)}$ is a conjunction of equations of the form $y_i \doteq y'_j$ or $y_i \doteq t$ with $t \in \mathcal{T}_{\DeltaI\cup Y}$. Every satisfiable conjunction of equalities is equivalent to a (possibly empty) finite conjunction of equations of the form $y_{i} \doteq t_{i}$, $t_i\in\mathcal{T}_{\DeltaI\cup Y}$, where the $y_{i}$ are distinct and no equation is of the form $y_j\doteq y_j$. We call such conjunctions \emph{reduced}. If we have two inequivalent reduced conjunctions $\phi_1$ and $\phi_2$ with $\phi_1 \Rightarrow \phi_2$, then $\phi_1$ contains strictly more equations. It follows that for every sequence $\phi_0 \Rightarrow \ldots \Rightarrow \phi_m$ of pairwise inequivalent reduced conjunctions $\phi_j$ with $k$ variables, $m \leq k+1$ holds. This observation is crucial for the termination of the fixpoint iteration we will use to compute $\Phi_{b,(q,q')}^{(h)}$.
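The reduction of conjunctions to this solved form can be made concrete by a small sketch (ours, not from the paper; all names are hypothetical). Terms are variable names or tuples $(f, t_1, \ldots, t_k)$; the function decomposes equations between composite terms, orients variable equations, and reports a clash as unsatisfiability. No occurs check is performed, under the assumption that no variable has to be bound to a term containing itself:

```python
def reduce_conj(eqs):
    """Bring a conjunction of equalities s = t into reduced (solved) form
    {variable: term}, or return None if the conjunction is unsatisfiable.
    Terms are variable names (strings) or tuples (symbol, child, ...)."""
    todo, sol = list(eqs), {}

    def subst(t):
        # apply the current partial solution exhaustively
        if isinstance(t, str):
            return subst(sol[t]) if t in sol else t
        return (t[0],) + tuple(subst(c) for c in t[1:])

    while todo:
        s, t = todo.pop()
        s, t = subst(s), subst(t)
        if s == t:
            continue                        # drop trivial equations y = y
        if isinstance(s, str):
            sol[s] = t                      # orient: variable on the left
        elif isinstance(t, str):
            sol[t] = s
        elif s[0] == t[0] and len(s) == len(t):
            todo.extend(zip(s[1:], t[1:]))  # decompose f(..) = f(..)
        else:
            return None                     # symbol clash: conjunction is false
    return {y: subst(t) for y, t in sol.items()}
```

On the conjunction $(z \doteq y_1) \wedge (z \doteq h(y_2)) \wedge (z \doteq h(b))$ from the example above, this yields the reduced form $(y_2 \doteq b) \wedge (y_1 \doteq h(b)) \wedge (z \doteq h(b))$; a clash such as $a \doteq b$ yields `None`.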
For $h\geq 1$ we have: \begin{align} \Psi_{b,q}^{(h)}(z) &\Rightarrow \Psi_{b,q}^{(h-1)}(z) \\ \Phi_{b,(q,q')}^{(h)} &\Rightarrow \Phi_{b,(q,q')}^{(h-1)} \label{eq:hh1} \end{align} As we fixed the number of output parameters to $l$, for each pair $(q,q')$ the conjunction $\Phi_{b,(q,q')}^{(h)}$ contains at most $2l$ variables $y_i, y'_i$. Assuming that the MTTs to which the states $q$ and $q'$ belong have $n$ states each, we conclude that $\Phi_{b,(q,q')}^{(n^2(2l+1))} \equiv \Phi_{b,(q,q')}^{(n^2(2l+1)+i)}$ and $\Psi_{b,q}^{(n(l+1))} \equiv \Psi_{b,q}^{(n(l+1)+i)}$ for all $i \geq 0$. Thus, we can define $\Phi_{b,(q,q')} := \Phi_{b,(q,q')}^{(n^2(2l+1))}$ and $\Psi_{b,q} := \Psi_{b,q}^{(n(l+1))}$. As $(q,\underline{T}) \cong_b (q',\underline{T'})$ iff $(q,\underline{T}) \cong_b^{(h)} (q',\underline{T'})$ holds for all $h \geq 0$, observation (\ref{eq:hh1}) implies that $$(q,\underline{T}) \cong_b (q',\underline{T'}) \Leftrightarrow \Phi_{b,(q,q')}[\underline{T}/\underline{y}][\underline{T'}/\underline{y'}] \equiv \mathrm{true}$$ Therefore, we have: \begin{lemma}\label{l:Phi} For a DTA $D$, states $q, q'$ of $D$-earliest separated basic MTTs $M,M'$, and each state $b$ of $D$, the formula $\Phi_{b,(q,q')}$ can be computed in time polynomial in the sizes of $M$ and $M'$. \end{lemma} \begin{proof} We successively compute the conjunctions $\Psi_{b,q}^{(h)}(z), \Psi_{b,q'}^{\prime(h)}(z), \Phi_{b,(q,q')}^{(h)}$, $h\geq 0$, for all states $b,q,q'$. As discussed before, some $h\leq n^2(2l+1)$ exists such that the conjunctions for $h+1$ are equivalent to the corresponding conjunctions for $h$ --- in which case, we terminate. It remains to prove that the conjunctions for $h$ can be computed from the conjunctions for $h-1$ in polynomial time. For that, it is crucial that we maintain \emph{reduced} conjunctions. Nonetheless, the \emph{sizes} of occurring right-hand sides of equalities may be quite large.
Consider for example the conjunction $x_1 \doteq a \wedge x_2 \doteq f(x_1,x_1) \wedge \ldots \wedge x_n \doteq f(x_{n-1},x_{n-1})$. The corresponding reduced conjunction is then given by $x_1 \doteq a\wedge x_2 \doteq f(a,a)\wedge \ldots\wedge x_n \doteq f(f(\ldots),f(\ldots))$, where the sizes of the right-hand sides grow exponentially. In order to arrive at a polynomial-size representation, we therefore rely on compact representations where isomorphic subtrees are represented only once. W.r.t.\ this representation, reduction of a non-reduced conjunction, implications between reduced conjunctions as well as substitution of variables in conjunctions can all be realized in polynomial time. From that, the assertion of the lemma follows. \end{proof} \begin{example} Let $D$ be a DTA with the following rules $b(f(x)) \to (b)$, $b(g) \to ()$ and $b(h) \to ()$. Let $q$ and $q'$ be states of separated basic MTTs $M$, $M'$, respectively, that are $D$-earliest, and let $\pi$, $\pi'$ be the mappings from the states of $D$ to the states of $M$, $M'$ with $(b,q)\in \pi$ and $(b,q')\in\pi'$. \[\begin{array}{rcl} q(f(x),y_1,y_2) &\rightarrow& a(q(x,b(y_1,y_1),c(y_2)))\\ q(g,y_1,y_2) &\rightarrow& y_1 \\ q(h,y_1,y_2) &\rightarrow& y_2 \end{array} \] \[\begin{array}{rcl} q'(f(x),y'_1,y'_2) &\rightarrow& a(q'(x,c(y'_1),b(y'_2,y'_2)))\\ q'(g,y'_1,y'_2) &\rightarrow& y'_2 \\ q'(h,y'_1,y'_2) &\rightarrow& y'_1 \end{array} \] \begin{eqnarray*} \Phi_{b,(q,q')}^{(0)} &=& (y_1 \doteq y'_2) \wedge (y_2 \doteq y'_1) \\ \Phi_{b,(q,q')}^{(1)} &=& (y_1 \doteq y'_2) \wedge (y_2 \doteq y'_1) \wedge (b(y_1,y_1) \doteq b(y'_2,y'_2)) \wedge (c(y_2) \doteq c(y'_1)) \\ &\equiv& (y_1 \doteq y'_2) \wedge (y_2 \doteq y'_1) = \Phi_{b,(q,q')}^{(0)} \end{eqnarray*} \qed \end{example} \noindent In summary, we obtain the main theorem of our paper.
\begin{theorem}\label{t:eqpo} Let $(M,A)$ and $(M',A')$ be pairs consisting of total deterministic separated basic MTTs $M$, $M'$ and corresponding axioms $A$, $A'$, and let $D$ be a DTA. Then the equivalence of $(M,A)$ and $(M',A')$ relative to $D$ is decidable. If $D$ accepts all input trees, equivalence can be decided in polynomial time. \end{theorem} \begin{proof} By Lemma \ref{l:earliest} we construct pairs $(M_1,A_1)$ and $(M'_1, A'_1)$ that are equivalent to $(M,A)$ and $(M',A')$ where $M_1$, $M'_1$ are $D$-earliest separated basic MTTs. If $D$ is trivial, the construction is in polynomial time, cf.\ Corollary~\ref{c:total}. Let the axioms be $A_1 = p[q_1(x_{i_1},\underline{T_1}),\ldots, q_k(x_{i_k},\underline{T_k})]$ and $A'_1 = p'[q'_1(x_{i'_1}, \underline{T'_1}),\ldots,q'_{k'}(x_{i'_{k'}},\underline{T'_{k'}})]$. According to Lemma \ref{l:PhiEquiv}, $(M_1,A_1)$ and $(M'_1, A'_1)$ are equivalent iff \begin{itemize} \item $p = p'$, $k = k'$ and \item for all $j = 1,\ldots, k$, $\Phi_{b,(q_j,q'_j)}[\underline{T_j}/\underline{y},\underline{T'_j}/\underline{y'}]$ is equivalent to ${\sf true}$. \end{itemize} By Lemma~\ref{l:Phi} we can decide the second condition in time polynomial in the sizes of $M_1$ and $M'_1$. \end{proof} \section{Applications} \label{sec:app} In this section we show several applications of our equivalence result. First, we consider partial transductions of separated basic MTTs. To decide the equivalence of partial transductions we need to decide $a)$ whether the domains of two given MTTs are the same and, if so, $b)$ whether the transductions on this domain are the same. The second part of the decision procedure was described in detail in this paper for the case that the domain is given by a DTA. It therefore remains to discuss how this DTA can be obtained.
It was shown in \cite[Theorem 3.1]{DBLP:journals/mst/Engelfriet77} that the domain of every top-down tree transducer $T$ can be accepted by some DTA $B_T$, and this automaton can be constructed from $T$ in exponential time. This construction can easily be extended to basic MTTs. The equivalence of DTAs is well known to be decidable in polynomial time~\cite{DBLP:journals/actaC/GecsegS80,DBLP:books/others/tree1984}. To obtain a total transducer we add, for each pair $(q,f)$ with $q \in Q$ and $f \in \Sigma$ for which there is no rule, a new rule $q(f(\underline{x}),\underline{y}) \to \bot$, where $\bot$ is an arbitrary symbol in $\DeltaO$ of rank zero. \begin{example} In Example \ref{ex:binary} we discussed how to adjust the transducer from the introduction to our formal definition. We therefore had to introduce additional rules to obtain a total transducer. Now we still add rules for the same pairs $(q,f)$, but only with right-hand side $\bot$. Therefore the original domain of the transducer is given by the DTA $D = (R,\Sigma,r_0,\delta_D)$ with the rules $r_0(g(x_1,x_2)) \to (r(x_1), r(x_2))$, $r(f(x_1,x_2)) \to (r(x_1),r(x_2))$ and $r(i) \to (\ )$ for $i = 0,1,2$. \qed \end{example} \begin{corollary} The equivalence of deterministic separated basic MTTs with a \emph{partial} transition function is decidable. \end{corollary} Next, we show that our result can be used to decide the equivalence of total separated basic MTTs with look-ahead. A total macro tree transducer with regular look-ahead ($\text{MTT}^{\text{R}}$) is a tuple $(Q,\Sigma,\Delta,\delta,R,\delta_R)$ where $R$ is a finite set of look-ahead states and $\delta_R$ assigns to every $f \in \Sigma^{(k)}$ a total function from $R^k$ to $R$. Thus, $(R,\Sigma,\delta_R,-)$ is a total deterministic bottom-up tree automaton (without final states).
A rule of the MTT is of the form $$q(f(t_1,\ldots, t_k),y_1,\ldots, y_k) \to t \hspace{1cm} \angl{p_1,\ldots, p_k}$$ and is applicable to an input tree $f(t_1,\ldots,t_k)$ if the look-ahead automaton accepts $t_i$ in state $p_i$ for all $i = 1,\ldots, k$. For every $q, f, p_1,\ldots, p_k$ there is exactly one such rule. Let $N_1 = (Q_1,\Sigma_1,\Delta_1,\delta_1,R_1,\delta_{R1}), N_2 = (Q_2,\Sigma_2,\Delta_2,\delta_2,R_2,\delta_{R2})$ be two total separated basic MTTs with look-ahead. We construct total separated basic MTTs $M_1, M_2$ \emph{without} look-ahead as follows. The input alphabet contains for every $f \in \Sigma$ and $r_1,\ldots,r_k \in R_1$, $r'_1,\ldots, r'_k \in R_2$ the symbols $\angl{f,r_1,\ldots,r_k,r'_1,\ldots,r'_k}$. For $q(f(x_1,\ldots, x_k),\underline{y}) \to p[T_1,\ldots,T_m]\ \angl{r_1,\ldots, r_k}$ and $q'(f(x_1,\ldots, x_k),\underline{y'}) \to p'[T'_1,\ldots,T'_m]\ \angl{r'_1,\ldots, r'_k}$ we obtain for $M_1$ the rules $$\hat{q}(\angl{f(x_1,\ldots, x_k), r_1,\ldots, r_k, r'_1,\ldots, r'_k},\underline{y}) \to p[\hat{T}_1,\ldots, \hat{T}_m]$$ with $\hat{T}_i = \hat{q}_{i}(\angl{x_{j_i}, \hat{r_1},\ldots, \hat{r_l},\hat{r'_1},\ldots, \hat{r'_l}},Z_{i})$ if $T_i = q_i(x_{j_i},Z_i)$ and $q_i(x_{j_i},\underline{y}) \to \hat{T_i}\ \ \angl{\hat{r_1},\ldots,\hat{r_l}}$ and $q'_i(x_{j_i},\underline{y'}) \to \hat{T'_i}\ \ \angl{\hat{r'_1},\ldots,\hat{r'_l}}$. If $T_i = y_{j_i}$ then $\hat{T}_i = y_{j_i}$. The total separated basic MTT $M_2$ is constructed along the same lines. Thus, $N_i$ can be simulated by $M_i$, $i=1,2$, if the input is restricted to the regular tree language of new input trees that represent correct runs of the look-ahead automata. \begin{corollary} The equivalence of total separated basic MTTs with regular look-ahead is decidable in polynomial time. \end{corollary} Last, we consider separated basic MTTs that concatenate strings instead of trees in the parameters.
We abbreviate this class of transducers by $\text{MTT}^\text{yp}$. Thus, the alphabet $\DeltaI$ is no longer a ranked alphabet but an unranked alphabet whose elements/letters can be concatenated to words. The procedure to decide equivalence of $\text{MTT}^\text{yp}$ is essentially the same as the one discussed in this paper, but instead of conjunctions of equations of trees over $\DeltaI \cup Y$ we obtain conjunctions of equations of words. Word equations are a well-studied problem~\cite{Makanin1977,DBLP:conf/focs/Plandowski99,Lothaire2002}. In particular, the (confirmed) Ehrenfeucht conjecture states that each conjunction of a set of word equations over a finite alphabet and using a finite number of variables is equivalent to the conjunction of a finite subset of these word equations \cite{DBLP:journals/tcs/Honkala00a}. Accordingly, by a similar argument as in Section \ref{s:poly}, the sequences of conjunctions $\Psi_{b,q}^{(h)}(z),{\Psi'}_{b,q'}^{(h)}(z),\Phi_{b,(q,q')}^{(h)}$, $h\geq 0$, are ultimately stable. \begin{theorem} The equivalence of total separated basic MTTs that concatenate words instead of trees in the parameters ($\DeltaI$ is unranked) is decidable. \end{theorem} \section{Introduction} Attribute grammars are a well-established formalism for realizing computations on syntax trees~\cite{Knuth1968,DBLP:journals/mst/Knuth71}, and implementations are available for various programming languages, see, e.g.,~\cite{Silver2010,Kiama2013,JavaRAG2015}. A fundamental question for any such specification formalism is whether two specifications are semantically equivalent. As a particular case, attribute grammars have been considered which compute uninterpreted trees. Such devices, which translate input trees (viz.\ the parse trees of a context-free grammar) into output trees, have also been studied under the name ``attributed tree transducer''~\cite{DBLP:journals/actaC/Fulop81} (see also~\cite{DBLP:series/eatcs/FulopV98}).
In 1982, Courcelle and Franchi-Zannettacci showed that the equivalence problem for (strongly noncircular) attribute systems reduces to the equivalence problem for primitive recursive schemes with parameters~\cite{Courcelle1982,DBLP:journals/tcs/CourcelleF82}; the latter model is also known under the name \emph{macro tree transducer}~\cite{DBLP:journals/jcss/EngelfrietV85}. Whether or not equivalence of attributed tree transducers (ATTs) or of (deterministic) macro tree transducers (MTTs) is decidable remain two intriguing (and very difficult) open problems. For several subclasses of ATTs it has been proven that equivalence is decidable. The most general and very recent result, which covers almost all other known ones about deterministic tree transducers, is that ``deterministic top-down tree-to-string transducers'' have decidable equivalence~\cite{DBLP:journals/jacm/SeidlMK18}. Notice that the complexity of this problem remains unknown (the decidability is proved via two semi-decision procedures). The only result concerning deterministic tree transducers that we are aware of and that is \emph{not} covered by this general result is the one by Courcelle and Franchi-Zannettacci about decidability of equivalence of ``separated non-nested'' ATTs (which they reduce to the same problem for ``separated non-nested'' MTTs). However, in their paper no statement is given concerning the complexity of the problem. In this paper we close this gap and study the complexity of deciding equivalence of separated non-nested MTTs. To do so we propose a new approach that we feel is simpler and easier to understand than the one of~\cite{Courcelle1982,DBLP:journals/tcs/CourcelleF82}. Using our approach we can prove that the problem can be solved in polynomial time.
\begin{figure}\label{fig:M_tern} \vspace*{-8mm} \centering \Tree [.$g$ [.$f$ [.$f$ [.$f$ $2$ $1$ ] $0$ ] $1$ ] [.$f$ $0$ $2$ ] ] \Tree [.$+$ [.$+$ [.$*$ $1$ [.EXP $3$ $z$ ] ] [.$+$ [.$*$ $0$ [.EXP $3$ [.$s$ $z$ ] ] ] [.$+$ [.$*$ $1$ [.EXP $3$ [.$s$ [.$s$ $z$ ] ] ] ] [.$*$ $2$ [.EXP $3$ [.$s$ [.$s$ [.$s$ $z$ ] ] ] ] ] ] ] ] [.$+$ [.$*$ $0$ [.EXP $3$ [.$p$ $z$ ] ] ] [.$*$ $2$ [.EXP $3$ [.$p$ [.$p$ $z$ ] ] ] ] ] ] \caption{Input tree for 2101.02 (in ternary) and corresponding output tree of $M_{\text{tern}}$.} \end{figure} In a separated non-nested attribute system, distinct sets of operators are used for the construction of inherited and synthesized attributes, respectively, and inherited attributes may depend on inherited attributes only. Courcelle and Franchi-Zannettacci's algorithm first translates separated non-nested attribute grammars into separated total deterministic non-nested macro tree transducers. In the sequel we will use the more established term \emph{basic} macro tree transducers instead of non-nested MTTs. Here, a macro tree transducer is called \emph{separated} if the alphabets used for the construction of parameter values and outside of parameter positions are disjoint. And the MTT is \emph{basic} if there is no nesting of state calls, i.e., there are no state calls inside of parameter positions. Let us consider an example. We want to translate ternary numbers into expressions over $+$, $*$, $\text{EXP}$, plus the constants $0$, $1$, and $2$. Additionally, operators $s$, $p$, and $z$ are used to represent integers in unary. The ternary numbers are parsed into particular binary trees; e.g., the left of Figure~\ref{fig:M_tern} shows the binary tree for the number $2101.02$. This tree is translated by our MTT into the tree in the right of Figure~\ref{fig:M_tern} (which indeed evaluates to $64.\overline{2}$ in decimal). The rules of our transducer $M_{\text{tern}}$ are shown in Figure~\ref{fig:rules}.
\begin{figure}[htb] \[ \begin{array}{lcl} q_0(g(x_1,x_2)) &\ \to \ \quad& +(q(x_1,z), q'(x_2,p(z)))\\ q(f(x_1,x_2),y) &\ \to \ \quad& +(r(x_2,y), q(x_1,s(y)))\\ q'(f(x_1,x_2),y) &\ \to \ \quad& +(r(x_1,y), q'(x_2,p(y)))\\ \phi(i,y) &\ \to \ \quad& *(i,\text{EXP}(3,y))\quad\text{for }i\in\{0,1,2\}, \phi\in\{q,q',r\} \end{array} \] \caption{Rules of the transducer $M_{\text{tern}}$.}\label{fig:rules} \end{figure} The example is similar to the one used by Knuth~\cite{Knuth1968} in order to introduce attribute grammars. The transducer is indeed basic and separated: the operators $p$, $s$, and $z$ are only used in parameter positions. Our polynomial time decision procedure works in two phases: first, the transducer is converted into an ``earliest'' normal form. In this form, output symbols that are not produced within parameter positions are produced as early as possible. In particular it means that the root output symbols of the right-hand sides of the rules of one state cannot all be the same. For instance, our transducer $M_{\text{tern}}$ is \emph{not} earliest, because all three $r$-rules produce the same output root symbol ($*$). Intuitively, this symbol should be produced earlier, e.g., at the place where the state $r$ is called. The earliest form is a common technique used for normal forms and equivalence testing of different kinds of tree transducers~\cite{DBLP:journals/jcss/EngelfrietMS09,DBLP:journals/ijfcs/FrieseSM11,Laurence2011}. We show that equivalent states of a transducer in this earliest form produce their state-output exactly in the same way. In particular, this means that the output in parameter positions is produced in the same places. It is therefore left to check, in the second phase, that these parameter outputs are equivalent as well. To this end, we build an equivalence relation on states of earliest transducers that combines the two equivalence tests described before. Technically speaking, the equivalence relation is tested by constructing sets of Herbrand equalities.
From these equalities, a fixpoint algorithm can, after polynomially many iterations, produce a stable set of equalities. An abridged version of this paper will be published in the proceedings of FoSSaCS 2019.
\section{Introduction} The structure adopted by a collection of particles is ultimately governed by the energetic interactions between the particles. It is therefore natural to ask what sort of structures are favoured by a given set of interactions, and also whether interactions can be chosen or manipulated in order to produce a particular target structure. The scope of both these questions is growing increasingly broad as it becomes possible to exert ever greater control over the shape and form of the interactions between molecular and colloidal building blocks \cite{Glotzer04b}. The motivation for seeking deeper understanding in these areas is the desire to design novel materials and supramolecular structures with unusual and useful properties. \par Considerable structural variety is possible even for spherical particles and isotropic interparticle potentials. Simple van der Waals interactions between inert gas atoms promote near-spherical, highly-coordinated structures, favouring icosahedral packing for clusters \cite{Hoare71a,Echt81a} and close-packed crystals in the bulk. However, these structures can be suppressed by introducing a repulsive barrier into the pair potential at a distance close to $\sqrt{2}$ times the nearest neighbour separation \cite{Dzugutov92a}. In clusters, potentials of this form further promote icosahedral local order, but lead to polytetrahedral structures \cite{Doye01b}, or to less compact shapes that are either elongated or contain holes \cite{Doye01a}. Local maxima in the pair potential can arise, for example, from the combination of short-range depletion attraction between colloidal particles with partially screened long-range Coulomb repulsion \cite{Mossa04a}, in which case the accumulated charge of a growing cluster can effectively limit the size of aggregate that forms. It is also possible to favour less highly-coordinated order by careful design of isotropic pair potentials. 
For example, inverse statistical mechanical techniques can be used to derive isotropic potentials that favour crystalline but non-close-packed bulk structures, such as the diamond and wurtzite lattices \cite{Rechtsman07b}. \par A vast range of structures become possible when either non-spherical particles or anisotropic interactions are considered. Evolution has selected molecules with interactions that lead to the self-assembled structures that we observe in living matter, including pseudo one-dimensional filaments, two-dimensional membranes, tube-like channels and pores, and shell-like capsules. Fascinating examples in the latter category are the capsids of viruses, many of which are spheroidal and are constructed from a specific number of copies of a small number of different proteins \cite{Crick56a}. It has long been known that the capsomers of some viruses can assemble {\it in vitro} into empty shells even in the absence of the viral genetic material \cite{Bancroft67a}, providing a powerful demonstration of how molecular shape and interactions dictate the structure of aggregates. For proteins, the symmetry and binding in oligomers is determined by the contacts between neighbours in the complex, and it is now becoming possible to engineer the quaternary structure by modifying the contact surface through mutations of the amino acid sequence \cite{Grueninger08a}. \par Drawing inspiration from Nature, and encouraged by rapid advances in the synthesis of tailor-made building blocks, computational scientists are trying to understand the principles of self-assembly and how these can be used to build designed structures. Explicit models of polyhedral shell assembly have shown how difficult this objective can be \cite{Rapaport04a}. While entropic and kinetic considerations are undoubtedly crucial for successful self-assembly, there is also a clear requirement for target structures to be energetically stable and kinetically accessible \cite{Wales05a,Wales06a}. 
A logical starting point is therefore to design building blocks with attractive sites or ``patches'' in geometries compatible with the target structure. Monodisperse discrete objects, as well as continuous structures, such as sheets, can be constructed in this way \cite{Zhang04a,Wilber07a}. \par Highly directional interactions can also be achieved through a non-uniform charge distribution in particles that remain neutral overall. For example, dipolar particles will endeavour to form chains, since the head-to-tail arrangement of two dipoles is low in energy and mechanically stable. The tendency to form chains is partially frustrated in the presence of competition for more compact arrangements arising from an isotropic van der Waals or depletion attraction \cite{Clarke94a,VanWorkum05a}. A simple model incorporating these features is the Stockmayer potential, consisting of particles with a Lennard-Jones (LJ) site plus a central point dipole. By adjusting the relative strength of the LJ and dipolar contributions, the energetically most stable morphology of the 13-particle Stockmayer cluster changes in four stages from a distorted icosahedron to a closed ring \cite{Clarke94a,Oppenheimer04a}. For slightly larger sizes, knots, links and coils emerge \cite{Miller05a}. \par Van Workum and Douglas have investigated the self-assembly of chains in low-density Stockmayer fluids \cite{VanWorkum05a}, regarding the process as a form of reversible polymerisation. These authors have also considered a natural extension of the Stockmayer potential to higher multipoles, in particular an LJ site plus point quadrupole \cite{VanWorkum06a,Douglas07a}. Quadrupole--quadrupole interactions favour the formation of extended two-dimensional sheets, which can produce tubes when the edges become connected. 
The LJ-plus-multipole class of potentials therefore provides control over the preference for compact (three-dimensional), sheet-like (two-dimensional) and chain-based (one-dimensional) structure in self-assembly. These tendencies compete with each other. In the present contribution we provide a systematic survey of the structure of small clusters of quadrupolar spheres in an attempt to identify and understand the structural motifs that emerge from the frustration between the isotropic and directional components of the potential. \section{Methods} \subsection{Model potential} The quadrupolar sphere is modelled as an isotropic Lennard-Jones site with a point quadrupole of variable strength superimposed \cite{OShea97a,VanWorkum06a}. The pair potential, which we denote as LJQ, is of the form \begin{multline*} V_{ij}({\bf R}_{ij},\Omega_i,\Omega_j) = 4u\left[\left(\frac{\sigma}{R_{ij}}\right)^{12}-\left(\frac{\sigma}{R_{ij}}\right)^6\right]\\ +V_{\rm Q}({\bf R}_{ij},\Omega_i,\Omega_j), \end{multline*} where ${\bf R}_{ij}$ is the vector from particle $i$ to $j$, $R_{ij}$ is the magnitude of this vector, and $\Omega_i$ represents the orientational degrees of freedom of particle $i$. The parameters $u$ and $\sigma$ are the Lennard-Jones dimer equilibrium well depth and separation, respectively, and will be used as the units of energy and length henceforth. $V_{\rm Q}$ is the quadrupole--quadrupole interaction, which depends on the component(s) of the quadrupole tensor involved. \par In this work, we consider the two quadrupolar arrangements of charges depicted in \fig{quaddef}. In each case, the point quadrupole is reached by taking the limit in which the separation of the charges $d$ goes to zero, while the strength $Q=qd^2$ of the quadrupole moment is held fixed. 
For the linear arrangement of charges, the interaction between two point quadrupoles $i$ and $j$ can be written in terms of the unit vectors ${\bf e}_{iz}$ and ${\bf e}_{jz}$ along the body-fixed $z$-axes of the particles as \begin{eqnarray} V_{\rm Q}^{\rm lin}&=& 3(Q^*)^2u\left(\frac{\sigma}{R_{ij}}\right)^{5} \times \label{Vlin} \\ && \big(1+2c_{zz}^2 - 20c_{zz}r_{iz}r_{jz} - 5r_{iz}^2 - 5r_{jz}^2 + 35r_{iz}^2r_{jz}^2\big), \nonumber \end{eqnarray} where $c_{\alpha\beta}={\bf e}_{i\alpha}\cdot{\bf e}_{j\beta}$, and $r_{i\alpha}={\bf e}_{i\alpha}\cdot{\bf R}_{ij}/R_{ij}$, $r_{j\alpha}={\bf e}_{j\alpha}\cdot{\bf R}_{ij}/R_{ij}$ (note the sign convention with respect to the direction of ${\bf R}_{ij}$), with $\alpha$ and $\beta$ representing $x$, $y$ or $z$. The dimensionless parameter $Q^*=Q/(4\pi\epsilon\sigma^5u)^{1/2}$ is the reduced quadrupole strength, $\epsilon$ being the dielectric permittivity of the medium. For the ``linear'' quadrupole defined by \eq{Vlin}, we represent the orientation ${\bf e}_{iz}$ of the quadrupole using spherical polar angles. \begin{figure} \centerline{\includegraphics[width=80mm]{quaddef}} \caption{Definition of the charge distributions in the ``linear'' (left) and ``square'' (right) quadrupoles with the local axis frame. The point quadrupole is obtained in each case by taking $d\to0$ while keeping $Q=qd^2$ fixed. \label{quaddef} } \end{figure} \par In the $d\to0$ limit, the pair interaction for the square arrangement of charges in \fig{quaddef} is \begin{multline*} V_{\rm Q}^{\rm squ}=\textstyle{\frac{3}{4}}(Q^*)^2u\left(\displaystyle\frac{\sigma}{R_{ij}}\right)^{5} \big(2c_{yy}^2 - 2c_{yz}^2 - 2c_{zy}^2 + 2c_{zz}^2 \\ - 20c_{yy}r_{iy}r_{jy} + 20c_{yz}r_{iy}r_{jz} + 20c_{zy}r_{iz}r_{jy}\\ - 20c_{zz}r_{iz}r_{jz} +35r_{iy}^2r_{jy}^2 - 35r_{iy}^2r_{jz}^2\\ - 35r_{iz}^2r_{jy}^2 + 35r_{iz}^2r_{jz}^2\big).
\end{multline*} Three variables are now required to specify the orientation of the quadrupole, and we have chosen to represent the vectors ${\bf e}_{iy}$ and ${\bf e}_{iz}$ in terms of Euler angles. The LJQ potential with $V_Q=V_{\rm Q}^{\rm squ}$ has received less attention \cite{VanWorkum06a,Douglas07a} in the past than LJQ with $V_Q=V_{\rm Q}^{\rm lin}$. \par The linear quadrupole corresponds directly to the spherical tensor component $Q_{20}$. The square quadrupole can be reached continuously from the linear arrangement by increasing the angle $\theta$ in \fig{quaddef} from 0 to $\pi/2$, which corresponds to introducing a contribution from the component $Q_{22c}$. At $\theta=\pi/2$, the combination is $\frac{3}{2}Q_{20}+\frac{1}{2}\sqrt{3}Q_{22c}$. The interactions between components of the quadrupole in the spherical tensor representation are tabulated in Appendix F of Ref.~\cite{Stone97a}. \par We will need to perform local geometry optimisations using the LJQ potentials, and have therefore derived and coded their analytic derivatives with respect to the Cartesian position coordinates and the angular orientational variables. \subsection{Global optimisation} We performed unbiased searches for the global minima of clusters bound by the LJQ potentials using the basin-hopping algorithm \cite{Wales97a}, in which a Monte Carlo simulation is run on a transformed potential energy surface (PES) by performing a local minimisation of the energy at each step. The local minimisation \cite{Li87a} is key to the success of basin-hopping \cite{Wales99a}, and is a feature shared by other efficient methods of global optimisation for clusters, such as certain genetic algorithms \cite{Johnston03a}. \par For a given number of particles, $N$, and quadrupole strength, $Q^*$, several runs seeded from different random initial positions and orientations were performed. 
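Because the radial prefactor of $V_{\rm Q}^{\rm lin}$ is positive, the sign of the angular factor alone determines whether a given mutual orientation is attractive. The following minimal sketch (illustrative only; the function name is ours, not the authors') evaluates that factor for two simple dimer geometries:

```python
def vlin_bracket(e_i, e_j, r_hat):
    """Angular factor of V_Q^lin:
    1 + 2*c_zz^2 - 20*c_zz*r_iz*r_jz - 5*r_iz^2 - 5*r_jz^2 + 35*r_iz^2*r_jz^2,
    with c_zz = e_i . e_j and r_iz, r_jz the projections of the quadrupole
    axes onto the unit interparticle vector R_ij / R_ij."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    c_zz = dot(e_i, e_j)
    r_iz, r_jz = dot(e_i, r_hat), dot(e_j, r_hat)
    return (1.0 + 2.0 * c_zz**2 - 20.0 * c_zz * r_iz * r_jz
            - 5.0 * r_iz**2 - 5.0 * r_jz**2 + 35.0 * r_iz**2 * r_jz**2)

r_hat = (1.0, 0.0, 0.0)  # unit vector along R_ij

# T-shape: one quadrupole axis along R_ij, the other perpendicular to it.
t_shape = vlin_bracket((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), r_hat)

# End-to-end: both quadrupole axes along R_ij.
end_to_end = vlin_bracket((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), r_hat)
```

The T-shaped geometry gives $-4$ (attractive), while the end-to-end alignment gives $+8$ (repulsive), consistent with the T-shaped nearest-neighbour motif that dominates the structures discussed below.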
The number of Monte Carlo steps required to find a putative global minimum reliably in independent runs depends strongly on the size of the cluster and the strength of the quadrupole moment. It is also important to select a reasonable temperature for the accept/reject step in the basin-hopping runs. Although the success of the method is not very sensitive to the temperature, it must be high enough for the search to escape from local traps, but not so high that we fail to sample the low-lying minima in each region sufficiently. A fixed reduced temperature of $kT/u=1$ was often found to work well. \par To generate a structural map for the clusters, it is necessary to explore the two-dimensional parameter space defined by the size of the cluster and the strength of the quadrupole. The straightforward approach of running basin-hopping on a grid of $Q^*$ points for each $N$ would be inefficient, since a small change in $Q^*$ will often lead only to a relaxation of the global minimum, with qualitative changes to a new structure occurring at larger intervals in $Q^*$. We have therefore devised a surveying scheme with an iterative element, designed to identify the values of $Q^*$ where the identity of the global minimum changes for a given $N$. \par The algorithm begins with thorough searches for the global potential energy minimum at two values of the quadrupole strength, $Q^*_{\rm low}$ and $Q^*_{\rm high}$, that are far enough apart to lead to qualitatively different structures. These structures are then relaxed by local minimisation on a grid of $Q^*$ points that lie between $Q^*_{\rm low}$ and $Q^*_{\rm high}$, yielding the energy of each structure as a function of $Q^*$. During this process, it is possible that one or both of the minima will disappear at some value of $Q^*$ due to a catastrophe in the PES \cite{Wales01b}, leading to a sudden change in energy as the structure falls into a different basin of attraction.
In this case, a new basin-hopping run is performed at the $Q^*$ where the catastrophe occurred and the scan in $Q^*$ is continued. \par Eventually, the energies of the relaxed structures initiated from $Q^*_{\rm low}$ and $Q^*_{\rm high}$ cross at some quadrupole strength $Q^*_{\rm cross}$. New basin-hopping runs are then performed to identify the global minima at $Q^*_{\rm cross}\pm\delta Q^*$, a little above and below the crossing point. If the basin-hopping run at $Q^*_{\rm cross}-\delta Q^*$ returns the same structure and energy as the relaxed structure from $Q^*_{\rm low}$, then we assume that this structure was the global minimum not only at $Q^*_{\rm low}$ and at $Q^*_{\rm cross}-\delta Q^*$, but also at all values of $Q^*$ in the intervening range. In other words, we assume that there are no reentrant global minimum structures. This assumption is not only intuitively reasonable, given that $Q^*$ continuously changes the potential from isotropic van der Waals attraction to highly directional electrostatic interactions, but it is also borne out by careful checks of particular cases. The range $Q^*_{\rm cross}+\delta Q^*$ to $Q^*_{\rm high}$ was treated analogously. Since a local relaxation is much faster than a full basin-hopping run, this procedure is far more efficient than using basin-hopping afresh at each intermediate $Q^*$ value. \par If, on the other hand, the basin-hopping runs at $Q^*_{\rm cross}\pm\delta Q^*$ return new structures with lower energy than the relaxed structures from $Q^*_{\rm low}$ and $Q^*_{\rm high}$, then this global minimum supersedes them and is in turn relaxed at values of $Q^*$ in both directions away from $Q^*_{\rm cross}$ until its energy rises above those of the previous structures. A new check for the true global minimum must then be performed at each new crossing point. The procedure was terminated when no lower minima were found at the crossing points of relaxed structures.
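The control flow of this surveying scheme can be sketched in a few lines if the expensive basin-hopping search is replaced by a stand-in that simply takes the lowest of a pool of candidate energy curves $E(Q^*)$. The function names and toy energies below are illustrative, not the authors' code:

```python
def survey(candidates, q_low, q_high, dq=0.01):
    """Scan Q* and record where the identity of the lowest-energy
    structure changes.  `candidates` maps a structure label to a
    callable E(Q*); global_min stands in for a basin-hopping run."""
    def global_min(q):
        return min(candidates, key=lambda label: candidates[label](q))

    boundaries = []
    q, current = q_low, global_min(q_low)   # thorough search at the low end
    while q < q_high:
        q = round(q + dq, 10)
        best = global_min(q)                # relaxation + energy comparison
        if best != current:                 # energy curves of two motifs crossed
            boundaries.append((q, current, best))
            current = best
    return boundaries

# Toy energy curves: a compact LJ-like motif destabilised by the quadrupole,
# and a sheet stabilised by it (numbers chosen only so the curves cross at Q* = 2).
compact = lambda q: -10.0 + 2.0 * q
sheet   = lambda q:  -4.0 - 1.0 * q
changes = survey({"compact": compact, "sheet": sheet}, 0.0, 3.0)
```

In the real scheme, the energy comparison at each grid point uses locally relaxed structures, and a fresh basin-hopping run is launched only near the detected crossing points.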
Hence, full basin-hopping runs need only be performed close to the locations where the identity of the global minimum changes. \section{Results} \subsection{Local coordination of quadrupoles} For a given separation ${\bf R}$, the energetically optimal arrangement of two point quadrupoles is with the local $y$ axis on one particle and the local $z$ axis on the other aligned with ${\bf R}$ and the other two local axes coplanar. This is true for any value of $\theta$ in \fig{quaddef}, but we will refer to the arrangement as a ``T-shape,'' which is most clearly seen for the linear case, $\theta=0$. \begin{figure} \centerline{\includegraphics[width=70mm]{motifs}} \caption{Characteristic coordination motifs for the linear (left) and square (right) quadrupoles: global minima for the trimer (upper panels) and tetramer (lower) for a strong quadrupole (large $Q^*$). Extended charge arrangements are shown for illustration only; all calculations are in the point quadrupole limit. In the case of LQ$_4$ (lower left), the quadrupole axes are not coplanar. \label{motifs} } \end{figure} For the trimers, denoted LQ$_3$ and SQ$_3$ for the linear and square quadrupoles, respectively, triangular arrangements are optimal, as shown in the upper panels of \fig{motifs}. Despite the significant distortion away from three ideal T-shaped pair interactions, the energies of the trimers are each about $2.8$ times the respective dimer energies at $Q^*=5$. \par In the tetramers (lower panels of \fig{motifs}), the strain is relieved, making four undistorted T-shapes possible. For an interior angle of $45^\circ$, the slipped-parallel arrangement of the next-nearest neighbours is also favourable, further lowering the total energy of the tetramers to about $4.5$ times that of the respective dimers at $Q^*=5$. An important difference between the linear and square quadrupoles is demonstrated by the tetramers. 
The axial arrangement of charges in the linear quadrupole means that rotation of a quadrupole about a local $z$ axis makes no difference to the energy. LQ$_4$ is therefore able to lower its energy by twisting the quadrupolar axes slightly. This distortion places diagonally opposite particles above and below the plane of the projection in the lower-left panel of \fig{motifs}, allowing next-nearest neighbours to approach more closely. In contrast, SQ$_4$ does not have this flexibility, and the structure in the lower-right panel of \fig{motifs} is completely planar. \par We note that the T-shape and slipped-parallel pair geometries are stationary points for dimers of the linear quadrupolar molecule carbon dioxide. However, in contrast to the LJQ model, the T-shape of (CO$_2$)$_2$ is a saddle point, while the slipped-parallel geometry is stable \cite{Bukowski99a}. \subsection{Strong and weak quadrupole limits} \begin{figure} \centerline{\includegraphics[width=85mm]{maps}} \caption{Structural maps for (a) the square and (b) the linear quadrupole: relaxed LJ structure (filled circle), sheet (filled square), stacked triangular antiprisms (open upward triangle), decorated triangular antiprisms (filled upward triangle), stacked square antiprisms (open square), filled stacked pentagonal antiprisms (plus), hollow shell (open circle), filled shell (filled downward triangle), lattice-like (cross), decahedral core (filled diamonds), star (star). \label{maps} } \end{figure} The T-shaped nearest-neighbour geometry favoured by the quadrupole--quadrupole interactions encourages the formation of two-dimensional square networks. However, this tendency is frustrated by the LJ part of the potential, which drives the structure towards compact, highly-coordinated arrangements with polytetrahedral or icosahedral packing \cite{Wales97a}. This competition produces a series of structural motifs that partially satisfy the two opposing trends. 
\fig{maps} summarises the structural maps of SQ$_N$ and LQ$_N$ as a function of the number $N$ of particles and the strength $Q^*$ of the quadrupole moment. Some structures are difficult to classify in an unambiguous or meaningful way, and such combinations of $N$ and $Q^*$ have been left blank in the figure for clarity. \begin{figure} \centerline{\includegraphics[width=85mm]{structures}} \caption{Structures discussed in the text. (a) SQ$_{16}$ sheet at $Q^*=2$, (b) LQ$_{16}$ sheet at $Q^*=1.5$, (c) SQ$_{13}$ cuboctahedron at $Q^*=0.4$, (d) SQ$_{24}$ filled shell at $Q^*=0.6$, (e) SQ$_{24}$ decorated triangular antiprisms at $Q^*=0.4$, (f) LQ$_{12}$ star at $Q^*=1.025$. \label{structures} } \end{figure} For a sufficiently weak quadrupole moment, the global minimum must be close to the LJ global minimum structure, but with slight distortions induced by the quadrupoles. However, effectively confining the quadrupoles to an icosahedral framework for SQ$_{13}$ causes the quadrupoles to experience severe frustration, akin to that found in geometrically frustrated magnets \cite{Harrison04a}. Hence, a given arrangement of the particles can correspond to multiple potential energy minima in the orientational part of configuration space, introducing a new source of complexity to the PES. The number of such isomers of the LJ structure generally increases with $Q^*$, but also depends sensitively and non-monotonically on $N$. For the near-icosahedral SQ$_{13}$ there are two distinct isomers at $Q^*=0.025$, while for SQ$_{19}$, where the global minimum is based on two interpenetrating icosahedra for this value of $Q^*$, we located 23 distinct arrangements by quenches of the LJ structure starting from random quadrupole orientations. This figure rises to several hundred for SQ$_{19}$ at $Q^*=0.1$. Although these searches are not definitively exhaustive, the rapid increase in the number of isomers illustrates the roughness of the PES with respect to the orientational coordinates. 
\par In the opposite limit of large $Q^*$, a two-dimensional sheet always emerges. The sheets for the square number $N=16$ are shown in \fig{structures}. The twist seen in LQ$_4$ is continued as the sheet grows, while the sheets of square quadrupoles are always planar. The sheet grows by adding particles at adjacent sites of the extended square lattice, with the dimensions of the lattice adapting to maximise the number of T-shaped pairs in the first instance. It is often possible to achieve the maximum number of such pairs in more than one way on a square lattice, and next-nearest neighbour interactions then come into play. Hence, there can be close competition between structures even in the strong quadrupole limit. In contrast, the Stockmayer potential always has an unambiguous optimal structure consisting of a planar ring of head-to-tail dipoles in the strong dipole limit \cite{Miller05a}. The lower boundary of the quadrupole sheet on the structural map moves (non-monotonically) to higher $Q^*$ as $N$ increases, because a larger number of LJ pair interactions must be disrupted to create the sheet. The widening region between the relaxed LJ cluster and the sheet is occupied by structures that strike a compromise between high-coordination number and sheet-like arrangements. \subsection{The 13-particle cluster} \begin{figure} \centerline{\includegraphics[width=85mm]{growth}} \caption{Part of some growth sequences for the square quadrupole. (a) stacked triangular antiprisms, (b) stacked square antiprisms, (c) hollow shells. \label{growth} } \end{figure} The 13-particle cluster, which for the pure LJ potential has a global minimum consisting of a centred icosahedron with point group $I_h$, provides a good illustration of the sequence of changes driven by the quadrupole in small clusters. For SQ$_{13}$ with small $Q^*$, the quadrupole of the central particle aligns itself perpendicular to one of the icosahedral $C_2$ axes. 
The resulting small distortions of the particle positions lower the symmetry to $C_{2h}$. The quadrupole--quadrupole interactions are highly frustrated when confined to the vertices of the distorted icosahedron, and at $Q^*=0.35$ the global minimum switches to a slightly distorted centred cuboctahedron ($D_{2d}$)---a fragment of the face-centred cubic lattice (\fig{structures}c). This structure maintains the 12-fold coordination of the central particle but the shell consists of squares and triangles, with quadrupole arrangements closer to those in the right-hand panels of \fig{motifs}. At $Q^*=0.525$, a second change occurs, to a stack of face-sharing triangular antiprisms (illustrated in the second panel of \fig{growth}a), which can also be regarded as a narrow tube if viewed down the three-fold axis. This structure sacrifices the high coordination of a more spherical lattice-like fragment for an elongated arrangement, in which the quadrupoles are better aligned. The switch to the sheet structure then takes place in two steps. First, at $Q^*=0.8$ a $3\times4$ sheet arises with the thirteenth particle in the same plane, bridging the central bond of a long side. At $Q^*=1.15$, the triangular face becomes too unfavourable, and the sheet adopts a $4\times4$ square with three of the corners missing. \begin{figure} \centerline{\includegraphics[width=80mm]{orientations}} \caption{The 13-particle icosahedron (point group $I_h$) viewed along a $C_3$ axis and the two lowest-energy LQ$_{13}$ isomers at $Q^*=0.025$. Both belong to point group $S_6$. \label{orientations} } \end{figure} A related sequence emerges with $Q^*$ for the cluster of 13 linear quadrupoles. Like the dipole moment in the 13-particle Stockmayer cluster, the axis of the quadrupole on the central particle in LQ$_{13}$ selects one of the $C_3$ axes of the icosahedron, here reducing the symmetry from $I_h$ to $S_6$. However, there are four other icosahedral minima differing by the quadrupole orientations.
The two isomers with the lowest energy are illustrated in \fig{orientations}. With increasing $Q^*$, the cluster eventually passes to the stacked triangular antiprisms, but does so through a rather amorphous structure, unlike the highly symmetric cuboctahedron seen in SQ$_{13}$. The linear quadrupole can tolerate a considerably higher $Q^*$ before switching to the sheet than can the square quadrupole---an observation that holds for all $N$ studied here. \subsection{Structural families and growth sequences} A number of structural families emerge in LJQ clusters and persist over some range of $Q^*$ and $N$. The stacked triangular antiprisms described above for 13 particles are seen for SQ$_N$ with $9\le N\le15$. When $N$ is not a multiple of three, the end particles form an incomplete layer, giving rise to a simple growth sequence, part of which is depicted in \fig{growth}a. This family does not continue indefinitely as the global minimum, but is replaced for $16\le N\le21$ by stacked square antiprisms (\fig{growth}b). The diamond-like faces on the surface of the square-based structure are flatter and closer to the ideal tetramer arrangement than those on the surface of the triangular antiprismatic stack. \par Increasing the circumference of these stacks by another particle to make pentagons makes the diameter of the structure large enough to accommodate a line of particles down the centre of the stack, giving a filled tube-like arrangement. The additional contacts provided by the central line make such structures competitive at lower $Q^*$ than the stacked squares. In fact, pentagonal stacks arise in two forms in the structural map. The diamond symbols in \fig{maps}(a) indicate clusters built around a decahedron, which contains a pentagonal prism. 
For pair potentials with a preferred nearest-neighbour separation, decahedral structures are less strained than icosahedral ones and are seen in the global minima of short-ranged isotropic potentials \cite{Doye95d} and experimentally in metal clusters \cite{Marks94a}. In LJQ clusters, the square faces are favoured by the quadrupolar interactions, and the structure can grow by building additional partial layers around the five-fold axis. Twisting the pentagonal layers gives the second type of pentagonal stack structure, filled pentagonal antiprisms, which appear on the map for SQ$_N$ with $N\ge20$. These tube-like stacks can be regarded as two-dimensional sheets in which two opposite edges have been joined, thereby exchanging the energetic cost of an exposed edge for the penalty of curving the sheet. This trade-off is analogous to the formation of closed rings of dipoles \cite{Miller05a}. Larger tubes have been observed to assemble spontaneously in the finite-temperature simulations of the LJQ fluid by Van Workum and Douglas \cite{VanWorkum06a}. \par Tubes can dispose of their remaining exposed edges by also closing the ends to make a shell. We observe hollow shells over a range of $Q^*$ in SQ$_{22}$ and larger. For this class of structures, certain values of $N$ give rise to a structure of high symmetry. The first of these is SQ$_{24}$, illustrated in the central panel of \fig{growth}c, which has perfect $O_h$ octahedral symmetry. The shell normally grows by insertion of a particle into the surface, causing a distortion of the ideal triangular and square faces, but occasionally by the addition of an edge-bridging particle, as shown in \fig{growth}c. The shell seems to be a permanent feature of larger SQ$_N$ clusters. We have followed it as far as $N=36$, which forms an elongated shell of $D_{3d}$ symmetry with triangular faces at the ends and antiprismatically stacked hexagons along the body. 
\par If $Q^*$ is not sufficiently large, the shell is energetically penalised for its shortage of LJ nearest neighbour pairs. However, a large number of such pairs can be obtained by placing a few particles inside the shell. Hence, the hollow shell is typically preceded by a filled shell or filled pentagonal tube in the structural map, \fig{maps}a. A shell encapsulating two particles is shown in \fig{structures}d for SQ$_{24}$; compare the hollow shell for this cluster in the central panel of \fig{growth}c. \par Similar energetic compromises produce mixtures of structures that have already been described. For example, the transition from the LJ structure to the filled tubes and shells is sometimes bridged by decorated versions of the stacked triangular antiprisms, where the stack has been surrounded by a new layer. This arrangement is illustrated in a view down the three-fold axis for SQ$_{24}$ in \fig{structures}e. The characteristic network of square faces for the shell is beginning to emerge on the outside of these structures, but they maintain a larger number of LJ pairs than the filled shell. \par The linear quadrupole tends not to give rise to hollow global minima. Although tube-like and shell-like structures do appear, they are collapsed into what would be the central space in the square quadrupole equivalents. This ability to distort, or inability to support a hollow interior, is a result of the axial symmetry of the linear quadrupole. A square array of linear quadrupoles, such as the one depicted in \fig{structures}b, can fold and twist along one of its diagonals without severely disrupting the T-shaped nearest-neighbour interactions either side of the fold. This is not true of an array of square quadrupoles, such as that in \fig{structures}a, in which each quadrupole defines a plane and not just a line. 
By collapsing inwards, the clusters of linear quadrupoles gain favourable interactions between opposite sides of the structure that would otherwise be held far apart. However, the collapsed structures are often rather amorphous, making them hard to classify or describe in a helpful way. For this reason, the structural map in \fig{maps}b extends only to LQ$_{20}$. \par A distinctive and reproducible feature of the linear quadrupole clusters is a family of structures with a star-like organisation of the particles and a gear-wheel array of quadrupole axes, which are scattered around the structural map (\fig{maps}b). Again, for particular values of $N$, the cluster can achieve a high symmetry that may be based on a three-fold or four-fold principal symmetry axis. An example belonging to point group $S_6$ is shown in \fig{structures}f. However, the stability of these morphologies is strongly correlated with the number of particles; the stars are not observed away from the values of $N$ that allow the symmetry to be completed. \section{Concluding Remarks} The survey of putative global minima presented here shows that the competition between isotropic attractive forces and highly directional quadrupole--quadrupole interactions gives rise to a wide variety of structural motifs. These include elongated tube-like structures, hollow and filled shells, stars, and extended sheets. Some unusual point groups are represented in this collection. \par Global optimisation is most challenging when the competing influences are closely balanced, i.e., for intermediate strengths of the quadrupole in this work. Obtaining reproducible lowest-energy structures was significantly more difficult for the quadrupolar potential than for the equivalent dipolar Stockmayer potential \cite{Miller05a}.
This observation, together with the multiplicity of minima that were found to have virtually identical positions but different orientations of the quadrupoles, hints at a complex potential energy surface in certain parts of the $(N,Q^*)$ parameter space. Confirmation and further exploration of this complexity would require a more comprehensive analysis of the energy landscape \cite{Wales00b}. The landscape approach would provide information not only on the number of competing structures, but also on the barriers separating them and the rearrangement mechanisms that interconvert them. This information would provide a starting point for investigating the thermal stability and dynamic properties of the clusters, as well as the routes by which they might self-assemble. \par The various families of structures have their own ``magic'' numbers at which a particular shape is complete and the landscape is probably minimally frustrated \cite{Bryngelson87a,Wales06a}. Such numbers are well known for a variety of simpler interatomic potentials, and often correspond to the completion of successive icosahedral shells \cite{Mackay62a} at $N=13,\ 55,\ 147\dots$. The special stability associated with these sizes, combined with kinetic accessibility \cite{Wales06a}, can lead to prominent features such as experimental abundance \cite{Farges88a}. In the present work, the tube-like structures of stacked triangular and square antiprisms achieve completed layers for multiples of three and four particles, respectively, while a shell can be elongated by the insertion of a complete hexagonal ring. In the strong quadrupole regime, sheets adopt defect-free squares when $N$ is a square number. It would be interesting to investigate whether these ``perfect'' structures are especially stable and self-assemble efficiently, as for magic number Lennard-Jones clusters \cite{Doye99c}. 
\par We have seen that quadrupole--quadrupole interactions favour the formation of extended two-dimensional structures with four-fold coordination of the particles. In contrast, dipole--dipole interactions lead to extended pseudo one-dimensional chains, while isotropic attraction drives the structure towards compact three-dimensional arrangements. By careful balancing of the multipolar interactions, it should therefore be possible to exert considerable control over the structures that self-assemble out of multipolar particles with isotropic core interactions. \par Turning briefly from finite clusters to bulk phases, we note that such control could be useful in adjusting the networking properties of colloidal gels. For example, it has recently been shown that dipolar colloids can be encouraged to form more interconnected gel-like networks by a slight extension of the dipole \cite{Blaak07a}. From studies of models of patchy spheres with fixed maximum valency it is now known that the average coordination number of the particles in a gel has important consequences for the structure of the gel and for the underlying phase behaviour of the fluid from which it forms \cite{Bianchi06a}. The present work suggests that the average coordination number could be finely tuned either by adding a weak point quadrupole to point dipolar particles, or by using a mixture of dipolar and quadrupolar spheres. Hence, multipolar particles could be an appealing alternative to patchy colloids for realising and exploring reversible gels \cite{Zaccarelli07a}. \acknowledgments The authors are grateful to Josef O'Brien for some preliminary calculations on Lennard-Jones clusters with extended quadrupolar distributions of point charges. MAM thanks EPSRC for financial support.
\section{Introduction} The behavior of systems with continuous symmetry in the presence of random anisotropy disorder is the subject of numerous theoretical and experimental investigations. This is because of the surprising observation made by Larkin \cite{Larkin} and Imry and Ma \cite{ImryMa} that even a weak disorder may destroy the long-range translational or orientational order. A recent example is provided by nematic liquid crystals in a random porous medium, in which the order parameter -- the unit vector $\hat{\bf n}$ -- interacts with the quenched random anisotropy disorder (see e.g. Ref. \cite{Feldman} and references therein). Though the problem of the violation of long-range order by quenched disorder is more than 30 years old, there is still no complete understanding (see e.g. Refs. \cite{Feldman,Nattermann,Wehr,Itakura} and references therein), especially concerning the role of topological defects. In the anisotropic phase A of superfluid $^3$He, the Larkin-Imry-Ma effect is even more interesting. In this superfluid the order parameter contains two Goldstone vector fields: (1) the unit vector $\hat{\bf l}$ characterizes the spontaneous anisotropy of the orbital and superfluid properties of the system; and (2) the unit vector $\hat{\bf d}$ characterizes the spontaneous anisotropy of the spin (magnetic) degrees of freedom. In aerogel, the quenched random anisotropy disorder of the silica strands interacts with the orbital vector $\hat{\bf l}$, which thus must experience the Larkin-Imry-Ma effect. As for the vector $\hat{\bf d}$ of the spontaneous anisotropy of spins, it is assumed that $\hat{\bf d}$ does not interact directly with the quenched disorder, at least in the arrangement when the aerogel strands are covered by layers of $^4$He atoms preventing the formation of solid layers of $^3$He with large Curie-Weiss magnetization. 
There is a tiny spin-orbit coupling between the vectors $\hat{\bf d}$ and $\hat{\bf l}$, due to which the $\hat{\bf l}$-vector may transfer the disorder to the $\hat{\bf d}$-field. Superfluid $^3$He-A supports many different types of topological defects (see e.g. \cite{Book}), which may be pinned by the disorder. For recent experiments on superfluid $^3$He-A in aerogel, see Refs. \cite{Nazaretski,experiment,BarkerNegativeShift,Dmitriev1,Dmitriev2,DmitrievNew} and references therein. In particular, Refs. \cite{experiment,DmitrievNew} describe the transverse NMR experiments, in which the dependence of the frequency shift on the tipping angle $\beta$ of the precessing magnetization has been measured; and Ref. \cite{DmitrievNew} also reports the observation of the longitudinal NMR. Here we discuss these experiments in terms of the Larkin-Imry-Ma disordered state \cite{Larkin,ImryMa} extended for the description of the superfluid $^3$He-A in aerogel \cite{Volovik}. In Sec. 2 the general equations for NMR in $^3$He-A are written. In Sec. 3 these equations are applied to the states with disordered $\hat{\bf d}$ and $\hat{\bf l}$ fields. In Sec. 4 and Sec. 5 models for the two observed states are suggested in terms of the averaged distributions of the $\hat{\bf d}$ and $\hat{\bf l}$ fields consistent with observations. Finally, in Sec. 6 these states are interpreted in terms of different types of disorder. \section{Larmor precession of $^3$He-A} In a typical experimental arrangement the spin-orbit (dipole-dipole) energy is smaller than the Zeeman energy and thus may be considered as a perturbation. In the zero-order approximation, when the dipole energy and dissipation are neglected, the spin freely precesses with the Larmor frequency $\omega_L=\gamma H$, where $\gamma$ is the gyromagnetic ratio of the $^3$He nuclei. In terms of the Euler angles the precession of magnetization is given by \begin{equation} {\bf S}(t)=S ~R_z(-\omega_L t+\alpha) R_y(\beta) \hat {\bf z} ~~. 
\label{S} \end{equation} Here $S=\chi H$ is the amplitude of spin induced by the magnetic field; the axis $\hat {\bf z}$ is along the magnetic field ${\bf H}$; the matrix $R_y(\beta)$ describes rotation about the transverse axis $y$ by the angle $\beta$, which is the tipping angle of the precessing magnetization; $R_z$ describes rotation about $z$; $\alpha$ is the phase of the precessing magnetization. According to the Larmor theorem \cite{BunkovVolovik}, in the precessing frame the vector $\hat{\bf d}$ is in turn precessing about ${\bf S}$. Because of the interaction between the spin ${\bf S}$ and the order parameter vector $\hat{\bf d}$, the precession of $\hat{\bf d}$ occurs in the plane perpendicular to ${\bf S}$, and it is characterized by another phase $\Phi_d$. In the laboratory frame the precession of $\hat{\bf d}$ is given by \begin{equation} \hat {\bf d}(t)=R_z(-\omega_L t+\alpha) R_y(\beta) R_z(\omega_L t-\alpha+\Phi_d)\hat {\bf x} ~, \label{d} \end{equation} while the orbital vector $\hat{\bf l}$ is time-independent in this approximation: \begin{equation} \hat{\bf l}=\hat{\bf z} \cos\lambda + \sin\lambda (\hat{\bf x}\cos\Phi_l+\hat{\bf y}\sin\Phi_l) ~. \label{l} \end{equation} This is the general state of the pure Larmor precession of $^3$He-A, and it contains five Goldstone parameters: two angles $\alpha$ and $\beta$ of the magnetization in the precessing frame; the angle $\Phi_d$ which characterizes the precession of the vector $\hat{\bf d}$; and the two angles $\lambda$ and $\Phi_l$ of the orbital vector $\hat{\bf l}$. The degeneracy is lifted by the spin-orbit (dipole) interaction \cite{VollhardtWolfle} \begin{equation} F_D=-\frac{\chi\Omega^2_A} {2\gamma^2}(\hat{\bf l}\cdot \hat{\bf d})^2~, \label{DipoleInter} \end{equation} where $\Omega_A$ is the so-called Leggett frequency. In the bulk homogeneous $^3$He-A, the Leggett frequency coincides with the frequency of the longitudinal NMR, $\omega_\parallel= \Omega_A$. 
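As a consistency check on Eqs.(\ref{S}) and (\ref{d}) (a sketch added here, using only the rotation matrices defined above), one can verify symbolically that $\hat{\bf d}(t)$ remains a unit vector and precesses in the plane perpendicular to ${\bf S}$:

```python
import sympy as sp

t, wL, alpha, beta, Phid = sp.symbols('t omega_L alpha beta Phi_d', real=True)

def Rz(a):
    # rotation about the z axis by angle a
    return sp.Matrix([[sp.cos(a), -sp.sin(a), 0],
                      [sp.sin(a),  sp.cos(a), 0],
                      [0, 0, 1]])

def Ry(a):
    # rotation about the y axis by angle a
    return sp.Matrix([[ sp.cos(a), 0, sp.sin(a)],
                      [0, 1, 0],
                      [-sp.sin(a), 0, sp.cos(a)]])

zhat = sp.Matrix([0, 0, 1])
xhat = sp.Matrix([1, 0, 0])

# Eq. (S): direction of the precessing spin (the amplitude S is factored out)
S = Rz(-wL*t + alpha) * Ry(beta) * zhat
# Eq. (d): the precessing d-vector
d = Rz(-wL*t + alpha) * Ry(beta) * Rz(wL*t - alpha + Phid) * xhat

print(sp.simplify(S.dot(d)))   # -> 0: d lies in the plane perpendicular to S at all times
print(sp.simplify(d.dot(d)))   # -> 1: d stays a unit vector
```

The orthogonality follows because the inner rotation $R_z(\omega_L t-\alpha+\Phi_d)$ keeps $\hat{\bf x}$ in the $xy$-plane before the common rotation $R_z R_y$ is applied to both vectors.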
In typical experiments one has $\Omega_A\ll \omega_L$, which allows us to use the spin-orbit interaction averaged over the fast Larmor precession \cite{Gongadze,BunkovVolovik}: \begin{eqnarray} \bar F_D =\frac{\chi\Omega^2_A} {2\gamma^2} U~~, \label{SO1} \\ U=-{1\over 2}\sin^2\beta + {1\over 4}(1+\cos\beta)^2\sin^2\Phi\sin^2\lambda\nonumber\\ -( {7\over 8}\cos^2\beta +{1\over 4}\cos\beta -{1\over 8})\sin^2\lambda ~~, \label{SO} \end{eqnarray} where $\Phi=\Phi_d-\Phi_l$. The dipole interaction generates a frequency shift of the transverse NMR from the Larmor frequency: \begin{equation} \omega_\perp- \omega_L=-\frac{\partial \bar F_D}{\partial (S\cos\beta)} =- \frac{\Omega^2_A}{2\omega_L} \frac{\partial U}{\partial \cos\beta}~. ~~ \label{FShift} \end{equation} In the bulk $^3$He-A, the minimum of the dipole interaction requires that $\Phi_d=\Phi_l$ and $\sin^2\lambda=1$, i.e. the equilibrium position of $\hat{\bf l}$ is in the plane perpendicular to ${\bf H}$. However, for the $^3$He-A confined in aerogel, the interaction with the quenched disorder may essentially modify this spatially homogeneous state by destroying the long-range orientational order due to the Larkin-Imry-Ma effect \cite{Volovik}. \section{Two states of $^3$He-A in aerogel} Experiments reported in Ref. \cite{DmitrievNew} demonstrate two different types of magnetic behavior of the A-like phase in aerogel, denoted as the (f+c)-state and the (c)-state, respectively. The (f+c)-state contains two overlapping lines (f) and (c) in the transverse NMR spectrum ({\it far} from and {\it close} to the Larmor frequency, respectively). The frequency shift of the transverse NMR is about four times bigger for the (f)-line than for the (c)-line. The behavior under an applied magnetic field gradient suggests that the (f+c)-state consists of two magnetic states concentrated in different parts of the cell. 
The (c)-state contains only a single (c)-line in the spectrum, and it is obtained after application of a 180 degree pulse while cooling through $T_c$. The pure (f)-state, i.e. the state with a single (f)-line, has not been observed. The (c) and (f+c) states show different dependences of the frequency shift $\omega_\perp- \omega_L$ on the tipping angle $\beta$ in the pulsed NMR experiments: $\omega_\perp- \omega_L \propto \cos\beta$ in the pure (c)-state; and $\omega_\perp- \omega_L \propto (1+\cos\beta)$ in the (f+c)-state. The latter behavior probably characterizes the (f)-line, which has the bigger shift and dominates the spectrum of the (f+c)-state. The $(1+\cos\beta)$-law has also been observed in Ref. \cite{experiment}. Experiments with longitudinal NMR were also reported in Ref. \cite{DmitrievNew}. The longitudinal resonance in the (f+c)-state has been observed; however, no traces of the longitudinal resonance have been seen in the (c)-state. Let us discuss this behavior in terms of the disordered states emerging in $^3$He-A due to the orientational disorder. In the extreme case of weak disorder, the characteristic Imry-Ma length $L$ of the disordered $\hat{\bf l}$-texture is much bigger than the characteristic length scale $\xi_D$ of the dipole interaction, $L\gg \xi_D$. In this case the equilibrium values of $\Phi$ and $\lambda$ are dictated by the spin-orbit interaction, $\Phi=\Phi_d-\Phi_l=0$ and $\sin^2\lambda=1$; and Eq.(\ref{FShift}) gives \begin{equation} \omega_\perp- \omega_L= \frac{\Omega^2_A}{8\omega_L} (1+3 \cos\beta)~. \label{case1} \end{equation} This dependence fits neither the $(1+\cos\beta)$ behavior of the (f)-state nor the $\cos\beta$ law in the (c)-state, which indicates that the disorder is not weak compared to the spin-orbit energy. 
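The weak-disorder result, Eq.(\ref{case1}), follows from Eqs.(\ref{SO}) and (\ref{FShift}) by a few lines of algebra; a symbolic sketch (not part of the original text), writing $c=\cos\beta$ and quoting the shift in units of $\Omega_A^2/2\omega_L$:

```python
import sympy as sp

c, Phi, s2l = sp.symbols('c Phi s2l', real=True)  # c = cos(beta), s2l = sin^2(lambda)

# Effective potential U of Eq. (SO), with sin^2(beta) = 1 - c^2
U = (-sp.Rational(1, 2)*(1 - c**2)
     + sp.Rational(1, 4)*(1 + c)**2 * sp.sin(Phi)**2 * s2l
     - (sp.Rational(7, 8)*c**2 + sp.Rational(1, 4)*c - sp.Rational(1, 8)) * s2l)

# Weak disorder: the dipole interaction wins, so Phi = 0 and sin^2(lambda) = 1
U_weak = U.subs({Phi: 0, s2l: 1})

# Eq. (FShift): the shift in units of Omega_A^2/(2 omega_L) is -dU/dc
shift = sp.expand(-sp.diff(U_weak, c))
print(shift)  # -> 3*c/4 + 1/4, i.e. (1 + 3 cos(beta))/4, reproducing Eq. (case1)
```

Multiplying by $\Omega_A^2/2\omega_L$ gives the $(\Omega_A^2/8\omega_L)(1+3\cos\beta)$ shift quoted in the text.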
In the extreme case of strong disorder, when $L\ll \xi_D$, both $\Phi$ and $\hat{\bf l}$ are random: \begin{eqnarray} \left<\sin^2\Phi\right>-\frac{1}{2} =0~, \label{randomPhi} \\ \left<\sin^2\lambda\right> -\frac{2}{3}=0~. \label{randomLambda} \end{eqnarray} In this case it follows from Eq.(\ref{FShift}) that the frequency shift is absent: \begin{equation} \omega_\perp= \omega_L~. \label{case2} \end{equation} Eq.(\ref{randomPhi}) means that $\Phi_l$ and $\Phi_d$ are dipole-unlocked, i.e. they are not locked by the spin-orbit dipole-dipole interaction, which is natural in the case of a small Imry-Ma length, $L\ll \xi_D$. In principle, there can be three different dipole-unlocked cases: (i) when both $\Phi_l$ and $\Phi_d$ are random and independent; (ii) when $\Phi_l$ is random while $\Phi_d$ is fixed; (iii) when $\Phi_d$ is random while $\Phi_l$ is fixed. The strong disorder limit is consistent with the observation that the frequency shift of the (c)-line is much smaller than the frequency shift in $^3$He-B in aerogel. The observed non-zero value can be explained in terms of small first-order corrections to the strong disorder limit. Let us introduce the parameters \begin{equation} a=\frac{1}{2} -\left<\sin^2\Phi\right>~~,~~b=\left<\sin^2\lambda\right> -\frac{2}{3}~, \label{random2} \end{equation} which describe the deviation from the strong disorder limit. These parameters are zero in the limit of strong disorder $L^2/\xi_D^2\rightarrow 0$, and one may expect that in the pure Larkin-Imry-Ma state they are proportional to the small parameter $L^2/\xi_D^2\ll 1$. The behavior of these two parameters can be essentially different in different realizations of the disordered state, since the vector $\hat{\bf l}$ entering the parameter $a$ interacts with the quenched orientational disorder directly, while $\Phi_d$ only interacts with $\Phi_l$ via the spin-orbit coupling. 
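Anticipating the next two sections, the limiting forms of the transverse shift can be read off by substituting the averages $\left<\sin^2\Phi\right> = 1/2 - a$ and $\left<\sin^2\lambda\right> = 2/3 + b$ into Eqs.(\ref{SO}) and (\ref{FShift}); a symbolic sketch (with $c=\cos\beta$, shift in units of $\Omega_A^2/2\omega_L$):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)  # c = cos(beta)

# Eq. (SO) with <sin^2 Phi> = 1/2 - a and <sin^2 lambda> = 2/3 + b
s2Phi, s2lam = sp.Rational(1, 2) - a, sp.Rational(2, 3) + b
U = (-sp.Rational(1, 2)*(1 - c**2)
     + sp.Rational(1, 4)*(1 + c)**2 * s2Phi * s2lam
     - (sp.Rational(7, 8)*c**2 + sp.Rational(1, 4)*c - sp.Rational(1, 8)) * s2lam)

# Eq. (FShift): the shift in units of Omega_A^2/(2 omega_L) is -dU/dc
shift = sp.expand(-sp.diff(U, c))

print(sp.simplify(shift.subs({a: 0, b: 0})))  # -> 0: no shift in the strong disorder limit
print(sp.factor(shift.subs(a, 0)))            # -> 3*b*c/2: cos(beta) law of the (c)-state
print(sp.factor(shift.subs(b, 0)))            # -> a*(c + 1)/3: (1 + cos(beta)) law of the (f)-state
```

Restoring the prefactor $\Omega_A^2/2\omega_L$ reproduces the shifts quoted in Secs. 4 and 5.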
That is why we shall try to interpret the two observed magnetic states, the (c)-state and the (f)-state, in terms of different realizations of the textural disorder described by different phenomenological relations between parameters $a$ and $b$ in these two states. \section{Interpretation of (c)-state} \label{cstate} The observed $\cos\beta$-dependence of the transverse NMR frequency shift in the (c)-state \cite{DmitrievNew} can be reproduced if we assume that in the (c)-state the parameters $a_c$ and $b_c$ satisfy the following relation: $a_c\ll b_c$. Then in the main approximation, \begin{equation} a_c=0~~,~~b_c\neq 0~, \label{Stateb} \end{equation} the effective potential $U$ in Eq.(\ref{SO}) is \begin{equation} U_c=- \frac{3}{4}b_c \cos^2\beta+\frac{1}{4}\left(b_c +\frac{1}{3}\right)~. \label{PotentialRandom2} \end{equation} If the parameter $b_c$ does not depend on $\beta$, the variation of $U_c$ with respect to $\cos\beta$ gives the required $\cos\beta$-dependence of the frequency shift of transverse NMR in the (c)-state: \begin{equation} \omega_{c\perp}- \omega_L= \frac{3\Omega^2_A}{4\omega_L }b_c\cos\beta~. \label{TransverseRandom2} \end{equation} Let us estimate the parameter $b_c$ using the following consideration. The dipole energy which depends on $\lambda$ violates the complete randomness of the Larkin-Imry-Ma state, and thus perturbs the average value of $\sin^2\lambda$. The deviation of $b_c$ from zero is given by: \begin{equation} b_c \sim \frac{L^2}{\xi_D^2}\left(\cos^2\beta -\frac{1}{3}\right)~. \label{CorrectionRandom2} \end{equation} In this model the potential and frequency shift become \begin{equation} U_c\sim - \frac{L^2}{\xi_D^2}\left(\cos^2\beta -\frac{1}{3}\right)^2~, \label{PotentialRandom2Final} \end{equation} \begin{equation} \omega_{c\perp}- \omega_L\sim \frac{\Omega^2_A}{\omega_L} \frac{L^2}{\xi_D^2}\cos\beta \left(\cos^2\beta -\frac{1}{3}\right)~. 
\label{CubicTransverseRandom2} \end{equation} Such a $\beta$-dependence of the transverse NMR is also antisymmetric with respect to the transformation $\beta\rightarrow \pi-\beta$, as in the model with the $\beta$-independent parameter $b_c$ in Eq.(\ref{TransverseRandom2}); however, unlike that model, it is inconsistent with the experiment (see Fig.~6 in Ref. \cite{DmitrievNew}). Certainly the theory must be refined to estimate the first-order corrections to the zero values of the parameters $a_c$ and $b_c$. The frequency of the longitudinal NMR in the (c)-state is zero in the local approximation. The correction due to the deviation of $\Phi$ from the random behavior, i.e. due to the non-zero value of the parameter $a_c$ in the (c)-state, is: \begin{equation} \omega_{c\parallel}^2 = \frac{2}{3}\left(1-2\left<\sin^2\Phi\right>\right)\Omega^2_A = \frac{4a_c}{3} \Omega^2_A~. \label{LongitudinalNMRStrong} \end{equation} In the simplest Imry-Ma model $a_c \sim L^2/\xi_D^2 \ll 1$, and thus the frequency of longitudinal NMR is small compared with the frequency of the longitudinal resonance in the (f+c)-state, discussed in the next section. This is consistent with the non-observation of the longitudinal NMR in the (c)-state: under the conditions of this experiment the longitudinal resonance cannot be seen if its frequency is much smaller than the frequency of the longitudinal resonance observed in the (f+c)-state \cite{DmitrievNew}. \section{Interpretation of (f)-state} \label{fstate} The observed $(1+\cos\beta)$-dependence of the transverse NMR frequency shift of the (f)-line dominating in the (f+c)-state \cite{DmitrievNew,experiment} is reproduced if we assume that for the (f)-line one has $a_f\gg b_f$. In this case, in the main approximation the (f)-state may be characterized by \begin{equation} a_f \neq 0~~,~~b_f=0~, \label{random3} \end{equation} and one obtains: \begin{equation} \omega_{f\perp}- \omega_L= \frac{a_f}{6} \frac{\Omega^2_A}{\omega_L} (1+ \cos\beta)~. 
\label{aStateTransverse} \end{equation} Let us compare the frequency shift of the (f)-line with that of the (c)-line in Eq. (\ref{TransverseRandom2}) at $\beta=0$: \begin{equation} \frac{\omega_{f\perp}- \omega_L}{\omega_{c\perp}- \omega_L} =\frac{4a_f}{9b_c} ~. \label{f/c} \end{equation} According to the experiments \cite{DmitrievNew} this ratio is about 4, and thus one obtains the estimate $b_c \sim 0.1 a_f$. This supports the strong disorder limit, $b_c\ll 1$, for the (c)-state. If the statistical properties of the $\hat{\bf l}$-texture in the (f)-state are similar to those in the (c)-state, then one has $b_f\ll a_f$, as suggested in Eq.(\ref{random3}). The frequency of the longitudinal NMR in such a state is \begin{equation} \omega_{f\parallel}^2 = \frac{4a_f}{3}\Omega^2_A ~, \label{aStateLongitudinal} \end{equation} which gives the relation between the transverse and longitudinal NMR frequencies \begin{equation} \omega_{f\perp}- \omega_L= \frac{1}{8} \frac{\omega_{f\parallel}^2}{\omega_L} (1+ \cos\beta)~. \label{UniversalRelation} \end{equation} This relation is also valid for Fomin's robust phase \cite{Fomin} (see \cite{ResultForRobust}). However, the frequency of the longitudinal NMR measured in the (f+c)-state \cite{DmitrievNew} does not satisfy this relation: the measured value of $\omega_{f\parallel}$ is about 0.65 of the value which follows from Eq.(\ref{UniversalRelation}) if one uses the measured $\omega_{f\perp}- \omega_L$. Probably the situation can be improved if one considers the interaction between the $f$ and $c$ lines in the (f+c)-state (see Ref. \cite{DmitrievNew}). \section{Discussion} \subsection{Interpretation of A-phase states in aerogel.} The observed two magnetic states of $^3$He-A in aerogel \cite{DmitrievNew} can be interpreted in the following way. The pure (c)-state is the Larkin-Imry-Ma phase with strong disorder, $L\ll \xi_D$. 
The (f+c)-state can be considered as a mixed state, with a volume $V_c$ filled by the Larkin-Imry-Ma phase while the rest of the volume, $V_f=V-V_c$, consists of the (f)-state. The (f)-state is also random due to the Larkin-Imry-Ma effect, but the spin variable $\Phi_d$ and the orbital variable $\Phi_l$ are not completely independent in this state. If $\Phi_d$ partially follows $\Phi_l$, the difference $\Phi=\Phi_d-\Phi_l$ is not random, and the parameter $a_f$ in the (f)-state is not very small, being equal to $1/2$ in the extreme dipole-locked case. Thus, for the (f+c)-state one may assume that $a_f\gg b_f, b_c,a_c$. As a result the (f)-line has an essentially larger transverse NMR frequency shift and an essentially larger longitudinal frequency than the (c)-line. Both results are consistent with the experiment: from the transverse NMR it follows that $b_c \sim 0.1 a_f$ (see Eq.(\ref{f/c})); and from the non-observation of the longitudinal NMR in the (c)-state it follows that $a_c \ll a_f$. This confirms the assumption of strong disorder in the (c)-state, in which the smallness of the parameters $b_c$ and $a_c$ is the result of the randomness of the $\hat{\bf l}$-texture on the length scale $L\ll\xi_D$. The $\cos\beta$-dependence of $\omega_{c\perp}- \omega_L$ in Eq.(\ref{TransverseRandom2}) and the $(1+\cos\beta)$-dependence of $\omega_{f\perp}- \omega_L$ in Eq.(\ref{aStateTransverse}) are also consistent with the experiment. The open problem is how to estimate theoretically the phenomenological parameters $a_f,b_f, b_c,a_c$ and to find their possible dependence on $\beta$. The `universal' relation (\ref{UniversalRelation}) between the longitudinal and transverse NMR frequencies is not satisfied in the experiment, but we cannot expect the exact relation in such a crude model, in which the interaction between the $f$ and $c$ lines in the (f+c)-state is ignored (see \cite{DmitrievNew}). Moreover, we use the local approximation, i.e. 
we do not take into account the fine structure of the NMR line, which may contain satellite peaks due to the bound states of spin waves in the texture of the $\hat{\bf l}$ and $\hat{\bf d}$ vectors. The tendency is, however, correct: the smaller the transverse NMR frequency shift, the smaller the longitudinal NMR frequency. \subsection{Global anisotropy and negative frequency shift} For further consideration one must take into account that in some aerogel samples a large negative frequency shift has been observed for the A-phase \cite{BarkerNegativeShift,Dmitriev1,Dmitriev2,BunkovPrivate}. The reason for the negative shift is the deformation of the aerogel sample, which leads to a global orientation of the orbital vector $\hat{\bf l}$ in a large region of the aerogel \cite{BunkovPrivate}. The effect of regular uniaxial anisotropy in aerogel has been considered in Refs. \cite{Pollanen,Aoyama}. It is important that even a rather small deformation of aerogel may kill the subtle collective Larkin-Imry-Ma effect and lead to a uniform orientation of the $\hat{\bf l}$-vector. Using the estimation of the Imry-Ma length in Ref. \cite{Volovik}, one can find that the critical stretching of the aerogel required to kill the Larkin-Imry-Ma effect is proportional to $(R/\xi_0)^3$. Here $R$ is the radius of the silica strands and $\xi_0$ is the superfluid coherence length. From Eqs.(\ref{SO}) and (\ref{FShift}) it follows that the maximum possible negative frequency shift could occur if in some region the global orientation of $\hat{\bf l}$ induced by the deformation of the aerogel is along the magnetic field (i.e. $\lambda=0$): \begin{equation} \omega_\perp- \omega_L= - \frac{\Omega^2_A}{2\omega_L} ~. 
\label{negative} \end{equation} Such longitudinal orientation of $\hat{\bf l}$ is possible because the regular anisotropy caused by the deformation of aerogel is bigger than the random anisotropy, which in turn, in the strong disorder limit, is bigger than the dipole energy preferring the transverse orientation of $\hat{\bf l}$. Comparing the measured magnitude of the negative shift (which cannot be bigger than the maximum possible in Eq.(\ref{negative})) with the measured positive shift of the (f)-line in the (f+c)-state \cite{Dmitriev,DmitrievNew}, one obtains that the parameter $a_f$ in Eq.(\ref{aStateTransverse}) must be smaller than $0.25$. This is also confirmed by the results of the longitudinal NMR experiments \cite{DmitrievNew}, which show that the frequency of the longitudinal NMR in the (f+c)-state of $^3$He-A is much smaller than the frequency of the longitudinal NMR in $^3$He-B. The latter is only possible if $a_f\ll 1$ in Eq.(\ref{aStateLongitudinal}), i.e. the (f)-state is also in the regime of strong disorder. Thus there is only partial dipole locking between the spin variable $\Phi_d$ and the orbital variable $\Phi_l$ in the (f)-state. \subsection{Possible role of topological defects.} The origin of the (f)-state is not very clear. The partial dipole locking is possible if the characteristic size of the $\hat{\bf l}$ texture in the (f)-state is on the order of or somewhat smaller than $\xi_D$. Alternatively, the (f)-line could come from the topological defects of the A-phase (vortices, solitons, vortex sheets, etc., see Ref. \cite{Book}). The defects could appear during cooling of the sample from the normal (non-superfluid) state and be annealed by the application of the 180 degree pulse during this process. The appearance of a large number of pinned topological defects in $^3$He-B in aerogel has been suggested in Ref. \cite{Collin}. The reason why the topological defects may affect the NMR spectrum in $^3$He-A is the following. 
In the strong disorder limit the texture is random and, if one neglects the $L/\xi_D$ corrections, the frequency shift is zero in the main approximation. A topological defect introduces some kind of order: some correlations are nonzero because of the conserved topological charge of the defect. That is why the frequency shift will be nonzero. It will be small, but still bigger than the corrections of order $(L/\xi_D)^2$. If this interpretation is correct, there are two different realizations of the disordered state in the system with quenched orientational disorder: the network of the pinned topological defects and the pure Larkin-Imry-Ma state. Typically one has an interplay between these two realizations, but the defects can be erased by proper annealing, leaving the pure Larkin-Imry-Ma state. \subsection{Superfluid properties of A-phase in aerogel} An interesting problem concerns the superfluid density $\rho_s$ in the states with the orientational disorder in the vector $\hat{\bf l}$. In Ref. \cite{Volovik} it was suggested that $\rho_s=0$ in such a state. Whether the superfluid density is zero or not depends on the rigidity of the $\hat{\bf l}$-vector. If the $\hat{\bf l}$-texture is flexible, then due to the Mermin-Ho relation between the texture and the superfluid velocity, the texture is able to respond to an applied superflow by screening the supercurrent. As a result the superfluid density in the flexible texture could be zero. The experiments on $^3$He-A in aerogel demonstrated that $\rho_s\neq 0$ (see e.g. \cite{Nazaretski} and references therein). However, most probably these experiments have been done in the (f+c)-state. If our interpretation of this state in terms of the topological defects is correct, the non-zero value of the superfluid density could be explained in terms of the pinning of the defects, which leads to an effective rigidity of the $\hat{\bf l}$-texture in the (f+c)-state. 
Whether the superfluid density is finite in the pure Larkin-Imry-Ma state, identified here as the (c)-state, remains an open experimental and theoretical question. A theoretical discussion of the rigidity or quasi-rigidity in such a state can be found in Refs. \cite{EfetovLarkin,Itakura}. In any case, one may expect that the observed two states of $^3$He-A in aerogel, (c) and (f+c), have different superfluid properties. Recent Lancaster experiments with a vibrating aerogel sample indicate that a sufficiently large superflow produces a state with regular (non-random) orientation of $\hat{\bf l}$ in aerogel, and in the oriented state the superfluid density is bigger \cite{Fisher}. This suggests that the orientational disorder does lead to at least partial suppression of the superfluid density. \subsection{Conclusion.} In conclusion, the NMR experiments on the A-like superfluid state in aerogel indicate two types of behavior. Both of them can be interpreted in terms of the random texture of the orbital vector $\hat{\bf l}$ of the $^3$He-A order parameter. This supports the idea that superfluid $^3$He-A in aerogel exhibits the Larkin-Imry-Ma effect: destruction of the long-range orientational order by the random anisotropy produced by the randomly oriented silica strands of the aerogel. Extended numerical simulations are needed to clarify the role of the topological defects in the Larkin-Imry-Ma state, and to calculate the dependence of the NMR line-shape and superfluid density on the concentration and pinning of the topological defects. I thank V.V. Dmitriev and D.E. Khmelnitskii for illuminating discussions, V.V. Dmitriev, Yu.M. Bunkov and S.N. Fisher for presenting the experimental results before publication, and I.A. Fomin who attracted my attention to the relation between the frequencies of the longitudinal and transverse NMR in some states. This work was supported in part by the Russian Foundation for Fundamental Research and the ESF Program COSLAB.
\section{Introduction} The canonical cosmological model assumes that cold dark matter (CDM) is comprised of a non-relativistic gas of weakly interacting particles. Despite its simplicity, this minimal scenario provides an excellent fit to both CMB and large-scale structure measurements~\cite{Adam:2015rua, Ade:2015xua, Aghanim:2015xee, Alam:2016hwk,Aghanim:2018eyx}. However, a precise understanding of the fundamental nature of dark matter is missing and remains at the forefront in the current list of unsolved problems in modern physics. Although dark matter is usually interpreted in terms of a new elementary particle, other alternatives exist. Black holes produced from the collapse of large over-densities seeded prior to Big Bang Nucleosynthesis (BBN)\footnote{Several mechanisms have been proposed to generate the initial seed fluctuations, including \eg inflation~\cite{Carr:1993aq, Carr:1994ar, Ivanov:1994pa, Yokoyama:1995ex, GarciaBellido:1996qt, Taruya:1998cz, Green:2000he, Bassett:2000ha}, the collapse of domain walls~\cite{Sato:1981bf, Maeda:1981gw, Berezin:1982ur} and cosmic strings~\cite{Hogan:1984zb, Hawking:1987bn, Polnarev:1988dh}, first-order phase transitions~\cite{Crawford:1982yz, Hawking:1982ga, Kodama:1982sf, Hall:1989hr, Moss:1994iq, Konoplich:1999qq, Jedamzik:1999am, Khlopov:2000js}, and the decays of non-topological solitons~\cite{Cotner:2016cvr,Cotner:2018vug}.}, i.e.\ Primordial Black Holes (PBHs), represent such an alternative --- remarkably, this solution is as old as particle dark matter~\cite{Chapline:1975}. This possibility has recently attracted much attention~\cite{Bird:2016dcv, Clesse:2016vqa, Sasaki:2016jop, Raidal:2017mfl} in the context of the LIGO and VIRGO discoveries of several binary black hole mergers~\cite{Abbott:2016blz, TheLIGOScientific:2016pea, Abbott:2017vtc, Abbott:2016nmj, Abbott:2017oio}. 
Should the PBHs have masses $\lesssim 10^{-16} M_\odot$, they will efficiently emit Hawking radiation~\cite{Hawking:1974rv, Hawking:1974sw} and evaporate on cosmological timescales -- this process leads to strong energy injection (see \eg~\cite{Khlopov:2008qy,Carr:2009jm,Clark:2016nst,Poulin:2016anj,Boudaud:2018hqb,DeRocco:2019fjq,Laha:2019ssq,Laha:2020ivk}), severely limiting the abundance of PBHs in this regime. Heavier PBHs can imprint observational signatures in a variety of different manners, including via gravitational lensing~\cite{Green:2017qoa, Garcia-Bellido:2017xvr, Zumalacarregui:2017qqd, Garcia-Bellido:2017imq}, the dynamical evolution of gravitationally bound systems~\cite{Monroy-Rodriguez:2014ula, Brandt:2016aco, Green:2016xgy, Li:2016utv, Koushiappas:2017chw, Kavanagh:2018ggo}, observable radio and x-ray emission~\cite{Gaggero:2016dpq, Inoue:2017csr, Hektor:2018rul}, and spectral distortions in the CMB~\cite{Tada:2015noa, Young:2015kda, Chen:2016pud, Ali-Haimoud:2016mbv, Blum:2016cjs, Horowitz:2016lib, Poulin:2017bwe, Bernal:2017vvn, Nakama:2017xvq, Deng:2018cxb}. Collectively, these observations prohibit PBHs from constituting the entirety of dark matter, unless their masses are confined roughly to the range $10^{-16} \, M_\odot \lesssim M_{\rm PBH} \lesssim 10^{-11} \, M_\odot$ (see, e.g., Refs.~\cite{Belotsky:2014kca, Carr:2016drx, Sasaki:2018dmp, Green:2020jor,Mack:2006gz, Ricotti:2007au, Josan:2009qn, Carr:2009jm, Capela:2013yf, Clesse:2016vqa, Green:2016xgy, Bellomo:2017zsr, Kuhnel:2017pwq, Carr:2017jsz, Sasaki:2018dmp,Villanueva-Domingo:2021spv} for recent reviews on PBHs). Despite stringent constraints on the abundance of heavy PBHs, even a small number of these objects can have a significant impact in astrophysics and cosmology. 
In particular, it has been shown that PBHs can efficiently accrete the primary component of dark matter prior to matter-radiation equality, generating dense dark matter spikes referred to as ultra-compact mini-halos (UCMHs). If dark matter is mostly comprised of Weakly Interacting Massive Particles (WIMPs), the large densities found in the UCMHs will dramatically enhance the efficiency of WIMP annihilation, imprinting powerful observational signatures \eg in the gamma-ray flux~\cite{Lacki:2010zf, Josan:2010vn, Boucenna:2017ghj, Eroshenko:2016yve, Carr:2020mqm, Hertzberg:2020kpm, Adamek:2019gns, Kadota:2021jhg} and the anisotropies of the CMB~\cite{Ricotti:2007au, Tashiro:2021xnj}. In this work we revisit the cosmological and astrophysical constraints on the mixed WIMP-PBH dark matter scenario, focusing in particular on those derived using observations of the extragalactic gamma-ray background and the CMB. We incorporate the state-of-the-art understanding of the UCMH density profiles, investigating a wide array of WIMP dark matter models (spanning MeV-scale to TeV-scale WIMP masses, a variety of final states, and both s-wave and p-wave annihilation), for both monochromatic and extended PBH mass functions. Our calculations show that the strength of the constraints derived using the extragalactic gamma-ray background has been largely overestimated in previous studies in the literature (in some cases by many orders of magnitude), and that these constraints are typically comparable to, or subdominant with respect to, those obtained using the latest observations of the CMB. This manuscript is organized as follows. Section~\ref{sec:dm_all} outlines the details of WIMP dark matter annihilation in UCMHs around PBHs. Section~\ref{sec:methods} describes the methodology and procedure used to derive constraints on the abundance of PBHs using both the CMB and $\gamma$-ray observations. Section~\ref{sec:results} presents our results, as well as a critical comparison to previous analyses. We conclude in Sec.~\ref{sec:conc}.
\section{Energy Injection from WIMP Annihilation near PBHs}\label{sec:dm_all} \subsection{WIMP Annihilation} \label{sec:dm} Among the most studied and theoretically appealing dark matter candidates are electroweak scale WIMPs, as these particles can be efficiently produced with the correct relic abundance via the thermal freeze-out mechanism and naturally appear in a plethora of well-motivated extensions of the Standard Model~\cite{Kolb:1990vq, Bertone:2010zza}. The annihilation rate of a Majorana dark matter candidate $\chi$ is given by~\footnote{An additional factor of 1/2 must be included for Dirac dark matter.} \begin{equation} \label{ann_rate} \Gamma_{\mathrm{ann}} \equiv \frac{1}{2 m_\chi^2} \int_V dV \left\langle \sigma_A v \right\rangle \rho_\chi^2~, \end{equation} where $m_\chi$ and $\rho_\chi$ are the WIMP mass and density, and $\left\langle \sigma_A v \right\rangle$ is the thermally averaged annihilation cross section. At freeze-out, the annihilation cross section is given by $\left\langle \sigma_A v \right\rangle \simeq 3\times 10^{-26}/f_\chi \; [\mathrm{cm}^3 \mathrm{s}^{-1}]$, where $f_\chi$ is the fraction of dark matter in the form of WIMPs. The energy density injected per unit time into a species $c$ in the energy range $[E_1, E_2]$ from WIMPs annihilating in UCMHs is given by \begin{equation} \label{eq:dE_dV} \frac{dE_{\scaleto{\mathrm{UCMH}}{4pt}}}{dVdt}= \int_{E_1}^{E_2} \, dE \, n_{\scaleto{\rm UCMH}{4pt}}\sum_c B_c \, \Gamma_{\rm ann}^{(c)} \frac{dN^{(c)}}{dE} \, . \end{equation} Here, $n_{\scaleto{\rm UCMH}{4pt}}$ is the number density of UCMHs, $B_c$ is the branching fraction of WIMP annihilation to species $c$ (e.g., photons, electrons, etc.), $\Gamma_{\rm ann}^{(c)}$ is the WIMP annihilation rate in a single UCMH, and $dN^{(c)}/dE$ is the energy spectrum of the final state species. Should the annihilation be s-wave, \ie velocity independent, the annihilation cross section today is given by that at freeze-out.
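For a spherically symmetric mini-halo, the volume integral in Eq.~(\ref{ann_rate}) reduces to a one-dimensional radial quadrature. The following minimal Python sketch (not the code used in our analysis; the units, the test profile, and all numerical values are purely illustrative) shows the structure of the calculation:

```python
import math

def annihilation_rate(rho, sigma_v, m_chi, r_min, r_max, n_steps=20_000):
    """Eq. (ann_rate) for a spherically symmetric halo:
    Gamma_ann = (1 / 2 m_chi^2) * Int 4 pi r^2 <sigma v> rho(r)^2 dr,
    evaluated by log-spaced trapezoidal quadrature (consistent units assumed)."""
    lo, hi = math.log(r_min), math.log(r_max)
    h = (hi - lo) / n_steps
    total, prev = 0.0, None
    for i in range(n_steps + 1):
        r = math.exp(lo + i * h)
        f = 4.0 * math.pi * r**3 * rho(r) ** 2  # extra power of r from d(ln r)
        if prev is not None:
            total += 0.5 * (prev + f) * h
        prev = f
    return sigma_v * total / (2.0 * m_chi**2)

# Illustrative check: for rho(r) = r^(-9/4) with sigma_v = m_chi = 1, the
# closed form is (4 pi / 3) * (r_min^(-3/2) - r_max^(-3/2)).
gamma = annihilation_rate(lambda r: r ** -2.25, 1.0, 1.0, 1e-2, 1e6)
```

For a pure $\rho \propto r^{-9/4}$ profile the integral has a simple closed form against which the quadrature can be checked; the actual profiles used in this work are the truncated broken power laws introduced below.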
For p-wave annihilating dark matter, we estimate the radially-dependent annihilation cross section by assuming that the local distribution is approximately described by a Maxwell-Boltzmann distribution with a dispersion obtained by applying the virial theorem. We further assume that the gravitational potential is entirely dominated by the PBH contribution at the center of the UCMH (an assumption which we have verified has a negligible effect on the predicted annihilation rate). This allows us to parameterize the thermally averaged p-wave annihilation cross section as \begin{equation} \label{eq:sigmav_pwave} \left\langle \sigma_A v \right\rangle^{\mathrm{p-wave}} = \left\langle \sigma_A v \right\rangle_{\mathrm{fo}}\frac{v^2}{v_{\mathrm{fo}}^2}=\frac{\left\langle \sigma_A v \right\rangle_{\mathrm{fo}}}{v_{\mathrm{fo}}^2}\frac{G M_{\scaleto{\mathrm{PBH}}{4pt}} }{r}~, \end{equation} where the velocity dispersion at freeze out is estimated as $v_{\mathrm{fo}} \sim 0.3$~\cite{Kadota:2021jhg}. For WIMP masses $\gtrsim 5$~GeV, we compute the annihilation spectra $\frac{dN^{(c)}}{dE}$ in Eq.~(\ref{eq:dE_dV}) above using the publicly available tool PPPC4DMID \cite{Cirelli:2010xx}, and focus for simplicity on the case of pure annihilations into the $b\bar{b}$ and the $e^+e^-$ channels. In order to explore the parameter space of lighter dark matter candidates at sub-GeV scales we have used the spectra derived with the tool Hazma \cite{Coogan:2019qpu}, which employs chiral perturbation theory (with a Lagrangian that includes the SM, the dark sector, and hadrons) in order to calculate cross sections and spectra from decays, annihilations, and scatterings at next-to-leading order (NLO). The tool assumes the dark matter particle to be a Dirac fermion and considers models of scalar and vector mediators. We focus here on the case of a 100 MeV dark matter particle interacting with the Standard Model through a massive kinetically-mixed vector mediator. 
The dominant annihilation channel for this candidate is $e^+ e^-$, but there is also an annihilation into the $\pi^0 \gamma$ channel, suppressed at the $\mathcal{O}(10^{-6})$ level. The $e^+ e^-$ channel produces a monochromatic $e^\pm$ line at 100 MeV and a continuous gamma-ray spectrum from final state radiation/internal bremsstrahlung (FSR/IB). The highly suppressed $\pi^0 \gamma$ channel, on the other hand, contributes negligibly to the photon spectrum. For this light dark matter candidate we adopt an annihilation cross section of $\left\langle \sigma_A v \right\rangle=10^{-28}/f_\chi \; [\text{cm}^3 \text{s}^{-1}]$\footnote{It is worth noting that while s-wave annihilation cross sections below the thermal value tend to overproduce dark matter, this is not always the case, and overproduction can easily be avoided in non-minimal models.}, since this is approximately the maximum value allowed by $\textit{Planck}$~\cite{Planck:2018vyg} (note that p-wave annihilating dark matter is not constrained to this level; however, we fix the cross section to the same value for comparison purposes). \subsection{Ultra Compact Mini-Halos around PBHs} \label{sec:mass} The total amount of energy produced from dark matter annihilations in UCMHs depends both on the density profile of the UCMH and on the mass distribution of the PBHs. We shall discuss each of these below. PBHs are formed in the early Universe when a sufficiently large over-density crosses the cosmological horizon. Before kinetic decoupling at $t_{\scaleto{\mathrm{KD}}{4pt}}$, the radiation pressure does not allow PBHs to accrete a significant amount of mass~\cite{Eroshenko:2016yve}. However, after $t_{\scaleto{\mathrm{KD}}{4pt}}$, a WIMP spike forms as spherical shells enter the expanding region of influence of the PBH -- this process allows the spike to accrete a total mass that exceeds the PBH mass by up to two orders of magnitude.
The region over which the PBH exerts its gravitational influence is approximately defined by the radius $r_{\mathrm{infl}}$ at which WIMPs decouple from the Hubble expansion. We follow Ref.~\cite{Adamek:2019gns} in numerically estimating this radius as $r_{\mathrm{infl}} \simeq (2GM_{\scaleto{\mathrm{PBH}}{4pt}} t^2)^{\frac{1}{3}}~$. At matter-radiation equality the sphere of this radius, $r_{\mathrm{infl}}(t_{\mathrm{eq}}) \simeq (2GM_{\scaleto{\mathrm{PBH}}{4pt}}t_{\mathrm{eq}}^2)^{1/3}$, contains a mass comparable to the PBH mass. If the kinetic energy of the WIMPs is negligible compared to the gravitational potential energy, a power-law density profile scaling like $\rho \sim r^{-9/4}$ develops from the accreting material. N-body simulations and analytical calculations have shown that the $9/4$ density profile is a good approximation in the regime of large PBH and WIMP particle masses~\cite{Adamek:2019gns}. However, when the ratio of kinetic to potential energy cannot be neglected, one must consider that particles in the high energy tail may escape the gravitational pull of the PBH, while those at lower energies fall into bound orbits with varying angular momentum. These effects were first accounted for in Ref.~\cite{Eroshenko:2016yve} by adopting a Maxwell-Boltzmann distribution and integrating the phase-space of bound trajectories over their orbits. A semi-analytic calculation of the phase-space integral was provided in Ref.~\cite{Boucenna:2017ghj}, and later improved by Refs.~\cite{Boudaud:2021irr, Carr:2020mqm}. In this work, we shall use the latter analytic result, which in general gives rise to a broken triple power-law profile; dark matter annihilations, however, deplete the innermost regions (thus setting an upper limit on the dark matter density), such that the final density profile follows a truncated single, double, or triple power law.
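The statement that the sphere of radius $r_{\mathrm{infl}}(t_{\mathrm{eq}})$ encloses a mass comparable to $M_{\scaleto{\mathrm{PBH}}{4pt}}$ can be checked with a quick numerical sketch (Python; the value adopted for $t_{\mathrm{eq}}$ and the matter-dominated approximation $\rho \approx 1/(6\pi G t^2)$ are rough assumptions made for illustration only):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
T_EQ = 1.6e12        # s, rough age of the Universe at equality (assumption)

M_pbh = 100.0 * M_SUN
r_infl = (2.0 * G * M_pbh * T_EQ**2) ** (1.0 / 3.0)

# Approximate the background density at t_eq by the matter-dominated
# value rho ~ 1 / (6 pi G t^2) -- an order-of-magnitude assumption.
rho_eq = 1.0 / (6.0 * math.pi * G * T_EQ**2)
M_enclosed = 4.0 / 3.0 * math.pi * rho_eq * r_infl**3

ratio = M_enclosed / M_pbh  # analytically 4/9 under these assumptions
```

Under these assumptions the enclosed mass works out to $4M_{\scaleto{\mathrm{PBH}}{4pt}}/9$, independent of the PBH mass and of the precise value of $t_{\mathrm{eq}}$, i.e.\ of order the PBH mass, as stated above.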
Importantly, it is the ratio of the kinetic to potential energy which determines whether one expects a single power law profile (with a slope of $9/4$, occurring when kinetic energy is negligibly small), a double broken power law profile (with an inner slope of $3/2$, and an outer slope of $9/4$), or a triple broken power law (with slopes of $3/4$, $3/2$, and $9/4$, appearing at increasing radii, and occurring when the kinetic energy is large relative to the gravitational potential energy). The analytical expression for the density profile from Refs.~\cite{Boudaud:2021irr, Carr:2020mqm} is given by \begin{equation} \label{rho_chi} \begin{split} \rho_\chi(r) = \left\{ \begin{array}{ll} f_\chi \rho_{\scaleto{\mathrm{KD}}{4pt}}\left( \frac{r_C}{r }\right)^{\frac{3}{4}} & \mathrm{for\ } r \le r_{\scaleto{C}{4pt}}~,\\ f_\chi \frac{\rho_{\scaleto{\mathrm{eq}}{4pt}}}{2}\left( \frac{M}{M_\odot}\right)^{\frac{3}{2}}\left( \frac{\hat{r}}{r}\right)^{\frac{3}{2}} & \mathrm{for\ } r_{\scaleto{C}{4pt}} \le r \le r_{\scaleto{K}{4pt}}~,\\ f_\chi \frac{\rho_{\scaleto{\mathrm{eq}}{4pt}}}{2}\left( \frac{M}{M_\odot}\right)^{\frac{3}{4}}\left( \frac{\bar{r}}{r}\right)^{\frac{9}{4}} & \mathrm{for\ } r > r_{\scaleto{K}{4pt}}~, \end{array} \right. \end{split} \end{equation}\\ with $\hat{r}$ and $\bar{r}$ defined as \begin{equation} \label{r_hat} \hat{r} \equiv G M_\odot \frac{t_{\scaleto{\mathrm{eq}}{4pt}}}{t_{\scaleto{\mathrm{KD}}{4pt}}} \frac{m_\chi}{T_{\scaleto{\mathrm{KD}}{4pt}}}~, \;\;\;\; \bar{r} \equiv (2GM_\odot t^2_{\scaleto{\mathrm{eq}}{4pt}})^{\frac{1}{3}}~. 
\end{equation} Equating the first two analytic profiles of Eq.~(\ref{rho_chi}) and the last two, it is possible to obtain the values for $r_{\scaleto{C}{4pt}}$ and $r_{\scaleto{K}{4pt}}$, which are given by \begin{equation} \label{r_C} r_{\scaleto{C}{4pt}} = \frac{r_{\scaleto{S}{4pt}}}{2}\left(\frac{m_\chi}{T_{\scaleto{\mathrm{KD}}{4pt}}}\right)~, \;\;\;\; r_{\scaleto{K}{4pt}} = 4\frac{t_{\scaleto{\mathrm{KD}}{4pt}}^2 }{r_{\scaleto{S}{4pt}} }\left(\frac{T_{\scaleto{\mathrm{KD}}{4pt}}}{m_\chi}\right)^2 \, , \end{equation} where $r_{\scaleto{S}{4pt}}$ is the Schwarzschild radius. The time $t_{\scaleto{\mathrm{KD}}{4pt}}$ and temperature $T_{\scaleto{\mathrm{KD}}{4pt}}$ at kinetic decoupling are approximately given by \cite{Boucenna:2017ghj} \begin{equation} \label{TKD} T_{\scaleto{\mathrm{KD}}{4pt}} = \frac{m_\chi}{\Gamma[3/4]} \left(\frac{\alpha \cdot m_\chi}{M_{\scaleto{\mathrm{Pl}}{4pt}}}\right)^{\frac{1}{4}}, \;\;\;\; t_{\scaleto{\mathrm{KD}}{4pt}} = \frac{2.4}{\sqrt{g_{\scaleto{\mathrm{KD}}{4pt}}}} \left(\frac{T_{\scaleto{\mathrm{KD}}{4pt}}}{1 \; \mathrm{MeV}}\right)^{-2}~, \end{equation}\\ with $\alpha = (16\pi^3 \, g_{\scaleto{\mathrm{KD}}{4pt}}/45)^{1/2}$. For the sake of simplicity, we have fixed the relativistic degrees of freedom at kinetic decoupling to be $g_{\scaleto{\mathrm{KD}}{4pt}} = 61.75$. \begin{figure*} \centering \includegraphics[width=0.44\textwidth]{images/rho_mbh.pdf} \includegraphics[width=0.44\textwidth]{images/rho.pdf} \caption{The left (right) panel depicts the density profile before (after) WIMP annihilation as a function of radial distance. Results are illustrated for several PBH masses (left), and for several redshifts with $M_{\rm PBH} = 10^4 \, M_\odot$ (right). In the right panel, solid and dashed lines denote the effect of s-wave and p-wave WIMP annihilations, respectively.} \label{fig:rho} \end{figure*} The resulting density profiles are depicted in Fig.~\ref{fig:rho}.
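The structure of the broken power law of Eq.~(\ref{rho_chi}), together with the annihilation plateau $\rho_{\mathrm{max}} = m_\chi/(\langle\sigma_A v\rangle t)$ discussed below, can be sketched schematically in Python. The normalization here is dimensionless and the break radii are free inputs, i.e.\ this is an illustration of the shape of the profile, not the full dimensional expressions of Eqs.~(\ref{rho_chi})--(\ref{TKD}); the WIMP parameters in the example are likewise illustrative:

```python
def rho_max(m_chi_kg, sigma_v_m3s, t_s):
    """Annihilation plateau: rho_max = m_chi / (<sigma v> t), SI units."""
    return m_chi_kg / (sigma_v_m3s * t_s)

def ucmh_profile(rho_K, r_C, r_K, rho_cap=float("inf")):
    """Continuous triple broken power law with the slopes of Eq. (rho_chi):
    rho ~ r^{-3/4}, r^{-3/2}, r^{-9/4} at increasing radii. Normalized to
    rho_K at the outer break r_K; matching at the breaks fixes the inner
    normalizations, and rho_cap truncates the spike where annihilations
    have depleted it."""
    rho_C = rho_K * (r_K / r_C) ** 1.5  # value at the inner break r_C
    def rho(r):
        if r <= r_C:
            val = rho_C * (r_C / r) ** 0.75
        elif r <= r_K:
            val = rho_K * (r_K / r) ** 1.5
        else:
            val = rho_K * (r_K / r) ** 2.25
        return min(val, rho_cap)
    return rho

# Illustrative plateau: ~100 GeV WIMP, thermal s-wave cross section
# (3e-26 cm^3/s converted to m^3/s), t ~ age of the Universe.
GEV_TO_KG = 1.783e-27
cap_si = rho_max(100.0 * GEV_TO_KG, 3e-26 * 1e-6, 4.35e17)
```

Matching at $r_{\scaleto{C}{4pt}}$ and $r_{\scaleto{K}{4pt}}$ makes the profile continuous, and the cap mimics qualitatively the flattened central region seen in the right panel of Fig.~\ref{fig:rho} for s-wave annihilation.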
In the left panel we show the density as a function of radius (in units of $r_s$) for various PBH masses, while in the right panel we illustrate how WIMP annihilations modify the density profile in the central region of the mini-halo. This effect is shown for a $10^4 \, M_\odot$ PBH at various redshifts, assuming both s-wave (solid lines) and p-wave (dashed lines) annihilating WIMP dark matter. The presence of annihilations saturates the density to a maximum value $\rho_{\mathrm{max}}$, which is roughly given by \cite{Bertone:2005xz} \begin{equation} \label{rho_max} \rho_{\mathrm{max}} = \frac{m_\chi}{\left\langle \sigma_A v \right\rangle t} \, , \end{equation} where $t$ is the age of the PBH (which we approximate here by the age of the Universe). We note that the annihilation rate from an individual UCMH is dominated by the largest radius at which the density profile saturates to $\rho_{\mathrm{max}}$. Notice, from the right panel of Fig.~\ref{fig:rho}, that p-wave annihilation has two main effects on the annihilation rate. Firstly, the density profile near the PBH grows radially (unlike in the case of s-wave annihilations, where it is flat) due to the velocity dependence of the annihilation cross section, allowing the p-wave profile to reach larger densities than in the s-wave case. The enhancement in the annihilation rate from the larger densities, however, is offset by the suppression of the velocity-averaged annihilation cross section. The other crucial ingredient when modeling the net energy injection is the mass distribution of the PBHs. While a monochromatic PBH mass function is the most commonly adopted distribution, this is an unphysical choice motivated only by simplicity.
In this work we adopt both a monochromatic mass distribution, used for the sake of comparison with the broader literature on PBHs, and the more physically motivated log-normal mass function given by~\cite{Carr:2017jsz}: \begin{equation} \label{psi} \begin{split} \psi(M) \equiv \frac{1}{\bar{\rho}_{\scaleto{\mathrm{PBH}}{4pt}}} \frac{d\rho(M)}{dM}= \frac{1}{\sqrt{2\pi}\sigma M} \exp\left(-\frac{\ln^2(M/M_{\mathrm{pk}})}{2\sigma^2}\right)~, \end{split} \end{equation} \noindent where $M_{\mathrm{pk}}$ and $\sigma$ are the mass at the peak of the spectrum and its width, respectively. The analytic derivation of the s-wave annihilation rates in the monochromatic and broad PBH mass function cases is detailed in Appendices~\ref{sec:apb} and~\ref{sec:apd}, respectively (analytic expressions for p-wave annihilating dark matter and a monochromatic PBH mass function are also provided in Appendix~\ref{sec:apc}). \section{Methodology}\label{sec:methods} \subsection{Constraints from CMB} Since the energy deposition from the WIMP annihilation products occurs on scales much larger than the inter-UCMH distance, one can treat the cumulative energy injection from all UCMHs as uniform in the Intergalactic Medium (IGM). One can express the rate of energy deposition per unit volume as \begin{equation} \label{deposited} \left.\frac{dE}{dVdt}\right|_{\mathrm{dep}}(z) = f(z)\left.\frac{dE}{dVdt}\right\rvert_{\mathrm{inj}}(z)~, \end{equation} with $f(z)$, the energy deposition function, given by \cite{Slatyer:2012yq} \begin{equation} \label{deposited_func} \begin{split} f(z) = \frac{\int d\ln(1+z^{\prime})\frac{(1+z^{\prime})^3}{H(z^{\prime})}(1+B(z^{\prime}))}{\frac{(1+z)^3}{H(z)}(1+B(z))} \\ \, \times \, \frac{\sum_{l}\int T^{(l)}(z^{\prime}, z, E)E \left.\frac{dN}{dE}\right|^{(l)}_{\mathrm{inj}}dE}{\sum_l \int \left.E\frac{dN}{dE}\right|^{(l)}_{\mathrm{inj}}dE} \, .
\end{split} \end{equation} Here, $B$ is the boost factor and $dN/dE$ the energy spectrum of the different annihilation products, and we have introduced the transfer functions $T^{(l)}(z^{\prime}, z, E)$. The index $l$ identifies the photon and electron/positron final states. In the following, we use the publicly available transfer functions tabulated by Ref.~\cite{Slatyer:2015kla} to compute the fraction of the deposited energy going into heating, Lyman-$\alpha$ excitation, ionization of the neutral hydrogen and the ionization of the neutral helium. The boost factor, which governs the additional energy injection arising from annihilations in UCMHs (with respect to the isotropic background), is given by \begin{equation} \label{e7} \begin{split} B &\equiv \left.\frac{dE}{dVdt}\right\rvert_{\mathrm{inj}} \left(\left.\frac{dE}{dVdt}\right\rvert_{\mathrm{bkg}}\right)^{-1} \\ &= \frac{\Gamma_{\mathrm{ann}} f_{\scaleto{\mathrm{PBH}}{4pt}} \rho_{\scaleto{\mathrm{DM,0}}{5pt}} (1+z)^3}{ M_{\scaleto{\mathrm{PBH}}{4pt}}} \left( \left\langle \sigma_A v \right\rangle \frac{\rho_{\scaleto{\mathrm{DM,0}}{5pt}}^2 (1+z)^6}{2 m_\chi^2} \right)^{-1} \, . \end{split} \end{equation} We define $f_{\scaleto{\mathrm{PBH}}{4pt}} \equiv \Omega_{\scaleto{\mathrm{PBH}}{4pt}}/\Omega_{\scaleto{\mathrm{DM}}{4pt}}$ as the redshift independent PBH fraction, and since we are considering two possible dark matter contributions, $f_{\scaleto{\mathrm{PBH}}{4pt}}= 1-f_\chi$. In the equation above, $\rho_{\scaleto{\mathrm{DM,0}}{5pt}}$ refers to the current dark (total) matter density and $M_{\scaleto{\mathrm{PBH}}{4pt}}$ is the mass (mean mass) of the PBH for the case of a monochromatic (broad) PBH mass function. 
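As a concrete illustration of Eq.~(\ref{e7}), the boost factor can be written as a one-line function (Python; all arguments are assumed to be in mutually consistent units, and the quick check below uses dummy values, not physical inputs):

```python
def boost_factor(z, gamma_ann, f_pbh, m_pbh, sigma_v, m_chi, rho_dm0):
    """Eq. (e7): ratio of the UCMH injection rate to the smooth-background
    injection rate, for a redshift-independent single-halo rate gamma_ann."""
    inj_ucmh = gamma_ann * f_pbh * rho_dm0 * (1.0 + z) ** 3 / m_pbh
    inj_bkg = sigma_v * rho_dm0**2 * (1.0 + z) ** 6 / (2.0 * m_chi**2)
    return inj_ucmh / inj_bkg
```

Note that for fixed $\Gamma_{\mathrm{ann}}$ the boost grows toward low redshift as $(1+z)^{-3}$, which is why the UCMH contribution becomes increasingly important at late times relative to the smooth background.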
In order to derive the limits from cosmological observations, we have modified the Boltzmann code CLASS~\cite{Blas:2011rf} using its ExoCLASS package~\cite{Stocker:2018avm} to include the calculation of the redshift-dependent energy deposition functions, annihilation rates, and boost factors of the different models analyzed in this work. ExoCLASS relies on the recombination code RECFAST~\cite{Seager:1999bc}, which traces the cosmological evolution of the free electron fraction and gas temperature. We perform Markov Chain Monte Carlo (MCMC) likelihood analyses using the publicly available package MontePython~\cite{Audren:2012wb}. We vary eight parameters: six from the canonical $\Lambda \rm CDM$ model ($\Omega_b$, $\Omega_{\scaleto{\mathrm{CDM}}{4pt}}$, $H_0$, $\log(10^{10}A_S)$, $n_S$ and $\tau_{\scaleto{\mathrm{reio}}{4pt}}$) plus two model-dependent parameters, $f_{\scaleto{\mathrm{PBH}}{4pt}}$ and $M_{\scaleto{\mathrm{PBH}}{4pt}}$. In addition, we have two parameters which are implicitly derived from $f_{\scaleto{\mathrm{PBH}}{4pt}}$: $f_\chi$ and $\left\langle \sigma_A v \right\rangle$. In our analysis we use the following data sets: \begin{itemize} \item The Cosmic Microwave Background (CMB) temperature and polarization power spectra from the final release of $\textit{Planck}$ 2018 (in particular, we adopt the plikTTTEEE+lowl+lowE likelihood)~\cite{Aghanim:2018eyx,Aghanim:2019ame}, plus the CMB lensing reconstruction from the four-point correlation function~\cite{Aghanim:2018oex}. \item Baryon Acoustic Oscillation (BAO) distance and expansion rate measurements from the 6dFGS~\cite{Beutler:2011hx}, SDSS-DR7 MGS~\cite{Ross:2014qpa}, and BOSS DR12~\cite{Alam:2016hwk} galaxy surveys, as well as from the eBOSS DR14 Lyman-$\alpha$ (Ly$\alpha$) absorption~\cite{Agathe:2019vsu} and Ly$\alpha$-quasars cross-correlation~\cite{Blomqvist:2019rah}.
These consist of isotropic BAO measurements of $D_V(z)/r_d$ (with $D_V(z)$ and $r_d$ the spherically averaged volume distance and sound horizon at baryon drag, respectively) for 6dFGS and MGS, and anisotropic BAO measurements of $D_M(z)/r_d$ and $D_H(z)/r_d$ (with $D_M(z)$ the comoving angular diameter distance and $D_H(z)=c/H(z)$ the radial distance) for BOSS DR12, eBOSS DR14 Ly$\alpha$, and the eBOSS DR14 Ly$\alpha$-quasars cross-correlation. \end{itemize} \begin{figure} \centering \includegraphics[width=\linewidth]{images/broad3.pdf} \caption{$95\%$~CL constraints on the fraction of dark matter in the form of PBHs, $f_{\rm PBH}$, as a function of the PBH mass, assuming monochromatic and log-normal (with $\sigma = 2$) mass functions (shown as green and blue solid lines, respectively). Results are shown assuming the remaining dark matter is made of a $100$~GeV WIMP with s-wave annihilations to $b\bar{b}$. Results are compared to the limits one would obtain using a maximally conservative treatment of the isotropic gamma-ray background (dashed); see Sec.~\ref{sec:3b} for a more optimistic analysis. We also illustrate the constraints on the fraction of dark matter in the form of PBHs in the absence of UCMHs from a number of observational probes; see Ref.~\cite{Green:2020jor}. } \label{fig:results_broad} \end{figure} \subsection{Constraints from $\gamma$-ray observations} \label{sec:3b} A highly complementary probe of WIMP annihilation in UCMHs comes from the isotropic $\gamma$-ray background (IGRB), which has historically been the main observation used to constrain the hybrid WIMP-PBH dark matter scenario (see \eg~\cite{Lacki:2010zf, Josan:2010vn, Boucenna:2017ghj, Eroshenko:2016yve, Carr:2020mqm, Hertzberg:2020kpm, Adamek:2019gns, Kadota:2021jhg}). In order to highlight the benefits and drawbacks of the cosmological analysis presented here, we re-derive these IGRB constraints, showing that previous analyses have significantly over-estimated the sensitivity.
The IGRB is obtained by removing all the resolved extragalactic point sources from the extragalactic $\gamma$-ray background (EGRB). In this work we make use of the 50-month \textit{Fermi} Large Area Telescope (LAT) IGRB measurements, which have been obtained using the galactic diffuse emission model A from Ref.~\cite{Fermi-LAT:2015qzw} for the full energy range (spanning from 100 MeV to 820 GeV). We derive constraints using two approaches. Firstly, we adopt a maximally conservative approach, in which we make no further assumptions about the unresolved astrophysical contribution. Secondly, we consider a more optimistic approach, in which the contribution from unresolved extragalactic sources is subtracted from the EGRB. In the following, we shall refer to the more conservative model simply as the IGRB (using the \textit{Fermi}-LAT terminology) and to the one with a background model as the \emph{optimistic} IGRB. In our conservative approach, we require the integrated flux in every bin, as computed in Appendix~\ref{sec:ape}, not to exceed the observed IGRB flux from \textit{Fermi}-LAT~\cite{Fermi-LAT:2014ryh}. We do not attempt at this point to account for correlations among bins, but rather simply apply the $2\sigma$ upper limits in each bin, which account for both systematic and statistical uncertainties.\footnote{Example spectra are shown for a few cases alongside the observed data in Appendix~\ref{sec:ape}.} Our \emph{optimistic} approach follows the procedure outlined in Refs.~\cite{Boucenna:2017ghj,Adamek:2019gns,Kadota:2021jhg}. The idea here is to map the constraints on decaying dark matter obtained using \textit{Fermi}-LAT observations in the conservative IGRB model into constraints on WIMP annihilation near PBHs. In particular, Refs.~\cite{Boucenna:2017ghj,Adamek:2019gns,Kadota:2021jhg} derive upper limits on the PBH fraction from the results of Ref.~\cite{Ando:2015qda}, which derives lower limits on the lifetime of decaying dark matter.
In order to apply these results to the case of WIMP annihilations, one must assume that the annihilation rate is redshift independent; this assumption allows one to relate the constraints on the decay rate to those on $f_{\scaleto{\mathrm{PBH}}{4pt}}$ via \begin{equation} \label{eq:decay_ann} f_{\scaleto{\mathrm{PBH}}{4pt}}=\frac{\Gamma_{\mathrm{dec}}M_{\scaleto{\mathrm{PBH}}{4pt}}}{\Gamma_{\mathrm{ann}}m_\chi} \, . \end{equation} In the high PBH mass regime the annihilation rate scales approximately as $(1+z)$, and therefore the assumption that the annihilation rate is redshift independent is not strictly valid. The error introduced by this procedure yields a result larger by a factor of $\sim 1.5$. We correct for this in what follows by re-scaling the inferred constraint on $f_{\rm PBH}$ after applying Eq.~(\ref{eq:decay_ann}). \section{Results and comparison with previous analyses}\label{sec:results} \begin{figure*} \centering \includegraphics[width=0.48\linewidth]{images/bottom_swave.pdf} \includegraphics[width=0.48\linewidth]{images/bottom_pwave.pdf} \includegraphics[width=0.48\linewidth]{images/electron_swave.pdf} \includegraphics[width=0.48\linewidth]{images/electron_pwave.pdf} \caption{Top panels: The solid (dashed) lines depict the $95\%$~CL constraints on the PBH dark matter fraction as a function of the PBH mass from CMB (conservative $\gamma$-ray) observations in hybrid WIMP-PBH models for s-wave (left panel) and p-wave (right panel) annihilating WIMPs into the $b\bar{b}$ channel. The cross section is assumed to be $\left\langle \sigma_A v \right\rangle=3\cdot10^{-26}/f_\chi \; \text{cm}^3/\text{s}$ and $\left\langle \sigma_A v \right\rangle_{\mathrm{fo}}= 3\cdot10^{-26}/f_\chi \; \text{cm}^3/\text{s}$ for the s-wave and p-wave annihilation channels, respectively. We illustrate three possible WIMP dark matter masses: 10 GeV, 100 GeV, and 1 TeV. The top left panel also shows the $95\%$~CL limits from the \emph{optimistic} IGRB model (dotted lines); see main text for details.
Bottom left (right) panels: As in the top panels, but for s-wave (p-wave) annihilating WIMPs into the $e^+e^-$ channel.} \label{fig:results} \end{figure*} Figures \ref{fig:results_broad}, \ref{fig:results} and \ref{fig:results_ldm} illustrate the main findings of this work, showing constraints on $f_{\rm PBH}$ as a function of $M_{\rm PBH}$ for various scenarios. Figure~\ref{fig:results_broad} illustrates the limits derived for both the monochromatic and the broad log-normal (with $\sigma = 2$) PBH mass functions for a Majorana WIMP dark matter particle of $10$~GeV annihilating via the s-wave purely into the $b\bar{b}$ channel. At high PBH masses, the constraints are insensitive to the details of the mass function. However, at lower masses, an extended mass distribution results in constraints which are in general orders of magnitude stronger than those of the monochromatic distribution. In Fig.~\ref{fig:results_broad}, we also illustrate for comparative purposes the constraints derived using the conservative IGRB, which are roughly two orders of magnitude weaker than those from the CMB for $M_{\rm PBH} \gtrsim 10^{-6}M_\odot$, and comparable for smaller PBH masses. Figure \ref{fig:results} depicts the constraints on $f_{\rm PBH}$ for different WIMP candidates. Namely, we vary the WIMP mass, the annihilation channel, and the velocity dependence of the annihilation cross section. We show in the top (bottom) panels the annihilation of a 10 GeV, 100 GeV, and 1 TeV Majorana dark matter particle into the $b\bar{b}$ ($e^+e^-$) channel, assuming a monochromatic PBH mass function. The cases of s-wave (p-wave) annihilations are illustrated in the left (right) panels. As before, we show the conservative IGRB constraints using dashed lines, as well as the optimistic IGRB constraints in the top left panel using dotted lines. We note that in all cases, the cosmological constraints tend to be comparable to, or stronger than, those derived using the IGRB.
The strength of the cosmological constraints is particularly pronounced for the case of WIMPs annihilating into $e^+e^-$. This is due to the fact that $e^\pm$ pairs can efficiently heat and ionize the IGM, but only generate observable $\gamma$-rays via inverse Compton scattering (ICS) off the CMB photons. ICS generates a peak in the $\gamma$-ray spectrum at low energies, which are less constrained by $\gamma$-ray measurements (see the example spectra in Appendix~\ref{sec:ape}). The right panels of Fig.~\ref{fig:results} illustrate that constraints on $f_{\rm PBH}$ remain relatively strong in the case of p-wave annihilating dark matter. Unlike in the case of s-wave annihilation, the p-wave constraints do not saturate to a fixed value of $f_{\rm PBH}$ at large PBH masses. The results for the vector portal light dark matter model with a 100 MeV dark matter particle are shown in Fig.~\ref{fig:results_ldm}. The fact that the cosmological constraints are comparable to the IGRB limits implies that the \emph{optimistic} IGRB bounds would be stronger than those obtained using the CMB+BAO. For such models, one can only constrain PBH masses $M_{\scaleto{\mathrm{PBH}}{4pt}} \gtrsim 10^{-1} M_\odot$ (or $10^{2} M_\odot$ in the case of p-wave annihilating dark matter), as the kinematic suppression is increasingly strong at low dark matter masses. It is important to point out that the constraints derived here differ notably from a number of previous works on the subject. We highlight the relative differences for the IGRB and/or the CMB bounds in what follows.\\ {\bf Comparison with Adamek et al.~\cite{Adamek:2019gns}} \newline The optimistic IGRB limits derived in this work are weaker by a factor of $\mathcal{O}(10)$ for WIMP masses in the range 100 GeV to 1 TeV, and by a factor of $\mathcal{O}(10^2)$ for WIMP masses closer to 10 GeV, with respect to those derived in Ref.~\cite{Adamek:2019gns}.
\begin{itemize} \item The $\mathcal{O}(10)$ difference in the 100 GeV to 1 TeV WIMP mass range is due to the use of different cosmological parameters (the precise epoch and energy density at matter-radiation equality, accounting for a factor $\sim 5$) and also to a missing factor of 2 in the denominator of the formula for the annihilation rate of self-conjugate WIMPs. Additionally, our limits include the factor of $\sim 1.5$ introduced by correcting for the assumption of a redshift-independent annihilation rate (as mentioned above). \item The remaining $\mathcal{O}(10)$ difference at 10 GeV is due to the choice of the limit on $\Gamma_{\rm dec}$, which is taken from Fig.~3 of Ref.~\cite{Ando:2015qda}. While the authors of Ref.~\cite{Adamek:2019gns} adopt a single mass-independent lifetime limit of $10^{28}$ s (which is technically only valid for WIMP masses near 100 GeV), we have appropriately scaled the data to be valid at all WIMP masses. \end{itemize} {\bf Comparison with Carr et al.~\cite{Carr:2020mqm}} \newline The authors of Ref.~\cite{Carr:2020mqm} follow a slightly different approach. They compare the theoretical $\gamma$-ray signal today, integrated over all energies above 100 MeV, with a sky-integrated flux threshold of $\Phi_{\rm res} \sim 10^{-7} \, {\rm cm^{-2} \, s^{-1}}$. Compared to our \emph{optimistic} IGRB case, the constraints of Ref.~\cite{Carr:2020mqm} are more stringent by one, two, and four orders of magnitude for WIMP masses of 1 TeV, 100 GeV, and 10 GeV, respectively. These differences are mostly driven by their choice of the upper limit on the flux, $\Phi_{\rm res}$, which is approximately four orders of magnitude below the observed energy-integrated IGRB~\cite{Fermi-LAT:2014ryh} -- this difference is presumably attributable to the subtraction of the astrophysical background; however, it is unclear how such a background subtraction would be achieved.
In addition, our results have a smaller dependence on the WIMP mass (\eg our PBH limits for 1 TeV and 100 GeV are almost identical, while for \cite{Carr:2020mqm} they differ by about one order of magnitude). Their stronger mass dependence is a consequence of using a limit on the flux integrated over the entire $\textit{Fermi}$-LAT energy band instead of a binned analysis. There are other factors which also contribute to the difference, albeit to a lesser extent. These include a missing factor of $\frac{1}{2}$ in the annihilation rate, and the fact that we use the latest \textit{Planck} 2018 values for the cosmological parameters, such as $t_{\mathrm{eq}}$ and $\rho_{\mathrm{eq}}$. {\bf Comparison with Tashiro et al.~\cite{Tashiro:2021xnj}} \newline The most recent CMB constraints on the hybrid PBH-WIMP scenario have been derived in \cite{Tashiro:2021xnj}. This study, however, focused exclusively on the high PBH mass region, where the kinetic energy of the WIMPs can be neglected. The results of Ref.~\cite{Tashiro:2021xnj} for the $e^\pm$ channel do not differ significantly from the results presented here; however, we find discrepancies in the $b\bar{b}$ channel which lead to disagreements of up to one order of magnitude. There are a number of potential reasons for this discrepancy, most notably differences in the methodology employed. Namely, Ref.~\cite{Tashiro:2021xnj} uses the so-called ``on-the-spot'' approximation, which assumes that all energy is deposited at the redshift of injection. Furthermore, this approximation relies on simplified redshift-averaged fitting functions to estimate how energy is deposited into the system. \begin{figure} \centering \includegraphics[width=1\linewidth]{images/ldm.pdf} \caption{Same as Fig.~\ref{fig:results}, but for the case of a 100 MeV particle annihilating through a vector portal coupling. As detailed in Sec.~\ref{sec:dm}, for this particle mass the dominant annihilation channel is $e^+ e^-$.
The cross section at freeze out is assumed to be $\left\langle \sigma_A v \right\rangle= 10^{-28}/f_\chi$ $\text{cm}^3$/s for both the s-wave and p-wave cases. } \label{fig:results_ldm} \end{figure} \section{Conclusions}\label{sec:conc} The discovery of gravitational waves from the coalescence of binary systems of black holes has revived interest in scenarios where the Cold Dark Matter (CDM) can be in the form of Primordial Black Holes (PBHs). While PBHs with masses $M_{\scaleto{\mathrm{PBH}}{4pt}} \gtrsim 10^{-11} \, M_\odot$ cannot comprise the entirety of dark matter, the presence of even a small abundance of such objects can have profound consequences. These objects can efficiently accrete large amounts of the primary dark matter component in the early Universe, forming high density spikes. If the major dark matter component consists of WIMPs, the high densities achieved in these spikes dramatically enhance the WIMP annihilation rate, thereby leading to strong observable effects in astrophysics and cosmology. Here, we revisit cosmological and astrophysical constraints on the hybrid WIMP-PBH dark matter scenario. This work offers a major improvement with respect to previous cosmological analyses, including the latest observational data, a proper treatment of energy deposition and propagation in the IGM, a WIMP density profile incorporating the kinematic suppression arising at low WIMP/PBH masses, and a thorough investigation into a broad array of scenarios (including extended PBH mass functions, s- and p-wave annihilations, and a wide variety of WIMP models). For comparison, we re-derive the astrophysical constraints from gamma-ray observations of the IGRB -- importantly, we find that previous results have notably overestimated the constraints on the PBH fraction from these measurements (in some cases by many orders of magnitude).
For most scenarios we find that the cosmological constraints from CMB and BAO observations are comparable to, or even slightly stronger than, those derived from the IGRB. Despite the fact that our results differ quantitatively from those previously derived in the literature, the conclusions are roughly the same: if WIMPs comprise the primary component of dark matter, PBHs are severely constrained from contributing notably to the dark matter density. Exceptions remain, however, for PBH masses $M_{\scaleto{\mathrm{PBH}}{4pt}} \lesssim 10^{-6} \, M_\odot$, and very light p-wave annihilating dark matter. \begin{acknowledgments} OM is supported by the Spanish grants PID2020-113644GB-I00, PROMETEO/2019/083 and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019//860881-HIDDeN). SJW is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 864035 - Un-Dark). This research was supported by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2094 -- 390783311. \end{acknowledgments} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{section}{0} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\thetable}{S\arabic{table}} \onecolumngrid \clearpage
\section{Introduction}\label{sect:Introduction} Free-space optical (FSO) communication has been widely used for ground-to-satellite communication systems due to its high transmission rate and high reliability \cite{9138713,9319151,8438300}. In practical applications, fluctuations of the air refractive index \cite{5439306,8438298} generate atmospheric turbulence (also known as scintillation) in FSO communication. Atmospheric turbulence causes random fluctuations of the amplitude and phase of the received signal, which seriously deteriorate the system performance \cite{6844864,9534661}. To quantify the effect of atmospheric turbulence, several mathematical models have been proposed to describe the distribution of turbulence fading in FSO communication systems. For example, the lognormal distribution has been used to describe the weak turbulence channel in FSO communication systems \cite{9551206,8959173}. The weak turbulence channel is one of the most typical channel models for FSO communication scenarios and has attracted significant research interest over the past two decades \cite{1221769,9099546}. In FSO communication systems operating under weak turbulence, the optical signal is transmitted using intensity modulation with direct detection (IM/DD) \cite{9321161}, mainly with pulse position modulation (PPM) \cite{6581873} or on-off keying (OOK) \cite{8862937}. Although OOK provides high bandwidth efficiency, it suffers from low energy efficiency and high synchronization complexity at the receiver \cite{5259916}. As an alternative, PPM can improve the energy efficiency by increasing the modulation order \cite{6581873}. However, this improvement is obtained at the cost of bandwidth efficiency, which leads to limited transmission capacity.
As a variant of PPM, multipulse position modulation (MPPM) has been proposed to improve the bandwidth efficiency by utilizing multiple pulsed slots during each symbol transmission \cite{16882}. In the past decade, researchers have conceived various designs of MPPM constellations in FSO communication systems. In \cite{5439306}, a Gray labeling search (GLS) algorithm, which considers the Hamming distance between adjacent symbols, has been proposed to construct a new type of MPPM constellations. Furthermore, the authors in \cite{7833038} have proposed a new subset selection (MCSS) algorithm to construct MPPM constellations. The above two MPPM constellations have only been employed in the single-input single-output (SISO) setting, which cannot attain spatial diversity. To tackle this issue, the multiple-input multiple-output (MIMO) technique has been introduced in FSO communication systems \cite{9790303}. To be specific, in \cite{6241395}, a low-complexity spatial pulse position modulation (SPPM) scheme that combines space shift keying (SSK) with PPM has been proposed. Furthermore, the authors in \cite{7881027} have developed a generalized SPPM (GSPPM) scheme by using multiple transmit antennas to send the same PPM symbols at each transmission instant. In addition, a spatial MPPM (SMPPM) scheme adopting SSK and MPPM has been considered in \cite{8647138}. So far, an in-depth investigation of generalized SMPPM (GSMPPM) is still lacking. Actually, a GSMPPM constellation includes a spatial-domain constellation (i.e., effective activated antenna groups) and a signal-domain constellation (i.e., MPPM constellations). The sizes of both types of constellations must be powers of two, and thus part of the antenna groups and MPPM symbols remain idle. Therefore, how to design an efficient GSMPPM constellation that uses more antenna groups and MPPM symbols is a challenging problem. An alternative method to improve the system performance is the employment of error-correction codes (ECCs).
For example, Reed-Solomon (RS) codes \cite{5259916,9503410} and low-density parity-check (LDPC) codes \cite{9272655,photonics9050349} have been used in FSO systems. Among all ECCs, a class of structured LDPC codes, i.e., protograph LDPC (PLDPC) codes \cite{thorpe2003low}, has received tremendous attention due to its low complexity and close-to-capacity performance \cite{905935}. Moreover, researchers have proposed a protograph extrinsic information transfer (PEXIT) algorithm \cite{7112076} for predicting the decoding thresholds of PLDPC codes in specific communication scenarios. With the aid of the PEXIT algorithm, the authors of \cite{6663748} have constructed a PLDPC code (i.e., code-B) for Poisson channels. Nevertheless, the code-B PLDPC code may not perform well over weak turbulence channels due to their different distribution characteristics. Hence, it is crucial to design a type of PLDPC code tailored for the PLDPC-coded GSMPPM system over such a scenario. Motivated by the above observations, we make a comprehensive investigation of PLDPC-coded GSMPPM systems over weak turbulence channels. The contributions of this work are summarized as follows. \begin{enumerate}[1.] \item We propose a new GSMPPM scheme based on an asymmetric dual-mode (ADM) constellation search algorithm by removing the constraint that the sizes of the spatial-domain constellation and the signal-domain constellation must be powers of two. \item We analyze the constellation-constrained capacity of the proposed GSMPPM constellations for different numbers of MPPM slots. \item With the aid of the PEXIT algorithm, we construct an improved PLDPC code (i.e., the IM-PLDPC code) that achieves excellent decoding thresholds in the GSMPPM system. \item Analytical and simulation results reveal that the proposed PLDPC-coded ADM-aided GSMPPM system remarkably outperforms the existing counterparts over weak turbulence channels. \end{enumerate} The remainder of this paper is organized as follows.
In Section~\ref{sect:Systems Model}, we propose the PLDPC-coded GSMPPM system and estimate its constellation-constrained capacity. In Section~\ref{sec:Design}, we present the design method of the ADM constellations. In Section~\ref{sec:Design_PLDPC}, we construct a new PLDPC code for the PLDPC-coded GSMPPM system. Simulation results are presented in Section~\ref{sec:simulation}, and conclusions are drawn in Section~\ref{sec:conclusions}. \begin{figure}[t] \centering \hspace{0mm} \includegraphics[width=4.2in,height=2.2in]{{system_model.eps}} \caption{Block diagram of a PLDPC-coded GSMPPM system over a weak turbulence channel.}\vspace{-2mm} \label{fig:system_model} \end{figure} \section{System Model}\label{sect:Systems Model} \subsection{PLDPC-Coded GSMPPM System} \label{subsec:Systems} The block diagram of a PLDPC-coded GSMPPM system is shown in Fig.~\ref{fig:system_model}, which has $N_{\rm t}$ transmit antennas and $N_{\rm r}$ receive antennas. At each transmission instant, ${N_{\rm{a}}}~(2\le{N_{\rm{a}}}\le{N_{\rm{t}}})$ out of the $N_{\rm{t}}$ transmit antennas are chosen as an activated antenna group to transmit signals. In the conventional GSMPPM system, ${2}^{\lfloor \log_{2} {N_{\rm{s}}} \rfloor}$ effective activated antenna groups are selected to transmit symbols, and each activated antenna group is used to transmit an MPPM symbol, where $N_{\rm s} = \binom{N_{\rm t}}{N_{\rm a}}$ is the number of all possible activated antenna groups, $\binom{a}{b}$ denotes the combinatorial number operator and $\lfloor \cdot \rfloor$ denotes the floor function. Specifically, at the transmitter, a length-$k$ information-bit sequence ${\rm{\bu}}=\left\{{u_{1}},{u_{2}},\ldots ,{u_{k}} \right\}$ is first encoded by a PLDPC encoder to generate a length-$s$ codeword ${\rm{\bc}}=\left\{{{c}_{1}},{{c}_{2}},\ldots ,{{c}_{s}} \right\}$. Subsequently, $\rm{\bc}$ is passed to a GSMPPM modulator after being permuted by a random interleaver.
After that, every $m$ coded bits are divided into two parts: the first ${m_{\rm{t}}}=\lfloor \log_{2} {N_{\rm{s}}} \rfloor$ coded bits are used to select the effective activated antenna group, while the remaining ${m_{\rm{s}}}=\log_{2} M$ coded bits are mapped into an $M$-ary MPPM symbol. Thus, a length-$(s/m)$ GSMPPM symbol sequence ${\rm{\bZ}}=\left\{ {{\rm{\bz}}_{1}},{{\rm{\bz}}_{2}},\ldots ,{{\rm{\bz}}_{s/m}} \right\}$ is obtained. Generally, each MPPM symbol consists of $l$ slots, with ${l_{\rm{a}}}\left( 2\le {l_{\rm{a}}}<l/2 \right)$ pulsed slots and $\left( l-{l_{\rm{a}}} \right)$ non-pulsed slots. Thereby, the MPPM symbol can be characterized by a length-$l$ vector ${\rm{\bz}}_{p}=\left[ z_{p}^{1},z_{p}^{2},\ldots ,z_{p}^{l} \right]$, where $z_{p}^{q}\in \left\{ 0,1 \right\}$, $q=1,2,\ldots ,l$, $p=1,2,\ldots ,s/m$. In the vector ${\rm{\bz}}_{p}$, ``$1$'' denotes a pulsed slot and ``$0$'' a non-pulsed slot. Note that we use $(N_{\rm t},N_{\rm r},N_{\rm a},l,l_{\rm a},M_{\rm s})$ to represent a GSMPPM constellation of size $M_{\rm s} = 2^{m}$ in this paper. Finally, the GSMPPM symbol sequence ${\rm{\bZ}}$ is passed through a weak turbulence channel and each GSMPPM symbol ${\rm{\bz}}_{p}$ is converted into a transmission vector ${\rm{\bx}}$.
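As a concrete illustration of the bit split and the MPPM symbol structure described above, the following Python sketch enumerates the activated antenna groups and the full MPPM symbol set, and computes $m_{\rm t}$ and $m_{\rm s}$. The parameter values ($N_{\rm t}=4$, $N_{\rm a}=2$, $l=5$, $l_{\rm a}=2$) are illustrative assumptions matching the examples given later; the sketch is not part of the proposed scheme.

```python
from itertools import combinations
from math import comb, floor, log2

# Illustrative parameters (assumed for this sketch).
N_t, N_a = 4, 2      # transmit antennas, activated antennas per instant
l, l_a = 5, 2        # slots per MPPM symbol, pulsed slots

# Spatial domain: all possible activated antenna groups.
groups = list(combinations(range(1, N_t + 1), N_a))
N_s = comb(N_t, N_a)
m_t = floor(log2(N_s))            # bits selecting the effective antenna group

# Signal domain: full MPPM symbol set Phi (length-l binary vectors with l_a ones).
Phi = [tuple(1 if i in idx else 0 for i in range(l))
       for idx in combinations(range(l), l_a)]
M_max = comb(l, l_a)
M = 2 ** floor(log2(M_max))       # largest power-of-two sub-constellation size
m_s = floor(log2(M))              # bits mapped to an MPPM symbol

print(N_s, m_t, M_max, M, m_s)    # 6 antenna groups (2 bits); 10 symbols, M = 8 (3 bits)
```

With these values, $m = m_{\rm t} + m_{\rm s} = 5$ bits per GSMPPM symbol, consistent with a spectral efficiency of 5 bpcu.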
The channel output $\rm{\by}$ can be written as \begin{equation}\label{eq1} {\rm{\by}}=\frac{P_{\rm{t}}}{\sqrt{{N_{\rm{a}}}}} {\rm{\bH}} {\rm{\bx}} + {\rm{\bw}}, \end{equation} where ${\rm{\by}} = [{\by}_1,{\by}_2,\ldots, {\by}_{N_{\rm r}} ]^{{\rm \bT}}$ denotes the signal vector of size $N_{\rm r} \times 1$ received by all receive antennas, and ${\by}_i = [{y}_i^{1}, {y}_i^{2}, \ldots, {y}_i^{l}] $ is the received signal at the $i$th receive antenna; ${\rm{\bx}} = [{\bx}_1,{\bx}_2,\ldots, {\bx}_{N_{\rm t}} ]^{{\rm \bT}}$ denotes the GSMPPM signal vector of size $N_{\rm t} \times 1$ sent by the transmit antennas, and ${\bx}_j = [{x}_j^{1}, {x}_j^{2}, \ldots, {x}_j^{l}]$ is the GSMPPM signal sent by the $j$th transmit antenna; ${\rm{\bw}}$ denotes the noise matrix of size ${N}_{\rm{r}} \times l$, in which each element is real additive Gaussian noise with zero mean and variance ${{\sigma}^{2}}={{{N}_{0}}}/{2}$, where ${N}_{0}$ is the noise power spectral density. Moreover, ${{P}_{\rm{t}}}= {P}_{\rm{a}}\cdot \gamma$ denotes the peak transmit power, where $P_{\rm{a}}$ is the average transmit power (all modulation patterns use the same ${P}_{\rm{a}}$), $\gamma = {1}/{\tau}$ is the peak-to-average power ratio (PAPR), and $\tau ={{l}_{\rm{a}}}/{l}$ is the duty cycle of each MPPM symbol. In addition, ${\rm{\bH}}=({h}_{ij})$ denotes the channel gain matrix of size $N_{\rm r} \times N_{\rm t}$, where ${h}_{ij}$ is the channel coefficient from the $j$th transmit antenna to the $i$th receive antenna.
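To make the signal model concrete, the following Python sketch generates one GSMPPM transmission through Eq.~\eqref{eq1}. All parameter values are illustrative assumptions; the relation $\mu=-2\sigma_{\rm x}^{2}$ used to draw the fading coefficients is derived from the normalization $\mathbb{E}[h_{ij}^{2}]=1$ adopted in this section, not stated explicitly in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions for this sketch): 4x4 MIMO, N_a = 2
# activated antennas, l = 5 slots with l_a = 2 pulsed slots per MPPM symbol.
N_t, N_r, N_a = 4, 4, 2
l, l_a = 5, 2
sigma_I = 0.5                      # scintillation index (sigma_I^2 < 1)
P_t = 1.0 * (l / l_a)              # peak power P_t = P_a * gamma, with P_a = 1
sigma2 = 0.1                       # noise variance N_0 / 2 (illustrative)

# Lognormal fading with E[h^2] = 1: from the PDF in the text,
# ln(h) ~ N(2*mu, (2*sigma_x)^2), so E[h^2] = exp(4*mu + 8*sigma_x^2) = 1
# forces mu = -2*sigma_x^2 (derived assumption).
sigma_x2 = 0.25 * np.log(1.0 + sigma_I**2)
mu = -2.0 * sigma_x2
H = rng.lognormal(mean=2.0 * mu, sigma=2.0 * np.sqrt(sigma_x2), size=(N_r, N_t))

# One GSMPPM transmission: antennas (1, 2) active, both sending the MPPM
# symbol with pulses in the first two slots.
x = np.zeros((N_t, l))
x[0] = x[1] = np.array([1, 1, 0, 0, 0], dtype=float)

w = rng.normal(scale=np.sqrt(sigma2), size=(N_r, l))   # AWGN with variance N_0/2
y = (P_t / np.sqrt(N_a)) * H @ x + w                   # Eq. (1)
print(y.shape)   # (4, 5): N_r receive antennas, l slots each
```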
The channel coefficient ${h}_{ij}$ follows a lognormal distribution, and its probability density function (PDF) is given by \cite{9551206} \begin{equation}\label{eq2} f({{h}_{ij}})=\frac{1}{2{{h}_{ij}}\sqrt{2\pi}{{\sigma }_{\rm{x}}}}\exp \left( -\frac{{{\left( \ln \left( {{h}_{ij}} \right)-2\mu \right)}^{2}}}{8\sigma _{\rm{x}}^{2}} \right), \end{equation} where $\sigma_{\rm{x}}^{2}=0.25\ln\left( 1+\sigma _{\rm{I}}^{2} \right)$ and ${{\sigma}_{\rm{I}}}\left( \sigma _{\rm{I}}^{2} < 1 \right)$ is the channel scintillation index \cite{8370053}. To ensure that the average power is not affected by channel fading, the channel coefficients are normalized such that $\mathbb{E}\left[h_{ij}^{2} \right]=1$, where $\mathbb{E}\left[ \cdot \right]$ stands for the expectation operator. The signal-to-noise ratio (SNR) can be expressed as \cite{6241395,7881027} ${\rm SNR} = \left( {{l}_{\rm{a}}}P_{\rm{t}}^{2} \right)/\left( 2 R m{{\sigma }^{2}} \right)$, where $R$ is the code rate. At the receiver, the received signal $\by$ is detected by the max-sum approximation of the log-domain maximum a posteriori probability (Max-Log-MAP) algorithm \cite{7341061,5259916} in the GSMPPM detector. Then, the extrinsic log-likelihood ratios (LLRs) output by the GSMPPM detector are sent to a deinterleaver. After that, these extrinsic LLRs are fed to a PLDPC decoder to perform belief-propagation (BP) \cite{661110, 8693678} decoding. \subsection{Constellation-Constrained Capacity}\label{subsec:AMI} Given the channel state information (CSI), the maximum rate of a reliable transmission can be determined by the average mutual information (AMI) \cite{9367298}.
Assuming that the GSMPPM symbols selected from the GSMPPM constellation are equiprobable, the constellation-constrained AMI of a coded modulation (CM) scheme over a weak turbulence channel can be calculated as \cite{8237200} \begin{equation}\label{eq3} {{C}_{\rm{CM}}} = m-{{\mathbb{E}}_{\rm{\bx}, \rm{\by}, \rm{\bH}}}\left[ {{\log}_{2}}\frac{\sum\nolimits_{\rm{\br}\in \Omega }{p\left( \rm{\by} \mid \rm{\br},\rm{\bH} \right)}}{p\left( \rm{\by} \mid \rm{\bx}, \rm{\bH} \right)} \right], \end{equation} where \begin{equation}\label{eq4} \begin{aligned} p\left( {\rm{\by}} \mid {\rm{\bx}}, \rm{\bH} \right)&= \prod_{i=1}^{N_{\rm r}} \prod_{q=1}^{l} p\left(y_{i}^{q} \mid {\rm{\bx}}, \rm{\bH}\right) \\ &=\frac{\exp \left[-\frac{\sum\nolimits_{i=1}^{N_{\rm r}} \sum\nolimits_{q = 1}^{l} \left(y_{i}^{q}- \sum\nolimits_{j=1}^{N_{\rm t}} \frac{P_{\rm t}}{\sqrt{N_{\rm{a}}}} {h_{ij}} x_{j}^{q}\right)^{2}} {2 \sigma^{2}}\right]}{\left(2 \pi \sigma^{2}\right)^{l N_{\rm r}/2}}. \end{aligned} \end{equation} In Eq.~\eqref{eq3}, $\Omega $ denotes a GSMPPM constellation, $y_{i}^{q}$ denotes the received signal of the $q$th slot at the $i$th receive antenna, $x_{j}^{q}$ denotes the signal transmitted in the $q$th slot by the $j$th transmit antenna, and $p({\rm{\by}} \mid {\rm{\bx}} , {\rm{\bH}} )$ is the conditional PDF of the channel output $\rm{\by}$. In addition, the constellation-constrained AMI of a bit-interleaved coded modulation (BICM) scheme can be calculated as \cite{8237200} \begin{equation}\label{eq5} {{C}_{\rm{BICM}}} = m-\sum\limits_{k=1}^{m}{{{\mathbb{E}}_{b,{\rm{\by}},{\rm{\bH}}}}\left[ {{\log }_{2}}\frac{\sum\nolimits_{{\rm{\bx}}\in \Omega }{p\left( {\rm{\by}} \mid {\rm{\bx}},{\rm{\bH}} \right)}}{\sum\nolimits_{{\rm{\bx}}\in \Omega _{k}^{b}}{p\left( {\rm{\by}} \mid {\rm{\bx}},{\rm{\bH}} \right)}} \right]}, \end{equation} where $\Omega_{k}^{b}$ denotes the subset of the GSMPPM constellation $\Omega$ with the $k$th bit being $b\in \left\{ 0,1 \right\}$.
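In practice, Eq.~\eqref{eq3} is evaluated by Monte Carlo averaging. The Python sketch below is an illustrative toy example, not the full GSMPPM setup: it assumes a hypothetical 4-symbol constellation, $\rm{\bH}=\mathbf{I}$, unit amplitude and an arbitrarily chosen noise variance. It estimates $C_{\rm CM}$ by sampling equiprobable symbols, adding Gaussian noise, and averaging the log-ratio; the normalization constant of $p(\rm{\by}\mid\rm{\bx},\rm{\bH})$ cancels in the ratio, so only squared distances are needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy constellation Omega: 4 MPPM-like vectors (m = 2 bits), AWGN with H = I
# and unit amplitude -- an illustrative stand-in for a GSMPPM signal set.
Omega = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1]], dtype=float)
m = int(np.log2(len(Omega)))
sigma2 = 0.25          # noise variance per slot (illustrative)
n_trials = 200_000

idx = rng.integers(len(Omega), size=n_trials)   # equiprobable symbols
x = Omega[idx]
y = x + rng.normal(scale=np.sqrt(sigma2), size=x.shape)

# log p(y|r) up to a common constant: -||y - r||^2 / (2 sigma^2).
d2 = ((y[:, None, :] - Omega[None, :, :]) ** 2).sum(axis=2)
logp = -d2 / (2.0 * sigma2)

# C_CM = m - E[ log2( sum_r p(y|r) / p(y|x) ) ], cf. Eq. (3) with H = I,
# computed with a log-sum-exp for numerical stability.
log_num = np.log(np.exp(logp - logp.max(axis=1, keepdims=True)).sum(axis=1)) \
          + logp.max(axis=1)
log_den = logp[np.arange(n_trials), idx]
C_CM = m - np.mean((log_num - log_den) / np.log(2.0))
print(C_CM)            # between 0 and m = 2 bits
```

Since the sum in the numerator always contains the transmitted symbol, the estimate satisfies $0 \le C_{\rm CM} \le m$ by construction.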
\section{Design of Proposed ADM Constellations for the PLDPC-Coded GSMPPM System}\label{sec:Design} \subsection{Proposed ADM Constellations}\label{subsec:ADM} In the conventional GSMPPM modulation scheme, the $m_{\rm t}$ coded bits are used to select the effective activated antenna groups, while the $m_{\rm s}$ coded bits are modulated by using MPPM symbols for the effective activated antenna groups. Note that each effective activated antenna group shares the same MPPM constellation set $\Psi = \{\Psi_1,\Psi_2,\ldots,\Psi_M\}$. In practice, the signal-domain constellation set $\Phi$ consists of the full MPPM symbols with size $M_{\max }=\binom{l}{l_{\rm a}}$, while the spatial-domain constellation includes $N_{\rm s}$ possible activated antenna groups. As seen, the sizes of both constellations are typically not powers of two. For this reason, $\Phi$ cannot be directly used for bit-to-symbol mapping in the proposed system, and certain activated antenna groups cannot be fully exploited. For instance, when $N_{\rm t} = 4 $ and $N_{\rm a} = 2$, $(1, 2)$, $(3, 4)$, $(1, 4)$ and $(2, 3)$ are selected as effective activated antenna groups, while the remaining activated antenna groups $(1, 3)$ and $(2, 4)$ are idle, where $1$, $2$, $3$ and $4$ denote the corresponding transmit antenna indices. Meanwhile, in the existing works, e.g., the GLS \cite{5439306} and MCSS \cite{7833038} constellations, only $M$ MPPM symbols are selected from the set $\Phi$ as an eligible sub-constellation set $\Psi \subset \Phi$ to perform the bit-to-symbol mapping. To overcome this weakness, we propose a type of asymmetric dual-mode (ADM) constellations that exploit the full MPPM symbol set and all possible activated antenna groups as much as possible. The detailed design steps of the proposed ADM constellations in the PLDPC-coded GSMPPM system are as follows.
\vspace{0mm} \begin{algorithm}[t] \caption{ADM Constellation Parameter Selection}\label{alg:1} \textbf{Initialization}: Given parameters ${N_{\rm t}}$, ${N_{\rm a}}$, $l$ and ${l_{\rm a}}$, calculate $N_{\rm s}$, $M_{\max }$, $M_{\rm s}$ and $M\leftarrow {{2}^{\lfloor \log_{2} M_{\rm max} \rfloor}}$. Set $Th = 0$, $i$ = $0$, ${N_{\rm add}} \leftarrow N_{\rm s}-2^{m_{\rm t}}$.\\ \While{$Th == 0$} {Calculate ${M_{\rm A}}\leftarrow \left\lceil {M_{\rm s}}/{N_{\rm s}}\right\rceil+i$ and ${M_{\rm B}}$;\\ \If {$({M_{\rm A}} > {M_{\rm B}})$ \&\& $({M_{\rm A}} + {M_{\rm B}}) \leq M_{\max}$ } {{\rm $Th$} = 1;\\ \textbf{continue}} {${N_{\rm add}} \leftarrow {N_{\rm add}} - 1$;}\\ \If {${N_{\rm add}} == 0$}{Reset ${N_{\rm add}}$ and $i \leftarrow i + 1$;}} {\textbf{Finalization}: Output parameters: ${N_{\rm add}}$, $M_{\rm max}$, $M$, $m$, ${M_{\rm A}}$ and ${M_{\rm B}}$.} \end{algorithm} \begin{enumerate}[1)] \item \textbf{\emph{ADM Constellation Parameter Selection}}: In order to utilize all activated antenna groups as much as possible, determining the number ${N_{\rm add}}$ of additional activated antenna groups is one of the most critical issues. Given the system parameters $N_{\rm{t}}$, $N_{\rm{a}}$, $l$, ${l}_{\rm{a}}$ and $M_{\rm s}$, the number of all possible activated antenna groups can be calculated as $N_{\rm s}$, and the number of full-MPPM symbols is $M_{\rm max }$. First, we randomly select $2^{m_{\rm t}}$ activated antenna groups as the effective ones; thus, the number of the remaining activated antenna groups is ${N_{\rm s}}-2^{m_{\rm t}}$. The additional activated antenna groups are selected from the remaining activated antenna groups. We define two MPPM constellation sets ${{\Psi }_{\rm A}}$ and ${{\Psi }_{\rm B}}$.
As such, each effective activated antenna group sends GSMPPM symbols by using the constellation set ${{\Psi }_{\rm A}}$ of size ${M_{\rm A}}$, while each additional activated antenna group sends GSMPPM symbols by using the constellation set ${{\Psi }_{\rm B}}$ of size ${M_{\rm B}}$. Finally, the relationship among ${M_{\rm A}}$, ${M_{\rm B}}$ and ${N_{\rm add}}$ can be represented as \begin{equation}\label{eq6} \begin{aligned} & {M_{\rm B}} = \lceil {\left({M_{\rm s}}-{ 2^{m_{\rm t}} }\cdot {M_{\rm A}}\right)}/{N_{\rm add}} \rceil, \\ & {\rm subject~to}:~{M_{\rm A}} > {M_{\rm B}}, \\ &~~~~~~~~~~\quad\quad {M_{\rm A}} + {M_{\rm B}} \leq M_{\max}, \end{aligned} \end{equation} where the initial value of ${M_{\rm A}}$ is set to $\lceil{M_{\rm s}}/{N_{\rm s}}\rceil$ and the initial value of ${N_{\rm add}}$ is set to $N_{\rm s}- 2^{m_{\rm t}} $. To elaborate further, the ADM constellation parameter selection is summarized in {\em Algorithm \ref{alg:1}}. \begin{algorithm}[t] \caption{ADM Constellation Formulation}\label{alg:2} \textbf{Initialization}: Given parameters ${N_{\rm add}}$, ${M_{\rm A}}$, ${M_{\rm B}}$, $M_{\rm max}$, $M$, $m$, $l$ and ${{l}_{\rm a}}$, generate a full-MPPM symbol set $\Phi$ of size $M_{\max}$. Set $D_{{\rm a}} = 0$, $D_{{\rm b}} = 0$, Label = $0$.
\\ \For{$p = 1, 2, \ldots, \binom{M_{\rm max}}{M_{\rm A}}$} {Generate a sub-constellation ${\Psi }_{{\rm A}_p}$ and calculate $D_{{\rm{a}},p}$;\\ \If{$D_{{\rm{a}},p} > D_{{\rm a}}$}{Label $\leftarrow p$; $D_{{\rm a}} \leftarrow D_{{\rm{a}},p}$;}} \For{$i = 1, 2, \ldots, {M_{\rm A}}$}{\For{$j = i+1, i+2, \ldots, {M_{\rm A}}$ } { \If{$D^{ij}==2l_{\rm a}$}{Maximize the Hamming distance $d$ between two different labels in $\boldsymbol{\xi}_{\lambda}$; }} } ${\Psi }_{\rm A} \leftarrow {\Psi }_{{\rm A},{\rm Label}}$.\\ Remove the MPPM symbols in ${\Psi }_{\rm A}$ from $\Phi$, i.e., $\Phi$ = $\Phi /{\Psi }_{\rm A}$.\\ \For{$\eta = 1, 2, \ldots, \binom{M_{\rm max} - M_{\rm A}}{M_{\rm B}}$}{Generate a sub-constellation ${\Psi }_{{\rm B}, \eta}$ and calculate $D_{{\rm b},\eta}$;\\ \If{$D_{{\rm{b}},\eta} > D_{{\rm b}}$}{Label $\leftarrow \eta$; $D_{{\rm b}} \leftarrow D_{{\rm{b}},\eta}$;}} \For{$i = 1, 2, \ldots, {M_{\rm B}}$}{ \For{$j = i+1, i+2, \ldots, {M_{\rm B}}$ }{\If{$D^{ij}==2l_{\rm a}$}{Maximize the Hamming distance $d$ between two different labels in $\boldsymbol{\zeta}_{\mu}$;}}} ${\Psi }_{\rm B} \leftarrow {\Psi }_{{\rm B},{\rm Label}}$.\\ {\textbf{Finalization}: Output ${\Psi }_{\rm A}$, ${\Psi }_{\rm B}$.} \end{algorithm} \item \textbf{\emph{ADM Constellation Formulation}}: (i) The parameters ${N_{\rm add}}$, ${M_{\rm A}}$ and ${{M}_{\rm B}}$ of an ADM constellation are determined by {\em Algorithm \ref{alg:1}}. Considering an ADM GSMPPM constellation scheme with $M_{\rm s}$ MPPM symbols, there exist $M_{\rm s}$ labels in the corresponding constellation mapper, and each label $\xi_{\alpha} = [\xi_{\alpha}^{1},\xi_{\alpha}^{2},\ldots,\xi_{\alpha}^{m}]$ consists of $m$ labeling bits, where $\alpha = 1,2,\ldots,M_{\rm s}$, $\xi_{\alpha}^{m} \in \{0,1\}$, and $\xi_{\alpha}$ is the binary representation of the index value $(\alpha - 1)$.
To conveniently determine the MPPM symbols in the sub-constellation set ${\Psi }_{\rm A}$, we first divide the $2^{m_{\rm t}}M_{\rm A}$ labels into $2^{m_{\rm t}}$ label subsets. The index values corresponding to the labels in the $\lambda$th label subset $\boldsymbol{\xi}_{\lambda}$ within ${\Psi }_{\rm A}$ belong to the interval $[\lambda M, (M_{\rm A}-1)+\lambda M]$ (see Table~\ref{tab:ADM-constellations-5bits}), and the remaining labels correspond to the MPPM symbols in the sub-constellation set ${\Psi}_{\rm B}$, where $\lambda = 0, 1, \ldots,2^{m_{\rm t}}-1$. Here, we take the $\lambda$th label subset $\boldsymbol{\xi}_{\lambda}$ as an example and select the corresponding sub-constellation set ${\Psi}_{\rm A}$. Specifically, ${M_{\rm A}}$ MPPM symbols must be selected from the full-MPPM symbol constellation set $\Phi$ of size $M_{\max}$ to constitute an MPPM sub-constellation set ${{\Psi }_{{\rm A}_{p}}}$, where $p = 1,2,\ldots, \binom{M_{\rm max}}{M_{\rm A}}$. The $i$th MPPM sub-constellation symbol (i.e., ${\Psi }_{{{\rm A}_p},i} = [{\psi }_{{{\rm A}_{p}},i}^{1},{\psi }_{{{\rm A}_p},i}^{2},\ldots,{\psi }_{{{\rm A}_p},i}^{l}]$) can be marked by the $i$th label (i.e., ${{\xi}}_{\lambda,i} = [ {{\xi}}_{\lambda,i}^{1},{\xi}_{\lambda,i}^{2},\ldots ,{\xi}_{\lambda,i}^{m}]$) in $\boldsymbol{\xi}_{\lambda}$, where $i = 1,2,\ldots,M_{\rm A}$. Further, the Hamming distance between two arbitrary different MPPM symbols (e.g., ${\Psi }_{{{\rm A}_{p}},i}$ and ${\Psi }_{{{\rm A}_{p}},j}$) is defined as $D^{ij}$. One can calculate the average Hamming distance $D_{{\rm A}_{p},i} = \frac{1}{M_{\rm A}-1}\sum_{j = 1,i \neq j}^{M_{\rm A}} {D^{ij}}$ between the $i$th MPPM symbol ${{\Psi }_{{\rm A}_{p},i}}$ and the remaining $M_{\rm A}-1$ MPPM symbols. Then, the average Hamming distance of the sub-constellation ${\Psi}_{{\rm A}_{p}}$ can be obtained as $D_{{\rm{a}},p} = {\frac{1}{M_{\rm A}} \sum_{i=1}^{M_{\rm A}}{D_{{{\rm A}_p},i}} }$.
Assuming that $D_{{\rm{a}},p}$ attains the maximum average Hamming distance $D_{\rm a}$, we select the sub-constellation ${{\Psi }_{{\rm A}_{p}}}$ for the next operation. For the sub-constellation set ${{\Psi }_{{\rm A}_{p}}}$, we relabel each MPPM symbol based on the maximum Hamming distance criterion. To be specific, when $D^{ij}=2l_{\rm a}$ (i.e., the largest Hamming distance), we maximize the Hamming distance $d$ between the two labels corresponding to the two different MPPM symbols ${\Psi }_{{{\rm A}_{p}},i}$ and ${\Psi }_{{{\rm A}_{p}},j}$. If the case of $d = m$ does not exist, we consider $d = m-1,m-2,\ldots,1$ in turn. Based on the above operations, we can obtain a sub-constellation set ${\Psi }_{\rm A}$.\vspace{2mm} (ii) In order to decrease the interference between GSMPPM symbols in two different activated antenna groups, the same MPPM symbol cannot exist in both ${\Psi }_{\rm A}$ and ${\Psi }_{\rm B}$. Thus, we should remove all MPPM symbols belonging to ${\Psi }_{\rm A}$ from $\Phi$ (i.e., $\widebar{\Phi}_{\rm A} = \Phi /{\Psi }_{\rm A}$). Given a spectral efficiency $\rho$, the additional activated antenna groups will cause the truncation of the effective activated antenna groups corresponding to the signal-domain constellations. For example, when $N_{\rm t}=4$, $N_{\rm a}=2$, and $M_{\rm A}=6$, the two MPPM symbols corresponding to the labels $[00110]$ and $[00111]$ cannot combine with the effective activated antenna groups to form two GSMPPM symbols; they can only form two GSMPPM symbols with the additional activated antenna groups $(1,3)$ and $(2,4)$, respectively. Therefore, we reclassify the remaining $M_{\rm r}$ labels into $N_{\rm add}$ label subsets in a sequential order, where $M_{\rm r} = 2^{m_{\rm t}}(M - M_{\rm A})$.
When $M-M_{\rm A}\le N_{\rm add}$, the index value corresponding to the $\beta$th label in the $\mu$th label subset $\boldsymbol{\zeta}_{\mu}$ is $(\beta-1)M + M_{\rm A}+\mu$, where $\mu = 0,1,\ldots,N_{\rm add}-1$, $\beta = 1,2,\ldots,M_{\rm B}$ (see Table~\ref{tab:ADM-constellations-5bits}). Otherwise (i.e., $M-M_{\rm A} > N_{\rm add}$), the index values corresponding to the labels are divided evenly into $N_{\rm add}$ subsets in a sequential order (see Table~\ref{tab:ADM-constellations-6bits}). Especially, each label in a label subset $\boldsymbol{\zeta}_{\mu}$ corresponds to an MPPM symbol in the sub-constellation ${\Psi }_{\rm B}$\footnote{The selected sub-constellation sets ${\Psi}_{\rm A}$ and ${\Psi}_{\rm B}$ are also applicable to the other label subsets $\boldsymbol{\xi}_{\lambda}$ and $\boldsymbol{\zeta}_{\mu}$, respectively.}. Taking the label subset $\boldsymbol{\zeta}_{\mu}$ as an example, we select $M_{\rm B}$ MPPM symbols from the set $\widebar{\Phi}_{\rm A}$ to form a sub-constellation ${\Psi_{{\rm B},\eta}}$, where $\eta= 1,2,\ldots, \binom{M_{\rm max} - M_{\rm A}}{M_{\rm B}}$. Likewise, we calculate the average Hamming distance of the sub-constellation ${\Psi}_{{\rm B}_{\eta}}$, which can be obtained as $D_{{\rm{b}},\eta} = {\frac{1}{M_{\rm B}(M_{\rm B}-1)} \sum_{i=1}^{M_{\rm B}} \sum_{j=1, i \ne j}^{M_{\rm B}} {D^{ij}}} $. Assuming that $D_{{\rm b},\eta}$ attains the maximum average Hamming distance $D_{\rm b}$, we select the sub-constellation ${{\Psi }_{{\rm B}_{\eta}}}$ for the next operation. Then, when $D^{ij}=2l_{\rm a}$ (i.e., the largest Hamming distance), we maximize the Hamming distance $d$ between the two labels corresponding to the two MPPM symbols ${\Psi }_{{{\rm B}_{\eta}},i}$ and ${\Psi }_{{{\rm B}_{\eta}},j}$. Finally, the sub-constellation set ${\Psi }_{\rm B}$ can be obtained by the above operations. \end{enumerate} Based on the above two steps, a type of ADM constellations can be formulated.
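The parameter-selection step (Algorithm~\ref{alg:1}) can be sketched in a few lines of Python. This is a simplified, illustrative re-implementation assumed from the pseudocode, not the authors' code; for the $\rho = 5$ bpcu example ($N_{\rm t}=4$, $N_{\rm a}=2$, $l=5$, $l_{\rm a}=2$, $M_{\rm s}=32$) it returns $M_{\rm A}=6$, $M_{\rm B}=4$ and $N_{\rm add}=2$, matching Table~\ref{tab:ADM-constellations-5bits}.

```python
from math import ceil, comb, floor, log2

def adm_params(N_t, N_a, l, l_a, M_s):
    """Sketch of Algorithm 1: pick (M_A, M_B, N_add) for an ADM constellation.

    Simplified re-implementation assumed from the paper's pseudocode; returns
    the sub-constellation sizes for the effective (M_A) and additional (M_B)
    antenna groups and the number of additional groups N_add.
    """
    N_s = comb(N_t, N_a)             # all possible activated antenna groups
    m_t = floor(log2(N_s))           # bits selecting the effective groups
    M_max = comb(l, l_a)             # full MPPM symbol count
    N_add0 = N_s - 2 ** m_t          # candidate additional groups
    i = 0
    while True:                      # increase M_A until Eq. (6) is satisfied
        N_add = N_add0
        while N_add > 0:
            M_A = ceil(M_s / N_s) + i
            M_B = ceil((M_s - 2 ** m_t * M_A) / N_add)
            if M_A > M_B and M_A + M_B <= M_max:
                return M_A, M_B, N_add
            N_add -= 1               # otherwise try fewer additional groups
        i += 1                       # reset N_add and enlarge M_A

# rho = 5 bpcu example: N_t = 4, N_a = 2, l = 5, l_a = 2, M_s = 32.
print(adm_params(4, 2, 5, 2, 32))   # (6, 4, 2)
```

For the $\rho = 6$ bpcu case with $l=7$, $l_{\rm a}=2$ and $M_{\rm s}=64$, the same sketch yields $M_{\rm A}=11$ and $M_{\rm B}=10$, saturating the bound $M_{\rm A}+M_{\rm B}\le M_{\max}=21$.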
To illustrate further, the design method for ADM constellations is summarized in \emph{Algorithm \ref{alg:2}}. For example, with the aid of \emph{Algorithm \ref{alg:2}}, the ADM constellations with spectral efficiencies $\rho = 5$ bits per channel use (bpcu) and $\rho = 6$ bpcu are shown in Tables~\ref{tab:ADM-constellations-5bits} and \ref{tab:ADM-constellations-6bits}, respectively. \begin{table*}[t] \tiny \centering\vspace{-2mm} \caption{Mapping relationship among labels, MPPM symbols and activated antenna groups for the proposed ADM constellations, where ${N}_{\rm{t}}=4$, ${N}_{\rm{r}}=4$, ${N}_{\rm{a}}=2$, and $\rho = 5$ bpcu (i.e., $l = 5$, $l_{\rm a} = 2$ and $l = 6$, $l_{\rm a} = 2$).} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Effective activated &Index&Label &${\Psi }_{\rm A}$ &${\Psi }_{\rm A}$ &Effective activated & Index &Label &${\Psi }_{\rm A}$&${\Psi }_{\rm A}$\\ antenna group &value&$\boldsymbol{\xi}$ &$(l=5)$ &$(l=6)$ &antenna group &value&$\boldsymbol{\xi}$ &$(l=5)$ &$(l=6)$\\ \hline\hline &$0$ &$0\ 0\ 0\ 0\ 0$ &$1\ 0\ 1\ 0\ 0$ &$1\ 1\ 0\ 0\ 0\ 0$ & &$8$ &$0\ 1\ 0\ 0\ 0$ &$1\ 0\ 1\ 0\ 0$ &$1\ 1\ 0\ 0\ 0\ 0$\\ &$1$ &$0\ 0\ 0\ 0\ 1$ &$0\ 1\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 1$ & &$9$ &$0\ 1\ 0\ 0\ 1$ &$0\ 1\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 1$\\ $(1, 2)$ &$2$ &$0\ 0\ 0\ 1\ 0$ &$1\ 0\ 0\ 1\ 0$ &$0\ 1\ 0\ 1\ 0\ 0$ &$(3, 4)$&$10$ &$0\ 1\ 0\ 1\ 0$ &$1\ 0\ 0\ 1\ 0$ &$0\ 1\ 0\ 1\ 0\ 0$\\ &$3$ &$0\ 0\ 0\ 1\ 1$ &$0\ 0\ 1\ 1\ 0$ &$0\ 0\ 1\ 1\ 0\ 0$ & &$11$ &$0\ 1\ 0\ 1\ 1$ &$0\ 0\ 1\ 1\ 0$ &$0\ 0\ 1\ 1\ 0\ 0$\\ &$4$ &$0\ 0\ 1\ 0\ 0$ &$1\ 0\ 0\ 0\ 1$ &$1\ 0\ 0\ 0\ 1\ 0$ & &$12$ &$0\ 1\ 1\ 0\ 0$ &$1\ 0\ 0\ 0\ 1$ &$1\ 0\ 0\ 0\ 1\ 0$\\ &$5$ &$0\ 0\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 1$ &$0\ 0\ 0\ 0\ 1\ 1$ & &$13$ &$0\ 1\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 1$ &$0\ 0\ 0\ 0\ 1\ 1$\\ \hline &$16$ &$1\ 0\ 0\ 0\ 0$ &$1\ 0\ 1\ 0\ 0$ &$1\ 1\ 0\ 0\ 0\ 0$ & &$24$ &$1\ 1\ 0\ 0\ 0$ &$1\ 0\ 1\ 0\ 0$ &$1\ 1\ 0\ 0\ 0\ 0$ \\ &$17$ &$1\ 0\ 0\ 0\ 1$ &$0\ 1\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 1$ & &$25$ &$1\ 1\ 0\ 0\ 1$
&$0\ 1\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 1$ \\ $(1, 4)$ &$18$ &$1\ 0\ 0\ 1\ 0$ &$1\ 0\ 0\ 1\ 0$ &$0\ 1\ 0\ 1\ 0\ 0$ &$(2, 3)$&$26$ &$1\ 1\ 0\ 1\ 0$ &$1\ 0\ 0\ 1\ 0$ &$0\ 1\ 0\ 1\ 0\ 0$ \\ &$19$ &$1\ 0\ 0\ 1\ 1$ &$0\ 0\ 1\ 1\ 0$ &$0\ 0\ 1\ 1\ 0\ 0$ & &$27$ &$1\ 1\ 0\ 1\ 1$ &$0\ 0\ 1\ 1\ 0$ &$0\ 0\ 1\ 1\ 0\ 0$ \\ &$20$ &$1\ 0\ 1\ 0\ 0$ &$1\ 0\ 0\ 0\ 1$ &$1\ 0\ 0\ 0\ 1\ 0$ & &$28$ &$1\ 1\ 1\ 0\ 0$ &$1\ 0\ 0\ 0\ 1$ &$1\ 0\ 0\ 0\ 1\ 0$ \\ &$21$ &$1\ 0\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 1$ &$0\ 0\ 0\ 0\ 1\ 1$ & &$29$ &$1\ 1\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 1$ &$0\ 0\ 0\ 0\ 1\ 1$ \\ \hline Additional activated &Index&Label &${\Psi }_{\rm B}$ &${\Psi }_{\rm B}$ &Additional activated&Index&Label &${\Psi }_{\rm B}$&${\Psi }_{\rm B}$\\ antenna group &value&$\boldsymbol{\zeta}$ &$(l=5)$ &$(l=6)$ &antenna group &value&$\boldsymbol{\zeta}$ &$(l=5)$ &$(l=6)$\\ \hline\hline &$6$ &$0\ 0\ 1\ 1\ 0$ &$1\ 1\ 0\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 1$ & &$7$ &$0\ 0\ 1\ 1\ 1$ &$1\ 1\ 0\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 1$ \\ $(1, 3)$ &$14$ &$0\ 1\ 1\ 1\ 0$ &$0\ 1\ 0\ 1\ 0$ &$0\ 0\ 1\ 0\ 1\ 0$ &$(2, 4)$&$15$ &$0\ 1\ 1\ 1\ 1$ &$0\ 1\ 0\ 1\ 0$ &$0\ 0\ 1\ 0\ 1\ 0$ \\ &$22$ &$1\ 0\ 1\ 1\ 0$ &$0\ 0\ 1\ 0\ 1$ &$0\ 0\ 0\ 1\ 0\ 1$ & &$23$ &$1\ 0\ 1\ 1\ 1$ &$0\ 0\ 1\ 0\ 1$ &$0\ 0\ 0\ 1\ 0\ 1$ \\ &$30$ &$1\ 1\ 1\ 1\ 0$ &$0\ 0\ 0\ 1\ 1$ &$0\ 0\ 0\ 1\ 1\ 0$ & &$31$ &$1\ 1\ 1\ 1\ 1$ &$0\ 0\ 0\ 1\ 1$ &$0\ 0\ 0\ 1\ 1\ 0$ \\ \hline \end{tabular}\label{tab:ADM-constellations-5bits} \end{table*} \begin{table*}[!t] \tiny \centering\vspace{-2mm} \caption{Mapping relationship among labels, MPPM symbols and activated antenna groups for the proposed ADM constellations, where ${N}_{\rm{t}}=4$, ${N}_{\rm{r}}=4$, ${N}_{\rm{a}}=2$, and $\rho = 6$ bpcu (i.e., $l = 7$, $l_{\rm a} = 2$ and $l = 8$, $l_{\rm a} = 2$).} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Effective activated &Index&Label &${\Psi }_{\rm A}$ &${\Psi }_{\rm A}$ &Effective activated&Index&Label &${\Psi }_{\rm A}$&${\Psi }_{\rm A}$\\ antenna group &value&$\boldsymbol{\xi}$ &$(l=7)$ &$(l=8)$ &antenna
group &value&$\boldsymbol{\xi}$ &$(l=7)$ &$(l=8)$\\ \hline\hline &$0$&$0\ 0\ 0\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 1\ 1$ &$0\ 0\ 1\ 0\ 0\ 1\ 0\ 0$ & &$16$&$0\ 1\ 0\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 1\ 1$ &$0\ 0\ 1\ 0\ 0\ 1\ 0\ 0$\\ &$1$&$0\ 0\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 1\ 0\ 0\ 0\ 0$ & &$17$&$0\ 1\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 1\ 0\ 0\ 0\ 0$\\ &$2$&$0\ 0\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1\ 0$ & &$18$&$0\ 1\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1\ 0$\\ &$3$&$0\ 0\ 0\ 0\ 1\ 1$ &$1\ 0\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 1\ 0$ & &$19$&$0\ 1\ 0\ 0\ 1\ 1$ &$1\ 0\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 1\ 0$\\ &$4$&$0\ 0\ 0\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0\ 0$ & &$20$&$0\ 1\ 0\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0\ 0$\\ $(1, 2)$ &$5$&$0\ 0\ 0\ 1\ 0\ 1$ &$0\ 1\ 1\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 1\ 0\ 0\ 0\ 0$ &$(3, 4)$ &$21$&$0\ 1\ 0\ 1\ 0\ 1$ &$0\ 1\ 1\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 1\ 0\ 0\ 0\ 0$ \\ &$6$&$0\ 0\ 0\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 1\ 0\ 0$ & &$22$&$0\ 1\ 0\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 1\ 0\ 0$\\ &$7$&$0\ 0\ 0\ 1\ 1\ 1$ &$1\ 1\ 0\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 0\ 1\ 0\ 0\ 0$ & &$23$&$0\ 1\ 0\ 1\ 1\ 1$ &$1\ 1\ 0\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 0\ 1\ 0\ 0\ 0$\\ &$8$&$0\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 0\ 0\ 0\ 1$ & &$24$&$0\ 1\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 0\ 0\ 0\ 1$\\ &$9$&$0\ 0\ 1\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 0\ 0\ 1$ & &$25$&$0\ 1\ 1\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 0\ 0\ 1$\\ &$10$&$0\ 0\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 1\ 0\ 0$&$0\ 1\ 0\ 0\ 0\ 0\ 1\ 0$ & &$26$&$0\ 1\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 0\ 1\ 0$ \\ \hline &$32$&$1\ 0\ 0\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 1\ 1$ &$0\ 0\ 1\ 0\ 0\ 1\ 0\ 0$ & &$48$ &$1\ 1\ 0\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 1\ 1$ &$0\ 0\ 1\ 0\ 0\ 1\ 0\ 0$\\ &$33$&$1\ 0\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 1\ 0\ 0\ 0\ 0$ & &$49$&$1\ 1\ 0\ 0\ 
0\ 1$ &$0\ 0\ 1\ 0\ 0\ 0\ 1$ &$0\ 0\ 1\ 1\ 0\ 0\ 0\ 0$\\ &$34$&$1\ 0\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1\ 0$ & &$50$&$1\ 1\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1\ 0$\\ &$35$&$1\ 0\ 0\ 0\ 1\ 1$ &$1\ 0\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 1\ 0$ & &$51$&$1\ 1\ 0\ 0\ 1\ 1$ &$1\ 0\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 1\ 0$\\ &$36$&$1\ 0\ 0\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0\ 0$ & &$52$&$1\ 1\ 0\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0\ 0$\\ $(1, 4)$ &$37$&$1\ 0\ 0\ 1\ 0\ 1$ &$0\ 1\ 1\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 1\ 0\ 0\ 0\ 0$ &$(2, 3)$ &$53$&$1\ 1\ 0\ 1\ 0\ 1$ &$0\ 1\ 1\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 1\ 0\ 0\ 0\ 0$\\ &$38$&$1\ 0\ 0\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 1\ 0\ 0$ & &$54$&$1\ 1\ 0\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 1\ 0$ &$0\ 0\ 0\ 0\ 1\ 1\ 0\ 0$\\ &$39$&$1\ 0\ 0\ 1\ 1\ 1$ &$1\ 1\ 0\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 0\ 1\ 0\ 0\ 0$ & &$55$&$1\ 1\ 0\ 1\ 1\ 1$ &$1\ 1\ 0\ 0\ 0\ 0\ 0$ &$1\ 0\ 0\ 0\ 1\ 0\ 0\ 0$\\ &$40$&$1\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 0\ 0\ 0\ 1$ & &$56$&$1\ 1\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 0\ 1\ 0\ 1$ &$0\ 1\ 0\ 0\ 0\ 0\ 0\ 1$\\ &$41$&$1\ 0\ 1\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 0\ 0\ 1$ & &$57$&$1\ 1\ 1\ 0\ 0\ 1$ &$0\ 0\ 1\ 0\ 1\ 0\ 0$ &$0\ 0\ 1\ 0\ 0\ 0\ 0\ 1$\\ &$42$&$1\ 0\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 0\ 1\ 0$ & &$58$&$1\ 1\ 1\ 0\ 1\ 0$ &$0\ 0\ 0\ 1\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 0\ 1\ 0$\\ \hline Additional activated &Index&Label &${\Psi }_{\rm B}$ &${\Psi }_{\rm B}$ &Additional activated&Index&Label &${\Psi }_{\rm B}$&${\Psi }_{\rm B}$\\ antenna group &value&$\boldsymbol{\zeta}$ &$(l=7)$ &$(l=8)$ &antenna group &value&$\boldsymbol{\zeta}$ &$(l=7)$ &$(l=8)$\\ \hline\hline &$11$&$0\ 0\ 1\ 0\ 1\ 1$ &$0\ 0\ 1\ 0\ 0\ 1\ 0$ &$1\ 0\ 1\ 0\ 0\ 0\ 0\ 0$ & &$43$&$1\ 0\ 1\ 0\ 1\ 1$ &$0\ 0\ 1\ 0\ 0\ 1\ 0$ &$1\ 0\ 1\ 0\ 0\ 0\ 0\ 0$ \\ &$12$&$0\ 0\ 1\ 1\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 1$ &$0\ 1\ 0\ 1\ 0\ 0\ 0\ 0$ & &$44$&$1\ 0\ 1\ 1\ 0\ 0$ &$0\ 0\ 0\ 1\ 
0\ 0\ 1$ &$0\ 1\ 0\ 1\ 0\ 0\ 0\ 0$ \\ &$13$&$0\ 0\ 1\ 1\ 0\ 1$ &$0\ 0\ 1\ 1\ 0\ 0\ 0$ &$0\ 1\ 0\ 0\ 1\ 0\ 0\ 0$ & &$45$&$1\ 0\ 1\ 1\ 0\ 1$ &$0\ 0\ 1\ 1\ 0\ 0\ 0$ &$0\ 1\ 0\ 0\ 1\ 0\ 0\ 0$ \\ &$14$&$0\ 0\ 1\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 0\ 1$ &$0\ 1\ 1\ 0\ 0\ 0\ 0\ 0$ & &$46$&$1\ 0\ 1\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 0\ 1$ &$0\ 1\ 1\ 0\ 0\ 0\ 0\ 0$ \\ $(1, 3)$ &$15$&$0\ 0\ 1\ 1\ 1\ 1$ &$1\ 0\ 1\ 0\ 0\ 0\ 0$ &$0\ 0\ 1\ 0\ 1\ 0\ 0\ 0$ &$(2, 4)$&$47$&$1\ 0\ 1\ 1\ 1\ 1$ &$1\ 0\ 1\ 0\ 0\ 0\ 0$ &$0\ 0\ 1\ 0\ 1\ 0\ 0\ 0$ \\ &$27$&$0\ 1\ 1\ 0\ 1\ 1$ &$0\ 0\ 0\ 0\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 0\ 1\ 0$ & &$59$&$1\ 1\ 1\ 0\ 1\ 1$ &$0\ 0\ 0\ 0\ 1\ 1\ 0$ &$1\ 0\ 0\ 0\ 0\ 0\ 1\ 0$ \\ &$28$&$0\ 1\ 1\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 0\ 1$ &$0\ 0\ 0\ 1\ 0\ 1\ 0\ 0$ & &$60$&$1\ 1\ 1\ 1\ 0\ 0$ &$0\ 1\ 0\ 0\ 0\ 0\ 1$ &$0\ 0\ 0\ 1\ 0\ 1\ 0\ 0$ \\ &$29$&$0\ 1\ 1\ 1\ 0\ 1$ &$0\ 1\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 0\ 1$ & &$61$&$1\ 1\ 1\ 1\ 0\ 1$ &$0\ 1\ 0\ 1\ 0\ 0\ 0$ &$0\ 0\ 0\ 1\ 0\ 0\ 0\ 1$ \\ &$30$&$0\ 1\ 1\ 1\ 1\ 0$ &$0\ 1\ 0\ 0\ 1\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 1\ 0\ 1$ & &$62$&$1\ 1\ 1\ 1\ 1\ 0$ &$0\ 1\ 0\ 0\ 1\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 1\ 0\ 1$ \\ &$31$&$0\ 1\ 1\ 1\ 1\ 1$ &$1\ 0\ 0\ 0\ 1\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 0\ 1\ 1$ & &$63$&$1\ 1\ 1\ 1\ 1\ 1$ &$1\ 0\ 0\ 0\ 1\ 0\ 0$ &$0\ 0\ 0\ 0\ 0\ 0\ 1\ 1$ \\ \hline \end{tabular}\label{tab:ADM-constellations-6bits} \end{table*} \vspace{-2mm} \begin{figure}[t] \centering \subfigure[\hspace{-0.6cm}]{\label{fig:Capacity-5slots} \includegraphics[width=3.2in,height=2.56in]{GSMPPM_5slots.eps}} \subfigure[\hspace{-0.1cm}]{\label{fig:Capacity-6slots} \hspace{-0.5cm}\includegraphics[width=3.2in,height=2.56in]{GSMPPM_6slots.eps}} \vspace{-0.2cm} \caption{Capacities of the PLDPC-coded GSMPPM systems with the proposed ADM constellation, the optimized constellation, natural constellation, MCSS constellation, and GLS constellation: (a) $(4,4,2,5,2,32)$-GSMPPM with $\rho=5$ bpcu; (b) $(4,4,2,6,2,32)$-GSMPPM with $\rho=5$ bpcu.}\vspace{-4mm} \label{fig:Capacity-5bits} \end{figure} 
\begin{figure}[t] \centering \subfigure[\hspace{-0.6cm}]{\label{fig:Capacity-7slots} \includegraphics[width=3.2in,height=2.56in]{GSMPPM_7slots.eps}} \subfigure[\hspace{-0.1cm}]{\label{fig:Capacity-8slots} \hspace{-0.5cm}\includegraphics[width=3.2in,height=2.56in]{GSMPPM_8slots.eps}} \vspace{-0.2cm} \caption{Capacities of the PLDPC-coded GSMPPM systems with the proposed ADM constellation, the optimized constellation, natural constellation, MCSS constellation, and GLS constellation: (a) $(4,4,2,7,2,64)$-GSMPPM with $\rho=6$ bpcu; (b) $(4,4,2,8,2,64)$-GSMPPM with $\rho=6$ bpcu.}\vspace{-4mm} \label{fig:Capacity-6bits} \end{figure} \subsection{Performance Analysis}\label{subsec:Capacity} To verify the effectiveness of the proposed ADM constellations, the CM and BICM capacities of different GSMPPM constellations are calculated using the constellation-constrained capacity analysis method in Section~\ref{subsec:AMI}. For the PLDPC-coded GSMPPM systems, the constellation-constrained capacities of the proposed ADM constellations, the optimized constellations\footnote{The optimized constellations are special cases of the proposed ADM constellations, which can be obtained by \emph{Algorithm \ref{alg:2}} under the assumption of $N_{\rm add} = 0$, $M_{\rm A} = M$ and $M_{\rm B} = 0$.}, the natural constellation \cite{liu2009improved}, the MCSS constellation \cite{7833038}, and the GLS constellation \cite{5439306} with $\rho=5$ bpcu and $\rho=6$ bpcu are illustrated in Fig.~\ref{fig:Capacity-5bits} and Fig.~\ref{fig:Capacity-6bits}, respectively, where ${N_{\rm{t}}} = 4$, ${N_{\rm{r}}} = 4$, ${N_{\rm{a}}} = 2$ and ${{\sigma}_{\rm{x}}} = 0.3$. For the $(4,4,2,5,2,32)$ GSMPPM scheme with $\rho=5$ bpcu, one can see in Fig.~\ref{fig:Capacity-5slots} that the proposed ADM constellation and the optimized constellation achieve much larger capacities than the other three constellations when the code rate $R > 0.35$ (i.e., $R={{{\mathcal{C}}_{\rm{BICM}}}}/{m}$).
In particular, the proposed ADM constellation is closest to the CM capacity. In Fig.~\ref{fig:Capacity-6slots}, similar results can be observed for the $(4, 4, 2, 6, 2, 32)$ GSMPPM scheme. Likewise, the same phenomenon can be observed when the spectral efficiency is $\rho=6$ bpcu (see Fig.~\ref{fig:Capacity-6bits}). Therefore, it can be concluded that the proposed ADM constellations outperform the existing counterparts in the PLDPC-coded GSMPPM systems. To further verify the advantage of our proposed ADM constellations, the decoding thresholds of a rate-$1/2$ accumulate-repeat-$4$-jagged-accumulate (AR$4$JA) code \cite{7112076} in the PLDPC-coded GSMPPM systems are analyzed by utilizing the PEXIT algorithm \cite{9367298,9519519}. In addition, the optimized, natural, GLS and MCSS constellations are used as benchmarks. Note that the transmitted codeword length is assumed to be $4500$ and the maximum number of BP iterations $t_{\rm{BP}}$ is set to $100$. As can be seen from Table~\ref{tab:thresholds-mapping}, in the cases of the $(4,4,2,5,2,32)$ and $(4, 4, 2, 6, 2, 32)$ GSMPPM schemes (i.e., $\rho=5$ bpcu), the decoding thresholds of the AR$4$JA code with the proposed ADM constellation and the optimized constellation are smaller than those with the other three existing constellations. When the spectral efficiency is $\rho = 6$ bpcu (i.e., the $(4, 4, 2, 7, 2, 64)$ and $(4, 4, 2, 8, 2, 64)$ GSMPPM schemes), the proposed ADM constellations still achieve excellent error performance. Importantly, the decoding thresholds of the proposed ADM constellations are the lowest, which indicates that they are the best choice for the PLDPC-coded GSMPPM systems.
\begin{table*}[t] \center\vspace{-1.5mm} \caption{Decoding thresholds (in ${\rm dB}$) of the AR$4$JA code in the GSMPPM systems with the proposed ADM constellation, the optimized constellation, natural constellation, MCSS constellation, and GLS constellation over a weak turbulence channel, where ${N}_{\rm{t}}=4$, ${N}_{\rm{r}}=4$, ${N}_{\rm{a}}=2$, and the modulation patterns are $(4,4,2,5,2,32)$, $(4,4,2,6,2,32)$, $(4,4,2,7,2,64)$ and $(4,4,2,8,2,64)$.} \begin{tabular}{|c|c|c|c|c|} \hline \backslashbox{Constellation}{Modulation Pattern} &$(4,4,2,5,2,32)$ &$(4,4,2,6,2,32)$ &$(4,4,2,7,2,64)$ &$(4,4,2,8,2,64)$ \\ \hline\hline Natural &$-3.0734$ &$-3.1872$ &$-3.6949$ &$-3.7976$ \\ \hline MCSS &$-2.9364$ &$-3.1692$ &$-3.6861$ &$-3.8987$ \\ \hline GLS &$-3.2572$ &$-3.3928$ &$-4.0969$ &$-4.1966$ \\ \hline Optimized &$-3.4875$ &$-3.7466$ &$-4.2324$ &$-4.3361$ \\ \hline ADM &$-3.6942$ &$-3.8542$ &$-4.3475$ &$-4.4732$ \\ \hline \end{tabular}\label{tab:thresholds-mapping} \end{table*} \begin{table*}[t] \center \caption{Decoding thresholds (in ${\rm dB}$) of the AR$4$JA code, regular-$(3,6)$ code, the optimized code-B and the proposed IM-PLDPC code in the GSMPPM systems over a weak turbulence channel, where ${N}_{\rm{t}}=4$, ${N}_{\rm{r}}=4$, ${N}_{\rm{a}}=2$, and the modulation patterns are $(4,4,2,5,2,32)$, $(4,4,2,6,2,32)$, $(4,4,2,7,2,64)$ and $(4,4,2,8,2,64)$.} \begin{tabular}{|c|c|c|c|c|} \hline \backslashbox{Code Type}{Modulation Pattern} &$(4,4,2,5,2,32)$ &$(4,4,2,6,2,32)$ &$(4,4,2,7,2,64)$ &$(4,4,2,8,2,64)$ \\ \hline\hline Regular &$-3.3336$ &$-3.4893$ &$-3.8719$ &$-4.1558$ \\ \hline AR$4$JA &$-3.6942$ &$-3.7542$ &$-4.3475$ &$-4.4732$ \\ \hline Code-B &$-2.7952$ &$-2.9986$ &$-3.5719$ &$-3.8209$ \\ \hline IM-PLDPC &$-3.7918$ &$-3.8892$ &$-4.4693$ &$-4.5256$ \\ \hline \end{tabular}\label{tab:thresholds-code} \vspace{-4mm} \end{table*} \section{Design and Analysis of PLDPC Codes for ADM-aided GSMPPM System}\label{sec:Design_PLDPC} \subsection{Proposed IM-PLDPC
Code}\label{subsec:PLDPC} A PLDPC code is represented by a Tanner graph, which consists of several small sets of check nodes (CNs), variable nodes (VNs), and edges \cite{7112076,thorpe2003low}. The VNs and CNs are connected by their associated edges. In a protograph, parallel edges are allowed. In addition, a protograph with code rate $R={\left( {{p}_{\rm v}}-{{p}_{\rm c}} \right)}/{{{p}_{\rm v}}}$ can be represented by a base matrix ${{\bB}_{\rm{o}}}=\left( {{b}_{i,j}} \right)$ of size ${{p}_{\rm c}}\times {{p}_{\rm v}}$, where ${{b}_{i,j}}$ denotes the number of edges connecting CN ${{c}_{i}}$ and VN ${{v}_{j}}$. A large protograph (resp. parity-check matrix) of size ${{P}_{\rm c}}\times {{P}_{\rm v}}$ corresponding to the PLDPC code can be constructed by performing a lifting operation on a given protograph (resp. base matrix), where ${{P}_{\rm c}} = T{{p}_{\rm c}}$, ${{P}_{\rm v}} = T{{p}_{\rm v}}$ and $T$ denotes a lifting factor. Typically, the lifting operation (i.e., copy and permute) can be implemented by a modified progressive-edge-growth (PEG) algorithm \cite{van2012design}. \begin{table}[t] \center\vspace{-1.5mm} \caption{TMDRs of the rate-$1/2$ AR$4$JA code, regular-$(3,6)$ code, code-B and the proposed IM-PLDPC code.} \begin{tabular}{|c|c|c|c|c|} \hline {Code Type} &IM-PLDPC &AR$4$JA &Regular &Code-B\\ \hline TMDR &0.007 &0.014 &0.023 &N.A.\\ \hline \end{tabular}\label{tab:AWD}\vspace{-4mm} \end{table} It is well known that a PLDPC code may exhibit different BER performance in different communication scenarios. It is therefore essential to tailor the PLDPC-code design to the specific communication scenario. In the PLDPC-code design, a pre-coding structure and a certain proportion of degree-$2$ VNs can improve the decoding threshold \cite{6266764, 7112076}.
Nevertheless, as the typical minimum distance ratio (TMDR) \cite{5174517} is extremely susceptible to degree-$2$ VNs, a finite-length code with excessive degree-$2$ VNs always exhibits an error floor in the high SNR region. For instance, the accumulate-repeat-$3$-accumulate (AR$3$A) code \cite{7112076} and the AR$4$JA code possess two degree-$2$ VNs and one degree-$2$ VN, respectively. According to the analyses, the AR$3$A code does not possess a TMDR and has an error floor in the high SNR region. Conversely, the AR$4$JA code benefits from a TMDR and does not have any error floor in the high SNR region \cite{7112076}. As such, some constraints should be imposed on the protograph at the initial stage of the PLDPC-code design, as follows. \begin{enumerate}[1)] \item A pre-coding structure: the pre-coding structure includes a CN and two VNs. Specifically, the CN connects only a degree-$1$ VN and a highest-degree punctured VN. \item An appropriate proportion of degree-$2$ VNs: if the number of CNs is ${{p}_{\rm c}}$, the number of degree-$2$ VNs must satisfy $1\le {{p}_{{\rm v},2}}\le {{{p}_{\rm c}}}/{2}$. Otherwise, a TMDR cannot be guaranteed. \item Low complexity: to ensure low encoding/decoding complexity, the number of parallel edges connecting CN ${{c}_{i}}$ and VN ${{v}_{j}}$ is limited to at most $3$ (i.e., ${{b}_{i,j}}\in \left\{0,1,2,3 \right\}$). To further reduce the search space, an additional constraint is imposed, i.e., $b_{1,5}+b_{2,5}+b_{3,5}+b_{4,5} = b_{1,6}+b_{2,6}+b_{3,6}+b_{4,6} = b_{1,7}+b_{2,7}+b_{3,7}+b_{4,7} > 2$. \end{enumerate} Taking into account the three constraints discussed above, we design a new PLDPC code with excellent performance in the PLDPC-coded GSMPPM system. Specifically, to reduce the computational complexity of the search, we consider a rate-$1/2$ PLDPC code and a $4\times 7$ base matrix containing $28$ elements.
The corresponding initial base matrix $\bB_{\rm o}$ can be expressed as \begin{equation}\label{base-matrix} \bB_{\rm o}= \begin{bmatrix} \begin{array}{ccccccc} 1 &\ 0 &\ 0 &\ b_{1,4} &\ b_{1,5} &\ b_{1,6} &\ b_{1,7} \\ 0 &\ 1 &\ 1 &\ b_{2,4} &\ b_{2,5} &\ b_{2,6} &\ b_{2,7} \\ 0 &\ 0 &\ 1 &\ b_{3,4} &\ b_{3,5} &\ b_{3,6} &\ b_{3,7} \\ 0 &\ 1 &\ 0 &\ b_{4,4} &\ b_{4,5} &\ b_{4,6} &\ b_{4,7} \\ \end{array} \end{bmatrix}. \end{equation} After a simple search with the PEXIT algorithm \cite{9367298,9519519}, one can obtain the rate-$1/2$ improved PLDPC code, referred to as the {\em IM-PLDPC code}, which achieves the lowest decoding threshold and an effective TMDR. The base matrix ${\bB}_{\rm IM}$ is represented as \begin{equation}\label{enhanced-PLDPC} {\bB}_{\rm IM}= \begin{bmatrix} \begin{array}{ccccccc} 1 &\ 0 &\ 0 &\ 2 &\ 0 &\ 0 &\ 0 \\ 0 &\ 1 &\ 1 &\ 3 &\ 1 &\ 1 &\ 0 \\ 0 &\ 0 &\ 1 &\ 1 &\ 2 &\ 2 &\ 1 \\ 0 &\ 1 &\ 0 &\ 2 &\ 0 &\ 0 &\ 2 \\ \end{array} \end{bmatrix}, \end{equation} where the fourth column denotes a punctured VN and the total number of edges is $22$. \subsection{Performance Analysis}\label{subsec:PLDPC-analysis} Referring to Table~\ref{tab:thresholds-code}, we compare the decoding thresholds of the rate-$1/2$ IM-PLDPC code with those of three other existing codes (i.e., the AR$4$JA code \cite{7112076}, the regular-$(3,6)$ code \cite{4448365}, and the optimized code-B \cite{6663748}) in the GSMPPM system over a weak turbulence channel. It is revealed that the IM-PLDPC code achieves gains of about $0.1$ dB, $0.45$ dB and $0.99$ dB over the AR$4$JA code, the regular-$(3,6)$ code and code-B, respectively, in the $(4, 4, 2, 5, 2, 32)$ GSMPPM scheme. Similar results can be obtained for the $(4, 4, 2, 6, 2, 32)$ GSMPPM scheme. Moreover, when the spectral efficiency is $\rho = 6$ bpcu (i.e., the $(4, 4, 2, 7, 2, 64)$ and $(4, 4, 2, 8, 2, 64)$ GSMPPM schemes), the decoding thresholds of the IM-PLDPC code are lower than those of the other counterparts.
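The structural properties of ${\bB}_{\rm IM}$ quoted above — the degree-$2$ VN proportion, the parallel-edge limit, the equal column sums greater than $2$ for the last three VNs, the total of $22$ edges, and the rate of $1/2$ once the fourth VN is punctured — can be verified with a short script. The snippet below is our own illustrative check, not part of the PEXIT-based search:

```python
# Base matrix B_IM from the equation above (rows = CNs, columns = VNs).
B_IM = [[1, 0, 0, 2, 0, 0, 0],
        [0, 1, 1, 3, 1, 1, 0],
        [0, 0, 1, 1, 2, 2, 1],
        [0, 1, 0, 2, 0, 0, 2]]

p_c, p_v = len(B_IM), len(B_IM[0])
vn_degrees = [sum(row[j] for row in B_IM) for j in range(p_v)]

# Constraint 2): the number of degree-2 VNs must lie in [1, p_c/2].
n_deg2 = vn_degrees.count(2)
assert 1 <= n_deg2 <= p_c / 2

# Constraint 3): at most 3 parallel edges; equal column sums > 2 for VNs 5-7.
assert max(max(row) for row in B_IM) <= 3
assert vn_degrees[4] == vn_degrees[5] == vn_degrees[6] > 2

# Edge count quoted in the text, and rate 1/2 with the fourth VN punctured.
assert sum(vn_degrees) == 22
rate = (p_v - p_c) / (p_v - 1)   # (7 - 4) / 6 = 0.5
print(n_deg2, vn_degrees, rate)  # 2 [1, 2, 2, 8, 3, 3, 3] 0.5
```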
Furthermore, we measure the TMDRs of the IM-PLDPC code, the AR$4$JA code, the regular-$(3,6)$ code and code-B by utilizing the asymptotic weight distribution (AWD) function \cite{5174517}. Referring to Table \ref{tab:AWD}, we observe that the IM-PLDPC code, the AR$4$JA code and the regular-$(3,6)$ code possess effective TMDRs, while code-B does not have a TMDR. This implies that the IM-PLDPC code enjoys the linear-minimum-distance-growth property and does not suffer from an error floor in the high SNR region. Based on the above analyses, we conclude that the IM-PLDPC code has desirable error performance in both the low and high SNR regions in the GSMPPM systems. \section{Simulation Results}\label{sec:simulation} In this section, we provide simulations of the PLDPC-coded GSMPPM systems with the proposed ADM constellations, the optimized constellations and three existing constellations (i.e., the natural \cite{liu2009improved}, GLS \cite{5439306}, and MCSS \cite{7833038} constellations) over weak turbulence channels. We also compare the bit error rates (BERs) of the proposed IM-PLDPC code, the AR$4$JA code \cite{7112076}, the regular-$(3,6)$ code \cite{4448365}, and the optimized code-B \cite{6663748} in such scenarios. Unless otherwise mentioned, we assume that the transmitted codeword length is $s = 4500$ and the maximum number of BP iterations $t_{\rm{BP}}$ is $100$.
\begin{figure}[t] \centering \subfigure[\hspace{-0.7cm}]{\label{fig:BER-5slots} \includegraphics[width=3.2in,height=2.56in]{GSSK_5slots.eps}} \subfigure[\hspace{-0.5cm}]{\label{fig:BER-6slots} \hspace{-0.2cm}\includegraphics[width=3.2in,height=2.56in]{GSSK_6slots.eps}} \vspace{-0.2cm} \caption{BER curves of the AR$4$JA-coded GSMPPM systems with the proposed ADM constellation, the optimized constellation, natural constellation, MCSS constellation, and GLS constellation: (a) $(4,4,2,5,2,32)$ GSMPPM with $\rho=5$ bpcu; (b) $(4,4,2,6,2,32)$ GSMPPM with $\rho=5$ bpcu.}\vspace{-4mm} \label{fig:BER-5bits} \end{figure} \begin{figure}[!t] \centering \subfigure[\hspace{-0.7cm}]{\label{fig:BER-7slots} \includegraphics[width=3.2in,height=2.56in]{GSSK_7slots.eps}} \subfigure[\hspace{-0.5cm}]{\label{fig:BER-8slots} \hspace{-0.2cm}\includegraphics[width=3.2in,height=2.56in]{GSSK_8slots.eps}} \vspace{-0.2cm} \caption{BER curves of the AR$4$JA-coded GSMPPM systems with the proposed ADM constellation, the optimized constellation, natural constellation, MCSS constellation, and GLS constellation: (a) $(4,4,2,7,2,64)$ GSMPPM with $\rho=6$ bpcu; (b) $(4,4,2,8,2,64)$ GSMPPM with $\rho=6$ bpcu.}\vspace{-4mm} \label{fig:BER-6bits} \end{figure} \subsection{BER Performance of Different GSMPPM Constellations}\label{subsec:mapping-anal} In Fig.~\ref{fig:BER-5bits}, we consider the AR$4$JA-coded GSMPPM systems with the spectral efficiency $\rho = 5$ bpcu. As seen from Fig.~\ref{fig:BER-5slots}, the AR$4$JA code with the proposed ADM constellation exhibits better performance than the other four constellations in the $(4, 4, 2, 5, 2, 32)$ GSMPPM scheme. Specifically, at a BER of $10^{-5}$, the proposed ADM constellation achieves gains of about $0.4$ dB, $0.6$ dB, $0.9$ dB and $0.8$ dB over the optimized, GLS, MCSS and natural constellations, respectively.
As illustrated in Fig.~\ref{fig:BER-6slots}, similar performance can be observed for the $(4, 4, 2, 6, 2, 32)$ GSMPPM scheme. When the spectral efficiency is $\rho = 6$ bpcu, Fig.~\ref{fig:BER-6bits} shows the BER curves of the AR$4$JA-coded GSMPPM systems with the five different constellations. In Fig.~\ref{fig:BER-7slots}, the proposed ADM constellation requires $-2.62$ dB to obtain a BER of $10^{-5}$, while the natural, GLS, MCSS and optimized constellations require $-1.85$ dB, $-1.81$ dB, $-2.15$ dB and $-2.36$ dB, respectively. Likewise, Fig.~\ref{fig:BER-8slots} also shows that the proposed ADM constellation requires the smallest SNR to achieve a BER of $10^{-5}$ in the $(4, 4, 2, 8, 2, 64)$ GSMPPM scheme. In addition, the simulation results are consistent with the decoding-threshold analysis in Section~\ref{subsec:Capacity} (see Table~\ref{tab:thresholds-mapping}). Therefore, we can conclude that the proposed ADM constellations are well suited to the PLDPC-coded GSMPPM systems. \subsection{BER Performance of Different PLDPC Codes}\label{subsec:PLDPC-anal} Fig.~\ref{fig:code-5bits} shows the BER curves of the proposed IM-PLDPC code, the regular-$(3, 6)$ code \cite{4448365}, the AR$4$JA code \cite{7112076}, and the optimized code-B \cite{6663748} in the GSMPPM systems with the spectral efficiency $\rho = 5$ bpcu. In Fig.~\ref{fig:code-5slots}, we can see that at a BER of $10^{-5}$, the IM-PLDPC code obtains gains of about $0.4$ dB, $0.5$ dB and $0.8$ dB over the AR$4$JA code, the regular-$(3,6)$ code and code-B, respectively, in the $(4, 4, 2, 5, 2, 32)$ GSMPPM scheme. Similar BER simulation results can also be observed in the case of the $(4, 4, 2, 6, 2, 32)$ GSMPPM scheme (see Fig.~\ref{fig:code-6slots}). On the other hand, when the spectral efficiency is $\rho = 6$ bpcu, Fig.~\ref{fig:code-6bits} shows that the IM-PLDPC code does not exhibit any error floor in the high SNR region.
Specifically, Fig.~\ref{fig:code-7slots} shows that the IM-PLDPC code needs about $-2.95$ dB to obtain a BER of $10^{-5}$ in the $(4, 4, 2, 7, 2, 64)$ GSMPPM scheme, while the regular-$(3, 6)$ code, the AR$4$JA code and code-B require $-2.52$ dB, $-2.62$ dB and $-2.15$ dB, respectively, to do so. In the $(4, 4, 2, 8, 2, 64)$ GSMPPM scheme, the IM-PLDPC code also requires the lowest SNR to achieve a BER of $10^{-5}$, as shown in Fig.~\ref{fig:code-8slots}. Based on the above discussion, the IM-PLDPC code is a promising candidate for the PLDPC-coded GSMPPM systems. \begin{figure}[t] \centering \subfigure[\hspace{-0.76cm}]{\label{fig:code-5slots} \includegraphics[width=3.2in,height=2.56in]{code_5slots.eps}} \subfigure[\hspace{-0.5cm}]{\label{fig:code-6slots} \hspace{-0.2cm}\includegraphics[width=3.2in,height=2.56in]{code_6slots.eps}} \vspace{-0.2cm} \caption{BER curves of the IM-PLDPC code, AR$4$JA code, regular-$(3,6)$ code and code-B with the proposed ADM constellations in GSMPPM systems: (a) $(4,4,2,5,2,32)$ GSMPPM with $\rho = 5$ bpcu; (b) $(4,4,2,6,2,32)$ GSMPPM with $\rho = 5$ bpcu.}\vspace{-4mm} \label{fig:code-5bits} \end{figure} \begin{figure}[t] \centering \subfigure[\hspace{-0.7cm}]{\label{fig:code-7slots} \includegraphics[width=3.2in,height=2.56in]{code_7slots.eps}} \subfigure[\hspace{-0.5cm}]{\label{fig:code-8slots} \hspace{-0.2cm}\includegraphics[width=3.2in,height=2.56in]{code_8slots.eps}} \vspace{-0.2cm} \caption{BER curves of the IM-PLDPC code, AR$4$JA code, regular-$(3,6)$ code and code-B with the proposed ADM constellations in GSMPPM systems: (a) $(4,4,2,7,2,64)$ GSMPPM with $\rho = 6$ bpcu; (b) $(4,4,2,8,2,64)$ GSMPPM with $\rho = 6$ bpcu.}\vspace{-4mm} \label{fig:code-6bits} \end{figure} {\em Remark:} We have also carried out analyses and simulations with other parameter settings (i.e., different values of $N_{\rm t}$, $N_{\rm r}$, $N_{\rm a}$, $l$, $l_{\rm a}$, and $\sigma_{\rm x}$) and obtained similar observations, which further demonstrate the superiority of
our proposed constellations and code design. \section{Conclusion}\label{sec:conclusions} This paper investigated the performance of PLDPC-coded GSMPPM systems over weak turbulence channels. We proposed a family of novel GSMPPM constellations, called ADM constellations, which achieve desirable capacity and convergence performance in this scenario. Furthermore, we constructed an improved PLDPC code, the IM-PLDPC code, using a PEXIT-aided computer search, which possesses a desirable decoding threshold and an effective TMDR. Theoretical analyses and simulation results indicate that the PLDPC-coded GSMPPM system using the proposed ADM constellations and the IM-PLDPC code exhibits noticeable performance gains over the state-of-the-art counterparts. Given these appealing advantages, the proposed PLDPC-coded GSMPPM transmission scheme stands out as a competitive alternative for high-reliability FSO applications.
\section{Introduction} \label{sec1} Hadronic jets are observed in large momentum-transfer interactions. They are theoretically interpreted to arise when partons -- quarks ($q$) and gluons ($g$) -- are emitted in collision events of subatomic particles. Partons then evolve into hadronic jets in a two-step process. The first can be described by perturbation theory and gives rise to a parton shower, the second is non-perturbative and is responsible for the hadronisation. The internal structure of a jet is expected to depend primarily on the type of parton it originated from, with some residual dependence on the quark production and fragmentation process. For instance, due to the different colour factors in $ggg$ and $qqg$ vertices, gluons lead to more parton radiation and therefore gluon-initiated jets are expected to be broader than quark-initiated jets.\\ For jets defined using cone or $k_t$ algorithms \cite{cone, kt}, jet shapes, i.e. the normalised transverse momentum flow as a function of the distance to the jet axis \cite{ellis}, have been traditionally used as a means of understanding the evolution of partons into hadrons in $e^+ e^-$, $ep$ and hadron colliders \cite{cdf01,d0,lep,hera2,h1,chek,lhc,cms}. It is experimentally observed that jets in $e^+ e^-$ and $ep$ are narrower than those observed in $p\bar{p}$ and $pp$ collisions and this is interpreted as a result of the different admixtures of quark and gluon jets present in these different types of interactions \cite{kellis}. Furthermore, at high momentum transfer, where fragmentation effects are less relevant, jet shapes have been found to be in qualitative agreement with next-to-leading-order (NLO) QCD predictions and in quantitative agreement with those including leading logarithm corrections \cite{leadLog}. 
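The quantity underlying these measurements can be made concrete: the integrated jet shape $\Psi(r)$ is the fraction of the jet transverse momentum contained within a subcone of radius $r$ around the jet axis. A minimal Python sketch of this standard definition follows (a generic illustration only, not ATLAS analysis code; the function and variable names are ours):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in eta-phi space, with the phi difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def integrated_jet_shape(constituents, r, jet_eta=0.0, jet_phi=0.0):
    """Psi(r): fraction of the jet pT inside a cone of radius r about the axis.

    constituents: iterable of (pT, eta, phi) tuples.
    """
    total = sum(pt for pt, _, _ in constituents)
    inside = sum(pt for pt, eta, phi in constituents
                 if delta_r(eta, phi, jet_eta, jet_phi) < r)
    return inside / total

# A toy jet: most of the pT close to the axis, softer particles further out.
jet = [(50.0, 0.05, 0.00), (30.0, 0.20, 0.10), (20.0, 0.35, -0.30)]
print([integrated_jet_shape(jet, r) for r in (0.1, 0.3, 0.5)])  # [0.5, 0.8, 1.0]
```

Broader jets (such as gluon- or $b$-initiated ones) rise towards $\Psi = 1$ more slowly with $r$ than narrow quark jets, which is the behaviour the measurements discussed here quantify.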
Jet shapes have also been proposed as a tool for studies of substructure or in searches for new phenomena in final states with highly boosted particles \cite{boost1,boost2,boost3,gluinos}.\\ Due to the mass of the $b$-quark, jets originating from a $b$-quark (hereafter called $b$-jets) are expected to be broader than light-quark jets, including charm jets, hereafter called light jets. This expectation is supported by observations by the CDF collaboration in Ref. \cite{CDF}, where a comparison is presented between jet shapes in a $b$-jet enriched sample with a purity of roughly 25\% and an inclusive sample where no distinction is made between the flavours.\\ This paper presents the first measurement of $b$-jet shapes in top pair events. The $t\bar{t}$ final states are a source of $b$-jets, as the top quark decays almost exclusively via $t\rightarrow \Wboson b$. While the dilepton channel, where both $\Wboson$ bosons decay to leptons, is a very pure source of $b$-jets, the single-lepton channel contains $b$-jets and light jets, the latter originating from the dominant $\Wboson^+ \rightarrow u\bar{d}, c\bar{s}$ decays and their charge conjugates. A comparison of the light- and $b$-jet shapes measured in the $t\bar{t}$ decays improves the CDF measurement discussed above, as the jet purity achieved using $t\bar{t}$ events is much higher. In addition, these measurements could be used to improve the modelling of jets in $t\bar{t}$ production Monte Carlo (MC) models in a new kinematic regime.\\ This paper is organised as follows. In Sect. \ref{sec2} the ATLAS detector is described, while Sect. \ref{sec3} is dedicated to the MC samples used in the analysis. In Sects. \ref{sec4} and \ref{sec5}, the physics object and event selection for both the dilepton and single-lepton $t\bar{t}$ samples is presented. Section \ref{sec6} is devoted to the description of both the $b$-jet and light-jet samples obtained in the single-lepton final state. 
The differential and the integrated shape distributions of these jets are derived in Sect. \ref{sec7}. In Sect. \ref{sec8} the results on the average values of the jet shape variables at the detector level are presented, including those for the $b$-jets in the dilepton channel. Results corrected for detector effects are presented in Sect. \ref{sec9}. In Sect. \ref{sec10} the systematic uncertainties are discussed, and Sect. \ref{sec11} contains a discussion of the results. Finally, Sect. \ref{sec12} includes the summary and conclusions. \section{The ATLAS detector} \label{sec2} The ATLAS detector \cite{detector} is a multi-purpose particle physics detector with a forward-backward symmetric cylindrical geometry \footnote{ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta=-\ln\tan(\theta/2)$.} and a solid angle coverage of almost $4\pi$.\\ The inner tracking system covers the pseudorapidity range $|\eta|< 2.5$, and consists of a silicon pixel detector, a silicon microstrip detector, and, for $|\eta|<2.0$, a transition radiation tracker. The inner detector (ID) is surrounded by a thin superconducting solenoid providing a 2 $\mathrm{T}$ magnetic field along the beam direction. A high-granularity liquid-argon sampling electromagnetic calorimeter covers the region $|\eta|<3.2$. An iron/scintillator tile hadronic calorimeter provides coverage in the range $|\eta|<1.7$. The endcap and forward regions, spanning $1.5<|\eta|<4.9$, are instrumented with liquid-argon calorimeters for electromagnetic and hadronic measurements. The muon spectrometer surrounds the calorimeters. 
It consists of three large air-core superconducting toroid systems and separate trigger and high-precision tracking chambers providing accurate muon tracking for $|\eta|<2.7$.\\ The trigger system \cite{atlasTrigger} has three consecutive levels: level 1 (L1), level 2 (L2) and the event filter (EF). The L1 triggers are hardware-based and use coarse detector information to identify regions of interest, whereas the L2 triggers are based on fast software-based online data reconstruction algorithms. Finally, the EF triggers use offline data reconstruction algorithms. For this analysis, the relevant triggers select events with at least one electron or muon. \section{Monte Carlo Samples} \label{sec3} Monte Carlo generators are used in which $t\bar{t}$ production is implemented with matrix elements calculated up to NLO accuracy. The generated events are then passed through a detailed \textsc{Geant4} simulation \cite{geant1,geant2} of the ATLAS detector. The baseline MC samples used here are produced with the \textsc{MC@NLO} \cite{mcnlo} or \textsc{Powheg} \cite{powheg} generators for the matrix element calculation; the parton shower and hadronisation processes are implemented with \textsc{Herwig} \cite{Herwig} using the cluster hadronisation model \cite{cluster} and \textsc{CTEQ6.6} \cite{pdf1} parton distribution functions (PDFs). Multi-parton interactions are simulated using \textsc{Jimmy} \cite{Jimmy} with the AUET1 tune \cite{auet1}. This MC generator package has been used for the description of the $t\bar{t}$ final states for ATLAS measurements of the cross section \cite{xsec1l,xsec2l} and studies of the kinematics \cite{spinCorr}. \\ Additional MC samples are used to check the hadronisation model dependence of the jet shapes. They are based on \textsc{Powheg+Pythia} \cite{powheg,pythia}, with the \textsc{MRST2007LO*} PDFs \cite{pdf2}. 
The \textsc{AcerMC} generator \cite{acer} interfaced to \textsc{Pythia} with the \textsc{Perugia 2010} tune \cite{Perugia} for parton showering and hadronisation is also used for comparison. Here the parton showers are ordered by transverse momentum and the hadronisation proceeds through the Lund string fragmentation scheme \cite{lund}. The underlying event and other soft effects are simulated by \textsc{Pythia} with the AMBT1 tune \cite{ambt1}. Comparisons of different event generators show that jet shapes in top-quark decays have little sensitivity to initial-state radiation effects, different PDF choices or underlying-event effects. They are more sensitive to details of the parton shower and the fragmentation scheme.\\ Samples of events including $\Wboson$ and $\Zboson$ bosons produced in association with light- and heavy-flavour jets are generated using the \textsc{Alpgen} \cite{alpgen} generator with the \textsc{CTEQ6L} PDFs \cite{pdf3}, and interfaced with \textsc{Herwig} and \textsc{Jimmy}. The same generator is used for the diboson backgrounds, $\Wboson\Wboson$, $\Wboson\Zboson$ and $\Zboson\Zboson$, while \textsc{MC@NLO} is used for the simulation of the single-top backgrounds, including the $\mathrm{t}$- and $\mathrm{s}$-channels as well as the $\Wboson t$-channel.\\ The MC-simulated samples are normalised to the corresponding cross sections. The $t\bar{t}$ signal is normalised to the cross section calculated at approximate next-to-next-to-leading order (NNLO) using the \textsc{Hathor} package \cite{HATHOR}, while for the single-top production cross section, the calculations in Refs. \cite{kidonakis1,kidonakis2,kidonakis3} are used. The $\Wboson$+jets and $\Zboson$+jets cross sections are taken from \textsc{Alpgen} \cite{alpgen} with additional NNLO $K$-factors as given in Ref. \cite{fewz}.\\ The simulated events are weighted such that the distribution of the number of interactions per bunch crossing in the simulated samples matches that of the data.
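As an illustration, normalising an MC sample to its cross section amounts to a per-event weight $\sigma\mathcal{L}/N_{\mathrm{gen}}$; the following is a minimal sketch with purely hypothetical numbers, not taken from the analysis:

```python
# Minimal sketch of normalising an MC sample to its cross section
# (illustrative numbers only, not taken from the analysis): a per-event
# weight sigma * L / N_gen scales the sample to the data luminosity.

def mc_event_weight(cross_section_pb, int_lumi_invpb, n_generated):
    """Per-event weight such that the weighted yield equals sigma * L."""
    return cross_section_pb * int_lumi_invpb / n_generated

# Hypothetical values: sigma = 165 pb, L = 1800 pb^-1, 10^6 generated events
w = mc_event_weight(165.0, 1800.0, 1_000_000)
print(round(w * 1_000_000))  # 297000
```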
Finally, additional correction factors are applied to take into account the different object efficiencies in data and simulation. The scale factors used for these corrections typically differ from unity by 1\% for electrons and muons, and by a few percent for $b$-tagging. \section{Physics object selection} \label{sec4} Electron candidates are reconstructed from energy deposits in the calorimeter that are associated with tracks reconstructed in the ID. The candidates must pass a tight selection \cite{elec}, which uses calorimeter and tracking variables as well as transition radiation for $|\eta| < 2.0$, and are required to have transverse momentum $p_{\mathrm{T}} > 25 \GeV$ and $|\eta| < 2.47$. Electrons in the transition region between the barrel and endcap calorimeters, $1.37 < |\eta| < 1.52$, are not considered.\\ Muon candidates are reconstructed by searching for track segments in different layers of the muon spectrometer. These segments are combined and matched with tracks found in the ID. The candidates are refitted using the complete track information from both detector systems and are required to have a good fit and to satisfy $p_{\mathrm{T}} > 20 \GeV$ and $|\eta| < 2.5$.\\ Electron and muon candidates are required to be isolated to reduce backgrounds arising from jets and to suppress the selection of leptons from heavy-flavour semileptonic decays. For electron candidates, the transverse energy deposited in the calorimeter that is not associated with the electron itself ($E^{\mathrm{iso}}_{\mathrm{T}}$) is summed in a cone in $\eta-\phi$ space of radius \footnote{The radius in the $\eta-\phi$ space is defined as $\Delta R = \sqrt{(\Delta\eta)^2+(\Delta\phi)^2}$.} $\Delta R = 0.2$ around the electron. The $E^{\mathrm{iso}}_{\mathrm{T}}$ value is required to be less than $3.5 \GeV$.
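The calorimeter isolation described above can be sketched as follows; this is a minimal illustration with assumed inputs, not the ATLAS reconstruction:

```python
import math

# Minimal sketch of the electron calorimeter isolation (assumed inputs,
# not the ATLAS reconstruction): sum the transverse energy of deposits in
# a cone of Delta R = 0.2 around the electron, excluding the electron
# itself, and require E_T^iso < 3.5 GeV.

def delta_r(eta1, phi1, eta2, phi2):
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                     # wrap the azimuthal difference
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(electron, deposits, cone=0.2, cut=3.5):
    """electron: (eta, phi); deposits: (E_T, eta, phi) triplets, already
    excluding the energy associated with the electron itself."""
    et_iso = sum(et for et, eta, phi in deposits
                 if delta_r(electron[0], electron[1], eta, phi) < cone)
    return et_iso < cut
```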
For muon candidates, both the corresponding calorimeter isolation $E^{\mathrm{iso}}_{\mathrm{T}}$ and the analogous track isolation transverse momentum ($p^{\mathrm{iso}}_{\mathrm{T}}$) must be less than $4 \GeV$ in a cone of $\Delta R = 0.3$. The track isolation is calculated from the scalar sum of the transverse momenta of tracks with $p_{\mathrm{T}} > 1 \GeV$, excluding the muon.\\ Muon candidates arising from cosmic rays are rejected by removing candidate pairs that are back-to-back in the transverse plane and that have transverse impact parameter relative to the beam axis $|d_0| > 0.5$ mm.\\ Jets are reconstructed with the anti-$k_{t}$ algorithm \cite{jets,fastjet} with radius parameter $R = 0.4$. This choice for the radius has been used in measurements of the top-quark mass \cite{topmass} and also in multi-jet cross-section measurements \cite{atlasJets}. The inputs to the jet algorithm are topological clusters of calorimeter cells. These clusters are seeded by calorimeter cells with energy $|E_{\mathrm{cell}}| > 4 \sigma$ , where $\sigma$ is the cell-by-cell RMS of the noise (electronics plus pileup). Neighbouring cells are added if $|E_{\mathrm{cell}}| > 2 \sigma$ and clusters are formed through an iterative procedure \cite{lampl}. In a final step, all remaining neighbouring cells are added to the cluster.\\ The baseline calibration for these clusters calculates their energy using the electromagnetic energy scale \cite{JES}. This is established using test-beam measurements for electrons and muons in the electromagnetic and hadronic calorimeters \cite{lampl,aleksa,aharouche}. Effects due to the differing response to electromagnetic and hadronic showers, energy losses in the dead material, shower leakage, as well as inefficiencies in energy clustering and jet reconstruction are also taken into account. 
This is done by matching calorimeter jets with MC particle jets in bins of $\eta$ and $E$, and supplemented by in situ calibration methods such as jet momentum imbalance in $\Zboson / \gamma^{*}$ + 1 jet events. This is called the Jet Energy Scale (JES) calibration, thoroughly discussed in Ref. \cite{JES}. The JES uncertainty contains an extra term for $b$-quark jets, as the jet response is different for $b$-jets and light jets because they have different particle composition. References \cite{atlasJets} and \cite{atlasBxsec} contain more details on the JES and a discussion of its uncertainties.\\ Jets that overlap with a selected electron are removed if they are closer than $\Delta R = 0.2$, while if a jet is closer than $\Delta R = 0.4$ to a muon, the muon is removed.\\ The primary vertex is defined as the $pp$ interaction vertex with the largest $\sum_{i} p_{\mathrm{T}i}^2$, where the sum runs over the tracks with $\pt > 150 \MeV$ associated with the vertex.\\ Jets are identified as candidates for having originated from a $b$-quark ($b$-tagged) by an algorithm based on a neural-network approach, as discussed in Sect. \ref{sec6}.\\ The reconstruction of the direction and magnitude ($\met{}$) of the missing transverse momentum is described in Ref. \cite{etmiss} and begins with the vector sum of the transverse momenta of all jets with $p_{\mathrm{T}} > 20 \GeV$ and $|\eta| < 4.5$. The transverse momenta of electron candidates are added. The contributions from all muon candidates and from all calorimeter clusters not belonging to a reconstructed object are also included. \section{Event selection} \label{sec5} Two samples of events are selected: a dilepton sample, where both $\Wboson$ bosons decay to leptons ($e$, $\mu$, including leptonic $\tau$ decays), and a single-lepton sample, where one $\Wboson$ boson decays to leptons and the other to a $q\bar{q'}$ pair, giving rise to two more jets (see Fig. \ref{fig:feyn}). The selection criteria follow those in Ref. 
\cite{xsec1l} for the single-lepton sample and Ref. \cite{xsec2l} for the dilepton sample. Events are triggered by inclusive high-$p_{\mathrm{T}}$ electron or muon EF triggers. The trigger thresholds are $18\GeV$ for muons and $20\GeV$ for electrons. The dataset used for the analysis corresponds to the first half of the data collected in 2011, with a centre-of-mass energy $\sqrt{s} = 7 \TeV$ and an integrated luminosity of $1.8\ \ifb$. This data-taking period is characterised by an instantaneous luminosity smaller than $1.5\times 10^{33}$ cm$^{-2}$ s$^{-1}$, for which the mean number of interactions per bunch crossing is less than six. To reject the non-collision background, the primary vertex is required to have at least four tracks, each with $p_{\mathrm{T}} > 150 \MeV$, associated with it. Pile-up effects are therefore small and have been taken into account as a systematic uncertainty. \begin{figure*} \centering \includegraphics[width=6.8cm,height=4.15cm]{fig1a.pdf} \hspace{0.8cm} \includegraphics[width=6.8cm,height=4.15cm]{fig1b.pdf} \caption{Example LO Feynman diagrams for $gg\rightarrow t\bar{t}$ in the dilepton (left) and single-lepton (right) decay modes.} \label{fig:feyn} \end{figure*} \subsection{Dilepton sample} In the dilepton sample, events are required to have two charged leptons and $\met{}$ from the leptonic $\Wboson$-boson decays to a neutrino and an electron or muon. The offline lepton selection requires two isolated leptons ($e$ or $\mu$) with opposite charge and with transverse momenta $p_{\mathrm{T}}(e) > 25 \GeV$, where $p_{\mathrm{T}}(e) = E_{\mbox{\small{cluster}}}\sin(\theta_{\mbox{\small{track}}})$, $E_{\mbox{\small{cluster}}}$ being the cluster energy and $\theta_{\mbox{\small{track}}}$ the track polar angle, and $p_{\mathrm{T}}(\mu) > 20 \GeV$. 
At least one of the selected leptons has to match the corresponding trigger object.\\ Events are further filtered by requiring at least two jets with $\pt > 25 \GeV$ and $\left|\eta\right| < 2.5$ in the event. In addition, at least one of the selected jets has to be tagged as a $b$-jet, as discussed in the next section. The whole event is rejected if a jet is identified as an out-of-time signal or as noise in the calorimeter.\\ The missing transverse momentum requirement is $\met{} > 60 \GeV$ for the $ee$ and $\mu\mu$ channels. For the $e\mu$ channel, $H_{\mathrm{T}}$ is required to be greater than 130 \GeV, where $H_{\mathrm{T}}$ is the scalar sum of the $p_{\mathrm{T}}$ of all muons, electrons and jets. To reject the Drell--Yan lepton pair background in the $ee$ and $\mu\mu$ channels, the lepton pair is required to have an invariant mass $m_{\ell\ell}$ greater than 15 $\GeV$ and to lie outside of a $\Zboson$-boson mass window, rejecting all events where the two-lepton invariant mass satisfies $\left|m_{\ell\ell}-m_Z\right| < 10 \GeV$.\\ The selected sample consists of 95\% $t\bar{t}$ events, but also backgrounds from the final states $\Wboson$ + jets and $\Zboson$ + jets, where the gauge bosons decay to leptons. All backgrounds, with the exception of multi-jet production, have been estimated using MC samples. The multi-jet background has been estimated using the jet--electron method \cite{qcdBkg}. This method relies on the identification of jets which, due to their high electromagnetic energy fraction, can fake electron candidates. The jet--electron method is applied with some modifications to the muon channel as well. The normalisation is estimated using a binned likelihood fit to the $\met{}$ distribution. The results are summarised in Table \ref{table1}. \begin{table} \caption{The expected composition of the dilepton sample. Fractions are relative to the total number of expected events. 
`Other EW' corresponds to the $\Wboson$ + jets and diboson ($\Wboson\Wboson$, $\Wboson\Zboson$ and $\Zboson\Zboson$) contributions.} \label{table1} \begin{center} \begin{tabular}{ccc} \hline Process & Expected events & Fraction\\ \hline $t\bar{t}$ & 2100 $\pm$ 110 & 94.9\%\\ $\Zboson$ + jets ($\Zboson \rightarrow \ell^{+}\ell^{-}$) & 14 $\pm$ 1 & 0.6\%\\ Other EW (\Wboson, diboson) & 4 $\pm$ 2 & 0.2\%\\ Single top & 95 $\pm$ 2 & 4.3\%\\ Multi-jet & $0^{+2}_{-0}$ & 0.0\%\\ \hline \bf{Total Expected} & 2210 $\pm$ 110 & \\ \hline \bf{Total Observed} & 2067 & \\ \end{tabular} \end{center} \end{table} \subsection{Single-lepton sample} In this case, the event is required to have exactly one isolated lepton with $p_{\mathrm{T}} > 25 \GeV$ for electrons and $p_{\mathrm{T}} > 20 \GeV$ for muons. To account for the neutrino in the leptonic $\Wboson$ decay, $\met{}$ is required to be greater than $35 \GeV$ in the electron channel and greater than $20 \GeV$ in the muon channel. The $\met{}$ resolution is below 10 $\GeV$ \cite{etmiss}. Furthermore, the transverse mass \footnote{The transverse mass is defined as $m_{\mathrm{T}} = \sqrt{2p_{\mathrm{T}}^\ell \met{}(1-\cos\Delta\phi_{\ell\nu})}$, where $\Delta\phi_{\ell\nu}$ is the angle in the transverse plane between the selected lepton and the $\met{}$ direction.} ($m_{\mathrm{T}}$) is required to be greater than $25 \GeV$ in the $e$-channel and to satisfy the condition $\met{}+m_{\mathrm{T}} > 60 \GeV$ in the $\mu$-channel.\\ The jet selection requires at least four jets ($p_{\mathrm{T}} > 25 \GeV$ and $\left|\eta\right| < 2.5$) in the final state, and at least one of them has to be tagged as a $b$-jet. The fraction of $t\bar{t}$ events in the sample is 77\%; the main background contributions for the single-lepton channel have been studied as in the previous case, and are summarised in Table \ref{table2}. As in the dileptonic case, the multi-jet background has been estimated using the jet--electron method.
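The transverse-mass requirements above can be sketched numerically; this is an illustration with assumed inputs, not the analysis code:

```python
import math

# Sketch of the transverse-mass selection quoted above (assumed inputs,
# illustrative only): m_T = sqrt(2 pT^l MET (1 - cos dphi)).

def transverse_mass(pt_lep, met, dphi):
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

def passes_e_channel(pt_lep, met, dphi):
    # e-channel: MET > 35 GeV and m_T > 25 GeV
    return met > 35.0 and transverse_mass(pt_lep, met, dphi) > 25.0

def passes_mu_channel(pt_lep, met, dphi):
    # mu-channel: MET > 20 GeV and MET + m_T > 60 GeV
    return met > 20.0 and met + transverse_mass(pt_lep, met, dphi) > 60.0

# A lepton back-to-back with the MET direction maximises m_T:
print(transverse_mass(40.0, 40.0, math.pi))  # 80.0
```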
\begin{table} \caption{The expected composition of the single-lepton sample. Fractions are relative to the total number of expected events. In this case `Other EW' includes $\Zboson$ + jets and diboson processes.} \label{table2} \begin{center} \begin{tabular}{ccc} \hline Process & Expected events & Fraction\\ \hline $t\bar{t}$ & 14000 $\pm$ 700 & 77.4\%\\ $\Wboson$ + jets ($\Wboson\rightarrow \ell\nu$) & 2310 $\pm$ 280 & 12.8\%\\ Other EW (\Zboson, diboson) & 198 $\pm$ 18 & 1.1\%\\ Single top & 668 $\pm$ 14 & 3.7\%\\ Multi-jet & 900 $\pm$ 450 & 5.0\%\\ \hline \bf{Total Expected} & 18000 $\pm$ 900 &\\ \hline \bf{Total Observed} & 17019 & \\ \end{tabular} \end{center} \end{table} \section{Jet sample definition} \label{sec6} Jets reconstructed in the single-lepton and dilepton samples are now subdivided into $b$-jet and light-jet samples. In order to avoid contributions from non-primary collisions, it is required that the jet vertex fraction (JVF) be greater than 0.75. After summing the scalar $p_{\mathrm{T}}$ of all tracks in a jet, the JVF is defined as the fraction of the total scalar $p_{\mathrm{T}}$ that belongs to tracks originating from the primary vertex. This makes the average jet multiplicity independent of the number of $pp$ interaction vertices. This selection is not applied to jets with no associated tracks. Also, to reduce the impact of pileup on the jets, the $p_{\mathrm{T}}$ threshold has been raised to $30 \GeV$.\\ Jets whose axes are closer than $\Delta R = 0.8$, which is twice the jet radius, to some other jet in the event are not considered. This is done to avoid possible overlaps between the jet cones, which would bias the shape measurement. These configurations are typical of boosted $\Wboson$ bosons, which lead to light jets that are not well separated.
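The jet vertex fraction requirement described above can be sketched as follows; this is a minimal illustration with assumed track inputs, not the ATLAS implementation:

```python
# Minimal sketch of the jet vertex fraction (assumed track inputs): the
# fraction of the scalar track-pT sum carried by tracks from the primary
# vertex. Jets with no associated tracks are exempt from the JVF > 0.75 cut.

def jet_vertex_fraction(track_pts, from_primary_vertex):
    total = sum(track_pts)
    if total == 0.0:           # no associated tracks: selection not applied
        return None
    pv_sum = sum(pt for pt, is_pv
                 in zip(track_pts, from_primary_vertex) if is_pv)
    return pv_sum / total

def passes_jvf(track_pts, from_primary_vertex, cut=0.75):
    jvf = jet_vertex_fraction(track_pts, from_primary_vertex)
    return jvf is None or jvf > cut
```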
The resulting $\Delta R$ distributions for any pair of $b$-jets or light jets are approximately constant between 0.8 and $\pi$ and exhibit an exponential fall-off between $\pi$ and the endpoint of the distribution. \subsection{$b$-jet samples} To select $b$-jets, a neural-network algorithm, which relies on the reconstruction of secondary vertices and impact parameter information in the three spatial dimensions, is used. The reconstruction of the secondary decay vertices makes use of an iterative Kalman-filter algorithm \cite{kalman} which relies on the hypothesis that the $b\to c\to X$ decay chains lie in a straight line originally taken to be parallel to the jet axis. The working point of the algorithm is chosen to maximise the purity of the sample. It corresponds to a $b$-tagging efficiency of 57\% for jets originating from $b$-quarks in simulated $t\bar{t}$ events, and a $u,d,s$-quark jet rejection factor of about 400, as well as a $c$-jet rejection factor of about 10 \cite{btagAtl,btagAtl2}. The resulting number of $b$-jets selected in the dilepton (single-lepton) sample is 2279 (16735). A second working point with a $b$-tagging efficiency of 70\% is also used in order to evaluate the dependence of the measured jet shapes on $b$-tagging.\\ Figure \ref{fig:bsample_ds} shows the $b$-tagged jet transverse momentum distributions for the single-lepton and dilepton channels. \begin{figure} \centering \includegraphics[width=8.5cm,height=6.0cm]{fig2a.pdf} \hspace{0.7cm} \includegraphics[width=8.5cm,height=6.0cm]{fig2b.pdf} \caption{The $p_{\mathrm{T}}$ distributions for $b$-tagged jets in the single-lepton (top) and dilepton (bottom) samples along with the sample composition expectations.} \label{fig:bsample_ds} \end{figure} The $p_{\mathrm{T}}$ distributions for the $b$-jets in both the dilepton and single-lepton samples show a similar behaviour, since they come mainly from top-quark decays. 
This is well described by the MC expectations from the \textsc{MC@NLO} generator coupled to \textsc{Herwig}. In the dilepton sample the signal-to-background ratio is found to be greater than in the single-lepton sample, as shown quantitatively in Tables \ref{table1} and \ref{table2}. \subsection{Light-quark jet sample} The hadronic decays $\Wboson\rightarrow q\bar{q'}$ are a clean source of light-quark jets, as gluons and $b$-jets are highly suppressed; the former because gluons would originate in radiative corrections of order $\mathcal{O}(\alpha_s)$, and the latter because of the smallness of the CKM matrix elements $\Vub$ and $\Vcb$. The light-jet sample is formed from the jet pair in the event whose invariant mass is closest to the $\Wboson$-boson mass. Both jets are also required to be untagged by the $b$-tagging algorithm. The number of jets satisfying these criteria is 7158. Figure \ref{fig:lsampleA} shows the transverse momentum distribution of these jets together with the invariant mass of the dijet system. \begin{figure} \centering \includegraphics[width=8.5cm,height=6.0cm]{fig3a.pdf} \hspace{0.7cm} \includegraphics[width=8.5cm,height=6.0cm]{fig3b.pdf} \caption{The distribution of light-jet $p_{\mathrm{T}}$ (top) and of the invariant mass of light-jet pairs (bottom) along with the sample composition expectations. The latter shows a peak at the $\Wboson$ mass, whose width is determined by the dijet mass resolution.} \label{fig:lsampleA} \end{figure} As expected, the $p_{\mathrm{T}}$ distribution of the light jets from $\Wboson$-boson decays exhibits a stronger fall-off than that for the $b$-jets. This dependence is again well described by the MC simulations in the jet $p_{\mathrm{T}}$ region used in this analysis. Agreement between the invariant mass distributions for observed and simulated events is good, in particular in the region close to the $\Wboson$-boson mass.
\subsection{Jet purities} To estimate the actual number of $b$-jets and light jets in each of the samples, the MC simulation is used by analysing the information at generator level. For $b$-jets, a matching to a $b$-hadron is performed within a radius $\Delta R = 0.3$. For light jets, the jet is required not to have a $b$-hadron within $\Delta R = 0.3$ of the jet axis. Additionally, to distinguish light quarks and $c$-quarks from gluons, the MC parton with highest $p_{\mathrm{T}}$ within the cone of the reconstructed jet is required to be a ($u,d,c$ or $s$)-quark. The purity $p$ is then defined as \begin{eqnarray} p = \sum_{k}\alpha_{k}p_{k}; \ \ \ p_{k} = 1-\frac{N_{\mathrm{f}}^{(k)}}{N_{\mathrm{T}}^{(k)}} \label{eq:pur} \end{eqnarray} where $\alpha_k$ is the fraction of events in the $k$-th MC sample (signal or background), given in Tables \ref{table1} and \ref{table2}, and $N_{\mathrm{f}}^{(k)}$, $N_{\mathrm{T}}^{(k)}$ are the number of fakes (jets not assigned to the correct flavour, e.g. charm jets in the $b$-jet sample), and the total number of jets in a given sample, respectively. The purity in the multi-jet background is determined using \textsc{Pythia} MC samples.\\ In the single-lepton channel, the resulting purity of the $b$-jet sample is $p^{(\mathrm{s})}_b=(88.5 \pm 5.7) \%$, while the purity of the light-jet sample is found to be $p^{(\mathrm{s})}_{\mathrm{l}}= (66.2 \pm 4.1)\%$, as shown in Table \ref{tablePur}. The uncertainty on the purity arises from the uncertainties on the signal and background fractions in each sample. The charm content in the light-jet sample is found to be 16\%, with the remaining 50\% ascribed to $u,d$ and $s$.\\ MC studies indicate that the contamination of the $b$-jet sample is dominated by charm-jet fakes and that the gluon contamination is about $0.7\%$.
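As a numerical check, the weighted purity defined in Eq. (\ref{eq:pur}) can be evaluated directly from the fractions $\alpha_k$ and per-sample purities $p_k$ of Table \ref{tablePur}; the following is an illustrative sketch, not the analysis code:

```python
# Weighted purity p = sum_k alpha_k * p_k for the single-lepton channel,
# using the sample fractions and per-sample purities quoted in the text.

def weighted_purity(fractions, purities):
    return sum(a * p for a, p in zip(fractions, purities))

# Order: ttbar, W+jets, multi-jet, other EW, single top
alpha   = [0.774, 0.128, 0.050, 0.011, 0.037]
p_b     = [0.961, 0.430, 0.887, 0.611, 0.958]
p_light = [0.725, 0.360, 0.485, 0.342, 0.716]

print(round(100 * weighted_purity(alpha, p_b), 1))      # 88.5
print(round(100 * weighted_purity(alpha, p_light), 1))  # 66.2
```

The sums reproduce the quoted central values of $p^{(\mathrm{s})}_b$ and $p^{(\mathrm{s})}_{\mathrm{l}}$.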
For the light-jet sample, the fraction of gluon fakes amounts to $19\%$, while the $b$-jet fakes correspond to $15\%$.\\ In the dilepton channel, a similar calculation yields the purity of the $b$-jet sample to be $p^{(\mathrm{d})}_b = (99.3^{+0.7}_{-6.5})\%$ as shown in Table \ref{tablePur2}. Thus, the $b$-jet sample purity achieved using $t\bar{t}$ final states is much higher than that obtained in inclusive $b$-jet measurements at the Tevatron \cite{CDF} or the LHC \cite{atlasBxsec}. \begin{table} \caption{Purity estimation for $b$-jets and light jets in the single-lepton channel. The uncertainty on the purity arises from the uncertainties in the signal and background fractions.} \label{tablePur} \begin{center} \begin{tabular}{cccc} \hline Process & $\alpha_k$ & $p_k (b)$ & $p_k$ (light)\\ \hline $t\bar{t}$ & 0.774 & 0.961 & 0.725\\ $\Wboson\rightarrow \ell\nu$ & 0.128 & 0.430 & 0.360\\ Multi-jet & 0.050 & 0.887 & 0.485\\ Other EW ($\Zboson$, diboson) & 0.011 & 0.611 & 0.342\\ Single top & 0.037 & 0.958 & 0.716\\ \hline \bf{Weighted total} & - & $(88.5 \pm 5.7)\%$ & $(66.2 \pm 4.1)\%$ \\ \end{tabular} \end{center} \end{table} \begin{table} \caption{Purity estimation for $b$-jets in the dilepton channel. The uncertainty on the purity arises from the uncertainties in the signal and background fractions.} \label{tablePur2} \begin{center} \begin{tabular}{ccc} \hline Process & $\alpha_k$ & $p_k (b)$\\ \hline $t\bar{t}$ & 0.949 & 0.997\\ $\Zboson \rightarrow \ell^{+}\ell^{-}$& 0.006 & 0.515\\ Other EW ($\Wboson$, diboson) & 0.002 & 0.375\\ Single top & 0.043 & 0.987\\ Multi-jet & - & -\\ \hline \bf{Weighted total} & - & $(99.3^{+0.7}_{-6.5})\%$\\ \end{tabular} \end{center} \end{table} \section{Jet shapes in the single-lepton channel} \label{sec7} For the jet shape calculation, locally calibrated topological clusters are used \cite{JES,lc1,lc2}. 
In this procedure, effects due to calorimeter response, leakage, and losses in the dead material upstream of the calorimeter are taken into account separately for electromagnetic and hadronic clusters \cite{menke}.\\ The differential jet shape $\rho(r)$ in an annulus of inner radius $r-\Delta r/2$ and outer radius $r+\Delta r/2$ from the axis of a given jet is defined as \begin{eqnarray} \rho(r) = \frac{1}{\Delta r}\frac{p_{\mathrm{T}}(r-\Delta r/2,r+\Delta r/2)}{p_{\mathrm{T}}(0,R)} \end{eqnarray} Here, $\Delta r = 0.04$ is the width of the annulus; $r$, such that $\Delta r/2 \leq r \leq R-\Delta r/2$, is the distance to the jet axis in the $\eta$-$\phi$ plane, and $p_{\mathrm{T}}(r_1,r_2)$ is the scalar sum of the $p_{\mathrm{T}}$ of the jet constituents with radii between $r_1$ and $r_2$.\\ Some distributions of $\rho (r)$ are shown in Fig. \ref{fig:rhob} for the $b$-jet sample selected in the single-lepton channel. \begin{figure} \centering \includegraphics[width=8.5cm,height=11.0cm]{fig4.pdf} \caption{Distribution of $R = 0.4$ $b$-jets in the single-lepton channel as a function of the differential jet shapes $\rho(r)$ for different values of $r$.} \label{fig:rhob} \end{figure} There is a marked peak at zero energy deposit, which indicates that energy is concentrated around relatively few particles. As $r$ increases, the distributions of $\rho(r)$ are concentrated at smaller values because of the relatively low energy density at the periphery of the jets. Both effects are well reproduced by the MC generators.\\ The analogous $\rho(r)$ distributions for light jets are shown in Fig. \ref{fig:rhol}. 
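The definition of $\rho(r)$ above can be illustrated with a short sketch, with assumed constituent inputs given as ($p_{\mathrm{T}}$, distance-to-axis) pairs rather than calibrated topological clusters:

```python
# Minimal sketch of the differential jet shape rho(r) in annuli of width
# dr = 0.04, computed from assumed jet constituents given as
# (pT, distance-to-axis) pairs; illustrative values only.

def differential_shape(constituents, R=0.4, dr=0.04):
    """Return a list of (annulus centre r, rho(r)) over the jet cone."""
    pt_total = sum(pt for pt, d in constituents if d <= R)
    shape = []
    for i in range(int(round(R / dr))):
        lo, hi = i * dr, (i + 1) * dr
        pt_annulus = sum(pt for pt, d in constituents if lo <= d < hi)
        shape.append((lo + dr / 2, pt_annulus / (dr * pt_total)))
    return shape

cons = [(20.0, 0.01), (8.0, 0.10), (2.0, 0.30)]
shape = differential_shape(cons)
# By construction the shape integrates to unity: sum_r rho(r) * dr = 1
print(round(sum(rho * 0.04 for _, rho in shape), 6))  # 1.0
```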
The gross features are similar to those previously discussed for $b$-jets, but for small values of $r$, the $\rho(r)$ distributions for light jets are somewhat flatter than those for $b$-jets.\\ \begin{figure} \centering \includegraphics[width=8.5cm,height=11.0cm]{fig5.pdf} \caption{Distribution of $R = 0.4$ light jets in the single-lepton channel as a function of the differential jet shapes $\rho(r)$ for different values of $r$.} \label{fig:rhol} \end{figure} The integrated jet shape in a cone of radius $r \leq R$ around the jet axis is defined as the cumulative distribution for $\rho(r)$, i.e. \begin{eqnarray} \Psi(r) = \frac{p_{\mathrm{T}}(0,r)}{p_{\mathrm{T}}(0,R)}; \ \ r \leq R \end{eqnarray} which satisfies $\Psi(r = R) = 1$. Figure \ref{fig:psib} (\ref{fig:psil}) shows distributions of the integrated jet shapes for $b$-jets (light jets) in the single-lepton sample. \begin{figure} \centering \includegraphics[width=8.5cm,height=11.0cm]{fig6.pdf} \caption{Distribution of $R = 0.4$ $b$-jets in the single-lepton channel as a function of the integrated jet shapes $\Psi(r)$ for different values of $r$.} \label{fig:psib} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm,height=11.0cm]{fig7.pdf} \caption{Distribution of $R = 0.4$ light jets in the single-lepton channel as a function of the integrated jet shapes $\Psi(r)$ for different values of $r$.} \label{fig:psil} \end{figure} These figures show the inclusive (i.e. not binned in either $\eta$ or $p_{\mathrm{T}}$) $\rho (r)$ and $\Psi(r)$ distributions for fixed values of $r$. Jet shapes are only mildly dependent on pseudorapidity, while they strongly depend on the transverse momentum. This behaviour has been verified in previous analyses \cite{d0,h1,hera2,chek,cdf01,lhc,cms}. This is illustrated in Figs. \ref{fig:shapePt} and \ref{fig:shapeEta}, which show the energy fraction in the outer half of the cone as a function of $p_{\mathrm{T}}$ and $|\eta|$. 
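The integrated shape $\Psi(r)$ defined above can be sketched in the same spirit, with assumed constituent inputs given as ($p_{\mathrm{T}}$, distance-to-axis) pairs; by definition $\Psi(R) = 1$:

```python
# Minimal sketch of the integrated jet shape Psi(r): the fraction of the
# jet pT within radius r of the jet axis. Constituents are assumed to be
# given as (pT, distance-to-axis) pairs; illustrative values only.

def integrated_shape(constituents, r, R=0.4):
    pt_total = sum(pt for pt, d in constituents if d <= R)
    return sum(pt for pt, d in constituents if d <= r) / pt_total

cons = [(20.0, 0.01), (8.0, 0.10), (2.0, 0.30)]
print(integrated_shape(cons, 0.4))  # 1.0, since Psi(R) = 1 by definition
print(integrated_shape(cons, 0.2))  # 28/30, the fraction inside r = 0.2
```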
Because of this strong dependence on transverse momentum, all the data presented in the following are binned in five $p_{\mathrm{T}}$ regions with $p_{\mathrm{T}} < 150 \GeV$, where the statistical uncertainty is small enough. In the following, only the average values of these distributions are presented: \begin{figure} \centering \includegraphics[width=8.5cm,height=6.0cm]{fig8a.pdf} \includegraphics[width=8.5cm,height=6.0cm]{fig8b.pdf} \caption{Dependence of the $b$-jet (top) and light-jet (bottom) shapes on the jet transverse momentum. This dependence is quantified by plotting the mean value $\langle 1-\Psi(r = 0.2)\rangle$ (the fraction of energy in the outer half of the jet cone) as a function of $p_{\mathrm{T}}$ for jets in the single-lepton sample.} \label{fig:shapePt} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm,height=6.0cm]{fig9a.pdf} \includegraphics[width=8.5cm,height=6.0cm]{fig9b.pdf} \caption{Dependence of the $b$-jet (top) and light-jet (bottom) shape on the jet pseudorapidity. This dependence is quantified by plotting the mean value $\langle 1-\Psi(r = 0.2)\rangle$ (the fraction of energy in the outer half of the jet cone) as a function of $|\eta|$ for jets in the single-lepton sample.} \label{fig:shapeEta} \end{figure} \begin{eqnarray} \langle\rho(r)\rangle & = & \frac{1}{\Delta r}\frac{1}{N_{\mathrm{jets}}}\sum_{\mathrm{jets}}\frac{p_{\mathrm{T}}(r-\Delta r/2,r+\Delta r/2)}{p_{\mathrm{T}}(0,R)}\\ \langle\Psi(r)\rangle & = & \frac{1}{N_{\mathrm{jets}}}\sum_{\mathrm{jets}}\frac{p_{\mathrm{T}}(0,r)}{p_{\mathrm{T}}(0,R)} \label{meanShape} \end{eqnarray} where the sum is performed over all jets of a given sample, light jets ($l$) or $b$-jets ($b$) and $N_{\mathrm{jets}}$ is the number of jets in the sample. \section{Results at the detector level} \label{sec8} In the following, the detector-level results for the average values $\langle \rho(r) \rangle$ and $\langle \Psi(r) \rangle$ as a function of the jet internal radius $r$ are presented.
A comparison has been made between $b$-jet shapes obtained in both the dilepton and single-lepton samples, and it is found that they are consistent with each other within the uncertainties. Thus the samples are merged. In Figure \ref{fig:detRho}, the distributions for the average values of the differential jet shapes are shown for each $p_{\mathrm{T}}$ bin, along with a comparison with the expectations from the simulated samples described in Sect. \ref{sec3}. \begin{figure} \centering \includegraphics[width=8.5cm,height=12.5cm]{fig10.pdf} \caption{Average values of the differential jet shapes $\langle\rho(r)\rangle$ for light jets (triangles) and $b$-jets (squares), with $\Delta r = 0.04$, as a function of $r$ at the detector level, compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators. The uncertainties shown for data are only statistical.} \label{fig:detRho} \end{figure} There is a small but clear difference between light- and $b$-jet differential shapes, the former lying above (below) the latter for smaller (larger) values of $r$. These differences are more visible at low transverse momentum. In Figure \ref{fig:detPsi}, the average integrated jet shapes $\langle\Psi(r)\rangle$ are shown for both the light jets and $b$-jets, and compared to the MC expectations discussed earlier. Similar comments apply here: \begin{figure} \centering \includegraphics[width=8.5cm,height=12.5cm]{fig11.pdf} \caption{Average values of the integrated jet shapes $\langle\Psi(r)\rangle$ for light jets (triangles) and $b$-jets (squares), with $\Delta r = 0.04$, as a function of $r$ at the detector level, compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators. 
The uncertainties shown for data are only statistical.} \label{fig:detPsi} \end{figure} The values of $\langle\Psi(r)\rangle$ are consistently larger for light jets than for $b$-jets for small values of $r$, while they tend to merge as $r \rightarrow R$ since, by definition, $\Psi(R) = 1$. \section{Unfolding to particle level} \label{sec9} In order to correct the data for acceptance and detector effects, thus enabling comparisons with different models and other experiments, an unfolding procedure is followed. The method used to correct the measurements based on topological clusters to the particle level relies on a bin-by-bin correction. Correction factors $F(r)$ are calculated separately for differential, $\langle\rho(r)\rangle$, and integrated, $\langle\Psi(r)\rangle$, jet shapes in both the light- and $b$-jet samples. For differential ($\rho$) and integrated jet shapes ($\Psi$), they are defined as the ratio of the particle-level quantity to the detector-level quantity as described by the MC simulations discussed in Sect. \ref{sec3}, i.e. \begin{eqnarray} F^{\rho}_{\mathrm{l},b}(r) = \frac{\langle\rho(r)_{\mathrm{l},b}\rangle_{\mathrm{MC,part}}} {\langle\rho(r)_{\mathrm{l},b}\rangle_{\mathrm{MC,det}}}\\ F^{\Psi}_{\mathrm{l},b}(r) = \frac{\langle\Psi(r)_{\mathrm{l},b}\rangle_{\mathrm{MC,part}}} {\langle\Psi(r)_{\mathrm{l},b}\rangle_{\mathrm{MC,det}}} \end{eqnarray} While the detector-level MC includes the background sources described before, the particle-level jets are built using all particles in the signal sample with an average lifetime above $10^{-11} \mathrm{s}$, excluding muons and neutrinos. The results have only a small sensitivity to the inclusion or not of muons and neutrinos, as well as to the background estimation. 
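The bin-by-bin correction described above can be sketched with illustrative numbers, none of which are taken from the measurement:

```python
# Sketch of the bin-by-bin unfolding described above (illustrative
# numbers only): each detector-level bin is multiplied by
# F(r) = MC_particle(r) / MC_detector(r).

def unfold_bin_by_bin(data_det, mc_particle, mc_detector):
    return [d * (p / m) for d, p, m in zip(data_det, mc_particle, mc_detector)]

# Hypothetical jet-shape values in three r bins:
data        = [8.0, 4.0, 1.0]
mc_particle = [8.4, 3.8, 0.9]
mc_detector = [8.0, 4.0, 1.0]
print(unfold_bin_by_bin(data, mc_particle, mc_detector))  # [8.4, 3.8, 0.9]
```

When data and detector-level MC agree, the corrected result coincides with the particle-level MC, as in this idealised case.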
For particle-level $b$-jets, a $b$-hadron with $p_{\mathrm{T}} > 5 \GeV$ is required to be within $\Delta R = 0.3$ of the jet axis, while for light jets, a selection equivalent to that for the detector-level jets is applied, selecting the non-$b$-jet pair with invariant mass closest to $m_{\Wboson}$. The same kinematic selection criteria are applied to these particle-level jets as for the reconstructed jets, namely $p_{\mathrm{T}} > 25 \GeV$, $|\eta| < 2.5$ and $\Delta R > 0.8$ to avoid jet--jet overlaps.\\ A Bayesian iterative unfolding approach \cite{bayes} is used as a cross-check. The RooUnfold software \cite{roounfold} is used by providing the jet-by-jet information on the jet shapes, in the $p_{\mathrm{T}}$ intervals defined above. This method takes into account bin-by-bin migrations in the $\rho(r)$ and $\Psi(r)$ distributions for fixed values of $r$. The results of the bin-by-bin and the Bayesian unfolding procedures agree at the 2\% level.\\ As an additional check of the stability of the unfolding procedure, the directly unfolded integrated jet shapes are compared with those obtained from integrating the unfolded differential distributions. The results agree to better than 1\%. These results are reassuring since the differential and integrated jet shapes are subject to migration and resolution effects in different ways. Both quantities are also subject to bin-to-bin correlations. For the differential measurement, the correlations arise from the common normalisation. They increase with the jet transverse momentum, varying from 25\% to 50\% at their maximum, which is reached for neighbouring bins at low $r$. The correlations for the integrated measurement are greater and their maximum varies from 60\% to 75\% as the jet $\pt$ increases. \section{Systematic uncertainties} \label{sec10} The main sources of systematic uncertainty are described below. 
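The consistency check between the two observables exploits the relation $\Psi(r_k) = \sum_{j \le k} \rho(r_j)\,\Delta r$ with $\Delta r = 0.04$. As a short illustration in Python, summing the first unfolded $\langle\rho_b(r)\rangle$ values of the $30 \GeV < p_{\mathrm{T}} < 40 \GeV$ bin (Table \ref{tabDiff1}) reproduces the corresponding $\langle\Psi_b(r)\rangle$ values (Table \ref{tabInt1}) to the quoted $\sim$1\% level:

```python
import numpy as np

dr = 0.04
# First three <rho_b(r)> values of the 30-40 GeV bin (Table tabDiff1)
rho_b = np.array([3.84, 6.06, 5.20])

# Integrating (cumulatively summing) the differential shape
# gives the integrated shape
psi_b = np.cumsum(rho_b) * dr

# Directly unfolded values from Table tabInt1: 0.154, 0.395, 0.602
```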
\begin{itemize} \item{} The energy of individual clusters inside the jet is varied according to studies using isolated tracks \cite{singleHadron}, parameterising the uncertainty on the calorimeter energy measurements as a function of the cluster $p_{\mathrm{T}}$. The impact on the differential jet shape increases from 2\% to 10\% as the edge of the jet cone is approached. \item{} The coordinates $\eta$, $\phi$ of the clusters are smeared using a Gaussian distribution with an RMS width of 5~mrad accounting for small differences in the cluster position between data and Monte Carlo \cite{boostedJets}. This smearing has an effect on the jet shape which is smaller than 2\%. \item{} An uncertainty arising from the amount of passive material in the detector is derived using the algorithm described in Ref. \cite{boostedJets} as a result of the studies carried out in Ref. \cite{singleHadron}. Low-energy clusters ($E < 2.5 \GeV$) are removed from the reconstruction according to a probability function given by $\mathcal{P}(E=0)\times \mathrm{e}^{-2E}$, where $\mathcal{P}(E = 0)$ is the measured probability (28\%) of a charged particle track to be associated with a zero energy deposit in the calorimeter and $E$ is the cluster energy in $\GeV$. As a result, approximately 6\% of the total number of clusters are discarded. The impact of this cluster-removing algorithm on the measured jet shapes is smaller than 2\%. \item{} As a further cross-check an unfolding of the track-based jet shapes to the particle level has also been performed. The differences from those obtained using calorimetric measurements are of a similar scale to the ones discussed for the cluster energy, angular smearing and dead material. \item{} An uncertainty arising from the jet energy calibration (JES) is taken into account by varying the jet energy scale in the range 2\% to 8\% of the measured value, depending on the jet $p_{\mathrm{T}}$ and $\eta$. 
This variation is different for light jets and $b$-jets since they have a different particle composition. \item{} The jet energy resolution is also taken into account by smearing the jet $p_{\mathrm{T}}$ using a Gaussian centred at unity and with standard deviation $\sigma_{\mathrm{r}}$ \cite{jetResol}. The impact on the measured jet shapes is about 5\%. \item{} The uncertainty due to the JVF requirement is estimated by comparing the jet shapes with and without this requirement. The uncertainty is smaller than 1\%. \item{} An uncertainty is also assigned to take pile-up effects into account. This is done by calculating the differences between samples where the number of $pp$ interaction vertices is smaller (larger) than five and the total sample. The impact on the differential jet shapes varies from 2\% to 10\% as $r$ increases. \item{} An additional uncertainty due to the unfolding method is determined by comparing the correction factors obtained with three different MC samples, \textsc{Powheg + Pythia}, \textsc{Powheg + Jimmy} and \textsc{AcerMC} \cite{acer} with the \textsc{Perugia 2010} tune \cite{Perugia}, to the nominal correction factors from the \textsc{MC@NLO} sample. The uncertainty is defined as the maximum deviation of these three unfolding results, and it varies from 1\% to 8\%. \end{itemize} Additional systematic uncertainties associated with details of the analysis such as the working point of the $b$-tagging algorithm and the $\Delta R > 0.8$ cut between jets, as well as those related to physics object reconstruction efficiencies and variations in the background normalisation are found to be negligible. All sources of systematic uncertainty are propagated through the unfolding procedure. The resulting systematic uncertainties on each differential or integrated shape are added in quadrature. 
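The cluster-removal step of the passive-material systematic can be sketched as an acceptance--rejection procedure. In the following Python illustration, only the 28\% zero-deposit probability, the $\mathrm{e}^{-2E}$ dependence and the $E < 2.5 \GeV$ threshold are taken from the text; the toy cluster-energy spectrum, and hence the removed fraction it yields, are assumptions made for the example:

```python
import numpy as np

def remove_clusters(energies, rng, p0=0.28, e_max=2.5):
    """Discard clusters with E < e_max (GeV) with probability
    P(E) = p0 * exp(-2E), mimicking the passive-material variation."""
    energies = np.asarray(energies, dtype=float)
    p_drop = np.where(energies < e_max, p0 * np.exp(-2.0 * energies), 0.0)
    keep = rng.random(energies.size) >= p_drop
    return energies[keep]

rng = np.random.default_rng(0)
clusters = rng.exponential(scale=1.0, size=100_000)  # assumed toy spectrum
kept = remove_clusters(clusters, rng)
removed_fraction = 1.0 - kept.size / clusters.size   # depends on the spectrum
```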
In the case of differential jet shapes, the uncertainty varies from 1\% to 20\% in each $p_{\mathrm{T}}$ bin as $r$ increases, while the uncertainty for the integrated shapes decreases from 10\% to 0\% as one approaches the edge of the jet cone, where $r = R$. \section{Discussion of the results} \label{sec11} The results at the particle level are presented, together with the total uncertainties arising from statistical and systematic effects. The averaged differential jet shapes $\langle\rho(r)\rangle$ are shown in the even-numbered Figs. \ref{fig:diff1}--\ref{fig:diff5} as a function of $r$ and in bins of $\pt$, while numerical results are presented in the odd-numbered Tables \ref{tabDiff1}--\ref{tabDiff5}. The observation made at the detector level in Sect. \ref{sec8} that $b$-jets are broader than light jets is strengthened after unfolding, because the unfolding also corrects the light-jet sample for purity effects. Similarly, the odd-numbered Figs. \ref{fig:int1}--\ref{fig:int5} show the integrated shapes $\langle\Psi(r)\rangle$ as a function of $r$ and in bins of $\pt$ for light jets and $b$-jets. Numerical results are presented in the even-numbered Tables \ref{tabInt1}--\ref{tabInt5}. As before, the observation is made that $b$-jets have a wider energy distribution inside the jet cone than light jets, as can be seen from $\langle\Psi_b\rangle < \langle\Psi_\mathrm{l}\rangle$ at low $p_{\mathrm{T}}$ and small $r$.\\ These observations are in agreement with the MC calculations, where top-quark pair-production cross sections are implemented using matrix elements calculated to NLO accuracy, which are then supplemented by angular- or transverse-momentum-ordered parton showers. Within this context, both \textsc{MC@NLO} and \textsc{Powheg+Pythia} give a good description of the data, as illustrated in Figs. \ref{fig:diff1}--\ref{fig:int5}.\\ Comparisons with other MC approaches have been made (see Fig. \ref{fig:mcComps}). 
The \textsc{Perugia 2011} tune, coupled to \textsc{Alpgen+Pythia}, \textsc{Powheg+Pythia} and \textsc{AcerMC+Pythia}, has been compared to the data, and found to be slightly disfavoured. The \textsc{AcerMC} generator \cite{acer} coupled to \textsc{Pythia} for the parton shower and with the \textsc{Perugia 2010} tune \cite{Perugia} gives a somewhat better description of the data, as does the \textsc{Alpgen} \cite{alpgen} generator coupled to \textsc{Herwig}.\\ \textsc{AcerMC} coupled to \textsc{Tune A Pro} \cite{tuneA1,tuneA2} is found to give the best description of the data within the tunes investigated. Colour reconnection effects, as implemented in \textsc{Tune A CR Pro} \cite{tuneA1, tuneA2}, have a small impact on this observable, compared to the systematic uncertainties.\\ Since jet shapes are dependent on the method chosen to match parton showers to the matrix-element calculations and, to a lesser extent, on the fragmentation and underlying-event modelling, the measurements presented here provide valuable inputs to constrain present and future MC models of colour radiation in $t\bar{t}$ final states.\\ MC generators predict that jet shapes depend on the hard-scattering process. MC studies were carried out and it was found that inclusive $b$-jet shapes, obtained from the underlying hard processes $gg\to b\bar{b}$ and $gb \to gb$ with gluon splitting $g\to b\bar{b}$ included in the subsequent parton shower, are wider than those obtained in the $t\bar{t}$ final states. The differences are interpreted as due to the different colour flows in the two final states, i.e. $t\bar{t}$ and inclusive multi-jet production. Similar differences are also found for light-jet shapes, with jets generated in inclusive multi-jet samples being wider than those from $\Wboson$-boson decays in top-quark pair production. 
\section{Summary} \label{sec12} The structure of jets in $t\bar{t}$ final states has been studied in both the dilepton and single-lepton modes using the ATLAS detector at the LHC. The first sample proves to be a very clean and copious source of $b$-jets, as the top quark decays predominantly via $t\to \Wboson b$. The second is also a clean source of light jets produced in the hadronic decays of one of the $\Wboson$ bosons in the final state. The differences between the $b$-quark and light-quark jets obtained in this environment have been studied in terms of the differential jet shapes $\rho(r)$ and integrated jet shapes $\Psi(r)$. These variables exhibit a marked (mild) dependence on the jet transverse momentum (pseudorapidity).\\ The results show that the mean value $\langle\Psi(r)\rangle$ is smaller for $b$-jets than for light jets in the region where it is possible to distinguish them, i.e. for low values of the jet internal radius $r$. This means that $b$-jets are broader than light-quark jets, and therefore the cores of light jets have a larger energy density than those of $b$-jets. The jet shapes are well reproduced by current MC generators for both light and $b$-jets. 
\newpage \small{ \paragraph{\bf{\small{Acknowledgements}}}{We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.\\ We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET, ERC and NSRF, European Union; IN2P3-CNRS, CEA-DSM/ IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH Foundation, Germany; GSRT and NSRF, Greece; ISF, MINERVA, GIF, DIP and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; BRF and RCN, Norway; MNiSW, Poland; GRICES and FCT, Portugal; MERYS (MECTS), Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of America.\\ The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide.}} \clearpage \begin{figure}[] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig12.pdf} \caption{Differential jet shapes $\langle\rho(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). 
The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $30 \GeV < p_{\mathrm{T}} < 40 \GeV$. The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:diff1} \end{figure} \begin{table}[!h] \vspace{0.5cm} \caption{Unfolded values for $\langle\rho(r)\rangle$, together with statistical and systematic uncertainties for $30 \GeV < p_{\mathrm{T}} < 40 \GeV$.} \normalsize \label{tabDiff1} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\rho_b (r)\rangle$ [$b$-jets] & $\langle\rho_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.02 & 3.84 $\pm$ 0.15 $^{+ 0.29 }_{- 0.36 }$ & 7.64 $\pm$ 0.27 $^{+ 0.93 }_{- 1.10 }$ \\[3pt] 0.06 & 6.06 $\pm$ 0.14 $^{+ 0.31 }_{- 0.36 }$ & 6.10 $\pm$ 0.16 $^{+ 0.48 }_{- 0.47 }$ \\[3pt] 0.10 & 5.20 $\pm$ 0.11 $^{+ 0.24 }_{- 0.23 }$ & 3.75 $\pm$ 0.10 $^{+ 0.32 }_{- 0.33 }$ \\[3pt] 0.14 & 3.45 $\pm$ 0.09 $^{+ 0.12 }_{- 0.13 }$ & 2.28 $\pm$ 0.07 $^{+ 0.14 }_{- 0.16 }$ \\[3pt] 0.18 & 2.21 $\pm$ 0.06 $^{+ 0.13 }_{- 0.11 }$ & 1.50 $\pm$ 0.05 $^{+ 0.14 }_{- 0.12 }$ \\[3pt] 0.22 & 1.58 $\pm$ 0.04 $^{+ 0.10 }_{- 0.11 }$ & 1.08 $\pm$ 0.03 $^{+ 0.09 }_{- 0.10 }$ \\[3pt] 0.26 & 1.15 $\pm$ 0.03 $^{+ 0.13 }_{- 0.13 }$ & 0.83 $\pm$ 0.03 $^{+ 0.11 }_{- 0.09 }$ \\[3pt] 0.30 & 0.80 $\pm$ 0.02 $^{+ 0.08 }_{- 0.07 }$ & 0.64 $\pm$ 0.02 $^{+ 0.07 }_{- 0.08 }$ \\[3pt] 0.34 & 0.60 $\pm$ 0.01 $^{+ 0.06 }_{- 0.06 }$ & 0.53 $\pm$ 0.01 $^{+ 0.07 }_{- 0.08 }$ \\[3pt] 0.38 & 0.32 $\pm$ 0.01 $^{+ 0.04 }_{- 0.04 }$ & 0.28 $\pm$ 0.01 $^{+ 0.04 }_{- 0.04 }$ \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig13.pdf} \caption{Integrated jet shapes $\langle\Psi(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $30 \GeV < p_{\mathrm{T}} < 40 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:int1} \end{figure} \begin{table}[!h] \vspace{0.81cm} \caption{Unfolded values for $\langle\Psi(r)\rangle$, together with statistical and systematic uncertainties for $30 \GeV < p_{\mathrm{T}} < 40 \GeV$.} \normalsize \label{tabInt1} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\Psi_b (r)\rangle$ [$b$-jets] & $\langle\Psi_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.04 & 0.154 $\pm$ 0.006 $^{+ 0.012 }_{- 0.014 }$ & 0.306 $\pm$ 0.011 $^{+ 0.037 }_{- 0.043 }$ \\[3pt] 0.08 & 0.395 $\pm$ 0.007 $^{+ 0.023 }_{- 0.028 }$ & 0.550 $\pm$ 0.009 $^{+ 0.031 }_{- 0.037 }$ \\[3pt] 0.12 & 0.602 $\pm$ 0.006 $^{+ 0.025 }_{- 0.026 }$ & 0.706 $\pm$ 0.007 $^{+ 0.028 }_{- 0.034 }$ \\[3pt] 0.16 & 0.739 $\pm$ 0.004 $^{+ 0.025 }_{- 0.025 }$ & 0.802 $\pm$ 0.005 $^{+ 0.025 }_{- 0.030 }$ \\[3pt] 0.20 & 0.825 $\pm$ 0.003 $^{+ 0.020 }_{- 0.023 }$ & 0.863 $\pm$ 0.004 $^{+ 0.020 }_{- 0.025 }$ \\[3pt] 0.24 & 0.887 $\pm$ 0.003 $^{+ 0.016 }_{- 0.017 }$ & 0.907 $\pm$ 0.003 $^{+ 0.016 }_{- 0.019 }$ \\[3pt] 0.28 & 0.934 $\pm$ 0.002 $^{+ 0.012 }_{- 0.012 }$ & 0.942 $\pm$ 0.002 $^{+ 0.011 }_{- 0.014 }$ \\[3pt] 0.32 & 0.964 $\pm$ 0.001 $^{+ 0.007 }_{- 0.007 }$ & 0.967 $\pm$ 0.001 $^{+ 0.007 }_{- 0.008 }$ \\[3pt] 0.36 & 0.988 $\pm$ 0.001 $^{+ 0.004 }_{- 0.002 }$ & 0.989 $\pm$ 0.001 $^{+ 0.003 }_{- 0.003 }$ \\[3pt] 0.40 & 1.000 & 1.000 \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig14.pdf} \caption{Differential jet shapes $\langle\rho(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $40 \GeV < p_{\mathrm{T}} < 50 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:diff2} \end{figure} \begin{table}[!h] \vspace{0.8cm} \caption{Unfolded values for $\langle\rho(r)\rangle$, together with statistical and systematic uncertainties for $40 \GeV < p_{\mathrm{T}} < 50 \GeV$.} \normalsize \label{tabDiff2} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\rho_b (r)\rangle$ [$b$-jets] & $\langle\rho_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.02 & 4.66 $\pm$ 0.15 $^{+ 0.58 }_{- 0.61 }$ & 9.39 $\pm$ 0.34 $^{+ 1.10 }_{- 1.10 }$ \\[3pt] 0.06 & 7.23 $\pm$ 0.14 $^{+ 0.33 }_{- 0.35 }$ & 6.14 $\pm$ 0.17 $^{+ 0.44 }_{- 0.43 }$ \\[3pt] 0.10 & 5.22 $\pm$ 0.11 $^{+ 0.25 }_{- 0.28 }$ & 3.27 $\pm$ 0.10 $^{+ 0.27 }_{- 0.27 }$ \\[3pt] 0.14 & 3.12 $\pm$ 0.07 $^{+ 0.15 }_{- 0.15 }$ & 1.85 $\pm$ 0.07 $^{+ 0.16 }_{- 0.12 }$ \\[3pt] 0.18 & 1.83 $\pm$ 0.05 $^{+ 0.15 }_{- 0.17 }$ & 1.28 $\pm$ 0.05 $^{+ 0.11 }_{- 0.11 }$ \\[3pt] 0.22 & 1.12 $\pm$ 0.03 $^{+ 0.06 }_{- 0.06 }$ & 0.95 $\pm$ 0.04 $^{+ 0.10 }_{- 0.11 }$ \\[3pt] 0.26 & 0.83 $\pm$ 0.02 $^{+ 0.10 }_{- 0.09 }$ & 0.69 $\pm$ 0.03 $^{+ 0.08 }_{- 0.05 }$ \\[3pt] 0.30 & 0.59 $\pm$ 0.02 $^{+ 0.06 }_{- 0.06 }$ & 0.56 $\pm$ 0.02 $^{+ 0.05 }_{- 0.05 }$ \\[3pt] 0.34 & 0.46 $\pm$ 0.01 $^{+ 0.05 }_{- 0.05 }$ & 0.41 $\pm$ 0.01 $^{+ 0.04 }_{- 0.04 }$ \\[3pt] 0.38 & 0.26 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ & 0.23 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig15.pdf} \caption{Integrated jet shapes $\langle\Psi(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $40 \GeV < p_{\mathrm{T}} < 50 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:int2} \end{figure} \begin{table}[!h] \vspace{0.81cm} \caption{Unfolded values for $\langle\Psi(r)\rangle$, together with statistical and systematic uncertainties for $40 \GeV < p_{\mathrm{T}} < 50 \GeV$.} \normalsize \label{tabInt2} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\Psi_b (r)\rangle$ [$b$-jets] & $\langle\Psi_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.04 & 0.187 $\pm$ 0.006 $^{+ 0.023 }_{- 0.024 }$ & 0.376 $\pm$ 0.013 $^{+ 0.044 }_{- 0.043 }$ \\[3pt] 0.08 & 0.475 $\pm$ 0.007 $^{+ 0.033 }_{- 0.034 }$ & 0.621 $\pm$ 0.011 $^{+ 0.032 }_{- 0.034 }$ \\[3pt] 0.12 & 0.683 $\pm$ 0.005 $^{+ 0.027 }_{- 0.029 }$ & 0.757 $\pm$ 0.008 $^{+ 0.025 }_{- 0.027 }$ \\[3pt] 0.16 & 0.805 $\pm$ 0.004 $^{+ 0.023 }_{- 0.025 }$ & 0.832 $\pm$ 0.006 $^{+ 0.021 }_{- 0.022 }$ \\[3pt] 0.20 & 0.876 $\pm$ 0.003 $^{+ 0.017 }_{- 0.018 }$ & 0.885 $\pm$ 0.004 $^{+ 0.017 }_{- 0.018 }$ \\[3pt] 0.24 & 0.918 $\pm$ 0.002 $^{+ 0.015 }_{- 0.016 }$ & 0.925 $\pm$ 0.003 $^{+ 0.012 }_{- 0.014 }$ \\[3pt] 0.28 & 0.950 $\pm$ 0.002 $^{+ 0.010 }_{- 0.011 }$ & 0.953 $\pm$ 0.002 $^{+ 0.010 }_{- 0.011 }$ \\[3pt] 0.32 & 0.973 $\pm$ 0.001 $^{+ 0.007 }_{- 0.006 }$ & 0.976 $\pm$ 0.001 $^{+ 0.006 }_{- 0.006 }$ \\[3pt] 0.36 & 0.990 $\pm$ 0.001 $^{+ 0.003 }_{- 0.002 }$ & 0.992 $\pm$ 0.001 $^{+ 0.003 }_{- 0.003 }$ \\[3pt] 0.40 & 1.000 & 1.000 \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig16.pdf} \caption{Differential jet shapes $\langle\rho(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $50 \GeV < p_{\mathrm{T}} < 70 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:diff3} \end{figure} \begin{table}[!h] \vspace{0.8cm} \caption{Unfolded values for $\langle\rho(r)\rangle$, together with statistical and systematic uncertainties for $50 \GeV < p_{\mathrm{T}} < 70 \GeV$.} \normalsize \label{tabDiff3} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\rho_b (r)\rangle$ [$b$-jets] & $\langle\rho_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.02 & 6.19 $\pm$ 0.13 $^{+ 0.46 }_{- 0.44 }$ & 10.82 $\pm$ 0.31 $^{+ 0.64 }_{- 0.84 }$ \\[3pt] 0.06 & 8.14 $\pm$ 0.11 $^{+ 0.27 }_{- 0.29 }$ & 6.17 $\pm$ 0.14 $^{+ 0.45 }_{- 0.44 }$ \\[3pt] 0.10 & 4.62 $\pm$ 0.06 $^{+ 0.17 }_{- 0.18 }$ & 2.92 $\pm$ 0.08 $^{+ 0.14 }_{- 0.15 }$ \\[3pt] 0.14 & 2.50 $\pm$ 0.04 $^{+ 0.20 }_{- 0.21 }$ & 1.56 $\pm$ 0.05 $^{+ 0.05 }_{- 0.06 }$ \\[3pt] 0.18 & 1.40 $\pm$ 0.03 $^{+ 0.11 }_{- 0.10 }$ & 1.04 $\pm$ 0.04 $^{+ 0.08 }_{- 0.08 }$ \\[3pt] 0.22 & 0.87 $\pm$ 0.02 $^{+ 0.05 }_{- 0.04 }$ & 0.75 $\pm$ 0.03 $^{+ 0.05 }_{- 0.05 }$ \\[3pt] 0.26 & 0.60 $\pm$ 0.01 $^{+ 0.05 }_{- 0.04 }$ & 0.54 $\pm$ 0.02 $^{+ 0.07 }_{- 0.06 }$ \\[3pt] 0.30 & 0.45 $\pm$ 0.01 $^{+ 0.04 }_{- 0.04 }$ & 0.44 $\pm$ 0.01 $^{+ 0.05 }_{- 0.04 }$ \\[3pt] 0.34 & 0.36 $\pm$ 0.01 $^{+ 0.04 }_{- 0.04 }$ & 0.34 $\pm$ 0.01 $^{+ 0.04 }_{- 0.05 }$ \\[3pt] 0.38 & 0.21 $\pm$ 0.00 $^{+ 0.03 }_{- 0.03 }$ & 0.23 $\pm$ 0.01 $^{+ 0.03 }_{- 0.04 }$ \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig17.pdf} \caption{Integrated jet shapes $\langle\Psi(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $50 \GeV < p_{\mathrm{T}} < 70 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:int3} \end{figure} \begin{table}[!h] \vspace{0.81cm} \caption{Unfolded values for $\langle\Psi(r)\rangle$, together with statistical and systematic uncertainties for $50 \GeV < p_{\mathrm{T}} < 70 \GeV$.} \normalsize \label{tabInt3} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\Psi_b (r)\rangle$ [$b$-jets] & $\langle\Psi_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.04 & 0.248 $\pm$ 0.005 $^{+ 0.019 }_{- 0.018 }$ & 0.433 $\pm$ 0.012 $^{+ 0.026 }_{- 0.034 }$ \\[3pt] 0.08 & 0.573 $\pm$ 0.005 $^{+ 0.024 }_{- 0.023 }$ & 0.686 $\pm$ 0.009 $^{+ 0.020 }_{- 0.024 }$ \\[3pt] 0.12 & 0.753 $\pm$ 0.004 $^{+ 0.025 }_{- 0.025 }$ & 0.807 $\pm$ 0.006 $^{+ 0.017 }_{- 0.019 }$ \\[3pt] 0.16 & 0.851 $\pm$ 0.003 $^{+ 0.019 }_{- 0.018 }$ & 0.868 $\pm$ 0.004 $^{+ 0.017 }_{- 0.019 }$ \\[3pt] 0.20 & 0.905 $\pm$ 0.002 $^{+ 0.015 }_{- 0.015 }$ & 0.909 $\pm$ 0.003 $^{+ 0.014 }_{- 0.016 }$ \\[3pt] 0.24 & 0.938 $\pm$ 0.001 $^{+ 0.012 }_{- 0.013 }$ & 0.939 $\pm$ 0.002 $^{+ 0.012 }_{- 0.014 }$ \\[3pt] 0.28 & 0.961 $\pm$ 0.001 $^{+ 0.008 }_{- 0.009 }$ & 0.960 $\pm$ 0.002 $^{+ 0.008 }_{- 0.009 }$ \\[3pt] 0.32 & 0.978 $\pm$ 0.001 $^{+ 0.005 }_{- 0.005 }$ & 0.977 $\pm$ 0.001 $^{+ 0.006 }_{- 0.006 }$ \\[3pt] 0.36 & 0.992 $\pm$ 0.000 $^{+ 0.003 }_{- 0.002 }$ & 0.990 $\pm$ 0.001 $^{+ 0.003 }_{- 0.003 }$ \\[3pt] 0.40 & 1.000 & 1.000 \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig18.pdf} \caption{Differential jet shapes $\langle\rho(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $70 \GeV < p_{\mathrm{T}} < 100 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:diff4} \end{figure} \begin{table}[!h] \vspace{0.8cm} \caption{Unfolded values for $\langle\rho(r)\rangle$, together with statistical and systematic uncertainties for $70 \GeV < p_{\mathrm{T}} < 100 \GeV$.} \normalsize \label{tabDiff4} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\rho_b (r)\rangle$ [$b$-jets] & $\langle\rho_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.02 & 8.98 $\pm$ 0.15 $^{+ 0.55 }_{- 0.54 }$ & 12.37 $\pm$ 0.38 $^{+ 0.93 }_{- 1.10 }$ \\[3pt] 0.06 & 8.14 $\pm$ 0.10 $^{+ 0.17 }_{- 0.17 }$ & 5.44 $\pm$ 0.16 $^{+ 0.38 }_{- 0.39 }$ \\[3pt] 0.10 & 3.80 $\pm$ 0.05 $^{+ 0.25 }_{- 0.25 }$ & 2.42 $\pm$ 0.08 $^{+ 0.18 }_{- 0.21 }$ \\[3pt] 0.14 & 1.74 $\pm$ 0.03 $^{+ 0.10 }_{- 0.10 }$ & 1.52 $\pm$ 0.06 $^{+ 0.11 }_{- 0.13 }$ \\[3pt] 0.18 & 1.00 $\pm$ 0.02 $^{+ 0.03 }_{- 0.03 }$ & 0.89 $\pm$ 0.04 $^{+ 0.05 }_{- 0.05 }$ \\[3pt] 0.22 & 0.66 $\pm$ 0.01 $^{+ 0.04 }_{- 0.04 }$ & 0.68 $\pm$ 0.03 $^{+ 0.05 }_{- 0.04 }$ \\[3pt] 0.26 & 0.47 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ & 0.45 $\pm$ 0.02 $^{+ 0.05 }_{- 0.04 }$ \\[3pt] 0.30 & 0.34 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ & 0.38 $\pm$ 0.02 $^{+ 0.04 }_{- 0.04 }$ \\[3pt] 0.34 & 0.26 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ & 0.28 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ \\[3pt] 0.38 & 0.17 $\pm$ 0.00 $^{+ 0.02 }_{- 0.02 }$ & 0.18 $\pm$ 0.01 $^{+ 0.03 }_{- 0.03 }$ \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig19.pdf} \caption{Integrated jet shapes $\langle\Psi(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $70 \GeV < p_{\mathrm{T}} < 100 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:int4} \vspace{0.8cm} \end{figure} \begin{table}[!h] \caption{Unfolded values for $\langle\Psi(r)\rangle$, together with statistical and systematic uncertainties for $70 \GeV < p_{\mathrm{T}} < 100 \GeV$.} \normalsize \label{tabInt4} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\Psi_b (r)\rangle$ [$b$-jets] & $\langle\Psi_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.04 & 0.359 $\pm$ 0.006 $^{+ 0.022 }_{- 0.021 }$ & 0.495 $\pm$ 0.015 $^{+ 0.037 }_{- 0.042 }$ \\[3pt] 0.08 & 0.678 $\pm$ 0.005 $^{+ 0.023 }_{- 0.023 }$ & 0.718 $\pm$ 0.010 $^{+ 0.032 }_{- 0.037 }$ \\[3pt] 0.12 & 0.827 $\pm$ 0.003 $^{+ 0.017 }_{- 0.018 }$ & 0.818 $\pm$ 0.007 $^{+ 0.019 }_{- 0.021 }$ \\[3pt] 0.16 & 0.891 $\pm$ 0.002 $^{+ 0.012 }_{- 0.013 }$ & 0.883 $\pm$ 0.005 $^{+ 0.012 }_{- 0.014 }$ \\[3pt] 0.20 & 0.928 $\pm$ 0.002 $^{+ 0.011 }_{- 0.012 }$ & 0.919 $\pm$ 0.004 $^{+ 0.010 }_{- 0.011 }$ \\[3pt] 0.24 & 0.954 $\pm$ 0.001 $^{+ 0.009 }_{- 0.009 }$ & 0.947 $\pm$ 0.003 $^{+ 0.008 }_{- 0.009 }$ \\[3pt] 0.28 & 0.972 $\pm$ 0.001 $^{+ 0.006 }_{- 0.007 }$ & 0.965 $\pm$ 0.002 $^{+ 0.007 }_{- 0.008 }$ \\[3pt] 0.32 & 0.984 $\pm$ 0.001 $^{+ 0.004 }_{- 0.004 }$ & 0.981 $\pm$ 0.001 $^{+ 0.004 }_{- 0.005 }$ \\[3pt] 0.36 & 0.993 $\pm$ 0.000 $^{+ 0.002 }_{- 0.002 }$ & 0.992 $\pm$ 0.001 $^{+ 0.002 }_{- 0.002 }$ \\[3pt] 0.40 & 1.000 & 1.000 \\[3pt] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig20.pdf} \caption{Differential jet shapes $\langle\rho(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $100 \GeV < p_{\mathrm{T}} < 150 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:diff5} \end{figure} \begin{table}[!h] \vspace{0.8cm} \caption{Unfolded values for $\langle\rho(r)\rangle$, together with statistical and systematic uncertainties for $100 \GeV < p_{\mathrm{T}} < 150 \GeV$.} \normalsize \label{tabDiff5} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\rho_b (r)\rangle$ [$b$-jets] & $\langle\rho_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.02 & 11.48 $\pm$ 0.20 $^{+ 0.71 }_{- 0.74 }$ & 13.89 $\pm$ 0.54 $^{+ 1.60 }_{- 1.70 }$ \\[3pt] 0.06 & 7.08 $\pm$ 0.11 $^{+ 0.24 }_{- 0.25 }$ & 4.68 $\pm$ 0.20 $^{+ 0.50 }_{- 0.37 }$ \\[3pt] 0.10 & 2.94 $\pm$ 0.05 $^{+ 0.23 }_{- 0.23 }$ & 2.31 $\pm$ 0.11 $^{+ 0.28 }_{- 0.29 }$ \\[3pt] 0.14 & 1.37 $\pm$ 0.03 $^{+ 0.06 }_{- 0.06 }$ & 1.27 $\pm$ 0.07 $^{+ 0.09 }_{- 0.10 }$ \\[3pt] 0.18 & 0.85 $\pm$ 0.02 $^{+ 0.05 }_{- 0.05 }$ & 0.74 $\pm$ 0.05 $^{+ 0.08 }_{- 0.07 }$ \\[3pt] 0.22 & 0.58 $\pm$ 0.02 $^{+ 0.04 }_{- 0.03 }$ & 0.58 $\pm$ 0.05 $^{+ 0.12 }_{- 0.10 }$ \\[3pt] 0.26 & 0.39 $\pm$ 0.01 $^{+ 0.03 }_{- 0.02 }$ & 0.39 $\pm$ 0.03 $^{+ 0.08 }_{- 0.06 }$ \\[3pt] 0.30 & 0.29 $\pm$ 0.01 $^{+ 0.02 }_{- 0.02 }$ & 0.31 $\pm$ 0.02 $^{+ 0.04 }_{- 0.03 }$ \\[3pt] 0.34 & 0.21 $\pm$ 0.01 $^{+ 0.02 }_{- 0.02 }$ & 0.24 $\pm$ 0.01 $^{+ 0.03 }_{- 0.04 }$ \\[3pt] 0.38 & 0.14 $\pm$ 0.00 $^{+ 0.02 }_{- 0.02 }$ & 0.15 $\pm$ 0.01 $^{+ 0.02 }_{- 0.02 }$ \\[3pt] \hline \end{tabular} \end{center} \end{table} \newpage \begin{figure}[!h] \vspace{0.8cm} \includegraphics[width=8.5cm,height=10.5cm]{fig21.pdf} \caption{Integrated jet shapes $\langle\Psi(r)\rangle$ as a function of the radius $r$ for light jets (triangles) and $b$-jets (squares). The data are compared to \textsc{MC@NLO+Herwig} and \textsc{Powheg+Pythia} event generators for $100 \GeV < p_{\mathrm{T}} < 150 \GeV$. 
The uncertainties shown include statistical and systematic sources, added in quadrature.} \label{fig:int5} \vspace{-0.15cm} \end{figure} \begin{table}[!h] \vspace{0.95cm} \caption{Unfolded values for $\langle\Psi(r)\rangle$, together with statistical and systematic uncertainties for $100 \GeV < p_{\mathrm{T}} < 150 \GeV$.} \normalsize \label{tabInt5} \begin{center} \begin{tabular}{ccc} \hline $r$ & $\langle\Psi_b (r)\rangle$ [$b$-jets] & $\langle\Psi_{\mathrm{l}} (r)\rangle$ [light jets]\\ \hline 0.04 & 0.459 $\pm$ 0.008 $^{+ 0.028 }_{- 0.030 }$ & 0.556 $\pm$ 0.022 $^{+ 0.062 }_{- 0.067 }$ \\[3pt] 0.08 & 0.734 $\pm$ 0.005 $^{+ 0.019 }_{- 0.020 }$ & 0.743 $\pm$ 0.014 $^{+ 0.033 }_{- 0.036 }$ \\[3pt] 0.12 & 0.852 $\pm$ 0.004 $^{+ 0.013 }_{- 0.012 }$ & 0.843 $\pm$ 0.010 $^{+ 0.021 }_{- 0.017 }$ \\[3pt] 0.16 & 0.904 $\pm$ 0.002 $^{+ 0.010 }_{- 0.010 }$ & 0.898 $\pm$ 0.007 $^{+ 0.017 }_{- 0.014 }$ \\[3pt] 0.20 & 0.937 $\pm$ 0.002 $^{+ 0.008 }_{- 0.008 }$ & 0.928 $\pm$ 0.005 $^{+ 0.014 }_{- 0.011 }$ \\[3pt] 0.24 & 0.960 $\pm$ 0.001 $^{+ 0.006 }_{- 0.006 }$ & 0.954 $\pm$ 0.003 $^{+ 0.008 }_{- 0.007 }$ \\[3pt] 0.28 & 0.975 $\pm$ 0.001 $^{+ 0.005 }_{- 0.005 }$ & 0.970 $\pm$ 0.002 $^{+ 0.006 }_{- 0.006 }$ \\[3pt] 0.32 & 0.986 $\pm$ 0.001 $^{+ 0.003 }_{- 0.003 }$ & 0.983 $\pm$ 0.001 $^{+ 0.003 }_{- 0.003 }$ \\[3pt] 0.36 & 0.994 $\pm$ 0.000 $^{+ 0.001 }_{- 0.001 }$ & 0.994 $\pm$ 0.001 $^{+ 0.001 }_{- 0.001 }$ \\[3pt] 0.40 & 1.000 & 1.000 \\[3pt] \hline \end{tabular} \end{center} \end{table} \clearpage \begin{figure*} \centering \vspace{0.5cm} \includegraphics[width=16.5cm,height=8.0cm]{fig22.pdf} \caption{Comparison of the $t\bar{t}$ differential jet shape data for $50 \GeV < \pt < 70 \GeV$ with several MC event generators. 
As stated in the text, \textsc{AcerMC} \cite{acer} coupled to \textsc{Pythia} \cite{pythia} with the \textsc{A Pro} and \textsc{A CR Pro} tunes \cite{tuneA1,tuneA2} gives the best description of the data, while the \textsc{Perugia 2011} \cite{Perugia} tunes are found to be slightly disfavoured. \textsc{Alpgen + Jimmy} \cite{alpgen,Jimmy} provides an intermediate description.} \label{fig:mcComps} \end{figure*} \clearpage
\section{INTRODUCTION} One of the most recognizable features of galaxies along the Hubble sequence is the wide range in young stellar content and star formation activity. This variation in stellar content is part of the basis of the Hubble classification itself (Hubble 1926), and understanding its physical nature and origins is fundamental to understanding galaxy evolution in its broader context. This review deals with the global star formation properties of galaxies, the systematics of those properties along the Hubble sequence, and their implications for galactic evolution. I interpret ``Hubble sequence'' in this context very loosely, to encompass not only morphological type but other properties such as gas content, mass, bar structure, and dynamical environment, which can strongly influence the large-scale star formation rate (SFR). Systematic investigations of the young stellar content of galaxies trace back to the early studies of resolved stellar populations by Hubble and Baade, and analyses of galaxy colors and spectra by Stebbins, Whitford, Holmberg, Humason, Mayall, Sandage, Morgan, and de Vaucouleurs (see Whitford 1975 for a summary of the early work in this field). This piecemeal information was synthesized by Roberts (1963), in an article for the first volume of the {\it Annual Review of Astronomy and Astrophysics}. Despite the limited information that was available on the SFRs and gas contents of galaxies, Roberts' analysis established the basic elements of the contemporary picture of the Hubble sequence as a monotonic sequence in present-day SFRs and past star formation histories. Quantifying this picture required the development of more precise diagnostics of global SFRs in galaxies. The first quantitative SFRs were derived from evolutionary synthesis models of galaxy colors (Tinsley 1968, 1972, Searle et al 1973). 
These studies confirmed the trends in SFRs and star formation histories along the Hubble sequence, and led to the first predictions of the evolution of the SFR with cosmic lookback time. Subsequent modelling of blue galaxies by Bagnuolo (1976), Huchra (1977), and Larson \& Tinsley (1978) revealed the importance of star formation bursts in the evolution of low-mass galaxies and interacting systems. Over the next decade the field matured fully, with the development of more precise direct SFR diagnostics, including integrated emission-line fluxes (Cohen 1976, Kennicutt 1983a), near-ultraviolet continuum fluxes (Donas \& Deharveng 1984), and infrared continuum fluxes (Harper \& Low 1973, Rieke \& Lebofsky 1978, Telesco \& Harper 1980). These provided absolute SFRs for large samples of nearby galaxies, and these were subsequently interpreted in terms of the evolutionary properties of galaxies by Kennicutt (1983a), Gallagher et al (1984), and Sandage (1986). Activity in this field has grown enormously in the past decade, stimulated in large part by two major revelations. The first was the discovery of a large population of ultraluminous infrared starburst galaxies by the Infrared Astronomical Satellite (IRAS) in the mid-1980's. Starbursts had been identified (and coined) from groundbased studies (Rieke \& Lebofsky 1979; Weedman et al 1981), but IRAS revealed the ubiquity of the phenomenon and the extreme nature of the most luminous objects. The latest surge of interest in the field has been stimulated by the detection of star forming galaxies at high redshift, now exceeding $z=3$ (Steidel et al 1996, Ellis 1997). This makes it possible to apply the locally calibrated SFR diagnostics to distant galaxies, and directly trace the evolution of the SFR density and the Hubble sequence with cosmological lookback time. 
The focus of this review is on the broad patterns in the star formation properties of galaxies, and their implications for the evolutionary properties of the Hubble sequence. It begins with a summary of the diagnostic methods used to measure SFRs in galaxies, followed by a summary of the systematics of SFRs along the Hubble sequence, and the interpretation of those trends in terms of galaxy evolution. It concludes with a brief discussion of the physical regulation of the SFR in galaxies and future prospects in this field. Galaxies exhibit a huge dynamic range in SFRs, over six orders of magnitude even when normalized per unit area and galaxy mass, and the continuity of physical properties over this entire spectrum of activities is a central theme of this review. With this broad approach in mind, I cannot begin to review the hundreds of important papers on the star formation properties of individual galaxies, or the rich theoretical literature on this subject. Fortunately, there are several previous reviews in this series that provide thorough discussions of key aspects of this field. A broad review of the physical properties of galaxies along the Hubble sequence can be found in Roberts \& Haynes (1994). The star formation and evolutionary properties of irregular galaxies are reviewed by Gallagher \& Hunter (1984). The properties of IR-luminous starbursts are the subject of several reviews, most recently those by Soifer et al (1987), Telesco (1988), and Sanders \& Mirabel (1996). Finally an excellent review of faint blue galaxies by Ellis (1997) describes many applications to high-redshift objects. \section{DIAGNOSTIC METHODS} Individual young stars are unresolved in all but the closest galaxies, even with the {\it Hubble Space Telescope} (HST), so most information on the star formation properties of galaxies comes from integrated light measurements in the ultraviolet (UV), far-infrared (FIR), or nebular recombination lines. 
These direct tracers of the young stellar population have largely supplanted earlier SFR measures based on synthesis modelling of broadband colors, though the latter are still applied to multicolor observations of faint galaxies. This section begins with a brief discussion of synthesis models, which form the basis of all of the methods, followed by more detailed discussions of the direct SFR tracers. \subsection{{\it Integrated Colors and Spectra, Synthesis Modelling}} The basic trends in galaxy spectra with Hubble type are illustrated in Figure 1, which shows examples of integrated spectra for E, Sa, Sc, and Magellanic irregular galaxies (Kennicutt 1992b). When progressing along this sequence, several changes in the spectrum are apparent: a broad rise in the blue continuum, a gradual change in the composite stellar absorption spectrum from K-giant dominated to A-star dominated, and a dramatic increase in the strengths of the nebular emission lines, especially H$\alpha$. Although the integrated spectra contain contributions from the full range of stellar spectral types and luminosities, it is easy to show that the dominant contributors at visible wavelengths are intermediate-type main sequence stars (A to early F) and G-K giants. As a result, the integrated colors and spectra of normal galaxies fall on a relatively tight sequence, with the spectrum of any given object dictated by the ratio of early to late-type stars, or alternatively by the ratio of young ($< 1$~Gyr) to old (3--15~Gyr) stars. This makes it possible to use the observed colors to estimate the fraction of young stars and the mean SFR over the past $10^8$--$10^9$ years. \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf1.eps,width=16cm}} \end{center} \caption{\em Integrated spectra of elliptical, spiral, and irregular galaxies, from Kennicutt (1992b). 
The fluxes have been normalized to unity at 5500~\AA.} \end{figure} The simplest application of this method would assume a linear scaling between the SFR and the continuum luminosity integrated over a fixed bandpass in the blue or near-ultraviolet. Although this may be a valid approximation in starburst galaxies, where young stars dominate the integrated light across the visible spectrum, the approximation breaks down in most normal galaxies, where a considerable fraction of the continuum is produced by old stars, even in the blue (Figure 1). However the scaling of the SFR to continuum luminosity is a smooth function of the color of the population, and this can be calibrated using an evolutionary synthesis model. Synthesis models are used in all of the methods described here, so it is useful to summarize the main steps in the construction of a model. A grid of stellar evolution tracks is used to derive the effective temperatures and bolometric luminosities for various stellar masses as a function of time, and these are converted into broadband luminosities (or spectra) using stellar atmosphere models or spectral libraries. The individual stellar templates are then summed together, weighted by an initial mass function (IMF), to synthesize the luminosities, colors, or spectra of single-age populations as functions of age. These isochrones can then be added in linear combination to synthesize the spectrum or colors of a galaxy with an arbitrary star formation history, usually parametrized as an exponential function of time. Although a single model contains at least four free parameters, the star formation history, galaxy age, metal abundance, and IMF, the colors of normal galaxies are well represented by a one-parameter sequence with fixed age, composition and IMF, varying only in the time dependence of the SFR (Searle et al 1973, Larson \& Tinsley 1978; Charlot \& Bruzual 1991). 
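The model-construction steps just described (IMF-weighted summation of single-age populations, combined linearly over a parametrized star formation history) can be made concrete with a short numerical sketch. Everything quantitative in it, the mass-luminosity and lifetime scalings, the mass grid, and the exponential history, is an illustrative stand-in for real stellar tracks and spectral libraries, not a calibrated model:

```python
import math

# Salpeter (1955) IMF slope, dN/dm ~ m^-2.35; normalization is arbitrary here
def salpeter_imf(m):
    return m ** -2.35

# Hypothetical single-age population: main-sequence L ~ m^3.5, with a crude
# lifetime t ~ 10 m^-2.5 Gyr removing stars that have died (toy numbers only)
def single_age_luminosity(m, age_gyr):
    lifetime_gyr = 10.0 * m ** -2.5
    return m ** 3.5 if age_gyr < lifetime_gyr else 0.0

def composite_luminosity(ages_gyr, sfr_of_age):
    """Add single-age populations in linear combination, weighted by the IMF
    and by the SFR at each age (the isochrone summation described above)."""
    dm = 0.05
    masses = [0.1 + i * dm for i in range(2000)]  # roughly 0.1-100 Msun grid
    total = 0.0
    for age in ages_gyr:
        pop = sum(salpeter_imf(m) * single_age_luminosity(m, age)
                  for m in masses) * dm
        total += sfr_of_age(age) * pop
    return total

# Exponential star formation history (e-folding time 5 Gyr) sampled over a
# 10-Gyr-old disk, as in the parametrization mentioned in the text
ages = [0.1 * i for i in range(101)]
L_total = composite_luminosity(ages, lambda age: math.exp(-age / 5.0))
```

A real synthesis code replaces the two toy functions with tabulated isochrones and spectral libraries; the IMF weighting and summation logic is the same.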
Synthesis models have been published by several authors, and are often available in digital form. An extensive library of models has been compiled by Leitherer et al (1996a), and the models are described in a companion conference volume (Leitherer et al 1996b). Widely used models for star forming galaxies include those of Bruzual \& Charlot (1993), Bertelli et al (1994), and Fioc \& Rocca-Volmerange (1997). Leitherer \& Heckman (1995) have published an extensive grid of models that is optimized for applications to starburst galaxies. \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf2.eps,width=16cm}} \end{center} \caption{\em Relationship between SFR per unit broadband luminosity in the $UBV$ passbands and integrated color, from the evolutionary synthesis models of Kennicutt et al (1994). The models are for 10-billion-year-old disks, a Salpeter IMF, and exponential star formation histories. The $U$, $B$, and $V$ luminosities are normalized to those of the Sun in the respective bandpasses.} \end{figure} The synthesis models provide relations between the SFR per unit mass or luminosity and the integrated color of the population. An example is given in Figure 2, which plots the SFR per unit $U$, $B$, and $V$ luminosity as functions of $U-V$ color, based on the models of Kennicutt et al (1994). Figure 2 confirms that the broadband luminosity by itself is a poor tracer of the SFR; even the SFR/$L_U$ ratio varies by more than an order of magnitude over the relevant range of galaxy colors. However the integrated color provides a reasonable estimate of the SFR per unit luminosity, especially for the bluer galaxies. SFRs derived in this way are relatively imprecise, and are prone to systematic errors from reddening or from an incorrect IMF, age, metallicity, or star formation history (Larson \& Tinsley 1978).
Nevertheless, the method offers a useful means of comparing the average SFR properties of large samples of galaxies, when absolute accuracy is not required. The method should be avoided in applications where the dust content, abundances, or IMFs are likely to change systematically across a population. \subsection{{\it Ultraviolet Continuum}} The limitations described above can be avoided if observations are made at wavelengths where the integrated spectrum is dominated by young stars, so that the SFR scales linearly with luminosity. The optimal wavelength range is 1250--2500~\AA, longward of the Ly$\alpha$ forest but short enough to minimize spectral contamination from older stellar populations. These wavelengths are inaccessible from the ground for local galaxies ($z<0.5$), but the region can be observed in the redshifted spectra of galaxies at $z\sim$1--5. The recent detection of the redshifted UV continua of large numbers of $z>3$ galaxies with the Keck telescope has demonstrated the enormous potential of this technique (Steidel et al 1996). The most complete UV studies of nearby galaxies are based on dedicated balloon, rocket, and space experiments (Smith \& Cornett 1982, Donas \& Deharveng 1984, Donas et al 1987, 1995, Buat 1992, Deharveng et al 1994). The database of high-resolution UV imaging of galaxies is improving rapidly, mainly from HST (Meurer et al 1995, Maoz 1996) and the {\it Ultraviolet Imaging Telescope} (Smith et al 1996, Fanelli et al 1997). An atlas of UV spectra of galaxies from the {\it International Ultraviolet Explorer} has been published by Kinney et al (1993). A recent conference volume by Waller et al (1997) highlights recent UV observations of galaxies. The conversion between the UV flux over a given wavelength interval and the SFR can be derived using the synthesis models described earlier. 
Calibrations have been published by Buat et al (1989), Deharveng et al (1994), Leitherer et al (1995b), Meurer et al (1995), Cowie et al (1997), and Madau et al (1998), for wavelengths in the range 1500--2800~\AA. The calibrations differ over a full range of $\sim$0.3 dex, when converted to a common reference wavelength and IMF, with most of the difference reflecting the use of different stellar libraries or different assumptions about the star formation timescale. For integrated measurements of galaxies, it is usually appropriate to assume that the SFR has remained constant over timescales that are long compared to the lifetimes of the dominant UV emitting population ($<$10$^8$ yr), in the ``continuous star formation'' approximation. Converting the calibration of Madau et al (1998) to a Salpeter (1955) IMF with mass limits 0.1 and 100 M$_\odot$ yields: \begin{equation} {\rm SFR}~(M_\odot~yr^{-1}) = 1.4 \times 10^{-28}~L_\nu~({\rm ergs~s^{-1}~Hz^{-1}}). \end{equation} For this IMF, the composite UV spectrum happens to be nearly flat in $L_\nu$, over the wavelength range 1500--2800~\AA, and this allows us to express the conversion in Equation 1 in such simple form. The corresponding conversion in terms of $L_\lambda$ will scale as $\lambda^{-2}$. Equation 1 applies to galaxies with continuous star formation over timescales of $10^8$ years or longer; the SFR/$L_\nu$ ratio will be significantly lower in younger populations such as young starburst galaxies. For example, continuous burst models for a 9 Myr old population yield SFRs that are 57\%\ higher than those given in Equation 1 (Leitherer et al 1995b). It is important when using this method to apply an SFR calibration that is appropriate to the population of interest. The main advantages of this technique are that it is directly tied to the photospheric emission of the young stellar population, and it can be applied to star forming galaxies over a wide range of redshifts.
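In practice, Equation 1 reduces to a single multiplicative scaling. A minimal sketch (the function name is ours, not from the literature), which inherits the Salpeter-IMF and continuous-star-formation assumptions stated above:

```python
def sfr_from_uv(l_nu):
    """SFR in Msun/yr from the UV continuum luminosity density L_nu
    (ergs/s/Hz, 1500-2800 A), per Equation 1: Salpeter IMF, 0.1-100 Msun,
    continuous star formation over >1e8 yr."""
    return 1.4e-28 * l_nu

# A galaxy with L_nu = 1e28 ergs/s/Hz forms stars at roughly 1.4 Msun/yr
sfr = sfr_from_uv(1e28)
```

Any extinction correction must be applied to $L_\nu$ before this conversion, and the result scales with the assumed IMF as discussed above.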
As a result, it is currently the most powerful probe of the cosmological evolution in the SFR (Madau et al 1996, Ellis 1997). The chief drawbacks of the method are its sensitivity to extinction and the form of the IMF. Typical extinction corrections in the integrated UV magnitudes are 0--3 magnitudes (Buat 1992, Buat \& Xu 1996). The spatial distribution of the extinction is very patchy, with the emergent UV emission being dominated by regions of relatively low obscuration (Calzetti et al 1994), so calibrating the extinction correction is problematic. The best determinations are based on two-component radiative transfer models which take into account the clumpy distribution of dust, and make use of reddening information from the Balmer decrement or IR recombination lines (e.g., Buat 1992, Calzetti et al 1994, Buat \& Xu 1996, Calzetti 1997). The other main limitation, which is shared by all of the direct methods, is the dependence of the derived SFRs on the assumed form of the IMF. The integrated spectrum in the 1500--2500~\AA\ range is dominated by stars with masses above $\sim$5~$M_\odot$, so the SFR determination involves a large extrapolation to lower stellar masses. Fortunately there is little evidence for large systematic variations in the IMF among star forming galaxies (Scalo 1986, Gilmore et al 1998), with the possible exception of IR-luminous starbursts, where the UV emission is of little use anyway. \subsection{{\it Recombination Lines}} Figure 1 shows that the most dramatic change in the integrated spectrum with galaxy type is a rapid increase in the strengths of the nebular emission lines. The nebular lines effectively re-emit the integrated stellar luminosity of galaxies shortward of the Lyman limit, so they provide a direct, sensitive probe of the young massive stellar population. 
Most applications of this method have been based on measurements of the H$\alpha$\ line, but other recombination lines including H$\beta$, P$\alpha$, P$\beta$, Br$\alpha$, and Br$\gamma$ have been used as well. The conversion factor between ionizing flux and the SFR is usually computed using an evolutionary synthesis model. Only stars with masses $>$10~$M_\odot$\ and lifetimes $<$20 Myr contribute significantly to the integrated ionizing flux, so the emission lines provide a nearly instantaneous measure of the SFR, independent of the previous star formation history. Calibrations have been published by numerous authors, including Kennicutt (1983a), Gallagher et al (1984), Kennicutt et al (1994), Leitherer \& Heckman (1995), and Madau et al (1998). For solar abundances and the same Salpeter IMF (0.1--100~$M_\odot$) as was used in deriving Equation 1, the calibrations of Kennicutt et al (1994) and Madau et al (1998) yield: \begin{equation} {\rm SFR}~(M_\odot~yr^{-1}) = 7.9 \times 10^{-42}~L(H\alpha)~({\rm ergs~s^{-1}}) = 1.08 \times 10^{-53}~Q(H^0)~({\rm s^{-1}}), \end{equation} where $Q(H^0)$ is the ionizing photon luminosity, and the H$\alpha$ calibration is computed for Case B recombination at $T_e$ = 10000~K. The corresponding conversion factor for $L$(Br$\gamma$) is $8.2 \times 10^{-40}$ in the same units, and it is straightforward to derive conversions for other recombination lines. Equation 2 yields SFRs that are 7\%\ lower than the widely used calibration of Kennicutt (1983a), with the difference reflecting a combination of updated stellar models and a slightly different IMF (Kennicutt et al 1994). As with other methods, there is a significant variation among published calibrations ($\sim$30\%), with most of the dispersion reflecting differences in the stellar evolution and atmosphere models.
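The three equivalent forms of the calibration above (H$\alpha$ luminosity, ionizing photon rate, and the Br$\gamma$ factor quoted in the text) can be collected in one place; the function names are ours, and extinction corrections are assumed to have been applied first:

```python
def sfr_from_halpha(l_halpha):
    """SFR (Msun/yr) from extinction-corrected H-alpha luminosity (ergs/s),
    per Equation 2 (Case B recombination, Te = 10^4 K, Salpeter IMF)."""
    return 7.9e-42 * l_halpha

def sfr_from_ionizing_rate(q_h0):
    """Equivalent form in terms of the ionizing photon rate Q(H0) (s^-1)."""
    return 1.08e-53 * q_h0

def sfr_from_brgamma(l_brgamma):
    """Same calibration via Br-gamma, using the 8.2e-40 factor quoted above."""
    return 8.2e-40 * l_brgamma
```

As noted in the text, adopting a different IMF (e.g. Scalo 1986) changes these coefficients by factors of a few, so published SFRs must be compared on a common IMF.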
Large H$\alpha$\ surveys of normal galaxies have been published by Cohen (1976), Kennicutt \& Kent (1983), Romanishin (1990), Gavazzi et al (1991), Ryder \& Dopita (1994), Gallego et al (1995), and Young et al (1996). Imaging surveys have been published by numerous other authors, with some of the largest including Hodge \& Kennicutt (1983), Hunter \& Gallagher (1985), Ryder \& Dopita (1993), Phillips (1993), Evans et al (1996), Gonz\'alez Delgado et al (1997), and Feinstein (1997). Gallego et al (1995) have observed a complete emission-line selected sample, in order to measure the volume-averaged SFR in the local universe, and this work has been applied extensively to studies of the evolution in the SFR density of the universe (Madau et al 1996). The primary advantages of this method are its high sensitivity, and the direct coupling between the nebular emission and the massive SFR. The star formation in nearby galaxies can be mapped at high resolution even with small telescopes, and the H$\alpha$\ line can be detected in the redshifted spectra of starburst galaxies to $z$$\gg$2 (e.g. Bechtold et al 1997). The chief limitations of the method are its sensitivity to uncertainties in extinction and the IMF, and to the assumption that all of the massive star formation is traced by the ionized gas. The escape fraction of ionizing radiation from individual HII regions has been measured both directly (Oey \& Kennicutt 1997) and from observations of the diffuse H$\alpha$\ emission in nearby galaxies (e.g., Hunter et al 1993, Walterbos \& Braun 1994, Kennicutt et al 1995, Ferguson et al 1996, Martin 1997), with fractions of 15--50\%\ derived in both sets of studies. Thus it is important when using this method to include the diffuse H$\alpha$\ emission in the SFR measurement (Ferguson et al 1996). However the escape fraction from a galaxy as a whole should be much lower.
Leitherer et al (1995a) directly measured the redshifted Lyman continuum region in four starburst galaxies, and they derived an upper limit of 3\%\ on the escape fraction of ionizing photons. Much higher global escape fractions of 50--94\%, and local escape fractions as high as 99\%\ have been estimated by Patel \& Wilson (1995a, b), based on a comparison of O-star densities and H$\alpha$\ luminosities in M33 and NGC~6822, but those results are subject to large uncertainties, because the O-star properties and SFRs were derived from $UBV$ photometry, without spectroscopic identifications. If the direct limit of $<$3\%\ from Leitherer et al (1995a) is representative, then density bounding effects are a negligible source of error in this method. However it is very important to test this conclusion by extending these types of measurements to a more diverse sample of galaxies. Extinction is probably the most important source of systematic error in H$\alpha$-derived SFRs. The extinction can be measured by comparing H$\alpha$\ fluxes with those of IR recombination lines or the thermal radio continuum. Kennicutt (1983a) and Niklas et al (1997) have used integrated H$\alpha$\ and radio fluxes of galaxies to derive a mean extinction $A$(H$\alpha$) = 0.8--1.1 mag. Studies of large samples of individual HII regions in nearby galaxies yield similar results, with mean $A$(H$\alpha$) = 0.5--1.8 mag (e.g. Caplan \& Deharveng 1986, Kaufman et al 1987, van der Hulst et al 1988, Caplan et al 1996). Much higher extinction is encountered in localized regions, especially in the dense HII regions in circumnuclear starbursts, and there the near-IR Paschen or Brackett recombination lines are required to reliably measure the SFR. Compilations of these data include Puxley et al (1990), Ho et al (1990), Calzetti et al (1996), Goldader et al (1995, 1997), Engelbracht (1997), and references therein.
The Paschen and Brackett lines are typically 1--2 orders of magnitude weaker than H$\alpha$, so most measurements to date have been restricted to high surface brightness nuclear HII regions, but it is gradually becoming feasible to extend this approach to galaxies as a whole. The same method can be applied to higher-order recombination lines or the thermal continuum emission at submillimeter and radio wavelengths. Examples of such applications include H53$\alpha$ measurements of M82 by Puxley et al (1989), and radio continuum measurements of disk galaxies and starbursts by Israel \& van der Hulst (1983), Klein \& Grave (1986), Turner \& Ho (1994), and Niklas et al (1995). The ionizing flux is produced almost exclusively by stars with M $>$ 10~M$_\odot$, so SFRs derived from this method are especially sensitive to the form of the IMF. Adopting the Scalo (1986) IMF, for example, yields SFRs that are $\sim$3 times higher than derived with a Salpeter IMF. Fortunately, the H$\alpha$\ equivalent widths and broadband colors of galaxies are very sensitive to the slope of the IMF over the mass range 1--30~M$_\odot$, and these can be used to constrain the IMF slope (Kennicutt 1983a, Kennicutt et al 1994). The properties of normal disks are well fitted by a Salpeter IMF (or by a Scalo function with Salpeter slope above 1~M$_\odot$), consistent with observations of resolved stellar populations in nearby galaxies (e.g. Massey 1998). As with the UV continuum method, it is important when applying published SFRs to take proper account of the IMF that was assumed. \subsection{{\it Forbidden Lines}} The H$\alpha$\ emission line is redshifted out of the visible window beyond $z$$\sim$0.5, so there is considerable interest in calibrating bluer emission lines as quantitative SFR tracers. 
Unfortunately the integrated strengths of H$\beta$ and the higher order Balmer emission lines are poor SFR diagnostics, because the lines are weak and stellar absorption more strongly influences the emission-line fluxes. These lines in fact are rarely seen in emission at all in the integrated spectra of galaxies earlier than Sc (Kennicutt 1992a, also see Figure 1). The strongest emission feature in the blue is the [OII]$\lambda$3727 forbidden-line doublet. The luminosities of forbidden lines are not directly coupled to the ionizing luminosity, and their excitation is sensitive to abundance and the ionization state of the gas. However the excitation of [OII] is sufficiently well behaved that it can be calibrated empirically (through H$\alpha$) as a quantitative SFR tracer. Even this indirect calibration is extremely useful for lookback studies of distant galaxies, because [OII] can be observed in the visible out to redshifts $z \sim 1.6$, and it has been measured in several large samples of faint galaxies (Cowie et al 1996, 1997, Ellis 1997, and references therein). Calibrations of SFRs in terms of [OII] luminosity have been published by Gallagher et al (1989), based on large-aperture spectrophotometry of 75 blue irregular galaxies, and by Kennicutt (1992a), using integrated spectrophotometry of 90 normal and peculiar galaxies. When converted to the same IMF and H$\alpha$\ calibration the resulting SFR scales differ by a factor of 1.57, reflecting excitation differences in the two samples. Adopting the average of these calibrations yields: \begin{equation} {\rm SFR}~(M_\odot~yr^{-1}) = (1.4 \pm 0.4) \times 10^{-41}~L[OII]~({\rm ergs~s^{-1}}), \end{equation} where the uncertainty indicates the range between blue emission-line galaxies (lower limit) and samples of more luminous spiral and irregular galaxies (upper limit).
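Because Equation 3 carries a quoted sample-to-sample range rather than a single coefficient, it is natural to expose that range explicitly. A minimal sketch (our naming; the numerical values are those quoted above):

```python
def sfr_from_oii(l_oii, coeff=1.4e-41):
    """SFR (Msun/yr) from [OII]3727 luminosity (ergs/s), per Equation 3.
    The default coefficient is the average of the two published calibrations;
    per the quoted uncertainty it ranges from ~1.0e-41 (blue emission-line
    galaxies) to ~1.8e-41 (luminous spirals and irregulars)."""
    return coeff * l_oii
```

As emphasized in the text, the appropriate extinction correction here is the one at H$\alpha$, because of how the [OII] fluxes were calibrated.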
As with Equations 1 and 2, the observed luminosities must be corrected for extinction, in this case the extinction at H$\alpha$, because of the manner in which the [OII] fluxes were calibrated. The SFRs derived from [OII] are less precise than from H$\alpha$, because the mean [OII]/H$\alpha$\ ratios in individual galaxies vary considerably, over 0.5--1.0 dex in the Gallagher et al (1989) and Kennicutt (1992a) samples, respectively. The [OII]-derived SFRs may also be prone to systematic errors from extinction and variations in the diffuse gas fraction. The excitation of [OII] is especially high in the diffuse ionized gas in starburst galaxies (Hunter \& Gallagher 1990, Hunter 1994, Martin 1997), enough to more than double the L[OII]/SFR ratio in the integrated spectrum (Kennicutt 1992a). On the other hand, metal abundance has a relatively small effect on the [OII] calibration, over most of the abundance range of interest ($0.05~Z_\odot \le Z \le 1~Z_\odot$). Overall the [OII] lines provide a very useful estimate of the systematics of SFRs in samples of distant galaxies, and are especially useful as a consistency check on SFRs derived in other ways. \subsection{{\it Far-Infrared Continuum}} A significant fraction of the bolometric luminosity of a galaxy is absorbed by interstellar dust and re-emitted in the thermal IR, at wavelengths of roughly 10--300~$\mu$m. The absorption cross section of the dust is strongly peaked in the ultraviolet, so in principle the FIR emission can be a sensitive tracer of the young stellar population and SFR. The IRAS survey provides FIR fluxes for over 30,000 galaxies (Moshir et al 1992), offering a rich reward to those who can calibrate an accurate SFR scale from the 10--100~$\mu$m FIR emission. The efficacy of the FIR luminosity as a SFR tracer depends on the contribution of young stars to heating of the dust, and on the optical depth of the dust in the star forming regions. 
The simplest physical situation is one in which young stars dominate the radiation field throughout the UV--visible, and the dust opacity is high everywhere, in which case the FIR luminosity measures the bolometric luminosity of the starburst. In such a limiting case the FIR luminosity is the ultimate SFR tracer, providing what is essentially a calorimetric measure of the SFR. Such conditions roughly hold in the dense circumnuclear starbursts that power many IR-luminous galaxies. The physical situation is more complex in the disks of normal galaxies, however (e.g. Lonsdale \& Helou 1987, Rowan-Robinson \& Crawford 1989, Cox \& Mezger 1989). The FIR spectra of galaxies contain both a ``warm'' component associated with dust around young star forming regions ($\bar{\lambda}\sim$ 60$\mu$m), and a cooler ``infrared cirrus'' component ($\bar{\lambda}\ge$ 100$\mu$m) which is associated with more extended dust heated by the interstellar radiation field. In blue galaxies, both spectral components may be dominated by young stars, but in red galaxies, where the composite stellar continuum drops off steeply in the blue, dust heating from the visible spectra of older stars may be very important. The relation of the global FIR emission of galaxies to the SFR has been a controversial subject. In late-type star forming galaxies, where dust heating from young stars is expected to dominate the 40--120$\mu$m emission, the FIR luminosity correlates with other SFR tracers such as the UV continuum and H$\alpha$\ luminosities (e.g. Lonsdale \& Helou 1987, Sauvage \& Thuan 1992, Buat \& Xu 1996). However, early-type (S0--Sab) galaxies often exhibit high FIR luminosities but much cooler, cirrus-dominated emission.
This emission has usually been attributed to dust heating from the general stellar radiation field, including the visible radiation from older stars (Lonsdale \& Helou 1987, Buat \& Deharveng 1988, Rowan-Robinson \& Crawford 1989, Sauvage \& Thuan 1992, 1994, Walterbos \& Greenawalt 1996). This interpretation is supported by anomalously low UV and H$\alpha$\ emission (relative to the FIR luminosity) in these galaxies. However Devereux \& Young (1990) and Devereux \& Hameed (1997) have argued that young stars dominate the 40--120$\mu$m emission in all of these galaxies, so that the FIR emission directly traces the SFR. They have provided convincing evidence that young stars are an important source of FIR luminosity in at least some early-type galaxies, including barred galaxies with strong nuclear starbursts and some unusually blue objects (Section 4). On the other hand, many early-type galaxies show no independent evidence of high SFRs, suggesting that the older stars or active galactic nuclei (AGNs) are responsible for much of the FIR emission. The {\it Space Infrared Telescope Facility}, scheduled for launch early in the next decade, should provide high-resolution FIR images of nearby galaxies and clarify the relationship between the SFR and IR emission in these galaxies. The ambiguities discussed above affect the calibration of SFRs in terms of FIR luminosity, and there probably is no single calibration that applies to all galaxy types. However the FIR emission should provide an excellent measure of the SFR in dusty circumnuclear starbursts. The SFR {\it vs} $L_{FIR}$ conversion is derived using synthesis models as described earlier. In the optically thick limit, it is only necessary to model the bolometric luminosity of the stellar population. 
The greatest uncertainty in this case is the adoption of an appropriate age for the stellar population; this may be dictated by the timescale of the starburst itself or by the timescale for the dispersal of the dust (so the $\tau$$\gg$1 approximation no longer holds). Calibrations have been published by several authors under different assumptions about the star formation timescale (e.g. Hunter et al 1986, Lehnert \& Heckman 1996, Meurer et al 1997, Kennicutt 1998). Applying the models of Leitherer \& Heckman (1995) for continuous bursts of age 10--100 Myr, and adopting the IMF in this paper yields the relation (Kennicutt 1998): \begin{equation} {\rm SFR}~(M_\odot~yr^{-1}) = 4.5 \times 10^{-44}~L_{FIR}~({\rm ergs~s^{-1}}) ~~~(starbursts), \end{equation} where $L_{FIR}$ refers to the infrared luminosity integrated over the full mid and far-IR spectrum (8--1000 $\mu$m), though for starbursts most of this emission will fall in the 10--120$\mu$m region (readers should beware that the definition of $L_{FIR}$ varies in the literature). Most of the other published calibrations lie within $\pm$30\%\ of Equation 4. Strictly speaking, the relation given above applies only to starbursts with ages less than $10^8$ years, where the approximations applied are valid. In more quiescent normal star forming galaxies, the relation will be more complicated; the contribution of dust heating from old stars will tend to lower the effective coefficient in Equation 4, whereas the lower optical depth of the dust will tend to increase the coefficient. In such cases, it is probably better to rely on an empirical calibration of SFR/$L_{FIR}$, based on other methods. For example, Buat \& Xu (1996) derive a coefficient of $8{^{+8}_{-3}} \times 10^{-44}$, valid for galaxies of type Sb and later only, based on IRAS and UV flux measurements of 152 disk galaxies.
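The two regimes discussed above, the starburst calibration of Equation 4 and the empirical disk coefficient of Buat \& Xu (1996), can be set side by side in a short sketch (function names are ours; choosing between the two requires knowing which regime applies, as the text stresses):

```python
def sfr_from_fir_starburst(l_fir):
    """Equation 4: SFR (Msun/yr) from L_FIR (ergs/s, integrated over
    8-1000 um), valid for optically thick starbursts younger than ~1e8 yr."""
    return 4.5e-44 * l_fir

def sfr_from_fir_disk(l_fir):
    """Empirical alternative for normal disks of type Sb and later, using
    the Buat & Xu (1996) central coefficient of 8e-44 quoted above."""
    return 8e-44 * l_fir
```

The factor of roughly two between the coefficients reflects the competing effects noted in the text: cirrus heating by old stars lowers the effective coefficient, while lower dust optical depth raises it.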
The FIR luminosities share the same IMF sensitivity as the other direct star formation tracers, and it is important to be consistent when comparing results from different sources. \section{DISK STAR FORMATION} The techniques described above have been used to measure SFRs in hundreds of nearby galaxies, and these have enabled us to delineate the main trends in SFRs and star formation histories along the Hubble sequence. Although it is customary to analyze the integrated SFRs of galaxies, taken as a whole, large-scale star formation takes place in two very distinct physical environments: one in the extended disks of spiral and irregular galaxies, the other in compact, dense gas disks in the centers of galaxies. Both regimes are significant contributors to the total star formation in the local universe, but they are traced at different wavelengths and follow completely different patterns along the Hubble sequence. Consequently I will discuss the disk and circumnuclear star formation properties of galaxies separately. \subsection{{\it Global SFRs Along the Hubble Sequence}} Comprehensive analyses of the global SFRs of galaxies have been carried out using H$\alpha$\ surveys (Kennicutt 1983a, Gallagher et al 1984, Caldwell et al 1991, 1994, Kennicutt et al 1994, Young et al 1996), UV continuum surveys (Donas et al 1987, Deharveng et al 1994), FIR data (Sauvage \& Thuan 1992, Walterbos \& Greenawalt 1996, Tomita et al 1996, Devereux \& Hameed 1997), and multi-wavelength surveys (Gavazzi \& Scodeggio 1996, Gavazzi et al 1996). The absolute SFRs in galaxies, expressed in terms of the total mass of stars formed per year, show an enormous range, from virtually zero in gas-poor elliptical, S0, and dwarf galaxies to $\sim$20~M$_\odot$~yr$^{-1}$ in gas-rich spirals. 
Much larger global SFRs, up to $\sim$100~M$_\odot$~yr$^{-1}$, can be found in optically-selected starburst galaxies, and SFRs as high as 1000~$M_\odot$~yr$^{-1}$\ may be reached in the most luminous IR starburst galaxies (Section 4). The highest SFRs are associated almost uniquely with strong tidal interactions and mergers. \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf3.eps,width=16cm}} \end{center} \caption{\em Distribution of integrated H$\alpha$+[NII] emission-line equivalent widths for a large sample of nearby spiral galaxies, subdivided by Hubble type and bar morphology. The right axis scale shows corresponding values of the stellar birthrate parameter $b$, which is the ratio of the present SFR to that averaged over the past, as described in Section 5.1.} \end{figure} Part of the large dynamic range in absolute SFRs simply reflects the enormous range in galaxy masses, so it is more illuminating to examine the range in relative SFRs, normalized per unit mass or luminosity. This is illustrated in Figure 3, which shows the distribution of H$\alpha$$+$[NII] equivalent widths (EWs) in a sample of 227 nearby bright galaxies ($B_T<13$), subdivided by Hubble type. The data were taken from the photometric surveys of Kennicutt \& Kent (1983) and Romanishin (1990). The measurements include the H$\alpha$\ and the neighboring [NII] lines; corrections for [NII] contamination are applied when determining the SFRs. The EW is defined as the emission-line luminosity normalized to the adjacent continuum flux, and hence it is a measure of the SFR per unit (red) luminosity. Figure 3 shows a range of more than two orders of magnitude in the SFR per unit luminosity. The EWs show a strong dependence on Hubble type, increasing from zero in E/S0 galaxies (within the observational errors) to 20--150~\AA\ in late-type spiral and irregular galaxies. 
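Since the EW is simply the integrated line flux divided by the adjacent continuum flux density, the conversion is a one-liner; the fluxes below are hypothetical round numbers chosen for illustration, not survey data:

```python
# Equivalent width = integrated emission-line flux / adjacent continuum
# flux density. Both input fluxes are invented example values.
line_flux = 3.0e-13       # Halpha+[NII] flux, erg s^-1 cm^-2
continuum_fd = 6.0e-15    # red continuum flux density, erg s^-1 cm^-2 A^-1

ew = line_flux / continuum_fd  # equivalent width, in Angstroms
```

An EW of 50~\AA\ computed this way sits in the middle of the 20--150~\AA\ range quoted above for late-type spiral and irregular galaxies.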
When expressed in terms of absolute SFRs, this corresponds to a range of 0--10 M$_\odot$~yr$^{-1}$ for an $L^*$ galaxy (roughly comparable in luminosity to the Milky Way). The SFR measured in this way increases by approximately a factor of 20 between types Sa and Sc (Caldwell et al 1991, Kennicutt et al 1994). SFRs derived from the UV continuum and broadband visible colors show comparable behavior (e.g. Larson \& Tinsley 1978, Donas et al 1987, Buat et al 1989, Deharveng et al 1994). High-resolution imaging of individual galaxies reveals that the changes in the disk SFR along the Hubble sequence are produced in roughly equal parts by an increase in the total number of star forming regions per unit mass or area, and an increase in the characteristic masses of individual regions (Kennicutt et al 1989a, Caldwell et al 1991, Bresolin \& Kennicutt 1997). These trends are seen both in the H$\alpha$\ luminosities of the HII regions and in the continuum luminosity functions of the embedded OB associations (Bresolin \& Kennicutt 1997). A typical OB star in an Sa galaxy forms in a cluster containing only a few massive stars, whereas an average massive star in a large Sc or Irr galaxy forms in a giant HII/OB association containing hundreds or thousands of OB stars. These differences in clustering properties of the massive stars may strongly influence the structure and dynamics of the interstellar medium (ISM) along the Hubble sequence (Norman \& Ikeuchi 1989, Heiles 1990). Although there is a strong trend in the {\it average} SFRs with Hubble type, a dispersion of a factor of ten is present in SFRs among galaxies of the same type. The scatter is much larger than would be expected from observational errors or extinction effects, so most of it must reflect real variations in the SFR. Several factors contribute to the SFR variations, including variations in gas content, nuclear emission, interactions, and possibly short-term variations in the SFR within individual objects.
Although the absolute SFR varies considerably among spirals (types Sa and later), some level of massive star formation is always observed in deep H$\alpha$\ images (Caldwell et al 1991). However many of the earliest disk galaxies (S0--S0/a) show no detectable star formation at all. Caldwell et al (1994) obtained deep Fabry-Perot H$\alpha$\ imaging of 8 S0--S0/a galaxies, and detected HII regions in only 3 objects. The total SFRs in the latter galaxies are very low, $<$0.01~M$_\odot$~yr$^{-1}$, and the upper limits on the other 5 galaxies rule out any HII regions brighter than the Orion nebula. On the other hand, H$\alpha$\ surveys of HI-rich S0 galaxies by Pogge \& Eskridge (1987, 1993) reveal a higher fraction of disk and/or circumnuclear star forming regions, emphasizing the heterogeneous star formation properties of these galaxies. Thronson et al (1989) reached similar conclusions based on an analysis of IRAS observations of S0 galaxies. The relative SFRs can also be parametrized in terms of the mean SFR per unit disk area. This has the advantage of avoiding any effect of bulge contamination on total luminosities (which biases the EW distributions). Analyses of the SFR surface density distributions have been published by Deharveng et al (1994), based on UV continuum observations, and by Ryder (1993), Ryder \& Dopita (1994), and Young et al (1996), based on H$\alpha$\ observations. The average SFR surface densities show a similar increase with Hubble type, but the magnitude of the change is noticeably weaker than is seen in SFRs per unit luminosity (e.g. Figure 3), and the dispersion among galaxies of the same type is larger (see below).
The stronger type dependence in the H$\alpha$\ EWs (see Figure 3) is partly due to the effects of bulge contamination, which exaggerate the change in {\it disk} EWs by a factor of two between types Sa--Sc (Kennicutt et al 1994), but the change in disk EWs with type is still nearly twice as large as the comparable trend in SFR per unit area (Young et al 1996). The difference reflects the tendency for the late-type spirals to have somewhat more extended (i.e. lower surface brightness) star forming disks than the early-type spirals, at least in these samples. This comparison demonstrates the danger in applying the term SFR too loosely when characterizing the systematic behavior of star formation across the Hubble sequence, because the quantitative trends are dependent on the manner in which the SFR is defined. Generally speaking, a parameter that scales with the SFR per unit mass (e.g. the H$\alpha$\ equivalent width) is most relevant to interpreting the evolutionary properties of disks, whereas the SFR per unit area is more relevant to parametrizing the dependence of the SFR on gas density in disks. \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf4.eps,width=16cm}} \end{center} \caption{\em Distributions of 40- to 120-$\mu$m infrared luminosity for nearby galaxies, normalized to near-infrared $H$ luminosity, as a function of Hubble type. Adapted from Devereux \& Hameed (1997), with elliptical and irregular galaxies excluded.} \end{figure} Similar comparisons can be made for the FIR properties of disk galaxies, and these show considerably weaker trends with Hubble type (Devereux \& Young 1991, Tomita et al 1996, Devereux \& Hameed 1997). This is illustrated in Figure 4, which shows the distributions of $L_{FIR}$/$L_H$ from a sample of nearby galaxies studied by Devereux \& Hameed (1997). 
Since the near-IR $H$-band luminosity is a good indicator of the total stellar mass, the L$_{FIR}$/L$_H$ ratio provides an approximate measure of the FIR emission normalized to the mass of the parent galaxy. Figure 4 shows the expected trend toward stronger FIR emission with later Hubble type, but the trend is considerably weaker, in the sense that early-type galaxies show much higher FIR luminosities than would be expected given their UV-visible spectra. Comparisons of L$_{FIR}$/L$_B$ distributions show almost no dependence on Hubble type at all (Isobe \& Feigelson 1992, Tomita et al 1996, Devereux \& Hameed 1997), but this is misleading because the $B$-band luminosity itself correlates with the SFR (see Figure 2). The inconsistencies between the FIR and UV--visible properties of spiral galaxies appear to be due to a combination of effects (as mentioned above in Section 2.5). In at least some early-type spirals, the strong FIR emission is produced by luminous, dusty star forming regions, usually concentrated in the central regions of barred spiral galaxies (Devereux 1987, Devereux \& Hameed 1997). This exposes an important bias in the visible and UV-based studies of SFRs in galaxies, in that they often do not take into account the substantial star formation in the dusty nuclear regions, which can dominate the global SFR in an early-type galaxy. Devereux \& Hameed emphasize the importance of observing a sufficiently large and diverse sample of early-type galaxies, in order to fully characterize the range of star formation properties. However it is also likely that much of the excess FIR emission in early-type spirals is unrelated to star formation, reflecting instead the effects of dust heating from evolved stellar populations (Section 2.5). Radiative transfer modelling by Walterbos \& Greenawalt (1996) demonstrates that this effect can readily account for the trends seen in Figure 4. 
The interpretation in the remainder of this review is based on the SFR trends revealed by the H$\alpha$, UV continuum, broadband colors, and integrated spectra, which are consistent with a common evolutionary picture of the Hubble sequence. However it is important to bear in mind that this picture applies only to the extended, extranuclear star formation in spiral and irregular disks. The circumnuclear star formation follows quite different patterns, as discussed in Section 4.2. \subsection{{\it Dependence of SFRs on Gas Content}} The strong trends in disk SFRs that characterize the Hubble sequence presumably arise from more fundamental relationships between the global SFR and other physical properties of galaxies, such as their gas contents or dynamical structure. The physical regulation of the SFR is a mature subject in its own right, and a full discussion is beyond the scope of this review. However it is very instructive to examine the global relationships between the disk-averaged SFRs and gas densities of galaxies, because they reveal important insights into the physical nature of the star formation sequence, and they serve to quantify the range of physical conditions and evolutionary properties of disks. Comparisons of the large-scale SFRs and gas contents of galaxies have been carried out by several authors, most recently Buat et al (1989), Kennicutt (1989), Buat (1992), Boselli (1994), Deharveng et al (1994), Boselli et al (1995) and Kennicutt (1998). Figure 5 shows the relationship between the disk-averaged SFR surface density $\Sigma_{SFR}$ and average total (atomic plus molecular) gas density $\Sigma_{gas}$, for a sample of 61 normal spiral galaxies with H$\alpha$, HI, and CO observations (Kennicutt 1998). The SFRs were derived from extinction-corrected H$\alpha$\ fluxes, using the SFR calibration in Equation 2. 
The surface densities were averaged within the corrected optical radius $R_0$, as taken from the {\it Second Reference Catalog of Bright Galaxies} (de Vaucouleurs et al 1976). \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf5.eps,width=16cm}} \end{center} \caption{\em Correlation between disk-averaged SFR per unit area and average gas surface density, for 61 normal disk galaxies. Symbols are coded by Hubble type: Sa--Sab (open triangles); Sb--Sbc (open circles); Sc--Sd (solid points); Irr (cross). The dashed and dotted lines show lines of constant global star formation efficiency. The error bars indicate the typical uncertainties for a given galaxy, including systematic errors.} \end{figure} Figure 5 shows that disks possess large ranges in both the mean gas density (factor of 20--30) and mean SFR surface density (factor of 100). The data points are coded by galaxy type, and they show that both the gas and SFR densities are correlated with Hubble type on average, but with large variations among galaxies of a given type. In addition, there is an underlying correlation between SFR and gas density that is largely independent of galaxy type. This shows that much of the scatter in SFRs among galaxies of the same type can be attributed to an underlying dispersion in gas contents. The data can be fitted to a Schmidt (1959) law of the form\ \ $\Sigma_{SFR} = A~\Sigma_{gas}^N$. The best fitting slope $N$ ranges from 1.4 for a conventional least squares fit (minimizing errors in SFRs only) to $N$=2.4 for a bivariate regression, as shown by the solid lines in Figure 5. Values of $N$ in the range 0.9--1.7 have been derived by previous workers, based on SFRs derived from H$\alpha$, UV, and FIR data (Buat et al 1989, Kennicutt 1989, Buat 1992, Deharveng et al 1994).
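The Schmidt-law fit amounts to a straight line in log-log space. A sketch of the "conventional least squares" variant, run on synthetic data (a made-up sample with an assumed true slope of 1.4 and 0.3 dex of scatter, standing in for the 61-galaxy sample):

```python
import numpy as np

# Synthetic disk-averaged surface densities: 61 points, assumed true
# Schmidt index N = 1.4, 0.3 dex scatter (illustrative values only).
rng = np.random.default_rng(0)
log_gas = rng.uniform(0.5, 2.0, 61)                        # log Sigma_gas
log_sfr = -3.6 + 1.4 * log_gas + rng.normal(0.0, 0.3, 61)  # log Sigma_SFR

# Conventional least squares: minimize errors in log Sigma_SFR only.
N, logA = np.polyfit(log_gas, log_sfr, 1)
# N is recovered near 1.4; a bivariate (errors-in-both-variables)
# regression would generally steepen the slope, as the text notes.
```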
The scatter in SFRs at a given gas density is large, and most of this dispersion is probably introduced by averaging the SFRs and gas densities over a large dynamic range of local densities within the individual disks (Kennicutt 1989, 1998). Figure 5 also contains information on the typical global efficiencies of star formation and gas consumption timescales in disks. The dashed and dotted lines indicate constant, disk-averaged efficiencies of 1\%, 10\%, and 100\% per $10^8$ years. The average value for these galaxies is 4.8\%, meaning that the average disk converts 4.8\%\ of its gas (within the radius of the optical disk) every $10^8$ years. Since the typical gas mass fraction in these disks is about 20\%, this implies that the stellar mass of the disk grows by about 1\%\ per $10^8$ years, i.e. the timescale for building the disk (at the present rate) is comparable to the Hubble time. The efficiencies can also be expressed in terms of the average gas depletion timescale, which for this sample is 2.1~Gyr. Recycling of interstellar gas from stars extends the actual timescale for gas depletion by factors of 2--3 (Ostriker \& Thuan 1975, Kennicutt et al 1994). \subsection{{\it Other Global Influences on the SFR}} What other global properties of a galaxy influence its SFR? One might plausibly expect the mass, bar structure, spiral arm structure, or environment to be important, and empirical information on all of these is available. 3.3.1~~LUMINOSITY AND MASS~~~Gavazzi \& Scodeggio (1996) and Gavazzi et al (1996) have compiled UV, visible, and near-IR photometry for over 900 nearby galaxies, and they found an anti-correlation between the SFR per unit mass and the galaxy luminosity, as indicated by broadband colors and H$\alpha$\ EWs. At least part of this trend seems to reflect the same dependence of SFR on Hubble type discussed above, but a mass dependence is also observed among galaxies of the same Hubble type.
It is interesting that there is considerable overlap between the color-luminosity relations of different spiral types, which suggests that part of the trends that are attributed to morphological type may be more fundamentally related to total mass. A strong correlation between $B$--$H$ color and galaxy luminosity or linewidth has been discussed previously by Tully et al (1982) and Wyse (1983). The trends seem to be especially strong for redder colors, which are more closely tied to the star formation history and mean metallicity than the current SFR. More data are needed to fully disentangle the effects of galaxy type and mass, both for the SFR and the star formation history. 3.3.2~~BARS~~~Stellar bars can strongly perturb the gas flows in disks, and trigger nuclear star formation (see next section), but they do not appear to significantly affect the total disk SFRs. Figure 3 plots the H$\alpha$\ EW distributions separately for normal (SA and SAB) and barred (SB) spirals, as classified in the {\it Second Reference Catalog of Bright Galaxies}. There is no significant difference in the EW distributions (except possibly for the Sa/SBa galaxies), which suggests that the global effect of a bar on the {\it disk} SFR is unimportant. Ryder \& Dopita (1994) reached the same conclusion based on H$\alpha$\ observations of 24 southern galaxies. Tomita et al (1996) have carried out a similar comparison of FIR emission, based on IRAS data and broadband photometry for 139 normal spirals and 260 barred Sa--Sc galaxies. They compared the distributions of $L_{FIR}$/$L_B$ ratios for Sa/SBa, Sb/SBb, and Sc/SBc galaxies, and concluded that there is no significant correlation with bar structure, consistent with the H$\alpha$\ results. There is evidence for a slight excess in FIR emission in SBa galaxies, which could reflect bar-triggered circumnuclear star formation in some of the galaxies, though the statistical significance of the result is marginal (Tomita et al 1996).
Recent work by Martinet \& Friedli (1997) suggests that the influence of bars on the global SFR may not be as simple as suggested above. They analyzed H$\alpha$\ and FIR-based SFRs for a sample of 32 late-type barred galaxies, and found a correlation between SFR and the strength and length of the bar. This suggests that large samples are needed to study the effects of bars on the large-scale SFR, and that the structural properties of the bars themselves need to be incorporated in the analysis. 3.3.3~~SPIRAL ARM STRUCTURE~~~Similar tests have been carried out to explore whether a strong grand-design spiral structure enhances the global SFR. Elmegreen \& Elmegreen (1986) compared UV and visible broadband colors and H$\alpha$\ EWs for galaxies they classified as grand-design (strong two-armed spiral patterns) and flocculent (ill-defined, patchy spiral arms), and they found no significant difference in SFRs. McCall \& Schmidt (1986) compared the supernova rates in grand-design and flocculent spirals, and drew similar conclusions. Grand-design spiral galaxies show strong local enhancements of star formation in their spiral arms (e.g. Cepa \& Beckman 1990, Knapen et al 1992), so the absence of a corresponding excess in their total SFRs suggests that the primary effect of the spiral density wave is to concentrate star formation in the arms, but not to increase the global efficiency of star formation. 3.3.4~~GALAXY-GALAXY INTERACTIONS~~~Given the modest effects of internal disk structure on global SFRs, it is perhaps somewhat surprising that external environmental influences can have much stronger effects on the SFR. The most important influences by far are tidal interactions. Numerous studies of the global H$\alpha$\ and FIR emission of interacting and merging galaxies have shown strong excess star formation (e.g. Bushouse 1987, Kennicutt et al 1987, Bushouse et al 1988, Telesco et al 1988, Xu \& Sulentic 1991, Liu \& Kennicutt 1995).
The degree of the SFR enhancement is highly variable, ranging from zero in gas-poor galaxies to on the order of 10--100 times in extreme cases. The average enhancement in SFR over large samples is a factor of 2--3. Much larger enhancements are often seen in the circumnuclear regions of strongly interacting and merging systems (see next section). This subject is reviewed in depth in Kennicutt et al (1998). 3.3.5~~CLUSTER ENVIRONMENT~~~There is evidence that the cluster environment systematically alters the star formation properties of galaxies, independently of the well-known density-morphology relation (Dressler 1984). Many spiral galaxies located in rich clusters exhibit significant atomic gas deficiencies (Haynes et al 1984, Warmels 1988, Cayatte et al 1994), which presumably are the result of ram pressure stripping from the intracluster medium, combined with tidal stripping from interactions with other galaxies and the cluster potential. In extreme cases one would expect the gas removal to affect the SFRs as well. Kennicutt (1983b) compared H$\alpha$\ EWs of 26 late-type spirals in the Virgo cluster core with the field sample of Kennicutt \& Kent (1983) and found evidence for a 50\%\ lower SFR in Virgo, comparable to the level of HI deficiency. The UV observations of the cluster Abell 1367 by Donas et al (1990) also show evidence for lower SFRs. However subsequent studies of the Coma, Cancer, and A1367 clusters by Kennicutt et al (1984) and Gavazzi et al (1991) showed no reduction in the average SFRs, and if anything a higher number of strong emission-line galaxies. A comprehensive H$\alpha$\ survey of 8 Abell clusters by Moss \& Whittle (1993) suggests that the effects of cluster environment on global star formation activity are quite complex. They found a 37--46\%\ lower H$\alpha$\ detection rate among Sb, Sc, and irregular galaxies in the clusters, but a 50\%\ higher detection rate among Sa--Sab galaxies.
They argue that these results arise from a combination of competing effects, including reduced star formation from gas stripping as well as enhanced star formation triggered by tidal interactions. Ram-pressure induced star formation may also be taking place in a few objects (Gavazzi \& Jaffe 1985). \section{CIRCUMNUCLEAR STAR FORMATION \\ AND STARBURSTS} It has been known from the early photographic work of Morgan (1958) and S\'ersic \& Pastoriza (1967) that the circumnuclear regions of many spiral galaxies harbor luminous star forming regions, with properties that are largely decoupled from those of the more extended star forming disks. Subsequent spectroscopic surveys revealed numerous examples of bright emission-line nuclei with spectra resembling those of HII regions (e.g. Heckman et al 1980, Stauffer 1982, Balzano 1983, Keel 1983). The most luminous of these were dubbed ``starbursts" by Weedman et al (1981). The opening of the mid-IR and FIR regions fully revealed the distinctive nature of the nuclear star formation (e.g. Rieke \& Low 1972, Harper \& Low 1973, Rieke \& Lebofsky 1978, Telesco \& Harper 1980). The IRAS survey led to the discovery of large numbers of ultraluminous star forming galaxies (Soifer et al 1987). This subject has grown into a major subfield of its own, which has been thoroughly reviewed elsewhere in this series (Soifer et al 1987, Telesco 1988, Sanders \& Mirabel 1996). The discussion here focusses on the range of star formation properties of the nuclear regions, and the patterns in these properties along the Hubble sequence. \subsection{{\it SFRs and Physical Properties}} Comprehensive surveys of the star formation properties of galactic nuclei have been carried out using emission-line spectroscopy in the visible (Stauffer 1982, Keel 1983, Kennicutt et al 1989b, Ho et al 1997a, b) and mid-IR photometry (Rieke \& Lebofsky 1978, Scoville et al 1983, Devereux et al 1987, Devereux 1987, Giuricin et al 1994). 
Nuclear emission spectra with HII region-like line ratios are found in 42\%\ of bright spirals ($B_T < 12.5$), with the fraction increasing from 8\%\ in S0 galaxies (and virtually zero in elliptical galaxies) to 80\%\ in Sc--Im galaxies (Ho et al 1997a). These fractions are lower limits, especially in early-type spirals, because the star formation often is masked by a LINER or Seyfert nucleus. Similar detection fractions are found in 10$\mu$m surveys of optically-selected spiral galaxies, but with a stronger weighting toward early Hubble types. The nuclear SFRs implied by the H$\alpha$\ and IR fluxes span a large range, from a lower detection limit of $\sim$10$^{-4}$~M$_\odot$~yr$^{-1}$ to well over 100~$M_\odot$~yr$^{-1}$\ in the most luminous IR galaxies. The physical character of the nuclear star forming regions changes dramatically over this spectrum of SFRs. The nuclear SFRs in most galaxies are quite modest, averaging $\sim$0.1~$M_\odot$~yr$^{-1}$\ (median 0.02~$M_\odot$~yr$^{-1}$) in the H$\alpha$\ sample of Ho et al (1997a), and $\sim$0.2~$M_\odot$~yr$^{-1}$\ in the (optically selected) 10$\mu$m samples of Scoville et al (1983) and Devereux et al (1987). Given the different selection criteria and completeness levels in these surveys, the SFRs are reasonably consistent with each other, and this suggests that the nuclear star formation at the low end of the SFR spectrum typically occurs in moderately obscured regions ($A_{H\alpha}\sim$0--3 mag) that are not physically dissimilar from normal disk HII regions (Kennicutt et al 1989b, Ho et al 1997a). However the IR observations also reveal a population of more luminous regions, with $L_{FIR} \sim 10^{10}$--$10^{13}$~$L_\odot$, and corresponding SFRs of order 1--1000~$M_\odot$~yr$^{-1}$\ (Rieke \& Low 1972, Scoville et al 1983, Joseph \& Wright 1985, Devereux 1987). 
Such high SFRs are not seen in optically-selected samples, mainly because the luminous starbursts are uniquely associated with dense molecular gas disks (Young \& Scoville 1991 and references therein), and for normal gas-to-dust ratios, one expects visible extinctions of several magnitudes or higher. The remainder of this section will focus on these luminous nuclear starbursts, because they represent a star formation regime that is distinct from the more extended star formation in disks, and because these bursts often dominate the total SFRs in their parent galaxies. \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf6.eps,width=16cm}} \end{center} \caption{\em Relationship between integrated far-infrared (FIR) luminosity and molecular gas mass for bright IR galaxies, from Tinney et al (1990; open circles) and a more luminous sample from Sanders et al (1991; solid points). The solid line shows the typical $L/M$ ratio for galaxies similar to the Milky Way. The dashed line shows the approximate limiting luminosity for a galaxy forming stars with 100\%\ efficiency on a dynamical timescale, as described in the text.} \end{figure} The IRAS all-sky survey provided the first comprehensive picture of this upper extreme in the SFR spectrum. Figure 6 shows a comparison of the total 8--1000~$\mu$m luminosities (as derived from IRAS) and total molecular gas masses for 87 IR-luminous galaxies, taken from the surveys of Tinney et al (1990) and Sanders et al (1991). Tinney et al's sample (open circles) includes many luminous but otherwise normal star forming galaxies, while Sanders et al's brighter sample (solid points) mainly comprises starburst galaxies and a few AGNs. Strictly speaking these measurements cannot be applied to infer the nuclear SFRs of the galaxies, because they are low-resolution measurements and the samples are heterogeneous. 
However circumnuclear star formation sufficiently dominates the properties of the luminous infrared galaxies (e.g. Veilleux et al 1995, Lutz et al 1996) that Figure 6 (solid points) provides a representative indication of the range of SFRs in these IRAS-selected samples. The most distinctive feature in Figure 6 is the range of infrared luminosities. The lower range overlaps with the luminosity function of normal galaxies (the lower limit of $10^{10}~L_\odot$ is the sample definition cutoff), but the population of infrared galaxies extends upward to $>$10$^{12.5}~L_\odot$. This would imply SFRs of up to 500 $M_\odot$~yr$^{-1}$\ (Equation 4), if starbursts are primarily responsible for the dust heating, about 20 times larger than the highest SFRs observed in normal galaxies. Figure 6 also shows that the luminous IR galaxies are associated with unusually high molecular gas masses, which partly accounts for the high SFRs. However the typical SFR per unit gas mass is much higher than in normal disks; the solid line in Figure 6 shows the typical $L/M$ ratio for normal galaxies, and the efficiencies in the IR galaxies are higher by factors of 2--30 (Young et al 1986, Solomon \& Sage 1988, Sanders et al 1991). The H$_2$ masses used here have been derived using a standard Galactic H$_2$/CO conversion ratio, and if the actual conversion factor in the IRAS galaxies is lower, as is suggested by several lines of evidence, the contrast in star formation efficiencies would be even larger (e.g. Downes et al 1993, Aalto et al 1994, Solomon et al 1997). High-resolution IR photometry and imaging of the luminous infrared galaxies reveals that the bulk of the luminosity originates in compact circumnuclear regions (e.g. Wright et al 1988, Carico et al 1990, Telesco et al 1993, Smith \& Harvey 1996, and references therein).
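As a consistency check on the top of the luminosity range discussed above, Equation 4 can be applied at $L_{FIR} = 10^{12.5}~L_\odot$ (assuming starburst-dominated dust heating):

```python
# Equation 4 applied to the upper end of the observed IR luminosity
# range, L_FIR = 10^12.5 Lsun, assuming starbursts heat the dust.
L_SUN_ERG_S = 3.846e33           # bolometric solar luminosity, erg/s
l_fir = 10**12.5 * L_SUN_ERG_S   # erg/s
sfr = 4.5e-44 * l_fir            # ~550 Msun/yr, consistent with the
                                 # "up to 500 Msun/yr" quoted in the text
```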
Likewise, CO interferometric observations show that a large fraction of the molecular gas is concentrated in central disks, with typical radii on the order of 0.1--1 kpc, and implied surface densities on the order of $10^2$--$10^5~M_\odot$~pc$^{-2}$ (Young \& Scoville 1991, Scoville et al 1994, Sanders \& Mirabel 1996). Less massive disks but with similar gas and SFR surface densities are associated with the infrared-bright nuclei of spiral galaxies (e.g. Young \& Scoville 1991, Telesco et al 1993, Scoville et al 1994, Smith \& Harvey 1996, Rubin et al 1997). The full spectrum of nuclear starburst regions will be considered in the remainder of this section.
\begin{table}
\caption{Star Formation in Disks and Nuclei of Galaxies}
\bigskip
\begin{tabular}{lcc}
\hline\hline \\
Property & Spiral Disks & Circumnuclear Regions \\
\hline
 & & \\
Radius & $1 - 30$ kpc & $0.2 - 2$ kpc \\
SFR & $0 - 20$ M$_\odot$~yr$^{-1}$ & $0 - 1000$ M$_\odot$~yr$^{-1}$ \\
Bolometric Luminosity & $10^6 - 10^{11}$ L$_\odot$ & $10^6 - 10^{13}$ L$_\odot$ \\
Gas Mass & $10^8 - 10^{11}$ M$_\odot$ & $10^6 - 10^{11}$ M$_\odot$ \\
Star Formation Timescale & $1 - 50$ Gyr & $0.1 - 1$ Gyr \\
Gas Density & $1 - 100$ M$_\odot$~pc$^{-2}$ & $10^2 - 10^5$ M$_\odot$~pc$^{-2}$ \\
Optical Depth (0.5 $\mu$m) & $0 - 2$ & $1 - 1000$ \\
SFR Density & $0 - 0.1$ M$_\odot$~yr$^{-1}$~kpc$^{-2}$ & $1 - 1000$ M$_\odot$~yr$^{-1}$~kpc$^{-2}$ \\
Dominant Mode & steady state & steady state $+$ burst \\
 & & \\
\hline \\
Type Dependence? & strong & weak/none \\
Bar Dependence? & weak/none & strong \\
Spiral Structure Dependence? & weak/none & weak/none \\
Interactions Dependence? & moderate & strong \\
Cluster Dependence? & moderate/weak & ? \\
Redshift Dependence? & strong & ? \\
\\
\hline
\end{tabular}
\end{table}
The physical conditions in the circumnuclear star forming disks are distinct in many respects from the more extended star forming disks of spiral galaxies, as is summarized in Table 1.
The circumnuclear star formation is especially distinctive in terms of the absolute range in SFRs, the much higher spatial concentrations of gas and stars, its burstlike nature (in luminous systems), and its systematic variation with galaxy type. The different range of physical conditions in the nuclear starbursts is clearly seen in Figure 7, which plots the average SFR surface densities and mean molecular surface densities for the circumnuclear disks of 36 IR-selected starbursts (Kennicutt 1998). The comparison is identical to the SFR--density plot for spiral disks in Figure 5, except that in this case the SFRs are derived from FIR luminosities (Equation 4), and only molecular gas densities are plotted. HI observations show that the atomic gas fractions in these regions are of the order of only a few percent, and can be safely neglected (Sanders \& Mirabel 1996). The SFRs and densities have been averaged over the radius of the circumnuclear disk, as measured from high-resolution CO or IR maps (see Kennicutt 1998). \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf7.eps,width=16cm}} \end{center} \caption{\em Correlation between disk-averaged SFR per unit area and average gas surface density, for 36 infrared-selected circumnuclear starbursts. See Figure 5 for a similar comparison for normal spiral disks. The dashed and dotted lines show lines of constant star formation conversion efficiency, with the same notation as in Figure 5. The error bars indicate the typical uncertainties for a given galaxy, including systematic errors.} \end{figure} Figure 7 shows that the surface densities of gas and star formation in the nuclear starbursts are 1--4 orders of magnitude higher than in spiral disks overall. Densities of this order can be found in large molecular cloud complexes within spiral disks, of course, but the physical conditions in many of the nuclear starbursts are extraordinary even by those standards.
For example, the typical mean densities of the largest molecular cloud complexes in M31, M33, and M51 are in the range 40--500 $M_\odot$~pc$^{-2}$, which corresponds to the lower range of densities in Figure 7 (Kennicutt 1998). Likewise the SFR surface density in the 30 Doradus giant HII region, the most luminous complex in the Local Group, reaches 100~$M_\odot$~yr$^{-1}$~kpc$^{-2}$ only in the central 10 pc core cluster. The corresponding densities in many of the starbursts exceed these values, over regions as large as a kiloparsec in radius. The starbursts follow a relatively well-defined Schmidt law, with index $N$$\sim$1.4. The nature of the star formation law will be discussed further in Section 5, where we examine the SFR {\it vs} gas density relation for all of the data taken together. Figure 7 also shows that the characteristic star formation efficiencies and timescales are quite different in the starbursts. The mean conversion efficiency is 30\%\ per $10^8$ years, 6 times larger than in the spiral disks. Likewise, the gas consumption timescale is 6 times shorter, about 0.3 Gyr on average. This is hardly surprising---these objects are starbursts by definition---but Figure 7 serves to quantify the characteristic timescales for the starbursts. As pointed out by Heckman (1994) and Lehnert \& Heckman (1996), the luminous IR galaxies lie close to the limiting luminosity allowed by stellar energy generation, for a system which converts all of its gas to stars over a dynamical timescale. For a galaxy with dimensions comparable to the Milky Way, the minimum timescale for feeding the central starburst is $\sim$10$^8$ years; this is also consistent with the minimum gas consumption timescale in Figure 7. At the limit of 100\%\ star formation efficiency over this timescale, the corresponding SFR is trivially: \begin{equation} {{\rm SFR}_{max}} = { {100~M_\odot~{\rm yr^{-1}}}~{({{M_{gas}} \over {10^{10}~M_\odot}})}~{({{10^8~years} \over {\tau_{dyn}}})} }.
\end{equation} The corresponding maximum bolometric luminosity can be estimated using Equation 4, or by calculating the maximum nuclear energy release possible from stars over $10^8$ years. The latter is $\sim$0.01~$\epsilon \dot{m} c^2$, where $\dot{m}$ in this case is the SFR, and $\epsilon$ is the fraction of the total stellar mass that is burned in $10^8$ yr. A reasonable value of $\epsilon$ for a Salpeter IMF is about 0.05; it could be as high as 0.2 if the starburst IMF is depleted in low-mass stars (e.g. Rieke et al 1993). Combining these terms and assuming further that all of the bolometric luminosity is reradiated by the dust yields: \begin{equation} {L_{max}} \sim {{7 \times 10^{11}~L_\odot}~{({M_{gas} \over {10^{10}~M_\odot}})}~ {({\epsilon \over 0.05})}}. \end{equation} Using Equation 4 to convert the SFR to FIR luminosity gives nearly the same coefficient ($6 \times 10^{11}$). This limiting $L/M$ relation is shown by the dashed line in Figure 6, and it lies very close to the actual upper envelope of the luminous IR galaxies. Given the number of assumptions that went into Equation 6, this agreement may be partly fortuitous; other physical processes, such as optical depth effects in the cloud, may also be important in defining the upper luminosity limits (e.g. Downes et al 1993). However the main intent of this exercise is to illustrate that many of the most extreme circumnuclear starbursts lie near the physical limit for maximum SFRs in galaxies. Heckman (1994) extended this argument and derived the maximum SFR for a purely self-gravitating protogalaxy, and he showed that the most luminous infrared galaxies lie close to this limit as well. Note that none of these limits apply to AGN-powered galaxies, because the fuel requirements for a given luminosity are 1--2 orders of magnitude lower.
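The coefficient in Equation 6 can be recovered directly from the energy bookkeeping in the preceding paragraph. The following sketch (ours, not from the review; the physical constants are standard cgs values chosen here) evaluates $L \sim 0.01\,\epsilon\,M_{gas}c^2/\tau$ for $M_{gas} = 10^{10}~M_\odot$, $\epsilon = 0.05$, and $\tau = 10^8$ yr:

```python
# Hypothetical numerical check of the limiting starburst luminosity
# (Equation 6): L ~ 0.01 * eps * M_gas * c^2 / tau. Constants in cgs.
M_SUN = 1.989e33        # g
L_SUN = 3.846e33        # erg/s
C = 2.998e10            # cm/s
YR = 3.156e7            # s

M_gas = 1e10 * M_SUN    # fuel supply of 10^10 M_sun
eps = 0.05              # fraction of stellar mass burned in 10^8 yr (Salpeter IMF)
tau = 1e8 * YR          # burst duration

E_nuclear = 0.01 * eps * M_gas * C**2    # total nuclear energy release (erg)
L_max_Lsun = E_nuclear / tau / L_SUN     # mean luminosity in L_sun

print(f"L_max ~ {L_max_Lsun:.1e} L_sun")  # ~7e11, matching Equation 6
```

The result, $\sim$$7\times10^{11}~L_\odot$, reproduces the quoted coefficient, confirming that Equation 6 is just the nuclear-burning estimate evaluated at the fiducial gas mass and burst duration.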
Taken together, these results reveal the extraordinary character of the most luminous IR starburst galaxies (Heckman 1994, Scoville et al 1994, Sanders \& Mirabel 1996). They represent systems in which a mass of gas comparable to the entire ISM of a galaxy has been driven into a region of order 1 kpc in size, and this entire ISM is being formed into stars, with almost 100\%\ efficiency, over a timescale of order $10^8$ years. Such a catastrophic transfer of mass can only take place in a violent interaction or merger, or perhaps during the initial collapse phase of protogalaxies. \subsection{{\it Dependence on Type and Environment}} The star formation that takes place in the circumnuclear regions of galaxies also follows quite different patterns along the Hubble sequence, relative to the more extended star formation in disks. These distinctions are especially important in early-type galaxies, where the nuclear regions often dominate the global star formation in their parent galaxies. 4.2.1~~HUBBLE TYPE~~~In contrast to the extended star formation in disks, which varies dramatically along the Hubble sequence, circumnuclear star formation is largely decoupled from Hubble type. Stauffer (1982), Keel (1983), and Ho et al (1997a) have investigated the dependence of nuclear H$\alpha$\ emission in star forming nuclei as a function of galaxy type. The detection frequency of HII region nuclei is a strong monotonic function of type, increasing from 0\%\ in elliptical galaxies, to 8\%\ in S0, 22\%\ in Sa, 51\%\ in Sb, and 80\%\ in Sc--Im galaxies (Ho et al 1997a), though these fractions may be influenced somewhat by AGN contamination. Among the galaxies with nuclear star formation, the H$\alpha$\ luminosities show the opposite trend; the average extinction-corrected luminosity of HII region nuclei in S0--Sbc galaxies is 9 times higher than in Sc galaxies (Ho et al 1997a).
Thus the bulk of the total nuclear star formation in galaxies is weighted toward the earlier Hubble types, even though the frequency of occurrence is higher in the late types. Similar trends are observed in 10~$\mu$m surveys of nearby galaxies (Rieke \& Lebofsky 1978, Scoville et al 1983, Devereux et al 1987, Devereux 1987, Giuricin et al 1994). Interpreting the trends in nuclear 10$\mu$m luminosities by themselves is less straightforward, because the dust can be heated by active nuclei as well as by star formation, but one can reduce this problem by excluding known AGNs from the statistics. Devereux et al (1987) analyzed the properties of an optically selected sample of 191 bright spirals, chosen to lie within or near the distance of the Virgo cluster, and found that the average nuclear 10$\mu$m flux was virtually independent of type and, if anything, decreased by 25--30\%\ from types Sa--Sbc to Sc--Scd. An analysis of a larger sample by Giuricin et al (1994) shows that among galaxies with HII region nuclei (as classified from optical spectra), Sa--Sb nuclei are 1.7 times more luminous at 10$\mu$m than Sc galaxies. By contrast the disk SFRs are typically 5--10 times lower in the early-type spirals, so the fractional contribution of the nuclei to the total SFR increases dramatically in the early-type spirals. The nuclear SFRs in some early-type galaxies are comparable to the {\it integrated} SFRs of late-type spirals (e.g. Devereux 1987, Devereux \& Hameed 1997). Thus while luminous nuclear starbursts may occur across the entire range of spiral host types (e.g. Rieke \& Lebofsky 1978, Devereux 1987), the relative effect is much stronger for the early-type galaxies; most of the star formation in these galaxies occurs in the circumnuclear regions. Clearly the physical mechanisms that trigger these nuclear outbursts are largely decoupled from the global gas contents and SFRs of their parent galaxies.
4.2.2~~BAR STRUCTURE~~~These same surveys show that nuclear star formation is strongly correlated with the presence of strong stellar bars in the parent galaxy. The first clear evidence came from the photographic work of S\'ersic \& Pastoriza (1967), who showed that 24\%\ of nearby SB and SAB galaxies possessed detectable circumnuclear ``hotspot'' regions, now known to be bright HII regions and stellar associations. In contrast none of the non-barred galaxies studied showed evidence for hotspots. This work was followed up by Phillips (1993), who showed that hotspots are found preferentially in early-type barred galaxies, a tendency noted already by S\'ersic \& Pastoriza. The effects of bars on the H$\alpha$\ emission from HII region nuclei have been analyzed by Ho et al (1997b). They found that the incidence of nuclear star formation is higher among the barred galaxies, but the difference is only marginally significant, and no excess is seen among early-type barred galaxies. However the distributions of H$\alpha$\ luminosities are markedly different, with the barred galaxies showing an extended tail of bright nuclei that is absent in samples of non-barred galaxies. This tail extends over a range in H$\alpha$\ luminosities of 3--100 $\times 10^{40}$ ergs~s$^{-1}$, which corresponds to SFRs in the range 0.2--8~$M_\odot$~yr$^{-1}$. This tail is especially strong in the early-type barred galaxies (SB0/a--SBbc), where $\sim$30\%\ of the star forming nuclei have luminosities in this range. Bars appear to play an especially strong role in triggering the strong IR-luminous starbursts that are found in early-type spiral galaxies. Hawarden et al (1986) and Dressel (1988) found strong excess FIR emission in early-type barred spirals, based on IRAS observations, and hypothesized that this emission was associated with circumnuclear star forming regions.
This interpretation was directly confirmed by Devereux (1987), who detected strong nuclear 10$\mu$m emission in 40\%\ of the early-type barred spirals in his sample. Similar excesses are not seen in samples of late-type barred galaxies. These results have been confirmed in more extensive later studies by Giuricin et al (1994) and Huang et al (1996). Although early-type barred galaxies frequently harbor a bright nuclear starburst, bars are not a necessary condition for such a starburst, as shown by Pompea \& Rieke (1990). The strong association of nuclear and circumnuclear star formation with bar structure, and the virtual absence of any other dependence on morphological type, contrasts sharply with the behavior of the disk SFRs. This implies that the evolution of the circumnuclear region is largely decoupled from that of the disk at larger radii. The strong distinctions between early-type and late-type barred galaxies appear to be associated with the structural and dynamical properties of the bars. Bars in bulge-dominated, early-type spirals tend to be very strong and efficient at transporting gas from the disk into the central regions, while bars in late-type galaxies are much weaker and are predicted to be much less efficient in transporting gas (e.g. Athanassoula 1992, Friedli \& Benz 1995). All of the results are consistent with a general picture in which the circumnuclear SFRs of galaxies are determined largely by the rate of gas transport into the nuclear regions. 4.2.3~~GALAXY INTERACTIONS AND MERGERS~~~Numerous observations have established a clear causal link between strong nuclear starbursts and tidal interactions and mergers of galaxies. Since this subject is reviewed in depth elsewhere (Heckman 1990, 1994, Barnes \& Hernquist 1992, Sanders \& Mirabel 1996, Kennicutt et al 1998), only the main results are summarized here. 
The evidence for interaction-induced nuclear star formation comes from two types of studies, statistical comparisons of the SFRs in complete samples of interacting and non-interacting galaxies, and studies of the frequency of interactions and mergers among samples of luminous starburst galaxies. Keel et al (1985) and Bushouse (1986) showed that the nuclear H$\alpha$\ emission in nearby samples of interacting spiral galaxies is 3--4 times stronger than that in a control sample of isolated spirals. Groundbased 10--20$\mu$m observations of the nuclear regions of interacting and merging galaxies showed similar or stronger enhancements, depending on how the samples were selected (Lonsdale et al 1984, Cutri \& McAlary 1985, Joseph \& Wright 1985, Wright et al 1988). There is an enormous range of SFRs among individual objects. Spatially resolved data also show a stronger central concentration of the star formation in strongly interacting systems (e.g. Bushouse 1987, Kennicutt et al 1987, Wright et al 1988). Thus while the interactions tend to increase the SFR throughout galaxies, the effects in the nuclear regions are stronger. This radial concentration is consistent with the predictions of numerical simulations of interacting and merging systems (Barnes \& Hernquist 1992, Mihos \& Hernquist 1996, Kennicutt et al 1998). The complementary approach is to measure the frequencies of interacting systems in samples of starburst galaxies. The most complete data of this kind come from IRAS, and they show that the importance of tidal triggering is a strong function of the strength of the starburst, with the fraction of interactions increasing from 20--30\%\ for $L_{IR}$$<$10$^{10}~L_\odot$ to 70--95\%\ for $L_{IR}$$>$10$^{12}~L_\odot$ (Sanders et al 1988, Lawrence et al 1989, Gallimore \& Keel 1993, Leech et al 1994, Sanders \& Mirabel 1996).
The relatively low fraction for the lower luminosity starbursts is understandable, because the corresponding SFRs ($<$1~$M_\odot$~yr$^{-1}$) can be sustained with relatively modest gas supplies, and can be fed by internal mechanisms such as a strong bar. The most luminous starbursts, on the other hand, are associated almost exclusively with strong tidal interactions and mergers. SFRs larger than $\sim$20~$M_\odot$~yr$^{-1}$\ are rarely observed in isolated galaxies, though some possible exceptions have been identified by Leech et al (1994). In view of the enormous fueling requirements for such starbursts (Equations 5 and 6), however, it is perhaps not surprising that an event as violent as a merger is required. These results underscore the heterogeneous nature of the starburst galaxy population, and they suggest that several triggering mechanisms are involved in producing the population. \section{INTERPRETATION AND IMPLICATIONS \\ FOR GALAXY EVOLUTION} The observations described above can be fitted together into a coherent evolutionary picture of disk galaxies and the Hubble sequence. This section summarizes the evolutionary implications of these data, taking into account the distinct patterns seen in the disks and galactic nuclei. It concludes with a discussion of the critical role of the interstellar gas supply in regulating the SFR, across the entire range of galaxy types and environments. \subsection{{\it Disk Evolution Along the Hubble Sequence}} The strong trends observed in the SFR per unit luminosity along the Hubble sequence mirror underlying trends in the past star formation histories of disks (Roberts 1963, Kennicutt 1983a, Gallagher et al 1984, Sandage 1986, Kennicutt et al 1994). A useful parameter for characterizing the star formation histories is the ratio of the current SFR to the past SFR averaged over the age of the disk, denoted $b$ by Scalo (1986).
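For an exponentially declining star formation history, $b$ follows directly from the e-folding time $\tau$ and the disk age. The sketch below (our own illustration, not from the review; the 12 Gyr disk age is an assumed round number) evaluates $b$ for that simple family of histories:

```python
import math

# Birthrate parameter b = SFR(now) / <SFR over disk age> for an
# exponentially declining history SFR(t) = A * exp(-t / tau).
# The adopted disk age of 12 Gyr is a hypothetical round number.
def birthrate(tau_gyr, age_gyr=12.0):
    """b for an exponential SFR history of e-folding time tau_gyr."""
    x = age_gyr / tau_gyr
    mean_sfr = (1.0 - math.exp(-x)) / x   # past average, in units of A
    return math.exp(-x) / mean_sfr

# tau ~ 3 Gyr yields b ~ 0.07, in the quoted Sa range (0.01 - 0.1);
# tau >> disk age yields b -> 1, the roughly constant SFR of
# late-type disks.
print(birthrate(3.0), birthrate(15.0), birthrate(1e6))
```

This makes explicit the sense in which $b$ encodes the past history: $b\sim1$ corresponds to a roughly constant SFR, while $b\ll1$ implies a rapidly declining one.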
The evolutionary synthesis models discussed in Section 2 provide relations between $b$ and the broadband colors and H$\alpha$\ EWs. Figure 3 shows the distribution of $b$ (right axis scale) for an H$\alpha$-selected sample of galaxies, based on the calibration of Kennicutt et al (1994). The typical late-type spiral has formed stars at a roughly constant rate ($b \sim 1$), which is consistent with direct measurements of the stellar age distribution in the Galactic disk (e.g., Scalo 1986). By contrast, early-type spiral galaxies are characterized by rapidly declining SFRs, with $b \sim 0.01 - 0.1$, whereas elliptical and S0 galaxies have essentially ceased forming stars ($b = 0$). Although the values of $b$ given above are based solely on synthesis modelling of the H$\alpha$\ equivalent widths, analysis of the integrated colors and spectra of disks yields similar results (e.g. Kennicutt 1983a, Gallagher et al 1984, Bruzual \& Charlot 1993, Kennicutt et al 1994). The trends in $b$ shown in Figure 3 are based on integrated measurements, so they are affected by bulge and nuclear contamination, which bias the trends seen along the Hubble sequence. A more detailed analysis by Kennicutt et al (1994) includes corrections for bulge contamination on the H$\alpha$\ EWs. The mean value of $b$ (for the disks alone) increases from $<$0.07 for Sa disks, to 0.3 for Sb disks and 1.0 for Sc disks. This change is much larger than the change in bulge mass fraction over the same range of galaxy types, implying that most of the variation in the integrated photometric properties of spiral galaxies is produced by changes in the star formation histories of the disks, not in the bulge-to-disk ratio. Variations in bulge-disk structure may play an important role, however, in physically driving the evolution of the disks. As discussed earlier, this picture has been challenged by Devereux \& Hameed (1997), based on the much weaker variation in FIR luminosities along the Hubble sequence.
The results of the previous section provide part of the resolution to this paradox. Many early-type barred spirals harbor luminous circumnuclear starbursts, with integrated SFRs that can be as high as the disk SFRs in late-type galaxies. If this nuclear star formation is included, then the interpretation of the Hubble sequence given above is clearly oversimplified. For that reason it is important to distinguish between the nuclear regions and the more extended disks when characterizing the evolutionary properties of galaxies. Much of the remaining difference in interpretations hinges on the nature of the FIR emission in early-type galaxies, which may not directly trace the SFR in all galaxies. Although Figure 3 shows a strong change in the {\it average} star formation history with galaxy type, it also shows a large dispersion in $b$ among galaxies of the same type. Some of this must be due to real long-term variations in star formation history, reflecting the crudeness of the Hubble classification itself. Similar ranges are seen in the gas contents (Roberts \& Haynes 1994), and these correlate roughly with the SFR and $b$ variations (Figure 5). Short-term variations in the SFR can also explain part of the dispersion in $b$. Nuclear starbursts clearly play a role in some galaxies, especially early-type barred galaxies, and interaction-induced starbursts are observed in a small percentage of nearby galaxies. Starbursts are thought to be an important, if not the dominant, mode of star formation in low-mass galaxies (e.g. Hunter \& Gallagher 1985, Hodge 1989, Ellis 1997), but the role of large-scale starbursts in massive galaxies is less well established (Bothun 1990, Kennicutt et al 1994, Tomita et al 1996). A definitive answer to this question will probably come from lookback studies of large samples of disk galaxies.
\begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf8.eps,width=16cm}} \end{center} \caption{\em A schematic illustration of the evolution of the stellar birthrate for different Hubble types. The left panel shows the evolution of the relative SFR with time, following Sandage (1986). The curves for spiral galaxies are exponentially declining SFRs that fit the mean values of the birthrate parameter $b$ measured by Kennicutt et al (1994). The curve for elliptical galaxies and bulges is an exponential with an e-folding time of 0.5 Gyr, for comparative purposes only. The right panel shows the corresponding evolution in SFR with redshift, for an assumed cosmological density parameter $\Omega = 0.3$ and an assumed formation redshift $z_f = 5$.} \end{figure} A schematic illustration of the trends in star formation histories is shown in Figure 8. The left panel compares the stellar birthrate histories of typical elliptical galaxies (and spiral bulges), and the disks of Sa, Sb, and Sc galaxies, following Sandage (1986). The curves for the spiral disks are exponential functions which correspond to the average values of $b$ from Kennicutt et al (1994). For illustrative purposes, an exponentially declining SFR with an e-folding timescale of 0.5 Gyr is also shown, as might be appropriate for an old spheroidal population. In this simple picture the Hubble sequence is dictated primarily by the characteristic timescale for star formation. In the more contemporary hierarchical pictures of galaxy formation, these smooth histories would be punctuated by merger-induced starbursts, but the basic long-term histories would be similar, especially for the disks. The righthand panel in Figure 8 shows the same star formation histories, but transformed into SFRs as functions of redshift (assuming $\Omega$=0.3 and a formation redshift $z_f$=5). This diagram illustrates how the dominant star forming (massive) host galaxy populations might evolve with redshift.
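The time-to-redshift transformation behind this kind of diagram can be sketched numerically. The code below is our own illustration, not the authors' calculation: the review specifies only $\Omega = 0.3$ and $z_f = 5$, so the choice of $H_0 = 65$ km s$^{-1}$ Mpc$^{-1}$, the open (zero cosmological constant) geometry, and the crude midpoint integrator are all assumptions:

```python
import math

# Map an exponentially declining SFR history onto redshift, assuming
# an open Omega_m = 0.3 universe with no cosmological constant and
# H0 = 65 km/s/Mpc (a hypothetical 1990s-era choice).
H0 = 65 * 1.0e5 / 3.086e24            # Hubble constant in s^-1
OM = 0.3

def lookback_time(z, steps=2000):
    """Lookback time in Gyr: (1/H0) * integral dz' / [(1+z') E(z')]."""
    E = lambda zz: math.sqrt(OM * (1 + zz)**3 + (1 - OM) * (1 + zz)**2)
    dz = z / steps
    integral = sum(dz / ((1 + (i + 0.5) * dz) * E((i + 0.5) * dz))
                   for i in range(steps))
    return integral / H0 / 3.156e16   # convert seconds to Gyr

def relative_sfr(z, tau_gyr, z_f=5.0):
    """SFR(z)/SFR(z_f) for an exponential history beginning at z_f."""
    age = lookback_time(z_f) - lookback_time(z)  # Gyr since formation
    return math.exp(-age / tau_gyr) if age >= 0 else 0.0

# An "Sa-like" rapidly declining disk (tau ~ 2 Gyr) versus an
# "Sc-like" slowly declining disk (tau ~ 15 Gyr):
for z in (0.0, 1.0, 3.0):
    print(z, relative_sfr(z, 2.0), relative_sfr(z, 15.0))
```

The short-$\tau$ history has faded by orders of magnitude at $z=0$ but is comparable to the long-$\tau$ one at high redshift, which is exactly the population crossover described below.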
Most star formation at the present epoch resides in late-type gas-rich galaxies, but by $z\sim1$ all spiral types are predicted to have comparable SFRs, and (present-day) early-type systems become increasingly dominant at higher redshifts. The tendency of early-type galaxies to have higher masses will make the change in population with redshift even stronger. It will be interesting to see whether these trends are observed in future lookback studies. Many readers are probably aware that the redshift dependence of the volume averaged SFR shows quite a different character, with a broad maximum between $z$$\sim$1--2 and a decline at higher redshifts (Madau et al 1996, 1998). This difference probably reflects the importance of hierarchical processes such as mergers in the evolution of galaxies, mechanisms which are not included in the simple phenomenological description in Figure 8 (Pei \& Fall 1995, Madau et al 1998). \subsection{{\it Evolution of Circumnuclear Star Formation}} The SFRs in the circumnuclear regions are largely decoupled from those of disks, and show no strong relationship to either the gas contents or the bulge/disk properties of the parent galaxies. Instead, the nuclear SFRs are closely associated with dynamical influences such as gas transport by bars or external gravitational perturbations, which stimulate the flow of gas into the circumnuclear regions. The temporal properties of the star formation in the nuclear regions show a wide variation. Approximately 80--90\%\ of spiral nuclei in optically-selected samples exhibit modest levels of Balmer emission, with an average H$\alpha$\ emission-line equivalent width of 20--30 \AA\ (Stauffer 1982, Kennicutt et al 1989b, Ho et al 1997a, b). This is comparable to the average value in the disks of late-type spiral galaxies, and is in the range expected for constant star formation over the age of the disk (Kennicutt 1983a, Kennicutt et al 1994).
Hence most nuclei show SFRs consistent with steady-state or declining star formation, though it is likely that some of these nuclei are observed in a quiescent stage between major outbursts. Starbursts are clearly the dominant mode of star formation in IR-selected samples of nuclei. The typical gas consumption times are in the range $10^8 - 10^9$ years (Figure 7), so the high SFRs can only be sustained for a small percentage of the Hubble time. These timescales can be extended if a steady supply is introduced from the outside, for example by a strong dissipative bar. The most luminous nuclear starbursts ($L_{bol}$$\ge$10$^{12}~L_\odot$) are singular events. Maintaining such luminosities for even $10^8$ years requires a total gas mass on the order of $10^{10}$--$10^{11}~M_\odot$, equivalent to the total gas supply in most galaxies. Violent interactions and mergers are the only events capable of triggering such a catastrophic mass transfer. \subsection{{\it Physical Regulation of the SFR}} Although star forming galaxies span millionfold ranges in their present SFRs and physical conditions, there is a remarkable continuity in some of their properties, and these relationships provide important insights into the physical regulation of the SFR over this entire spectrum of activities. \begin{figure}[!ht] \begin{center} \leavevmode \centerline{\epsfig{file=kennf9.eps,width=16cm}} \end{center} \caption{\em (Left) The global Schmidt law in galaxies. Solid points denote the normal spirals in Figure 5, squares denote the circumnuclear starbursts in Figure 7. The open circles show the SFRs and gas densities of the central regions of the normal disks. (Right) The same SFR data, but plotted against the ratio of the gas density to the average orbital time in the disk.
Both plots are adapted from Kennicutt (1998).} \end{figure} We have already seen evidence from Figures 5 and 7 that the global SFRs of disks and nuclear starbursts are correlated with the local gas density, though over very different ranges in density and SFR per unit area. The left panel of Figure 9 shows both sets of data plotted on a common scale, and it reveals that the entire range of activities, spanning 5--6 orders of magnitude in gas and SFR densities, falls on a common power law with index $N$$\sim$1.4 (Kennicutt 1998). The SFRs for the two sets of data were derived using separate methods (H$\alpha$\ luminosities for the normal disks and FIR luminosities for the starbursts), and to verify that they are measured on a self-consistent scale, Figure 9 also shows H$\alpha$-derived SFRs and gas densities for the centers (1--2 kpc) of the normal disks (plotted as open circles). The tight relation shows that a simple Schmidt (1959) power law provides an excellent empirical parametrization of the SFR, across an enormous range of SFRs, and it suggests that the gas density is the primary determinant of the SFR on these scales. The uncertainty in the slope of the best fitting Schmidt law is dominated by systematic errors in the SFRs, with the largest being the FIR-derived SFRs and CO-derived gas densities in the starburst galaxies. Changing either scale individually by a factor of two introduces a change of 0.1 in the fitted value of $N$, and this is a reasonable estimate of the systematic errors involved (Kennicutt 1998). Incorporating these uncertainties yields the following relation for the best-fitting Schmidt law: \begin{equation} \Sigma_{SFR} = {{{(2.5 \pm 0.7)} \times 10^{-4}}~{({\Sigma_{gas} \over {1~M_\odot~{\rm pc}^{-2}}})^{1.4\pm0.15}}~M_\odot~{\rm yr^{-1}~kpc^{-2}}}. \end{equation} \noindent where $\Sigma_{SFR}$ and $\Sigma_{gas}$ are the disk-averaged SFR and gas densities, respectively.
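Equation 7 is easy to evaluate at the two extremes of the observed density range. The sketch below (ours; it uses the quoted central coefficients and ignores the error bars) shows the enormous dynamic range the single power law spans:

```python
# Evaluate the best-fit global Schmidt law (Equation 7) using the
# central values of the fitted coefficients (2.5e-4, N = 1.4).
def schmidt_sfr_density(sigma_gas):
    """Sigma_SFR in M_sun/yr/kpc^2 for Sigma_gas in M_sun/pc^2."""
    return 2.5e-4 * sigma_gas**1.4

# A normal spiral disk at ~10 M_sun/pc^2 versus a circumnuclear
# starburst at ~10^4 M_sun/pc^2:
print(schmidt_sfr_density(10.0))    # ~6e-3 M_sun/yr/kpc^2
print(schmidt_sfr_density(1.0e4))   # ~1e2  M_sun/yr/kpc^2
```

A factor of $10^3$ in gas density thus maps to a factor of $\sim$$10^4$ in SFR density, which is why the starbursts and normal disks together trace out the 5--6 decade relation in Figure 9.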
As discussed by Larson (1992) and Elmegreen (1994), a large-scale Schmidt law with index $N$$\sim$1.5 would be expected for self-gravitating disks, if the SFR scales as the ratio of the gas density ($\rho$) to the free-fall timescale ($\propto \rho^{-0.5}$), and the average gas scale height is roughly constant across the sample ($\Sigma \propto \rho$). In a variant on this picture, Elmegreen (1997) and Silk (1997) have suggested that the SFR might scale with the ratio of the gas density to the average orbital timescale; this is equivalent to postulating that disks process a fixed fraction of their gas into stars in each orbit around the galactic center. The right panel of Figure 9, also adapted from Kennicutt (1998), shows the correlation between the SFR density and $\Sigma_{gas}/\tau_{dyn}$ for the same galaxies and starbursts. For this purpose $\tau_{dyn}$ is defined as one disk orbit time, measured at half of the outer radius of the star forming disk, in units of years (see Kennicutt 1998 for details). The line is a median fit to the normal disks with slope constrained to unity, as predicted by the simple Silk model. This alternative ``recipe'' for the SFR provides a fit that is nearly as good as the Schmidt law. The equation of the fit is: \begin{equation} \Sigma_{SFR} = 0.017~\Sigma_g~\Omega_g . \end{equation} In this parametrization the SFR is simply $\sim$10\%\ of the available gas mass per orbit. These parametrizations offer two distinct interpretations of the high SFRs in the centers of luminous starburst galaxies. In the context of the Schmidt law picture, the star formation efficiency scales as $\Sigma_g^{(N-1)}$, or $\Sigma_g^{0.4}$ for the observed relation in Figure 9. The central starbursts have densities that are on the order of 100--10000 times higher than in the extended star forming disks of spirals, so the global star formation efficiencies should be 6--40 times higher.
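Two of the numbers quoted above follow directly from the stated scalings, and can be checked in a few lines (our arithmetic; the identification $\Omega_g = 2\pi/\tau_{orb}$ is our reading of the orbital-frequency convention, so the per-orbit figure should be taken as an assumption):

```python
import math

# 1. Schmidt-law picture: efficiency scales as Sigma_gas^(N-1) =
#    Sigma_gas^0.4, so starburst/disk density contrasts of 10^2-10^4
#    imply efficiency ratios of roughly 6-40.
eff_low, eff_high = 100**0.4, 10000**0.4

# 2. Kinematical recipe (Equation 8): with Omega_g = 2*pi / tau_orb,
#    Sigma_SFR = 0.017 * Sigma_g * Omega_g consumes a fraction
#    0.017 * 2*pi of the gas per orbit, i.e. ~10%.
per_orbit = 0.017 * 2 * math.pi

print(eff_low, eff_high, per_orbit)
```

Both checks recover the values quoted in the text ($\sim$6--40 and $\sim$10\% per orbit), so the two parametrizations are internally consistent as stated.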
Alternatively, in the kinematical picture, the higher efficiencies in the circumnuclear starbursts are simply a consequence of the shorter orbital timescales in the galaxy centers, independent of the gas density. Whether the observed Schmidt law is a consequence of the shorter dynamical times or {\it vice versa} cannot be ascertained from these data alone, but either description provides an excellent empirical ``recipe'' for the observed SFRs. These simple pictures can account for the high SFRs in the starburst galaxies, as well as for the observed radial variation of SFRs within star forming disks (Kennicutt 1989, 1997). However the relatively shallow $N$$\sim$1.4 Schmidt law cannot account for the strong changes in disk SFRs observed across the Hubble sequence, if the disks evolved as nearly closed systems (Kennicutt et al 1994). Likewise the modest changes in galaxy rotation curves with Hubble type are too small to account for the large differences in star formation histories with a kinematical model such as in equation [8]. The explanation probably involves star formation thresholds in the gas-poor galaxies (Kennicutt 1989, Kennicutt et al 1997), but the scenario has not been tested quantitatively, and it is possible that other mechanisms, such as infall of gas, merger history, or bulge-disk interactions are responsible for the strong changes in star formation histories across the spiral sequence. \section{FUTURE PROSPECTS} The observations described in this review have provided us with the beginnings of a quantitative picture of the star formation properties and evolutionary properties of the Hubble sequence. However the picture remains primitive in many respects, being based in large part on integrated, one-zone averages over entire galaxies, and extrapolations from present-day SFRs to crude characterizations of the past star formation histories.
Uncertainties in fundamental parameters such as the IMF and massive stellar evolution undermine the accuracy of the entire SFR scale, and weaken the interpretations that are based on these measurements. Ongoing work on several fronts should lead to dramatic progress over the next decade, however. The most exciting current development is the application of the SFR diagnostics described in Section 2 to galaxies spanning the full range of redshifts and lookback times (Ellis 1997). This work has already provided the first crude measures of the evolution in the volume-averaged SFR (Madau et al 1996, 1998). The combination of 8--10 meter class groundbased telescopes, HST, and eventually the {\it Next Generation Space Telescope} should provide detailed inventories of integrated spectra, SFRs and morphologies for complete samples of galaxies at successive redshifts. This should give the definitive picture of the star formation history of the Hubble sequence, and impose strong tests on galaxy formation and evolution models. At the same time, a new generation of IR space observatories, including the {\it Wide Field Infrared Explorer} and the {\it Space Infrared Telescope Facility}, will provide high-resolution observations of nearby starburst galaxies, and the first definitive measurements of the cosmological evolution of the infrared-luminous starburst galaxy population. Although studies of the star formation histories of nearby galaxies are largely being supplanted by the more powerful lookback studies, observations of nearby galaxies will remain crucial for understanding many critical aspects of galaxy formation and evolution. Perhaps the greatest potential is for understanding the physical processes that determine the local and global SFRs in galaxies, and understanding the feedback processes between the star formation and the parent galaxies.
This requires spatially-resolved measurements of SFRs over the full spectrum of interstellar and star formation environments, and complementary measurements of the densities, dynamics, and abundances of the interstellar gas. Uncertainty about the nature of the star formation law and the SFR--ISM feedback cycle remain major stumbling blocks to realistic galaxy evolution models, but observations over the next decade should provide the foundations of a physically-based model of galactic star formation and the Hubble sequence. \section{ACKNOWLEDGEMENTS} I wish to express special thanks to my collaborators in the research presented here, especially my current and former graduate students Audra Baleisis, Fabio Bresolin, Charles Congdon, Murray Dixson, Kevin Edgar, Paul Harding, Crystal Martin, Sally Oey, Anne Turner, and Rene Walterbos. During the preparation of this review my research was supported by the National Science Foundation through grant AST-9421145. \newpage {\it Literature Cited} \medskip \parskip=5pt Aalto S, Booth RS, Black JH, Koribalski B, Wielebinski R. 1994. {\it Astron. Astrophys.} 286:365-80 Athanassoula E. 1992. {\it MNRAS} 259:345-64 Bagnuolo WG. 1976. {\it The Stellar Content and Evolution of Irregular and Other Late-Type Galaxies}. PhD thesis. Caltech Balzano VA. 1983. {\it Ap. J.} 268:602-27 Barnes JE, Hernquist L. 1992. {\it Annu. Rev. Astron. Astrophys.} 30:705-42 Bechtold J, Yee HKC, Elston R, Ellingson E. 1997. {\it Ap. J. Lett.} 477:L29-L32 Bertelli G, Bressan A, Chiosi C, Fagotto F, Nasi E. 1994. {\it Astron. Astrophys. Suppl.} 106:275-302 Boselli A. 1994. {\it Astron. Astrophys.} 292:1-12 Boselli A, Gavazzi G, Lequeux J, Buat V, Casoli F, et al. 1995. {\it Astron. Astrophys.} 300:L13-L16 Bothun GD. 1990. In {\it Evolution of the Universe of Galaxies,} ed. RG Kron, {\it ASP Conf. Proc.} 10:54-66, San Francisco: Astron. Soc. Pac. Bresolin F, Kennicutt RC. 1997. {\it Astron. J.} 113:975-80 Bruzual G, Charlot S. 1993. {\it Ap.
J.} 405:538-53 Buat V. 1992. {\it Astron. Astrophys.} 264:444-54 Buat V, Deharveng JM. 1988. {\it Astron. Astrophys.} 195:60-70 Buat V, Deharveng JM, Donas J. 1989. {\it Astron. Astrophys.} 223:42-46 Buat V, Xu C. 1996. {\it Astron. Astrophys.} 306:61-72 Bushouse HA. 1986. {\it Astron. J.} 91:255-70 Bushouse HA. 1987. {\it Ap. J.} 320:49-72 Bushouse HA, Werner MW, Lamb SA. 1988. {\it Ap. J.} 335:74-92 Caldwell N, Kennicutt R, Phillips AC, Schommer RA. 1991. {\it Ap. J.} 370:526-40 Caldwell N, Kennicutt R, Schommer R. 1994. {\it Astron. J.} 108:1186-90 Calzetti D, Kinney AL, Storchi-Bergmann T. 1994. {\it Ap. J.} 429:582-601 Calzetti D, Kinney AL, Storchi-Bergmann T. 1996. {\it Ap. J.} 458:132-5 Calzetti D. 1997. {\it Astron. J.} 113:162-84 Caplan J, Deharveng L. 1986. {\it Astron. Astrophys.} 155:297-313 Caplan J, Ye T, Deharveng L, Turtle AJ, Kennicutt RC. 1996. {\it Astron. Astrophys.} 307:403-16 Carico DP, Sanders DB, Soifer BT, Matthews K, Neugebauer G. 1990. {\it Astron. J.} 100:70-83 Cayatte V, Kotanyi C, Balkowski C, van Gorkom JH. 1994. {\it Astron. J.} 107:1003-17 Cepa J, Beckman JE. 1990. {\it Ap. J.} 349:497-502 Charlot S, Bruzual G. 1991. {\it Ap. J.} 367:126-40 Cohen JG. 1976. {\it Ap. J.} 203:587-92 Cox P, Mezger PG. 1989. {\it Astron. Astrophys. Rev.} 1:49-83 Cowie LL, Hu EM, Songaila A, Egami E. 1997. {\it Ap. J. Lett.} 481:L9-L13 Cowie LL, Songaila A, Hu EM, Cohen JG. 1996. {\it Astron. J.} 112:839-64 Cutri RM, McAlary CW. 1985. {\it Ap. J.} 296:90-105 Deharveng JM, Sasseen TP, Buat V, Bowyer S, Lampton M, Wu X. 1994. {\it Astron. Astrophys.} 289:715-728 de Vaucouleurs G, de Vaucouleurs A, Corwin HG. 1976. {\it Second Reference Catalog of Bright Galaxies}. Austin: Univ. of Texas Press (RC2) Devereux N. 1987. {\it Ap. J.} 323:91-107 Devereux NA, Becklin EE, Scoville N. 1987. {\it Ap. J.} 312:529-41 Devereux NA, Hameed S. 1997. {\it Astron. J.} 113:599-608 Devereux NA, Young JS. 1990. {\it Ap. J. Lett.} 350:L25-28 Devereux NA, Young JS. 1991. {\it Ap.
J.} 371:515-24 Donas J, Deharveng JM. 1984. {\it Astron. Astrophys.} 140:325-333 Donas J, Deharveng JM, Laget M, Milliard B, Huguenin D. 1987. {\it Astron. Astrophys.} 180:12-26 Donas J, Milliard B, Laget M, Buat V. 1990. {\it Astron. Astrophys.} 235:60-68 Donas J, Milliard B, Laget M. 1995. {\it Astron. Astrophys.} 303:661-672 Downes D, Solomon PM, Radford SJE. 1993. {\it Ap. J. Lett.} 414:L13-L16 Dressel LL. 1988. {\it Ap. J. Lett.} 329:L69-L73 Dressler A. 1984. {\it Annu. Rev. Astron. Astrophys.} 22:185-222 Ellis RS. 1997. {\it Annu. Rev. Astron. Astrophys.} 35:389-443 Elmegreen BG. 1994. {\it Ap. J. Lett.} 425:L73-76 Elmegreen BG. 1997. In {\it Starburst Activity in Galaxies}, ed. J Franco, R Terlevich, A Serrano, {\it Rev. Mex. Astron. Astrophys. Conf. Ser.} 6:165-71 Elmegreen BG, Elmegreen DM. 1986. {\it Ap. J.} 311:554-562 Engelbracht CW. 1997. {\it Infrared Observations and Stellar Populations Modelling of Starburst Galaxies.} PhD thesis, Univ. Arizona Evans IN, Koratkar AP, Storchi-Bergmann T, Kirkpatrick H, Heckman TM, Wilson AS. 1996. {\it Ap. J. Suppl.} 105:93-127 Fanelli MN, Marcum PM, Waller WH, Cornett RH, O'Connell RW, et al. 1997. In {\it The Ultraviolet Universe at Low and High Redshift,} ed. W Waller, M Fanelli, J Hollis, A Danks. New York: Am. Inst. Phys. Feinstein C. 1997. {\it Ap. J. Suppl.} 112:29-47 Ferguson AMN, Wyse RFG, Gallagher JS, Hunter DA. 1996. {\it Astron. J.} 111:2265-79 Fioc M, Rocca-Volmerange B. 1997. {\it Astron. Astrophys.} 326:950-62 Friedli D, Benz W. 1995. {\it Astron. Astrophys.} 301:649-65 Gallagher JS, Hunter DA. 1984. {\it Ann. Rev. Astron. Astrophys.} 22:37-74 Gallagher JS, Hunter DA, Bushouse H. 1989. {\it Astron. J.} 97:700-07 Gallagher JS, Hunter DA, Tutukov AV. 1984. {\it Ap. J.} 284:544-56 Gallego J, Zamorano J, Aragon-Salamanca A, Rego M. 1995. {\it Ap. J. Lett.} 445:L1-L4 Gallimore JF, Keel WC. 1993. {\it Astron. J.} 106:1337-43 Gavazzi G, Boselli A, Kennicutt R. 1991. {\it Astron.
J.} 101:1207-30 Gavazzi G, Jaffe W. 1985. {\it Ap. J. Lett.} 294:L89-L92 Gavazzi G, Pierini D, Boselli A. 1996. {\it Astron. Astrophys.} 312:397-408 Gavazzi G, Scodeggio M. 1996. {\it Astron. Astrophys.} 312:L29-L32 Gilmore GF, Howell DJ. (eds.) 1998. {\it The Stellar Initial Mass Function}, {\it ASP Conf. Proc.}, Vol. 142. San Francisco: Astron. Soc. Pac. Giuricin G, Tamburini L, Mardirossian F, Mezzetti M, Monaco P. 1994. {\it Astron. Astrophys.} 427:202-20 Goldader JD, Joseph RD, Doyon R, Sanders DB. 1995. {\it Ap. J.} 444:97-112 Goldader JD, Joseph RD, Doyon R, Sanders DB. 1997. {\it Ap. J. Suppl.} 108:449-70 Gonz\'alez Delgado RM, Perez E, Tadhunter C, Vilchez J, Rodr\'iguez-Espinoza JM. 1997. {\it Ap. J. Suppl.} 108:155-98 Hawarden TG, Mountain CM, Leggett SK, Puxley PJ. 1986. {\it MNRAS} 221:41P-45P Harper DA, Low FJ. 1973. {\it Ap. J. Lett.} 182:L89-L93 Haynes MP, Giovanelli R, Chincarini GL. 1984. {\it Ann. Rev. Astron. Astrophys.} 22:445-70 Heckman TM. 1990. In {\it Paired and Interacting Galaxies, IAU Colloq. 124,} ed. JW Sulentic, WC Keel, CM Telesco, NASA Conf. Publ. CP-3098, pp. 359-82. Washington DC: NASA Heckman TM. 1994. In {\it Mass-Transfer Induced Activity in Galaxies}, ed. I Shlosman, pp. 234-50. Cambridge: Cambridge Univ. Press Heckman TM, Crane PC, Balick B. 1980. {\it Astron. Astrophys. Suppl.} 40:295-305 Heiles C. 1990. {\it Ap. J.} 354:483-91 Ho LC, Filippenko AV, Sargent WLW. 1997. {\it Ap. J.} 487:579-90 Ho LC, Filippenko AV, Sargent WLW. 1997. {\it Ap. J.} 487:591-602 Ho PTP, Beck SC, Turner JL. 1990. {\it Ap. J.} 349:57-66 Hodge PW. 1989. {\it Annu. Rev. Astron. Astrophys.} 27:139-59 Hodge PW, Kennicutt RC. 1983. {\it Astron. J.} 88:296-328 Huang JH, Gu QS, Su HJ, Hawarden TG, Liao XH, Wu GX. 1996. {\it Astron. Astrophys.} 313:13-24 Hubble E. 1926. {\it Ap. J.} 64:321-69 Huchra JP. 1977. {\it Ap. J.} 217:928-39 Hunter DA. 1994. {\it Astron. J.} 107:565-81 Hunter DA, Gallagher JS. 1985. {\it Ap. J.
Suppl.} 58:533-60 Hunter DA, Gallagher JS. 1990. {\it Ap. J.} 362:480-90 Hunter DA, Gillett FC, Gallagher JS, Rice WL, Low FJ. 1986. {\it Ap. J.} 303:171-85 Hunter DA, Hawley WN, Gallagher JS. 1993. {\it Astron. J.} 106:1797-1811 Isobe T, Feigelson E. 1992. {\it Ap. J. Suppl.} 79:197-211 Israel FP, van der Hulst JM. 1983. {\it Astron. J.} 88:1736-48 Joseph RD, Wright GS. 1985. {\it MNRAS} 214:87-95 Kaufman M, Bash FN, Kennicutt RC, Hodge PW. 1987. {\it Ap. J.} 319:61-75 Keel WC. 1983. {\it Ap. J.} 269:466-86 Keel WC, Kennicutt RC, Hummel E, van der Hulst JM. 1985. {\it Astron. J.} 90:708-30 Kennicutt RC. 1983a. {\it Ap. J.} 272:54-67 Kennicutt RC. 1983b. {\it Astron. J.} 88:483-88 Kennicutt RC. 1989. {\it Ap. J.} 344:685-703 Kennicutt RC. 1992a. {\it Ap. J.} 388:310-27 Kennicutt RC. 1992b. {\it Ap. J. Suppl.} 79:255-84 Kennicutt RC. 1997. In {\it The Interstellar Medium in Galaxies,} ed. JM van der Hulst, pp. 171-95. Dordrecht: Kluwer Kennicutt RC. 1998. {\it Ap. J.} 498:541-52 Kennicutt RC, Schweizer F, Barnes JE. 1998. {\it Galaxies: Interactions and Induced Star Formation, Saas-Fee Advanced Course 26,} ed. D Friedli, L Martinet, D Pfenniger, Berlin: Springer Kennicutt RC, Bothun GD, Schommer RA. 1984. {\it Astron. J.} 89:1279-87 Kennicutt RC, Bresolin F, Bomans DJ, Bothun GD, Thompson IB. 1995. {\it Astron. J.} 109:594-604 Kennicutt RC, Edgar BK, Hodge PW. 1989a. {\it Ap. J.} 337:761-81 Kennicutt RC, Keel WC, Blaha CA. 1989b. {\it Astron. J.} 97:1022-35 Kennicutt RC, Keel WC, van der Hulst JM, Hummel E, Roettiger KA. 1987. {\it Astron. J.} 93:1011-23 Kennicutt RC, Kent SM. 1983. {\it Astron. J.} 88:1094-1107 Kennicutt RC, Tamblyn P, Congdon CW. 1994. {\it Ap. J.} 435:22-36 Kinney AL, Bohlin RC, Calzetti D, Panagia N, Wyse RFG. 1993. {\it Ap. J. Suppl.} 86:5-93 Klein U, Grave R. 1986. {\it Astron. Astrophys.} 161:155-68 Knapen J, Beckman JE, Cepa J, van der Hulst JM, Rand RJ. 1992. {\it Ap. J. Lett.} 385:L37-L40 Larson RB.
1992. In {\it Star Formation in Stellar Systems}, ed. G Tenorio-Tagle, M Prieto, F S\'anchez. pp. 125-190. Cambridge: Cambridge Univ. Press Larson RB, Tinsley BM. 1978. {\it Ap. J.} 219:46-59 Lawrence A, Rowan-Robinson M, Leech K, Jones DHP, Wall JV. 1989. {\it MNRAS} 240:329-48 Leech M, Rowan-Robinson M, Lawrence A, Hughes JD. 1994. {\it MNRAS} 267:253-69 Lehnert MD, Heckman TM. 1996. {\it Ap. J.} 472:546-63 Leitherer C, Ferguson HC, Heckman TM, Lowenthal JD. 1995a. {\it Ap. J. Lett.} 454:L19-L22 Leitherer C, Fritze-v. Alvensleben U, Huchra JP. (eds.) 1996b. {\it From Stars to Galaxies: The Impact of Stellar Physics on Galaxy Evolution.} {\it ASP Conf. Proc.} Vol. 98. San Francisco: Astron. Soc. Pac. Leitherer C, Heckman TM. 1995. {\it Ap. J. Suppl.} 96:9-38 Leitherer C, Robert C, Heckman TM. 1995b. {\it Ap. J. Suppl.} 99:173-87 Leitherer C, Alloin D, Alvensleben UF, Gallagher JS, Huchra JP, et al. 1996a. {\it Publ. Astron. Soc. Pac.} 108:996-1017 Liu CT, Kennicutt RC. 1995. {\it Ap. J.} 450:547-58 Lonsdale CJ, Helou G. 1987. {\it Ap. J.} 314:513-24 Lonsdale CJ, Persson SE, Matthews K. 1984. {\it Ap. J.} 287:95-107 Lutz D, Genzel R, Sternberg A, Netzer H, Kunze D, et al. 1996. {\it Astron. Astrophys.} 315:L137-L140 Madau P, Ferguson H, Dickinson M, Giavalisco M, Steidel CC, Fruchter A. 1996. {\it MNRAS} 283:1388-1404 Madau P, Pozzetti L, Dickinson M. 1998. {\it Ap. J.} in press Maoz D, Filippenko AV, Ho LC, Macchetto D, Rix H-W, Schneider DP. 1996. {\it Ap. J. Suppl.} 107:215-26 Martin CL. 1997. {\it Ap. J.} in press Martinet L, Friedli D. 1997. {\it Astron. Astrophys.} 323:363-73 Massey P. 1998. In {\it The Stellar Initial Mass Function.} ed. GF Gilmore, DJ Howell. {\it ASP Conf. Proc.} 142:17-44. San Francisco: Astron. Soc. Pac. McCall ML, Schmidt FH. 1986. {\it Ap. J.} 311:548-53 Meurer GR, Heckman TM, Leitherer C, Kinney A, Robert C, Garnett DR. 1995. {\it Astron. J.} 110:2665-91 Meurer GR, Gerhardt R, Heckman TM, Lehnert MD, Leitherer C, Lowenthal J.
1997. {\it Astron. J.} 114:54-68 Mihos JC, Hernquist L. 1996. {\it Ap. J.} 464:641-63 Morgan WW. 1958. {\it Publ. Astron. Soc. Pac.} 70:364-91 Moss C, Whittle M. 1993. {\it Ap. J. Lett.} 407:L17-L20 Moshir M, Kopan G, Conrow J, McCallon H, Hacking P, et al. 1992. {\it Explanatory Supplement to the IRAS Faint Source Survey, Version 2}, JPL D-10015 8/92, (Pasadena: JPL) Niklas S, Klein U, Braine J, Wielebinski R. 1995. {\it Astron. Astrophys. Suppl.} 114:21-49 Niklas S, Klein U, Wielebinski R. 1997. {\it Astron. Astrophys.} 322:19-28 Norman C, Ikeuchi S. 1989. {\it Ap. J.} 345:372-83 Oey MS, Kennicutt RC. 1997. {\it MNRAS} 291:827-32 Ostriker JP, Thuan TX. 1975. {\it Ap. J.} 202:353-64 Patel K, Wilson CD. 1995a. {\it Ap. J.} 451:607-15 Patel K, Wilson CD. 1995b. {\it Ap. J.} 453:162-72 Pei YC, Fall SM. 1995. {\it Ap. J.} 454:69-76 Phillips AC. 1993. {\it Star Formation in Barred Spiral Galaxies.} PhD thesis, Univ. Washington, Seattle Pogge RW, Eskridge PB. 1987. {\it Astron. J.} 93:291-300 Pogge RW, Eskridge PB. 1993. {\it Astron. J.} 106:1405-19 Pompea SM, Rieke GH. 1990. {\it Ap. J.} 356:416-29 Puxley PJ, Brand PWJL, Moore TJT, Mountain CM, Nakai N, Yamashita AT. 1989. {\it Ap. J.} 345:163-68 Puxley PJ, Hawarden TG, Mountain CM. 1990. {\it Ap. J.} 364:77-86 Rieke GH, Lebofsky MJ. 1978. {\it Ap. J. Lett.} 220:L37-L41 Rieke GH, Lebofsky MJ. 1979. {\it Ann. Rev. Astron. Astrophys.} 17:477-511 Rieke GH, Loken K, Rieke MJ, Tamblyn P. 1993. {\it Ap. J.} 412:99-110 Rieke GH, Low FJ. 1972. {\it Ap. J. Lett.} 176:L95-L100 Roberts MS. 1963. {\it Ann. Rev. Astron. Astrophys.} 1:149-78 Roberts MS, Haynes MP. 1994. {\it Ann. Rev. Astron. Astrophys.} 32:115-52 Romanishin W. 1990. {\it Astron. J.} 100:373-76 Rowan-Robinson M, Crawford J. 1989. {\it MNRAS} 238:523-58 Rubin VC, Kenney JDP, Young JS. 1997. {\it Astron. J.} 113:1250-78 Ryder SD. 1993. {\it Massive Star Formation in Galactic Disks.} PhD thesis. Australian National Univ. Ryder SD, Dopita MA. 1993. {\it Ap. J.
Suppl.} 88:415-21 Ryder SD, Dopita MA. 1994. {\it Ap. J.} 430:142-62 Salpeter EE. 1955. {\it Ap. J.} 121:161-67 Sandage A. 1986. {\it Astron. Astrophys.} 161:89-101 Sanders DB, Mirabel IF. 1996. {\it Ann. Rev. Astron. Astrophys.} 34:749-92 Sanders DB, Scoville NZ, Soifer BT. 1991. {\it Ap. J.} 370:158-71 Sanders DB, Soifer BT, Elias JH, Madore BF, Matthews K, et al. 1988. {\it Ap. J.} 325:74-91 Sauvage M, Thuan TX. 1992. {\it Ap. J. Lett.} 396:L69-L73 Sauvage M, Thuan TX. 1994. {\it Ap. J.} 429:153-71 Scalo JM. 1986. {\it Fund. Cos. Phys.} 11:1-278 Schmidt M. 1959. {\it Ap. J.} 129:243-58 Scoville NZ, Becklin EE, Young JS, Capps RW. 1983. {\it Ap. J.} 271:512-23 Scoville NZ, Hibbard JE, Yun MS, van Gorkom JH. 1994. In {\it Mass-Transfer Induced Activity in Galaxies}, ed. I Shlosman, pp. 191-212. Cambridge: Cambridge Univ. Press Searle L, Sargent WLW, Bagnuolo WG. 1973. {\it Ap. J.} 179:427-38 S\'ersic JL, Pastoriza M. 1967. {\it Publ. Astron. Soc. Pac.} 79:152-55 Silk J. 1997. {\it Ap. J.} 481:703-09 Smith AM, Cornett RH. 1982. {\it Ap. J.} 261:1-11 Smith BJ, Harvey PM. 1996. {\it Ap. J.} 468:139-66 Smith EP, Pica AJ, Bohlin RC, Cornett RH, Fanelli MN. 1996. {\it Ap. J. Suppl.} 104:207-315 Soifer BT, Houck JR, Neugebauer G. 1987. {\it Ann. Rev. Astron. Astrophys.} 25:187-230 Solomon PM, Downes D, Radford SJE, Barrett JW. 1997. {\it Ap. J.} 478:144-61 Solomon PM, Sage LJ. 1988. {\it Ap. J.} 334:613-25 Stauffer JR. 1982. {\it Ap. J. Suppl.} 50:517-27 Steidel CC, Giavalisco M, Pettini M, Dickinson M, Adelberger KL. 1996. {\it Ap. J. Lett.} 462:L17-L21 Telesco CM. 1988. {\it Ann. Rev. Astron. Astrophys.} 26:343-76 Telesco CM, Dressel LL, Wolstencroft RD. 1993. {\it Ap. J.} 414:120-43 Telesco CM, Harper DA. 1980. {\it Ap. J.} 235:392-404 Telesco CM, Wolstencroft RD, Done C. 1988. {\it Ap. J.} 329:174-86 Thronson HA, Bally J, Hacking P. 1989. {\it Astron. J.} 97:363-74 Tinney CG, Scoville NZ, Sanders DB, Soifer BT. 1990. {\it Ap. J.} 362:473-79 Tinsley BM. 1968. {\it Ap.
J.} 151:547-65 Tinsley BM. 1972. {\it Astron. Astrophys.} 20:383-96 Tomita A, Tomita Y, Saito M. 1996. {\it Pub. Astron. Soc. J.} 48:285-303 Tully RB, Mould JR, Aaronson M. 1982. {\it Ap. J.} 257:527-37 Turner JL, Ho PTP. 1994. {\it Ap. J.} 421:122-39 van der Hulst JM, Kennicutt RC, Crane PC, Rots AH. 1988. {\it Astron. Astrophys.} 195:38-52 Veilleux S, Kim D-C, Sanders DB, Mazzarella JM, Soifer BT. 1995. {\it Ap. J. Suppl.} 98:171-217 Waller W, Fanelli M, Danks A, Hollis J. 1997. {\it The Ultraviolet Universe at Low and High Redshift, AIP Conf.} 408. New York: Am. Inst. Phys. Walterbos RAM, Braun R. 1994. {\it Ap. J.} 431:156-71 Walterbos RAM, Greenawalt B. 1996. {\it Ap. J.} 460:696-710 Warmels RH. 1988. {\it Astron. Astrophys. Suppl.} 72:427-47 Weedman DW, Feldman FR, Balzano VA, Ramsey LW, Sramek RA, Wu C-C. 1981. {\it Ap. J.} 248:105-12 Whitford AE. 1975. In {\it Galaxies in the Universe,} ed. A Sandage, M Sandage, J Kristian, {\it Stars Stellar Syst. Compend.} 9:159-76. Chicago: Univ. Chicago Press Wright GS, Joseph RD, Robertson NA, James PA, Meikle WPS. 1988. {\it MNRAS} 233:1-23 Wyse RFG. 1983. {\it MNRAS} 199:1P-8P Xu C, Sulentic JW. 1991. {\it Ap. J.} 374:407-30 Young JS, Allen L, Kenney JDP, Lesser A, Rownd B. 1996. {\it Astron. J.} 112:1903-27 Young JS, Scoville NZ. 1991. {\it Ann. Rev. Astron. Astrophys.} 29:581-625 Young JS, Schloerb FP, Kenney JDP, Lord SD. 1986. {\it Ap. J.} 304:443-58 \end{document}
\section{Introduction} It is recognized that one of the major conceptual advances in the understanding of synchronization phenomena in physical systems was its interpretation as a phase transition by Yoshiki Kuramoto. In fact, making analogies with mean-field theories for magnetic systems, Kuramoto developed his approach by defining an order parameter for his coupled-oscillator model, which allowed the characterization, under various conditions, of the transition of the oscillator system to the synchronized state \cite{Kuramoto1}. It was subsequently found that the distribution of natural frequencies influences the type of phase transition of the model, which can exhibit continuous or discontinuous transitions and therefore different critical exponents \cite{Basnarkov1,Basnarkov2}. Moreover, a phase transition has also been characterized by a symmetry breaking and a discontinuity in the first derivative of the order parameter in coupled oscillators \cite{Ewa}. Other studies have shown that, in addition to the frequency distribution, the form of the coupling function can also change the critical exponent of the order parameter at the transition \cite{Daido1,Crawford}. Although much has been discussed about the critical exponent of the order parameter, there is still a lack of studies that systematically address the other critical exponents that characterize the complete theory of phase transitions of these coupled phase oscillator systems. In part, this is due to the lack of proposals that deal with the thermodynamic extension of these models, especially the absence in the literature of a field concept associated with synchronization, as well as of an analytical expression for the susceptibility. Recently, several studies have been reported \cite{Daido2,Yoon,Hong} that address the susceptibility of phase oscillators but give priority to its numerical analysis.
In this article we present a broad analytical approach to the critical behavior of phase oscillators, where we calculate the main mean-field critical exponents for the generalized Kuramoto model systematically developed from a thermodynamic equilibrium approach \cite{Pinto}. We analyze the critical behavior of the various thermodynamic quantities involved, such as entropy, free energy, specific heat, synchronization field and susceptibility. We derive a fluctuation-dissipation relation which connects the order-parameter fluctuation with the susceptibility. Moreover, we show that the obtained exponents corroborate the well-known Rushbrooke and Widom scaling relations. \section{The model} In previous work \cite{Pinto}, we introduced the It\^{o} stochastic differential equation for phase oscillators in the form \begin{equation} \label{eqq} \dot{\theta_i}=\omega_{i}+f_{i}(\{\theta\})+\sqrt{g_{i}(\{\theta\})}\xi_i(t)\,, \end{equation} where $\omega_{i}$ are the natural frequencies of the oscillators, the drift force $f_{i}(\{\theta\})$ and the noise strength $g_{i}(\{\theta\})$ are general functions of the phases $\{\theta\}=\theta_{1},...,\theta_{N}$, and $\xi_{i}$ is a Gaussian white noise which obeys the relations \begin{equation} \label{fdt} \langle\xi_i(t)\xi_j(t')\rangle=2D\delta_{ij}\delta(t-t')\quad\mbox{with}\quad\langle\xi_i(t)\rangle=0\,, \end{equation} in which $D$ is the diffusion constant. We have already demonstrated that the equation for identical coupled phase oscillators, whose dynamics is governed by Eq. (\ref{eqq}), can be reduced to the mean-field approach as \begin{equation} \dot{\theta_i}= f(\theta_i)+\sqrt{g(\theta_i)}\xi_i(t).
\end{equation} For this case, the functions $f_{i}(\{\theta\})$ and $g_{i}(\{\theta\})$ take the forms \begin{eqnarray} &&f_{i}(\{\theta\})=f(\theta_i)=rK\sin(\psi -\theta_i)\\ &&g_{i}(\{\theta\})=g(\theta_i)=1+r\sigma\cos(\psi - \theta_i)\,, \end{eqnarray} where the drift term is controlled by the coupling constant $K$ and the noise strength is driven by the noise coupling $\sigma$ that determines the intensity of global modulation of the noise. The order parameter $r$ of the system as well as its average phase $\psi$ are defined by \begin{equation} re^{i\psi}=\frac{1}{N}\sum^{N}_{j=1}e^{i\theta_{j}}\,\,. \end{equation} Here, $r$ measures the phase coherence, {\it i.e.}, for $r=1$, the system is fully synchronized, whereas for $r=0$, the system is fully incoherent. A partially synchronized state is obtained when $0< r < 1$. When the interactions of the oscillator $\theta_i$ with the other oscillators of the system are no longer considered individually, but in terms of the mean-field effects of the system acting on the oscillator $i$, we can omit the index $i$ of the individual oscillator $\theta_i=\theta$, such that Eq. (3) is given by \begin{equation} \dot{\theta}= f(\theta)+\sqrt{g(\theta)}\xi(t). \label{eq:lang_mult} \end{equation} The corresponding Fokker-Planck equation from Eq. (\ref{eq:lang_mult}), in It\^{o} prescription, is given by \begin{eqnarray} \frac{\partial\rho}{\partial t} &=& D\frac{\partial^2}{\partial\theta^2}[g(\theta)\rho]-\frac{\partial}{\partial\theta}[f(\theta)\rho]\nonumber\\ \nonumber\\ &=&D{\partial^2\over\partial\theta^2}\big[(1+r\sigma\cos(\psi-\theta))\rho\big]-{\partial\over\partial\theta}\big[rK\sin(\psi-\theta)\rho\big]\,. 
\label{eq:F-P_mult} \end{eqnarray} This equation has an exact analytical expression for the stationary distribution $\rho_{s}(\theta)$, given by \begin{equation} \label{hypermises} \rho_{s}(\theta)={\cal N}^{-1}\big[z+\sgn(\sigma)\sqrt{z^2-1}\cos(\psi-\theta)\big]^\nu\,, \end{equation} where $\sgn(\sigma)$ is the sign function. The normalization constant ${\cal N}$ and the parameters $z$ and $\nu$ are \begin{eqnarray} \label{constn} &&{\cal N}=2\pi P_{\nu}^{0}(z)\,,\\ &&z=(1-\sigma^{2}r^2)^{-1/2}\,,\\ &&\nu=\frac{K}{D\sigma}-1=\frac{2}{T\sigma}-1\,, \end{eqnarray} where $P_{\nu}^{0}(z)$ is the associated Legendre function of order zero and $T=2D/K$ is the temperature of the system. \section{Critical behavior} We have already established the first law of thermodynamics of phase oscillators, which takes the form \begin{equation} \label{Fhelm} dF=-SdT-H_{s}dr\,, \end{equation} where $F=F(T,r)$ is the Helmholtz free energy, and $S$, $T$, and $r$ are the entropy, temperature and order parameter, respectively. Moreover, we have defined a new quantity $H_{s}$ that plays the role of a synchronization field on the oscillator system. All these thermodynamic quantities have been previously obtained and studied in \cite{Pinto}. Here, we will use these results to analyze the critical behavior of the system. \subsection{Order parameter} From the stationary density $\rho_{s}$, the order parameter $r$ can now be properly calculated as \begin{equation} \label{order2} r=\int_{0}^{2\pi}e^{i(\theta-\psi)}\rho_{s}(\theta)d\theta=\frac{\sgn(\sigma)}{1+\nu}\frac{P_{\nu}^{1}(z)}{P_{\nu}^{0}(z)}\,. \end{equation} Therefore, we can establish a critical region of the system for which $r\approx 0$, such that \begin{equation} z=(1-\sigma^{2}r^2)^{-1/2}\approx 1\,. \end{equation} We can now take the first terms of the expansion of Eq.
(\ref{order2}) for $z\approx 1$, which leads to \begin{equation} \label{r2} r=\frac{\sqrt{2}\sgn(\sigma)\nu}{2}(z-1)^{1/2}-\frac{\sqrt{2}\sgn(\sigma)(\nu^2+\nu+1)\nu}{8}(z-1)^{3/2}\,, \end{equation} where for $r\approx 0$, we also assume \begin{equation} \label{z4} z=(1-r^{2}\sigma^{2})^{-1/2}\approx 1+\frac{r^{2}\sigma^{2}}{2}+\frac{3r^{4}\sigma^{4}}{8}\,, \end{equation} and $(1+\frac{3}{4}r^{2}\sigma^{2})^{1/2}\approx 1+\frac{3}{8}r^{2}\sigma^{2}$ has been used. \subsection{Entropy} We have consistently shown that the free energy for a phase-oscillator system with multiplicative noise is given by \begin{equation} F=-T\ln[2\pi P_{\nu}^{0}(z)]\,, \label{F_geral} \end{equation} which allows us to obtain the entropy $S$ from the definition in Eq. (\ref{Fhelm}) as \begin{equation} \label{hyperentropy2} S=-\left(\frac{\partial F}{\partial T}\right)_r = \Big(1-\nu\frac{\partial}{\partial\nu}\Big)\ln\big[2\pi P_{\nu}^{0}(z)\big]\,. \end{equation} We now analyze the entropy, Eq. (\ref{hyperentropy2}), in the critical region by expanding the Legendre function for $z\approx 1$, which results in \begin{equation} P_{\nu}^{0}(z)\sim 1+\frac{\nu(\nu+1)(z-1)}{2}\,, \end{equation} so that the critical entropy is given by \begin{equation} \label{criticals} S=S_{max}-\frac{\nu^2\sigma^2}{4}r^2\,. \end{equation} Here $S_{max}=\ln(2\pi)$ is the maximum entropy of the system. \subsection{Synchronization field} The synchronization field is properly defined by the first law of thermodynamics, Eq. (\ref{Fhelm}), as \begin{equation} \label{fullH} H_{s}=-\left(\frac{\partial F}{\partial r}\right)_{T}= \frac{T(1+\nu)z^3\sigma^2r^{2}\sgn(\sigma)}{\sqrt{z^2-1}}\,. \end{equation} We have already shown that the field $H_{s}$ is associated with the noise effects in the system, i.e., the Gaussian white noise and the multiplicative noise.
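The order-parameter relation, Eq. (\ref{order2}), is implicit, since its right-hand side depends on $r$ through $z$, and it can be checked numerically. The following sketch (our own illustration, not part of the original derivation; the value $\sigma=0.1$ and the fixed-point scheme are arbitrary choices made here) iterates $r=\langle\cos\theta\rangle$ under the stationary density of Eq. (\ref{hypermises}), taking $\psi=0$ by rotational symmetry:

```python
import numpy as np

# Uniform angular grid; a Riemann sum on it is highly accurate for periodic integrands.
THETA = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def order_parameter(T, sigma=0.1, n_iter=300):
    """Fixed-point iteration of r = <cos(theta)> under the stationary density
    rho_s(theta) ~ [z + sgn(sigma) sqrt(z^2 - 1) cos(theta)]^nu, with psi = 0."""
    nu = 2.0 / (T * sigma) - 1.0                    # nu = 2/(T sigma) - 1
    r = 0.5                                         # nonzero seed
    for _ in range(n_iter):
        z = (1.0 - sigma**2 * r**2) ** -0.5         # z = (1 - sigma^2 r^2)^(-1/2)
        w = (z + np.sign(sigma) * np.sqrt(z**2 - 1.0) * np.cos(THETA)) ** nu
        r = float(np.sum(np.cos(THETA) * w) / np.sum(w))
        if r < 1e-12:                               # incoherent state reached
            return 0.0
    return r

for T in (0.6, 0.9, 1.1):
    print(f"T = {T}: r = {order_parameter(T):.3f}")
```

Below the critical temperature the iteration converges to a nonzero $r$, while well above it $r$ decays to zero; note that for finite $\sigma$ the external field $H_{\sigma}$ slightly lowers the temperature at which synchronization sets in, consistent with its negative sign discussed below.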
Hence $H_{s}$ can be decomposed as \begin{equation} \label{Hbroken} H_{s}= H_{0} + H_{\sigma}\,, \end{equation} where $H_{0}$ and $H_{\sigma}$ are the internal and external synchronization fields, respectively, defined as \begin{eqnarray} \label{HGmf} &&H_{0}=\lim_{\sigma\rightarrow 0}H_{s}=2r=2\frac{I_{1}(2r/T)}{I_{0}(2r/T)}\\ \nonumber\\ &&H_{\sigma}=H_{s}(\sigma,r,T)-H_{0}\,, \end{eqnarray} in which $I_{0}(x)$ and $I_{1}(x)$ are the modified Bessel functions of the first kind of order $0$ and $1$, respectively. Note that $H_{0}$ is associated with the part of the field $H_{s}$ that does not depend explicitly on $\sigma$, i.e., $H_{0}$ is the field related to the Gaussian white noise behavior. On the other hand, $H_{\sigma}$ corresponds to the part of the field $H_{s}$ that depends explicitly on $\sigma$, and is thus directly associated with the non-null multiplicative noise. Now expanding Eq. (\ref{fullH}) in the critical region $r\approx 0$, we obtain \begin{equation} \label{cfield} H_{s}\sim \frac{1}{2}T\nu^{2}\sigma^{2}r+\frac{1}{2}T\nu\sigma^{2}r\,. \end{equation} Thus, inserting $\nu=2/T\sigma-1$ in Eq. (\ref{cfield}) and retaining the terms in $r$, we obtain the critical synchronization field \begin{equation} \label{ccfield} H_{s}=\frac{2r}{T}-\sigma r=H_{0}+H_{\sigma}\,. \end{equation} Note that $H_{0}=\lim_{\sigma\rightarrow 0}H_{s}=2r/T$ is the critical internal field and \begin{equation} \label{cHsig} H_{\sigma}=-\sigma r\,, \end{equation} is the critical external field. Indeed, the field $H_\sigma$ depends explicitly on the noise coupling $\sigma$, as expected. The negative sign also shows that the synchronization $r$ should decrease as $\sigma$ increases in the critical region. \subsection{Specific heat} We can now define the critical specific heat at constant external field $H_{\sigma}$. First, we express the critical entropy, Eq. (\ref{criticals}), in terms of $H_{\sigma}$, Eq.
(\ref{cHsig}), which gives \begin{equation} S=S_{max}-\frac{r^2}{T^2}-\frac{H_{\sigma}}{T}r-\frac{H_{\sigma}^{2}}{4}. \end{equation} It follows that the critical specific heat at constant external field is given by \begin{equation} \label{sc11} C_{H_{\sigma}}=T\left(\frac{\partial S}{\partial T} \right)_{H_{\sigma}}=\frac{2r^2}{T^2}-\frac{2r}{T}\frac{\partial r}{\partial T}+H_{\sigma}\Big(\frac{r}{T}-\frac{\partial r}{\partial T}\Big)\,. \end{equation} It is important to emphasize that we have obtained the main thermodynamic functions in the critical region for the phase oscillator system. Furthermore, the concept of the critical synchronization field shown above is of crucial importance for determining the complete thermodynamics. Indeed, this reflects the precise determination of the main critical exponents for the model, as we will see below. \subsection{Fluctuation-dissipation relation} We can also examine the relation between the order-parameter fluctuation $\left<r^{2}\right>$ and its response function in the vicinity of the phase transition. We begin by considering the probability of a fluctuation $w(r)$ of the order parameter, which can be directly obtained as \begin{equation} \label{Sd} w(r)\propto \exp{\left(\Delta S\right)}\,, \end{equation} where $\Delta S$ is the change in entropy in the fluctuation, which can be expressed as $\Delta S=-W_{min}/T$ \cite{Landau}, where $W_{min}$ is the minimum external work which must be performed on the system in order to reversibly produce this fluctuation. In the critical region, the minimum external work is then related to the external field $H_{\sigma}$, Eq. (\ref{cHsig}), according to \begin{equation} W_{min}=\int_{0}^{r}H_{\sigma}dr=-\frac{\sigma r^{2}}{2}\,. \end{equation} On the other hand, the definition of the susceptibility gives \begin{equation} \chi^{-1} =\left(\frac{\partial H_{\sigma}}{\partial r}\right)_{T}=-\sigma\,, \end{equation} which allows us to rewrite Eq.
(\ref{Sd}) in the form \begin{equation} w(r)\propto \exp{\left(-\frac{r^2}{2\chi T}\right)}\,, \end{equation} which is the usual Gaussian distribution, whose mean-square fluctuation $\left<r^{2}\right>$ takes the form \begin{equation} \left<r^{2}\right>=T_{c}\chi\,. \label{fltdis} \end{equation} This is the fluctuation-dissipation relation for the oscillator system, relating its linear response $\chi$, due to the action of the external field $H_{\sigma}$, with the fluctuations of the order parameter around the equilibrium region $\left<r\right>=0$. Indeed, a similar relationship holds in magnetic systems for the magnetization fluctuation \cite{Falk}. \section{Critical exponents} We are now able to determine the main critical exponents for the phase oscillator system. First, we must define, in the critical region, the quantity \begin{equation} \tau=T_c-T=1-T\,, \end{equation} with $T\approx T_{c}$, where $T_{c}=1$ is the critical temperature. \subsection{Order parameter} We start by determining the critical exponents of the order parameter $r$. Note that Eq. (\ref{r2}) can be written in terms of the external field, Eq. (\ref{cHsig}), as \begin{equation} \label{r20} r^{2}\Lambda=1-\frac{1}{T}+\frac{\sigma}{2}=1-\frac{1}{T}-\frac{H_{\sigma}}{2r}\,, \end{equation} where \begin{equation} \label{r21} \Lambda=\frac{\nu\sigma^3}{16}\big(3-\nu^2-\nu-1\big)=\frac{H_{\sigma}^{3}}{8r^3}+\frac{H_{\sigma}^{2}}{8rT}-\frac{H_{\sigma}}{2rT^2}-\frac{1}{2T^3}\,. \end{equation} We should now impose that $T\rightarrow T_{c}=1$ for null field $H_{\sigma}=0$, such that Eq. (\ref{r21}) goes to $\Lambda=-\frac{1}{2}$ and Eq. (\ref{r20}) converges to \begin{equation} \label{ordbe} r=\lim_{\tau\rightarrow 0}\Big[\Lambda^{-1}\big(1-\frac{1}{T}\big)\Big]^{1/2}\propto (1-T)^{1/2}\propto \tau^{\beta}, \end{equation} where $\beta=\frac{1}{2}$ is the critical exponent for the order parameter $r$, as has already been obtained for the Kuramoto model \cite{Basnarkov1,Crawford}.
Here, however, we have presented a rigorous demonstration through the synchronization field concept. Furthermore, note also that along the isotherm $T=T_{c}=1$, Eq. (\ref{r20}) leads to \begin{equation} \label{rr33} r^{3}\Lambda=-\frac{1}{2}H_{\sigma}\,, \end{equation} and taking $\tau=0$ and $H_{\sigma} \rightarrow 0$ (with $\Lambda=-1/2$), we get the relation between $r$ and the external field $H_{\sigma}$ as \begin{equation} \label{orddel} r=-\lim_{H_{\sigma}\rightarrow 0}\frac{1}{(2\Lambda)^{1/3}} H_{\sigma}^{1/3}\propto H_{\sigma}^{1/\delta}, \end{equation} where we obtain the critical exponent $\delta=3$ for the system. \subsection{Specific heat} In order to obtain the critical exponent for the specific heat, we start from Eq.~(\ref{sc11}) for null field $H_{\sigma}=0$ and $\tau\rightarrow0$, which leads to \begin{equation} \label{CH} C_{H_{\sigma}=0}=\lim_{\tau\rightarrow 0}\left[\frac{2r^2}{T^2}-\frac{2r}{T}\frac{\partial r}{\partial T}\right]=\left(-2r\frac{\partial r}{\partial T}\right)_{r= \tau^{\frac{1}{2}}}\propto \tau^{\frac{1}{2}}\tau^{-\frac{1}{2}}\propto \tau^{\alpha}\,, \end{equation} where $\alpha=0$ is the critical exponent for the specific heat. \subsection{Susceptibility} Now, from the definition of the inverse susceptibility $\chi^{-1}$ and taking Eq. (\ref{rr33}) for the null field $H_{\sigma}=0$, $\Lambda=-1/2$, and $\tau\rightarrow 0$, we obtain \begin{equation} \label{sus} \chi^{-1} =\left( \frac{\partial H_{\sigma} }{ \partial r} \right)_{H_{\sigma}=0}\propto r^{2}\propto \tau\,, \end{equation} where we use $r\propto \tau^{1/2}$. This shows that the susceptibility is divergent, i.e., \begin{equation} \label{sus2} \chi\propto \tau^{-\gamma}, \end{equation} with critical exponent $\gamma=1$, in precise accordance with mean-field theory. According to Eq.~(\ref{sus2}), the mean square fluctuation of the order parameter, Eq.~(\ref{fltdis}), increases as $1/\tau$ when $T\rightarrow T_{c}$.
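The exponents $\delta=3$ and $\gamma=1$ obtained above can likewise be recovered by log-log fits of the reduced relations $r^{3}\Lambda=-H_{\sigma}/2$ (on the critical isotherm) and $\chi^{-1}\propto r^{2}\propto\tau$, with $\Lambda=-1/2$; again an illustrative sanity check only:

```python
import numpy as np

# Fit delta from the critical isotherm r^3 * Lambda = -H/2 and gamma from
# chi^{-1} ∝ r^2 ∝ tau, with Lambda = -1/2 (illustrative check only).
Lam = -0.5

H = np.logspace(-9, -6, 50)                  # critical isotherm T = T_c
r_iso = (-H / (2.0 * Lam)) ** (1.0 / 3.0)
inv_delta = np.polyfit(np.log(H), np.log(r_iso), 1)[0]

tau = np.logspace(-6, -3, 50)                # tau = 1 - T, zero field
T = 1.0 - tau
r2 = (1.0 - 1.0 / T) / Lam                   # r^2 from r^2 * Lambda = 1 - 1/T
gamma = np.polyfit(np.log(tau), np.log(r2), 1)[0]   # chi ∝ tau^{-gamma}

print(round(1.0 / inv_delta, 2), round(gamma, 2))  # ≈ 3 and 1
assert abs(1.0 / inv_delta - 3.0) < 1e-2
assert abs(gamma - 1.0) < 2e-2
```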
These results conclude the study of the four main critical exponents of the generalized Kuramoto model, for which we have also shown the fluctuation-dissipation relation. It is important to note that the critical exponents $\beta=\frac{1}{2}$, $\alpha=0$, $\delta=3$, and $\gamma=1$ are the mean-field exponents, which satisfy $\alpha +2\beta+\gamma=2$ and $\gamma=\beta(\delta-1)$, the Rushbrooke and Widom scaling laws, respectively. \section{Conclusions} In this article we have systematically studied the critical behavior of phase oscillators with multiplicative noise from a thermodynamic equilibrium approach. We derived the set of the four main mean-field critical exponents $\alpha=0$, $\beta=1/2$, $\gamma=1$ and $\delta=3$ for the system, which obey the universal scaling laws of Rushbrooke and Widom. Indeed, this is the first time that all of these exponents have been presented for phase oscillator systems. The critical behavior associated with phase oscillators may appear in many physical systems, such as biomolecular networks and neural systems \cite{Plentz,Luonan,Kromer}, in particular in the phenomenon of neuronal avalanches \cite{Yu}, where a synchronization transition is present. Furthermore, the susceptibility and specific heat as presented in this article can play important roles in the description of the thermodynamics of these physical systems. \section{Acknowledgments} We acknowledge the support of the Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq) Brazil, the Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior (CAPES) Brazil, and the Funda\c{c}\~{a}o de Apoio \`{a} Pesquisa do Distrito Federal (FAP-DF), Brazil. \section*{References}
\section{Introduction} In this paper, we consider the following generalized Korteweg-de Vries (KdV) equation \begin{equation}\label{eq:Kdv_Equation} u_{t} +\varepsilon u_{xxx} + f(u)_{x} = g(x,t), \qquad x\in\Omega=[a, b],\, t>0 \end{equation} with periodic boundary conditions and the initial condition $u(x,0)=u_{0}(x)$. Here, $f(u)$ is usually some polynomial of $u$. When \bo{$\varepsilon=1$,} $f(u)=3u^2$ and $g \equiv 0$, \eqref{eq:Kdv_Equation} represents the \bo{original} KdV equation. KdV equations are widely adopted to model one-dimensional long waves and have applications in plasma physics, biology, nonlinear optics, quantum mechanics, and fluid mechanics; \bo{see \cite{HammackSegur74,HammackSegur78,Osborne90, GardnerMorikawa60,vanWijngaarden72,Kluwick83,HelfrichWhitehead89}}. There is also considerable interest in theoretical studies of the mathematical properties of solutions to KdV equations. Many modern areas of mathematics and theoretical physics have opened up thanks to basic research on KdV equations. As a consequence, there have been intense efforts to develop numerical methods for KdV equations, including finite difference methods \cite{Vliegenthart71,Goda77,LiVisbal06}, finite element methods \cite{Winther80,ArnoldWinther82,SANZSERNA1981,BakerDougalisKarakashian83}, spectral methods \cite{FornbergWhitham78,HuangSloan92,GuoShen01,MaSun01} and operator splitting methods \cite{Holden1999,Tao11}. KdV equations feature a combination of the nonlinear term and the dispersive term $u_{xxx}$, which makes it difficult to achieve numerical properties such as stability and convergence. Moreover, it is known that KdV equations may have ``blow-up'' solutions, but the mechanism of the singularity formation is not clear \cite{Merle01,MartelMerle02}. The study in \cite{BonaDougalisKarakashianMcKinney90} showed that the simulation of blow-up solutions will almost surely require highly nonuniform meshes.
This makes Discontinuous Galerkin (DG) methods suitable for solving KdV equations due to their advantages including high-order accuracy, compact stencil, capability of handling nonuniform meshes and variable degrees, and flexibility in constructing the numerical fluxes to achieve conservation of particular physical quantities. DG methods \cite{shu2009discontinuous, YanShu02, xu2005local, ChengShu08, XuShu12, HuffordXing14, chen2016new} have been developed for KdV type equations. In particular, there have been continuous efforts on developing DG methods that conserve physically interesting quantities of their solutions. Indeed, all KdV equations have three such quantities: $$ \textrm{Mass: } \int_\Omega u dx, \qquad \textrm {Energy:} \int_\Omega u^2 dx, \qquad \textrm {Hamiltonian:} \int_\Omega (\frac{\varepsilon}{2}u_x^2-V(u)) dx,$$where $V(\cdot)$ is an anti-derivative of $f(\cdot)$. This property is \bo{crucial} for their solitary wave solutions to maintain amplitude, shape, and speed even after colliding with another such wave. Numerical results \cite{bona2013conservative,KarakashianXing16,liu2016hamiltonian,zhang2019conservative} showed that DG methods preserving these invariants can maintain numerical stability over a long time period and help reduce phase and shape error after long time integration. However, existing conservative DG methods cannot conserve the energy and Hamiltonian simultaneously though the conservation of mass is easy to achieve. In Table \ref{tab:WhoConserve} we list some conservative DG methods for KdV equations. This is in no way an exhaustive list, but it shows the trend and main efforts in the development of conservative DG methods for KdV equations. 
We can see that the methods in \cite{bona2013conservative,yi2013direct, KarakashianXing16,chen2016new} and the first method in \cite{zhang2019conservative} conserve the energy but not the Hamiltonian, while the method in \cite{liu2016hamiltonian} and the second method in \cite{zhang2019conservative} conserve the Hamiltonian but not the energy. \begin{table}[h!] \centering \begin{tabular}{|p{6.5cm}|p{1cm}|p{2.0cm}|p{1.5cm}|} \hline \multicolumn{1}{|c|}{Method} & Year & Hamiltonian & Energy \\ \hline {Conservative DG for the Generalized KdV (GKdV) \cite{bona2013conservative}} & \multicolumn{1}{c|}{\multirow{2}{*}{2013}} & \multicolumn{1}{c|}{\multirow{2}{*}{\ding{55}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\ding{51}}} \\ \hline {Direct DG for GKdV \cite{yi2013direct}} & \multicolumn{1}{c|}{\multirow{1}{*}{2013}} & \multicolumn{1}{c|}{\multirow{1}{*}{\ding{55}}} & \multicolumn{1}{c|}{\multirow{1}{*}{\ding{51}}} \\ \hline {Conservative LDG for GKdV \cite{KarakashianXing16}} & \multicolumn{1}{c|}{\multirow{1}{*}{2016}} & \multicolumn{1}{c|}{\multirow{1}{*}{\ding{55}}} & \multicolumn{1}{c|}{\multirow{1}{*}{\ding{51}}} \\ \hline {$H^2$-Conservative DG for Third-Order Equations\cite{chen2016new}} & \multicolumn{1}{c|}{\multirow{2}{*}{2016}} &\multicolumn{1}{c|}{\multirow{2}{*}{\ding{55}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\ding{51}}}\\ \hline {Hamiltonian-Preserving DG for GKdV \cite{liu2016hamiltonian}} & \multicolumn{1}{c|}{\multirow{1}{*}{2016}} & \multicolumn{1}{c|}{\multirow{1}{*}{\ding{51}}} & \multicolumn{1}{c|}{\multirow{1}{*}{\ding{55}}} \\ \hline {Conservative and Dissipative LDG for KdV \cite{zhang2019conservative}, \quad Scheme I} & \multicolumn{1}{c|}{\multirow{2}{*}{2019}} & \multicolumn{1}{c|}{\multirow{2}{*}{\ding{51}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\ding{55}}} \\ \hline {Conservative and Dissipative LDG for KdV \cite{zhang2019conservative}, \quad Scheme II} & \multicolumn{1}{c|}{\multirow{2}{*}{2019}} & 
\multicolumn{1}{c|}{\multirow{2}{*}{\ding{55}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\ding{51}}} \\ \hline \end{tabular}\\ \caption{\label{tab:WhoConserve} The conservation properties of the previous DG methods} \end{table} Most of these conservative DG methods have an optimal convergence order for even polynomial degrees and a sub-optimal order for odd degrees, except that the Hamiltonian-conserving method in \cite{zhang2019conservative} has an optimal convergence order for all polynomial degrees. In this work, we develop a new DG method for KdV equations that conserves all three invariants: mass, energy, and Hamiltonian. This conservative DG method will allow us to model and simulate the soliton wave more accurately over a long time period. Our novel idea in designing the method is to treat the penalization/stabilization parameters in the numerical fluxes implicitly (i.e.\dong{,} as new unknowns), which allows two additional equations in the formulation of the DG method that explicitly enforce the conservation of energy and Hamiltonian. The stabilization parameters are solved for together with the approximations of the exact solutions. Due to the time-step constraint implied by the third-order spatial derivative, we use implicit time marching schemes to avoid extremely small time steps. Since our DG scheme for spatial discretization is conservative, in implementation we use the implicit midpoint method, which is conservative, for time discretization. Our numerical results show that, just like most other conservative DG methods in the literature, our method has optimal convergence for even polynomial degrees and sub-optimal convergence for odd ones. More significantly, our method can conserve both the energy and the Hamiltonian over a long time period. \dong{ As shown in Table \ref{tab:WhoConserve}, both standard DG and LDG methods have appeared in the literature to achieve conservation.
We choose the LDG-like framework for our method because it has three numerical traces, and thus more room for tuning for better conservation properties. We would like to point out that our method has computational complexity that is only negligibly more than standard LDG. When the equation is nonlinear (which is our focus), both discretized systems are nonlinear thus needing iterative solvers. Standard LDG system has $3 N (k+1)$ equations when $N$ elements and polynomials of degree $k$ are used. Our system has $3 N (k+1) + 2$ equations due to the introduction of two new unknown (constant) parameters.} We would like to \dong{further} remark that our idea of enforcing conservation properties by using implicit stabilization parameters can be applied to develop new conservative methods for other types of problems that feature conservation of physical quantities. \dong{It can also be extended to preserve more invariants for the KdV equation by introducing more than two implicit stabilization parameters.} This opens the door to promising future extensions. The rest of the paper is structured as follows: Section 2 will describe the formulation of our DG method and prove the conservation properties. Implementation of our method is briefly discussed in Section 3\bo{,} leaving further details \bo{to} the Appendix. We display numerical results on solving third-order linear and nonlinear equations and the classical KdV equation, showing the order of convergence and conservation properties we have observed in our numerical experiments in Section 4. Finally, we end with concluding remarks in Section 5. \section{Main Results} \label{sec:mainresults} In this section, we discuss our main results. We start by introducing our notations. Next, we describe our DG method and discuss the choice of penalization parameters that ensure the conservation of the Hamiltonian and energy. 
After that, we prove that our numerical solutions do conserve the three invariants: mass, energy, and Hamiltonian. \subsection{Notation} To define our DG method, let us first introduce some notation. We partition the domain $\Omega = (a,b)$ as \[ {\mathcal T}_h=\{I_i:=(x_{i-1}, x_i): a=x_0<x_1<\cdots<x_{N-1}<x_N=b\}. \] We use $\partial{\mathcal T}_h:=\{ \partial I_i: i=1,\dots,N\}$ to denote the set of all element boundaries, and $\mathscr{E}_h:=\{x_i\}_{i=0}^N$ to denote all the nodes. We also set $h_i = x_i - x_{i-1}$ and $h:=\max_{1\le i\le N} h_i$. For any function $\zeta\in L^2(\partial{\mathcal T}_h)$, we denote its values on $\partial I_i:=\{x^+_{i-1}, x^-_i\}$ by $\zeta(x_{i-1}^+)$ (or simply $\zeta^+_{i-1}$) and $\zeta(x_i^-)$ (or simply $\zeta^-_i$). Note that $\zeta(x_{i}^+)$ does not have to be equal to $\zeta(x_i^-)$. In contrast, for any function $\eta\in L^2(\mathscr{E}_h)$, its value at $x_i$, $\eta(x_i)$ (or simply $\eta_i$) is uniquely defined; in this case, $\eta(x_i^-) = \eta(x_{i}^+) = \eta(x_i)$. We let $$(\varphi, v):=\sum_{i=1}^{N} (\varphi,v)_{I_i}, \quad \langle \varphi, vn\rangle:=\sum_{i=1}^{N}\langle\varphi, vn\rangle_{\partial I_i},$$ where $$(\varphi, v)_{I_i}=\int_{I_i} \varphi v dx, \quad \langle \varphi,v n\rangle_{\partial I_i}=\varphi(x_i^-)v(x_i^-)n(x_i^-) +\varphi(x_{i-1}^+)v(x_{i-1}^+)n(x_{i-1}^+).$$ Here $n$ denotes the outward unit normal to $I_i$, that is $n(x_{i-1}^+):=-1$ and $n(x_i^-):=1$. We define the average and jump of $\varphi$ as $$\{\varphi\}(x_i):=\frac{1}{2}\big(\varphi(x_i^-)+\varphi(x_i^+)\big), \quad \jmp{\varphi}(x_i):= \varphi(x_i^-)-\varphi(x_i^+).$$ We also define the finite element space \begin{equation*} {W}_h^k = \{\omega \in L^2(\mathcal{T}_h): \;\; \omega|_{K} \in {P}_{k}(K) \textrm{ for any } K\in\mathcal{T}_h,\; \mbox{ and } \omega(a)=\omega(b)\}, \end{equation*} where $P_k(D)$ is the space of polynomials of degree at most $k$ on the set $D$.
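For concreteness, the average and jump operators above act on the two one-sided values of a piecewise function at an interior node; a minimal sketch (illustration only, not part of the method):

```python
# Minimal sketch of the trace operators: at an interior node x_i a piecewise
# polynomial has a left value phi(x_i^-) and a right value phi(x_i^+); the
# average and jump combine them as defined above.
def average(phi_minus, phi_plus):
    return 0.5 * (phi_minus + phi_plus)

def jump(phi_minus, phi_plus):   # note the sign convention: left minus right
    return phi_minus - phi_plus

print(average(2.0, 0.5), jump(2.0, 0.5))  # 1.25 1.5
assert average(2.0, 0.5) == 1.25 and jump(2.0, 0.5) == 1.5
```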
Finally, the $H^s(D)$-norm is denoted by $\|\cdot\|_{s, D}$. We drop the first subindex if $s=0$, and the second if $D=\Omega$ or \dong{$\mathcal{T}_h$}. \subsection{The DG method} \label{sec:dgm} To define our DG method for the KdV equation \eqref{eq:Kdv_Equation}, we first rewrite it as the following system of first-order equations \begin{equation}\label{eq:kdvsystem} \begin{split} q - u_x \,&=\, 0, \qquad\;\,{\textrm {in} }\; \Omega,\\ p -\varepsilon q_x\,&=\, f(u), \quad {\textrm {in} }\; \Omega,\\ u_t+ p_x \,&=\,g(x,t), \quad \, {\textrm {in} }\; \Omega, \end{split} \end{equation} with the initial condition $u(x,0)=u_0(x)$ and the periodic boundary conditions $$u(a) = u(b), \quad q(a) = q(b), \quad p(a) = p(b).$$ We discretize \eqref{eq:kdvsystem} by seeking $(u_h, q_h, p_h)$ as approximations to $(u, q, p)$ in the space $\left(W_h^{k}\right)^3$ such that \begin{subequations} \label{eq:scheme_time} \begin{alignat}{2} \label{eq:scheme_time1} ({q}_h,{v}) + (u_h,v_x) - \langle \widehat{u}_h,{v}n \rangle & = 0, & \forall v \in W_h^k,\\ \label{eq:scheme_time2} ({p}_h,{z}) + \varepsilon (q_h,z_x) - \varepsilon \langle \widehat{q}_h,{z}n \rangle & = (f(u_h), z), \quad & \forall z \in W_h^k,\\ \label{eq:scheme_time3} ({u}_{ht},{w})-(p_h,w_x) + \langle \widehat{p_h}, {w}n \rangle & = (g, w),& \forall w \in W_h^k. \end{alignat} \end{subequations} Here, $\widehat{u_h}, \widehat{q_h}, \widehat{p_h}$ are the so-called numerical traces, whose definitions are in general critical for the accuracy and stability of the DG method \cite{ArnoldBrezziCockburnMarini02}. There are multiple ways of defining them. We adopt the one that is similar to the Local Discontinuous Galerkin (LDG) methods \cite{ArnoldBrezziCockburnMarini02} \begin{subequations} \label{eq:scheme_hats} \begin{alignat}{1} \label{eq:uhat} \widehat{u_h} = & \{u_h\},\\ \label{eq:qhat} \widehat{q_h} = & \{q_h\} + \tau_{qu} \jmp{u_h},\\ \label{eq:phat} \widehat{p_h} = & \{p_h\} + \tau_{pu} \jmp{u_h}.
\end{alignat} \end{subequations} The key difference is that, instead of specifying the values of the penalty parameters $(\tau_{qu}, \tau_{pu})$ as done by LDG \cite{ArnoldBrezziCockburnMarini02}, we leave them as unknowns. It is exactly due to the resulting freedom of placing two more constraints that, as shown in Lemma \ref{lemma:conserv_general} in the next section, the scheme is able to conserve the mass, $L^2$-energy, and the Hamiltonian of the numerical solutions. Toward that end, we require that the penalization parameters $\tau_{qu}$ and $\tau_{pu}$ be constants that satisfy \begin{subequations} \label{eq:trace1} \begin{alignat}{1} \label{eq:tau_pu1} &\tau_{pu}\sum_{i=1}^N \jmp{u_h}^2(x_i) - \varepsilon\tau_{qu}\sum_{i=1}^N \jmp{u_h} \jmp{q_h}(x_i)=\sum_{i=1}^N \Big(\jmp{V(u_h)} - \{\varPi f(u_h)\} \jmp{u_h}\Big)(x_i),\\ \label{eq:tau_qu1} &\tau_{pu}\sum_{i=1}^N \jmp{p_h} \jmp{u_h}(x_i)+\varepsilon\tau_{qu}\sum_{i=1}^N\jmp{u_h}_t\jmp{u_h}(x_i)=0. \end{alignat} \end{subequations} Here, $V(\cdot)$ is an antiderivative of $f(\cdot)$. In summary, our method is to seek $(u_h, q_h, p_h) \in \left(W_h^{k}\right)^3$ and penalty parameters $(\tau_{qu}, \tau_{pu})$ such that \eqref{eq:scheme_time1} - \eqref{eq:scheme_time3}\bo{,} \eqref{eq:tau_pu1}\bo{,} and \eqref{eq:tau_qu1} are satisfied. \begin{remark} Here we would like to point out that our scheme is not an LDG method. To our knowledge, existing LDG methods do not conserve the energy of solutions to KdV equations. The penalty parameters in LDG methods are known constants, while in our schemes $\tau_{qu}$ and \dong{$\tau_{pu}$} are considered as new unknowns. Correspondingly we have two more equations from \eqref{eq:trace1}. 
In fact, we can write $\tau_{qu}$ and \dong{$\tau_{pu}$} in terms of $u_h, q_h,p_h$ as \begin{alignat*}{1} \tau_{qu}&=-\frac{1}{\varepsilon}\frac{\eta(p_h, u_h)\sum_{i=1}^{N}\Big(\jmp{V(u_h)}-\{\Pi f(u_h)\}\jmp{u_h}\Big) }{\eta(q_h, u_h)\eta(p_h, u_h)+\eta(u_{ht}, u_h)\eta(u_h, u_h)},\\ \tau_{pu}&= -\varepsilon\frac{\eta(u_{ht}, u_h)}{\eta(p_h,u_h)} \tau_{qu}\dong{,} \end{alignat*} where we have used the notation $\eta(w,v)=\sum_{i=1}^N\jmp{w} \jmp{v}(x_i).$ These expressions show that our method is different from LDG Methods. \end{remark} \subsection{Conservative properties} Now we discuss the conservation properties of the schemes in the previous section. First, in the following Lemma we give general conditions for $\widehat{u_h}, \widehat{q_h}, \widehat{p_h}$ under which DG methods that satisfy \eqref{eq:scheme_time} conserve the mass, $L^2$ energy, and Hamiltonian. Then we apply the Lemma to prove the conservation properties for the DG method defined by \eqref{eq:scheme_time}-\eqref{eq:trace1}. \begin{lemma}\label{lemma:conserv_general} Suppose $(u_h, q_h, p_h)$ satisfy \eqref{eq:scheme_time} with $g=0$. \\ (i) If $\widehat{p_h}$ is single-valued, then we have \label{eq:mass_conserv} \[ \frac{d}{dt}\int_{{\mathcal T}_h} u_h \,dx=0, \quad \, {\rm (mass \,-\, conservation)}. \] (ii) If $\widehat{u_h}, \widehat{q_h}, \widehat{p_h}$ are single-valued and satisfy the condition \begin{alignat}{1} \label{eq:conserv_cond1} 0=&\sum_{i=1}^N \Big(\jmp{V(u_h)} - \{\varPi f(u_h)\} \jmp{u_h} +(\jmp{\varPi f(u_h)} - \jmp{p_h}) (\widehat{u_h} - \{u_h\})\\ \nonumber & \quad\quad-\jmp{u_h}(\widehat{p_h} - \{p_h\}) + \varepsilon\jmp{q_h} (\widehat{q_h} - \{q_h\}) \Big)(x_i), \end{alignat} then we have \begin{alignat}{1} \label{eq:L2_conserv} \frac{d}{dt}\int_{{\mathcal T}_h} u_h^2 \,dx=0, & \quad \, ({\rm energy}-{\rm conservation}). 
\end{alignat} (iii) If $\widehat{u_h}, \widehat{q_h}, \widehat{p_h}$ are single-valued and satisfy the condition \begin{alignat}{1} \label{eq:conserv_cond2} 0= & \sum_{i=1}^{N} \left( \jmp{p_h} (\widehat{p_h} - \{p_h\}) + \varepsilon\jmp{q_h} (\widehat{u_h} - \{u_h\})_t +\varepsilon \jmp{u_h}_t (\widehat{q_h} - \{q_h\})\right)(x_i), \end{alignat} then we have \begin{alignat}{1} \label{eq:Hamiltonian_conserv} \frac{d}{dt}\int_{{\mathcal T}_h} \Big(\,\frac{\varepsilon}{2} q_h^2 - V(u_h)\Big) \,dx=0, & \quad\, ({\rm Hamiltonian}-{\rm conservation}). \end{alignat} \end{lemma} \begin{proof} (i) To prove the mass conservation, we just need to take $w = 1$ in \eqref{eq:scheme_time3} and use the fact that $\widehat{p_h}$ is single-valued. (ii) Next, we prove the energy-conservation, which is also called $L^2$-conservation. We take $w := u_h$, $v := -p_h + \varPi f(u_h)$, and $z := q_h$ in \eqref{eq:scheme_time} and add the three equations together to get \begin{alignat*}{1} (f(u_h), q_h) = & (u_{ht}, u_h) - (p_h, u_{hx}) + \langle \widehat{p_h}, u_h n \rangle - (u_h, p_{hx}) + \langle \widehat{u}_h, p_hn \rangle + \varepsilon(q_h, q_{hx}) \\ & - \varepsilon\langle \widehat{q}_h, q_h n\rangle + (q_h, \varPi f(u_h)) + \langle u_h - \widehat{u_h}, \varPi f(u_h) n\rangle - (\varPi f(u_h), u_{hx}) \end{alignat*} Since $$(f(u_h), q_h) = (\varPi f(u_h), q_h)$$ and $$(\varPi f(u_h), u_{hx})=(f(u_h), u_{hx})= \langle V(u_h), n\rangle,$$ we have that \begin{alignat*}{1} 0 = & (u_{ht}, u_h) - \langle p_h, u_h n\rangle + \langle \widehat{p_h}, u_h n\rangle + \langle \widehat{u}_h, p_h n\rangle + \frac{\varepsilon}{2}\langle q_h^2,n \rangle -\varepsilon \langle \widehat{q}_h q_h,n\rangle\\ & + \langle u_h - \widehat{u_h}, \varPi f(u_h) n\rangle - \langle V(u_h), n\rangle\\ = & (u_{ht}, u_h) - \langle \widehat{p_h} - p_h + \varPi f(u_h), (\widehat{u}_h - u_h)n\rangle + \frac{\varepsilon }{2}\langle (q_h - \widehat{q_h})^2,n \rangle - \langle V(u_h), n\rangle, \end{alignat*} where we 
have used the single-valuedness of numerical traces. This means that \begin{alignat*}{1} \frac{1}{2} \frac{d}{dt} (u_{h}, u_h) = &\langle V(u_h), n\rangle + \langle \widehat{p_h} - p_h + \varPi f(u_h), (\widehat{u}_h - u_h)n\rangle - \frac{\varepsilon}{2}\langle (q_h - \widehat{q_h})^2,n \rangle\\ = &\sum_{i=1}^N \left(\jmp{V(u_h)} - \{\varPi f(u_h)\} \jmp{u_h} +(\jmp{\varPi f(u_h)} - \jmp{p_h}) (\widehat{u_h} - \{u_h\}) \right.\\ & \left.-\jmp{u_h}(\widehat{p_h} - \{p_h\}) + \varepsilon\jmp{q_h} (\widehat{q_h} - \{q_h\})\right)(x_i). \end{alignat*} Here, we used the equality $\langle \rho, v n \rangle = \sum_{i=1}^N (\jmp{\rho} \{ v\} + \jmp{v} \{ \rho\})(x_i)$ for any $\rho, v\in W_h^k$. When the condition \eqref{eq:conserv_cond1} is satisfied, we immediately get the energy-conservation, \eqref{eq:L2_conserv}. (iii) To prove the Hamiltonian conservation properties in \eqref{eq:Hamiltonian_conserv}, we first differentiate the equation \eqref{eq:scheme_time1} with respect to $t$ to obtain \[ ({q}_{ht},{v}) + (u_{ht},v_x) - \langle \widehat{u}_{ht},{v}n \rangle = 0. \] Then, we take $v := \varepsilon q_{h}$ in the equation above, $z := u_{ht}$ in \eqref{eq:scheme_time2} and $w := -p_h$ in \eqref{eq:scheme_time3} and add the three equations together to get \begin{alignat*}{1} (f(u_h), u_{ht}) = \;& \varepsilon(q_{ht}, q_h) + (p_h, p_{hx}) - \langle \widehat{p_h}, p_h n \rangle \\ &+ \varepsilon (u_{ht}, q_{hx}) + \varepsilon(q_h, u_{htx}) - \varepsilon\langle \widehat{u}_{ht}, q_hn \rangle - \varepsilon \langle \widehat{q}_h, u_{ht} n\rangle. 
\end{alignat*} Since $\widehat{u_h}, \widehat{q_h}$, and $\widehat{p_h}$ are single-valued, we have \begin{alignat*}{1} (f(u_h), u_{ht}) =\; & \varepsilon (q_{ht}, q_h) + \langle \frac{1}{2} p_h^2,n\rangle - \langle \widehat{p_h} p_h, n \rangle + \varepsilon\langle u_{ht} - \widehat{u}_{ht}, (q_h - \widehat{q}_h)n\rangle\\ = \;& \varepsilon(q_{ht}, q_h) + \frac{1}{2}\langle (p_h - \widehat{p_h})^2,n\rangle + \varepsilon\langle u_{ht} - \widehat{u}_{ht}, (q_h - \widehat{q}_h)n\rangle. \end{alignat*} This implies that \begin{alignat*}{1} &\frac{d}{dt} \left(\frac{\varepsilon}{2} (q_h, q_h) - (V(u_h),1) \right) \\ = & \sum_{i=1}^N \left( \jmp{p_h} (\widehat{p_h} - \{p_h\}) + \varepsilon\jmp{q_h} (\widehat{u_h} - \{u_h\})_t + \varepsilon\jmp{u_h}_t (\widehat{q_h} - \{q_h\})\right)(x_i). \end{alignat*} If the numerical traces satisfy \eqref{eq:conserv_cond2}, we get the conservation of the Hamiltonian \eqref{eq:Hamiltonian_conserv}. This concludes the proof of Lemma \ref{lemma:conserv_general}. \end{proof} Next we use Lemma \ref{lemma:conserv_general} to show that our scheme defined by \eqref{eq:scheme_time} - \eqref{eq:trace1} conserves the mass, the $L^2$-energy, and the Hamiltonian of the numerical solutions. \begin{theorem} \label{thm:conserv} For $(u_h, q_h, p_h)$ satisfying \eqref{eq:scheme_time} with $g=0$ and numerical traces defined by \eqref{eq:scheme_hats} - \eqref{eq:trace1}, the mass, $L^2$-energy and Hamiltonian conservation properties in Lemma \ref{lemma:conserv_general} hold. \end{theorem} \begin{proof} (i) The numerical traces in \eqref{eq:scheme_hats} are single-valued, so the DG scheme conserves the mass of the approximate solutions. 
(ii) Using \eqref{eq:scheme_hats}, we see that \begin{alignat*}{1} &\sum_{i=1}^N \Big(\jmp{V(u_h)} - \{\varPi f(u_h)\} \jmp{u_h} +(\jmp{\varPi f(u_h)} - \jmp{p_h}) (\widehat{u_h} - \{u_h\})\\ & \quad\quad-\jmp{u_h}(\widehat{p_h} - \{p_h\}) + \varepsilon\jmp{q_h} (\widehat{q_h} - \{q_h\}) \Big)(x_i)\\ = &\sum_{i=1}^N \left(\jmp{V(u_h)} - \{\varPi f(u_h)\} \jmp{u_h} - \tau_{pu} \jmp{u_h}^2 + \varepsilon\tau_{qu}\jmp{u_h} \jmp{q_h} \right)(x_i), \end{alignat*} which is equal to $0$ when the condition \eqref{eq:tau_pu1} holds. Then we get the $L^2$ conservation by Lemma \ref{lemma:conserv_general}. (iii) Using the definition of the numerical traces \eqref{eq:scheme_hats}, we get \begin{alignat*}{1} & \sum_{i=1}^N \left( \jmp{p_h} (\widehat{p_h} - \{p_h\}) +\varepsilon \jmp{q_h} (\widehat{u_h} - \{u_h\})_t + \varepsilon \jmp{u_h}_t (\widehat{q_h} - \{q_h\})\right)(x_i)\\ = & \sum_{i=1}^N \left( \tau_{pu} \jmp{p_h} \jmp{u_h} + \varepsilon\tau_{qu} \jmp{u_h}_t \jmp{u_h}\right)(x_i)=0 \end{alignat*} by \eqref{eq:tau_qu1}. So we immediately get the conservation of the Hamiltonian \eqref{eq:Hamiltonian_conserv} using Lemma \ref{lemma:conserv_general}. This concludes the proof of Theorem \ref{thm:conserv}. \end{proof} \begin{remark}\label{remark:choice_hats} We would like to point out that Lemma \ref{lemma:conserv_general} provides a framework for achieving full conservation of mass, energy and Hamiltonian. Specifically, any choices of $\widehat{u_h}, \widehat{q_h}, \widehat{p_h}$ that satisfy the conditions \eqref{eq:conserv_cond1} and \eqref{eq:conserv_cond2} will do. The numerical traces we have in \eqref{eq:scheme_hats} are just one of them. There are many other choices. For example, one can choose $\widehat{q_h}=\{q_h\}$ and determine $\widehat{u_h}$ and $\widehat{p_h}$ from equations \eqref{eq:conserv_cond1} and \eqref{eq:conserv_cond2}.
The aim of this paper is to introduce a novel paradigm for designing new conservative DG methods by letting the stabilization parameters be new unknowns, so that conservation properties can be explicitly embedded into the scheme and their achievement thereby guaranteed. \end{remark} \section{Implementation} \label{sec:implementation} In this section, we provide a high-level summary of the implementation of our method. Further details are deferred to Appendix \ref{appendix:Implementation}. \subsection{Time-stepping scheme} Since KdV equations have the third-order spatial derivative term, we choose implicit time-marching schemes to avoid using extremely small time steps. Moreover, we need the time-stepping method to be conservative so that the fully discrete scheme is conservative. Here, we use the following implicit second-order Midpoint method, which preserves the conservation laws up to round-off error. \dong{This is proven in \cite{DekkerVerwer1984} and adopted in \cite{bona2013conservative,KarakashianXing16} for the development of energy-conserving DG methods and \cite{liu2016hamiltonian} for a Hamiltonian-preserving DG scheme. Numerical results therein and in our paper demonstrate that the Midpoint method does indeed preserve the conserved quantities, including the Hamiltonian.} Let $0=t_0< t_1<\cdots<t_M=T$ be a uniform partition of the interval $[0, T]$ and $\Delta t= t_{n+1}-t_n$ be the step size. For $n=0, \dong{\ldots}, M-1$, let $u_h^{n+1}\in W^k_h$ be defined as: \[u_h^{n+1} = 2u_h^{n+\frac{1}{2}}-u_h^{n},\] where $u_h^{n+\frac{1}{2}}\in W^k_h$ is the DG solution to the equation \[\frac{u-u_h^n}{\frac{1}{2}\Delta t}+\varepsilon u_{xxx}+f(u)_x=g(x,t_{n+\frac{1}{2}}). \] At every time step $t_{n+\frac{1}{2}}, n=0, \dong{\ldots}, M-1$, we need to solve equations \eqref{eq:scheme_time}, \eqref{eq:tau_pu1}, and \eqref{eq:tau_qu1} for $u_h$, $q_h$, $p_h$, $\tau_{qu}$, and $\tau_{pu}$.
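The conservation property of the implicit midpoint rule can be seen on a toy problem: for $y'=Ay$ with skew-symmetric $A$, the update $(I-\frac{\Delta t}{2}A)y^{n+1}=(I+\frac{\Delta t}{2}A)y^{n}$ preserves $\|y\|^{2}$ exactly. The sketch below uses an arbitrary harmonic-oscillator matrix, not the DG operator of the paper:

```python
import numpy as np

# Why the implicit midpoint rule: it preserves quadratic invariants exactly
# (up to round-off). Toy check on y' = A y with skew-symmetric A (a harmonic
# oscillator), whose invariant |y|^2 plays the role of the discrete energy.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
dt, steps = 0.1, 1000
y = np.array([1.0, 0.0])
E0 = y @ y

# one midpoint step: (I - dt/2 A) y_new = (I + dt/2 A) y_old
L = np.eye(2) - 0.5 * dt * A
R = np.eye(2) + 0.5 * dt * A
for _ in range(steps):
    y = np.linalg.solve(L, R @ y)

drift = abs(y @ y - E0)
print(drift)  # remains at round-off level after 1000 steps
assert drift < 1e-10
```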
We can rewrite the nonlinear system into the following matrix-vector form and use MATLAB's ``fsolve'' function to solve it. \begin{subequations}\label{MatrixEqsF} \begin{align} &\textbf{M}[q]+(\textbf{D}+\textbf{A})[u] = 0 \label{eq:F1}\\ &\textbf{M}[p] + \varepsilon (\textbf{D}+ \textbf{A})[q] + \varepsilon \tau_{qu}\textbf{J}[u] - \textbf{M}[f(u_h)] = 0\label{eq:F2}\\ &\textbf{M}[u] - \frac{1}{2}\Delta t(\textbf{D}+ \textbf{A})[p] -\frac{1}{2}\Delta t\tau_{pu}\textbf{J}[u] -\textbf{M}[\bar{u}]-\frac{1}{2}\Delta t \textbf{M}[g] = 0\label{eq:F3}\\ & V_f - \tau_{pu} \eta(u_h,u_h) + \varepsilon \tau_{qu} \eta(q_h, u_h) = 0\label{eq:F4}\\ &\tau_{pu}\eta(p_h, u_h) + \tau_{qu}\sum_{i=1}^N \varepsilon \jmp{u_h}\jmp{u_h}_{t}(x_i) = 0\label{eq:F5} \end{align} \end{subequations} where $[u], [q], [p]$ are vectors consisting of degrees of freedom of $u_h^{n+ \frac{1}{2}}, q_h^{n+ \frac{1}{2}}, p_h^{n+ \frac{1}{2}}$, respectively, $[\bar{u}]$ is the {\em known} vector for the degrees of freedom of $u_h^n$, $\textbf{M}$ is the mass matrix, $\textbf{D}$ is the derivative matrix, $\textbf{A}$ is \dong{the} matrix associated to the average flux, and $\textbf{J}$ is the matrix associated to the jump; see Appendix \ref{appendix:Implementation} for details on these matrices. In \eqref{eq:F4} and \eqref{eq:F5}, we have adopted the notation defined in Section \ref{sec:dgm} \[ \eta(w,v)=\sum_{i=1}^N\jmp{w} \jmp{v}(x_i) \quad \textrm{ for any } w, v \in \big \{u_h,q_h,p_h\big\}, \] and a new quantity $V_f := {\displaystyle \sum_{i=1}^N (\jmp{V(u_h)} - \{\varPi f(u_h)\} \jmp{u_h})(x_i)}.$ The solution of this system, $([u], [q], [p], \tau_{qu}, \tau_{pu})$, can be considered as a column vector of size $[3(N-1)(k+1) + 2]$. So by introducing two more unknowns ($\tau_{qu}, \tau_{pu}$) and enforcing the two equations for conservation of energy and Hamiltonian, we only increase the size of the system by 2.
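The structure of this augmented solve can be illustrated on a toy analogue (hypothetical equations, not the DG matrices): two ``PDE-like'' rows coupled through an unknown parameter $\tau$ to one scalar ``conservation'' row, solved here by a basic Newton iteration in place of MATLAB's fsolve:

```python
import numpy as np

# Toy analogue (not the actual DG system) of the augmented solve: two
# "PDE-like" rows coupled through an unknown parameter tau to one scalar
# "conservation" row, solved by a basic Newton iteration with a
# finite-difference Jacobian.
def F(z):
    x1, x2, tau = z
    return np.array([x1 + tau * x2 - 1.0,
                     x2 - tau * x1 - 2.0,
                     x1**2 + x2**2 - 4.0])   # conservation-type constraint

def newton(F, z0, tol=1e-12, maxit=50):
    z = np.asarray(z0, dtype=float)
    for _ in range(maxit):
        Fz = F(z)
        if np.max(np.abs(Fz)) < tol:
            break
        J = np.empty((z.size, z.size))
        for j in range(z.size):              # finite-difference Jacobian
            dz = np.zeros_like(z)
            dz[j] = 1e-7
            J[:, j] = (F(z + dz) - Fz) / 1e-7
        z = z - np.linalg.solve(J, Fz)
    return z

z = newton(F, [0.0, 1.8, 0.4])
print(z, np.max(np.abs(F(z))))
assert np.max(np.abs(F(z))) < 1e-8
```

As in the DG scheme, appending the scalar parameter enlarges the system by only one row here (two rows, $\tau_{qu}$ and $\tau_{pu}$, in the actual method).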
\subsection{Three-point difference formulas for $\jmp{u_h}_t$} The last equation of the system, \eqref{eq:F5}, contains the non-traditional term $\jmp{u_h}_t$. We approximate it by the following three-point difference formula on a uniform stencil to maintain the second-order accuracy in time \[ \jmp{u_h}_t^{n+ \frac{1}{2}}=\frac{1}{\Delta t}\big(\jmp{u_h}^{n- \frac{1}{2}}-4\jmp{u_h}^n+3\jmp{u_h}^{n+ \frac{1}{2}}\big)+\mathcal{O}(\Delta t ^2). \] When $n=0$, we approximate $\jmp{u_h}_t^{\frac{1}{2}}$ by a three-point difference formula on a non-uniform stencil using $\jmp{u_h}$ at $t=0, (\frac{\Delta t}{2})^2,$ and $\frac{\Delta t}{2}$, where $u_h^0$ is obtained by the $L^2$-projection of $u_0$ and $u_h^{(\frac{\Delta t}{2})^2}$ is computed using the backward Euler method. The nonuniform three-point difference formula for $\jmp{u_h}_t^{\frac{1}{2}}$ is as follows: \[ \jmp{u_h}_t^{\frac{1}{2}}=c_1 \jmp{u_h}^0 +c_2 \jmp{u_h}^{(\frac{\Delta t}{2})^2} +c_3\jmp{u_h}^{\frac{\Delta t}{2}}+\mathcal{O}(\Delta t^2) \] where \[ c_1=\frac{1-\frac{\Delta t}{2}}{\big(\frac{\Delta t}{2}\big)^2},\quad c_2=-\frac{1}{\big(\frac{\Delta t}{2}\big)^2 \big(1-\frac{\Delta t}{2}\big)},\quad c_3=\frac{2-\frac{\Delta t}{2}}{\big(\frac{\Delta t}{2}\big)\big(1-\frac{\Delta t}{2}\big)}. \]
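A quick accuracy check of the uniform three-point formula, with $f=\sin$ standing in for $\jmp{u_h}$ (illustration only): halving $\Delta t$ should cut the error by about a factor of four, confirming second order.

```python
import numpy as np

# Accuracy check of the uniform three-point backward formula
#   f'(t) ≈ (f(t - dt) - 4 f(t - dt/2) + 3 f(t)) / dt,
# i.e. the stencil {t - dt, t - dt/2, t} with spacing dt/2.
def backward3(f, t, dt):
    return (f(t - dt) - 4.0 * f(t - 0.5 * dt) + 3.0 * f(t)) / dt

t = 1.0
errs = [abs(backward3(np.sin, t, dt) - np.cos(t)) for dt in (1e-2, 5e-3)]
order = np.log2(errs[0] / errs[1])
print(round(order, 2))  # ≈ 2, i.e. second-order accurate
assert 1.9 < order < 2.1
```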
To summarize, we use the following flowchart to describe the whole algorithm.\\ \begin{tikzpicture}[font=\small,thick] \node[draw, rounded rectangle, minimum width=2cm, minimum height=1cm] (block1) {\bf START}; \node[draw, align=center, trapezium, trapezium left angle = 55, trapezium right angle = 125, trapezium stretches, right=of block1, minimum width=1cm, minimum height=1cm ] (block2) { Compute $u_h^0$ from $u_0$ \\ and set $n=0$. }; \node[draw, align=center, right=of block2, minimum width=1.0cm, minimum height=1cm ] (block3) { Use backward Euler method \\to evaluate $u_h$ at $t=\big(\frac{\Delta t}{2}\big)^2$ }; \node[draw, below =of block3, align=center, minimum width=2.5cm, minimum height=1.5cm ] (block4) { Solve system \eqref{MatrixEqsF} for \\$u_h^{n+\frac{1}{2}}$ and use the Midpoint \\ method to get $u_h^{n+1}$}; \node[draw, left=of block4, align=center, minimum width=3.5cm, minimum height=1.5cm ] (block5) {Solve \eqref{eq:F1} for $q_{h}^{n+1}$. \\ Solve \eqref{eq:F2}, \eqref{eq:F4} and \eqref{eq:F5} for\\ $\big(p_{h}^{n+1}, \tau_{qu}^{n+1}, \tau_{pu}^{n+1}\big)$.\\ Let $n \leftarrow n+1$}; \node[draw, diamond, below=of block5, minimum width=2.5cm, inner sep=0] (block6) { $t_{n} \ge T$ ?}; \node[draw, rounded rectangle, left =of block6, minimum width=2cm, minimum height=1cm ] (block7) {\bf End}; \draw[-latex] (block1) edge (block2) (block2) edge (block3) (block3) edge (block4) (block4) edge (block5) (block5) edge (block6); \draw[-latex] (block6)--node[anchor=south,pos=0.5,fill=white,inner sep=3]{Yes} (block7); \draw[-latex] (block6) -| (block4) node[anchor=south,pos=0.25,fill=white,inner sep=3]{No}; \end{tikzpicture} \bigskip \section{Numerical Results} \label{sec:numericalresults} In this section, we carry out numerical experiments to test the convergence and conservation properties of our DG method. 
\bo{In the first test problem, we consider a third-order linear equation with $f(u) = u$.} In the second test problem, we use our DG method to solve a third-order nonlinear equation with $\varepsilon=1$, $0.1$, and $0.01$ and the solutions are sine waves that are periodic on the domain. In the \bo{last} test problem, we solve the classical KdV equation with a cnoidal wave solution and compare the approximate solution with the exact one. For \dong{all the} test problems, we compute the $L^2$-errors and convergence orders and check the conservation of the energy and Hamiltonian of the DG solutions. \subsection{Numerical Experiment 1} In this test, we solve the following third-order linear equation \bo{in \cite{zhang2019conservative}} \begin{equation*} u_{t} + \varepsilon u_{xxx} + (f(u))_{x} = 0, \end{equation*} where $\varepsilon = 1$ and $f(u) = u$, with periodic boundary conditions on the domain $\Omega = [0,4\pi]$ and the initial condition $u_{0} = \sin(\frac{1}{2}x)$. The exact solution to this problem is \[u(x,t) = \sin\bigg(\frac{1}{2}x - \frac{3}{8}t\bigg).\] First, we test the convergence of the DG method for this linear problem. We use polynomials of degree $k=0, 1, 2$ for approximate solutions\dong{, the mesh size $h=\frac{4\pi}{N}$ for $N=2^l, l=3,\ldots, 7$, and $\Delta t = \dong{0.2}(\frac{h}{4\pi})^{\min\{k,1\}}$ for time discretization.} The $L^2$-errors and orders of convergence of the approximate solutions are displayed in Table \ref{fig:TableLinear} for the final time $T = 0.1$. \dong{We see that the approximate solutions for the variable $u$ converge with an optimal order for all polynomial degrees $k$, those for the auxiliary variable $q$ have an optimal convergence order for even $k$ and a sub-optimal order for odd $k$, and those for $p$ have sub-optimal orders for $k=1, 2$. 
} \begin{table} \dong{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{k}}& \multirow{2}{*}{\textbf{N}}& \multicolumn{2}{c|}{$\mathbf{u_h}$} &\multicolumn{2}{c|}{$\mathbf{q_h}$} & \multicolumn{2}{c|}{$\mathbf{p_h}$} \\ \cline{3-8} & &\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }\\\hline \multirow{4}{*}{0}&\textbf{8}& 5.70e-1&-& 3.10e-1&-& 1.81e-0&-\\%\hline &\textbf{16}& 2.98e-1& 0.93& 1.52e-1& 1.02& 1.89e-0& -0.06\\%\hline &\textbf{32}& 1.42e-1& 1.07& 7.14e-2& 1.09& 1.07e-1& 4.15\\%\hline &\textbf{64}& 7.10e-2& 1.00& 3.56e-2& 1.01& 5.33e-2& 1.00\\%\hline &\textbf{128}& 3.55e-2& 1.00& 1.78e-2& 1.00& 2.66e-2&1.00\\\hline \multirow{4}{*}{1}&\textbf{8}& 5.80e-2&-& 2.55e-1&-& 2.34e-1&-\\%\hline &\textbf{16}& 1.44e-2& 2.01& 1.38e-1& 0.88& 1.35e-1& 0.79\\%\hline &\textbf{32}& 3.60e-3& 2.00& 7.06e-2&0.97 & 7.02e-2& 0.95\\%\hline &\textbf{64}& 9.00e-4& 2.00& 3.55e-2& 0.99& 3.62e-2& 0.95\\%\hline &\textbf{128}& 2.25e-4& 2.00& 1.78e-2& 1.00& 1.13e-2& 1.68\\\hline \multirow{4}{*}{2}&\textbf{8}& 3.93e-3&-& 7.92e-3&-& 4.07e-2&-\\%\hline &\textbf{16}& 4.84e-4& 3.02& 9.76e-4&3.02 &9.45e-3 & 2.11\\%\hline &\textbf{32}& 6.00e-5& 3.01& 1.23e-4&2.99 & 2.31e-3& 2.03\\%\hline &\textbf{64}& 7.47e-6& 3.01& 1.52e-5& 3.02& 5.78e-4& 2.00\\%\hline \hline \end{tabular} \caption{Numerical Experiment 1 (third-order linear equation): Error and convergence order of $u_h$, $q_h$, and $p_h$} \label{fig:TableLinear} } \end{table} Next, we test the conservation of the energy and Hamiltonian of the approximate solution using polynomials of degree $k=2$ on 32 intervals for the final time $T = 50$. In Figure \ref{fig:linear}, we see that the Hamiltonian and energy of the approximate solution remain the same over the whole time period. 
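Returning to the setup of this experiment, the exact solution can be double-checked through the linear dispersion relation: substituting $u=\sin(kx-\omega t)$ into $u_t+\varepsilon u_{xxx}+u_x=0$ gives $\omega = k - \varepsilon k^3$, so $k=\frac{1}{2}$ and $\varepsilon=1$ yield $\omega=\frac{3}{8}$, matching $u(x,t)=\sin(\frac{1}{2}x-\frac{3}{8}t)$. A short numerical confirmation:

```python
# Dispersion relation for u_t + eps*u_xxx + u_x = 0 with u = sin(k x - omega t)
import math

k, eps = 0.5, 1.0
omega = k - eps * k**3
assert math.isclose(omega, 3.0 / 8.0)

# spot-check the PDE residual with centered finite differences
def u(x, t):
    return math.sin(k * x - omega * t)

h = 1e-2
x, t = 1.3, 0.7
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
u_xxx = (u(x + 2*h, t) - 2*u(x + h, t) + 2*u(x - h, t) - u(x - 2*h, t)) / (2 * h**3)
assert abs(u_t + eps * u_xxx + u_x) < 1e-3
```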
\dong{The errors of the Hamiltonian and energy are very small, as shown on the second row of Figure \ref{fig:linear}.} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{HamiltonianandEnergyLinearSineWaveRevisedJune.png} \includegraphics[scale=0.25]{ConservationErrorLinearSinewaveCaseJune.png} \end{center} \caption{Numerical Experiment 1 (third-order linear equation): Hamiltonian (Left) and energy (Right) conservation. \dong{Shown on the bottom are the corresponding errors.}} \label{fig:linear} \end{figure} \subsection{Numerical Experiment 2} In the second test, we consider the following third-order nonlinear equation \begin{equation*} u_{t} + \varepsilon u_{xxx} + (f(u))_{x} = g \end{equation*} with periodic boundary conditions on $\Omega=[0, 1]$ and the initial condition $u_0=\sin{(2\pi x)}$, where $f(u) = \frac{u^2}{2}$ and $g$ is the function which gives the solution \[u(x,t) = \sin(2\pi x + t).\] For this problem, we first test the convergence orders of our DG method for $\varepsilon=1$, $0.1$ and $0.01$ when using polynomials of degree $k=0, 1, 2$. We use \dong{$h=1/N$, where $N=2^l, l=3, \ldots, 7$, and $\Delta t=0.2\, h^{\min\{k,1\}}$ for time discretization}, and the final time is $T=0.1$. The $L^2$-errors and orders of convergence for $\varepsilon=1$, 0.1, 0.01 are displayed in Table \ref{fig:Table1}, Table \ref{fig:Table2}, and Table \ref{fig:Table3}, respectively. \dong{Note that for existing energy-conserving DG methods in \cite{bona2013conservative,KarakashianXing16,chen2016new}, it is typical that approximate solutions to $u$ have optimal convergence orders when $k$ is even and sub-optimal orders when $k$ is odd. 
Here, we see that our method has comparable convergence rates.} \begin{table} \dong{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{k}}& \multirow{2}{*}{\textbf{N}}& \multicolumn{2}{c|}{\bo{$\mathbf{u_h}$}} &\multicolumn{2}{c|}{\bo{$\mathbf{q_h}$}} & \multicolumn{2}{c|}{\bo{$\mathbf{p_h}$}} \\ \cline{3-8} & &\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }\\\hline \multirow{4}{*}{0}&\textbf{8}& 3.81e-1&-&1.96e-0 &-& 1.05e+1&-\\%\hline &\textbf{16}&8.00e-2 & 2.25& 5.14e-1 & 1.93 & 3.41e-0& 1.61\\%\hline &\textbf{32}&4.83e-2 & 0.73& 2.89e-1 &0.83 & 1.73e-0& 0.97\\%\hline &\textbf{64}&2.03e-2 & 1.25& 1.27e-1&1.18 & 7.99e-1& 1.12\\%\hline &\textbf{128}&1.01e-2 &1.01 & 6.33e-2&1.01 &3.97e-1 &1.01\\\hline \multirow{4}{*}{1}&\textbf{8}& 5.29e-2&-& 9.51e-1&-& 1.05e+1&-\\%\hline &\textbf{16}& 7.19e-2& -0.44& 8.16e-1& 0.22& 2.00e-0& 2.40\\%\hline &\textbf{32}& 1.67e-2& 2.10& 1.92e-1& 2.09& 3.78e-0& -0.92\\%\hline &\textbf{64}& 1.09e-3& 3.93& 1.26e-1& 0.61& 3.46e-2& 6.77\\%\hline &\textbf{128}& 1.49e-4& 2.88& 6.29e-2& 1.00& 1.08e-1& -1.64\\\hline \multirow{4}{*}{2}&\textbf{8}& 2.08e-3&-& 1.23e-1&-& 7.89e-0&-\\%\hline &\textbf{16}& 1.35e-4& 3.94& 3.48e-3& 5.14 &4.27e-1 & 4.21\\%\hline &\textbf{32}& 1.69e-5& 3.00& 4.31e-4& 3.01 &1.04e-1 & 2.04\\%\hline &\textbf{64}& 2.11e-6& 3.00& 5.38e-5& 3.00 &2.59e-2 & 2.01\\%\hline \hline \end{tabular} \caption{Numerical Experiment \bo{2 (third-order nonlinear equation)}: Errors and convergence orders of $u_h$, $q_h$, and $p_h$ for $\varepsilon = 1$} \label{fig:Table1} } \end{table} \begin{table} \dong{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{k}}& \multirow{2}{*}{\textbf{N}}& \multicolumn{2}{c|}{\bo{$\mathbf{u_h}$}} &\multicolumn{2}{c|}{\bo{$\mathbf{q_h}$}} & \multicolumn{2}{c|}{\bo{$\mathbf{p_h}$}} \\ \cline{3-8} & &\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ 
Error}&\textbf{Order }\\\hline \multirow{4}{*}{0}&\textbf{8}& 3.51e-1&-& 1.87e-0&-& 1.06e-0&-\\%\hline &\textbf{16}& 1.17e-1 & 1.58 & 6.75e-1 & 1.47 & 4.00e-1 & 1.40\\%\hline &\textbf{32}& 4.63e-2 & 1.34 & 2.80e-1 & 1.27 & 1.73e-1 & 1.21\\%\hline &\textbf{64}& 2.11e-2 & 1.14 & 1.31e-1 & 1.10 & 8.18e-2 & 1.08\\%\hline &\textbf{128}& 1.02e-2 & 1.05 & 6.36e-2 & 1.04 & 4.01e-2 &1.03\\\hline \multirow{4}{*}{1}&\textbf{8}& 6.62e-2&-& 8.79e-1&-& 1.17e-0&-\\%\hline &\textbf{16}& 4.08e-2& 0.70& 3.11e-1& 1.50& 7.86e-1& 0.58\\%\hline &\textbf{32}& 2.05e-2& 0.99& 1.48e-1& 1.07& 4.11e-1& 0.94\\%\hline &\textbf{64}& 1.48e-3& 3.80& 1.26e-1& 0.24& 5.84e-2& 2.82\\%\hline &\textbf{128}& 3.71e-4& 2.00& 6.29e-2& 1.00& 5.14e-2& 0.18\\\hline \multirow{4}{*}{2}&\textbf{8}& 1.30e-3&-& 3.00e-2&-& 1.86e-1&-\\%\hline &\textbf{16}& 1.44e-4& 3.17& 3.52e-3& 3.09& 4.06e-2& 2.19\\%\hline &\textbf{32}& 1.69e-5& 3.09& 4.29e-4& 3.04& 1.04e-2 & 1.96\\%\hline &\textbf{64}& 2.11e-6& 3.00& 5.42e-5& 2.98& 2.63e-3& 1.99\\%\hline \hline \end{tabular} \caption{Numerical Experiment \bo{2 (third-order nonlinear equation)}: Errors and convergence orders of $u_h$, $q_h$, and $p_h$ for $\varepsilon = 0.1$} \label{fig:Table2} } \end{table} \begin{table} \dong{ \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{k}}& \multirow{2}{*}{\textbf{N}}& \multicolumn{2}{c|}{\bo{$\mathbf{u_h}$}} &\multicolumn{2}{c|}{\bo{$\mathbf{q_h}$}} & \multicolumn{2}{c|}{\bo{$\mathbf{p_h}$}} \\ \cline{3-8} & &\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }\\\hline \multirow{4}{*}{0}&\textbf{8}& 1.75e-1&-& 1.20e-0&-& 1.43e-1&-\\%\hline &\textbf{16}& 8.33e-2& 1.07& 5.62e-1& 1.10& 6.19e-2& 1.20\\%\hline &\textbf{32}& 4.07e-2& 1.03& 2.62e-1& 1.10& 2.73e-2& 1.18\\%\hline &\textbf{64}& 2.01e-2& 1.02& 1.27e-1& 1.05& 1.30e-2& 1.08\\%\hline &\textbf{128}& 1.00e-2& 1.00& 6.31e-2& 1.01& 6.40e-3&1.02\\\hline \multirow{4}{*}{1}&\textbf{8}& 2.30e-2&-& 8.93e-1&-& 
9.15e-2&-\\%\hline &\textbf{16}& 4.08e-2& -0.83& 3.14e-1& 1.51& 7.09e-2&0.37 \\%\hline &\textbf{32}& 2.05e-3& 4.31& 2.53e-1& 0.31& 3.20e-2& 1.15\\%\hline &\textbf{64}& 1.04e-3& 0.99& 1.26e-1& 1.01& 1.59e-2& 1.01\\%\hline &\textbf{128}&5.89e-4 &0.81 & 6.29e-2 & 1.00 & 8.34e-3 &0.93 \\\hline \multirow{4}{*}{2}&\textbf{8}& 1.24e-3&-& 3.23e-2&-& 1.63e-2&-\\%\hline &\textbf{16}& 1.41e-4& 3.13& 3.39e-3& 3.25& 4.12e-3& 1.99\\%\hline &\textbf{32}& 1.70e-5& 3.06& 7.20e-4& 2.23& 1.75e-3& 1.24\\%\hline &\textbf{64}& 2.37e-6& 2.84& 1.08e-4& 2.74& 5.30e-4& 1.72\\%\hline \hline \end{tabular} \caption{Numerical Experiment \bo{2 (third-order nonlinear equation)}: Errors and convergence orders of $u_h$, $q_h$, and $p_h$ for $\varepsilon = 0.01$} \label{fig:Table3} } \end{table} Next, we plot the exact solutions and the numerical solutions with quadratic polynomials on 32 elements for different $\varepsilon$. Note that $u$ and $q$ are not changing with respect to $\varepsilon$ in this test problem, but $p$ depends on $\varepsilon$. So we plot $u, q$, $u_h$ and $q_h$ over the time period [0, 5] in Figure \ref{fig:graph5a} and the snapshot of them at the time $T=5$ in Figure \ref{fig:graph5a_finaltime}. The graphs of $p$ and $p_h$ for different $\varepsilon$ over the time period [0, 5] are plotted in Figure \ref{fig:graph5b} and the snapshots of them at the time $T=5$ are in Figure \ref{fig:graph5b_finaltime}. We see that in all the figures the graphs of numerical solutions match well with those of exact solutions. 
\begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{SurfacePlot_Soluq_forall_eps.png} \end{center} \caption{Numerical Experiment 2 (third-order nonlinear equation): Solutions in time (Left: exact solution, Right: approximate solution) for the $\varepsilon$-independent $u$ and $q$.} \label{fig:graph5a} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{SinewavePlotT5uq.png} \end{center} \caption{\dong{Numerical Experiment 2 (third-order nonlinear equation): Solutions at the final time $T=5$ (Top: $u$ and $u_h$, bottom: $q$ and $q_h$).}} \label{fig:graph5a_finaltime} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{SurfacePlot_Solp_forall_eps.png} \end{center} \caption{{Numerical Experiment 2 (third-order nonlinear equation): Solution in time (Left: exact, Right: approximate, $\varepsilon = 1, 0.1, 0.01$ from top to bottom) for the $\varepsilon$-dependent $p$.}} \label{fig:graph5b} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{SinewavePlotT5p.png} \end{center} \caption{\dong{Numerical Experiment 2 (third-order nonlinear equation): the $\varepsilon$-dependent solution $p$ and the approximate solution $p_h$ at the final time $T=5$ (with $\varepsilon = 1, 0.1, 0.01$ from top to bottom).}} \label{fig:graph5b_finaltime} \end{figure} Finally, we test the conservation properties of our DG scheme. We plot the Hamiltonian and the Energy of the numerical solutions for $t\in [0, 50]$ for different $\varepsilon$ in Figure \ref{fig:graph4}. \dong{The errors of the energy and Hamiltonian for different $\varepsilon$ are plotted in Figure \ref{fig:graph4_error_eps1}}. We see that our method successfully conserves both Hamiltonian and energy. 
We note that, even though the energy and Hamiltonian are conserved for the KdV equations (i.e.\dong{,} the source term $g \equiv 0$), the manufactured solution of this particular test with a nonzero source term happens to bear these properties as well and thus serves as an ideal test case. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{SinewaveReviseHamiltonianEnergyOriginalPlotJune.png} \end{center} \caption{ Numerical Experiment 2 (third-order nonlinear equation): Conservation of Hamiltonian (Left) and energy (Right) when $\varepsilon$ = 1 (top), 0.1 (middle), and 0.01 (bottom).} \label{fig:graph4} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{SineEps1HamiltonianEnergyErrorJune20PowerPoint.png} \includegraphics[scale=0.25]{SineEps0pt1HamiltonianEnergyErrorJune20PowerPoint.png} \includegraphics[scale=0.25]{SineEps0pt01HamiltonianEnergyErrorJune20PowerPoint.png} \end{center} \caption{\dong{Numerical Experiment 2 (third-order nonlinear equation): Errors of Hamiltonian (Left) and energy (Right) when $\varepsilon = 1$ (top), $0.1$ (middle) and $0.01$ (bottom).}} \label{fig:graph4_error_eps1} \end{figure} \subsection{Numerical Experiment 3} In this example, we test the KdV equation $$u_{t} + \varepsilon u_{xxx} + (f(u))_x = 0$$ with $\varepsilon = \frac{1}{24^2}$ and $f(u) = \frac{u^2}{2}$. The domain is $\Omega = [0,1]$ and we are testing a cnoidal-wave solution \[u(x,t) = A\,\mathrm{cn}^2(z),\] where $\mathrm{cn}(z) = \mathrm{cn}(z|m)$ is the Jacobi elliptic function with modulus $m = 0.9$, $z = 4K(x-vt-x_0)$, $A = 192m\varepsilon K(m)^2$, $v = 64\varepsilon (2m-1)K(m)^2$, and $K(m)=\int_{0}^{\frac{\pi}{2}} \frac{d\theta}{\sqrt{1-m\sin^2\theta}}$ is the complete elliptic integral of the first kind; see \cite{abramowitz1970handbook}. The parameter $x_0$ is arbitrary, so we take it to be zero. The solution $u$ has a spatial period 1.
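The cnoidal-wave parameters above are straightforward to reproduce: $K(m)$ can be evaluated with the arithmetic-geometric mean, giving $K(0.9)\approx 2.578$, hence $A\approx 1.994$ and $v\approx 0.591$ for $\varepsilon = \frac{1}{24^2}$. A small Python sketch:

```python
import math

def ellipK(m):
    # complete elliptic integral of the first kind via the
    # arithmetic-geometric mean: K(m) = pi / (2 * AGM(1, sqrt(1 - m)))
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

m = 0.9
eps = 1.0 / 24**2
K = ellipK(m)                             # K(0.9) ~ 2.578
A = 192.0 * m * eps * K**2                # cnoidal-wave amplitude
v = 64.0 * eps * (2.0 * m - 1.0) * K**2   # wave speed

assert abs(K - 2.578092) < 1e-5
# cn(.|m) has period 4K in z and z = 4K(x - v t - x0),
# so u indeed has spatial period 1
```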
This benchmark problem has been tested for other conservative DG methods in \cite{bona2013conservative,KarakashianXing16,liu2016hamiltonian,zhang2019conservative}. Those methods conserve either the Hamiltonian or the energy of the solution, \dong{but not both.} \dong{In Table \ref{fig:Table4}, we display the $L^2$ errors of approximate solutions to $u, q$, and $p$ for $k=0, 1, 2$. The convergence orders are similar to those in the previous numerical experiments.} In Figure \ref{fig:graph8}, we plot the exact solution and the approximate solution using polynomial degree $k=2$ over 32 intervals over the time period $t\in[0, 5]$. \dong{The snapshots of the exact and the approximate solutions at the final time $T=5$ are shown in Figure \ref{fig:graph8_finaltime}.} We can see that the graphs of the exact and the approximate solutions match up well \dong{in both figures}. Next, we compute the numerical solution using $k=2$ on 32 intervals for a longer time $T=50$. The graphs of the Hamiltonian and energy of the DG solution versus time are displayed in Figure \ref{fig:graph9}, \dong{and the errors of Hamiltonian and energy are plotted on the second row of Figure \ref{fig:graph9}}. We can see that both the Hamiltonian and the energy have been conserved during the whole time period.
\begin{table} \dong{ \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{k}}& \multirow{2}{*}{\textbf{N}}& \multicolumn{2}{c|}{\bo{$\mathbf{u_h}$}} &\multicolumn{2}{c|}{\bo{$\mathbf{q_h}$}} & \multicolumn{2}{c|}{\bo{$\mathbf{p_h}$}} \\ \cline{3-8} & &\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }&\textbf{$L_{2}$ Error}&\textbf{Order }\\\hline \multirow{4}{*}{0}&\textbf{8}& 5.44e-1&-& 8.11&-& 3.18e-1&-\\%\hline &\textbf{16}& 2.72e-1& 1.00& 4.56& 0.83& 2.42e-1& 0.40\\%\hline &\textbf{32}& 9.89e-2& 1.46& 1.65& 1.47& 8.80e-2& 1.46\\%\hline &\textbf{64}& 4.50e-2& 1.14& 7.69e-1& 1.10& 3.05e-2& 1.53\\%\hline &\textbf{128}& 2.22e-2& 1.02& 3.87e-1& 0.99& 1.34e-2& 1.19\\\hline \multirow{4}{*}{1}&\textbf{8}& 1.26e-1&-&2.40 &-& 1.12e-1&-\\%\hline &\textbf{16}& 7.49e-2& 0.75& 2.82& -0.23& 1.51e-1& -0.43\\%\hline &\textbf{32}& 2.13e-2& 1.81& 1.41& 1.00& 7.23e-2& 1.06\\%\hline &\textbf{64}& 6.07e-3& 1.81& 7.41e-1& 0.93& 5.01e-2& 0.53\\%\hline &\textbf{128}& 1.57e-3& 1.95& 3.74e-1& 0.99& 2.88e-1& 0.80\\\hline \multirow{4}{*}{2}&\textbf{8}& 1.18e-1&-& 4.70&-& 2.58e-1&-\\%\hline &\textbf{16}& 1.60e-2& 2.88& 7.24e-1& 2.70& 8.08e-2& 1.68\\%\hline &\textbf{32}& 2.71e-3& 2.56& 5.96e-2& 3.60& 1.15e-2& 2.85\\%\hline &\textbf{64}& 3.47e-4& 2.96& 6.15e-3& 3.28& 6.08e-4& 4.21\\%\hline \hline \end{tabular} \caption{Numerical Experiment 3 (classical KdV equation): Errors and convergence orders of $u_h$, $q_h$, and $p_h$} \label{fig:Table4} } \end{table} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{CnoidalWaveSurfacePlotT5.png} \end{center} \caption{Numerical Experiment \bo{3 (classical KdV equation)}: Solution in time (Left: exact, Right: approximate) for the Cnoidal Wave. 
} \label{fig:graph8} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{CnoidalExactandApproxuqpReviseJune28.png} \end{center} \caption{\dong{Numerical Experiment 3 (classical KdV equation): Exact and approximate solutions at the final time $T=5$ for the Cnoidal Wave (Top: $u$ and $u_h$, middle: $q$ and $q_h$, bottom: $p$ and $p_h$).}} \label{fig:graph8_finaltime} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.25]{CnoidalWaveRevisedHamandEnergyNormalPlotJune2022.png} \includegraphics[scale=0.25]{CnoidalWavePowerPointHamiltonianandEnergy.png} \end{center} \caption{Numerical Experiment 3 (classical KdV equation): Hamiltonian (Left) and energy (Right) conservation for the Cnoidal Wave. \dong{Shown on the second row are the corresponding errors.}} \label{fig:graph9} \end{figure} \section{Concluding Remarks} In this paper, we design and implement a \dong{new} conservative DG method for simulating solitary wave solutions to the generalized KdV equation. We prove that the method conserves the mass, energy and Hamiltonian of the solution. Numerical experiments confirm that our method does have the desirable conservation properties proved by our analysis. The convergence orders are also comparable to prior works by others. Future extensions include the investigation \dong{of} other choices of numerical fluxes, as well as applying the novel framework of devising \dong{new} conservative DG methods to other problems featuring physically interesting quantities that are conserved.
\section{Introduction} COVID-19 severity is due to complications from the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) but the clinical course of the infection varies for individuals. Research suggests that patients with and without severe COVID-19 have different genetic, pathological, and clinical signatures \citep{geneticscovid,overmyer2021large}. Further, beyond viral factors, COVID-19 severity depends on host factors, emphasizing the need to use molecular data to better understand the individual response of the disease \citep{overmyer2021large}. In \cite{overmyer2021large}, blood samples from patients admitted to the Albany Medical Center, NY from 6 April 2020 to 1 May 2020 for moderate to severe respiratory issues who had COVID-19 or exhibited COVID-19-like symptoms were collected and quantified for transcripts, proteins, metabolomic features and lipids. In addition to the molecular (or omics) data, several clinical and demographic data were collected at the time of enrollment. The authors analyzed each omics data separately, correlated the biomolecules with several clinical outcomes including disease status and severity, and also considered pairwise associations of the omics data to better understand COVID-19 mechanisms. Their findings suggested that COVID-19 severity is likely due to dysregulation in lipid transport system. In this paper, we take a holistic approach to integrate the omics data and the outcome (i.e., COVID-19 patient groups). In particular, instead of assessing pairwise associations and using unsupervised statistical methods as was done in \citep{overmyer2021large} to correlate the omics data, we model the overall dependency structure among the omics data while simultaneously modeling the separation of the COVID-19 patient groups. 
Ultimately, our goal is to elucidate the molecular architecture of COVID-19 by identifying molecular signatures with potential to discriminate patients with and without COVID who were or were not admitted to the intensive care unit (ICU). There exist many linear (e.g., canonical correlation analysis, CCA [\cite{Hotelling:1936,Carroll:1968,SafoBIOM2017,safo2018sparse}], co-inertia analysis \citep{min2019penalized}) and nonlinear (e.g., \cite{Akaho:2001,Andrew:2013,Kan:2016,Benton2:2019}) methods that could be used to associate the multiple views. Canonical Correlation Analysis with deep neural network (Deep CCA) \citep{Andrew:2013}, and its variations (e.g., \cite{Wang:2015}, \citep{Benton2:2019}), for instance, have been proposed to learn nonlinear projections of two or more views that are maximally correlated. Refer to \cite{Guo:2021} for a review of some CCA methods. These association-based methods are all unsupervised and they do not use the outcome data (i.e., class labels) when learning the low-dimensional representations. A naive way of using the class labels and all the views simultaneously is to stack the different views and then to perform classification on the stacked data, but this approach does not appropriately model the dependency structure among the views. To overcome the aforementioned limitations, one-step linear methods (e.g., \cite{safosida:2021,zhang2018joint,Luo2014}) have been proposed that could be used to jointly associate the multiple views and to separate the COVID-19 patient groups. For instance, in \cite{safosida:2021}, we proposed a method that combined linear discriminant analysis (LDA) and CCA to learn linear representations that associate the views and separate the classes in each view. However, the relationships among the multiple views and the COVID-19 patient groups are too complex to be understood solely by linear methods.
Nonlinear methods including kernel and deep learning methods could be used to model nonlinear structure among the views and between a view and the outcome. The literature is scarce on nonlinear joint association and separation methods. In \cite{hu2019multi}, a deep neural network method, multi-view linear discriminant analysis network (MvLDAN), was proposed to learn nonlinear projections of multiple views that maximally correlate the views and separate the classes in each view, but the convergence of MvLDAN is not guaranteed. Further, MvLDAN and the nonlinear association-based methods for multiple views mentioned above have one major limitation: they do not rank or select features; as such, it is difficult to interpret the models and this limits their ability to produce clinically meaningful findings. If we apply MvLDAN or any of the nonlinear association methods to the COVID-19 omics data, we will be limited in our ability to identify molecules contributing most to the association of the views and the separation of the COVID-19 patient groups. The problem of selecting or ranking features is well-studied in the statistical learning literature but less-studied in the deep learning literature, especially in deep learning methods for associating multiple views. In \cite{li2016deep}, a deep feature selection method that adds a sparse one-to-one linear layer between the input layer and the first hidden layer was proposed. In another article \citep{chang2017dropout}, a feature ranking method based on variational dropout was proposed. These methods were developed for data from one view and are not directly applicable to data from multiple views. In \cite{TS:2019}, a two-step approach for feature selection using a teacher-student (TS) network was proposed. The ``teacher'' step obtains the best low-dimensional representation of the data using any dimension reduction method (e.g., deep CCA).
The ``student'' step performs feature selection based on these low-dimensional representations. In particular, a single-layer network with sparse weights is trained to reconstruct the low-dimensional representations obtained from the ``teacher'' step, and the features are ranked based on the weights. This approach is limiting because the model training (i.e., the identification of the low-dimensional representation of the data) and the feature ranking steps are separated; as such, one cannot ensure that the top-ranked features identified are meaningful. We propose Deep IDA (short for Deep Integrative Discriminant Analysis), a deep learning method for integrative discriminant analysis, to learn complex nonlinear relationships among the multiple molecular data, and between the molecular data and the COVID-19 patient groups. Deep IDA uses deep neural networks (DNN) to nonlinearly transform each view, constructs an optimization problem that takes as input the output from our DNN (i.e., the nonlinearly transformed views), and learns view-specific projections that result in maximum linear correlation of the transformed views and maximum linear separation within each view. Further, we propose a homogeneous ensemble approach for feature ranking where we implement Deep IDA on different training data subsets to yield low-dimensional representations (that are correlated among the views and separate the classes in each view), we aggregate the classification performance from these low-dimensional representations, we rank features based on the aggregates, and we obtain low-dimensional representations of the data based on the top-ranked variables. As a result, Deep IDA permits feature ranking of the views and enhances our ability to identify features from each view that contribute most to the association of the views and the separation of the classes within each view.
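The homogeneous ensemble idea can be sketched generically: fit a model on many random training subsets, score features within each fit, and aggregate the per-subset ranks. In the sketch below, the stand-in score (absolute class-mean difference) merely takes the place of the trained Deep IDA network; the data, scoring rule, and variable names are illustrative, not the actual algorithm.

```python
# Schematic homogeneous-ensemble feature ranking: score features on many
# random training subsets, then aggregate the per-subset ranks.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
# toy labels driven by features 0 and 3 (plus noise)
y = (X[:, 0] + X[:, 3] + 0.5 * rng.normal(size=n) > 0).astype(int)

def feature_scores(Xs, ys):
    # stand-in per-subset score: absolute class-mean difference
    # (in Deep IDA this role is played by the trained network)
    return np.abs(Xs[ys == 1].mean(axis=0) - Xs[ys == 0].mean(axis=0))

B, frac = 50, 0.8                          # number of subsets, subset fraction
ranks = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=int(frac * n), replace=False)
    s = feature_scores(X[idx], y[idx])
    ranks += np.argsort(np.argsort(-s))    # rank 0 = best, per subset

order = np.argsort(ranks)                  # aggregate: lowest mean rank first
```

With this construction, the truly informative features (0 and 3 here) should dominate the aggregated ranking.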
We note that our framework for feature ranking is general and adaptable to many deep learning methods and has potential to yield explainable deep learning models for associating multiple views. Table \ref{tab:uniquefeatures} highlights the key features of Deep IDA in comparison with some linear and nonlinear methods for multi-view data. Results from our real data application and simulations with small sample sizes suggest that Deep IDA may be a useful method for small sample size problems compared to other deep learning methods for associating multiple views. The rest of the paper is organized as follows. In Section 2, we introduce the proposed method and algorithms for implementing the method. In Section 3, we use simulations to evaluate the proposed method. In Section 4, we use two real data applications to showcase the performance of the proposed method. We end with a concluding remark in Section 5. \begin{table} \centering \begin{tabular}{lllll} \hline \hline Property/&Linear One-step & Deep IDA& Randomized &Deep CCA$^{*}$,\\ Methods& Methods& (Proposed) & KCCA$^{*}$&Deep GCCA$^{+}$\\ \hline \hline Nonlinear Relationships & & \checkmark& \checkmark & \checkmark \\ \hline Classification& \checkmark& \checkmark& & \\ \hline Variable ranking/selection& \checkmark& \checkmark& & \\ \hline Covariates&\checkmark & \checkmark & & \checkmark \\ \hline One-step&\checkmark & \checkmark& & \\ \hline \hline \end{tabular} \caption{Unique features of Deep IDA compared to other methods. *Only applicable to two views. +Covariates could be added as additional view in Deep GCCA.} \label{tab:uniquefeatures} \end{table} \section{Method} Let $\mathbf X^d \in \mathbf R^{n\times p_d}$ be the data matrix for view $d$, $d=1,\ldots,D$ (e.g., proteomics, metabolomics, imaging, clinical data). Each view, $\mathbf X^d$, has $p_d$ variables, all measured on the same set of $n$ individuals or units. Suppose that each unit belongs to one of two or more classes, $K$.
Let $y_i, i=1,\ldots,n$ be the class membership for unit $i$. For each view, let $\mathbf X^d$ be a concatenation of data from each class, i.e., $\mathbf X^d = [\mathbf X_1^d,\mathbf X_2^d,...,\mathbf X_K^d]^T$, where $\mathbf X_k^d \in \mathbf R^{n_k\times p_d}, k = 1,...,K$ and $n = \sum_{k=1}^K n_k$. For the $k$-th class in the $d$-th view, $\mathbf X_k^d = [\mathbf x_{k,1}^d,\mathbf x_{k,2}^d,...,\mathbf x_{k,n_k}^d]^T$, where $\mathbf x_{k,i}^d \in \mathbf R^{p_d}$ is a column vector denoting the view $d$ data values for the $i$-th unit in the $k$-th class. Given the views and data on class membership, we wish to explore the association among the views and the separation of the classes, and also to predict the class membership of a new unit using the unit's data from all views or from some of the views. Additionally, we wish to identify the features that contribute most to the overall association among the views and the separation of classes within each view. Several existing linear (e.g., CCA, generalized CCA) and nonlinear (e.g., deep CCA, deep generalized CCA, random Kernel CCA) methods could be used to first associate the different views to identify low dimensional representations of the views that maximize the correlation among the views or that explain the dependency structure among the views. These low-dimensional representations and the data on class membership could then be used for classification. In this two-step approach, the classification step is independent of the association step and does not take into consideration the effect of class separation on the dependency structure. Alternatively, classification algorithms (e.g., linear or nonlinear methods) could be implemented on the stacked views, however, this approach ignores the association among the views. 
Recently, \cite{safosida:2021} and \cite{zhang2018joint} proposed one-step methods that couple the association step with the separation step and showed that these one-step methods oftentimes result in better classification accuracy when compared to classification on stacked views or to two-step methods (association followed by classification). We briefly review the one-step linear method of \cite{safosida:2021} for joint association and classification since it is relevant to our method. \subsection{Integrative Discriminant Analysis (IDA) for joint association and classification} \cite{safosida:2021} proposed an integrative discriminant analysis (IDA) method that combines linear discriminant analysis (LDA) and canonical correlation analysis (CCA) to explore linear associations among multiple views and linear separation between classes in each view. Let $\mathbf{S}_{b}^{d}$ and $\mathbf S_{w}^{d}$ be the between-class and within-class covariances for the $d$-th view, respectively. That is, $\mathbf{S}_{b}^{d}= \frac{1}{n-1} \sum_{k=1}^K n_k (\mbox{\boldmath{$\mu$}}^d_k - \mbox{\boldmath{$\mu$}}^d)(\mbox{\boldmath{$\mu$}}^d_k - \mbox{\boldmath{$\mu$}}^d)^{{\mbox{\tiny T}}}$ and $\mathbf S_{w}^{d} = \frac{1}{n-1}\sum\limits_{k=1}^{K}\sum\limits_{i=1}^{n_k}(\mathbf x_{ik}^d-{\mbox{\boldmath{$\mu$}}}_{k}^d)(\mathbf x_{ik}^d-{\mbox{\boldmath{$\mu$}}}_{k}^d)^{{\mbox{\tiny T}}}$, where ${\mbox{\boldmath{$\mu$}}}_{k}^d = \frac{1}{n_{k}}\sum_{i=1}^{n_{k}}\mathbf x_{ik}^d$ is the mean for class $k$ in view $d$ and $\mbox{\boldmath{$\mu$}}^d$ is the overall mean for view $d$. Let $\mathbf S_{dj}, j<d$ be the cross-covariance between the $d$-th and $j$-th views. Let $\mathcal{M}^d=\mathbf S_w^{d^{-1/2}}\mathbf S^d_b\mathbf S_w^{d^{-1/2}}$ and $\mathcal{N}_{dj}=\mathbf S_w^{d^{-1/2}}\mathbf S_{dj}\mathbf S_w^{j^{-1/2}}$.
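To make the estimators concrete, here is a minimal NumPy sketch of the between-class and within-class covariance estimates for a single view, assuming $\mbox{\boldmath{$\mu$}}^d$ is the grand mean over all samples (function and variable names are ours, not from the authors' code):

```python
import numpy as np

def class_covariances(X_by_class):
    """Between-class (S_b) and within-class (S_w) covariance estimates for one
    view, given a list of (n_k x p) arrays, one per class; both use the
    1/(n-1) scaling from the text."""
    n = sum(Xk.shape[0] for Xk in X_by_class)
    mu = np.vstack(X_by_class).mean(axis=0)    # grand mean over all samples
    p = mu.shape[0]
    S_b = np.zeros((p, p))
    S_w = np.zeros((p, p))
    for Xk in X_by_class:
        mu_k = Xk.mean(axis=0)                 # class mean
        d = (mu_k - mu)[:, None]
        S_b += Xk.shape[0] * (d @ d.T)         # n_k-weighted outer product
        C = Xk - mu_k
        S_w += C.T @ C                         # within-class scatter
    return S_b / (n - 1), S_w / (n - 1)
```

With these scalings, the total covariance decomposes exactly as $\mathbf S_t^d = \mathbf S_b^d + \mathbf S_w^d$, which is a convenient sanity check.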
The IDA method finds linear discriminant vectors $(\widehat{\mbox{\boldmath{${\Gamma}$}}}^1, \ldots, \widehat{\mbox{\boldmath{${\Gamma}$}}}^D)$ that maximally associate the multiple views and separate the classes within each view by solving the optimization problem: \begin{eqnarray} \label{eqn:ida} \max_{\mbox{\boldmath{${\Gamma}$}}^1,\cdots,\mbox{\boldmath{${\Gamma}$}}^D} \rho\sum_{d=1}^D\text{tr}(\mbox{\boldmath{${\Gamma}$}}^{{d{\mbox{\tiny T}}}}\mathcal{M}^d \mbox{\boldmath{${\Gamma}$}}^d) + \frac{2(1-\rho)}{D(D-1)}\sum_{d=1}^{D}\sum_{j\neq d}^{D}\text{tr}(\mbox{\boldmath{${\Gamma}$}}^{{d{\mbox{\tiny T}}}}\mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}^j\mbox{\boldmath{${\Gamma}$}}^{{j{\mbox{\tiny T}}}}\mathcal{N}_{jd}\mbox{\boldmath{${\Gamma}$}}^d) ~\mbox{s.t.}~\text{tr}(\mbox{\boldmath{${\Gamma}$}}^{{d{\mbox{\tiny T}}}}\mbox{\boldmath{${\Gamma}$}}^d)=K-1. \end{eqnarray} The first term in equation (\ref{eqn:ida}) maximizes the sum of the separation of classes in each view, and the second term maximizes the sum of the pairwise squared correlations between two views. Here, $\rho$ controls the relative influence of separation and association in the optimization problem. The optimal solution of equation (\ref{eqn:ida}) was shown to solve a system of $D$ eigenvalue problems \citep{safosida:2021}. The discriminant loadings $(\widehat{\mbox{\boldmath{${\Gamma}$}}}^1, \ldots, \widehat{\mbox{\boldmath{${\Gamma}$}}}^D)$ correspond to the eigenvectors that maximally associate the views and separate the classes within each view. Once the discriminant loadings were obtained, the views were projected onto these loadings to yield the discriminant scores, and samples were classified using nearest centroid. To obtain the features contributing most to the association and separation, the authors imposed convex penalties on $\mbox{\boldmath{${\Gamma}$}}^d$ subject to modified eigensystem constraints.
In the following sections, we will modify the IDA optimization problem and cast it as an objective function for deep neural networks to study nonlinear associations among the views and separation of classes within a view. We will also implement a feature ranking approach to identify features contributing most to the association of the views and the separation of the classes in a view. \subsection{Deep Integrative Discriminant Analysis (Deep IDA)} We consider a deep learning formulation of the joint association and classification method of \cite{safosida:2021} to learn nonlinear relationships among multi-view data and between a view and a binary or multi-class outcome. We follow the notation in \cite{Andrew:2013} to define our deep learning network. Assume that the deep neural network for view $d$ has $M$ layers (each view can have its own number of layers), and that layer $m$ has $c_m^d$ nodes, for $m=1,\ldots,M-1$. Let $o_1,o_2,...,o_D$ be the dimensions of the final ($M$-th) layer for the $D$ views. Let $h^d_1 =s(W^d_1 \mathbf{x}^d + b^d_1) \in \Re^{c^d_1}$ be the output of the first layer for view $d$. Here, $\mathbf{x}^d$ is a length-$p_d$ vector representing a row in $\mathbf X^d$, $W^d_1 \in \Re^{ c_1^d \times p_d}$ is a matrix of weights for view $d$, $b^d_1 \in \Re^{c^d_1} $ is a vector of biases for view $d$ in the first layer, and $s: \Re \longrightarrow \Re$ is a nonlinear activation function. Using $h^d_1$ as an input for the second layer, let the output of the second layer be denoted as $h^d_2 =s(W^d_2h^d_1 + b^d_2) \in \Re^{c^d_2}$, $W^d_2 \in \Re^{c_2^d \times c_1^d}$ and $b^d_2 \in \Re^{c^d_2}$. Continuing in this fashion, let the output of the $(m-1)$-th layer be $h^d_{m-1} =s(W^d_{m-1} h^d_{m-2} + b^d_{m-1}) \in \Re^{c^d_{m-1}}$, $W^d_{m-1} \in \Re^{c^d_{m-1} \times c^d_{m-2}}$ and $b^d_{m-1} \in \Re^{c^d_{m-1}}$.
Denote the output of the final layer as $f^d(\mathbf{x}^d, \theta^d) = s(W^d_M h^d_{M-1} + b^d_M) \in \Re^{o_d}$, where $\theta^d$ is a collection of all weights, $W^d_m$, and biases, $b^d_m$ for $m=1,\ldots,M$ and $d=1,\ldots,D$. In matrix notation, the output of the final layer of the $d$-th view is denoted as $\mathbf H^d = f^d(\mathbf X^d) \in \Re^{n \times o_d}$, where it is clear that $f^d$ depends on the network parameters. On this final layer, we propose to solve a modified IDA optimization problem to obtain projection matrices that maximally associate the views and separate the classes. Specifically, we propose to find a set of linear transformations $\mathbf A_d= [\mbox{\boldmath{${\alpha}$}}_{d,1},\mbox{\boldmath{${\alpha}$}}_{d,2},...,\mbox{\boldmath{${\alpha}$}}_{d,l}] \in \mathbf R^{o_d\times l}$, $l\leq \min\{K-1,o_1,...,o_D\}$ such that when the nonlinearly-transformed data are projected onto these linear spaces, the views will have maximum linear association and the classes within each view will be linearly separated. Figure \ref{fig:DeepIDA} is a visual representation of Deep IDA. For a specific view $d$, $\mathbf H^d = [\mathbf H_1^d,\mathbf H_2^d,...,\mathbf H_K^d]^T, \mathbf H_k^d \in \mathbf R^{n_k\times o_d}, k = 1,...,K$ and $n = \sum_{k=1}^K n_k$. For the $k$-th class in the $d$-th final output, $\mathbf H_k^d = [{\bf h}_{k,1}^d,{\bf h}_{k,2}^d,...,{\bf h}_{k,n_k}^d]^T$, where ${\bf h}_{k,i}^d \in \mathbf R^{o_d}$ is a column vector representing the output for subject $i$ in the $k$th class for view $d$. 
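The layer-wise recursion above is just repeated affine maps followed by the activation $s$; a minimal NumPy sketch of the forward pass for one view, with $\tanh$ standing in for the unspecified activation (names are ours):

```python
import numpy as np

def forward(x, weights, biases, s=np.tanh):
    """Layer-wise map h_m = s(W_m h_{m-1} + b_m) for a single input vector x
    of one view; tanh stands in for the unspecified activation s."""
    h = x
    for W, b in zip(weights, biases):
        h = s(W @ h + b)
    return h
```

Stacking the outputs of the $n$ samples row-wise yields the matrix $\mathbf H^d$ used in the optimization below.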
Using $\mathbf H^d$ as the data matrix for view $d$, we define the between-class covariance (i.e., $\mathbf S^d_b \in \mathbf R^{o_d\times o_d}$), the total covariance (i.e., $\mathbf S^d_t \in \mathbf R^{o_d\times o_d}$), and the cross-covariance between views $d$ and $j$ ($\mathbf S_{dj}\in \mathbf R^{o_d\times o_j}$) respectively as: $\mathbf S^d_b = \frac{1}{n-1} \sum_{k=1}^K n_k (\mbox{\boldmath{$\mu$}}^d_k - \mbox{\boldmath{$\mu$}}^d)(\mbox{\boldmath{$\mu$}}^d_k - \mbox{\boldmath{$\mu$}}^d)^{{\mbox{\tiny T}}}$; $\mathbf S^d_t = \frac{1}{n-1} \sum_{k=1}^K\sum_{i=1}^{n_k} ({\bf h}^d_{k,i} - \mbox{\boldmath{$\mu$}}^d)({\bf h}^d_{k,i} - \mbox{\boldmath{$\mu$}}^d)^{{\mbox{\tiny T}}}= \frac{1}{n-1} (\mathbf H^{d^{{\mbox{\tiny T}}}}-\mbox{\boldmath{$\mu$}}^d \cdot \mathbf{1})(\mathbf H^{d^{{\mbox{\tiny T}}}}-\mbox{\boldmath{$\mu$}}^d \cdot \mathbf{1})^{{\mbox{\tiny T}}}$, and $\mathbf S_{dj} = \frac{1}{n-1} (\mathbf H^{d^{{\mbox{\tiny T}}}}-\mbox{\boldmath{$\mu$}}^d \cdot \mathbf{1})(\mathbf H^{j^{{\mbox{\tiny T}}}}-\mbox{\boldmath{$\mu$}}^j \cdot \mathbf{1})^{{\mbox{\tiny T}}}$. Here, $\mathbf{1}$ is an all-ones row vector of dimension $n$, $\mbox{\boldmath{$\mu$}}_k^d = \frac{1}{n_k}\sum_{i=1}^{n_k} {\bf h}_{k,i}^d \in \mathbf R^{o_d}$ is the $k$-th class mean, and $\mbox{\boldmath{$\mu$}}^d = \frac{1}{K} \sum_{k=1}^K \mbox{\boldmath{$\mu$}}_k^d \in \mathbf R^{o_d}$ is the mean for projected view $d$.
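A minimal NumPy sketch of the total and cross-covariance computations on the top-level representations, centering each view at the average of its class means as defined above (names are ours):

```python
import numpy as np

def deepida_covariances(H_d, H_j, y):
    """Total covariance S_t^d and cross-covariance S_dj for top-level
    representations H^d (n x o_d) and H^j (n x o_j); y holds class labels,
    and each view is centered at the average of its class means."""
    classes = np.unique(y)
    mu_d = np.mean([H_d[y == k].mean(axis=0) for k in classes], axis=0)
    mu_j = np.mean([H_j[y == k].mean(axis=0) for k in classes], axis=0)
    n = H_d.shape[0]
    Cd, Cj = H_d - mu_d, H_j - mu_j       # centering broadcasts over rows
    S_t = Cd.T @ Cd / (n - 1)
    S_dj = Cd.T @ Cj / (n - 1)
    return S_t, S_dj
```

For balanced classes the average of the class means coincides with the grand mean, so $\mathbf S_t^d$ then matches the usual sample covariance.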
To obtain the linear transformations $\mathbf A_1,\mathbf A_2,...,\mathbf A_D$ and the parameters of Deep IDA defining the functions $f^d$ (i.e., the weights and biases), we propose to solve the optimization problem: \begin{align}\label{eqn:mainopt} \argmax_{\mathbf A_1,\ldots,\mathbf A_D, f^1,\ldots,f^D } \left\{ \rho \frac{1}{D} \sum_{d=1}^D tr[\mathbf A_d^T\mathbf S_b^d\mathbf A_d] + (1-\rho) \frac{2}{D(D-1)} \sum_{d=1}^D \sum_{j,j\neq d}^D tr[\mathbf A_d^T \mathbf S_{dj} \mathbf A_j \mathbf A_j^T \mathbf S_{dj}^T \mathbf A_d] \right\} \nonumber\\ \mbox{subject to } tr[\mathbf A_d^T\mathbf S_t^d\mathbf A_d] = l, \forall d = 1,...,D, \end{align} where $tr[\cdot]$ denotes the trace of a matrix and $\rho$ is a hyper-parameter that controls the relative contribution of the separation and the association to the optimization problem. Here, the first term is an average of the separation for the $D$ views, and the second term is an average of the pairwise squared correlation between two different views ($\frac{D(D-1)}{2}$ correlation measures in total). \begin{figure}[H] \centering \includegraphics[height=7cm,width=11cm]{Plots/process.png} \caption{The framework of Deep IDA. Classes are represented by shapes and views are represented by colors. The deep neural networks (DNN) are used to learn nonlinear transformations of the $D$ views, the outputs of the DNN for the views ($f^d$) are used as inputs in the optimization problem, and we learn linear projections $\mathbf A_d, d=1,\ldots,D$ that maximally correlate the nonlinearly transformed views and separate the classes within each view.
} \label{fig:DeepIDA} \end{figure} For fixed Deep IDA parameters (i.e., weights and biases), equation (\ref{eqn:mainopt}) reduces to solving the optimization problem: \begin{align}\label{eqn:Asol} \argmax_{\mathbf A_1,\ldots,\mathbf A_D} \left\{ \rho \frac{1}{D} \sum_{d=1}^D tr[\mathbf A_d^T\mathbf S_b^d\mathbf A_d] + (1-\rho) \frac{2}{D(D-1)} \sum_{d=1}^D \sum_{j,j\neq d}^D tr[\mathbf A_d^T \mathbf S_{dj} \mathbf A_j \mathbf A_j^T \mathbf S_{dj}^T \mathbf A_d] \right\} \nonumber\\ \mbox{subject to } tr[\mathbf A_d^T\mathbf S_t^d\mathbf A_d] = l, \forall d = 1,...,D. \end{align} Denote ${\mathbf S_t^d}^{-\frac{1}{2}}$ as the square root of the inverse of $\mathbf S_t^d$. Assuming $o_d < n$, $\mathbf S_t^d$ is non-singular, so its inverse exists. Let $\mathcal{M}^d = {\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_b^d {\mathbf S_t^d}^{-\frac{1}{2}}$, $\mathcal{N}_{dj} = {\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_{dj} {\mathbf S_t^j}^{-\frac{1}{2}}$ and $\mbox{\boldmath{${\Gamma}$}}_d = {\mathbf S_t^d}^{\frac{1}{2}} \mathbf A_d$. Then, the optimization problem in equation (\ref{eqn:Asol}) is equivalent to \begin{align}\label{eqn:Asol2} \argmax_{\mbox{\boldmath{${\Gamma}$}}_1,\mbox{\boldmath{${\Gamma}$}}_2,...,\mbox{\boldmath{${\Gamma}$}}_D} \left\{ \rho \frac{1}{D} \sum_{d=1}^D tr[\mbox{\boldmath{${\Gamma}$}}_d^T\mathcal{M}^d\mbox{\boldmath{${\Gamma}$}}_d] + (1-\rho) \frac{2}{D(D-1)} \sum_{d=1}^D \sum_{j,j\neq d}^D tr[\mbox{\boldmath{${\Gamma}$}}_d^T \mathcal{N}_{dj} \mbox{\boldmath{${\Gamma}$}}_j \mbox{\boldmath{${\Gamma}$}}_j^{{\mbox{\tiny T}}} \mathcal{N}_{dj}^{{\mbox{\tiny T}}} \mbox{\boldmath{${\Gamma}$}}_d] \right\} \nonumber\\ \mbox{subject to } tr[\mbox{\boldmath{${\Gamma}$}}_d^{{\mbox{\tiny T}}}\mbox{\boldmath{${\Gamma}$}}_d] = l, \forall d = 1,...,D, \end{align} and its solution reduces to solving a system of eigenvalue problems. More formally, we have the following theorem.
\begin{thm}\label{thm:GEVPmain} Let $\mathbf S_{t}^{d}$ and $\mathbf S_{b}^{d}$ respectively be the total covariance and the between-class covariance for the top-level representations $\mathbf H^d, d=1,\ldots,D$. Let $\mathbf S_{dj}$ be the cross-covariance between top-level representations $d$ and $j$. Assume $\mathbf S_{t}^{d} \succ 0$. Define $\mathcal{M}^d = {\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_b^d {\mathbf S_t^d}^{-\frac{1}{2}}$ and $\mathcal{N}_{dj} = {\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_{dj} {\mathbf S_t^j}^{-\frac{1}{2}}$. Then $\mbox{\boldmath{${\Gamma}$}}_d \in \Re^{o_d \times l}$, $l \le \min\{K-1, o_1,\ldots,o_D\}$ in equation (\ref{eqn:Asol2}) are eigenvectors corresponding respectively to eigenvalues $\mbox{\boldmath{${\Lambda}$}}_{d}=\mbox{diag}(\lambda_{d_{1}},\ldots,\lambda_{d_{l}})$, $\lambda_{d_{1}} > \cdots > \lambda_{d_{l}}>0$ that iteratively solve the eigensystem problems: \begin{align*} \left(c_1\mathcal{M}^d + c_2\sum_{j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^{{\mbox{\tiny T}}}\mathcal{N}_{dj}^{{\mbox{\tiny T}}}\right) \mbox{\boldmath{${\Gamma}$}}_d &= \mbox{\boldmath{${\Gamma}$}}_d \mbox{\boldmath{${\Lambda}$}}_d, \quad \forall d = 1,...,D, \end{align*} where $c_1 = \frac{\rho}{D}$ and $c_2 = \frac{2(1-\rho)}{D(D-1)}$. \end{thm} The proof of Theorem 1 is in the supplementary material. We can initialize the algorithm using any arbitrary normalized nonzero matrix. After iteratively solving the $D$ eigensystem problems until convergence, we obtain the optimized linear transformations $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_1,...,\widetilde{\mbox{\boldmath{${\Gamma}$}}}_D$, which maximize both the separation of classes in the top-level representations, $\mathbf H^d$, and the association among the top-level representations.
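The iteration in Theorem 1 alternates over views, refreshing each $\mbox{\boldmath{${\Gamma}$}}_d$ as the top-$l$ eigenvectors while the other views are held fixed; a minimal NumPy sketch under the stated assumptions (initialization and names are ours; the released package uses PyTorch):

```python
import numpy as np

def solve_gammas(M, N, rho=0.5, l=1, iters=50):
    """Iterative eigensystem updates sketched from Theorem 1.
    M[d] is the (o_d x o_d) symmetric matrix calligraphic-M^d, and
    N[(d, j)] is the (o_d x o_j) matrix calligraphic-N_dj. Returns the
    top-l eigenvectors Gamma_d of c1*M^d + c2*sum_j N_dj G_j G_j^T N_dj^T."""
    D = len(M)
    c1, c2 = rho / D, 2 * (1 - rho) / (D * (D - 1))
    Gam = [np.eye(M[d].shape[0])[:, :l] for d in range(D)]  # arbitrary init
    for _ in range(iters):
        for d in range(D):
            A = c1 * M[d]
            for j in range(D):
                if j != d:
                    B = N[(d, j)] @ Gam[j]
                    A = A + c2 * (B @ B.T)      # symmetric update
            vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
            Gam[d] = vecs[:, ::-1][:, :l]       # keep the top-l eigenvectors
    return Gam
```

The orthonormal columns returned by `eigh` automatically satisfy the constraint $tr[\mbox{\boldmath{${\Gamma}$}}_d^{{\mbox{\tiny T}}}\mbox{\boldmath{${\Gamma}$}}_d]=l$.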
Since we find the eigenvector-eigenvalue pairs of $(c_1\mathcal{M}^d + c_2\sum_{j=1,j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^T\mathcal{N}_{dj}^T)$, the columns of $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d$, $d = 1,...,D$ are orthogonal and provide unique information that contributes to the association and separation in the top-level representations. Given the optimized linear transformations $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_1,...,\widetilde{\mbox{\boldmath{${\Gamma}$}}}_D$, we construct the objective function for the $D$ deep neural networks as: \begin{align} \label{eqn:lossobj} \argmax_{f^1,f^2,...,f^D} c_1 \sum_{d=1}^D tr[\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^{{\mbox{\tiny T}}}\mathcal{M}^d\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d] + c_2 \sum_{d=1}^D \sum_{j, j\neq d}^D tr[\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^{{\mbox{\tiny T}}} \mathcal{N}_{dj} \widetilde{\mbox{\boldmath{${\Gamma}$}}}_j \widetilde{\mbox{\boldmath{${\Gamma}$}}}_j^{{\mbox{\tiny T}}} \mathcal{N}_{dj}^T \widetilde{\mbox{\boldmath{${\Gamma}$}}}_d]. \end{align} \begin{thm}\label{thm:loss} For $d$ fixed, let $\eta_{d,1}, \ldots, \eta_{d,l}$, $l \le \min\{K-1, o_1,\dots,o_D\}$ be the largest $l$ eigenvalues of $c_1\mathcal{M}^d + c_2\sum_{j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^{{\mbox{\tiny T}}}\mathcal{N}_{dj}^{{\mbox{\tiny T}}}$. Then the solution $\widetilde{f}^d$ to the optimization problem in equation (\ref{eqn:lossobj}) for view $d$ maximizes \begin{align} \sum_{r=1}^l \eta_{d,r}. \end{align} \end{thm} The objective function in Theorem 2 aims to maximize the sum of the $l$ largest eigenvalues for each view. In obtaining the view-specific eigenvalues, we use the cross-covariances between that view and each of the other views, and the total and between-class covariances for that view.
Thus, by maximizing the sum of the eigenvalues, we are estimating the corresponding eigenvectors that maximize both the association of the views and the separation of the classes within each view. By Theorem 2, the solution $\widetilde{f}^1,\ldots,\widetilde{f}^D$ (i.e., the weights and biases for the $D$ neural networks) of the optimization problem (\ref{eqn:lossobj}) is also given by the following: \begin{align}\label{eqn:lossobj2} \argmax_{f^1,f^2,...,f^D} \sum_{d=1}^D \sum_{r=1}^l \eta_{d,r}. \end{align} The objectives (\ref{eqn:Asol2}) and (\ref{eqn:lossobj2}) are naturally bounded because the characteristic roots of every ${\mathbf S^d_t}^{-1}\mathbf S^d_b$ (and hence of ${\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_b^d {\mathbf S_t^d}^{-\frac{1}{2}}$) are bounded and every squared correlation is also bounded. This guarantees convergent solutions of the loss function in equation (\ref{eqn:lossobj2}), unlike the method in \cite{dorfer2015deep}, which constrains the within-class covariance and has an unbounded loss function. We optimize the objective in (\ref{eqn:lossobj2}) with respect to the weights and biases for each layer and each view to obtain $\widetilde{f^1},...,\widetilde{f^D}$. The estimates $\widetilde{f^1(\mathbf X^1)},...,\widetilde{f^D(\mathbf X^D)}$ are used as the low-dimensional representations for classification. For classification, we follow the approach in \cite{safosida:2021} and use nearest centroid to assign future samples to the class with the closest mean. For this purpose, we have the option to use the pooled low-dimensional representations $\widehat{f}=(\widetilde{f^1(\mathbf X^1)},...,\widetilde{f^D(\mathbf X^D)})$ or the individual estimates $\widetilde{f^d(\mathbf X^d)}, d=1,\ldots,D$. \subsection{Optimization and Algorithm} The optimization proceeds as follows: \begin{itemize} \item Feed forward and calculate the loss. The outputs of the $D$ deep neural networks are $\mathbf H^1,...,\mathbf H^D$, which depend on the neural network parameters (i.e., the weights and biases).
Based on the objective in equation (\ref{eqn:lossobj2}), the final loss is calculated and denoted as $\mathcal{L} = -\sum_{d=1}^D \sum_{r=1}^l \eta_{d,r}$. \item Gradient of the loss function. The loss function $\mathcal{L}$ depends on the estimated linear projections $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d, d=1,\cdots,D$, and since these projections are computed from the outputs of the last layer of the networks, they introduce no additional trainable parameters. Therefore, we calculate the gradient of the loss function with respect to the view-specific output, i.e., $\frac{\partial \mathcal{L}}{\partial \mathbf H^d}, d =1,\ldots,D$. \item Gradient within each sub-network. Since each view-specific sub-network is propagated separately, we can calculate the gradient of each sub-network independently. With the neural network parameters (i.e., weights and biases) of the view-$d$ network denoted as $\theta^d$, we can calculate the partial derivative of the last layer with respect to the sub-network parameters as $\frac{\partial \mathbf H^d}{\partial \theta^d}$. These networks may have one or multiple layers and nonlinear activation functions, such as Leaky ReLU \citep{LeakyRelu:2013}. \item Deep IDA update via gradient descent. By the chain rule, we can calculate $\nabla_{\theta^d} \mathcal{L} = \frac{\partial \mathcal{L}}{\partial \mathbf H^d} \cdot \frac{\partial \mathbf H^d}{\partial \theta^d}$. We use the \textit{autograd} function in the PyTorch \citep{NEURIPS2019_9015} package to compute this gradient. At every optimization step, a stochastic gradient descent-based optimizer, such as Adam \citep{kingma2014adam}, is used to update the network parameters. \end{itemize} We describe the Deep IDA algorithm in Algorithm 1. We also describe in Algorithm 2 the approach for obtaining the linear projections using the output of the final layer.
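For completeness, the nearest-centroid classification step applied to the (pooled or view-specific) low-dimensional representations admits a short sketch (function and variable names are ours, not the released package's):

```python
import numpy as np

def nearest_centroid(train_scores, y_train, test_scores):
    """Nearest-centroid rule on low-dimensional representations: assign each
    test sample to the class whose training mean is closest in Euclidean
    distance."""
    classes = np.unique(y_train)
    centroids = np.vstack([train_scores[y_train == k].mean(axis=0)
                           for k in classes])
    # squared distances: (n_test x n_classes)
    d2 = ((test_scores[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]
```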
\begin{algorithm} \caption{Deep IDA algorithm} \begin{algorithmic}[1] \INPUT Training data $\mathbf X = \{\mathbf X^1,\mathbf X^2,..., \mathbf X^D\} $ and class labels $\mathbf{y}$; number of nodes of each layer in the $D$ neural networks (including the dimensions of the linear sub-spaces to project onto, $o_1,o_2,...,o_D$); learning rate $\alpha$ \OUTPUT Optimized weights and biases for the $D$ neural networks and corresponding estimates ($\widetilde{f^1(\mathbf X^1)},...,\widetilde{f^D(\mathbf X^D)})$ \STATE \textbf{Initialization} Assign random numbers to the weights and biases of the $D$ neural networks \WHILE{the loss has not converged} \STATE Feed forward the network of each view with the latest weights and biases to obtain the final layer $\mathbf H^d = f^d(\mathbf X^d) \in \mathbf R^{n\times o_d}, d=1,\ldots,D $ \STATE Apply Algorithm 2 to obtain $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_1,...,\widetilde{\mbox{\boldmath{${\Gamma}$}}}_D$ \STATE Compute the eigenvalues of $c_1\mathcal{M}^d + c_2\sum_{j,j\neq d}^D \mathcal{N}_{dj}\widetilde{\mbox{\boldmath{${\Gamma}$}}}_j\widetilde{\mbox{\boldmath{${\Gamma}$}}}_j^{{\mbox{\tiny T}}}\mathcal{N}_{dj}^{{\mbox{\tiny T}}}$ to obtain the loss function $-\sum_{d=1}^D \sum_{i=1}^l \eta_{d,i}$ \STATE Compute the gradients of the weights and biases for each network with the PyTorch \textit{Autograd} function \STATE Update the weights and biases with the specified learning rate $\alpha$ \ENDWHILE \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Algorithm for iteratively solving the eigenvalue problems} \begin{algorithmic}[1] \INPUT Training data $\mathbf H = \{\mathbf H^1,\mathbf H^2,..., \mathbf H^D\} $ and corresponding class labels $\mathbf{y}$; convergence criterion $\epsilon$ \OUTPUT Optimized discriminant loadings $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_1,...,\widetilde{\mbox{\boldmath{${\Gamma}$}}}_D$ \STATE Compute $\mathcal{M}^d$, $\mathcal{N}_{dj}$ for $d,j=1,\ldots,D $ \WHILE{$\max_{d=1,\ldots,D} \frac{\|\widehat{\mbox{\boldmath{${\Gamma}$}}}_{d_{,new}}
- \widehat{\mbox{\boldmath{${\Gamma}$}}}_{d_{,old}}\|_F^2}{\| \widehat{\mbox{\boldmath{${\Gamma}$}}}_{d_{,old}}\|_F^2} > \epsilon$} \STATE \textbf{for} $d=1,\ldots,D$ \textbf{do}: fix $\widehat{\mbox{\boldmath{${\Gamma}$}}}_j, \forall j\neq d$, compute $\widehat{\mbox{\boldmath{${\Gamma}$}}}_d$ by \textbf{Theorem 1} \ENDWHILE \STATE Set $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d=\widehat{\mbox{\boldmath{${\Gamma}$}}}_d, \forall d=1,...,D$ \end{algorithmic} \end{algorithm} \subsection{Comparison of Deep IDA with the Multi-view Linear Discriminant Analysis Network (MvLDAN)} Our proposed method is related to the method in \cite{hu2019multi}, since we find linear projections of nonlinearly transformed views that separate the classes within each view and maximize the correlation among the views. The authors in \cite{hu2019multi} proposed to solve the following optimization problem for the linear projection matrices $\mathbf A_1, \cdots,\mathbf A_D$ and neural network parameters (weights and biases): \begin{equation}\label{mvldna} \argmax_{f^1,\cdots,f^D, \mathbf A_1,\cdots,\mathbf A_D} tr\left((\mathbf S_w + \beta\mathbf A^{{\mbox{\tiny T}}}\mathbf A)^{-1}(\mathbf S_b + \lambda\mathbf S_c) \right), \end{equation} where $\mathbf A =[\mathbf A_1^{{\mbox{\tiny T}}} \cdots \mathbf A_D^{{\mbox{\tiny T}}}]^{{\mbox{\tiny T}}}$ is a concatenation of the projection matrices from all views, $\mathbf S_b$ and $\mathbf S_w$ are the between-class and within-class covariances for all views, respectively, and $\mathbf S_c$ is the cross-covariance matrix for all views. Further, $\lambda$ and $\beta$ are regularization parameters. The authors then learn the parameters of the deep neural network by maximizing the smallest eigenvalues of the generalized eigenvalue problem arising from equation (\ref{mvldna}) that do not exceed a pre-specified threshold.
Although we have the same goal as the authors in \cite{hu2019multi}, our optimization formulation in equation (\ref{eqn:mainopt}) and our loss function are different. We constrain the total covariance matrix $\mathbf S_t$ instead of $\mathbf S_w$, and, as noted above, our loss function is bounded. As such, we do not have convergence issues when training our deep learning parameters. A major drawback of MvLDAN (and of existing nonlinear association-based methods for multi-view data) is that it cannot be used to identify the variables contributing most to the association among the views and/or the separation of the classes. In the next section, we propose an approach that bridges this gap. \subsection{Feature Ranking via Bootstrap} A main limitation of existing nonlinear methods for integrating data from multiple views is that the models are difficult to interpret, which limits their ability to produce clinically meaningful findings. We propose a general framework, based on ensemble learning, for ranking features in deep learning models for data from multiple views. Specifically, we propose a homogeneous ensemble approach for feature ranking in which we implement Deep IDA on different training data subsets to yield low-dimensional representations (that are correlated among the views while separating the classes in each view); we aggregate the classification performance from these low-dimensional representations; we rank features based on the aggregates; and we obtain a final low-dimensional representation of the data based on the top-ranked variables. We emphasize that while we embed Deep IDA in this feature ranking procedure, in principle any method for associating multiple views can be embedded in this process. This makes the proposed approach general. We outline our feature ranking steps below. Figure \ref{fig:my_label} is a visual representation of the feature ranking procedure.
\begin{enumerate} \item Generate $M$ bootstrap sets of sample indices of the same sample size as the original data by randomly sampling the indices with replacement. Denote the bootstrap sets of indices as $B_1,B_2,...,B_M$ and the corresponding out-of-bag sets of indices as $B_1^c,B_2^c,...,B_M^c$. In generating the bootstrap training sets of indices, we use stratified random sampling to ensure that the proportions of samples in each class in the bootstrap sets of indices are similar to the original data. \item Draw $M$ bootstrap sets of feature indices for each view. For view $j$, $j=1,\cdots,D$, draw $0.8p_j$ indices from the feature index set and denote the result as $V_{m,j}$. Then $V_m = \{V_{m,1},V_{m,2},...,V_{m,D}\}$ is the $m$-th bootstrap feature index set for all $D$ views. \item Pair sample and feature index sets randomly. Denote the pairing results as $(B_1,V_1),(B_2,V_2),...,(B_M,V_M)$. For each pair $(B_m,V_m)$ and $(B_m^c,V_m), m=1,2,...,M,$ extract the corresponding subsets of data. \item For the $m$-th pair, denote the bootstrap data as $\mathbf X_{m,1},...,\mathbf X_{m,D}$ and the out-of-bag data as $\mathbf X_{m,1}^c,...,\mathbf X_{m,D}^c$. Train Deep IDA on $\mathbf X_{m,1},...,\mathbf X_{m,D}$, and calculate the test classification rate on $\mathbf X_{m,1}^c,...,\mathbf X_{m,D}^c$. Record this rate as the baseline classification rate for pair $m, m=1,2,...,M$. \item For the $d$-th view in the $m$-th pair, permute the $k$-th variable in $\mathbf X_{m,d}^c$ and keep all other variables unchanged. Denote the permuted view-$d$ data as $\mathbf X_{m,d,k\text{-permuted}}^c$. Use the learned model from Step 4 and the permuted data $(\mathbf X_{m,1}^c,...,\mathbf X_{m,d,k\text{-permuted}}^c,...,\mathbf X_{m,D}^c)$ to obtain the classification rate for the permuted data. \item Repeat Step 5 for $m=1,...,M$, $d=1,...,D$ and $ k = 1,...,p_d$. Record the variables that lead to a decrease in classification rate when using the permuted data.
\item For the $d$-th view, calculate the occurrence proportion of variable $k$, $k=1,2,...,p_d$ (i.e., the proportion of times the variable leads to a decrease in classification accuracy) as $\frac{n_k}{N_k}$, where $n_k$ denotes the number of times that permuting variable $k$ leads to a decrease in the out-of-bag classification rate, and $N_k$ denotes the number of times that variable $k$ is permuted (i.e., the total number of times variable $k$ is selected in the bootstrap feature index sets). Repeat this process for all views. \item For each view, rank the variables based on the occurrence proportions and select the top-ranked variables as the important variables. The top-ranked variables could be the top $r$ variables or the top $r\%$ of variables. \end{enumerate} Once we have obtained the top-ranked variables for each view (the number of top-ranked variables must be specified in advance), we train Deep IDA on the original data but with only these top-ranked variables. If testing data are available, say $\mathbf X^d_{test}$, we use the learned neural network parameters to construct the output of the top-level representations for each view, i.e., $\mathbf H^d_{test}=\widetilde{f^d(\mathbf X^d_{test})}, d=1,\dots,D$. These are then used in a nearest centroid algorithm to predict the test classes. Thus, our final low-dimensional representations of the data are based on the features from each view that contribute most to the association of the views and the separation of the classes within each view. We implement the proposed algorithm as a Python 3 package with dependencies on NumPy \citep{oliphant2006guide} and PyTorch \citep{NEURIPS2019_9015} for numerical computations, and Matplotlib for model visualization. \begin{figure} \centering \includegraphics[height=16cm,width=16cm]{Plots/vs_version3.png} \caption{The framework of the feature ranking process. A) Bootstrapping samples and features. This includes Steps 1 and 2.
$V_m$: the $m$-th bootstrap feature index set; $B_m$: the $m$-th bootstrap sample index set; $B_m^c$: the $m$-th out-of-bag sample index set. B) Pairing the data, training the reference model, permuting, and recording the decrease in classification performance. This includes Steps 3-6. C) Ranking features based on how often the baseline classification accuracy is reduced when a feature is permuted. This includes Steps 7 and 8. } \label{fig:my_label} \end{figure} \section{Simulations} \subsection{Set-up} We conduct simulation studies to assess the performance of Deep IDA for varying data dimensions, and as the relationships between the views and within a view become more complex. We demonstrate that Deep IDA is capable of i) simultaneously associating data from multiple views and discriminating among sample classes, and ii) identifying signal variables. We consider two simulation scenarios. In Scenario One, we simulate data to have linear relationships between views and linear decision boundaries between classes. In Scenario Two, we simulate data to have nonlinear relationships between views and nonlinear decision boundaries between classes. There are $K=3$ classes and $D=2$ and $D=3$ views in Scenario One. In Scenario Two, there are $K=2$ classes and $D=2$ views. In both scenarios, we generate 20 Monte Carlo training, validation, and testing sets. We evaluate the proposed and existing methods using the following criteria: i) test accuracy, and ii) feature selection. For feature selection, we evaluate the methods' ability to select the true signals. In Scenario One, the first 20 variables are important, and in Scenario Two, the top $10\%$ of variables in view 1 are signals. Since Deep IDA and the teacher-student (TS) framework rank features, and SIDA assigns zero weights to unimportant variables, for a fair comparison we only assess the number of signal variables that are in the top 20 (for Scenario One) and the top $10\%$ (for Scenario Two) of variables selected by the methods.
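The occurrence proportions that drive this feature-ranking comparison (Steps 7 and 8 of the procedure above) amount to simple counting over the permutation records; a minimal sketch, with the record format and names of our own choosing:

```python
import numpy as np

def occurrence_proportions(perm_records, p):
    """Steps 7-8 of the feature-ranking procedure for one view.
    perm_records is a list of (variable_index, accuracy_dropped) pairs
    accumulated over all bootstrap pairs; returns n_k / N_k per variable."""
    n_sel = np.zeros(p)       # N_k: times variable k was permuted
    n_drop = np.zeros(p)      # n_k: times permuting k reduced accuracy
    for k, dropped in perm_records:
        n_sel[k] += 1
        if dropped:
            n_drop[k] += 1
    # variables never selected get proportion 0
    return np.divide(n_drop, n_sel, out=np.zeros(p), where=n_sel > 0)
```

Ranking the variables by these proportions and keeping the top $r$ (or top $r\%$) completes the selection step.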
We compare test accuracy for Deep IDA with and without the variable ranking approach proposed in this manuscript. \subsection{Comparison Methods} We compare Deep IDA with classification-, association-, and joint association and classification-based methods. For classification-based methods, we consider the support vector machine (SVM) \citep{Hastie:2009} on stacked views. For association-based methods, we consider nonlinear methods such as deep canonical correlation analysis (Deep CCA) \citep{Andrew:2013}, deep generalized CCA (DGCCA) \citep{Benton2:2019} when there are three or more views, and randomized kernel CCA (RKCCA) \citep{pmlr-v32-lopez-paz14}. The association-based methods only model nonlinear associations between views; we therefore follow them with SVM to perform classification using the learned low-dimensional representations. We also compare Deep IDA to SIDA \citep{safosida:2021}, a joint association and classification method that models linear relationships between views and among classes in each view. We run SIDA and RKCCA using the Matlab code the authors provide. We set the number of random features in RKCCA to 300 and select the bandwidth of the radial basis kernel using the median heuristic \citep{garreau2017large}. We run Deep CCA and deep generalized CCA using the PyTorch code the authors provide. We couple Deep CCA and Deep GCCA with the teacher-student (TS) framework \citep{TS:2019} to rank variables. We also investigate the performance of these methods when we use the variables selected by Deep IDA. \subsection{Linear Simulations} We consider two simulation settings in this scenario, and we simulate data similarly to the simulations in \cite{safosida:2021}. In Setting One, there are $D=2$ views, $\mathbf X^{1}$ and $\mathbf X^{2}$, with $p_1=1,000$ and $p_2=1,000$ variables, respectively. There are $K=3$ classes, each of size $n_k=180$, $k=1,2,3$, giving a total sample size of $n=540$.
Let $\mathbf X^{d} = [\mathbf X_1^d, \mathbf X_2^d, \mathbf X_3^d], d=1,2$ be a concatenation of data from the three classes. The combined data $\left(\mathbf X^{1}_k, \mathbf X^{2}_k\right)$ for each class are simulated from $N(\mbox{\boldmath{$\mu$}}_k, \mbox{\boldmath{$\Sigma$}})$, where $\mbox{\boldmath{$\mu$}}_k = (\mbox{\boldmath{$\mu$}}^1_k, \mbox{\boldmath{$\mu$}}^2_k)^{{\mbox{\tiny T}}} \in \Re^{p_1 + p_2}, k=1,2,3$, is the combined mean vector for class $k$; $\mbox{\boldmath{$\mu$}}^1_k \in \Re^{p_1}$ and $\mbox{\boldmath{$\mu$}}^2_k \in \Re^{p_2}$ are the mean vectors for $\mathbf X^{1}_k$ and $\mathbf X^{2}_k$, respectively. The covariance matrix $\mbox{\boldmath{$\Sigma$}}$ for the combined data for each class is partitioned as \begin{eqnarray} \mbox{\boldmath{$\Sigma$}} =\left( \begin{array}{cc} \mbox{\boldmath{$\Sigma$}}^{1} & \mbox{\boldmath{$\Sigma$}}^{12} \\ \mbox{\boldmath{$\Sigma$}}^{21} & \mbox{\boldmath{$\Sigma$}}^{2} \end{array} \right),~ \mbox{\boldmath{$\Sigma$}}^1 =\left( \begin{array}{cc} \widetilde{\mbox{\boldmath{$\Sigma$}}}^{1} & \textbf{0} \\ \textbf{0} & \mathbf I_{p_1-20} \end{array} \right),~ \mbox{\boldmath{$\Sigma$}}^2 =\left( \begin{array}{cc} \widetilde{\mbox{\boldmath{$\Sigma$}}}^{2} & \textbf{0} \\ \textbf{0} & \mathbf I_{p_2-20} \end{array} \right) \nonumber \end{eqnarray} \noindent where $\mbox{\boldmath{$\Sigma$}}^{1}$ and $\mbox{\boldmath{$\Sigma$}}^{2}$ are the covariances of $\mathbf X^{1}$ and $\mathbf X^{2}$, respectively, and $\mbox{\boldmath{$\Sigma$}}^{12}$ is the cross-covariance between the two views. We generate $\widetilde{\mbox{\boldmath{$\Sigma$}}}^{1}$ and $\widetilde{\mbox{\boldmath{$\Sigma$}}}^{2}$ as block diagonal with 2 blocks of size 10, between-block correlation 0, and each block is a compound symmetric matrix with correlation 0.8. We generate the cross-covariance matrix $\mbox{\boldmath{$\Sigma$}}^{12}$ as follows.
Let $\mathbf{V}^{1} = [\mathbf{V}^{1}_{1},~ \mathbf{0}_{(p_1-20) \times 2}]^{{\mbox{\tiny T}}} \in \Re^{p_1 \times 2 }$, where the entries of ${\bf V}^{1}_{1} \in \Re^{20 \times 2}$ are \textit{i.i.d} samples from U(0.5,1). We similarly define $\mathbf{V}^2$ for the second view, and we normalize such that $\mathbf{V}^{1^{{\mbox{\tiny T}}}}\mbox{\boldmath{$\Sigma$}}^{1}\mathbf{V}^{1} = \mathbf{I}$ and $\mathbf{V}^{2^{{\mbox{\tiny T}}}}\mbox{\boldmath{$\Sigma$}}^2 \mathbf{V}^{2} = \mathbf{I}$. We then set $\mbox{\boldmath{$\Sigma$}}^{12} = \mbox{\boldmath{$\Sigma$}}^{1}{\bf V}^1\mathbf D{\bf V}^{2^{{\mbox{\tiny T}}}} \mbox{\boldmath{$\Sigma$}}^{2}$, with $\mathbf D= \text{diag}(0.4, 0.2)$, to induce moderate association between the views. For class separation, define the matrix $[\mbox{\boldmath{$\Sigma$}}\mathbf A, \textbf{0}_{(p_1 + p_2)}] \in \Re^{(p_1 + p_2) \times 3}$, where $\mathbf A=[\mathbf A^1, \mathbf A^2]^{{\mbox{\tiny T}}} \in \Re^{(p_1 +p_2) \times 2}$, and set its first, second, and third columns as the mean vectors for classes 1, 2, and 3, respectively. Here, the first column of $\mathbf A^{1} \in \Re^{p_1 \times 2}$ is set to $(c\textbf{1}_{10}, \textbf{0}_{p_1-10})$ and the second column is set to $( \textbf{0}_{10},-c\textbf{1}_{10}, \textbf{0}_{p_1-20})$; we fix $c$ at 0.2. We set $\mathbf A^{2} \in \Re^{p_2 \times 2}$ similarly, but we fix $c$ at $0.1$ to allow for different class separation in each view. In Setting Two, we simulate $D=3$ views, $\mathbf X^d, d=1,2,3$, and each view is a concatenation of data from three classes as before.
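A reduced-dimension NumPy sketch of the Setting One generation scheme may make it concrete. Here $p_1=p_2=100$ and 30 samples per class are used instead of 1,000 and 180 to keep it fast; the symmetric-square-root normalization of $\mathbf V$ is one way (among several) to enforce $\mathbf{V}^{{\mbox{\tiny T}}}\mbox{\boldmath{$\Sigma$}}\mathbf{V} = \mathbf{I}$, and all names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
p1 = p2 = 100          # reduced from 1,000 for a quick illustration
n_k = 30               # per-class size (180 in the paper)

def signal_cov(p):
    """Block-diagonal covariance: two compound-symmetric 10x10 blocks
    (correlation 0.8) for the 20 signal variables, identity elsewhere."""
    S = np.eye(p)
    block = 0.8 * np.ones((10, 10)) + 0.2 * np.eye(10)
    S[:10, :10] = block
    S[10:20, 10:20] = block
    return S

S1, S2 = signal_cov(p1), signal_cov(p2)

def loading(p, S):
    """Random U(0.5,1) loadings on the 20 signal variables, normalized so
    that V^T S V = I via the inverse symmetric square root of V^T S V."""
    V = np.zeros((p, 2))
    V[:20, :] = rng.uniform(0.5, 1.0, size=(20, 2))
    w, U = np.linalg.eigh(V.T @ S @ V)
    return V @ U @ np.diag(w ** -0.5) @ U.T

V1, V2 = loading(p1, S1), loading(p2, S2)
D = np.diag([0.4, 0.2])
S12 = S1 @ V1 @ D @ V2.T @ S2                 # cross-covariance
Sigma = np.block([[S1, S12], [S12.T, S2]])

# Class means: columns of [Sigma @ A, 0]; A stacks A^1 (c = 0.2), A^2 (c = 0.1).
def A_block(p, c):
    A = np.zeros((p, 2))
    A[:10, 0] = c
    A[10:20, 1] = -c
    return A

A = np.vstack([A_block(p1, 0.2), A_block(p2, 0.1)])
means = np.hstack([Sigma @ A, np.zeros((p1 + p2, 1))])   # one column per class

X = np.vstack([rng.multivariate_normal(means[:, k], Sigma, size=n_k)
               for k in range(3)])
y = np.repeat([0, 1, 2], n_k)
print(X.shape, y.shape)   # (90, 200) (90,)
```

Because the canonical correlations in $\mathbf D$ are below 1, the assembled $\mbox{\boldmath{$\Sigma$}}$ is positive definite and valid for sampling.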
The combined data $\left(\mathbf X^{1}_k, \mathbf X^{2}_k,\mathbf X^{3}_k\right)$ for each class are simulated from $N(\mbox{\boldmath{$\mu$}}_k, \mbox{\boldmath{$\Sigma$}})$, where $\mbox{\boldmath{$\mu$}}_k = (\mbox{\boldmath{$\mu$}}^1_k, \mbox{\boldmath{$\mu$}}^2_k,\mbox{\boldmath{$\mu$}}^3_k)^{{\mbox{\tiny T}}} \in \Re^{p_1+p_2+p_3}, k=1,2,3$, is the combined mean vector for class $k$; $\mbox{\boldmath{$\mu$}}^d_k \in \Re^{p_d}, d=1,2,3$, are the mean vectors for $\mathbf X^{d}_k, d=1,2,3$. The true covariance matrix $\mbox{\boldmath{$\Sigma$}}$ is defined similar to Setting One but with the following modifications. We include $\mbox{\boldmath{$\Sigma$}}^{3}$, $\mbox{\boldmath{$\Sigma$}}^{13}$, and $\mbox{\boldmath{$\Sigma$}}^{23}$, and we set $\mbox{\boldmath{$\Sigma$}}^{13}=\mbox{\boldmath{$\Sigma$}}^{23}=\mbox{\boldmath{$\Sigma$}}^{12}$. Like $\mbox{\boldmath{$\Sigma$}}^{1}$ and $\mbox{\boldmath{$\Sigma$}}^{2}$, $\mbox{\boldmath{$\Sigma$}}^{3}$ is partitioned into signals and noise, and the covariance for the signal variables, $\widetilde{\mbox{\boldmath{$\Sigma$}}}^{3}$, is also block diagonal with 2 blocks of size 10, between-block correlation 0, and each block is a compound symmetric matrix with correlation 0.8. We take $\mbox{\boldmath{$\mu$}}_k$ to be the columns of $[\mbox{\boldmath{$\Sigma$}}\mathbf A, \textbf{0}_{(p_1+p_2 +p_3)}] \in \Re^{(p_1+p_2 +p_3) \times 3}$, where $\mathbf A=[\mathbf A^1, \mathbf A^2,\mathbf A^3]^{{\mbox{\tiny T}}} \in \Re^{(p_1 +p_2+p_3) \times 2}$. The first column of $\mathbf A^{d} \in \Re^{p_d \times 2}$ is set to $(c_d\textbf{1}_{10}, \textbf{0}_{p_d-10} )$ and the second column is set to $( \textbf{0}_{10},-c_d\textbf{1}_{10}, \textbf{0}_{p_d-20})$ for $d=1,2,3$. We set $(c_1,c_2,c_3) =(0.2,0.1,0.05)$ to allow for different class separation in each view.
\subsubsection{Results for Linear Simulations} Table \ref{tab: Linear} gives the average test error rates for the methods and the true positive rates for the top 20 variables selected. We implemented a three-hidden-layer network with dimensions 512, 256, and 64 for both Deep IDA and Deep CCA. The dimension of the output layer was set to 10. Table 3 in the supplementary material lists the network structure used for each setting. For Deep IDA + Bootstrap, the bootstrap algorithm proposed in the Methods Section was implemented on the training data to choose the top 20 ranked variables. We then implemented Deep IDA on the training data but with just the variables ranked in the top 20 in each view. The learned model and the testing data were used to obtain the test error. To compare our feature ranking process with the teacher-student (TS) network approach for feature ranking, we also implemented Deep IDA without the bootstrap approach for feature ranking, and we used the learned model from Deep IDA in the TS framework for feature ranking. We also performed feature ranking using the learned model from Deep CCA (Setting One) and Deep GCCA (Setting Two). The average error rates for the nonlinear methods are higher than the error rate for SIDA, a linear method for joint association and classification analysis. This is not surprising as the true relationships among the views and between the classes within a view are linear. Nevertheless, the average test error rate for Deep IDA based on the top 20 variables in each view from the bootstrap method (i.e., Deep IDA + Bootstrap) is lower than the average test error rates from Deep CCA, RKCCA, and SVM (on stacked views). When we implemented Deep CCA, RKCCA, SVM, and DGCCA on the top 20 variables that were selected by our proposed method, we observed a decrease in the error rate across most of the methods, except for RKCCA.
For instance, the error rates for Deep CCA using all variables compared to using the top 20 variables identified by our method were $33.17\%$ and $22.95\%$, respectively. Further, compared to Deep IDA on all variables (i.e., Deep IDA + TS), Deep IDA + Bootstrap has a lower average test error, demonstrating the advantage of variable selection. In Setting Two, the classification accuracy for Deep GCCA was poor. In terms of variable selection, compared to SIDA, the proposed method was competitive at identifying the top-ranked 20 variables. The TS framework for ranking variables was sub-optimal, as is evident from the true positive rates for Deep IDA + TS, Deep CCA + TS, and Deep GCCA + TS. \begin{table} \spacingset{1} \begin{small} \begin{centering} \caption{Linear Setting: RS: randomly selected tuning parameter search space. TPR-1: true positive rate for $\mathbf{X}^1$; TPR-2 and TPR-3 are defined similarly. TS: teacher-student network. $-$ indicates not applicable. For Deep IDA + Bootstrap, we use the bootstrap algorithm to choose the top 20 ranked variables, train Deep IDA with the top 20-ranked variables, and then use the learned model and the test data to obtain test errors.} \label{tab: Linear} \begin{tabular}{lrrrr} \hline \hline Method&Error (\%)& TPR-1&TPR-2 &TPR-3 \\ \hline \hline \textbf{Setting One}& \\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap} &24.69 & 100.00&95.25& -\\ {Deep IDA + TS} &33.87& 33.25&21.75& -\\ {SIDA} &20.81& 99.50&93.50& -\\ Deep CCA + TS & 33.17& 4.25&3.25& -\\ Deep CCA on selected top 20 variables & 22.95 & -&-& -\\ RKCCA&40.10& -&-& -\\ RKCCA on selected top 20 variables &42.07& -&-& -\\ SVM (Stacked views) & 31.53& -&-& -\\ SVM on selected top 20 variables (Stacked views) & 22.03& -&-& -\\ \hline \textbf{Setting Two}& \\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap} &23.16 &100.00&94.75& 78.75\\ {Deep IDA + TS} &31.22& 72.00&57.75& 47.75\\ {SIDA} &19.79& 99.75&99.50& 97.25\\ DGCCA + TS &60.01 & 2.00&2.00& 2.25\\ DGCCA on selected top 20 variables & 57.40& 
-&-& -\\ SVM (Stacked views) &29.06& -&-& -\\ SVM on selected top 20 variables (Stacked views) & 19.56 & -&-& -\\ \hline \hline \end{tabular} \end{centering} \end{small} \end{table} \subsection{Nonlinear Simulations} \begin{small} \begin{figure}[H] \begin{center} \begin{tabular}{ll} \includegraphics[height = 1.5in]{Plots/X1July20Setting.png}& \includegraphics[height = 1.5in]{Plots/imageX1July15Setting.png}\\ \end{tabular} \caption{Setting One. (Left panel) Structure of nonlinear relationships between signal variables in view 1. (Right panel) Image plot of view 1 showing the first 50 variables as signals. } \label{fig:nonlinears1} \end{center} \end{figure} \end{small} We consider four different settings for this scenario. Each setting has $K=2$ classes, but the settings vary in their dimensions. In each setting, $10\%$ of the variables in the first view are signals, and the first five signal variables in the first view are related to the remaining signal variables in a nonlinear way (see Figure \ref{fig:nonlinears1}). We generate data for View 1 as $\mathbf{X}_1= \widetilde{\mathbf X}_1 \cdot \mathbf W + 0.2\mathbf{E}_1$, where $(\cdot)$ denotes element-wise multiplication, $\mathbf W = [\mathbf{1}_{n \times 0.1p_1}, \mathbf{0}_{n \times 0.9p_1}] \in \Re^{n \times p_1}$ is a binary masking matrix in which $\mathbf{1}$ is a matrix of ones and $\mathbf{0}$ is a matrix of zeros, and the entries of $\mathbf{E}_1$ are \textit{i.i.d.} $N(0,1)$. The first five columns (or variables) of $\widetilde{\mathbf X}_1 \in \Re^{n \times p_1}$ are simulated from $\exp(0.15\mbox{\boldmath{${\theta}$}})\cdot\sin(1.5\mbox{\boldmath{${\theta}$}})$. The next $0.1p_1 - 5$ variables are simulated from $\exp(0.15\mbox{\boldmath{${\theta}$}})\cdot\cos(1.5\mbox{\boldmath{${\theta}$}})$. Here, $\mbox{\boldmath{${\theta}$}}=\tilde{\mbox{\boldmath{${\theta}$}}} + 0.5U(0,1)$, and $\tilde{\mbox{\boldmath{${\theta}$}}}$ is a vector of $n$ evenly spaced points between $0$ and $3\pi$.
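The View 1 construction just described might look like the following NumPy sketch. Drawing a fresh jittered $\mbox{\boldmath{${\theta}$}}$ for each signal column is our assumption (the text does not specify whether the columns share one jittered $\mbox{\boldmath{${\theta}$}}$), and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p1 = 350, 500                     # e.g. Setting One: n = 200 + 150
n_sig = p1 // 10                     # 10% of variables are signals

base = np.linspace(0, 3 * np.pi, n)  # theta-tilde: evenly spaced on [0, 3*pi]

def curve(f):
    """One signal column: f in {sin, cos}, theta jittered by 0.5*U(0,1)."""
    theta = base + 0.5 * rng.uniform(size=n)
    return np.exp(0.15 * theta) * f(1.5 * theta)

Xt = rng.standard_normal((n, p1))    # remaining 90% of columns: N(0,1)
for j in range(5):
    Xt[:, j] = curve(np.sin)
for j in range(5, n_sig):
    Xt[:, j] = curve(np.cos)

# W zeroes out the 90% non-signal columns; E is independent N(0,1) noise.
W = np.hstack([np.ones((n, n_sig)), np.zeros((n, p1 - n_sig))])
X1 = Xt * W + 0.2 * rng.standard_normal((n, p1))
print(X1.shape)   # (350, 500)
```

After masking, the non-signal columns are pure $0.2\,N(0,1)$ noise, while the signal columns retain the exponentially growing sinusoids.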
The remaining $0.9p_1$ variables (or columns) in $\widetilde{\mathbf X}_1$ are generated from the standard normal distribution. View 2 has no signal variables, and the variables do not have nonlinear relationships. Data for View 2 are generated as follows: we set the negative entries of $\mathbf{X}_1$ to zero, normalize each variable to have unit norm, and add random noise generated from the standard uniform distribution. \subsubsection{Results for Nonlinear Simulations} Table \ref{tab: nonlinear2} gives the classification and variable selection accuracy. We chose the number of layers that gave the minimum validation error (based on our approach without bootstrap) or better variable selection. Table 4 in the supplementary material lists the network structure used for each setting. We compare Deep IDA to the nonlinear methods. Similar to the linear setting, for Deep IDA + Bootstrap, we implemented the bootstrap approach for variable ranking on the training data to choose the top $10\%$ ranked variables. We then implemented Deep IDA on the training data but with just the selected variables. The learned model and the testing data were used to obtain the test classification accuracy. We also implemented Deep CCA, RKCCA, and SVM with the variables that were selected by Deep IDA + Bootstrap to facilitate comparisons. For Deep IDA + TS and Deep CCA + TS, we implemented the teacher-student algorithm to rank the variables. Since only view 1 had informative features in this Scenario, we expected the classification accuracy from view 1 to be better than the classification accuracy from both views, and this is what we observed across most methods. We note that when training the models, we used both views. The classification accuracy from Deep IDA was generally higher than that of the other methods, except in Setting Three where it was lower than Deep CCA on the whole data (i.e., Deep CCA + TS).
We compared the classification accuracy of the proposed method with our feature ranking (i.e., Deep IDA + Bootstrap) and without it (i.e., Deep IDA + TS) to assess the effect variable selection has on classification estimates from our deep learning models. Deep IDA + Bootstrap had competitive or better classification accuracy (especially when using view 1 only for classification) compared to Deep IDA + TS. Further, the classification accuracy for Deep IDA + Bootstrap was generally higher than that of the other methods applied to data with variables selected by Deep IDA + Bootstrap (e.g., Deep CCA on top 50 selected features, Setting One). SVM applied on both views stacked together and on just view 1, either using the whole data or using data with variables selected by Deep IDA, resulted in similar classification performance, albeit lower than the proposed method. Thus, in this example, although only view 1 had signal variables, the classification performance from using both views was better than using only view 1 (e.g., SVM on view 1), attesting to the benefit of multi-view analyses. In terms of variable selection, the TS framework applied on Deep IDA and Deep CCA yielded sub-optimal results. Taken together, the classification and variable selection accuracy from both the linear and nonlinear simulations suggest that the proposed method is capable of ranking the signal variables higher, and is also able to yield competitive or better classification performance, even in situations where the sample size is less than the number of variables. \begin{table} \spacingset{1} \begin{centering} \begin{footnotesize} \caption{Mean (std.error) accuracy and true positive rates. View 1 data have signal variables with nonlinear relationships. TPR-1: true positive rate for $\mathbf{X}^1$. Deep IDA/Deep CCA/RKCCA view 1 means using the discriminant scores of view 1 only for classification. SVM view 1 uses view 1 data to train and test the model. 
$-$ indicates not applicable} \label{tab: nonlinear2} \begin{tabular}{lll} \hline \hline Method& Mean (\%) Accuracy& TPR-1 \\ \hline \hline \textbf{Setting One}& \\ $(p_1=500,p_2=500, n_1=200, n_2=150)$\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap } &60.87 (1.28) & 100.0 \\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap } view 1 &81.17 (2.89) & 100.0 \\ {Deep IDA + TS} &62.60 (2.02) & 10.20 \\ {Deep IDA + TS View 1} &81.49 (3.06) &10.20\\ Deep CCA + TS & 58.20(0.59)& 8.30\\ Deep CCA + TS view 1 & 61.26(0.77)& 8.30\\ Deep CCA on top 50 selected features & 58.91(0.82)& -\\ Deep CCA on top 50 selected features view 1 & 59.87(0.87)& -\\ RKCCA&61.06 (0.47)& -\\ RKCCA View 1&64.93 (0.63)& -\\ RKCCA on top 50 selected features &56.21 (0.77)& -\\ RKCCA View 1 on top 50 selected features &58.94 (0.78)& -\\ SVM &54.20(0.30)& -\\ SVM view 1 &50.26(0.21)& -\\ SVM on top 50 selected features & 50.37(0.27)& -\\ SVM on top 50 selected features view 1 & 50.07(0.26)& -\\ \hline \textbf{Setting Two}& \\ $(p_1=500,p_2=500, n_1=3,000, n_2=2,250)$\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap } &89.45(2.16)& 63.70\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap View 1} &90.49(2.25) & 63.70 \\ {Deep IDA + TS} &91.78 (1.73) & 10.50 \\ {Deep IDA + TS} View 1 &84.37 (1.49) &10.50\\ Deep CCA + TS & 59.84(0.40)& 33.70\\ Deep CCA + TS view 1 & 60.35(0.34)& 33.70\\ Deep CCA on top 50 selected features & 58.50(0.52)& -\\ Deep CCA on top 50 selected features view 1 & 58.32(0.52)& -\\ RKCCA&57.14 (0.00)& -\\ RKCCA View 1&57.14 (0.00)& -\\ RKCCA on top 50 selected features &66.30 (0.96)& -\\ RKCCA View 1 on top 50 selected features &66.32 (0.98)& -\\ SVM &54.42(0.11)& -\\ SVM view 1 &52.81(0.13) \\ SVM on top 50 selected features & 50.56(0.07)& -\\ SVM on top 50 selected features view 1 & 50.49(0.04)& -\\ \hline \textbf{Setting Three}& \\ $(p_1=2000,p_2=2000, n_1=200, n_2=150)$\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap } &54.77(0.91) & 96.05\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap View 1} 
&70.40(2.17) & 96.05 \\ {Deep IDA + TS} &61.67 (1.74) & 30.55 \\ {Deep IDA + TS} View 1 &60.73 (1.76) &30.55\\ Deep CCA + TS & 62.27(0.46)& 10.28\\ Deep CCA + TS view 1 & 70.83(0.36)& 10.28\\ Deep CCA on top 200 selected features & 61.43(0.62)& -\\ Deep CCA on top 200 selected features view 1 & 68.67(0.83)& -\\ RKCCA&60.24 (0.63)& -\\ RKCCA View 1&63.54 (0.57)& -\\ RKCCA on top 200 selected features &58.70 (0.52)& -\\ RKCCA View 1 on top 200 selected features &61.90 (0.91)& -\\ SVM &54.69(0.45)& -\\ SVM view 1 &53.97(0.41)& -\\ SVM on top 200 selected features & 51.14(0.42)& -\\ SVM on top 200 selected features view 1 & 50.19(0.35)& -\\ \hline \textbf{Setting Four}& \\ $(p_1=2000,p_2=2000, n_1=3,000, n_2=2,250)$\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap } &69.38(1.44)& 83.20\\ \textcolor[rgb]{0,0,1}{Deep IDA + Bootstrap View 1} &71.34(1.66) & 83.20 \\ {Deep IDA + TS} &64.52 (1.32) & 10.78 \\ {Deep IDA + TS} View 1 &69.99 (2.88) &10.78\\ Deep CCA + TS & 60.33(0.76)& 33.48\\ Deep CCA + TS view 1 & 58.54(1.00)& 33.48\\ Deep CCA on top 200 selected features & 59.58(1.66)& -\\ Deep CCA on top 200 selected features view 1 & 59.28(1.71)& -\\ RKCCA&57.14 (0.00)& -\\ RKCCA View 1&57.14 (0.00)& -\\ RKCCA on top 200 selected features &55.30 (0.79)& -\\ RKCCA View 1 on top 200 selected features &61.21 (0.81)& -\\ SVM &53.13(0.16)& -\\ SVM view 1 &53.37(0.13)& -\\ SVM on top 200 selected features & 52.72(0.13)& -\\ SVM on top 200 selected features view 1 & 50.59(0.12)& -\\ \hline \hline \end{tabular} \end{footnotesize} \end{centering} \end{table} \section{Real Data Analyses} We consider two real datasets: a) handwriting image data, and b) COVID-19 omics data. The image data will be used to primarily assess the classification performance of the proposed method without feature ranking while the COVID-19 data will be used to assess classification performance and to also demonstrate that Deep IDA is capable of identifying biologically relevant features. 
\subsection{Evaluation of the Noisy MNIST digits data} The original MNIST handwritten image dataset \citep{MNIST:1998} consists of 70,000 grayscale images of handwritten digits split into training, validation, and testing sets of 50,000, 10,000, and 10,000 images, respectively. The validation set was used to select network parameters from the best epoch (lowest validation loss). Each image is $28 \times 28$ pixels and has associated with it a label that denotes which digit the image represents (0-9). In \cite{Wang:2015}, a more complex and challenging noisy version of the original data was generated and used as a second view. First, all pixel values were scaled to lie between 0 and 1. The images were then randomly rotated at angles uniformly sampled from $[-\frac{\pi}{4},\frac{\pi}{4}]$, and the resulting images were used as view 1. Each rotated image was paired with an image of the same label from the original MNIST data, independent random noise generated from U(0,1) was added, and the pixel values were truncated to [0,1]. The transformed data form view 2. Figure \ref{fig:mnist} shows two image plots of a digit for views 1 and 2. Of note, view 1 is informative for classification and view 2 is noisy. Therefore, an ideal multi-view classification method should be able to extract the useful information from view 1 while disregarding the noise in view 2. \begin{figure}[H] \centering \begin{tabular}{cc} \includegraphics[width=0.3\textwidth]{Plots/view1_mnist.png}& \includegraphics[width=0.3\textwidth]{Plots/view2_mnist.png}\\ \end{tabular} \caption{An example of Noisy MNIST data. For the subject with label ``9'', the view 1 observation is on the left and the view 2 observation is on the right.} \label{fig:mnist} \end{figure} The goal of this application is to evaluate how well the proposed method without feature ranking can classify the digits using the two views.
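A sketch of this two-view construction on a toy batch is shown below, using \texttt{scipy.ndimage.rotate} for the random rotations; the function and variable names are illustrative, and the random toy "images" stand in for actual MNIST digits.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(3)

def make_noisy_mnist_pair(images, labels):
    """Build the two views from a batch of grayscale images in [0, 1].
    View 1: each image rotated by an angle drawn from U(-45, 45) degrees.
    View 2: a randomly chosen same-label image plus U(0, 1) noise,
    truncated back to [0, 1]."""
    view1 = np.stack([
        rotate(img, angle=rng.uniform(-45, 45), reshape=False, mode="constant")
        for img in images
    ]).clip(0, 1)
    view2 = np.empty_like(images)
    for i, y in enumerate(labels):
        j = rng.choice(np.flatnonzero(labels == y))   # same-label partner
        view2[i] = (images[j] + rng.uniform(size=images[j].shape)).clip(0, 1)
    return view1, view2

# Toy stand-in for MNIST: 10 random 28x28 "images", labels 0/1.
imgs = rng.uniform(size=(10, 28, 28))
labs = np.repeat([0, 1], 5)
v1, v2 = make_noisy_mnist_pair(imgs, labs)
print(v1.shape, v2.shape)   # (10, 28, 28) (10, 28, 28)
```

Keeping `reshape=False` preserves the $28 \times 28$ shape after rotation, matching the original data layout.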
Thus, we applied Deep IDA without feature ranking and the competing methods to the training data, and we used the learned models and the testing data to obtain test classification accuracy. The validation data were used in Deep IDA and Deep CCA to choose the best model among all epochs. Table 5 in the supplementary material lists the network structure used in this analysis. Table \ref{tab: mnist} gives the test classification results of the methods. We observe that the test classification accuracy of the proposed method with nearest centroid classification is better than that of SVM on the stacked data, and is comparable to Deep CCA. We observe a slight advantage of the proposed method when we implement SVM on the final layer of Deep IDA. \begin{table} \spacingset{1} \centering \caption{Noisy MNIST data: SVM was implemented on the stacked data. For Deep CCA + SVM, we trained SVM on the combined outputs (from view 1 and view 2) of the last layer of Deep CCA. For Deep IDA + NCC, we implemented the Nearest Centroid Classification on the combined outputs (from view 1 and view 2) of the last layer of Deep IDA. For Deep IDA + SVM, we trained SVM on the combined outputs (from view 1 and view 2) of the last layer of Deep IDA. \label{tab: mnist}} \begin{tabular}{lr} \hline \hline Method&Accuracy (\%) \\ \hline \hline SVM (combined view 1 and 2) &93.75\\ Deep CCA + SVM & 97.01\\ \textcolor[rgb]{0,0,1}{Deep IDA + NCC} & 97.74\\ \textcolor[rgb]{0,0,1}{Deep IDA + SVM}& 99.15\\ RKCCA + SVM & 91.79 \\ \hline \hline \end{tabular} \end{table} \subsection{Evaluation of the COVID-19 Omics Data} \subsubsection{Study Design and Goal} The molecular and clinical data we used are described in \cite{overmyer2021large}. Briefly, blood samples were collected from 128 patients admitted to the Albany Medical Center, NY, from 6 April 2020 to 1 May 2020 for moderate to severe respiratory issues. These samples were quantified for metabolomics, RNA-seq, proteomics, and lipidomics data.
In addition to the molecular data, various demographic and clinical data were obtained at the time of enrollment. For eligibility, subjects had to be at least 18 years old and admitted to the hospital for COVID-19-like symptoms. Of those eligible, $102$ had COVID-19, and $26$ were without COVID-19. Of those with COVID-19, 51 were admitted to the Intensive Care Unit (ICU) and 51 were not admitted to the ICU (i.e., Non-ICU). Of those without COVID-19, 10 were Non-ICU patients and 16 were ICU patients. Our goal is to elucidate the molecular architecture of COVID-19 severity by identifying molecular signatures that are associated with each other and have the potential to discriminate patients with and without COVID-19 who were or were not admitted to the ICU. \subsubsection{Data pre-processing and application of Deep IDA and competing methods} Of the 128 patients, 125 had both omics and clinical data. We focused on proteomics, RNA-seq, and metabolomics data in our analyses since many lipids were not annotated. We formed a four-class classification problem using COVID-19 and ICU status. Our four groups were: with COVID-19 and not admitted to the ICU (COVID Non-ICU), with COVID-19 and admitted to the ICU (COVID ICU), no COVID-19 and admitted to the ICU (Non-COVID ICU), and no COVID-19 and not admitted to the ICU (Non-COVID Non-ICU). The frequency distribution of samples in these four groups was: $40\%$ COVID ICU, $40\%$ COVID Non-ICU, $8\%$ Non-COVID Non-ICU, and $12\%$ Non-COVID ICU. The initial dataset contained $18,212$ genes, $517$ proteins, and $111$ metabolomics features. Prior to applying our method, we pre-processed the data (see Supplementary Material) to obtain a final dataset of $\mathbf{X}^{1} \in \Re^{125 \times 2,734}$ for the RNA-sequencing data, $\mathbf{X}^{2} \in \Re^{125 \times 269}$ for the proteomics data, and $\mathbf{X}^{3} \in \Re^{125 \times 66}$ for the metabolomics data.
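The four-class grouping of the $125$ samples, and a train/test split that keeps each group's proportion roughly constant, can be sketched as follows. The group sizes follow the reported frequencies; the per-group shuffling-and-rounding logic is an illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(4)

# Group sizes following the reported frequencies for n = 125 patients:
# 40% COVID ICU, 40% COVID Non-ICU, 12% Non-COVID ICU, 8% Non-COVID Non-ICU.
groups = np.repeat(["COVID ICU", "COVID Non-ICU",
                    "Non-COVID ICU", "Non-COVID Non-ICU"],
                   [50, 50, 15, 10])

def stratified_split(labels, train_frac, rng):
    """Index split keeping each group's proportion roughly constant."""
    train_idx = []
    for g in np.unique(labels):
        idx = np.flatnonzero(labels == g)
        rng.shuffle(idx)
        train_idx.extend(idx[: int(round(train_frac * len(idx)))])
    train = np.sort(train_idx)
    test = np.setdiff1d(np.arange(len(labels)), train)
    return train, test

train, test = stratified_split(groups, train_frac=74 / 125, rng=rng)
print(len(train), len(test))   # counts close to the 74/51 split used here
```

Per-group rounding means the realized split may differ from 74/51 by a sample or two, but the group proportions are preserved.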
We randomly split the data into training ($n=74$) and testing ($n=51$) sets while keeping the proportions in each group similar to the original data. We applied the methods to the training data and assessed error rates using the test data. We evaluated Deep IDA with and without feature selection. For Deep IDA with feature selection, we obtained the top 50 and top 10\% molecules after implementing Algorithm 1, learned the model on the training data with only the molecules that were selected, and estimated the test error with the testing data. We also assessed the performance of the other methods using variables that were selected by Deep IDA. This allowed us to investigate the importance of feature selection for these methods. \subsubsection{Test Accuracy and Molecules Selected} Table \ref{tab: omics} gives the test accuracy for Deep IDA in comparison with deep generalized CCA (Deep GCCA), SIDA, and SVM. Deep IDA on selected features and Deep IDA refer to applying the proposed method with and without variable selection, respectively. Deep IDA with feature selection based on the top $10\%$ variables yields the same test classification accuracy as Deep IDA without feature selection, and these estimates are higher than the test accuracy from the other methods. Further, we observed a slight increase in classification performance when we implemented Deep IDA with feature selection based on the top 50 ranked molecules for each omics. Naively stacking the data and applying the support vector machine results in the worst classification accuracy. When we applied SVM on the stacked data obtained from the variables that were selected by Deep IDA, the classification accuracy increased to $62.74\%$ (based on the top $10\%$ features selected by Deep IDA), representing a $13.73\%$ increase from applying SVM on the stacked data obtained from all the variables.
We also observed an increase in classification accuracy when we implemented Deep GCCA on the top $10\%$ selected features from Deep IDA compared to Deep GCCA on the whole data. Compared to SIDA, the joint association and classification method that assesses linear relationships in the views and among the groups, the proposed method has a higher test accuracy. Figure \ref{fig:discorr} gives the discriminant and correlation plots from Deep IDA based on the top-ranked 50 molecules from each omics. From the discriminant plots of the first three discriminant scores, we notice that the samples are well-separated in the training data. For the testing data, we observe some overlaps in the sample groups but the COVID ICU group seems to be separated from the COVID NON-ICU and NON-COVID ICU groups. This separation is more apparent in the RNA-sequencing and proteomics data and less apparent in the metabolomics data. Further, based on the testing data, the correlation between the metabolomics and proteomics data was higher when considering the first and third discriminant scores (0.69 and 0.36, respectively). When considering the second discriminant score, the correlation between the RNA-sequencing and proteomics data was higher (0.49). Overall, the mean correlation between the metabolomics and proteomics data was highest (0.4) while the mean correlation between the metabolomics and RNA-sequencing data was lowest (0.09). These findings suggest that the proposed method is capable of modeling nonlinear relationships among the different views and groups, and has potential to identify features that can lead to better classification results. Figure \ref{fig:featuresel} gives the top 50 genes, proteins, and metabolomics features that were highly-ranked by Deep IDA. Feature importance for each variable was normalized to the feature ranked highest for each omics. 
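The normalization used for these feature importance plots is a per-omics rescaling by the maximum score, so the top-ranked feature always maps to 1.0; for hypothetical raw scores:

```python
import numpy as np

# Hypothetical raw importance scores for four features of one omics view.
raw = np.array([3.2, 1.1, 0.4, 2.5])
norm = raw / raw.max()        # top-ranked feature maps to 1.0
print(norm.max())             # 1.0
```

This keeps the relative ordering of features within each omics while putting all three views on a comparable [0, 1] scale.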
\begin{figure}[H] \centering \begin{tabular}{cc} \includegraphics[width=0.70\textwidth]{Plots/view_xyz_as_s123-edited.png}\\ \includegraphics[width=0.70\textwidth]{Plots/test_pairwise_corr.png}\\ \end{tabular} \caption{Discrimination (3-D) plots: COVID-19 patient groups are well-separated in the training data. From the testing data, the COVID ICU group seems to be separated from the COVID NON-ICU and NON-COVID ICU groups, especially in the RNA-sequencing and proteomics data. Correlation plots (2-D): Overall (combining all three discriminant scores), the mean correlation between the metabolomics and proteomics data was highest (0.4) while the mean correlation between the metabolomics and RNA-sequencing data was lowest (0.09). } \label{fig:discorr} \end{figure} \begin{figure} \begin{tabular}{cc} \centering \includegraphics[width=0.45\textwidth]{Plots/Feature_score_view_1_top50.pdf}& \includegraphics[width=0.45\textwidth]{Plots/Feature_score_view_2_top50.pdf}\\ \includegraphics[width=0.50\textwidth]{Plots/Feature_score_view_3_top50.pdf} \\ \end{tabular} \caption{Feature importance plots of the omics data used in the COVID-19 application. Upper left: RNA-Seq; upper right: Proteomics; lower left: Metabolomics. Feature importance for each variable was normalized to the feature ranked highest for each omics. } \label{fig:featuresel} \end{figure} \clearpage \subsubsection{Pathway analysis of highly-ranked molecules} \begin{table} \spacingset{1} \centering \caption{COVID-19 Omics data: SVM is based on the stacked three-view raw data. Deep GCCA + SVM trains SVM on the last layer of Deep GCCA. Deep IDA applies nearest centroid classification on the last layer. The reported classification accuracies for Deep IDA are based on the optimized network structure. Deep IDA + top 10\% is based on an input-512-20 network structure. 
Deep IDA + top 50 is based on an input-512-256-20 network structure.\label{tab: omics}} \begin{tabular}{lr} \hline \hline Method&Accuracy (\%) \\ \hline \hline SVM & 49.01\\ SVM on selected top 10\% features & 62.74\\ SVM on selected top 50 features & 64.71\\ Deep GCCA + SVM & 64.71\\ Deep GCCA + SVM on selected top 10\% features & 68.63\\ Deep GCCA + SVM on selected top 50 features & 64.71\\ SIDA & 60.78\\ \textcolor[rgb]{0,0,1}{Deep IDA} & 76.47\\ \textcolor[rgb]{0,0,1}{Deep IDA} on selected top 10\% features & 76.47\\ \textcolor[rgb]{0,0,1}{Deep IDA} on selected top 50 features & 78.43\\ \hline \hline \end{tabular} \end{table} We used the Ingenuity Pathway Analysis (IPA) software to investigate the molecular and cellular functions, pathways, and diseases enriched in the proteins, genes, and metabolites that were ranked in the top 50 by our variable selection method. IPA searches the Ingenuity pathway knowledge base, which is manually curated from the scientific literature and over 30 databases, for gene interactions. We observed strong pathway, molecular and cellular function, and disease enrichment (Supplementary Tables 1 and 2). The top diseases and disorders significantly enriched in our list of genes are found in Supplementary Table 2. We note that 36 of the biomolecules in our gene list were determined to be associated with neurological diseases. This finding aligns with studies that suggest that persons with COVID-19 are likely to have neurological manifestations such as reduced consciousness and stroke \citep{berlit2020neurological,taquet20216}. Further, 48 genes from our list were determined to be associated with cancer. Again, this supports studies that suggest that individuals with an immune system compromised by cancer, or individuals who recently recovered from cancer, for instance, are at higher risk for severe outcomes. Compared to the general population, individuals with cancer have a two-fold increased risk of contracting SARS-CoV-2 \citep{al2020practical}.
The top 2 networks determined to be enriched in our gene list were ``hereditary disorder, neurological disease, organismal injury and abnormalities'', and ``cell signaling, cellular function and maintenance, and small molecule biochemistry''. As in our gene list, 34 proteins were determined to be associated with neurological disease. Other significantly enriched diseases in our protein list included infectious diseases (such as infection by SARS coronavirus), inflammatory response (such as inflammation of organs), and metabolic disease (including Alzheimer disease and diabetes mellitus). A recent review \citep{steenblock2021covid} found that up to $50\%$ of those who have died from COVID-19 had metabolic and vascular disorders. In particular, patients with metabolic dysfunction (e.g., obesity and diabetes) have an increased risk for developing severe COVID-19. Further, getting infected with SARS-CoV-2 can likely lead to new onset of diabetes. The top 2 networks determined to be enriched in our protein list were ``infectious diseases, cellular compromise, inflammatory response'', and ``tissue development, cardiovascular disease, hematological disease''. The top enriched canonical pathways in our protein list include LXR/RXR activation, FXR/RXR activation, acute phase response signaling, and atherosclerosis signaling (Table \ref{tab:toppathways}). These pathways are involved in metabolic processes such as cholesterol metabolism. The molecular and cellular functions enriched in our protein list include cellular movement and lipid metabolism (Supplementary Table 2). The overlapping canonical pathways analysis (Figure \ref{fig:protoverlap}) in IPA was used to visualize the shared biology in pathways through the common molecules participating in the pathways. The two pathways ``FXR/RXR Activation'' and ``LXR/RXR Activation'' share a large number (eight) of molecules: AGT, ALB, APOA2, APOH, APOM, CLU, PON1 and TF.
The LXR/RXR pathway is involved in the regulation of lipid metabolism, inflammation, and cholesterol-to-bile-acid catabolism. The farnesoid X receptor (FXR) is a member of the nuclear receptor family and plays a key role in metabolic pathways, regulating lipid metabolism, cell growth and malignancy \citep{wang2008fxr}. We observed lower levels of ALB, APOM, and TF in patients with COVID-19 (and more so in patients with COVID-19 who were admitted to the ICU) relative to patients without COVID-19 (Figure \ref{fig:proteinlevels}). We also observed higher levels of AGT and CLU in patients with COVID-19 admitted to the ICU compared to the other groups. The fact that the top enriched pathways and molecular and cellular functions are involved in metabolic processes such as lipid metabolism seems to corroborate the findings in \citep{overmyer2021large} that a key signature for COVID-19 is likely a dysregulated lipid transport system. For the top-ranked metabolomics features, we first used the MetaboAnalyst 5.0 \citep{metaboanalyst} software to obtain their Human Metabolome Database reference IDs and then used IPA on the mapped data for pathway, disease, and molecular and cellular function enrichment analysis. Of the top 50 ranked features, we were able to map 25 features. The top diseases and disorders significantly enriched in our list of metabolites (Table 1 of the supplementary material) included cancer and gastrointestinal disease (such as digestive system and hepatocellular cancers). Enriched molecular and cellular functions included amino acid metabolism, cell cycle, and cellular growth and proliferation. Figure 1 (supplementary material) shows the overlapping pathway network for the metabolites. Taken together, these findings suggest that COVID-19 disrupts many biological systems.
The relationships found with diseases such as cancer, gastrointestinal and neurological conditions, and metabolic diseases (e.g., Alzheimer's disease and diabetes mellitus) heighten the need to study the post-acute sequelae of this disease in order to better understand the mechanisms and to develop effective treatments. \begin{table} \spacingset{1} \small \caption{ Top 5 Canonical Pathways from Ingenuity Pathway Analysis (IPA). }\label{tab:toppathways} \begin{tabular}{|l|l|l|l|} \hline \textbf{Omics Data} & \textbf{Top Canonical Pathway} & \textbf{P-value} & \textbf{Molecules Selected} \\ \hline RNA Sequencing & 4-hydroxybenzoate Biosynthesis & 2.07E-03 & TAT \\ \hline & 4-hydroxyphenylpyruvate Biosynthesis & 2.07E-03 & TAT \\ \hline & Tyrosine Degradation 1 & 1.03E-02 & TAT \\ \hline & Role of IL-17A in Psoriasis & 2.86E-02 & CCL20 \\ \hline & Fatty Acid Activation & 2.86E-02 & ACSM1 \\ \hline Proteomics & LXR/RXR Activation & 4.14E-11 & \begin{tabular}[c]{@{}l@{}}AGT, ALB, APOA2, APOH, APOM, \\ CLU, PON1, TF\end{tabular} \\ \hline & FXR/RXR Activation & 5.02E-11 & \begin{tabular}[c]{@{}l@{}}AGT, ALB, APOA2, APOH, APOM, \\ CLU, PON1, TF\end{tabular} \\ \hline & \begin{tabular}[c]{@{}l@{}}Acute Phase Response \\ Signaling\end{tabular} & 3.30E-08 & \begin{tabular}[c]{@{}l@{}}AGT, ALB, APOA2, APOH, HRG, \\ SERPINA3, TF\end{tabular} \\ \hline & Atherosclerosis Signaling & 1.06E-07 & \begin{tabular}[c]{@{}l@{}}ALB, APOA2, APOM, CLU, \\ COL18A1, PON1\end{tabular} \\ \hline & \begin{tabular}[c]{@{}l@{}}Clathrin-mediated Endocytosis \\ Signaling\end{tabular} & 1.09E-06 & \begin{tabular}[c]{@{}l@{}}ALB, APOA2, APOM, CLU, \\ PON1, TF\end{tabular} \\ \hline Metabolomics & tRNA Charging & 2.25E-13 & \begin{tabular}[c]{@{}l@{}}L-glutamic acid, L-Phenylalanine, \\ L-Glutamine, Glycine, L-Serine, \\ L-Methionine, L-Valine, L-Isoleucine, \\ L-Threonine, L-Tryptophan, L-Proline\end{tabular} \\ \hline & Glutamate Receptor Signaling & 5.43E-05 & L-glutamic acid, glycine, L-Glutamine \\ \hline
& \begin{tabular}[c]{@{}l@{}}Phenylalanine Degradation IV \\ (Mammalian, via Side Chain)\end{tabular} & 7.24E-05 & \begin{tabular}[c]{@{}l@{}}L-glutamic acid, glycine, L-Glutamine, \\ L-Phenylalanine\end{tabular} \\ \hline & \begin{tabular}[c]{@{}l@{}}Superpathway of Serine and \\ Glycine Biosynthesis I\end{tabular} & 3.28E-04 & L-glutamic acid, glycine, L-serine \\ \hline & $\gamma$-glutamyl Cycle & 4.23E-04 & \begin{tabular}[c]{@{}l@{}}L-glutamic acid, glycine, \\ pyrrolidonecarboxylic acid\end{tabular} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \begin{tabular}{c} \includegraphics[height=5cm,width=17cm]{Plots/ProteinsTop10overlappath.png} \end{tabular} \caption{ Network of overlapping canonical pathways from highly ranked proteins. Nodes refer to pathways and a line connects any two pathways when there are at least two molecules in common between them. The two pathways ``FXR/RXR Activation'' and ``LXR/RXR Activation'' share a large number (eight) of molecules: AGT, ALB, APOA2, APOH, APOM, CLU, PON1 and TF. }\label{fig:protoverlap} \end{figure} \begin{figure} \centering \begin{tabular}{c} \includegraphics[width=0.9\textwidth]{Plots/Rplot.png} \end{tabular} \caption{Comparison of protein levels among COVID-19 patient groups (p-value $< 0.05$, Kruskal-Wallis test). COL18A1 was highly ranked by Deep IDA, and the other 8 proteins are shared by the ``FXR/RXR Activation'' and ``LXR/RXR Activation'' pathways. Protein expression levels for ALB, APOM, and TF are lower in patients with COVID-19 (especially in patients with COVID-19 who were admitted to the ICU). Protein expression levels for AGT and CLU are higher in patients with COVID-19 admitted to the ICU compared to the other groups.} \label{fig:proteinlevels} \end{figure} \clearpage \section{Conclusion} We have proposed a deep learning method, Deep IDA, for joint integrative analysis and classification studies of multi-view data.
Our framework extends the joint association and classification method proposed in \cite{safosida:2021} to model nonlinear relationships among multiple views and among classes within a view. Specifically, we use deep neural networks (DNN) to nonlinearly transform each view, and we construct an optimization problem that takes as input the output from our DNN and learns view-specific projections that result in maximum linear correlation of the transformed views and maximum linear separation within each view. Further, unlike most existing nonlinear methods for integrating data from multiple views, we have proposed a feature ranking approach based on resampling to identify the features contributing most to the dependency structure among the views and the separation of classes within a view. Our framework for feature ranking is general and applicable to other nonlinear methods for multi-view data integration. The proposed algorithm, developed in Python 3, is user-friendly and will be useful in many data integration applications. Through simulation studies, we showed that the proposed method outperforms several other linear and nonlinear methods for integrating data from multiple views, even in high-dimensional scenarios where the sample size is typically smaller than the number of variables. When Deep IDA was applied to proteomics, RNA sequencing, and metabolomics data obtained from individuals with and without COVID-19 who were or were not admitted to the ICU, we identified several molecules that better discriminated the COVID-19 patient groups. We also performed enrichment analysis of the molecules that were highly ranked, and we observed strong pathway, molecular and cellular function, and disease enrichment. The top diseases and disorders significantly enriched in our lists of genes, proteins, and metabolites included cancer, neurological disorders, infectious diseases, and metabolic diseases.
While some of these findings corroborate earlier results, the top-ranked molecules could be further investigated to delineate their impact on COVID-19 status and severity. Our work has some limitations. First, the bootstrap technique proposed is computationally taxing. In our algorithm, we use parallelization to mitigate the computational burden; however, more is needed to make the approach less expensive. Second, the proposed method has focused on binary or categorical outcomes. Future work could consider other outcome types (e.g., continuous and survival). Third, the number (or proportion) of top-ranked features needs to be specified in advance. In our proposed bootstrap method, once we have identified the top-ranked variables, we fit another deep learning model to obtain low-dimensional representations of the data that result in maximum association among the views and separation of classes based on the top-ranked variables, and we use these to obtain test classification accuracy if testing data are available. Alternatively, instead of learning a new model with the top-ranked variables, we could consider using the learned neural network parameters from the $M$ bootstrap implementations to construct $M$ $\mathbf H^d_{test}$s, and then aggregate these (over $M$) to obtain an estimate of the view-specific top-level representations for classification. Future work could compare this alternative with the current approach. In conclusion, we have developed a deep learning method to jointly model nonlinear relationships between data from multiple views and a binary or categorical outcome, while also producing highly-ranked features contributing most to the overall association of the views and the separation of the classes within a view. The encouraging simulation and real data findings, even for scenarios with small to moderate sample sizes, motivate further applications.
\section*{Funding and Acknowledgements} The project described was supported by Award Numbers 5KL2TR002492-04 from the National Center For Advancing Translational Science and 1R35GM142695-01 from the National Institute Of General Medical Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health. ~\\ \textit{Declaration of Conflicting Interests}: The authors declare that there is no conflict of interest. \section*{Data Availability and Software} The data used were obtained from \cite{overmyer2021large}. We provide a Python package, \textit{Deep IDA}, to facilitate the use of our method. Its source code, along with a README file, will be made available via \url{https://github.com/lasandrall/Deep IDA}. \section*{Supplementary Data} In the online Supplementary Materials, we provide proofs of Theorems 1 and 2, and also give more results from the real data analyses. \clearpage \section{Theorems and Proofs} \begin{thm}\label{thm:GEVPmain} Let $\mathbf S_{t}^{d}$ and $\mathbf S_{b}^{d}$ respectively be the total covariance and the between-class covariance for the top-level representations $\mathbf H^d, d=1,\ldots,D$. Let $\mathbf S_{dj}$ be the cross-covariance between top-level representations $d$ and $j$. Assume $\mathbf S_{t}^{d} \succ 0$. Define $\mathcal{M}^d = {\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_b^d {\mathbf S_t^d}^{-\frac{1}{2}}$ and $\mathcal{N}_{dj} = {\mathbf S_t^d}^{-\frac{1}{2}} \mathbf S_{dj} {\mathbf S_t^j}^{-\frac{1}{2}}$.
Then $\mbox{\boldmath{${\Gamma}$}}^d \in \Re^{o_d \times l}$, $l \le \min\{K-1, o_1,\ldots,o_D\}$ in equation (4) of the main text are eigenvectors corresponding respectively to eigenvalues $\mbox{\boldmath{${\Lambda}$}}_{d}=\mathrm{diag}(\lambda_{d_{1}},\ldots,\lambda_{d_{l}})$, $\lambda_{d_{1}} > \cdots > \lambda_{d_{l}}>0$ that iteratively solve the eigensystem problems: \begin{align*} \left(c_1\mathcal{M}^d + c_2\sum_{j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^{{\mbox{\tiny T}}}\mathcal{N}_{dj}^{{\mbox{\tiny T}}}\right) \mbox{\boldmath{${\Gamma}$}}_d &= \mbox{\boldmath{${\Lambda}$}}_d \mbox{\boldmath{${\Gamma}$}}_d, \forall d = 1,...,D \end{align*} where $c_1 = \frac{\rho}{D}$ and $c_2 = \frac{2(1-\rho)}{D(D-1)}$. \end{thm} \textbf{Proof}. Solving the optimization problem is equivalent to iteratively solving the following generalized eigenvalue systems: \begin{align*} \left(c_1\mathcal{M}^1 + c_2\sum_{j=2}^D \mathcal{N}_{1j}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^T\mathcal{N}_{1j}^T\right) \mbox{\boldmath{${\Gamma}$}}_1 &= \mbox{\boldmath{${\Lambda}$}}_1 \mbox{\boldmath{${\Gamma}$}}_1\\ &\vdots\\ \left(c_1\mathcal{M}^d + c_2\sum_{j=1,j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^T\mathcal{N}_{dj}^T\right) \mbox{\boldmath{${\Gamma}$}}_d &= \mbox{\boldmath{${\Lambda}$}}_d \mbox{\boldmath{${\Gamma}$}}_d\\ &\vdots\\ \left(c_1\mathcal{M}^D + c_2\sum_{j=1}^{D-1} \mathcal{N}_{Dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^T\mathcal{N}_{Dj}^T\right) \mbox{\boldmath{${\Gamma}$}}_D &= \mbox{\boldmath{${\Lambda}$}}_D \mbox{\boldmath{${\Gamma}$}}_D \end{align*} where $c_1 = \frac{\rho}{D}$ and $c_2 = \frac{2(1-\rho)}{D(D-1)}$.\\ \begin{proof} The Lagrangian is \begin{small} \begin{eqnarray} L(\mbox{\boldmath{${\Gamma}$}}_1,...,\mbox{\boldmath{${\Gamma}$}}_D,\lambda_1,...,\lambda_D) &=& \rho \frac{1}{D} \sum_{d=1}^D
tr[\mbox{\boldmath{${\Gamma}$}}_d^T\mathcal{M}^d\mbox{\boldmath{${\Gamma}$}}_d] + (1-\rho) \frac{2}{D(D-1)} \sum_{d=1}^D \sum_{j,j\neq d}^D tr[\mbox{\boldmath{${\Gamma}$}}_d^T \mathcal{N}_{dj} \mbox{\boldmath{${\Gamma}$}}_j \mbox{\boldmath{${\Gamma}$}}_j^T \mathcal{N}_{dj}^T \mbox{\boldmath{${\Gamma}$}}_d] - \sum_{d=1}^D \lambda_d (tr[\mbox{\boldmath{${\Gamma}$}}_d^T\mbox{\boldmath{${\Gamma}$}}_d] - l)\nonumber\\ &=& c_1 \sum_{d=1}^D tr[\mbox{\boldmath{${\Gamma}$}}_d^T\mathcal{M}^d\mbox{\boldmath{${\Gamma}$}}_d] + c_2 \sum_{d=1}^D \sum_{j,j\neq d}^D tr[\mbox{\boldmath{${\Gamma}$}}_d^T \mathcal{N}_{dj} \mbox{\boldmath{${\Gamma}$}}_j \mbox{\boldmath{${\Gamma}$}}_j^T \mathcal{N}_{dj}^T \mbox{\boldmath{${\Gamma}$}}_d] - \sum_{d=1}^D \lambda_d (tr[\mbox{\boldmath{${\Gamma}$}}_d^T\mbox{\boldmath{${\Gamma}$}}_d] - l) \end{eqnarray} \end{small} The first-order stationarity condition for $\mbox{\boldmath{${\Gamma}$}}_d (\forall d =1,...,D)$ is \begin{equation}\label{eqn:partial_L} \frac{\partial L(\mbox{\boldmath{${\Gamma}$}}_1,...,\mbox{\boldmath{${\Gamma}$}}_D,\lambda_1,...,\lambda_D)}{\partial \mbox{\boldmath{${\Gamma}$}}_d^T} = 2c_1 \mathcal{M}^d \mbox{\boldmath{${\Gamma}$}}_d + 2c_2 \sum_{j,j\neq d}^D(\mathcal{N}_{dj} \mbox{\boldmath{${\Gamma}$}}_j \mbox{\boldmath{${\Gamma}$}}_j^T \mathcal{N}_{dj}^T) \mbox{\boldmath{${\Gamma}$}}_d -2\lambda_d \mbox{\boldmath{${\Gamma}$}}_d = \textbf{0} \end{equation} Rearranging equation (\ref{eqn:partial_L}), we have \begin{align*} \left(c_1\mathcal{M}^d + c_2\sum_{j,j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^T\mathcal{N}_{dj}^T\right) \mbox{\boldmath{${\Gamma}$}}_d &= \lambda_d \mbox{\boldmath{${\Gamma}$}}_d \end{align*} For $\mbox{\boldmath{${\Gamma}$}}_j,\forall j\neq d$ fixed, the above can be solved for the eigenvalues of $(c_1\mathcal{M}^d + c_2\sum_{j,j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^T\mathcal{N}_{dj}^T)$.
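This per-view eigen-update can be written in a few lines of NumPy. The sketch below is illustrative only (not the authors' implementation); the names \texttt{Ms}, \texttt{Ns}, and \texttt{Gammas}, holding precomputed $\mathcal{M}^d$, $\mathcal{N}_{dj}$, and the current $\mbox{\boldmath{${\Gamma}$}}_j$, are assumptions introduced here:

```python
import numpy as np

def update_gamma(Ms, Ns, Gammas, d, l, c1, c2):
    """One per-view update: with Gamma_j (j != d) fixed, return the top-l
    eigenvectors of c1*M^d + c2*sum_{j!=d} N_dj Gamma_j Gamma_j^T N_dj^T."""
    D = len(Ms)
    A = c1 * Ms[d]
    for j in range(D):
        if j == d:
            continue
        NG = Ns[(d, j)] @ Gammas[j]  # N_dj Gamma_j
        A = A + c2 * (NG @ NG.T)
    w, V = np.linalg.eigh(A)  # symmetric eigendecomposition, ascending order
    return V[:, np.argsort(w)[::-1][:l]]  # eigenvectors of the l largest
```

Cycling this update over $d = 1, \ldots, D$ until the relative change in each $\widehat{\mbox{\boldmath{${\Gamma}$}}}_d$ falls below $\epsilon$ gives the iterative scheme described next.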
Arrange the eigenvalues in decreasing order and denote by $\mbox{\boldmath{${\Lambda}$}}_d \in \mathbf R^{o_d\times o_d}$ the diagonal matrix of those values. For the top $l$ largest eigenvalues, denote the corresponding eigenvectors as $\widehat{\mbox{\boldmath{${\Gamma}$}}}_d = [\gamma_{d,1},...,\gamma_{d,l}]$. Therefore, starting from $d=1$ and following this process, $\widehat{\mbox{\boldmath{${\Gamma}$}}}_1$ is updated; then $\widehat{\mbox{\boldmath{${\Gamma}$}}}_2$ is updated, and so on; finally, $\widehat{\mbox{\boldmath{${\Gamma}$}}}_D$ is updated. We iterate until convergence, which is defined as $\frac{\|\widehat{\mbox{\boldmath{${\Gamma}$}}}_{d,new} - \widehat{\mbox{\boldmath{${\Gamma}$}}}_{d,old}\|_F^2}{\| \widehat{\mbox{\boldmath{${\Gamma}$}}}_{d,old}\|_F^2} < \epsilon$. When convergence is achieved, set $\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d=\widehat{\mbox{\boldmath{${\Gamma}$}}}_d, \forall d=1,...,D$. \end{proof} \begin{thm}\label{thm:loss} For $d$ fixed, let $\eta_{d,1}, \ldots, \eta_{d,l}$, $l \le \min\{K-1, o_1,\dots,o_D\}$ be the largest $l$ eigenvalues of $c_1\mathcal{M}^d + c_2\sum_{j\neq d}^D \mathcal{N}_{dj}\mbox{\boldmath{${\Gamma}$}}_j\mbox{\boldmath{${\Gamma}$}}_j^{{\mbox{\tiny T}}}\mathcal{N}_{dj}^{{\mbox{\tiny T}}}$. Then the solution $\widetilde{f}^d$ to the optimization problem in equation (5) [main text] for view $d$ maximizes \begin{align} \sum_{r=1}^l \eta_{d,r}. \end{align} \end{thm} \textbf{Proof}. Fix $d$ and let $\eta_{d,1},\eta_{d,2},...,\eta_{d,l}$ be the top $l$ eigenvalues of \begin{align*} c_1\mathcal{M}^d + c_2\sum_{j\neq d}^D \mathcal{N}_{dj}\widetilde{\mbox{\boldmath{${\Gamma}$}}}_j\widetilde{\mbox{\boldmath{${\Gamma}$}}}_j^T\mathcal{N}_{dj}^T.
\end{align*} Then, \begin{align*} \sum_{r=1}^l \eta_{d,r} = c_1 tr[\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^T\mathcal{M}^d\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d] + c_2 \sum_{j,j\neq d}^D tr[\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^T \mathcal{N}_{dj} \widetilde{\mbox{\boldmath{${\Gamma}$}}}_j \widetilde{\mbox{\boldmath{${\Gamma}$}}}_j^T \mathcal{N}_{dj}^T \widetilde{\mbox{\boldmath{${\Gamma}$}}}_d] \end{align*} \begin{proof} \begin{align*} &c_1 tr[\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^T\mathcal{M}^d\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d] + c_2 \sum_{j,j\neq d}^D tr[\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^T \mathcal{N}_{dj} \widetilde{\mbox{\boldmath{${\Gamma}$}}}_j \widetilde{\mbox{\boldmath{${\Gamma}$}}}_j^T \mathcal{N}_{dj}^T \widetilde{\mbox{\boldmath{${\Gamma}$}}}_d]\\ &= tr(\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^T (c_1\mathcal{M}^d + c_2\sum_{j,j\neq d}^D \mathcal{N}_{dj}\widetilde{\mbox{\boldmath{${\Gamma}$}}}_j\widetilde{\mbox{\boldmath{${\Gamma}$}}}_j^T\mathcal{N}_{dj}^T) \widetilde{\mbox{\boldmath{${\Gamma}$}}}_d)\\ &= tr(\widetilde{\mbox{\boldmath{${\Gamma}$}}}_d^T \mbox{\boldmath{${\Lambda}$}}_d \widetilde{\mbox{\boldmath{${\Gamma}$}}}_d)\\ &= \sum_{r=1}^l \eta_{d,r} \end{align*} \end{proof} \section{More Results From Real Data Analysis} \subsection{Data preprocessing and application of Deep IDA and competing methods} Of the 128 patients, 125 had both omics and clinical data. We focused on proteomics, RNA-seq, and metabolomics data in our analyses since many lipids were not annotated. We formed a four-class classification problem using COVID-19 and ICU status. Our four groups were: with COVID-19 and not admitted to the ICU (COVID Non-ICU), with COVID-19 and admitted to the ICU (COVID ICU), no COVID-19 and admitted to the ICU (Non-COVID ICU), and no COVID-19 and not admitted to the ICU (Non-COVID Non-ICU).
The frequency distribution of samples in these four groups was: $40\%$ COVID ICU, $40\%$ COVID Non-ICU, $8\%$ Non-COVID Non-ICU, and $12\%$ Non-COVID ICU. The initial dataset contains 18,212 genes, 517 proteins, and 111 metabolomics features. Prior to applying our method, we pre-processed the data as follows. All genes which were missing in our samples were removed from the dataset and 15,106 genes remained. We selected genes for which more than half of the values were non-zero, and we applied a Box-Cox transformation to each gene as the gene data were highly skewed. The transformed data were standardized to have mean zero and variance one. We kept genes with variance less than the 25th percentile. We then used ANOVA on the standardized data to filter out (p-values $> 0.05$) genes with low potential to discriminate among the four groups. For the proteomics and metabolomics data, we standardized each molecule to have mean zero and variance one, pre-screened with ANOVA and filtered out molecules with p-values $> 0.05$. Our final data were $\mathbf{X}^{1} \in \Re^{125 \times 2,734}$ for the gene data, $\mathbf{X}^{2} \in \Re^{125 \times 269}$ for the proteomics data, and $\mathbf{X}^{3} \in \Re^{125 \times 66}$ for the metabolomics data. \begin{figure}[H] \begin{tabular}{c} \centering \includegraphics[height=6cm,width=13cm]{Plots/MetabolitesTop10overlappath.png} \end{tabular} \caption{ Network of overlapping canonical pathways from highly ranked metabolites. Nodes refer to pathways and a line connects any two pathways when there are at least two molecules in common between them. }\label{fig:metabolitesoverlap} \end{figure} \begin{table}[H] \caption{ Top Diseases and Biological Functions from Ingenuity Pathway Analysis (IPA).
}\label{tab:topdiseases} \begin{tabular}{|l|l|l|l|} \hline & Top Diseases and Bio Functions & P-value range & Molecules Selected \\ \hline RNA Sequencing & \begin{tabular}[c]{@{}l@{}}Cancer (such as non-melanoma solid tumor, \\ head and neck tumor)\end{tabular} & 4.96E-02 – 2.74E-05 & 48 \\ \hline & Organismal Injury and Abnormalities & 4.96E-02 – 2.74E-05 & 48 \\ \hline & \begin{tabular}[c]{@{}l@{}}Neurological Disease (such as glioma cancer, \\ brain lesion, neurological deficiency)\end{tabular} & 4.86E-02 – 5.84E-05 & 36 \\ \hline & \begin{tabular}[c]{@{}l@{}}Developmental Disorder (such as intellectual \\ disability with ataxia)\end{tabular} & 4.46E-02 – 3.76E-05 & 16 \\ \hline & Hereditary Disorder (such as familial midline defect) & 4.86E-02 – 2.02E-05 & 16 \\ \hline & & & \\ \hline Proteomics & \begin{tabular}[c]{@{}l@{}}Infectious Diseases (such as Severe COVID-19, \\ COVID-19, infection by SARS coronavirus)\end{tabular} & 1.75E-03 – 8.31E-13 & 19 \\ \hline & \begin{tabular}[c]{@{}l@{}}Inflammatory Response (such as inflammation of \\ organ, degranulation of blood platelets)\end{tabular} & 1.29E-03 – 1.34E-12 & 32 \\ \hline & \begin{tabular}[c]{@{}l@{}}Metabolic Disease (such as amyloidosis, \\ Alzheimer disease, diabetes mellitus)\end{tabular} & 1.47E-03 – 2.48E-12 & 20 \\ \hline & \begin{tabular}[c]{@{}l@{}}Organismal Injury and Abnormalities \\ (such as amyloidosis, tauopathy)\end{tabular} & 1.72E-03 – 2.48E-12 & 39 \\ \hline & \begin{tabular}[c]{@{}l@{}}Neurological Disease (such as tauopathy, progressive \\ encephalopathy, progressive neurological disorder)\end{tabular} & 1.57E-03 – 3.41E-11 & 34 \\ \hline & & & \\ \hline Metabolomics & Cancer & 3.63E-02 – 5.20E-14 & 18 \\ \hline & \begin{tabular}[c]{@{}l@{}}Gastrointestinal Disease (such as digestive system \\ cancer, hepatocellular carcinoma)\end{tabular} & 3.64E-02 – 5.20E-14 & 20 \\ \hline & \begin{tabular}[c]{@{}l@{}}Organismal Injury and Abnormalities \\ (such as digestive system cancer, abdominal
cancer)\end{tabular} & 3.79E-02 – 5.20E-14 & 22 \\ \hline & \begin{tabular}[c]{@{}l@{}}Hepatic System Disease \\ (such as hepatocellular carcinoma, liver lesion)\end{tabular} & 2.91E-02 – 1.66E-11 & 15 \\ \hline & \begin{tabular}[c]{@{}l@{}}Developmental Disorder \\ (such as mucopolysaccharidosis type I, spina bifida)\end{tabular} & 2.44E-02 – 1.83E-09 & 11 \\ \hline \end{tabular} \end{table} \newpage \begin{table} \caption{ Top Molecular and Cellular Functions from Ingenuity Pathway Analysis (IPA). }\label{tab:topfunctions} \begin{tabular}{|l|l|l|l|} \hline & Molecular and Cellular Functions & P-value range & Molecules Selected \\ \hline RNA Sequencing & Cell Death and Survival & 4.46E-02 – 2.00E-03 & 8 \\ \hline & Amino Acid Metabolism & 3.47E-02 – 2.07E-05 & 2 \\ \hline & Cell-to-cell Signaling and Interaction & 4.86E-02 – 2.07E-03 & 10 \\ \hline & Cellular Assembly and Organization & 4.46E-02 – 2.07E-03 & 9 \\ \hline & Cellular Function and Maintenance & 4.86E-02 – 2.07E-03 & 10 \\ \hline & & & \\ \hline Proteomics & Cellular Compromise & 1.29E-03 – 1.34E-12 & 13 \\ \hline & Cellular Movement & 1.65E-03 – 2.19E-09 & 24 \\ \hline & Lipid Metabolism & 1.28E-03 – 2.95E-09 & 15 \\ \hline & Molecular Transport & 1.28E-03 – 2.95E-09 & 15 \\ \hline & Small Molecule Biochemistry & 1.61E-03 – 2.95E-09 & 19 \\ \hline & & & \\ \hline Metabolomics & Amino Acid Metabolism & 3.64E-02 – 3.99E-08 & 9 \\ \hline & Molecular Transport & 3.82E-02 – 3.99E-08 & 17 \\ \hline & Small Molecule Biochemistry & 3.64E-02 – 3.99E-08 & 18 \\ \hline & Cellular Growth and Proliferation & 3.79E-02 – 5.12E-08 & 16 \\ \hline & Cell Cycle & 3.63E-02 – 5.81E-07 & 10 \\ \hline \end{tabular} \end{table} \begin{table} \begin{small} \begin{centering} \caption{\textbf{Linear Simulations} Network structures for all deep learning-based methods. In order to make fair comparisons, for each dataset, the network structure for Deep CCA/Deep GCCA is the same as for the proposed Deep IDA method.
The activation function is Leaky ReLU with parameter 0.1 by default. After activation, batch normalization is also implemented. $-$ indicates not applicable} \label{tab: NetworkStructure} \begin{tabular}{lllllll} \hline \hline Data & Sample size & Feature size & Method & Network structure & Epochs & Batch \\ ~& (Train,Valid,Test) & ($p^1, p^2, p^3)$ & ~ & ~ & per run & size \\ \hline \hline Setting 1 & 540,540,1080 & 1000,1000,- & Deep IDA (+Bootstrap) &Input-512-256-64-10& 50 & 540\\ \hline Setting 1 & 540,540,1080 & 1000,1000,- & Deep CCA &Input-512-256-64-10& 50 & 180\\ \hline Setting 2 & 540,540,1080 & 1000,1000,1000 & Deep IDA (+Bootstrap) &Input-512-256-20& 50 & 540\\ \hline Setting 2 & 540,540,1080 & 1000,1000,1000 & Deep GCCA &Input-512-256-64-20& 200 & 540\\ \hline \hline \end{tabular} \end{centering} \end{small} \end{table} \begin{table} \begin{centering} \begin{scriptsize} \caption{\textbf{Nonlinear Simulations} Network structures for all deep learning-based methods. In order to make fair comparisons, for each dataset, the network structure for Deep CCA/Deep GCCA is the same as for the proposed Deep IDA method. The activation function is Leaky ReLU with parameter 0.1 by default.
After activation, batch normalization is also implemented.} \label{tab: NetworkStructure2} \begin{tabular}{lllllll} \hline \hline Data & Sample size & Feature size & Method & Network structure & Epochs & Batch \\ ~& (Train,Valid,Test) & ($p^1, p^2, p^3)$ & ~ & ~ & per run & size \\ \hline \hline Setting 1 & 350,350,350 & 500,500 & Deep IDA (+Bootstrap) &Input-256*10-64-20& 50 & 350\\ \hline Setting 1 & 350,350,350 & 500,500 & Deep CCA &Input-256*10-64-20& 50 & 350\\ \hline Setting 3b & 5250,5250,5250 & 500,500 & Deep IDA (+Bootstrap) &Input-256-256-256-256-256-256-64-20& 50 & 500\\ \hline Setting 3b & 5250,5250,5250 & 500,500 & Deep CCA &Input-256-256-256-256-256-256-64-20& 50 & 500\\ \hline Setting 4 & 350,350,350 & 2000,2000 & Deep IDA (+Bootstrap) & Input-256-256-256-256-256-256-256-64-20& 50 & 350\\ \hline Setting 4 & 350,350,350 & 2000,2000 & Deep CCA & Input-256-256-256-256-256-256-256-64-20& 50 & 350\\ \hline Setting 5b & 5250,5250,5250 & 2000,2000 & Deep IDA (+Bootstrap) &Input-256-256-256-256-256-256-64-20& 50 & 500\\ \hline Setting 5b & 5250,5250,5250 & 2000,2000 & Deep CCA &Input-256-256-256-256-256-256-64-20& 50 & 500\\ \hline \hline \end{tabular} \end{scriptsize} \end{centering} \end{table} \begin{table} \begin{centering} \caption{\textbf{Real Data Analysis} Network structures for all deep learning-based methods. In order to make fair comparisons, for each dataset, the network structure for Deep CCA/Deep GCCA is the same as for the proposed Deep IDA method. The activation function is Leaky ReLU with parameter 0.1 by default. After activation, batch normalization is also implemented. For Covid-19 data, we select the top 50 features for each view from Bootstrap Deep IDA with input-512-20.
$-$ indicates not applicable} \label{tab: NetworkStructure3} \begin{tabular}{lllllll} \hline \hline Data & Sample size & Feature size & Method & Network structure & Epochs & Batch \\ ~& (Train,Valid,Test) & ($p^1, p^2,p^3)$ & ~ & ~ & per run & size \\ \hline \hline \hline \hline Noisy & 50000,10000, & 784,784,- & Deep CCA &Input-512-256-64-20& 50 & 50000\\ MNIST & 10000& ~ & non-Bootstrap Deep IDA &~& ~ & ~\\ \hline Covid-19 & 74,0,21 & 2734,269,66 & non-Bootstrap Deep IDA &Input-512-20& 20 & 74\\ \hline Covid-19 & 74,0,21 & 2734,269,66& Deep IDA on selected &Input-512-256-20& 20 & 74\\ ~ & ~ & ~ & top 50 features &~& ~ &~\\ \hline Covid-19 & 74,0,21 & 2734,269,66 & Deep IDA on selected &Input-256-64-20& 20 & 74\\ ~ & ~ & ~ & top 10 percent features &~& ~& ~\\ \hline Covid-19 & 74,0,21 & 2734,269,66 & Deep GCCA on &Input-256-20& 150 & 74\\ ~ & ~ & ~& selected top 50 features &~& ~ & ~\\ \hline \hline \end{tabular} \end{centering} \end{table} \end{document}
\section{Introduction} Face recognition is one of the most commonly used techniques in biometric authentication, access control, and video surveillance applications, owing to its advantageous features. In a face recognition system, the individual is subject to varying lighting contrast, brightness, and background. The human face is a three-dimensional object: when this three-dimensional shape is projected onto a two-dimensional surface, as is the case in an image, significant variations can result. The face may also undergo rotations, not only in the plane but also in space, as well as deformations due to facial expressions, and its shape and characteristics change over time.\\ In recent years, a number of methods have been proposed for the recognition of human faces. In spite of the results obtained in this domain, automatic face recognition remains a very difficult problem. Several methods were developed for 2D face recognition; however, 2D recognition has a number of limitations related to the orientation (pose) of the face, lighting, and facial expression. In recent years, 3D face recognition technology has increasingly been discussed as a solution to the problems of 2D face recognition. These methods of 3D face recognition are based on the use of three-dimensional information about the human face in 3D space.
Existing approaches that address the problem of 3D face recognition can be classified into several categories. Among 3D geometric or local approaches, Bronstein et al. [1, 2] propose a new representation based on the isometric nature of the facial surface, and Samir et al. [3, 4] use 2D and 3D facial curves to analyze the facial surface. Among holistic approaches, Heseltine et al. [5] have developed two approaches applying PCA representations to three-dimensional faces, and Cook et al. [6] present a method robust to facial expression based on Log-Gabor models of depth images. Some approaches based on face segmentation can be found in [7, 8, 9, 10, 11, 12].\\ In this work we present an automatic 3D face recognition system based on facial surface analysis using Riemannian geometry. For this we take the following steps:\\ - Detection of the 3D face, where the nose tip is a reference point.\\ - Extraction of iso-geodesic curves using a 3D Fast Marching algorithm.\\ - Computation of the geodesic distance between two iso-geodesic curves using mathematical formulas in a Riemannian metric.\\ The rest of this paper is organized as follows: Section 2 describes the methodology of the proposed method with its stages: reference point detection, geodesic distance computation, and facial curve extraction. Section 3 includes the simulation results and analysis. Section 4 draws the conclusion of this work and possible points for future work.\\ \section{METHODOLOGY} Given a 3D face image from the Shape REtrieval Contest 2008 (SHREC2008) database, our goal is to build an automatic 3D face recognition system based on the computation of the geodesic distance between the reference point (nose tip) and the other points on the 3D facial surface. Our algorithm is divided into four main steps. First: reference point detection; in this paper we have detected the reference point (nose tip) manually.
Second: geodesic distance computation; an effective way to compute the geodesic distance between two points of the facial surface is the Fast Marching method, a numerical algorithm for solving the Eikonal equation. Third: facial curve extraction and computation of the geodesic distance between each pair of curves. Finally: classification; in this step we use three types of classification algorithms: Neural Networks (NN), k-Nearest Neighbor (KNN) and Support Vector Machines (SVM). Figure (1) illustrates the steps of the proposed method in our 3D face recognition system.\\ \begin{figure}[H] \centering \includegraphics[width=4in]{figure1.pdf} \caption{Methodology Architecture} \label{fig_sim} \end{figure} \subsection{Reference Point Detection} The reference point (nose tip) can be detected manually or automatically. Several automatic approaches exist. L. Ballihi et al. developed an automatic algorithm for nose tip detection in a 3D face in [13]. This algorithm is based on two cuts of the facial surface. The first is a transverse cut of the face through its center of mass. The second cut is based on the minimum-depth point of the horizontal curve obtained by the first cut. The output of the second cut is a vertical curve, and the minimum-depth point of this curve is the nose tip of the 3D face. In [14], S. Bahanbin et al. use Gabor filters to automatically detect the nose tip. Another method was used by C. Xu et al. in 2004 [15]; this method computes the effective energy of each neighboring pixel, then determines the mean and variance of each neighborhood and uses an SVM to identify the nose tip. L.H. Anuar et al. [16] use local geometric curvature and point signatures to detect the nose tip region in a 3D face model.\\ In this work we detect the reference point $p_{0}$ (nose tip) manually.
The following figure (Figure 2) summarizes the steps to detect the nose tip of a 3D face in an image of the SHREC2008 database: \begin{figure}[H] \centering \includegraphics[width=3in]{figure2.pdf} \caption{3D face nose end detection steps: (a) 3D face image; (b) Manual nose tip selection; (c) Reference point detection} \label{fig_sim} \end{figure} \subsection{Geodesic Distance} The geodesic distance between two points $p_0$ and $p$ of the 3D face surface is the length of the shortest path between the two points that remains on the facial surface. In the context of computing the geodesic distance, R. Kimmel and J.A. Sethian [17] propose the Fast Marching method as a solution of the Eikonal equation.\\ The Eikonal equation is given as: \begin{equation} \mid \nabla u (x) \mid = F(x); \quad x \in \Omega \end{equation} with:\\ - $\Omega$ an open set in $R^{n}$ with a well-behaved boundary.\\ - $\nabla$ the gradient.\\ - $\mid . \mid$ the Euclidean norm.\\ The Fast Marching method is a numerical method for solving boundary value problems of the Eikonal equation [17, 18, 19]. The algorithm is similar to Dijkstra's algorithm [20] and exploits the fact that information flows only outward from the seeding region.\\ We consider a 3D face surface discretized using a triangular mesh with N vertices. Figure (3) shows a 3D face image of the Shape REtrieval Contest 2008 (SHREC2008) database whose facial surface is discretized into a triangular mesh. \begin{figure}[H] \centering \includegraphics[width=3in]{figure3.pdf} \caption{3D facial surface discretized on triangular mesh of N vertices} \label{fig_sim} \end{figure} The geodesic distance between two points on a surface is calculated as the length of the shortest path connecting the two points.
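To make the shortest-path idea concrete, the sketch below approximates geodesic distances on a triangulated surface by running Dijkstra's algorithm along the mesh edges. This is only a coarse stand-in for the Fast Marching solver used in this work (Fast Marching is more accurate because the propagating front may cross triangle interiors); all function and variable names are illustrative, not taken from the paper.

```python
import heapq
from math import dist

def mesh_geodesic(vertices, triangles, source):
    """Approximate geodesic distances from vertex `source` to every other
    vertex by running Dijkstra along the edges of the triangular mesh."""
    # build an edge adjacency list from the triangle faces
    adj = {i: set() for i in range(len(vertices))}
    for a, b, c in triangles:              # every triangle contributes 3 edges
        adj[a].update((b, c))
        adj[b].update((a, c))
        adj[c].update((a, b))
    d = [float("inf")] * len(vertices)
    d[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > d[u]:                      # stale queue entry, skip it
            continue
        for v in adj[u]:
            alt = du + dist(vertices[u], vertices[v])
            if alt < d[v]:
                d[v] = alt
                heapq.heappush(heap, (alt, v))
    return d
```

On dense face meshes the edge-path approximation converges toward the true geodesic distance; the Fast Marching method achieves higher accuracy at comparable cost by solving the Eikonal equation directly on the triangles.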
Using the Fast Marching algorithm on the triangulated 3D face surface, we can compute the geodesic distance between the reference point $p_0$ and the other points $p$ constituting the facial surface.\\ The geodesic distance $\delta_{p_{0},p}$ between $p_0$ and $p$ is approximated by the following expression: \begin{equation} \delta_{p_{0},p} = \min \gamma (\beta(p_{0},p)) \end{equation} with:\\ - $\beta(p_{0},p)$ a path between $p_0$ and $p$ along the facial surface $S$ of the 3D face.\\ - $\gamma (\beta(p_{0},p))$ the path length.\\ The following figure (Figure 4) shows the steps for determining the geodesic distance using a 3D face image of the SHREC2008 database. \begin{figure}[H] \centering \includegraphics[width=3in]{figure4.pdf} \caption{3D face geodesic distance computes Steps: (a) 3D face image; (b) Reference point detection; (c) Discretization by triangular mesh; (d) Geodesic distance computing} \label{fig_sim} \end{figure} \subsection{Facial Curves} This 3D face recognition method is based on the analysis of facial surfaces through the analysis of facial curves using Riemannian geometry. To extract the curves of a 3D face surface, the first step is to define a real-valued function on this surface [17]. Depending on the extraction strategy, different types of facial curves can be obtained: 1- Iso-depth curves: these curves are obtained by the intersection of the 3D face surface with parallel planes perpendicular to the viewing direction; the depth curves are located at equal values of $z$ [17]. 2- Iso-radius curves: these curves are determined by the intersection of the facial surface with spheres whose center is the reference point of the 3D face (nose tip) and whose radius varies [13]. 3- Iso-geodesic curves: these are defined as the loci of all points on the facial surface having the same geodesic distance to the chosen reference point (in our case the nose tip).
The geodesic distance between two points on a surface is the length of the shortest path between these two points along the surface [13, 17].\\ In this work, we represent the 3D human face surface by a collection of iso-geodesic curves. To extract the iso-geodesic curves we use the Fast Marching algorithm as a solution of the Eikonal equation [19]. Figure (5) presents the steps of extracting the iso-geodesic curves in some 3D face images of the SHREC2008 database. \begin{figure}[H] \centering \includegraphics[width=3in]{figure5.pdf} \caption{Extracting facial curves steps: (a) Pre-treated face image, (b) Detection of the reference point, (c) Compute of geodesic distance, (d) Facial curves extraction} \label{fig_sim} \end{figure} Given two points on a face surface $S$ (the reference point $p_0$ and a point $p$), the geodesic distance between $p_0$ and $p$ is defined as the arc length of the shortest path between these two points along the surface and is denoted by a Geodesic Distance Function (GDF), which is a continuous function on the facial surface: \begin{equation} F(p_{0},p) = k; \quad k \in [0, + \infty[ \quad \mathrm{and} \quad (p_{0},p) \in S \end{equation} We can therefore define the facial curves by: \begin{equation} c_k = \{ p \in S \mid F(p_{0},p) = k \} \subset S; \quad k \in [0, + \infty[ \end{equation} The function $F$ gives the geodesic distance between $p_0$ and $p$, i.e., the length of the shortest path between these two points while remaining on the surface $S$.\\ This definition allows us to distinguish three cases for $c_k$ according to the value of $k$:\\ \begin{itemize}\setlength{\itemsep}{4mm} \item If $k=0$ then $c_k$ reduces to the reference point $p_0$ : $c_k = \{ p_0 \}$. \item If $k\rightarrow\infty$ then $c_k$ is empty : $c_k=\emptyset$. \item If $0<k<\infty$ then $c_k$ is the level curve of $F$ at level $k$ on $S$ : $c_k =\{p\in S \mid F(p_0,p) = k \}$.
\end{itemize} \vspace{0.5cm} \subsection{Riemannian analysis of facial curves} To analyze the facial surfaces, we analyze the iso-geodesic curves that characterize these 3D face surfaces and compute a geodesic distance between them on a manifold that depends on the Riemannian metric. To analyze the curve shape, we use the parameterization by the mathematical function SRVF (Square Root Velocity Function) [30, 31].\\ Let a parameterized closed curve be denoted as $\beta: I \rightarrow R^3$, for an interval $I \equiv [0, 2\pi]$; $\beta$ is represented by its SRVF $q: I \rightarrow R^3$ defined as follows: \begin{equation}\label{notre} q(s)= \frac{\beta'(s)}{\sqrt{\parallel\beta'(s)\parallel}}= \frac{\frac{d\beta(s)}{ds}}{\sqrt{\parallel\frac{d\beta(s)}{ds}\parallel}} \in R^3 \end{equation} Where, \begin{itemize}\setlength{\itemsep}{4mm} \item $s \in I \equiv [0, 2\pi]$. \item $\|.\|$ is the standard Euclidean norm in $R^3$. \item $\|q(s)\|$ is the square root of the instantaneous speed on the curve $\beta$. \item $\frac{q(s)}{\|q(s)\|}$ is the instantaneous direction at the point $s \in [0, 2\pi]$ along the curve. \end{itemize} \vspace{0.5cm} Thus, the curve $\beta$ can be recovered up to a translation, using: \begin{equation}\label{notre} \beta(s)= \int_{0}^{s} q(t)\|q(t)\| \, \mathrm{d}t \end{equation} We define the set of closed curves $\beta$ in $R^3$ by: \begin{equation}\label{notre} C=\{q : S^1 \rightarrow R^3 \mid \int_{S^1}q(t)\parallel q(t)\parallel dt=0 \} \subset L(S^1,R^3) \end{equation} Where, \begin{itemize}\setlength{\itemsep}{4mm} \item $L(S^1,R^3)$ denotes the set of all integrable functions from $S^1$ to $R^3$. \item $\int_{S^1}q(t)\parallel q(t)\parallel dt$ denotes the total displacement in $R^3$ while moving from the origin of the curve to its end; when $\int_{S^1}q(t)\parallel q(t)\parallel dt=0$ the curve is closed. \end{itemize} \vspace{0.5cm} The set of 3D closed curves is thus a nonlinear manifold in the Hilbert space of such functions.
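A discrete version of the SRVF above can be sketched as follows for a uniformly sampled curve; the finite-difference derivative and the zero-speed guard are implementation choices of ours, not taken from the paper.

```python
import numpy as np

def srvf(beta):
    """Discrete Square Root Velocity Function of a sampled curve.

    beta: (n, 3) array of curve points, assumed uniformly sampled in s.
    Returns q with q(s) = beta'(s) / sqrt(||beta'(s)||).
    """
    beta = np.asarray(beta, dtype=float)
    deriv = np.gradient(beta, axis=0)       # finite-difference beta'(s)
    speed = np.linalg.norm(deriv, axis=1)   # instantaneous speed ||beta'(s)||
    speed = np.maximum(speed, 1e-12)        # guard against zero-speed samples
    return deriv / np.sqrt(speed)[:, None]
```

For a unit-speed curve the SRVF coincides with the velocity itself, which makes the representation convenient for comparing curve shapes.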
To analyze the shapes of the iso-geodesic curves and compute geodesic distances between them, it is important to understand the vectors of their tangent spaces and to impose a Riemannian structure. We equip the space of closed curves with a Riemannian structure using the inner product defined as follows: \begin{equation}\label{notre} <f,g>= \int_{0}^{1} (f(s),g(s)) ds \end{equation} Here, $f$ and $g$ are two vectors in the tangent space $T_v(C)$. We can also define $T_v(C)$: \begin{equation}\label{notre} T_v(C)= \{ f:S^1 \rightarrow R^3 \mid <f(s),h(s)>=0, \quad h \in N_v(C) \} \end{equation} $N_v(C)$ is the space of vectors normal to the facial curve. After this mathematical representation of the iso-geodesic curves using a Riemannian metric, the metric should be invariant to certain transformations (translation, rotation, scale) [30]. The question to ask is how to compute the geodesic distance between two closed curves. To answer this question, we use the approach introduced by Klassen in 2007 [31]. This method uses path-straightening flows to find a geodesic between two shapes.\\ To compare two facial surfaces, we simply compare pairs of closed curves of these two facial surfaces. Let $c_1$ and $c_2$ be two facial curves (iso-geodesic curves), and let $q_1$ and $q_2$ be their respective Square Root Velocity Functions ($SRVF$).
The geodesic distance between $c_1$ and $c_2$ is computed by: \begin{equation}\label{notre} d(q_1,q_2)= \int_0^1 \sqrt{<\varepsilon'(t),\varepsilon'(t)>} dt \end{equation} where $\varepsilon$ is a geodesic path determined by the path-straightening method: this method connects the two curves by an arbitrary path $\alpha$ and then updates the path repeatedly in the negative direction of the gradient of the energy given by: \begin{equation}\label{notre} E[\alpha] = \frac{1}{2} \int_0^1 <\frac{d}{ds}\alpha(t),\frac{d}{ds}\alpha(t)> dt \end{equation} It has been shown that the critical points of the energy $E[\alpha]$ are geodesic paths in $S$ [31, 13].\\ The facial surfaces $S_1$ and $S_2$ are represented by their collections of iso-geodesic curves, respectively $\{c_k^1; k \in [0,k_0]\}$ and $\{c_k^2; k \in [0,k_0]\}$, where $k$ is the geodesic distance between the reference point $p_0$ and a point $p$ of the facial surface $S$. The vectors of geodesic distances computed between pairs of facial curves are used as input vectors of the classification algorithms of our automatic facial recognition system.\\ To realize our 3D face recognition system, we use three classification algorithms: Neural Networks (NN), k-Nearest Neighbor (KNN) and Support Vector Machines (SVM). \section{Simulation Results} In this section we carry out a series of simulations to evaluate the effectiveness of the proposed approach. These results were obtained on the SHREC 2008 database. This database contains a total of 427 scans of 61 subjects (45 males and 16 females); for each of these 61 subjects there are 7 different scans, namely two “frontal”, one “look-up”, one “look-down”, one “smile”, one “laugh” and one “random expression” [21, 22].\\ In this paper, the features were extracted using Iso-Geodesic Curves (I-GC).
This method is based on two principal steps: iso-geodesic curve extraction using the Fast Marching algorithm as a solution of the Eikonal equation, and computation of the length of the geodesic path between each facial curve and its corresponding curve using a Riemannian framework.\\ The following figure (Figure 6) summarizes the first experimental results of our simulation: \begin{figure}[H] \centering \includegraphics[width=3.4in]{figure6.pdf} \caption{Recognition rate in terms of number of facial curves used to represent the human face of SHREC 2008 database for each classification algorithm (NN, KNN and SVM)} \label{fig_sim} \end{figure} Figure 6 shows the recognition rate as a function of the number of facial curves used to represent the 3D human faces in our systems. Based on this figure, the images of the SHREC 2008 database are represented using five facial curves.\\ Given a candidate 3D face image ($Img$) of the SHREC 2008 database, $Img$ is represented using five iso-geodesic curves. The distance between two 3D face images is defined as the sum of the distances between all pairs of corresponding facial curves in the two face images. The feature vector is then formed by the geodesic distances computed on all the curves, and its dimension is equal to the number of iso-geodesic curves used (five curves for the SHREC 2008 database). These vectors are used as input of the classification algorithms of our 3D face recognition systems. \begin{figure}[H] \centering \includegraphics[width=3in]{figure7.pdf} \caption{Recognition Rate for SHREC 2008 images using three classification algorithms (NN, KNN and SVM)} \label{fig_sim} \end{figure} Figure 7 shows the recognition rate for SHREC 2008 images using three classification algorithms: Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM).
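As a minimal illustration of the classification step, the toy k-Nearest-Neighbor classifier below operates on geodesic-distance feature vectors (one entry per iso-geodesic curve). It is a simplified stand-in for the NN/KNN/SVM implementations actually evaluated, and all names are of our choosing.

```python
from collections import Counter
from math import dist

def knn_classify(train_feats, train_labels, query, k=3):
    """Label `query` (a geodesic-distance feature vector, one entry per
    iso-geodesic curve) by majority vote among its k nearest training
    vectors under the Euclidean distance."""
    nearest = sorted(range(len(train_feats)),
                     key=lambda i: dist(train_feats[i], query))[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With five iso-geodesic curves per face, each gallery face contributes one five-dimensional feature vector, and a probe scan is assigned the identity of its closest neighbors.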
The best recognition rate was obtained using Support Vector Machines (SVM) as the classification algorithm, with a recognition rate equal to $98.9\%$.\\ To conclude this series of results, a summary table (Table 1) compares the performance of our face authentication system with the performance obtained in other work systems. \begin{table}[!h] \centering \begin{tabular}{|l|p{1.3cm}|p{2.7cm}|c|p{1.3cm}|} \hline \bf Date & \bf Reference & \bf Method & \bf Database & \bf Reported performance\\ \hline \hline 2004 & Haar et al [26] & Facial Contour Curves & SHREC’08 & 91.1\% \\ \hline 2007 & Feng et al [24] & Euclidean Integral Invariants Signature (Closed 3D Curves) & FRGCv2 & 95.0\% \\ \hline 2007 & Samir et al [23] & Planar Curves Levels & Notre Dame & 90.4\% \\ \hline 2007 & Samir et al [23] & Planar Curves Levels & FSU & 92.0\% \\ \hline 2008 & Daoudi et al [25] & Elastic Deformation Of Facial Surfaces (open paths) & FSU & 92.0\% \\ \hline 2015 & Ahdid et al [29] & GD+LDA+SVM & SHREC’08 & 93.2\% \\ \hline 2015 & Ahdid et al [29] & GD+PCA+SVM & SHREC’08 & 95.3\% \\ \hline \bf 2016 & \bf Our approach & \bf Iso-Geodesic Curves + SVM & \bf SHREC’08 & \bf 98.9\% \\ \hline \end{tabular} \caption{Comparison of our method with the performance obtained in other work systems} \end{table} We can see that our automatic 3D face recognition system outperforms the other reported systems in this comparison. Our goal was to improve the 3D face recognition system, and based on these results we consider that this goal has been achieved. \section{Conclusion} In this work, we have presented a 3D face recognition system based on the computation of the geodesic distance between the reference point and the other points on the 3D face surface. This method represents a face surface as a collection of iso-geodesic curves and computes a geodesic distance between each pair of these facial curves.
For the classification step we have implemented algorithms such as Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). Simulation results show that the best recognition rate ($98.9\%$) is obtained using the SVM classification algorithm.
\subsection*{Abstract} This paper sets the groundwork for the consideration of families of recursively defined polynomials and rational functions capable of describing the Bernoulli numbers. These families of functions arise from various recursive definitions of the Bernoulli numbers. The derivation of these recursive definitions is shown here using the original application of the Bernoulli numbers: sums of powers of positive integers, i.e., Faulhaber's problem. Since two of the three recursive forms of the Bernoulli numbers shown here are readily available in literature and simple to derive, this paper focuses on the development of the third, non-linear recursive definition arising from a derivation of Faulhaber's formula. This definition is of particular interest as it shows an example of an equivalence between linear and non-linear recursive expressions. Part II of this paper will follow up with an in-depth look at this non-linear definition and the conversion of the linear definitions into polynomials and rational functions. \newpage To derive expressions for the Bernoulli numbers, we return to the original context under which the Bernoulli numbers were defined: sums of powers of positive integers, or power sums, which take the form: $$\sum_{i=1}^{n} i^m,~~~ m \in \mathbb{Z}, m \ge 0$$ Jakob Bernoulli gave the formula for these sums as: \medskip $$\sum_{i=1}^n i^m = \frac{1}{m+1}n^{m+1} + \frac{1}{2} n^m + \sum_{k=2}^m \frac{m!}{k!(m-k+1)!} B_k~n^{m-k+1}$$ \medskip where $B_k$ represents the $k^{th}$ Bernoulli number. In the MAA \textit{Convergence} article ``Sums of Powers of Positive Integers," Janet Beery lists the works of several mathematicians on these sums, including both Bernoulli and Faulhaber \cite{Beery}. In her paper, Beery notes Bernoulli's derivation of these sums stemmed from the work of Pierre de Fermat, who used triangular numbers of various orders to extract formulas for the sums.
Similarly, Blaise Pascal used the binomial theorem as an entry point when considering these sums. \medskip In the above cases, Fermat's triangular numbers and Pascal's binomials served the same purpose: they provided a polynomial equation of arbitrary degree from which to construct formulas for power sums. Alternatively, however, Faulhaber's formula can be derived without a polynomial family by noting a recurrence relation between sums of successive powers. Using this method, Faulhaber's formula can be obtained along with, surprisingly, a definition for the Bernoulli numbers unique from that given by Bernoulli. This relation can be discovered by expanding and rearranging the terms of a sum of arbitrary degree. Observe the following\footnote{The resulting recursive relationship shown here is equivalent to a relationship noted by the ancient mathematician Abu Ali al-Hasan ibn al Hasan ibn al Haytham. His consideration of these sums is also detailed in Beery's article.}: \begin{align*} \sum_{i=1}^{n} i^{m} &= &&1^{m} + 2^{m} + 3^{m} + \cdots + n^{m}\\ &= &&1\cdot 1^{m-1} + 2\cdot 2^{m-1} + 3\cdot 3^{m-1} + \cdots + n\cdot n^{m-1}\\\\ &= &&(1^{m-1}) + (2^{m-1} + 2^{m-1}) + (3^{m-1}+ 3^{m-1} + 3^{m-1}) + \cdots + \\ & &&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(n^{m-1} + \cdots + n^{m-1})\\\\ &= &&( 1^{m-1} + 2^{m-1} + 3^{m-1} + \cdots + n^{m-1})~+ \\ & & &( 2^{m-1} + 3^{m-1} + \cdots + n^{m-1})~+ \\ & & & \vdots \\ & & & n^{m-1}\\ &= &&\sum_{i=1}^{n} \sum_{j=i}^{n} j^{m-1} = \sum_{i=1}^{n} \bigg( \sum_{j=1}^{n} j^{m-1} - \sum_{j=1}^{i-1} j^{m-1} \bigg) \end{align*} $$\implies \sum_{i=1}^{n} i^m = \sum_{i=1}^{n} \bigg( \sum_{j=1}^{n} j^{m-1} - \sum_{j=1}^{i-1} j^{m-1} \bigg)$$ \medskip Now suppose the function $S_k (n)$ gives the result of the sum $\sum_{i=1}^{n} i^k$. 
Without knowing anything of the nature of the family of the $S_k$ functions, it can be seen from the above that: $$\sum_{i=1}^{n} i^m = \sum_{i=1}^{n} \bigg( \sum_{j=1}^{n} j^{m-1} - \sum_{j=1}^{i-1} j^{m-1} \bigg) = \sum_{i=1}^{n} \bigg( S_{m-1}(n) - S_{m-1}(i-1) \bigg)$$ $$\implies \sum_{i=1}^{n} i^m = n S_{m-1}(n) - \sum_{i=1}^{n} S_{m-1}(i-1)$$ \medskip Knowing that the $S_k$ functions are truly polynomials, it can easily be seen from the first term in the above equation that the degree of the polynomials will increase with $m$. The second term of the above, $\sum_{i=1}^{n} S_{m-1}(i-1)$, is where the complexity of these sums and the Bernoulli numbers come in along with the inevitable introduction of the binomial theorem, which strongly flavors these sums and the resulting Bernoulli numbers. \medskip Ignoring Faulhaber's formula for the time being, suppose the formula for these sums was unknown and in need of discovery. From discrete mathematics, the formulas for $m = 0, 1, 2, 3$ are readily available. From these formulas, it can be hypothesized that the $S_k$ functions take the form of polynomials over the rationals of degree $k+1$. Let these polynomials be denoted using the following notation: $$S_k(n) = \sum_{i=1}^{n} i^k = a_{k,k+1} n^{k+1} + a_{k,k} n^k + a_{k,k-1} n^{k-1} + \cdots + a_{k,1} n^{1},~~~~ a_{i,j} \in \mathbb{Q}$$ where the first subscript of the coefficients denotes the order of the sum, and the second the exponent of the term it multiplies. Note no constant term was included in these polynomials; its absence is easily justified by an empty sum with $n = 0$. While a simpler, high-level proof could be used to prove these sums are described by polynomials, an in-depth proof by strong induction can be used to acquire a definition for the coefficients. 
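Before proceeding to the induction proof, the recurrence $\sum_{i=1}^{n} i^m = n\,S_{m-1}(n) - \sum_{i=1}^{n} S_{m-1}(i-1)$ derived above is easy to sanity-check numerically; the brute-force sketch below (with names of our own choosing) does so for small $m$ and $n$.

```python
def S(k, n):
    """Brute-force power sum S_k(n) = sum_{i=1}^{n} i^k."""
    return sum(i ** k for i in range(1, n + 1))

# verify  sum_{i=1}^{n} i^m  ==  n * S_{m-1}(n) - sum_{i=1}^{n} S_{m-1}(i-1)
for m in range(1, 8):
    for n in range(0, 13):
        lhs = S(m, n)
        rhs = n * S(m - 1, n) - sum(S(m - 1, i - 1) for i in range(1, n + 1))
        assert lhs == rhs, (m, n)
```

The empty sum $n = 0$ passes as well, consistent with the absence of a constant term noted above.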
\medskip \medskip \textbf{Proof of Polynomial Closed Form} \medskip For the base case, note by definition: $$S_{0}(n) = \sum_{i=1}^{n} 1 = \sum_{i=1}^{n} i^0 = n$$ \medskip For the inductive assumption, for $j = 0, 1, ..., k$ let: $$S_j(n) = \sum_{i=1}^{n} i^j = a_{j,j+1} n^{j+1} + a_{j,j} n^j + a_{j,j-1} n^{j-1} + \cdots + a_{j,1} n^{1}, a_{i,j} \in \mathbb{Q}$$ \medskip Using the previously stated recursive formula, the $(k+1)^{th}$ sum is given by: $$\sum_{i=1}^{n} i^{k+1} = n S_{k}(n) - \sum_{i=1}^{n} S_{k}(i-1)$$ \newpage For the moment, consider only the second term in the above equation. Under the inductive assumption: $$\sum_{i=1}^{n} S_{k}(i-1) = \sum_{i=1}^{n} \big( a_{k,k+1} (i-1)^{k+1} + a_{k,k} (i-1)^{k}+ \cdots + a_{k,1} (i-1)^1\big)$$ \medskip Expanding each of the powers of $(i-1)$ with the binomial theorem\footnote{Alternatively, the limits of summation can be changed by using a new index $j = (i-1)$; however, this route is more difficult in the end.}: \begin{align*} \sum_{i=1}^{n} S_{k}(i-1) &= \sum_{i=1}^{n} \bigg( a_{k,k+1} (i-1)^{k+1} + a_{k,k} (i-1)^{k}+ \cdots + a_{k,1} (i-1)^1\bigg)\\\\ &= \sum_{i=1}^{n} \bigg[~ a_{k,k+1} \bigg( \binom{k+1}{k+1}i^{k+1}(-1)^{0} + \cdots + \binom{k+1}{0}i^{0}(-1)^{k+1}\bigg)\\\\ &~~~~ + a_{k,k} \bigg( \binom{k}{k}i^{k}(-1)^{0} +\binom{k}{k-1}i^{k-1}(-1)^{1} + \cdots + \binom{k}{0}i^{0}(-1)^{k}\bigg)\\ &~~~~~ \vdots\\ &~~~~ + a_{k,2} \bigg( \binom{2}{2}i^{2}(-1)^{0} + \binom{2}{1}i^{1}(-1)^{1} + \binom{2}{0}i^{0}(-1)^{2}\bigg)\\\\ &~~~~ + a_{k,1} \bigg( \binom{1}{1}i^{1}(-1)^{0} + \binom{1}{0}i^{0}(-1)^{1}\bigg) \bigg]\\ \end{align*} Grouping terms according to powers of $i$: \begin{align*} \sum_{i=1}^{n} &S_{k}(i-1) = \sum_{i=1}^{n} \bigg[~ i^{k+1}\cdot a_{k,k+1} \binom{k+1}{k+1}(-1)^{0} \\\\ &+ i^{k} \bigg( a_{k,k+1}\binom{k+1}{k}(-1)^{1} + a_{k, k}\binom{k}{k}(-1)^{0}\bigg)\\\\ &+ i^{k-1} \bigg( a_{k,k+1}\binom{k+1}{k-1}(-1)^{2} + a_{k,k}\binom{k}{k-1}(-1)^{1} + 
a_{k,k-1}\binom{k-1}{k-1}(-1)^{0}\bigg)\\\\[-12pt] &~~~ \vdots \\\\[-12pt] & + i^{1} \bigg( a_{k,k+1}\binom{k+1}{1}(-1)^{k} + \cdots + a_{k, 1}\binom{1}{1}(-1)^{0}\bigg)\\\\ & + i^{0} \bigg( a_{k,k+1}\binom{k+1}{0}(-1)^{k+1} + \cdots + a_{k, 1}\binom{1}{0}(-1)^{1}\bigg) \bigg] \end{align*} For ease of communication, let the coefficients for each $i^j$ with $j > 0$ be denoted as follows: $$\alpha_{k,j} = a_{k,k+1}\binom{k+1}{j}(-1)^{k+1-j} + a_{k,k}\binom{k}{j}(-1)^{k-j} + \cdots + a_{k,j}\binom{j}{j}(-1)^{0}$$ For $j = 0$, a slightly altered formula is necessary in which the $\binom{0}{0}$ term is absent: $$\alpha_{k,0} = a_{k,k+1}\binom{k+1}{0}(-1)^{k+1} + a_{k,k}\binom{k}{0}(-1)^{k} + \cdots + a_{k,1}\binom{1}{0}(-1)^{1}$$ \pagebreak Returning to the derivation with these notational simplifications: \begin{align*} \sum_{i=1}^{n} S_{k}(i-1) &= \sum_{i=1}^{n} \big(~ \alpha_{k,k+1} \cdot i^{k+1} + \alpha_{k,k} \cdot i^{k} + \cdots \alpha_{k,1} \cdot i^{1} + \alpha_{k,0} \cdot i^{0} \big)\\ &= \alpha_{k,k+1} \sum_{i=1}^{n} i^{k+1} + \alpha_{k,k} \cdot S_{k}(n) + \cdots + \alpha_{k,0} \cdot S_{0}(n) \end{align*} Substituting this back into the equation for $S_{k+1}$: \begin{align*} S_{k+1}(n) &= \sum_{i=1}^{n} i^{k+1} = n S_{k}(n) - \sum_{i=1}^{n} S_{k}(i-1)\\ &= n S_{k}(n) - \bigg(~\alpha_{k,k+1} \sum_{i=1}^{n} i^{k+1} + \alpha_{k,k} \cdot S_{k}(n) + \cdots + \alpha_{k,0} \cdot S_{0}(n) ~\bigg) \end{align*} Solving for the $(k+1)^{th}$ sum: $$\implies (1 + \alpha_{k,k+1})~\sum_{i=1}^{n} i^{k+1} = n S_{k}(n) - \big(~\alpha_{k,k} \cdot S_{k}(n) + \cdots + \alpha_{k,0} \cdot S_{0}(n) ~\big)$$ $$\implies \sum_{i=1}^{n} i^{k+1} = \frac{1}{(1 + \alpha_{k,k+1})} \bigg[~n S_{k}(n) - \big(~\alpha_{k,k} \cdot S_{k}(n) + \cdots + \alpha_{k,0} \cdot S_{0}(n) ~\big) \bigg]$$ At this point, provided $\alpha_{k,k+1} \ne -1$ to prevent division by zero, the polynomial status of the $S_k$'s has been proven. 
However, further manipulation is needed to obtain a definition for the polynomial coefficients, the $a_{i,j}$'s. \medskip Expanding each of the $S_i$'s and grouping terms according to powers of $n$: \begin{align*} \sum_{i=1}^{n} i^{k+1} &= \frac{1}{(1 + \alpha_{k,k+1})} \bigg[~n~\big(a_{k,k+1} \cdot n^{k+1} + a_{k,k} \cdot n^{k} + \cdots + a_{k,1} \cdot n^{1}\big) \\\\ &~~~~~~~~ - \alpha_{k,k} \big(a_{k,k+1} \cdot n^{k+1} + a_{k,k} \cdot n^{k} + \cdots + a_{k,1} \cdot n^{1}\big)\\\\ &~~~~~~~~ - \alpha_{k,k-1} \big(a_{k-1,k} \cdot n^{k} + a_{k-1,k-1} \cdot n^{k-1} + \cdots + a_{k-1,1} \cdot n^{1}\big)\\\\[-12pt] &~~~~~~~~ \vdots \\\\[-12pt] &~~~~~~~~ - \alpha_{k,1} \big( a_{1,2} n^2 + a_{1,1} n^1\big) - \alpha_{k,0} \big( a_{0,1} n^1 \big) ~\bigg] \end{align*} \begin{align*} \sum_{i=1}^{n} i^{k+1} = \frac{1}{(1 + \alpha_{k,k+1})} \bigg[&~ n^{k+2} \cdot a_{k,k+1} ~+ \\ &n^{k+1} ~\big(a_{k,k} - \alpha_{k,k} \cdot a_{k,k+1}\big) ~+ \\ &n^{k} ~\big(a_{k,k-1} - \alpha_{k,k} \cdot a_{k,k} - \alpha_{k,k-1}\cdot a_{k-1,k}~\big) ~+ \\ & \vdots \\ &n^{2} ~\big(a_{k,1} - \alpha_{k,k} \cdot a_{k,2} - \alpha_{k,k-1} \cdot a_{k-1,2} - \cdots -\alpha_{k,1} \cdot a_{1,2} \big) ~- \\ &n^{1} ~\big(~\alpha_{k,k} \cdot a_{k,1} + \alpha_{k,k-1} \cdot a_{k-1,1} + \cdots + \alpha_{k,0} \cdot a_{0,1}~\big)~\bigg] \end{align*} With the above, the proof is now complete, and a generalized definition for the coefficients of the polynomials can be observed to be: \begin{multline*} a_{m,k} = \frac{1}{1 + \alpha_{m-1,m}}\bigg[~a_{m-1,k-1} - \big( \alpha_{m-1,m-1} \cdot a_{m-1,k} ~+ \\ \alpha_{m-1,m-2} \cdot a_{m-2,k} + \cdots + \alpha_{m-1,k-1} \cdot a_{k-1,k} \big)~\bigg] \end{multline*} This indexing, however, is not actually the most natural setup. Given that the coefficients' recursive definitions become stronger the lower the order of the term, it is more convenient to index the coefficients by offsetting from the leading term, or rather its consort, $n^m$.
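The generalized coefficient recursion just observed can be verified computationally. The sketch below (all names our own) builds the coefficient tables of the $S_k$ polynomials with exact rational arithmetic, using $\alpha_{k,j} = \sum_{t} a_{k,t}\binom{t}{j}(-1)^{t-j}$ as defined earlier, and checks the resulting polynomials against brute-force power sums.

```python
from fractions import Fraction
from math import comb

def next_coeffs(A):
    """Given A[k] = {t: a_{k,t}} for k = 0..K, return the coefficients
    a_{K+1,t} of S_{K+1} via the recursion from the induction step."""
    K = len(A) - 1
    a = A[K]
    # alpha_{K,j} = sum_{t>=max(j,1)}^{K+1} a_{K,t} * C(t,j) * (-1)^(t-j)
    # (for j = 0 the sum starts at t = 1, matching the altered formula)
    alpha = {j: sum(a.get(t, Fraction(0)) * comb(t, j) * (-1) ** (t - j)
                    for t in range(max(j, 1), K + 2))
             for j in range(0, K + 2)}
    denom = 1 + alpha[K + 1]
    new = {K + 2: a[K + 1] / denom}            # leading coefficient
    for t in range(2, K + 2):                  # middle coefficients
        s = sum(alpha[j] * A[j].get(t, Fraction(0))
                for j in range(t - 1, K + 1))
        new[t] = (a.get(t - 1, Fraction(0)) - s) / denom
    new[1] = -sum(alpha[j] * A[j][1] for j in range(0, K + 1)) / denom
    return new

A = [{1: Fraction(1)}]                         # base case S_0(n) = n
for _ in range(5):
    A.append(next_coeffs(A))

def S_poly(m, n):
    return sum(c * n ** t for t, c in A[m].items())

assert all(S_poly(m, n) == sum(i ** m for i in range(1, n + 1))
           for m in range(6) for n in range(8))
```

The computed tables reproduce, for example, $S_2(n) = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}$, confirming the recursion before the closed forms are pursued.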
Put more mathematically\footnote{Note this offset-style indexing corresponds to the alternative indexing starting at $-1$ sometimes used when dealing with the Bernoulli numbers.}: \begin{multline*} a_{m,m-x} = \frac{1}{1 + \alpha_{m-1,m}}\bigg[~a_{m-1,m-x-1} - \big( \alpha_{m-1,m-1} \cdot a_{m-1,m-x} ~+ \\ \alpha_{m-1,m-2} \cdot a_{m-2,m-x} + \cdots + \alpha_{m-1,(m-1)-x} \cdot a_{(m-1) -x,m-x} \big)~\bigg] \end{multline*} Using this indexing, the above definition is valid for $ 0 \le x \le m-2$, i.e., all but the coefficients on the first ($x = -1, k = m+1$) and last ($x = m - 1, k = 1$) terms, whose definitions are given by: $$a_{m,m+1} = \frac{a_{m-1,m}}{1+\alpha_{m-1,m}}$$ $$a_{m,1} = \frac{-1}{1 + \alpha_{m-1,m}}\bigg[~\alpha_{m-1,m-1} \cdot a_{m-1,1} + \alpha_{m-1,m-2} \cdot a_{m-2,1} + \cdots + \alpha_{m-1,0} \cdot a_{0,1}~\bigg]$$ Using the above, closed forms for the coefficients can be obtained. For the sake of some sense of brevity, the process used to do so will be described below but not shown in detail. \medskip To obtain closed forms for the coefficients, note that the coefficients are multivariate: they are functions of both $x$ and $m$. Additionally, they are strongly recursive. However, holding $x$ constant creates sequences solely in terms of $m$. The base cases for each of these offset-sequences are attainable thanks to the special definitions for $\alpha_{m,0}$ and $a_{m,1}$. Finding a closed form for the leading term, $x = -1$, is a trivial exercise as it depends only on the previous term in the $x = -1$ sequence\footnote{This sequence also puts to rest the concern about division by zero noted previously, as its terms are always positive.}; for all other offsets, the above strong recursive definitions must first be weakened, i.e., their dependence on sequences with offsets different from their own must be pruned. \medskip Suppose the sequence to be weakened is $x = b$.
To weaken the recursive definition, the closed form for sequences with $x < b$ must be known; these closed forms are then to be substituted into the strong recursive definitions given above to obtain an expression solely in terms of $m, x,$ and previous terms of the $x = b$ sequence. \pagebreak It can be shown that these weakened/condensed recursive forms in general take the form: $$a_{m,m-x} = a_{m-1,(m-1)-x}\frac{m~(m-x-1)}{(m+1)(m-x)} + \frac{C_x \cdot m!}{(m+1) (m-x)!}$$ \medskip with base cases given by the independent term: $$a_{x+1,1} = \frac{C_x \cdot (x+1)!}{x+2}$$ where the $C_{i}$'s are constants unique to each offset sequence (these eventually lead to the Bernoulli numbers). This condensed recursive form simplifies the proof of the closed form of the coefficients, which at this stage are given by: $$a_{m,m-x} = \frac{C_x \cdot m!}{(x+2)(m-x)!}$$ \medskip To conduct this proof, proof by mathematical induction is first used to show the condensed recursive definition yields the closed form above. Secondly, a proof by strong induction then proves that the above closed form, when substituted into the strongly recursive definition, yields the condensed recursive definition. These two proofs/sub-proofs also yield the definition for the $C_i$'s: $$C_x = - \bigg[~\frac{C_{x-1}}{x+1}\beta_{0} + \frac{C_{x-2}}{x}\beta_{1} + \cdots + \frac{C_{0}}{2}\beta_{x-1} + \frac{C_{-1}}{1}\beta_{x}^* ~\bigg],~~~~ C_{-1} = 1$$ The $\beta_{i}$ terms arise from the $\alpha_{k,i}$'s and are given by: $$\beta_{x} = \frac{C_{-1}(-1)^{x+1}}{1 \cdot (x+1)!} + \frac{C_{0}(-1)^{x}}{2 \cdot x!} + \cdots + \frac{C_{x-1}(-1)^{1}}{(x+1) \cdot 1!} + \frac{C_{x}(-1)^{0}}{(x+2)\cdot 0!}$$ $$\beta_{x}^* = \frac{C_{-1}(-1)^{x+1}}{1 \cdot (x+1)!} + \frac{C_{0}(-1)^{x}}{2 \cdot x!} + \cdots + \frac{C_{x-2}(-1)^{2}}{x \cdot 2!} + \frac{C_{x-1}(-1)^{1}}{(x+1)\cdot 1!}$$ \pagebreak The $C_i$'s are a few steps away from the Bernoulli numbers. 
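The non-linear recursion for the $C_x$'s, together with the closed form $a_{m,m-x} = \frac{C_x\, m!}{(x+2)(m-x)!}$, can be checked with exact rational arithmetic; the sketch below (with names of our choosing) does so against a brute-force power sum.

```python
from fractions import Fraction
from math import factorial

def C_sequence(x_max):
    """C_{-1}, ..., C_{x_max} via the non-linear recursion
    C_x = -[ C_{x-1}/(x+1) * beta_0 + ... + C_{-1}/1 * beta_x^* ],
    where beta_b = sum_{i=-1}^{b} C_i (-1)^(b-i) / ((i+2) (b-i)!)
    and the starred beta_x^* omits its final (i = x) term."""
    C = {-1: Fraction(1)}
    for x in range(x_max + 1):
        total = Fraction(0)
        for j in range(-1, x):              # C_j pairs with beta_{x-1-j}
            b = x - 1 - j
            top = b - 1 if j == -1 else b   # starred beta drops i = b
            beta = sum(C[i] * Fraction((-1) ** (b - i),
                                       (i + 2) * factorial(b - i))
                       for i in range(-1, top + 1))
            total += C[j] / (j + 2) * beta
        C[x] = -total
    return C

def coeff(m, x, C):
    """Closed-form coefficient a_{m,m-x} = C_x m! / ((x+2)(m-x)!)."""
    return C[x] * Fraction(factorial(m), (x + 2) * factorial(m - x))

# check the closed form against a brute-force power sum
C = C_sequence(5)
m, n = 4, 7
assert sum(coeff(m, x, C) * n ** (m - x) for x in range(-1, m)) \
       == sum(i ** m for i in range(1, n + 1))
```

The recursion yields, for instance, $C_{-1} = 1$, $C_0 = 1$ and $C_1 = \frac{1}{4}$, reproducing the familiar leading coefficients $\frac{1}{m+1}$ and $\frac{1}{2}$.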
It can be observed and proven that a given $C_x$ has a factor of one over $(x+1)!$, which leads to the $D_x$ sequence: $$C_x = \frac{D_x}{(x+1)!} \implies D_x = (x+1)! C_x$$ At this point, using the base case of $\sum_{i=1}^{n} i^{0} = n$ and proof by strong and mathematical induction, this derivation has proven that: $$\sum_{i=1}^{n} i^{m} = \frac{D_{-1}~ m!}{1!~(m+1)!}n^{m+1} + \frac{D_{0}~ m!}{2!~m!}n^{m} + \cdots + \frac{D_{m-1}~ m!}{(m+1)!~1!}n^{1}$$ $$\implies \sum_{i=1}^{n} i^{m} = \sum_{x = -1}^{m-1} \frac{D_{x}~ m!}{(x+2)!~(m-x)!}n^{m-x}$$ \medskip which is completely analogous to Faulhaber's formula. From the above it can be observed that when the traditional indexing from 0 is used for the Bernoulli numbers (with the $B_1 = +\frac{1}{2}$ convention of Bernoulli's formula above), they are related quite simply to the $D_x$ sequence by: $$D_x = (x+2) B_{x+1} \implies B_{x} = \frac{D_{x-1}}{x+1}$$ (For example, matching the coefficient of $n^{m}$ gives $\frac{D_0}{2} = \frac{1}{2} = B_1$, so $D_0 = 1 = 2B_1$.) \medskip Alternatively, when the Bernoulli numbers are indexed from $-1$: $$D_x = (x+2) B_{x} \implies B_x = \frac{D_x}{x+2}$$ The most interesting consequence of this equivalency, however, is the difference in the definitions of the Bernoulli numbers and the $D_x$'s. There are many definitions for the Bernoulli numbers, but the typical recursive form is linear with respect to the $B_i$'s; the definition for the $D_x$'s, however, is non-linear with respect to the other $D_i$'s. Ignoring the Bernoulli numbers and their many definitions, this duality in definition can be observed using only the $D_x$'s.
Using what has been proven thus far, consider the case of a power sum with $n=1$ for any $m$: $$\sum_{i=1}^1 i^m = 1^m = 1 = \frac{D_{-1}~ m!}{1!~(m+1)!}(1)^{m+1} + \frac{D_{0}~ m!}{2!~m!}(1)^{m} + \cdots + \frac{D_{m-1}~ m!}{(m+1)!~1!}(1)^{1}$$ $$ \implies 1 = \frac{D_{-1}~ m!}{1!~(m+1)!} + \frac{D_{0}~ m!}{2!~m!} + \cdots + \frac{D_{m-2}~ m!}{m!~2!} + \frac{D_{m-1}~ m!}{(m+1)!~1!}$$ \medskip For the sake of notation, let $m$ become $(x+1)$ and then solve for $D_x$, whose coefficient in the sum is $\frac{1}{x+2}$: $$D_x = (x+2)\left(1 - (x+1)!\bigg[~ \frac{D_{-1}}{1!~(x+2)!} + \frac{D_{0}}{2!~(x+1)!} + \cdots + \frac{D_{x-2}}{x!~3!} + \frac{D_{x-1}}{(x+1)!~2!} ~\bigg]\right)$$ \medskip This definition for the $D_x$ is analogous to one of the recursive definitions commonly given for the Bernoulli numbers, but recall this derivation gives a non-linear definition for the $D_x$'s: $$D_x = -(x+1)! \bigg[~ \frac{D_{x-1}}{(x+1)!} \beta_{0} + \frac{D_{x-2}}{x!} \beta_{1} + \cdots + \frac{D_{0}}{2!} \beta_{x-1} + \frac{D_{-1}}{1!} \beta_{x}^*~\bigg]$$ \medskip $$\beta_{x} = \frac{D_{-1}(-1)^{x+1}}{1! \cdot (x+1)!} + \frac{D_{0}(-1)^{x}}{2! \cdot x!} + \cdots + \frac{D_{x-1}(-1)^{1}}{(x+1)! \cdot 1!} + \frac{D_{x}(-1)^{0}}{(x+2)!\cdot 0!}$$ \medskip $$\beta_{x}^* = \frac{D_{-1}(-1)^{x+1}}{1! \cdot (x+1)!} + \frac{D_{0}(-1)^{x}}{2! \cdot x!} + \cdots + \frac{D_{x-2}(-1)^{2}}{x! \cdot 2!} + \frac{D_{x-1}(-1)^{1}}{(x+1)!\cdot 1!}$$ \medskip Additionally, there is another linear definition for the $D_x$'s that can be gleaned from this derivation using the idea of an empty sum where $n = -1$: $$\sum_{i=1}^{-1} i^m = 0 = \frac{D_{-1}~ m!}{1!~(m+1)!}(-1)^{m+1} + \frac{D_{0}~ m!}{2!~m!}(-1)^{m} + \cdots + \frac{D_{m-1}~ m!}{(m+1)!~1!}(-1)^{1}$$ $$D_x = (x+2)(x+1)!\bigg[~ \frac{D_{-1}(-1)^{x+2}}{1!~(x+2)!} + \frac{D_{0}(-1)^{x+1}}{2!~(x+1)!} + \cdots + \frac{D_{x-2}(-1)^{3}}{x!~3!} + \frac{D_{x-1}(-1)^{2}}{(x+1)!~2!} ~\bigg]$$ \medskip These contending definitions (among others) testify to the fascinating nature of the Bernoulli numbers.
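The agreement between the two linear conditions can be checked directly. The sketch below generates the $D_x$ twice, once from the $n = 1$ condition $\sum_{i=1}^{1} i^m = 1$ and once from the empty-sum condition at $n = -1$, both seeded with $D_{-1} = 1$, and verifies that the two sequences coincide:

```python
from fractions import Fraction
from math import factorial

def D_from_n1(M):
    # impose 1 = sum_{x=-1}^{m-1} D_x m!/((x+2)!(m-x)!)   (the n = 1 case)
    D = {-1: Fraction(1)}
    for m in range(1, M + 1):
        partial = sum(D[j] * Fraction(factorial(m), factorial(j + 2) * factorial(m - j))
                      for j in range(-1, m - 1))
        D[m - 1] = (1 - partial) * (m + 1)   # coefficient of D_{m-1} is 1/(m+1)
    return D

def D_from_empty_sum(M):
    # impose 0 = sum_{x=-1}^{m-1} D_x m! (-1)^(m-x) /((x+2)!(m-x)!)   (the n = -1 case)
    D = {-1: Fraction(1)}
    for m in range(1, M + 1):
        partial = sum(D[j] * (-1) ** (m - j)
                      * Fraction(factorial(m), factorial(j + 2) * factorial(m - j))
                      for j in range(-1, m - 1))
        D[m - 1] = partial * (m + 1)   # coefficient of D_{m-1} is (-1)/(m+1)
    return D

assert D_from_n1(10) == D_from_empty_sum(10)
print("n = 1 and n = -1 linear definitions agree")
```

Both routes reproduce the same rational sequence, which is the equivalence the empty-sum argument asserts.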
From the above, a family of recursive polynomials and rational functions can be defined as can a matrix equation. The matrix equation and its consequences have been detailed by Giorgio Pietrocola and thus will not be discussed in great depth in Part II of this paper \cite{Pietrocola}. The conversion of the linear recursive forms into a family of polynomials, dubbed the partial polynomials, and a similarly defined family of rational functions, however, will be discussed in detail in Part II of this paper, as will the equivalence of the non-linear and linear definitions. \newpage
\section{Introduction} In recent times, some interesting field theoretical descriptions of the statistical mechanics of polymer rings subjected to topological constraints have been proposed by various authors \cite{polftmft}--\cite{kleinert}. In all these approaches, which are based on the pioneering works \cite{edwards},\cite{degennes}, the inclusion of higher order link invariants \cite{kauffmann},\cite{witten}, \cite{jones} is still an unsolved problem. On the other hand, these invariants are necessary in order to specify in a unique way the distinct topological states of the polymers. The main difficulty in the application of higher order link invariants is that they cannot be easily expressed in terms of the variables which characterize the polymer, i. e. its trajectory in the three dimensional $(3-D)$ space and its contour length. For this reason, in all analytical methods used to study entangling polymers only the simplest topological invariant is considered, namely the Gauss linking number \cite{polftcsprop}. Even in this approximation, a rigorous treatment of the $3-D$ entanglement problem is mathematically difficult. Basically, only the case of a single polymer fluctuating in a background of static polymers or fixed obstacles has been investigated until now. The configurations of the background polymers may be ``averaged'' exploiting mean-field type arguments (see e. g. \cite{polftmft}). As has been recently suggested \cite{polftcsprop},\cite{kleinert}, a possible way out of the above difficulties is the introduction of a Chern-Simons (C--S) field theory \cite{chernsimons} in the treatment of polymer entanglement. In doing this, however, one is faced with the problem of computing expectation values of holonomies around the closed trajectories formed by the polymers in space. As is well known, these expectation values are affected by the presence of ambiguous contributions coming from the self-linking of the loops.
In pure C--S field theories, a good regularization for these non-topological terms is provided by the so-called framing procedure \cite{witten}. However, the framing depends on the form of the loop, which in our case is a dynamical variable, in the sense that one has to sum over all the possible trajectories of the polymers in $3-D$ space. Therefore, the addition of a framing complicates the form of the polymer action to the extent that the integration over the polymer configurations can no longer be performed in any closed form. To avoid this obstacle, we propose in this paper a model in which the polymers are coupled to two abelian C--S fields. The coupling constants are suitably chosen in order to cancel all the undesired terms coming from the self-linking of the loops without the need for a framing. Of course, this procedure does not replace framing, but it is sufficient to eliminate the self-linking ambiguities at least for the C--S amplitudes that are necessary in our context. The advantage of introducing auxiliary C--S fields is that they mediate the topological interactions between the polymers while decoupling their actions. As a consequence, each polymer can be separately treated with the powerful methods developed in refs. \cite{gaugemod},\cite {tanaka}. In this way, we are able to formulate the problem of two fluctuating polymers subjected to topological constraints in terms of a C--S gauge field theory with $n$ components in the limit in which $n$ goes to zero. With respect to refs. \cite{gaugemod},\cite {tanaka}, the only complication in our approach is that the external magnetic field arising due to the presence of the background polymers is replaced by quantum C--S fields. For instance, we can compute the analogues of all the polymer configurational probabilities derived in ref. \cite{tanaka}. This will be done in Section 3. The material presented in this paper is organized as follows.
In Section 2 we briefly review the field theoretical approach developed in refs. \cite{gaugemod},\cite {tanaka} in the case of a test polymer entangling with another polymer of fixed conformation. In Section 3 this approach will be extended to the situation in which both polymers are dynamical. Finally, in the Conclusions some possible generalizations and applications of our treatment will be discussed. \section{Statistical--Mechanical Theory of Polymer Entanglement} Let $P$ be a polymer (see e. g. ref. \cite{kleinert} for a general introduction on the physics of polymers) represented as a long chain of $N+1$ segments $\vec{r}_{i+1} - \vec{r}_i$ for $i=0,\ldots,N$. Each segment has fundamental step length $a$, which is supposed to be very small with respect to the total length of the polymer. Moreover, the junction between adjacent segments is such that they can freely rotate in all directions. In the limit of large values of $N$, the ensemble of $M$ polymers $P_1,\ldots, P_M$ of this kind can be regarded as the ensemble of $M$ particles subjected to a self-avoiding random walk. The whole configuration of a polymer $P$ is thus entirely specified by the trajectory $\mbd{r}(s)$ of a particle in three dimensional ($3-D$) space, with $0\le s \le L$. $L$ is the contour length of the polymer and plays the role of time. We assume that the molecules of the polymer repel each other with a self-avoiding potential $v(\mbd{r}-\mbd{r}')$. The potential $v$ must be strong enough to avoid unwanted intersections of the trajectory $\mbd{r}(s)$ with itself. In the following, the case of two closed or infinitely long polymers $P_1$ and $P_2$ of contour lengths $L_1$ and $L_2$, respectively, will be investigated. We suppose that they describe closed loops or infinitely long Wilson lines $C_1$ and $C_2$ in $3-D$ space.
In order to take into account the entanglement of $P_1$ around $P_2$, we introduce the Gauss invariant $\chi(C_1,C_2)$: \begin{equation} \chi(C_1,C_2)\equiv \frac{1}{4\pi}\oint_{C_1}\oint_{C_2}d\mbd{r}_1\times d\mbd{r}_2\cdot \frac{{\mbd{r}_1}-\mbd{r}_2}{|\mbd{r}_1 - \mbd{r}_2|^3} \label{gaussinv} \end{equation} Physically, $\chi(C_1,C_2)$ can be interpreted as the potential energy of a particle $\mbd{r}_1$ moving inside a magnetic field $\mbd{B}$ generated by the particle $\mbd{r}_2$: \begin{equation} \chi(C_1,C_2)=\oint_{C_1}d\mbd{r}_1(s_1)\cdot \mbd{B}(\mbd{r}_1(s_1)) \label{physgaussinv} \end{equation} where \begin{equation} \mbd{B}(\mbd{r}_1)=\frac{1}{4\pi}\oint_{C_2} d\mbd{r}_2(s_2)\times \frac{{\mbd{r}_1}-\mbd{r}_2(s_2)}{|\mbd{r}_1 - \mbd{r}_2(s_2)|^3} \label{magfie} \end{equation} For the moment, we confine ourselves to the situation in which one single test polymer, for instance $P_1$, is entangling with a polymer $P_2$ of static configuration $\mbd{r}_2(s_2)$, with $0\le s_2\le L_2$.
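The Gauss invariant (\ref{gaussinv}) can be evaluated numerically for explicit curves. The sketch below discretizes the double line integral with the midpoint rule for a hypothetical test configuration (two unit circles forming a Hopf link, not taken from the text); the result converges to $\pm 1$, while for two unlinked circles it converges to $0$:

```python
import math

def gauss_linking(curve1, curve2, N=200):
    # Midpoint-rule discretization of the Gauss double integral:
    # chi = (1/4pi) sum_{i,j} (dr1_i x dr2_j) . (m1_i - m2_j) / |m1_i - m2_j|^3
    def segments(curve):
        pts = [curve(2 * math.pi * k / N) for k in range(N)]
        mids = [tuple((a + b) / 2 for a, b in zip(pts[k], pts[(k + 1) % N])) for k in range(N)]
        drs = [tuple(b - a for a, b in zip(pts[k], pts[(k + 1) % N])) for k in range(N)]
        return mids, drs
    m1, d1 = segments(curve1)
    m2, d2 = segments(curve2)
    chi = 0.0
    for i in range(N):
        ax, ay, az = d1[i]
        px, py, pz = m1[i]
        for j in range(N):
            bx, by, bz = d2[j]
            rx, ry, rz = px - m2[j][0], py - m2[j][1], pz - m2[j][2]
            # triple product (dr1 x dr2) . (r1 - r2)
            triple = ((ay * bz - az * by) * rx
                      + (az * bx - ax * bz) * ry
                      + (ax * by - ay * bx) * rz)
            chi += triple / (rx * rx + ry * ry + rz * rz) ** 1.5
    return chi / (4 * math.pi)

# unit circle in the xy-plane, and a unit circle in the xz-plane through its rim
linked = gauss_linking(lambda t: (math.cos(t), math.sin(t), 0.0),
                       lambda t: (1.0 + math.cos(t), 0.0, math.sin(t)))
unlinked = gauss_linking(lambda t: (math.cos(t), math.sin(t), 0.0),
                         lambda t: (4.0 + math.cos(t), 0.0, math.sin(t)))
assert abs(abs(linked) - 1.0) < 1e-2
assert abs(unlinked) < 1e-2
```

The polygonal approximation converges like $O(1/N^2)$, so modest discretizations already resolve the integer value of the invariant.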
The configurational probability $G_m(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)$ to find $P_1$ with one end at a point $\mbd{r}_1$ and the other end at $\mbd{r}_{0,1}$, after intersecting $m=0,1,2,\ldots$ times an arbitrary surface having $P_2$ as its border, can be expressed in terms of path integrals as the Green function of a particle subjected to a self-avoiding random walk \cite{tanaka}: \[ G_m(\mbd{r}_1,L_1;\mbd{r}_{0,1},0) = \int_{\mbd{r}_{0,1}=\mbd{r}_1(0)}^{\mbd{r}_1=\mbd{r}_1(L_1)}{\cal D}\mbd{r}_1(s_1)\delta(\chi(C_1,C_2)-m)\times \] \begin{equation} \mbox{exp}\left \{ -\int_0^{L_1}ds_1{\cal L}_0-\frac{1}{2a^2} \int_0^{L_1}ds_1\int_0^{L_1}ds_1^\prime v(\mbd{r}(s_1)-\mbd{r}(s_1^\prime))\right\} \label{singlepoldyn} \end{equation} where \begin{equation} {\cal L}_0 = \frac 3{2a}\dot{\mbd{r}}_1^2\label{freelag} \end{equation} The case of closed loops or infinitely long polymers is recovered by taking suitable boundary conditions in the path integral (\ref{singlepoldyn}). For instance, we require that $\mbd{r}_1=\mbd{r}_{0,1}$ for a closed loop. To simplify the computations, it is convenient to work with the chemical potential $\lambda$ conjugate to the topological charge $m$. Thus we take the Fourier transform of $G_m(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)$ with respect to $m$: \begin{equation} G_m(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)= \int \frac{d\lambda}{2\pi}e^{-i\lambda m} G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)\label{ftongm} \end{equation} Comparing with eq.
(\ref{singlepoldyn}), the Green function $G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)$ is given by: \[ G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)=\int_{\mbd{r}_{0,1}}^{\mbd{r}_1}{\cal D}\mbd{r}_1(s_1) \mbox{exp}\left \{ -\int_0^{L_1}ds_1{\cal L}_0\right\}\times \] \begin{equation} \mbox{exp}\left \{-\frac{1}{2a^2} \int_0^{L_1}ds_1\int_0^{L_1}ds_1^\prime v(\mbd{r}(s_1)-\mbd{r}(s_1^\prime))+ i\lambda \chi(C_1,C_2)\right\}\label{aa} \end{equation} Let us notice that the above path integral does not describe a Markoffian random walk due to the presence of the non-local self-avoiding term. To reduce it to a Markoffian random walk, we introduce gaussian scalar fields $\phi(\mbd{r})$ with propagator: \begin{equation} \langle \phi(\mbd{r})\phi(\mbd{r}')\rangle =\frac 1{a^2}v(\mbd{r} -\mbd{r}') \label{gpropone} \end{equation} Thus we have from eq. (\ref{aa}) \begin{equation} G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0)=\langle G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi,\mbd{B})\rangle_\phi\label{aaa} \end{equation} In (\ref{aaa}) the symbol $\langle\enskip\rangle_\phi$ denotes average over the auxiliary fields $\phi$ and \begin{equation} G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi,\mbd{B})= \int_{\mbd{r}_{0,1}}^{\mbd{r}_1}{\cal D}\mbd{r}_1(s_1) \mbox{exp}\left \{ -\int_0^{L_1}ds_1\left ({\cal L}_\phi+ i \dot{\mbd{r}_1}(s_1)\cdot\lambda\mbd{B}(\mbd{r}_1(s_1))\right)\right \}\label{bbb} \end{equation} where we have put \begin{equation} {\cal L}_\phi = {\cal L}_0 + i\phi(\mbd{r}_1(s_1)) \label{ccc} \end{equation} We remark that the Green function $G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi,\mbd{B})$ is formally that of a particle diffusing under the magnetic field $\mbd{B}$ defined in eq. (\ref{magfie}) and an imaginary electric potential $i\phi$.
Thus it can be shown to satisfy the Schr\"odinger-like equation: \begin{equation} \left\{\frac{\partial}{\partial L_1} -\frac a 6({\mbld \nabla}+i\lambda\mbd{B} )^2+i\phi\right\}G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi,\mbd{B})= \delta(L_1)\delta(\mbd{r}_1-\mbd{r}_{0,1}) \label{ddd} \end{equation} The Laplace transform of the above equation with respect to the contour length $L_1$ is \begin{equation} \left\{z_1 -\frac a 6({\mbld \nabla}+i\lambda\mbd{B} )^2+i\phi\right\}G_\lambda(\mbd{r}_1, \mbd{r}_{0,1};z_1|\phi,\mbd{B})= \delta(\mbd{r}_1-\mbd{r}_{0,1}) \label{lapltransf} \end{equation} where \begin{equation} G_\lambda(\mbd{r}_1, \mbd{r}_{0,1};z_1|\phi,\mbd{B}) = \int_0^\infty dL_1e^{-z_1L_1}G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi,\mbd{B}) \label{defla} \end{equation} The variable $z_1$ plays the role of the chemical potential conjugate to the contour length $L_1$. From now on, we set $\mbd{D}=\mbld{\nabla} +i\lambda \mbd{B}$. Starting from eq. (\ref{lapltransf}) and integrating over the auxiliary fields $\phi$ by means of the replica trick, one can express the Laplace transform $G_\lambda(\mbd{r}_1,\mbd{r}_{0,1};z_1)$ of the Green function (\ref{aa}) in terms of second quantized fields. Skipping the details of the derivation, which can be found in ref. \cite{tanaka}, we just state the result that one obtains in the case of a self-avoiding potential of the kind $v(\mbd{r}) =a^2v_0\delta(\mbd{r})$: \begin{equation} G_\lambda(\mbd{r}_1,\mbd{r}_{0,1};z_1)=\lim_{n\to 0}\int\prod_{\omega=1}^n {\cal D}\psi^{*\omega} {\cal D}\psi^{\omega}\psi^{*\overline \omega}(\mbd{r}_1) \psi^{\overline \omega}(\mbd{r}_{0,1}) e^{-F[\Psi]} \label{onepolymerfinal} \end{equation} In the above equation the fields $\psi^{*\omega},\psi^\omega$, $\omega=1,\ldots, n$, are complex replica fields and $\Psi = (\psi^1,\ldots,\psi^n)$.
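The passage from eq. (\ref{ddd}) to eq. (\ref{lapltransf}) via the Laplace transform (\ref{defla}) can be illustrated in the free case ($\phi = \mbd{B} = 0$) in one dimension, where both the diffusion Green function and its Laplace transform are known in closed form. The sketch below is a numerical check of that one-dimensional transform pair, with a generic diffusion constant $D$ standing in for $a/6$:

```python
import math

def laplace_numeric(x, z, D, T=40.0, steps=50000):
    # integral_0^infty dL e^{-zL} (4 pi D L)^{-1/2} exp(-x^2/(4DL)),
    # with the substitution L = u^2 to tame the 1/sqrt(L) factor at L = 0
    total = 0.0
    du = math.sqrt(T) / steps
    for k in range(steps):
        u = (k + 0.5) * du
        total += math.exp(-z * u * u - x * x / (4 * D * u * u))
    return total * du / math.sqrt(math.pi * D)

def laplace_exact(x, z, D):
    # known closed form of the Laplace-transformed 1D heat kernel,
    # which solves (z - D d^2/dx^2) G = delta(x)
    return math.exp(-abs(x) * math.sqrt(z / D)) / (2 * math.sqrt(D * z))

num = laplace_numeric(0.7, 1.3, 0.5)
exact = laplace_exact(0.7, 1.3, 0.5)
assert abs(num - exact) < 1e-6 * exact
```

The transformed kernel indeed solves the resolvent equation, the one-dimensional counterpart of eq. (\ref{lapltransf}).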
Moreover, ${\overline \omega}$ is an arbitrarily chosen integer in the range $1,...,n$ and the polymer free energy $F[\Psi]$ is given by: \begin{equation} F[\Psi] = \int d^3r\left\{\frac a 6||{\mbd D}\Psi||^2+z_1||\Psi||^2 +v_0||\Psi||^4\right\} \label{spfreeenergy} \end{equation} where $||{\mbd D}\Psi||^2=\sum_\omega({\mbd D}\psi^\omega)^\dagger{\mbd D} \psi^\omega$ and $||\Psi||^2=\sum_\omega|\psi^\omega|^2$. The generalization of the above formulas to an arbitrary number $M$ of static polymers $P_2,\ldots,P_M$ is straightforward, but not the inclusion of their fluctuations. Already in the case of two polymers, the analogue of the Schr\"odinger equation (\ref{lapltransf}) becomes complicated due to the presence of the nontrivial interactions between the polymers introduced by the Gauss invariant (\ref{gaussinv}). This makes the derivation of the Green function $G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi,\mbd{B})$ in terms of second quantized fields extremely difficult, apart from a few cases in which the trajectories of the polymers are strongly constrained. On the other hand, without including the fluctuations of all polymers, there is always the difficulty of determining those configurations of the static polymers which are physically relevant. Indeed, we see from eqs. (\ref{onepolymerfinal}--\ref{spfreeenergy}) that the free energy $F[\Psi]$ of the test polymer $P_1$ depends on the trajectory $\mbd{r}_2(s)$ through the magnetic potential ${\mbd B}$ contained in the covariant derivative $\mbd{D}$. As we will see in the next section, the introduction of auxiliary Chern--Simons fields will remove all these problems. \section{Topological Entanglement of Polymers via Chern--Simons Fields} We study in this Section the fluctuations of two polymers $P_1$ and $P_2$ subjected to topological constraints.
In analogy with the previous Section, we consider the configurational probability $G_m(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0)$ of finding the polymer $P_1$ with ends in $\mbd{r}_1$ and $\mbd{r}_{0,1}$ and the polymer $P_2$ with ends in $\mbd{r}_2$ and $\mbd{r}_{0,2}$. Moreover, we require that $P_1$ intersects $P_2$ $m$ times. Here we have put $\{\mbd{r}\}=\mbd{r}_1,\mbd{r}_2$, $\{L\}=L_1,L_2$ etc. From now on, the indices $\tau,\tau' = 1,2$ will be used to distinguish the two different polymers. The self-avoiding potential of the previous Section must be extended in the present case in order to take into account the reciprocal repulsions among the molecules of the two different polymers. Thus we choose a self-avoiding potential of the kind: \begin{equation} v_{\tau\tau'}(\mbd{r}_\tau(s_\tau)-\mbd{r}_{\tau'}(s'_{\tau'})) = v^0_{\tau\tau'}v(\mbd{r}_\tau(s_\tau)-\mbd{r}_{\tau'}(s'_{\tau'})) \label{tpselfavpot} \end{equation} where $v^0_{\tau\tau'}$ is a symmetric $2\times 2$ matrix and $v(\mbd{r})$ is a strongly repulsive potential. As seen in the previous Section, the presence of self-avoiding potentials of this kind leads to random walks which are not Markoffian.
To solve this problem, we introduce here auxiliary scalar fields with gaussian action and propagator: \begin{equation} \langle\phi_\tau(\mbd{x})\phi_{\tau'}(\mbd{y})\rangle =\frac 1 {a^2} v^0_{\tau\tau'}v(\mbd{x} -\mbd{y})\label{modpot} \end{equation} Later on we will make use of the following formula: \[ \int \prod_{\tau=1}^2{\cal D}\phi_\tau \mbox{\rm exp} {\left\{-\frac{a^2}2\int d^3\mbd{x} d^3\mbd{y}\phi_\tau(\mbd{x})M^{\tau\tau'} (\mbd{x},\mbd{y})\phi_{\tau'}(\mbd{y})-i\sum_{\tau=1}^2\int d^3\mbd{x} J_\tau(\mbd{x}) \phi_\tau(\mbd{x})\right\}}= \] \begin{equation} \mbox{\rm exp}\left\{-\frac 1{2a^2}\int_0^{L_\tau}\int_0^{L_{\tau'}} ds_\tau ds'_{\tau'} v_{\tau\tau'}(\mbd{r}_\tau(s_\tau)-\mbd{r}_{\tau'}(s_{\tau'})) \right\} \label{eee} \end{equation} where $M^{\tau\tau'}(\mbd{x},\mbd{y})$ is the inverse of the matrix $v_{\tau\tau'}(\mbd{x}-\mbd{y})$ and \begin{equation} J_\tau(\mbd{x})=\int_0^{L_\tau}ds_\tau\delta^{(3)}(\mbd{x}-\mbd{r}_\tau(s_\tau)) \label{fff} \end{equation} Let us now rewrite the topological contribution in the path--integral (\ref{aa}) in a more suitable way by means of auxiliary C--S fields. With the introduction of these fields, our treatment of the polymer entanglement problem departs from that of Section 2 and from ref. \cite {tanaka}. We will consider for our purposes abelian C-S field theories of action: \begin{equation} {\cal A}_{CS}(A_\mu,\kappa) =\frac\kappa{8\pi}\int d^3\mbd{x}\epsilon^{\mu\nu\rho}A_\mu \partial_\nu A_\rho\label{csaction} \end{equation} with $\mu,\nu,\rho=1,2,3$. $\kappa$ is a real coupling constant and $\epsilon^{\mu\nu\rho}$ is the completely antisymmetric tensor in $3-D$. The above action can also be written in another useful form: \begin{equation} {\cal A}_{CS}(\mbd{A},\kappa) =\frac\kappa{8\pi}\int d^3\mbd{r} \mbd{A} \cdot(\mbld{\nabla}\times \mbd{A})\label{pcsaction} \end{equation} where $\mbd{r}=(x^1,x^2,x^3)$.
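The mechanism behind eq. (\ref{eee}) is the standard Gaussian (Hubbard--Stratonovich) integral. Its zero-dimensional skeleton, with a single variable $\phi$ in place of the fields and a number $v$ in place of the potential, can be checked numerically; all parameter values below are arbitrary:

```python
import math

def hs_ratio(a, v, J, L=12.0, steps=60000):
    # ratio of int dphi exp(-a^2 phi^2/(2v) - i J phi) to the J = 0 integral;
    # the odd imaginary part integrates to zero, leaving cos(J phi)
    num = den = 0.0
    d = 2 * L / steps
    for k in range(steps):
        p = -L + (k + 0.5) * d
        w = math.exp(-a * a * p * p / (2 * v))
        num += w * math.cos(J * p)
        den += w
    return num / den

a, v, J = 1.0, 0.8, 1.2
# the Gaussian integration produces exp(-v J^2/(2 a^2)), the single-variable
# analogue of the self-avoiding term on the right-hand side of eq. (eee)
assert abs(hs_ratio(a, v, J) - math.exp(-v * J * J / (2 * a * a))) < 1e-7
```

The imaginary linear coupling to the source $J$ thus turns, after the Gaussian average, into the quadratic self-interaction, exactly as the auxiliary fields $\phi_\tau$ reproduce the self-avoiding potential.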
To quantize the C--S theory we choose the Feynman gauge with propagator: \begin{equation} G_{\mu\nu}(\mbd{x},\mbd{y})=\frac i\kappa \epsilon_{\mu\nu\rho} \frac{(x-y)^\rho}{|\mbd{x} -\mbd{y}|^3} \label{csprop} \end{equation} The observables of the theory are gauge invariant operators built out of the basic fields $A_\mu$. A complete set is given by the holonomies around closed curves \begin{equation} {\cal W}(C,\gamma)\equiv \mbox{\rm exp} \left\{-i\gamma\oint_C A_\mu dx^\mu \right\} \label{holonomy} \end{equation} The vacuum expectation value of two of these observables ${\cal W}(C_1,\gamma_1)$ and ${\cal W}(C_2,\gamma_2)$ is: \begin{equation} \langle {\cal W}(C_1,\gamma_1){\cal W}(C_2,\gamma_2)\rangle_A = \mbox{\rm exp}\left\{ -i\left(\frac {2\pi}\kappa\right) \left[\gamma_1^2\chi(C_1,C_1)+\gamma_2^2\chi(C_2,C_2)+2 \gamma_1\gamma_2\chi(C_1,C_2)\right]\right\} \label{wleval} \end{equation} where $\chi(C_\tau,C_\tau)$, $\tau=1,2$ is the so-called self-linking number of the loop $C_\tau$. To reproduce the term of eq. (\ref{aa}) containing the Gauss invariant $\chi$, we need two Chern--Simons fields $a_\mu$ and $b_\mu$ with actions ${\cal A}_{CS}(\mbd{a},\kappa)$ and ${\cal A}_{CS}(\mbd{b},-\kappa)$ respectively. Using eq. (\ref{wleval}) and setting for instance \begin{equation} \gamma_1=\frac\kappa{4\pi}\qquad \qquad \gamma_2 = \frac \lambda 4 \label{positiona} \end{equation} one sees in fact that \begin{equation} \langle{\cal W}(C_1,\gamma_1){\cal W}(C_2,\gamma_2)\rangle_a\enskip \langle{\cal W}(C_1,-\gamma_1){\cal W}(C_2,\gamma_2)\rangle_b= \mbox{\rm exp}\left\{-i\lambda\chi(C_1,C_2)\right\} \label{selflinkelim} \end{equation} The right hand side of the above equation is exactly the contribution due to the topological entanglements of the polymers appearing in eq. (\ref{aa}). We are now ready to write the expression of the Green function $G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0)$ for two entangling polymers.
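The cancellation at work in eq. (\ref{selflinkelim}) is simple arithmetic on the exponents of eq. (\ref{wleval}): for the field $b$ both the coupling constant and the charge $\gamma_1$ change sign, so the self-linking terms $\chi(C_\tau,C_\tau)$ drop out while the cross terms add up. A sketch of this bookkeeping, with arbitrary numerical values for the couplings and the (self-)linking numbers:

```python
import math

def wleval_exponent(kappa, g1, g2, chi11, chi22, chi12):
    # coefficient of (-i) in the exponent of eq. (wleval)
    return (2 * math.pi / kappa) * (g1 * g1 * chi11 + g2 * g2 * chi22
                                    + 2 * g1 * g2 * chi12)

kappa, lam = 1.7, 0.9
g1, g2 = kappa / (4 * math.pi), lam / 4   # charges as in eq. (positiona)
chi12 = 3.0                                # linking number of C_1 and C_2

def combined(chi11, chi22):
    # field a: coupling +kappa, charges (+g1, g2);
    # field b: coupling -kappa, charges (-g1, g2)
    return (wleval_exponent(kappa, g1, g2, chi11, chi22, chi12)
            + wleval_exponent(-kappa, -g1, g2, chi11, chi22, chi12))

# the self-linking numbers chi11, chi22 drop out of the combined exponent ...
assert abs(combined(2.3, -1.1) - combined(17.0, 5.0)) < 1e-12
# ... which is proportional to the mutual linking number alone
assert abs(combined(2.3, -1.1) - (8 * math.pi / kappa) * g1 * g2 * chi12) < 1e-12
```

Whatever the values of the ambiguous self-linking numbers, the combined expectation value depends only on $\chi(C_1,C_2)$, which is the whole point of introducing the two fields with opposite couplings.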
First of all, let us put: \begin{equation} G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0) =\langle G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0|\{\phi\},\{\mbd{A}\})\rangle_{\{\phi\},\mbd{a},\mbd{b}} \label{ggg} \end{equation} where $\langle\enskip\rangle_{\{\phi\},\mbd{a},\mbd{b}}$ denotes the average with respect to the fields $\phi_\tau,\mbd{a},\mbd{b}$ and \[ G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0|\{\phi\},\{\mbd{A}\})= \prod_{\tau=1}^2 \int_{\mbd{r}_{0,\tau}}^{\mbd{r}_\tau} {\cal D} \mbd{r}_\tau(s_\tau)\mbox{\rm exp} \left\{-\int_0^{L_\tau}{\cal L}_{\phi_\tau}\right\}\times \] \begin{equation} \mbox{\rm exp}\left\{-i\gamma_\tau\int_0^{L_\tau}\mbd{A}^\tau(\mbd{r}_\tau(s_\tau)) \cdot d\mbd{r}_\tau(s_\tau)\right\} \label{hhh} \end{equation} The parameters $\gamma_\tau$ appearing in the above equation are defined as in eq. (\ref{positiona}) and the fields $\mbd{A}^\tau$ are given by the relation: \begin{equation} \mbd{A}^\tau=\mbd{a}+(-1)^\tau\mbd{b}\qquad\qquad\tau=1,2 \label{ataudef} \end{equation} Exploiting formulas (\ref{eee}) and (\ref{wleval}) in order to perform the two independent integrations over the fields $\phi_\tau,\mbd{a}$ and $\mbd{b}$ in eq. (\ref{ggg}), one finds that: \[ G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0) = \] \[ \int_{\mbd{r}_{0,1}}^{\mbd{r}_1}{\cal D}\mbd{r}_1(s_1)\int_{\mbd{r}_{0,2}}^{\mbd{r}_2}{\cal D} \mbd{r}_2(s_2)\mbox{\rm exp}\left\{-\int_0^{L_1}ds_1{\cal L}_0(\dot{\mbd{r}}_1(s_1)) -\int_0^{L_2}ds_2{\cal L}_0(\dot{\mbd{r}}_2(s_2))\right\}\times \] \begin{equation} \mbox{\rm exp} \left\{ -\frac 1 {2a^2} \sum_{\tau,\tau'=1}^2 \int_0^{L_\tau} ds_\tau\int_0^{L_{\tau'}} ds_{\tau'}v_{\tau\tau'} (\mbd{r}_\tau(s_\tau)-\mbd{r}_{\tau'}(s_{\tau'}))-i\lambda\chi(C_1,C_2)\right\} \label{glamsn} \end{equation} By inverse Fourier transformation in $\lambda$ as in eq. (\ref{ftongm}), we obtain from eq. 
(\ref{glamsn}): \[ G_m(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0) = \] \[ \int_{\mbd{r}_{0,1}}^{\mbd{r}_1}{\cal D}\mbd{r}_1(s_1)\int_{\mbd{r}_{0,2}}^{\mbd{r}_2}{\cal D} \mbd{r}_2(s_2)\mbox{\rm exp}\left\{-\int_0^{L_1}ds_1{\cal L}_0(\dot{\mbd{r}}_1(s_1)) -\int_0^{L_2}ds_2{\cal L}_0(\dot{\mbd{r}}_2(s_2))\right\}\times \] \begin{equation} \mbox{\rm exp} \left\{ -\frac 1 {2a^2} \sum_{\tau,\tau'=1}^2 \int_0^{L_\tau} ds_\tau\int_0^{L_{\tau'}} ds_{\tau'}v_{\tau\tau'} (\mbd{r}_\tau(s_\tau)-\mbd{r}_{\tau'}(s_{\tau'}))\right\}\delta(\chi(C_1,C_2)-m) \label{gglamsn} \end{equation} This is the desired generalization of eq. (\ref{singlepoldyn}) to the case of two fluctuating polymers. In fact, if we ignore the fluctuations of $P_2$ and the reciprocal repulsion among the molecules of $P_1$ and $P_2$, which was not taken into account in Section 2, eq. (\ref{gglamsn}) exactly coincides with eq. (\ref{singlepoldyn}). The advantage of having coupled the polymers with the Chern--Simons fields is that now each polymer $P_1$ and $P_2$ undergoes an independent random walk. Their mutual interaction, which in eqs. (\ref{singlepoldyn}) and (\ref{gglamsn}) occurred through the Gauss invariant $\chi(C_1,C_2)$, is now mediated by the Chern--Simons fields, as it is possible to see from eqs. (\ref{ggg}--\ref{hhh}). Let us now express the Green function $G_\lambda(\{\mbd{r}\},\{L\},\{\mbd{r}_0\},0)$ in terms of second quantized fields. To this purpose, we split the Green function $G_\lambda(\{\mbd{r}\},\{L\},\{\mbd{r}_0\},0|\{\phi\},\{\mbd{A}\})$ of eq.
(\ref{hhh}) as follows: \begin{equation} G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0|\{\phi\},\{\mbd{A}\})= G_\lambda(\mbd{r}_1,L_1;\mbd{r}_{0,1},0|\phi_1,\mbd{A}^1) G_\lambda(\mbd{r}_2,L_2;\mbd{r}_{0,2},0|\phi_2,\mbd{A}^2) \label{iii} \end{equation} where, for $\tau=1,2$: \begin{equation} G_\lambda(\mbd{r}_\tau,L_\tau;\mbd{r}_{0,\tau},0|\phi_\tau,\mbd{A}^\tau)= \int_{\mbd{r}_{0,\tau}}^{\mbd{r}_\tau}{\cal D}\mbd{r}_\tau(s_\tau) \mbox{\rm exp} \left\{-\int_0^{L_\tau}\left[{\cal L}_{\phi_\tau}-i\gamma_\tau \mbd{A}^\tau\cdot d\mbd{r}_\tau(s_\tau)\right]\right\} \label{jjj} \end{equation} Each Green function $G_\lambda(\mbd{r}_\tau,L_\tau;\mbd{r}_{0,\tau},0|\phi_\tau,\mbd{A}^\tau)$, $\tau=1,2$, is that of a particle diffusing under the vector potential $\mbd{A}^\tau$ and the imaginary electric potential $i\phi_\tau$. As a consequence, it can be written as the solution of a Schr\"odinger-like equation analogous to (\ref{ddd}). In analogy with the previous Section, it is convenient to introduce the chemical potentials conjugate to $L_\tau$ and to consider the Laplace transformed Green function: \begin{equation} G_\lambda(\{\mbd{r}\},\{\mbd{r}_0\};\{z\}|\{\phi\},\{\mbd{A}\})= \int_0^\infty\int_0^\infty dL_1dL_2e^{-(z_1L_1+z_2L_2)} G_\lambda(\{\mbd{r}\},\{L\};\{\mbd{r}_0\},0|\{\phi\},\{\mbd{A}\}) \label{tvlaplt} \end{equation} From eqs. (\ref{iii})--(\ref{jjj}) we have: \begin{equation} G_\lambda(\{\mbd{r}\},\{\mbd{r}_0\};\{z\}|\{\phi\},\{\mbd{A}\})= G_\lambda(\mbd{r}_1,\mbd{r}_{0,1};z_1|\phi_1,\mbd{A}^1) G_\lambda(\mbd{r}_2,\mbd{r}_{0,2};z_2|\phi_2,\mbd{A}^2) \label{glzexp} \end{equation} where the functions $G_\lambda(\mbd{r}_\tau,\mbd{r}_{0,\tau};z_\tau|\phi_\tau,\mbd{A}^\tau)$ have already been defined in eq. (\ref{defla}). For each value of $\tau=1,2$, they explicitly satisfy eq.
(\ref{lapltransf}), which we rewrite here for convenience: \begin{equation} \left\{z_\tau -\frac a 6\mbd{D}_\tau^2+i\phi_\tau\right\} G_\lambda(\mbd{r}_\tau, \mbd{r}_{0,\tau};z_\tau|\phi_\tau,\mbd{A}^\tau)= \delta(\mbd{r}_\tau-\mbd{r}_{0,\tau}) \label{tplapltransf} \end{equation} The covariant derivatives $\mbd{D}_\tau$ are defined as follows: $\mbd{D}_\tau=\mbld{\nabla}+i\gamma_\tau \mbd{A}^\tau$. The solution of eq. (\ref{tplapltransf}) in terms of complex scalar fields $\psi^*_\tau,\psi_\tau$ is: \begin{equation} G_\lambda(\mbd{r}_\tau, \mbd{r}_{0,\tau};z_\tau|\phi_\tau,\mbd{A}^\tau)= \frac 1{Z_\tau}\int{\cal D}\psi^*_\tau{\cal D}\psi_\tau \psi^*_\tau(\mbd{r}_\tau)\psi_\tau(\mbd{r}_{0,\tau})e^{-F[\psi_\tau]} \label{kkk} \end{equation} where, setting $|\mbd{D}_\tau\psi_\tau|^2= (\mbd{D}_\tau\psi_\tau)^\dagger\cdot\mbd{D}_\tau\psi_\tau$, the free energy $F[\psi_\tau]$ is given by: \begin{equation} F[\psi_\tau]=\int d^3\mbd{r}\left[ \frac a 6|{\mbd D}_\tau\psi_\tau|^2 + (z_\tau+i\phi_\tau)|\psi_\tau|^2 \right] \label{freeenergy} \end{equation} Finally, the partition function $Z_\tau$ is: \begin{equation} Z_\tau=\int{\cal D}\psi^*_\tau{\cal D}\psi_\tau e^{-F[\psi_\tau]} \label{mmm} \end{equation} As we see from eq. (\ref{freeenergy}), $F[\psi]$ is nothing but the Ginzburg--Landau free energy of a superconductor in a fluctuating magnetic field. We are now ready to perform the average over the auxiliary fields $\phi_\tau$ in the Green function (\ref{glzexp}). This integration is however highly non-trivial. As a matter of fact, using eq. (\ref{kkk}) to express the original Green function (\ref{glzexp}) in terms of second quantized fields, one immediately realizes that the integrand is not gaussian due to the presence of the factors $Z_\tau^{-1}$. To solve this problem, we exploit the replica trick. Thus we introduce $2n$ replica fields $\psi_\tau^{* \omega},\psi_\tau^\omega$, with $\tau=1,2$ and $\omega=1,\ldots,n$.
In terms of these fields, the Green function (\ref{glzexp}) can be written as follows: \begin{equation} G_\lambda(\{\mbd{r}\},\{\mbd{r}_0\};\{z\}|\{\phi\},\{\mbd{A}\})= \lim_{n\to 0}\prod_{\tau=1}^2\left[\int\prod_{\omega=1}^n {\cal D} \psi_\tau^{*\omega} {\cal D} \psi_\tau^{\omega}\psi_\tau^{*\overline{\omega}}(\mbd{r}_\tau) \psi_\tau^{\overline{\omega}}(\mbd{r}_{0,\tau})e^{-F[\psi_\tau^\omega]}\right] \label{replicatrick} \end{equation} where $\overline{\omega}$ is an arbitrary integer chosen in the range $1\le \overline{\omega}\le n$. According to the replica trick, we will also assume that the limit for $n$ going to zero commutes with the integrations over the fields $\mbd{A}^\tau$ and $\phi_\tau$. In this way, the integral over the auxiliary fields $\phi_\tau$ becomes gaussian and can be easily performed. Supposing in analogy with the previous Section that the self-avoiding potential is of the kind: \begin{equation} v_{\tau\tau'}(\mbd{r})=v^0_{\tau\tau'}\delta(\mbd{r}) \label{sapot} \end{equation} we have after some calculations that: \[ \langle G_\lambda(\{\mbd{r}\},\{\mbd{r}_0\};\{z\}|\{\phi\},\{\mbd{A}\})\rangle_{\{\phi\}, \mbd{a},\mbd{b}}= \] \begin{equation} \lim_{n\to 0}\int{\cal D}\mbd{a}{\cal D}\mbd{b} \prod_{\tau=1}^2\left[\prod_{\omega=1}^n {\cal D} \psi_\tau^{*\omega} {\cal D} \psi_\tau^{\omega} \psi_\tau^{*\overline{\omega}}(\mbd{r}_\tau) \psi_\tau^{\overline{\omega}}(\mbd{r}_{0,\tau})\right]\mbox{\rm exp} \left\{-{\cal A}(\mbd{a},\mbd{b},\{\Psi\})\right\} \label{finalf} \end{equation} where, using the same notation as in eqs. (\ref{onepolymerfinal}--\ref{spfreeenergy}), the action ${\cal A}(\mbd{a},\mbd{b},\{\Psi\})$ is: \[ {\cal A}(\mbd{a},\mbd{b},\{\Psi\})=i{\cal A}_{CS}(\mbd{a},\kappa)+ i{\cal A}_{CS}(\mbd{b},-\kappa)+ \] \begin{equation} \sum_{\tau=1}^2 \left[\frac {a}6||\mbd{D}_\tau\Psi_\tau||^2+z_\tau||\Psi_\tau||^2\right] +\sum_{\tau,\tau'=1}^2||\Psi_\tau||^2v^0_{\tau\tau'}||\Psi_{\tau'}||^2 \label{actionf} \end{equation} Eq.
(\ref{finalf}) is the generalization of eq. (\ref{onepolymerfinal}), describing in terms of fields the configurational probability for two entangling polymers $P_1$ and $P_2$ to have their ends in $\mbd{r}_\tau$ and $\mbd{r}_{0,\tau}$ for $\tau=1,2$ respectively. This probability is given in the space of the chemical potentials $\lambda$ and $z_\tau$ conjugate to the topological number $m$ and the contour lengths $L_\tau$. It is also possible to find an expression of the above configurational probability in the space of the topological number $m$ by taking the inverse Fourier transformation of the Green function (\ref{finalf}): \begin{equation} G_m(\{\mbd{r}\},\{\mbd{r}_0\},\{z\})= \int \frac{d\lambda}{2\pi}e^{-i\lambda m} \langle G_\lambda(\{\mbd{r}\},\{\mbd{r}_0\},\{z\}|\{\phi\},\{\mbd{A}\})\rangle_{\{\phi\}, \mbd{a},\mbd{b}} \label{ftinv} \end{equation} To this purpose, we split the action (\ref{actionf}) into three parts: \begin{equation} {\cal A}(\mbd{a},\mbd{b},\{\Psi\})= {\cal A}_0(\{\mbd{A}\},\{\Psi\})+\lambda \int d^3\mbd{r}\enskip\mbd{i}_2(\mbd{r})\cdot \mbd{A}^2(\mbd{r})+ \frac a 6\lambda^2\int \frac{d^3\mbd{r}}{16} \enskip \mbd{A}^2\cdot \mbd{A}^2||\Psi_2 (\mbd{r})||^2\label{daction} \end{equation} where ${\cal A}_0(\{\mbd{A}\},\{\Psi\})$ is the contribution to the action ${\cal A}(\mbd{a},\mbd{b},\{\Psi\})$ which does not contain $\lambda$: \[ {\cal A}_0(\{\mbd{A}\},\{\Psi\})= i{\cal A}_{CS}(\mbd{a},\kappa)+ i{\cal A}_{CS}(\mbd{b},-\kappa)+ \] \begin{equation} \frac{a}6||\mbd{D}_1\Psi_1||^2+ \frac{a}6||\mbld{\nabla}\Psi_2||^2+\sum_{\tau=1}^2 z_\tau||\Psi_\tau||^2 +\sum_{\tau,\tau'=1}^2||\Psi_\tau||^2v^0_{\tau\tau'}||\Psi_{\tau'}||^2 \label{actionzero} \end{equation} and \begin{equation} \mbd{i}_2(\mbd{r})=\frac a {12} \frac 1{2i} \left(\Psi_2^*\nabla\Psi_2 -\Psi_2\nabla\Psi_2^*\right)= \frac a {12} \frac 1{2i}\sum_{\omega=1}^n \left(\psi_2^{*\omega}\nabla\psi_2^\omega- \psi_2^{\omega}\nabla\psi_2^{*\omega}\right) \label{irrr} \end{equation}
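Since eq. (\ref{daction}) is quadratic in $\lambda$, the inverse Fourier transform (\ref{ftinv}) is a Gaussian integral. Its zero-dimensional skeleton, with a number $S_1$ standing in for $\int d^3\mbd{r}\,\mbd{i}_2\cdot\mbd{A}^2$ and a number $K$ for the coefficient of $\lambda^2$, can be checked numerically (parameter values are arbitrary):

```python
import cmath, math

def lam_integral_numeric(m, S1, K, L=40.0, steps=120000):
    # int dlam/(2 pi) exp(-i lam m - S1 lam - K lam^2), midpoint rule
    total = 0.0 + 0.0j
    d = 2 * L / steps
    for k in range(steps):
        lam = -L + (k + 0.5) * d
        total += cmath.exp(-1j * lam * m - S1 * lam - K * lam * lam)
    return total * d / (2 * math.pi)

def lam_integral_exact(m, S1, K):
    # completing the square gives the structure appearing in the result below:
    # (1/(2 sqrt(pi K))) exp(-(m - i S1)^2 / (4K))
    return cmath.exp(-(m - 1j * S1) ** 2 / (4 * K)) / (2 * math.sqrt(math.pi * K))

num = lam_integral_numeric(2.0, 0.6, 0.8)
exact = lam_integral_exact(2.0, 0.6, 0.8)
assert abs(num - exact) < 1e-6
```

Up to an irrelevant constant, the transform produces a factor $K^{-1/2}$ and a Gaussian in the shifted variable $m - iS_1$, which is the form obtained in the next step of the derivation.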
Performing the Gaussian integral in (\ref{ftinv}) and neglecting irrelevant constant factors, we have: \[ G_m(\{\mbd{r}\},\{\mbd{r}_0\};\{z\})= \lim_{n\to 0}\int{\cal D}\mbd{a}{\cal D}\mbd{b}\prod_{\tau=1}^2\left[ \prod_{\omega=1}^n {\cal D} \psi_\tau^{*\omega} {\cal D} \psi_\tau^{\omega} \psi_\tau^{*\overline{\omega}}(\mbd{r}_\tau) \psi_\tau^{\overline{\omega}}(\mbd{r}_{0,\tau})\right]\times \] \begin{equation} \mbox{\rm exp}\left\{ -{\cal A}_0(\{\mbd{A}\},\{\Psi\})\right\} \mbox{\rm exp}\left\{-\frac 1 4 K^{-1}\left(m-i \int d^3\mbd{r}\enskip\mbd{i}_2(\mbd{r})\cdot\mbd{A}^2(\mbd{r})\right)^2\right\}K^{-\frac 1 2} \label{ffinal} \end{equation} where \begin{equation} K=\frac a 6\int \frac{d^3\mbd{r}}{16}\mbd{A}^2\cdot\mbd{A}^2||\Psi_2||^2 \label{kdef} \end{equation} \section{Conclusions} In this paper a C--S based model of polymers subjected to topological constraints has been proposed and applied to the description of two entangling polymers $P_1$ and $P_2$. As we have seen in Section 3, the two polymers, whose action in eq. (\ref{gglamsn}) was complicated by the reciprocal interactions introduced by the Gauss invariant, become completely decoupled after the introduction of auxiliary C--S fields (see eq. (\ref{daction})). Each polymer $P_\tau$, $\tau=1,2$, interacts only with the auxiliary fields $\mbd{A}_\tau,\phi_\tau$ and its action is formally that of a particle moving in the background of the ``electromagnetic'' field $(\mbd{A}^\tau,i\phi_\tau)$. In this way, the application of the methods explained in Section 2 can also be extended to the case in which the configurations of both polymers are non-static. One advantage of having included the fluctuations of the second polymer is that the external magnetic field $\mbd{B}$ of eq. (\ref{onepolymerfinal}) has been replaced by the quantum C--S fields of eqs. (\ref{finalf}) and (\ref{ffinal}).
Thus, there is no need to give a physical distribution for the configurations of the static polymers or to average them using mean-field type techniques. On the other hand, the complications introduced in our approach by the fact that both polymers are non-static are minimal. The expression of the Green function in eq. (\ref{finalf}) differs from that of eq. (\ref{gglamsn}) only by the presence of two sets of replica fields instead of one. The inclusion of the C--S fields in the treatment of polymers opens the possibility of taking into account more sophisticated topological invariants than the Gauss linking number. For instance, after replacing in our procedure the fields $\mbd{a}$ and $\mbd{b}$ with their nonabelian counterparts, one can obtain higher order knot invariants from the radiative corrections of eq. (\ref{selflinkelim}), as shown in ref. \cite{agua}. However, in the nonabelian case the elimination of the undesired Gauss self-linking terms occurring in eq. (\ref{selflinkelim}) is valid only to first order in the C--S coupling constant $\kappa$, but not at higher orders. A possible solution to this problem is the introduction of a suitable framing such that \begin{equation} \chi_{framed}(C,C)=0 \label{framedlink} \end{equation} On the other hand, as already mentioned above, getting rid of the self-linking contributions with the help of a framing is not as convenient as in pure C--S field theories. In fact, a framing like that in eq. (\ref{framedlink}) necessarily depends on the form of the loop $C$, which in turn is a dynamical variable in the present context. Thus the choice of a framing would severely complicate the form of the Schr\"odinger equation (\ref{tplapltransf}), preventing its solution in terms of second quantized fields. In conclusion, we have shown here that it is possible to couple abelian C--S fields to polymers while avoiding the ambiguous self-linking contributions.
As an example, all the polymer configurational probabilities derived in \cite{tanaka} for a single test polymer chain have been generalized to the case of two fluctuating polymers. In the future we hope that it will be possible to extend the present approach to the case of nonabelian C--S fields and to situations in which there is an arbitrary number of polymers. Finally, we remark that our results could also be applied to other physical systems such as vortex rings and dislocation lines embedded in a solid, in which topological constraints among entangled one-dimensional excitations in a continuum play essential roles.
\section{Introduction} \label{sec:intro} Named entity recognition (NER) is the widely studied task consisting in identifying text spans that denote {\em named entities} such as person, location and organization names, to name the most important types. Such text spans are called named entity {\em mentions}. In NER, mentions are generally not only identified, but also classified according to a more or less fine-grained ontology, thereby allowing for instance to distinguish between the telecommunication company {\em Orange} and the town {\em Orange} in southern France (amongst others). Importantly, it has long been recognised that the type of named entities can be defined in two ways, which underlies the notion of metonymy: the intrinsic type ({\em France} is always a location) and the contextual type (in {\em la France a signé un traité} `France signed a treaty', {\em France} denotes an organization). NER has been an important task in natural language processing for quite some time. It was already the focus of the MUC conferences and associated shared tasks \cite{marsh1998muc}, and later that of the CoNLL~2003 and ACE shared tasks \cite{conll03,doddington2004automatic}. Traditionally, as for instance was the case for the MUC shared tasks, only person names, location names, organization names, and sometimes ``other proper names'' are considered. However, the notion of named entity mention is sometimes extended to cover any text span that does not follow the general grammar of the language at hand, but a type- and often culture-specific grammar, thereby including entities ranging from product and brand names to dates and from URLs to monetary amounts and other types of numbers. As for many other tasks, NER was first addressed using rule-based approaches, followed by statistical and now neural machine learning techniques (see Section~\ref{subsec:sota} for a brief discussion on NER approaches). 
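As a minimal illustration of what span-based NER annotation looks like, the sketch below encodes the contextual reading of the example above ({\em la France a signé un traité}) using the common BIO convention; the encoding scheme and the helper function are an illustrative convention, not a description of the corpora discussed below.

```python
# Illustrative sketch of span-based NER annotation with the common BIO
# convention: B- marks the first token of a mention, I- the following
# tokens, O everything else. Entity types and the helper are ours.
def bio_encode(tokens, mentions):
    """mentions: list of (start_token, end_token_exclusive, entity_type)."""
    tags = ["O"] * len(tokens)
    for start, end, etype in mentions:
        tags[start] = "B-" + etype
        for i in range(start + 1, end):
            tags[i] = "I-" + etype
    return tags

tokens = ["la", "France", "a", "signé", "un", "traité"]
# Contextual reading: "France" denotes an organization here.
tags = bio_encode(tokens, [(1, 2, "Organization")])
# tags == ["O", "B-Organization", "O", "O", "O", "O"]
```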
Of course, evaluating NER systems as well as training machine-learning-based NER systems, statistical or neural, require named-entity-annotated corpora. Unfortunately, most named entity annotated French corpora are oral transcripts, and they are not always freely available. The ESTER and ESTER2 corpora (60 plus 150 hours of NER-annotated broadcast transcripts) \cite{ester_inter09}, as well as the Quaero \cite{grouin2011law} corpus are based on oral transcripts (radio broadcasts). Interestingly, the Quaero corpus relies on an original, very rich and structured definition of the notion of named entity \cite{rosset11}. It contains both the intrinsic and the contextual types of each mention, whereas the ESTER and ESTER2 corpora only provide the contextual type. \newcite{sagot:hal-00703108} describe the addition to the French Treebank (FTB) \citelanguageresource{ftbLR} in its FTB-UC version \citelanguageresource{ftbucLR} of a new, freely available annotation layer providing named entity information in terms of span and type (NER) as well as reference (NE linking), using the Wikipedia-based {\sf Aleda}\xspace \cite{sagot12aleda} as a reference entity database. This was the first freely available French corpus annotated with referential named entity information and the first freely available such corpus for the written journalistic genre. However, this annotation is provided in the form of an XML-annotated text with sentence boundaries but no tokenization. This corpus will be referred to as FTB-NE in the rest of the article. Since the publication of that named entity FTB annotation layer, the field has evolved in many ways. Firstly, most treebanks are now available as part of the {\em Universal Dependencies} (UD)\footnote{\url{https://universaldependencies.org}} treebank collection. Secondly, neural approaches have considerably improved the state of the art in natural language processing in general and in NER in particular. 
In this regard, the emergence of contextual language models has played a major role. However, surprisingly few neural French NER systems have been published.\footnote{We are only aware of the {\em entity-fishing} NER (and NE linking) system developed by Patrice Lopez, a \href{https://github.com/kermitt2/entity-fishing}{freely available} yet unpublished system.} This might be because large contextual language models for French have only been made available very recently \cite{arxiv19camembert}. But it is also the result of the fact that getting access to the FTB with its named entity layer as well as using this corpus were not straightforward tasks. For a number of technical reasons, re-aligning the XML-format named entity FTB annotation layer created by \newcite{sagot:hal-00703108} with the ``official'' version of the FTB or, later, with the version of the FTB provided in the Universal Dependency (UD) framework was not a straightforward task.\footnote{Note that the UD version of the FTB is freely downloadable, but does not include the original tokens or lemmas. Only people with access to the original FTB can restore this information, as required by the intellectual property status of the source text.} Moreover, due to the intellectual property status of the source text in the FTB, the named entity annotations could only be provided to people having signed the FTB license, which prevented them from being made freely downloadable online. The goal of this paper is to establish a new state of the art for French NER by (i)~providing a new, easy-to-use UD-aligned version of the named entity annotation layer in the FTB and (ii)~using this corpus as a training and evaluation dataset for carrying out NER experiments using state-of-the-art architectures, thereby improving over the previous state of the art in French NER. 
In particular, by using both FastText embeddings \cite{bojanowski-etal-2017-enriching} and one of the versions of the CamemBERT French neural contextual language model \cite{arxiv19camembert} within an LSTM-CRF architecture, we can reach an F1-score of 90.25, a 6.55-point improvement over the previous state-of-the-art system SEM \cite{dupont2017exploration}. \section{A named entity annotation layer for the UD version of the French TreeBank} In this section, we describe the process whereby we re-aligned the named entity FTB annotations by \newcite{sagot:hal-00703108} with the UD version of the FTB. This makes it possible to share these annotations in the form of a set of additional columns that can easily be pasted to the UD FTB file. This new version of the named entity FTB layer is much more readily usable than the original XML version, and will serve as a basis for our experiments in the next sections. Yet information about the named entity annotation guidelines, process and results can only be found in \newcite{sagot:hal-00703108}, which is written in French. We therefore begin with a brief summary of this publication before describing the alignment process.
\subsection{The original named entity FTB layer} \label{subsec:originalannotations} \newcite{sagot:hal-00703108} annotated the FTB with the span, absolute type\footnote{ Every mention of {\em France} is annotated as a {\tt Location} with subtype {\tt Country}, as given in the {\sf Aleda}\xspace database, even if in context the mentioned entity is a political organization, the French people, a sports team, etc.}, sometimes subtype and {\sf Aleda}\xspace unique identifier of each named entity mention.\footnote{Only proper nouns are considered as named entity mentions, thereby excluding other types of referential expressions.} Annotations are restricted to person, location, organization and company names, as well as a few product names.\footnote{More precisely, we used a tagset of 7 base NE types: {\tt Person}, {\tt Location}, {\tt Organization}, {\tt Company}, {\tt Product}, {\tt POI} (Point of Interest) and {\tt FictionChar}.} There are no nested entities. Non-capitalized entity mentions (e.g.~{\em banque mondiale} `World Bank') are annotated only if they can be disambiguated independently of their context. Entity mentions that require the context to be disambiguated (e.g.~{\em Banque centrale}) are only annotated if they are capitalized.\footnote{So for instance, in {\em université de Nantes} `Nantes university', only {\em Nantes} is annotated, as a city, as {\em université} is written in lowercase letters. However, {\em Université de Nantes} `Nantes University' is wholly annotated as an organization. It is non-ambiguous because {\em Université} is capitalized.
{\em Université de Montpellier} `Montpellier University' being ambiguous when the text of the FTB was written and when the named entity annotations were produced, only {\em Montpellier} is annotated, as a city.} For person names, grammatical or contextual words around the mention are not included in the mention (e.g.~in {\em M.~Jacques Chirac} or {\em le Président Jacques Chirac}, only {\em Jacques Chirac} is included in the mention). Tags used for the annotation carry the following information: \begin{itemize} \item the identifier of the NE in the {\sf Aleda}\xspace database ({\tt eid} attribute); when a named entity is not present in the database, the identifier is {\tt null},\footnote{Specific conventions for entities that have merged, changed name, ceased to exist as such (e.g.~{\em Tchequoslovaquie}) or evolved in other ways are described in \newcite{sagot:hal-00703108}.} \item the normalized name of the named entity as given in {\sf Aleda}\xspace; for locations it is their name as given in GeoNames and for the others, it is the title of the corresponding French Wikipedia article, \item the type and, when relevant, the subtype of the entity. \end{itemize} Here are two annotation examples:\\ \noindent{\small\tt <ENAMEX type="Organization" eid="1000000000016778" name="Confédération française démocratique du travail">CFDT</ENAMEX>\\ <ENAMEX type="Location" sub\_type="Country" eid="2000000001861060" name="Japan">Japon</ENAMEX>} \newcite{sagot:hal-00703108} annotated the 2007 version of the FTB treebank (with the exception of sentences that did not receive any functional annotation), i.e.~12,351 sentences comprising 350,931 tokens. The annotation process consisted in a manual correction and validation of the output of a rule- and heuristics-based named entity recognition and linking tool in an XML editor. Only a single person annotated the corpus, despite the limitations of such a protocol, as acknowledged by \newcite{sagot:hal-00703108}.
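Inline ENAMEX annotations such as the examples above can be turned into character-offset annotations over the raw, tag-free text with a few lines of code. The following is a hedged sketch of the idea using Python's standard XML library, not the actual tool used to produce or process the corpus; it relies on the fact that there are no nested entities.

```python
# Hedged sketch: convert an inline ENAMEX-annotated sentence into the raw
# text plus character-offset mentions (start, end, type, eid).
import xml.etree.ElementTree as ET

def enamex_to_offsets(xml_sentence):
    # Wrap in a dummy root so the sentence fragment parses as XML.
    root = ET.fromstring("<s>" + xml_sentence + "</s>")
    raw, mentions = root.text or "", []
    for el in root:  # flat iteration is enough: no nested entities
        if el.tag == "ENAMEX":
            start = len(raw)
            raw += el.text or ""
            mentions.append((start, len(raw), el.get("type"), el.get("eid")))
        raw += el.tail or ""  # text following the closing tag
    return raw, mentions

raw, mentions = enamex_to_offsets(
    'Le <ENAMEX type="Location" eid="2000000001861060">Japon</ENAMEX> exporte.')
# raw == "Le Japon exporte."
# mentions == [(3, 8, "Location", "2000000001861060")]
```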
In total, 5,890 of the 12,351 sentences contain at least one named entity mention. 11,636 mentions were annotated, which are distributed as follows: 3,761 location names, 3,357 company names, 2,381 organization names, 2,025 person names, 67 product names, 29 fiction character names and 15 points of interest. \subsection{Alignment to the UD version of the FTB} \label{subsec:alignment} The named entity (NE) annotation layer for the FTB was developed using an XML editor on the raw text of the FTB. Annotations are provided as inline XML elements within the sentence-segmented but non-tokenized text. For creating our NER models, we first had to align these XML annotations with the already tokenized UD version of the FTB. Sentences were provided in the same order for both corpora, so we did not have to align them. For each sentence, we created a mapping $M$ between the raw text of the NE-annotated FTB (i.e.~after having removed all XML annotations) and tokens in the UD version of the FTB corpus. More precisely, character offsets in the FTB-NE raw text were mapped to token offsets in the tokenized FTB-UD. This alignment was done using case-insensitive character-based comparison and resulted in a mapping from spans in the raw text to spans in the tokenized corpus. We used the inlined XML annotations to create offline, character-level NE annotations for each sentence, and reported the NE annotations at the token level in the FTB-UD using the mapping $M$ obtained. We logged each error (i.e.~an unaligned NE or token) and then manually corrected the corpora, as those cases were always errors in one of the two corpora and not alignment errors. We found 70 errors in FTB-NE and 3 errors in FTB-UD. Errors in FTB-NE were mainly XML entity problems (unhandled "\&", for instance) or slightly altered text (for example, a missing comma). Errors in FTB-UD were probably some XML artifacts.
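The character-to-token mapping $M$ described above can be sketched as follows; the greedy case-insensitive matching below is an illustration of the idea, not our exact implementation.

```python
# Hedged sketch of the mapping M: each token of the tokenized corpus is
# matched case-insensitively against the raw text, yielding a character
# span per token; NE character spans can then be projected onto tokens.
def align_tokens(raw_text, tokens):
    """Map each token to its (start, end) character span in raw_text."""
    low, spans, pos = raw_text.lower(), [], 0
    for tok in tokens:
        start = low.find(tok.lower(), pos)
        if start < 0:
            # In practice such cases were logged and corrected by hand.
            raise ValueError(f"unaligned token: {tok!r}")
        spans.append((start, start + len(tok)))
        pos = start + len(tok)
    return spans

def chars_to_token_span(spans, start, end):
    """Token indices whose character span falls inside [start, end)."""
    return [i for i, (s, e) in enumerate(spans) if s >= start and e <= end]

spans = align_tokens("Le Japon exporte.", ["Le", "Japon", "exporte", "."])
# spans == [(0, 2), (3, 8), (9, 16), (16, 17)]
# An NE mention covering characters [3, 8) maps to token index 1:
# chars_to_token_span(spans, 3, 8) == [1]
```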
\section{Benchmarking NER Models} \subsection{Brief state of the art of NER} \label{subsec:sota} As mentioned above, NER was first addressed using rule-based approaches, followed by statistical and now neural machine learning techniques. In addition, many systems use a lexicon of named entity mentions, usually called a ``gazetteer'' in this context. Most of the advances in NER have been achieved on English, in particular with the CoNLL 2003 \cite{conll03} and Ontonotes~v5 \cite{pradhan2012conll,pradhan2013towards} corpora. Until recently, NER was traditionally tackled using Conditional Random Fields (CRFs) \cite{lafferty2001conditional}, which are well suited to the task; CRFs were later used as decoding layers for Bi-LSTM architectures \cite{huang2015bidirectional,lample2016neural}, showing considerable improvements over CRFs alone. These Bi-LSTM-CRF architectures were later enhanced with contextualized word embeddings which yet again brought major improvements to the task \cite{peters2018deep,akbik2018contextual}. Finally, large pre-trained architectures settled the current state of the art showing a small yet important improvement over previous NER-specific architectures \cite{devlin2019bert,baevski2019cloze}. For French, rule-based systems were developed until relatively recently, due to the lack of proper training data \cite{sekine04,rosset05,stern10np,nouvel11}. The limited availability of a few annotated corpora (cf.~Section~\ref{sec:intro}) made it possible to apply statistical machine learning techniques \cite{bechetcharton_icassp2010,dupont14,dupont2017exploration} as well as hybrid techniques combining handcrafted grammars and machine learning \cite{coop_taln11}. To the best of our knowledge, the best results previously published on FTB NER are those obtained by \newcite{dupont2017exploration}, who trained both CRF and BiLSTM-CRF architectures and improved them using heuristics and pre-trained word embeddings. We use this system as our strong baseline.
Leaving aside French and English, the CoNLL 2002 shared task included NER corpora for Spanish and Dutch corpora \cite{tjong2002introduction} while the CoNLL 2003 shared task included a German corpus \cite{conll03}. The recent efforts by \newcite{strakova2019neural} settled the state of the art for Spanish and Dutch, while \newcite{akbik2018contextual} did so for German. \begin{table*} \ra{1.1} \centering\small \begin{tabular}{lrrr} \toprule \textsc{Model} & \textsc{Precision} & \textsc{Recall} & \textsc{F1-Score} \\ \midrule \multicolumn{4}{c}{\em baseline}\\ SEM (CRF) & 87.18 & 80.48 & 83.70\\ \midrule LSTM-seq2seq & 85.10 & 81.87 & 83.45\\ + FastText & 86.98 & 83.07 & 84.98\\ + FastText + FrELMo & 89.49 & 87.48 & 88.47\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-WWM} & 89.79 & 88.86 & 89.32\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-WWM} + FrELMo & 90.00 & 88.60 & 89.30\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-WWM} & 90.31 & 89.29 & 89.80\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-WWM} + FrELMo & 90.11 & 88.86 & 89.48\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-SWM} & 90.09 & 89.46 & 89.77\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-SWM} + FrELMo & 90.11 & 88.95 & 89.53\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-SWM} & 90.31 & 89.38 & 89.84\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-SWM} + FrELMo & 90.64 & 89.46 & \underline{90.05}\\ + FastText + CamemBERT\textsubscript{CCNET-500K-WWM} & \underline{90.68} & 89.03 & 89.85\\ + FastText + CamemBERT\textsubscript{CCNET-500K-WWM} + FrELMo & 90.13 & 88.34 & 89.23\\ + FastText + CamemBERT\textsubscript{CCNET-LARGE-WWM} & 90.39 & 88.51 & 89.44\\ + FastText + CamemBERT\textsubscript{CCNET-LARGE-WWM} + FrELMo & 89.72 & 88.17 & 88.94\\ \midrule \multicolumn{4}{c}{\em LSTM-CRF + embeddings}\\ LSTM-CRF & 85.87 & 81.35 & 83.55\\ + FastText & 88.53 & 84.63 & 86.53\\ + FastText + FrELMo & 88.89 & 88.43 & 88.66\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-WWM} & 90.47 & 
88.51 & 89.48\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-WWM} + FrELMo & 89.70 & 88.77 & 89.24\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-WWM} & 90.24 & 89.46 & 89.85\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-WWM} + FrELMo & 89.38 & 88.69 & 89.03\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-SWM} & \textbf{90.96} & \underline{89.55} & \textbf{90.25}\\ + FastText + CamemBERT\textsubscript{OSCAR-BASE-SWM} + FrELMo & 89.44 & 88.51 & 88.98\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-SWM} & 90.09 & 88.69 & 89.38\\ + FastText + CamemBERT\textsubscript{CCNET-BASE-SWM} + FrELMo & 88.18 & 87.65 & 87.92\\ + FastText + CamemBERT\textsubscript{CCNET-500K-WWM} & 89.46 & 88.69 & 89.07\\ + FastText + CamemBERT\textsubscript{CCNET-500K-WWM} + FrELMo & 90.11 & 88.86 & 89.48\\ + FastText + CamemBERT\textsubscript{CCNET-LARGE-WWM} & 89.19 & 88.34 & 88.76\\ + FastText + CamemBERT\textsubscript{CCNET-LARGE-WWM} + FrELMo & 89.03 & 88.34 & 88.69\\ \midrule \multicolumn{4}{c}{\em fine-tuning}\\ mBERT & 80.35 & 84.02 & 82.14\\ % CamemBERT\textsubscript{OSCAR-BASE-WWM} & 89.36 & 89.18 & 89.27\\ CamemBERT\textsubscript{CCNET-500K-WWM} & 89.35 & 88.81 & 89.08 \\ CamemBERT\textsubscript{CCNET-LARGE-WWM} & 88.76 & \textbf{89.58} & 89.39\\ \bottomrule \end{tabular} \caption{Results on the test set for the best development set scores.} \label{tab:results_ordered} \end{table*} \begin{table*} \ra{1.1} \centering\small \begin{tabular}{lrrr} \toprule \textsc{Model} & \textsc{Precision} & \textsc{Recall} & \textsc{F1-Score} \\ \midrule \multicolumn{4}{c}{\em shuf 1}\\ SEM(dev) & 92.96 & 87.84 & 90.33\\ LSTM-CRF+CamemBERT\textsubscript{OSCAR-BASE-SWM}(dev) & \underline{93.77} & \underline{94.00} & \underline{93.89}\\ SEM(test) & 91.88 & 87.14 & 89.45\\ LSTM-CRF+CamemBERT\textsubscript{OSCAR-BASE-SWM}(test) & \textbf{92.59} & \textbf{93.96} & \textbf{93.27}\\ \midrule \multicolumn{4}{c}{\em shuf 2}\\ SEM(dev) & 91.67 & 85.96 & 88.73\\ 
LSTM-CRF+CamemBERT\textsubscript{OSCAR-BASE-SWM}(dev) & \underline{93.15} & \underline{94.21} & \underline{93.68}\\ SEM(test) & 90.57 & 87.76 & 89.14\\ LSTM-CRF+CamemBERT\textsubscript{OSCAR-BASE-SWM}(test) & \textbf{92.63} & \textbf{94.31} & \textbf{93.46}\\ \midrule \multicolumn{4}{c}{\em shuf 3}\\ SEM(dev) & 92.53 & 88.75 & 90.60\\ LSTM-CRF+CamemBERT\textsubscript{OSCAR-BASE-SWM}(dev) & \underline{94.85} & \underline{95.82} & \underline{95.34}\\ SEM(test) & 90.68 & 85.00 & 87.74\\ LSTM-CRF+CamemBERT\textsubscript{OSCAR-BASE-SWM}(test) & \textbf{91.30} & \textbf{92.67} & \textbf{91.98}\\ \bottomrule \end{tabular} \caption{Results on the development and test sets of the three shuffled splits.} \label{tab:results_shuffled} \end{table*} \subsection{Experiments} We used SEM \cite{dupont2017exploration} as our strong baseline because, to the best of our knowledge, it was the previous state of the art for named entity recognition on the FTB-NE corpus. Other French NER systems are available, such as the one provided by spaCy. However, it was trained on another corpus called WikiNER, making the results non-comparable. We can also cite the system of \cite{stern2012joint}. This system was trained on another newswire corpus (AFP) using the same annotation guidelines, so the results given in that article are not directly comparable. This model was trained on FTB-NE in \newcite{stern2013identification} (table C.7, page 303), but the article is written in French. The model yielded an F1-score of 0.7564, which makes it a weaker baseline than SEM. We can cite yet another NER system, namely grobid-ner.\footnote{\url{https://github.com/kermitt2/grobid-ner\#corpus-lemonde-ftb-french}} It was trained on the FTB-NE and yields an F1-score of 0.8739. Two things are to be taken into consideration: the tagset was slightly modified and scores were averaged over a 10-fold cross validation. To see why this is important for FTB-NE, see Section~\ref{subsubsec:shuffling}.
In this section, we will compare our strong baseline with a series of neural models. We will use the two current state-of-the-art neural architectures for NER, namely seq2seq and LSTM-CRF models. We will use various pre-trained embeddings in said architectures: fastText, CamemBERT\xspace (a French BERT-like model) and FrELMo (a French ELMo model) embeddings. \subsubsection{SEM} SEM \cite{dupont2017exploration} is a tool that relies on linear-chain CRFs \cite{lafferty2001conditional} to perform tagging. SEM uses Wapiti \cite{lavergne2010practical} v1.5.0 as its linear-chain CRF implementation. SEM uses the following features for NER: \begin{itemize} \item token, prefixes/suffixes of length 1 to 5 and a Boolean isDigit feature in a [-2, 2] window; \item previous/next common noun in the sentence; \item 10 gazetteers (including NE lists and trigger words for NEs) applied with some priority rules in a [-2, 2] window; \item a ``fill-in-the-gaps'' gazetteer feature where tokens not found in any gazetteer are replaced by their POS, as described in \cite{raymond2010reconnaissance}. This feature uses token unigrams and token bigrams in a [-2, 2] window. \item tag unigrams and bigrams. \end{itemize} We trained our own SEM model by using SEM features on gold tokenization and optimized the L1 and L2 penalties on the development set. The metric used to estimate convergence of the model is the error on the development set ($1 - accuracy$). Our best result on the development set was obtained using the rprop algorithm, a 0.1 L1 penalty and a 0.1 L2 penalty. SEM also uses an NE mention broadcasting post-processing step (mentions found at least once are used as a gazetteer to tag unlabeled mentions), but we did not observe any improvement using this post-processing with the best hyperparameters on the development set.
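The windowed token features listed above can be sketched as follows; the feature names and the padding token are ours, not SEM's, and only the token/prefix/suffix/isDigit features are illustrated.

```python
# Hedged sketch of SEM-style windowed CRF features for one position:
# token identity, prefixes and suffixes of length 1..5, and a Boolean
# isDigit flag, over a [-2, 2] window. Feature names are illustrative.
def window_features(tokens, i, window=2, affix_max=5):
    feats = {}
    for d in range(-window, window + 1):
        j = i + d
        tok = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
        feats[f"tok[{d}]"] = tok
        feats[f"isDigit[{d}]"] = tok.isdigit()
        for k in range(1, min(affix_max, len(tok)) + 1):
            feats[f"pre{k}[{d}]"] = tok[:k]
            feats[f"suf{k}[{d}]"] = tok[-k:]
    return feats

f = window_features(["Jacques", "Chirac", "préside"], 1)
# e.g. f["tok[0]"] == "Chirac", f["pre2[-1]"] == "Ja", f["tok[2]"] == "<PAD>"
```

In a CRF toolkit such as Wapiti, these feature dictionaries would be serialized as pattern templates rather than built in Python, but the underlying feature space is the same.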
\subsubsection{Neural models} In order to study the relative impact of different word vector representations and different architectures, we trained a number of NER neural models that differ in multiple ways. They use zero to three of the following vector representations: FastText non-contextual embeddings \cite{bojanowski-etal-2017-enriching}, the FrELMo contextual language model obtained by training the ELMo architecture on the OSCAR large-coverage Common-Crawl-based corpus developed by \newcite{ortiz2019asynchronous}, and one of multiple CamemBERT\xspace language models \cite{arxiv19camembert}. CamemBERT\xspace models are transformer-based models based on an architecture similar to that of RoBERTa \cite{liu2019roberta}, an improvement over the widely used and successful BERT model \cite{devlin2019bert}. The CamemBERT\xspace models we use in our experiments differ in multiple ways: \begin{itemize} \item Training corpus: OSCAR (cited above) or CCNet, another Common-Crawl-based corpus \cite{wenzek2019ccnet} classified by language, of an almost identical size ($\sim$32 billion tokens); although extracted using similar pipelines from Common Crawl, they differ slightly insofar as OSCAR better reflects the variety of genre and style found in Common Crawl, whereas CCNet was designed to better match the style of Wikipedia; moreover, OSCAR is freely available, whereas only the scripts necessary to rebuild CCNet can be downloaded freely. For comparison purposes, we also display the results of an experiment using the mBERT multilingual BERT model trained on the Wikipedias of over 100 languages. \item Model size: following \newcite{devlin2019bert}, we use both ``BASE'' and ``LARGE'' models; these models differ by their number of layers (12 vs.~24), hidden dimensions (768 vs.~1024), attention heads (12 vs.~16) and, as a result, their number of parameters (110M vs.~340M).
\item Masking strategy: the objective function used to train a CamemBERT\xspace model is a masked language model objective. However, BERT-like architectures like CamemBERT\xspace rely on a fixed vocabulary of explicitly predefined size obtained by an algorithm that splits rarer words into subwords, which are part of the vocabulary together with more frequent words. As a result, it is possible to use a whole-word masked language objective (the model is trained to guess missing words, which might be made of more than one subword) or a subword masked language objective (the model is trained to guess missing subwords). Our model names use the acronyms WWM and SWM respectively to indicate the type of masking used. \end{itemize} We use these word vector representations in three types of architectures: \begin{itemize} \item Fine-tuning architectures: in this case, we add a dedicated linear layer to the first subword token of each word, and the whole architecture is then fine-tuned to the NER task on the training data. \item Embedding architectures: word vectors produced by language models are used as word embeddings. We use such embeddings in two types of LSTM-based architectures: an LSTM fed into a seq2seq layer and an LSTM fed into a CRF layer. In such configurations, the use of several word representations at the same time is possible, using concatenation as a combination operator. For instance, in Table~\ref{tab:results_ordered}, the model FastText + CamemBERT\textsubscript{OSCAR-BASE-WWM} + FrELMo under the header ``{\em LSTM-CRF + embeddings}'' corresponds to a model using the LSTM-CRF architecture and, as embeddings, the concatenation of FastText embeddings, the output of the CamemBERT\xspace ``BASE'' model trained on OSCAR with a whole-word masking objective, and the output of the FrELMo language model. \end{itemize} For our neural models, we optimized hyperparameters using the F1-score on the development set as our convergence metric.
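The combination of word representations by concatenation can be sketched as follows (toy dimensions; in the actual models, the per-token vectors would come from the pre-trained FastText, CamemBERT\xspace and FrELMo models, and the concatenated vectors would feed the LSTM layer):

```python
# Hedged sketch of the combination operator: several per-token word
# representations are concatenated into a single vector per token.
# Plain lists stand in for the real embedding tensors.
def concat_embeddings(*representations):
    n = len(representations[0])
    assert all(len(r) == n for r in representations), "one vector per token"
    # For each token, concatenate its vectors across all representations.
    return [sum((list(r[i]) for r in representations), []) for i in range(n)]

fasttext = [[0.1, 0.2], [0.3, 0.4]]              # 2 tokens, toy dim 2
camembert = [[1.0, 1.1, 1.2], [2.0, 2.1, 2.2]]   # same tokens, toy dim 3
combined = concat_embeddings(fasttext, camembert)
# combined == [[0.1, 0.2, 1.0, 1.1, 1.2], [0.3, 0.4, 2.0, 2.1, 2.2]]
```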
We train each model three times with three different seeds, select the best seed on the development set, and report the results of this seed on the test set in Table~\ref{tab:results_ordered}. \subsubsection{Results} \paragraph{Word Embeddings:} Results obtained by SEM and by our neural models are shown in table \ref{tab:results_ordered}. The first important result that should be noted is that the LSTM-CRF and LSTM-seq2seq models have performances similar to that of the SEM (CRF) baseline when they are not augmented with any kind of embeddings. Just adding classical fastText word embeddings dramatically increases the performance of the models. \paragraph{ELMo Embeddings:} Adding contextualized ELMo embeddings further increases the performance of both architectures. However, we note that for the LSTM-CRF the difference is not as big as between the versions with and without fastText word embeddings. For the seq2seq model, the opposite holds: adding ELMo gives a good improvement while fastText does not improve the results as much. \paragraph{CamemBERT\xspace Embeddings:} Adding the CamemBERT\xspace embeddings always increases the performance of the LSTM-based models. However, as opposed to adding ELMo, the difference with/without CamemBERT\xspace is equally considerable for both the LSTM-seq2seq and the LSTM-CRF. In fact, adding CamemBERT\xspace embeddings increases the original scores far more than ELMo embeddings do, so much so that the state-of-the-art model is the LSTM + CRF + FastText + CamemBERT\textsubscript{OSCAR-BASE-SWM}. \paragraph{CamemBERT\xspace + FrELMo:} Contrary to the results given in \newcite{strakova2019neural}, adding ELMo to CamemBERT\xspace did not have a positive impact on the performance of the models. Our hypothesis for these results is that, contrary to \newcite{strakova2019neural}, we trained ELMo and CamemBERT\xspace on the same corpus.
We think that, in our case, ELMo either does not bring any new information or even interferes with CamemBERT\xspace. \paragraph{Base vs large:} an interesting observation is that using the large models negatively impacts their performance. One possible reason could be that, because the models are larger, the information is more sparsely distributed, making training on the FTB-NE, a relatively small corpus, harder. \subsubsection{Impact of shuffling the data} \label{subsubsec:shuffling} One important characteristic of the FTB is that the underlying text is made of articles from the newspaper Le Monde that are chronologically ordered. Moreover, the standard development and test sets are at the end of the corpus, which means that they are made of articles that are more recent than those found in the training set. This means that a lot of entities in the development and test sets may be new and therefore unseen in the training set. To estimate the impact of this distribution, we shuffled the data, created a new training/development/test split of the same lengths as in the standard split, and retrained and reevaluated our models. We repeated this process 3 times to avoid unexpected biases. The raw results of this experiment are given in Table~\ref{tab:results_shuffled}. We can see that the shuffled splits result in improvements on all metrics, the improvement in F1-score on the test set ranging from 4.04 to 5.75 (or 25\% to 35\% error reduction) for our SEM baseline, and from 1.73 to 3.21 (or 18\% to 30\% error reduction) for our LSTM-CRF architectures, reaching scores comparable to the English state-of-the-art. This highlights a specific difficulty of the FTB-NE corpus, where the development and test sets seem to contain non-negligible amounts of unknown entities. This specificity, however, yields a quality estimation that is more in line with real use cases, where unknown NEs are frequent.
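The resplitting procedure described above can be sketched as follows (a minimal sketch; `corpus` and the split sizes stand in for the actual FTB data):

```python
import random

def shuffled_split(sentences, n_train, n_dev, n_test, seed):
    """Shuffle the corpus and re-cut train/dev/test sets with the same
    lengths as the standard (chronological) split."""
    assert n_train + n_dev + n_test == len(sentences)
    rng = random.Random(seed)
    shuffled = sentences[:]  # copy, so the original order stays intact
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    dev = shuffled[n_train:n_train + n_dev]
    test = shuffled[n_train + n_dev:]
    return train, dev, test

# Repeat with three seeds to avoid unexpected biases, as in the text.
corpus = [f"sentence-{i}" for i in range(100)]
splits = [shuffled_split(corpus, 80, 10, 10, seed) for seed in (0, 1, 2)]
```

Each resulting split preserves the set sizes of the standard split while breaking the chronological ordering, which is exactly what the experiment needs to isolate.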
This is especially the case when processing newly produced texts with models trained on FTB-NE, as the text annotated in the FTB is made of articles around 20 years old. \section{Conclusion} \label{sec:conclusion} In this article, we introduce a new, more usable version of the named entity annotation layer of the French TreeBank. We aligned the named entity annotation to reference segmentation, which will make it possible to better integrate NER into the UD version of the FTB. We establish a new state-of-the-art for French NER using state-of-the-art neural techniques and recently produced neural language models for French. Our best neural model reaches an F1-score which is 6.55 points higher (a 40\% error reduction) than the strong baseline provided by the SEM system. We also highlight how the FTB-NE is a good approximation of a real use case. Its chronological partition increases the number of unseen entities, which allows for a better estimation of the generalisation capacities of machine learning models than if it were randomised. Integration of the NER annotations in the UD version of the FTB would make it possible to train more refined models, either by using more information or through multitask learning, by learning POS and NER at the same time. We could also use dependency relationships to provide additional information to an NE linking algorithm. One interesting point to investigate is that using Large embeddings overall has a negative impact on the models' performance. It could be because larger models store information relevant to NER more sparsely, making it harder for trained models to capitalize on it. We would like to investigate this hypothesis in future research.
\subsection*{Acknowledgments} This work was partly funded by the French national ANR grant BASNUM (\mbox{ANR-18-CE38-0003}), as well as by the last author's chair in the PRAIRIE institute,\footnote{\url{http://prairie-institute.fr/}} funded by the French national ANR as part of the ``Investissements d’avenir'' programme under the reference \mbox{ANR-19-P3IA-0001}. The authors are grateful to the Inria Sophia Antipolis - Méditerranée ``Nef''\footnote{\url{https://wiki.inria.fr/wikis/ClustersSophia}} computation cluster for providing resources and support. \section{Bibliographical References} \label{main:ref} \bibliographystyle{lrec}
\section{Introduction} The $F$-signature of a local ring $(R,\mathfrak{m},K)$ is an important numerical invariant for rings of prime characteristic $p>0$. Suppose that $R$ is a reduced ring, and the Frobenius map is a finite morphism. Then, we can work with the ring consisting of its $p^e$-th roots, denoted by $R^{1/p^e}$, which is a module-finite extension of $R$. The free rank\index{free rank} of a finitely generated $R$-module $M$ is the largest rank of a free $R$-linear direct summand of $M$. Kunz \cite{KunzReg} showed that if $R$ is local, then $R$ is regular if and only if $R^{1/p}$ is free (equivalently, $R^{1/p^e}$ is free for every, or some, $e\geq 1$). One may then expect that the free rank of $R^{1/p^e}$ quantifies the singularity of $R$. This information is encoded in the following invariant. The $F$-signature of $R$ is defined by $$ s(R)=\lim\limits_{e\to\infty }\frac{\mathrm{freerank}_R(R^{1/p^e}) }{p^{e(d+\alpha)}}, $$ where $d=\dim(R)$, and $p^\alpha=\dim_K K^{1/p}$. This number was implicitly introduced in the work on rings of differential operators by Smith and Van den Bergh~\cite{SmithVDB} (see also \cite{SeibertHilbertKunz}) and formally defined by Huneke and Leuschke \cite{HLMCM}, provided that the limit exists. After partial results \cite{HLMCM,YaoObsFsig,AE,SinghSemigroup}, Tucker \cite{TuckerFSig} showed that the $F$-signature exists as a limit, rather than just a limsup, in general. Yao \cite{YaoObsFsig} gave an important characterization of the $F$-signature in terms of ideals defined by Cartier maps. Let \[I_e=\{r\in R\ |\ \phi(r^{1/p^e})\in\mathfrak{m} \hbox{ for every }\phi\in\operatorname{Hom}_R(R^{1/p^e},R)\}.\] Then, \[ s(R)=\lim\limits_{e\to\infty }\frac{\lambda_R(R/I_e)}{p^{ed}}, \] where $\lambda_R(M)$ denotes the length of $M$ as an $R$-module. The $F$-signature has a number of properties that relate to different aspects of the singularity of $R$. For example, \begin{itemize} \item[(i)] $s(R)\in [0,1]$.
\item[(ii)] $R$ is regular if and only if $s(R)=1$ \cite{HLMCM}. \item[(iii)] $R$ is strongly $F$-regular if and only if $s(R)>0$ \cite{AL}. \end{itemize} Unfortunately, computing the $F$-signature is a very difficult task. Notable examples include certain rings of invariants: $s(R^G)=\frac{1}{|G|}$ when $G$ is a finite subgroup of $\mathrm{SL}_n(K)$ of invertible order acting linearly on a polynomial ring $R$. In addition, the $F$-signature of an affine toric variety can be computed as the volume of a certain polytope \cite{WatanabeYoshida,VonKorff} (see also \cite{SinghSemigroup}). By its nature, there is no notion of $F$-signature in characteristic zero. However, there has been work trying to find an appropriate analogue. Some approaches to this are via reduction to positive characteristic, symmetric powers of syzygies and of K\"{a}hler differentials \cite{SymSig,BCHigh}. Suppose that $K$ is a perfect field and $R$ is the localization of a finitely generated $K$-algebra at a maximal ideal. Just as $R^{1/p}$ detects singularity in prime characteristic, the module of K\"{a}hler differentials, $\Omega_{R|K}$, detects singularity in any characteristic. As a characteristic-free analogue for $R^{1/p^e}$, we consider ``higher'' K\"{a}hler differentials. Let $\ModDif{n}{R}{K} = (R \otimes_K R) / \Delta_{R|K}^{n+1}$, where $\Delta_{R|K}$ is the kernel of the multiplication map $\mu:R\otimes_K R \rightarrow R$. These modules are known as the \emph{modules of principal parts}, introduced by Grothendieck \cite[D\'{e}finition 16.3.1]{EGAIV}. We note that the module of $K$-linear K\"{a}hler differentials of $R$ is isomorphic to $\Delta_{R|K}/\Delta^2_{R|K}$ and that $D^n_{R|K}\cong \operatorname{Hom}_R(\ModDif{n}{R}{K},R)$, where $ D^n_{R|K} $ denotes the $R$-module of all differential operators on $R$ of order at most $n$.
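To make the modules of principal parts concrete, here is a small one-variable example of ours (not taken from the results above), where these modules are free:

```latex
% Example (ours): principal parts of R = K[x] over K.
% The kernel of the multiplication map is generated by a single element:
%   \Delta_{R|K} = (1 \otimes x - x \otimes 1).
% Setting \varepsilon = 1 \otimes x - x \otimes 1, so that
% 1 \otimes x = x \otimes 1 + \varepsilon, one obtains
\[
  \ModDif{n}{R}{K} \;=\; (R \otimes_K R)/\Delta_{R|K}^{\,n+1}
  \;\cong\; R[\varepsilon]/(\varepsilon^{n+1}),
\]
% a free R-module of rank n+1 with basis 1, \varepsilon, \dots, \varepsilon^n.
% Under this identification, r \mapsto \overline{1 \otimes r} sends f(x) to
% the class of f(x+\varepsilon), i.e., to the order-n Taylor expansion of f,
% with divided powers replacing the factors 1/a! in positive characteristic.
```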
In Theorem~\ref{Mel}, we give a characteristic-free characterization of regularity: $R$ is regular if and only if $\ModDif{n}{R}{K}$ is free for every (some) $n\geq 1$. As with the $F$-signature, one can expect that the free rank of $\ModDif{n}{R}{K}$ quantifies singularity. In this work, we introduce a numerical invariant that does this. We define the $K$-principal parts signature of $R$ by $$\pps{K}(R)=\limsup\limits_{n\to\infty}\frac{\mathrm{freerank}_R(\ModDif{n}{R}{K} )}{\operatorname{rank}( \ModDif{n}{R}{K} ) }.$$ The free rank of $\ModDif{n}{R}{K}$ has an easy interpretation in terms of partial differential equations on $R$. It is the maximal $t$ such that there exists a surjection $\ModDif{n}{R}{K} \rightarrow R^t$. Such a surjection is the same as an independent collection of $t$ differential operators $\delta_1, \ldots, \delta_t$ of order $\leq n$ with the property that the algebraic partial differential equation $\delta_j(-) = 1$ has a solution in $R$, see Lemma \ref{unitarysystem}. Hence the differential signature is a measure for the asymptotic size of such ``unitary'' operators. We observe that if ``free rank'' is replaced by ``minimal number of generators'' in the previous definition, then one obtains the Hilbert-Samuel multiplicity of $R$. This characterization motivates the following analogy: the principal parts signature is to the $F$-signature as Hilbert-Samuel multiplicity is to the Hilbert-Kunz multiplicity (see Remark~\ref{analogy}). Unfortunately, the module of principal parts might not be a finitely generated $R$-module for rings that are not $K$-algebras essentially of finite type. In order to have a definition of a signature for any local $K$-algebra, we make use of the action of differential operators on the ring $R$. Suppose that $(R,\mathfrak{m})$ is a local ring containing a field $K$ of any characteristic. 
We consider the \emph{differential powers} of $\mathfrak{m}$, which are given by $$\mathfrak{m}\dif{n}{K} =\{ r\in R\ |\ \delta(r)\in\mathfrak{m} \hbox{ for every }\delta\in D^{n-1}_{R|K} \}.$$ For prime ideals in polynomial rings, the differential powers coincide with symbolic powers, by the Zariski-Nagata Theorem \cite{ZariskiHolFunct,Nagata} (see also \cite{SurveySP}). We define the $K$-differential signature of $R$ by \[ \dm{K}(R)=\limsup\limits_{n\to\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{K})}{n^d / d!}.\] In Theorem~\ref{ThmDiffSigRanks}, we show that if $R$ is the localization of a finitely generated $K$-algebra at a maximal ideal, with $K=\overline{K}$, then $\dm{K}(R)=\pps{K}(R)$. In fact, we show this equality in a more general and technical setting. We are able to show that the differential signature shares several features with the $F$-signature. Let $R$ be a reduced ring that is the localization of a finitely generated $K$-algebra at a maximal ideal, with $K$ perfect; see Definition~\ref{def-pseudocoefficient} for a slightly more general setup. \begin{itemize} \item[(i)] If $R$ is a domain, then $\dm{K}(R)\in [0,1]$. (Corollary~\ref{leq-1}) \item[(ii)] If $R$ is regular, then $\dm{K}(R)=1$. (Example~\ref{reg-1}) \item[(iii-a)] If $\dm{K}(R)>0$, then $R$ is a simple $D_{R|K}$-module. (Theorem~\ref{ThmDifMultDsimple}) \item[(iii-b)] If $R$ is a graded Gorenstein domain in characteristic zero with an isolated singularity and $\dm{K}(R)>0$, then $R$ has negative $a$-invariant. (Theorem~\ref{Possiganeg}) \item[(iii-c)] If $K$ has positive characteristic and $R$ is $F$-pure, then $\dm{K}(R)>0$ if and only if $R$ is strongly $F$-regular. (Theorem~\ref{ThmFregPos}) \item[(iii-d)] If $K$ has characteristic zero, $R$ has dense $F$-pure type, the anticanonical cover of $R$ is finitely generated, and $\dm{K}(R)>0$, then $R$ is log-terminal. (Theorem~\ref{ThmKLTPos}) \item[(iii-e)] If $R$ is a direct summand of a regular ring, then $\dm{K}(R)>0$. 
(Theorem~\ref{ThmDirSumPos}) \end{itemize} The behavior of the $F$-signature in a relative setting, say over $\operatorname{Spec} \mathbb{Z}$, is not well understood, since it is not possible to compare the splitting behavior of the Frobenius morphisms for different prime characteristics. An advantage of the differential signature is that its definition refers to the module $P^n_{R|K}$, which behaves nicely in a relative setting. See Subsection \ref{SubSecrelative} and in particular Corollaries~\ref{freerankprincipalrelative} and \ref{freerankprincipalrelativesequence}. We are able to compute the differential signature of many interesting rings. For instance, $\dm{K}(R^G)=\frac{1}{|G|}=s(R^G)$ when $G$ is a finite subgroup of $\mathrm{SL}_n$ of invertible order acting linearly on a polynomial ring $R$ (Theorem~\ref{group-formula}). In Theorem~\ref{quadricsignature}, we show that $\dm{K}(R)=\left(\frac{1}{2} \right)^{d-1}$ for a quadric hypersurface of dimension $d\geq 2$. We also give a formula for the differential signature of a normal affine toric ring in terms of the volume of a certain polytope in Theorem~\ref{ThmToric}. The values we obtain are positive and rational, but may differ from the values of the $F$-signature. However, the formulas are highly analogous; roughly, up to a common scaling factor, the $F$-signature is the volume of the intersection of the $d$-dimensional unit cube with a linear subspace, and the differential signature is $d!$ times the volume of the $d$-dimensional unit simplex intersected with the same linear subspace. In Theorem~\ref{ThmDet}, we compute the differential signature of $K[X]/I_t(X)$, where $K$ is a field of characteristic zero, $X$ is a generic matrix, and $I_t(X)$ is the ideal generated by the $t\times t$-minors of $X$. Formulas for symmetric and skew-symmetric rank varieties appear in the same section. We point out that the $F$-signature for these classes of rings is not known for $t>2$.
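As a sanity check on the normalization $n^d/d!$ in the definition of $\dm{K}$ (our computation; it recovers the value $1$ for regular rings mentioned earlier):

```latex
% Sanity check (ours): R = K[x_1,...,x_d] localized at m = (x_1,...,x_d),
% K a field of characteristic zero. The operators (1/\alpha!)\partial^\alpha
% with |\alpha| \le n-1 detect every monomial of degree < n, so
\[
  \mathfrak{m}\dif{n}{K} \;=\; \mathfrak{m}^{n},
  \qquad
  \lambda_R\!\left(R/\mathfrak{m}\dif{n}{K}\right)
  \;=\; \binom{n+d-1}{d}
  \;=\; \frac{n^{d}}{d!} + O(n^{d-1}),
\]
% and therefore \dm{K}(R) = 1, consistent with the regular case.
```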
We expect that the differential signature will find applications to geometry and singularity theory. One such application is in the forthcoming work \cite{JS}, where differential signature is applied to give a characteristic-free approach to bounding \'etale fundamental groups of singularities. Unfortunately, we are not able to show in general that the differential signature exists as a limit rather than just a limsup. In Theorem~\ref{existenceandraionality}, we show that the limit exists and is rational when $R$ is an algebra with coefficient field $K$ and the associated graded ring of the differential operators with respect to the order filtration is finitely generated. Moreover, every statement about the differential signature in this paper, including (non)vanishing, bounds, and computations, is equally valid for the liminf definition as for the limsup. However, the simple example $\mathbb{C}[x^2,x^3]$ in Example~\ref{example-not-D-graded} illustrates why recent advances in convergence of numerical limits in Commutative Algebra do not apply to differential signature. For our work on the differential signature, we develop new tools to study differential operators. We introduce an algorithmic framework for the module of principal parts, differential operators, and their induced action on symmetric powers of the module of K\"ahler differentials (Subsections \ref{SubSecJacobi} and \ref{SubAlg}). This is in particular important for computing the differential signature of quadric hypersurfaces. We define the differential core of an ideal, which emulates the splitting prime of a ring, see Section~\ref{SecDiffPrimes}. We define and work with ring extensions that are differentially extensible, a notion implicit in the work of Levasseur-Stafford \cite{LS} and Schwarz \cite{Schwarz}. 
In Theorem~\ref{BS-Thm}, we use these extensions to reduce the computation of the new notion of Bernstein-Sato polynomials in certain singular rings \cite{AMHNB} to the classical Bernstein-Sato theory. As a consequence, we obtain a method for computing the Bernstein-Sato polynomials of elements of determinantal rings and other rings of invariants; see Remark~\ref{Rem-com-BS}. Our approach to Bernstein-Sato polynomials is a generalization of the methods of Hsiao and Matusevich \cite{Hsiao-Matusevich}. We also establish some new results and new proofs of old results that are of interest independent of the differential signature. In Theorem~\ref{Mel}, we give a characteristic-free characterization of regularity that can be interpreted as a converse to the Zariski-Nagata Theorem (see \cite{ZariskiHolFunct,Nagata,SurveySP}). In Remark~\ref{Kunz}, we compare our characterization with Kunz's criterion for regularity in prime characteristic \cite{KunzReg}. We provide a Fedder/Glassbrenner-type criterion for $D$-simplicity in Subsection~\ref{FGCriterion}. We also give a new description of the $F$-signature in Remark~\ref{analogy}, and a simplified proof of the polytope formula for the $F$-signature of toric varieties in Theorem~\ref{WYVKS}. An index with notation and new or uncommon terminology is provided at the end for the reader's convenience. \section{Differential operators and $n$-differentials}\label{basics} In this section, we recall the definition of the ring of differential operators and several characterizations of it. We often work in the following setting.
\begin{definition}\label{def-pseudocoefficient} A ring $(R,\mathfrak{m},\Bbbk)$ is an \emph{algebra with pseudocoefficient field $K$}\index{algebra with pseudocoefficient field} if $R$ is a commutative $K$-algebra with $1\neq 0$ that is either $\mathbb{N}$-graded {and finitely generated over} $R_0=K$ or local and essentially of finite type over $K$, $\mathfrak{m}$ is the homogeneous (respectively, the unique) maximal ideal of $R$, and the inclusion map from $K\rightarrow \Bbbk=R/\mathfrak{m}$ is a finite separable extension of fields. If $K=\Bbbk$, which is automatic in the graded case, then $R$ is an \emph{algebra with coefficient field $K$}. \end{definition} We note that if $K$ is perfect, and $R$ is a finitely generated $K$-algebra, then $R_{\mathfrak{m}}$ is an algebra with pseudocoefficient field $K$ for any maximal ideal $\mathfrak{m} \subset R$. That is, coordinate rings of closed points of varieties over perfect fields are of this form. \subsection{Differential operators} \begin{definition} Let $R$ be a commutative ring and $A$ be a subring, both with $1\neq 0$. The \textit{$A$-linear differential operators of $R$ of order $n$},\index{$D^n_{R \vert A}$}\index{$D_{R \vert A}$} $D^{n}_{R|A}\subseteq \operatorname{Hom}_A(R,R)$, are defined inductively as follows: \begin{itemize} \item[(i)] $D^{0}_{R|A} =\operatorname{Hom}_R(R,R).$ \item[(ii)] $D^{n}_{R|A} = \{\delta\in \operatorname{Hom}_A(R,R)\;|\; [\delta,r]\in D^{n-1}_{R|A} \;\forall \; r \in R \}.$ \end{itemize} The ring of $A$-linear differential operators is defined by $D_{R|A}=\displaystyle\bigcup_{n\in\mathbb{N}}D^{n}_{R|A}$. Throughout, when we discuss rings of differential operators $D_{R|A}$, the rings $A$ and $R$ are assumed to be commutative with $1\neq 0$. 
More generally, if $M$ and $N$ are $R$-modules, one defines $D^{n}_{R|A}(M,N)$ and $D_{R|A}(M,N)$ as submodules of $\operatorname{Hom}_A(M,N)$ by similar rules: \begin{itemize} \item[(i)] $D^{0}_{R|A}(M,N) =\operatorname{Hom}_R(M,N).$ \item[(ii)] If $r_M$ and $r_N$ denote the multiplication by $r$ in the modules $M$ and $N$ respectively, then $$D^{n}_{R|A} (M,N)= \{\delta\in \operatorname{Hom}_A(M,N)\;|\; \delta r_M- r_N\delta \in D^{n-1}_{R|A}(M,N) \;\forall \; r \in R \}.$$ \end{itemize} \end{definition} These are $R$-modules, where $R$ acts by postcomposition of maps. The ring structure on $D_{R|A}$ is given by composition and satisfies $D^m_{R|A}D^n_{R|A} \subseteq D^{m+n}_{R|A}$. { \begin{remark} In this note, we often say that a local $K$-algebra essentially of finite type is smooth without assuming that it is of finite type; in many sources, smooth entails finite type. Here, by smooth we mean that $R$ is flat over $K$ and that $\Omega_{R|K}$ is projective as an $R$-module. \end{remark} } \begin{example}\label{example-regular-D} Let $(R,\mathfrak{m},\Bbbk)$ be an algebra with pseudocoefficient field $K$. We recall that a graded or local ring essentially of finite type over $K$ is smooth if and only if $R\otimes_K L$ is regular for some (equivalently, every) perfect field $L/K$; in particular, if $K$ is perfect, this is equivalent to $R$ being regular. Set $d=\dim(R)$. In this case, $R$ is differentially smooth over $K$ \cite[16.10.2]{EGAIV}. We then have the following description of the ring of differential operators. Let $\mathfrak{m}=(x_1,\dots,x_d)$, where $x_i$ are homogeneous in the graded case. Then, for each $\alpha=(a_1,\dots,a_d) \in \mathbb{N}^d$, there is a differential operator $\delta_{\alpha}$ such that \[\delta_{\alpha} (x_1^{b_1} \cdots x_d^{b_d}) = \binom{b_1}{a_1} \cdots \binom{b_d}{a_d} x_1^{b_1-a_1} \cdots x_d^{b_d-a_d},\] and $D^n_{R|K}=R\langle \, \delta_\alpha \ | \ |\alpha|\leq n \, \rangle$ \cite[16.11.2]{EGAIV}.
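In characteristic zero and one variable, the defining formula for $\delta_\alpha$ can be checked mechanically. The following sketch (ours, using exact rational arithmetic) implements $\delta_a=\frac{1}{a!}\frac{\partial^a}{\partial x^a}$ on monomials and recovers the binomial coefficients above:

```python
from fractions import Fraction
from math import comb, factorial

def delta(a, monomial):
    """Apply the divided-power operator (1/a!) d^a/dx^a to c*x^b,
    where monomial = (c, b). Returns the resulting (coeff, exponent)."""
    c, b = monomial
    if a > b:
        return (Fraction(0), 0)  # the derivative kills low-degree monomials
    # d^a/dx^a (x^b) = b(b-1)...(b-a+1) x^{b-a}; then divide by a!.
    falling = 1
    for i in range(a):
        falling *= (b - i)
    return (c * Fraction(falling, factorial(a)), b - a)

# delta_a(x^b) = binom(b, a) x^{b-a}, matching Grothendieck's description.
coeff, exp = delta(2, (Fraction(1), 5))  # (1/2!) d^2/dx^2 applied to x^5
```

The coefficient `falling / a!` is exactly $\binom{b}{a}$, which is why $\delta_\alpha$ makes sense integrally and hence in every characteristic, even though the factor $1/a!$ alone does not.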
In the graded case, where $R$ is a polynomial ring, we write $ \delta_{\alpha}=\frac{1}{a_1 !} \cdots \frac{1}{a_d !} \frac{\partial^{a_1}}{\partial x_1^{a_1}} \cdots \frac{\partial^{a_d}}{\partial x_d^{a_d}}$, where $\frac{\partial^{a_i}}{\partial x_i^{a_i}}$ is the $a_i$-iterate of the usual partial derivative $\frac{\partial}{\partial x_i}$. Since $\frac{1}{a_1 !} \cdots \frac{1}{a_d !} \in R$ for all $\alpha$ only when $R$ has characteristic zero, we take this as an honest equality of operators there, and merely a formal one in characteristic $p$. By a standard abuse of notation, we write $\frac{1}{\alpha !} \partial^{\alpha}$\index{${\frac{1}{\lambda !}} {{\partial^{\lambda}}} $} for the operator $\delta_{\alpha}$ in any characteristic. In particular, $D_{R|K}$ is generated by $R$ and the partial derivatives $\frac{\partial}{\partial x_i}$ in characteristic zero, and is a Noetherian noncommutative ring. In positive characteristic, $D_{R|K}$ is not Noetherian for $d>0$. \end{example} Even for nice rings of characteristic zero, the ring of differential operators may fail to be Noetherian. \begin{example}\label{example-BGG} Let $R=\mathbb{C}[x,y,z]/(x^3+y^3+z^3)$, and $\mathfrak{m}=(x,y,z)$ be the homogeneous maximal ideal. The following facts hold about $D_{R|\mathbb{C}}$ \cite{DiffNonNoeth}: \begin{itemize} \item $D_{R|\mathbb{C}}$ is a graded ring. \item $[D_{R|\mathbb{C}}]_{<0} = 0$. \item $[D_{R|\mathbb{C}}]_{0} = \mathbb{C}\langle 1, E, E^2, \dots \rangle$, where $E=x \frac{\partial}{\partial x} +y \frac{\partial}{\partial y}+z \frac{\partial}{\partial z}$. \item For every $n$, $\displaystyle \frac{[D^n_{R|\mathbb{C}}]_{1}}{[D^{n-1}_{R|\mathbb{C}}]_{1} +E [D^{n-1}_{R|\mathbb{C}}]_{1}}\cong \mathbb{C}^3$ as $\mathbb{C}$-vector spaces. \end{itemize} From this, it follows that $D_{R|\mathbb{C}}$ is not a finitely generated $\mathbb{C}$-algebra, and neither left- nor right-Noetherian.
\end{example} We note that $R$ is a left $D_{R|A}$-module, as elements of $D_{R|A}$ are endomorphisms of $R$. We call every $D_{R|A}$-submodule of $R$ a \emph{$D_{R|A}$-ideal}.\index{D-ideal} \begin{lemma}[{\cite[Lemma~4.1]{Switala}}]\label{diff-ops-are-cts} Let $R$ be a ring, $A$ be a subring, and $I\subseteq R$ be an ideal. Then every differential operator $\delta\in D_{R|A}$ is $I$-adically continuous. \end{lemma} \subsection{Modules of principal parts} A key description of the differential operators comes from the fact that they are represented by an $R$-module, analogously to how derivations are represented by the K\"ahler differentials. \begin{definition} Let $R$ be a ring and $A$ be a subring. The \textit{module of $n$-differentials}, or \textit{principal parts},\index{principal parts}\index{$\ModDif{n}{R}{A}$} of $R$ over $A$, is \[ \ModDif{n}{R}{A} = (R \otimes_A R) / \Delta_{R|A}^{n+1}\] where $\Delta_{R|A}$\index{$\Delta_{R \vert A}$} is the kernel of the multiplication map $\mu:R\otimes_A R \rightarrow R$. \end{definition} We warn the reader that even if $R$ is Noetherian, $\ModDif{}{R}{A}:=R\otimes_A R$\index{$\ModDif{}{R}{A}$} may not be, and $\ModDif{n}{R}{A}$ may fail to be finitely generated as an $R$-module. However, both of these finiteness conditions hold if $R$ is essentially of finite type over $A$. Note that for $A\subseteq R$ and $R$-modules $M$ and $N$, there is an $(R\otimes_A R)$-module structure on $\operatorname{Hom}_A(M,N)$ given by the rule \[ ((a \otimes b) \cdot \phi) (m) = a \phi(b m). \] We also endow $\operatorname{Hom}_R( R \otimes_A M, N)$ with an $(R\otimes_A R)$-module structure by the rule \[ ((a \otimes b) \cdot \phi) (r \otimes m) = \phi(ar \otimes bm). \] The natural isomorphism \begin{equation}\label{eq:Hom} \operatorname{Hom}_R(R\otimes_A M, N) \rightarrow \operatorname{Hom}_A(M,N) \end{equation} obtained by composing the adjunction isomorphism and the evaluation isomorphism is given by $\Phi(\phi)(m)=\phi(1\otimes m)$.
One has that $(a\otimes b)(\phi)(r\otimes m) = \phi(ar \otimes bm)= a\phi(r\otimes bm)$, so that $\Phi( (a\otimes b)(\phi))(m)=a \phi(1\otimes bm)$. On the other hand, $(a\otimes b)(\Phi(\phi))(m)=a \phi(1 \otimes bm)$, so the natural isomorphism is an isomorphism of $(R\otimes_A R)$-modules via the structures given above. The following characterization of differential operators is useful. \begin{lemma}[{\cite[2.2.3]{HeynemannSweedler}}]\label{diff-ann} A map $\delta\in \operatorname{Hom}_A(M,N)$ is a differential operator of order $\leq n$ if and only if $\Delta_{R|A}^{n+1} \cdot \delta=0$ under the $(R\otimes_A R)$-module action described above. \end{lemma} The modules of $n$-differentials represent the differential operators in the same way that the {K\"ahler} differentials represent the modules of derivations. Namely, let $d^n$\index{$d^n$}\index{universal differential} be the \emph{universal differential} $d^n:R\rightarrow \ModDif{n}{R}{A}$ given by $d^n(x)=\overline{1\otimes x}$. Then one has the following: \begin{proposition}[{\cite[16.8.8]{EGAIV}, \cite[2.2.6]{HeynemannSweedler}}]\label{universaldifferential} Let $R$ be a ring and $A$ be a subring. For all $R$-modules $M,N$ and $\delta\in D^n_{R|A}(M,N)$, the map $d^*:\operatorname{Hom}_R(\ModDif{n}{R}{A}\otimes_R M,N)\rightarrow D^n_{R|A}(M,N)$ given by $\phi\mapsto \phi\circ d^n$ is an $R$-module isomorphism. \end{proposition} We find it useful to compare different filtrations on rings of differential operators. To this end, and motivated by the previous proposition, we make the following definition. \begin{definition} \label{DefIDef} Let $I$ be an ideal of $R\otimes_A R$, and $M$ and $N$ be $R$-modules. \begin{enumerate} \item[(i)] The \emph{$I$-differential operators} of $M$ into $N$ are \[ D^I_{R|A}(M,N)= ( 0 :_{\operatorname{Hom}_A(M,N)} I). 
\]\index{$D^I_{R \vert A}$} \item[(ii)] The \emph{module of $I$-differentials} of $M$ is \[ \ModDif{I}{R}{A}(M) = \frac{R \otimes_A M}{ I \cdot (R\otimes_A M)}.\]\index{$\ModDif{I}{R}{A}$} \item[(iii)] The \emph{universal $I$-differential}\index{universal differential}\index{$d^n$} on $M$ is the map $d^I_{M|A}: M \rightarrow \ModDif{I}{R}{A}(M)$ given by $d^I_{M|A}(m)= \overline{1 \otimes m}$. \end{enumerate} \end{definition} \begin{remark} We also define $\ModDif{}{R}{A}(M)={R \otimes_A M}$ and $\ModDif{n}{R}{A}(M)=\frac{R \otimes_A M}{ \Delta_{R|A}^n \cdot (R\otimes_A M)}$. Note that $\ModDif{}{R}{A}$ is an algebra, and that every $\ModDif{I}{R}{A}$ is a quotient of this ring and $\ModDif{I}{R}{A}(M)$ is a module over this ring. \end{remark} \begin{remark} Each of the modules of differentials $\ModDif{}{R}{A}$, $\ModDif{I}{R}{A}$, $\ModDif{}{R}{A}(M)$, and $\ModDif{I}{R}{A}(M)$ can be considered as an $R$-module in multiple ways. However, unless specified otherwise, when we consider any of these as an $R$-module, we mean the $R$-module structure coming from the left copy of $R$. \end{remark} The following proposition is an extension of Proposition~\ref{universaldifferential}. \begin{proposition}\label{representing-differential} With the notation in Definition~\ref{DefIDef}, the map \[\psi: \operatorname{Hom}_{R}(\ModDif{I}{R}{A}(M),N) \rightarrow D^I_{R|A}(M,N)\] given by $\psi(\phi)=\phi \circ d^I_{M|A}$ is an isomorphism of $R$-modules. \end{proposition} \begin{proof} The map $\psi$ is clearly additive, and the action of $R$ on $\ModDif{I}{R}{A}(M)$ corresponds to postcomposition of maps in the adjunction isomorphism~\eqref{eq:Hom}, so it is $R$-linear. It follows from the definitions that $d^I_{M|A} \in D^I_{R|A}(M, \ModDif{I}{R}{A}(M))$. It is a routine verification that the image of $\psi$ consists of $I$-differential operators. Since $\ModDif{I}{R}{A}(M)$ is generated as an $R$-module by the image of $d^I_{M|A}$, $\psi$ is injective.
Now, by the discussion preceding Equation~\ref{eq:Hom}, we have an $(R\otimes_A R)$-isomorphism from $\operatorname{Hom}_R(R\otimes_A M, N) \rightarrow \operatorname{Hom}_A(M,N)$. Thus, given $\delta\in D^I_{R|A}(M,N)\subseteq \operatorname{Hom}_A(M,N)$, $\delta$ factors as an $R$-linear map through $R \otimes_A M$, and since $I \cdot \delta = 0$, $\delta$ factors through $\ModDif{I}{R}{A}(M)$. Thus, $\psi$ is surjective. \end{proof} We note that not every $I$-differential operator is a differential operator. However, one has the following, which follows immediately from Lemma~\ref{diff-ann}. \begin{lemma}\label{I-diff-ops} With the notation in Definition~\ref{DefIDef}, we have \begin{enumerate} \item[(i)] If $I \subseteq R\otimes_A R$ contains $\Delta^{n+1}_{R|A}$ for some $n$, then every $I$-differential operator from $M$ to $N$ is a differential operator from $M$ to $N$ of order $\leq n$. \item[(ii)] If $I_n \subseteq R\otimes_A R$ form a system of ideals cofinal with the powers of $\Delta_{R|A}$ as $n$ varies, then $\delta \in \operatorname{Hom}_A(M,N)$ is an element of $D_{R|A}$ if and only if $\delta$ is an $I_n$-differential operator for some $n$. \end{enumerate} \end{lemma} \subsection{Behavior of differential operators under localization and completion} All differential operators on a localization occur as localizations of differential operators on the original ring. Namely, \begin{proposition}[{\cite[2.2.2 \& 2.2.10]{Masson}}]\label{localization1} Let $K$ be a field, $R$ be a $K$-algebra, $R\rightarrow S$ be formally \'etale, and suppose that {$\ModDif{n}{R}{K}$} is finitely presented for all $n$. Then the natural maps \[ S \otimes_R \ModDif{n}{R}{K} \to \ModDif{n}{S}{K} \qquad \text{and} \qquad S \otimes_R D^n_{R|K} \rightarrow D^n_{S | K}\] are isomorphisms for all~$n$. In particular, formation of differential operators commutes with localization. 
Additionally, $$D^n_{R|K}=\{\delta \in D^n_{W^{-1}R | K} \ | \ \delta(R) \subseteq R \}$$ for a multiplicative system $W$, if $R$ has no $W$-torsion (so that $R\subseteq W^{-1}R$). \end{proposition} Formation of the module of differentials also commutes with localization. We provide a proof to fill a gap in the proof found in the standard reference, and because our statement is somewhat more general. \begin{proposition}[{\cite[Theorem~16.4.14]{EGAIV}}]\label{diffmod-localize} Let $A$ be a subring of $R$, and $W$ be a multiplicatively closed subset of $R$. Let $I \subseteq \ModDif{}{R}{A}$ contain $\Delta^n_{R|A}$ for some $n$, and set $I'$ to be the image of $I$ in $\ModDif{}{W^{-1}R}{A}\cong \ModDif{}{W^{-1}R}{(W\cap A)^{-1}A}$. Then, there are isomorphisms of $R$-modules \[W^{-1}\ModDif{I}{R}{A} \cong \ModDif{I'}{W^{-1}R}{A} \cong \ModDif{I'}{W^{-1}R}{(W\cap A)^{-1}A} \] given by the natural maps. In particular, \[W^{-1}\ModDif{n}{R}{A} = \ModDif{n}{W^{-1}R}{A} = \ModDif{n}{W^{-1}R}{(W\cap A)^{-1}A}. \] \end{proposition} \begin{proof} Since $$\ModDif{}{W^{-1}R}{A}\cong \ModDif{}{W^{-1}R}{(W\cap A)^{-1}A},$$ the isomorphism $$\ModDif{I'}{W^{-1}R}{A} \cong \ModDif{I'}{W^{-1}R}{(W\cap A)^{-1}A}$$ is immediate from the definitions. Now, $\ModDif{I'}{W^{-1}R}{A}$ is the localization of $W^{-1}\ModDif{I}{R}{A}$ at the image of $(1\otimes W)$. We remind the reader that, since $W^{-1}\ModDif{I}{R}{A}$ is to be interpreted as a localization as an $R$-module, it is the localization of $\ModDif{I}{R}{A}$ at the image of $(W \otimes 1)$. To show that the map given by localization at the image of $(1\otimes W)$ is an isomorphism, it suffices to show that each element in this multiplicative set is already a unit in $W^{-1}\ModDif{I}{R}{A}$. Let $w\in W\subseteq R$. By assumption, $(w\otimes 1 - 1 \otimes w)^{n+1}=0$ in $W^{-1}\ModDif{I}{R}{A}$, so we can write $w^{n+1}\otimes 1 = (1\otimes w)\cdot \alpha$ for some $\alpha \in W^{-1}\ModDif{I}{R}{A}$. 
But, $w\otimes 1$ is a unit, so $w^{n+1}\otimes 1$ is a unit, and $1 \otimes w$ is as well. This concludes the proof of the first series of isomorphisms. For the second, we observe that $\Delta^{n+1}_{W^{-1}R|A}$ is the image of $\Delta^{n+1}_{R|A}$ under localization. \end{proof} The following proposition is an immediate consequence of Propositions~\ref{diffmod-localize} and \ref{universaldifferential} combined with the fact that Hom commutes with localization for finitely presented modules. In contrast with Proposition~\ref{localization1}, we do not assume that $A$ is a field. \begin{proposition}\label{localization2} Let $A$ be a subring of $R$, $W$ be a multiplicatively closed subset of $R$, and suppose that $\ModDif{n}{R}{A}$ is finitely presented for all $n$. Then the natural maps \[W^{-1}R \otimes_R D^n_{R|A} \rightarrow D^n_{W^{-1}R | A} \rightarrow D^n_{W^{-1}R | {(W\cap A)^{-1}A}}\] are isomorphisms for all $n$. More generally, if $I \subseteq \ModDif{}{R}{A}$ contains $\Delta^n_{R|A}$ for some $n$, and $I'$ is the image of $I$ in $\ModDif{}{W^{-1}R}{A}$, then the natural maps \[W^{-1}R \otimes_R D^I_{R|A} \rightarrow D^{I'}_{W^{-1}R | A} \rightarrow D^{I'}_{W^{-1}R | {(W\cap A)^{-1}A}}\] are isomorphisms. \end{proposition} \begin{remark} The isomorphisms above may be interpreted more concretely as follows. An $A$-linear differential operator $\delta$ on $R$ extends to a differential operator $\tilde{\delta}$ on $W^{-1}R$ by the rule $\tilde{\delta} (\frac{r}{w})=\frac{ \delta(r)}{w}$ if $\delta$ has order zero. Assume for the sake of induction that the action of every element in $D_{R|A}^{n-1}$ on $W^{-1}R$ is defined. Take $\delta\in D^{n}_{R|A}.$ Then, \[ \tilde{\delta} \left(\frac{r}{w}\right) = \frac{\delta(r) - \widetilde{[\delta,w]}(\frac{r}{w})}{w}, \] which is well defined since the order of $[\delta,w]$ is at most $n-1$. Note that one has the equality $\tilde{\delta} (\frac{r}{1})=\frac{\delta(r)}{1}$ by induction.
Then, the previous proposition can be interpreted as saying that, when the modules of principal parts are finitely presented, every $A$-linear differential operator on $W^{-1}R$ of order at most $n$ can be written in the form $\frac{1}{w} \tilde{\delta}$ for some $\delta\in D^n_{R|A}$; one checks easily that this does not depend on the choice of representatives. \end{remark} We need a generalization of Proposition~\ref{localization1} to $I$-differential operators. \begin{lemma}\label{etalemap} Let $A\subseteq R\to S$ be maps of rings. If $I\subseteq \ModDif{}{R}{A}$ and $J\subseteq \ModDif{}{S}{A}$ are such that $I \ModDif{}{S}{A} \subseteq J$, then there is an $S$-module homomorphism $\alpha:S \otimes_R \ModDif{I}{R}{A} \to \ModDif{J}{S}{A}$. If $I=\Delta_{R|A}^n$ and $J=\Delta_{S|A}^n$, then $\alpha$ is an isomorphism on the \'etale locus of the map $R\to S$. Similarly, in characteristic $p>0$, if $I=\Delta_{R|A}^{[p^e]}$ and $J=\Delta_{S|A}^{[p^e]}$, then $\alpha$ is an isomorphism on the \'etale locus of the map $R\to S$. \end{lemma} \begin{proof} The map $\alpha$ is just the map given by $(S \otimes_R R\otimes_A R)/I^e \to (S \otimes_R R\otimes_A R \otimes_R S)/I^e \to (S \otimes_R R\otimes_A R \otimes_R S)/J$. To verify that $\alpha$ is an isomorphism in the stated cases, we use the local structure theorem for \'etale maps \cite[{Tag~025A}]{stacks-project}. Write $S$ as a localization, at a multiplicatively closed set $W$, of $R[\theta]/f(\theta)$ in which $f'(\theta)$ is invertible.
In the case of powers, the map $\alpha$ takes the form \[ \frac{W^{-1} R[\theta]\otimes_A R }{\Delta^n_{R|A} + (f(\theta))} \to \frac{W^{-1} R[\theta]\otimes_A W^{-1} R[\overline{\theta}] }{(\Delta_{R|A} + (\overline{\theta}-\theta))^n + (f(\theta),f(\overline{\theta}))}.\] By the same argument as Proposition~\ref{diffmod-localize}, we can rewrite the right-hand side as \[\frac{((W^{-1} R)\otimes_A R)[\theta,\overline{\theta}] }{(\Delta_{R|A} + (\overline{\theta}-\theta))^n + (f(\theta),f(\overline{\theta}))}.\] We write $\theta'=\overline{\theta}-\theta$ and use the Taylor expansion of $f(\theta+\theta')$ to rewrite the target module as \[\frac{((W^{-1} R)\otimes_A R)[\theta,{\theta'}] }{(\Delta_{R|A} + \theta')^n + \left(f(\theta),\theta' + \frac{\theta'^2 f''(\theta)}{2! f'(\theta)}+\cdots\right)}.\] Given an element in this module, we can expand it as a polynomial expression in $\theta'$. If there is a term of the form $B \theta'^i$, with $1\leq i <n$, we can subtract off $B (\theta'^i + \frac{\theta'^{i+1} f''(\theta)}{2! f'(\theta)}+\cdots)$ to obtain an expression where the least such $i$ for which $B\neq 0$ is larger. Iterating this, we obtain an expression for the element with no $\theta'$ term. That is, the map $\alpha$ is an isomorphism. The argument is entirely analogous in the case of Frobenius powers. \end{proof} \begin{proposition}\label{RankDiff} Let $K$ be a field, and $(R,\mathfrak{m},\Bbbk)$ be a local or graded domain that is essentially of finite type over $K$. Suppose that $\operatorname{Frac}(R)$ is separable over $K$. Let $d=\dim R$ and $t$ be the transcendence degree of $\Bbbk$ over $K$. Then { ${\operatorname{rank}_R(\ModDif{n}{R}{K})=\binom{d+t+n}{d+t}}$}. \end{proposition} \begin{proof} Let $F=\operatorname{Frac}(R)$, and $e=d+t$, which, by standard dimension theory, is the transcendence degree of $F$ over $K$.
Since $F$ is separable over $K$, we can write $F=K(x_1,\dots,x_e)(\alpha)$, where $x_1,\dots,x_e$ are a transcendence basis for $F$ over $K$, and $F$ is a separable algebraic extension of $L=K(x_1,\dots,x_e)$. It follows from {Proposition~\ref{diffmod-localize}} that $\operatorname{rank}_R(\ModDif{n}{R}{K})=\operatorname{rank}_F(\ModDif{n}{F}{K})$. Then, since $F$ is a separable extension of $L$, by Proposition~\ref{localization1}, we have {$\operatorname{rank}_F(\ModDif{n}{F}{K}) = \operatorname{rank}_L(\ModDif{n}{L}{K})$}. Applying {Proposition~\ref{diffmod-localize}} again, this is equal to {$\operatorname{rank}_{K[x_1,\dots,x_e]}(\ModDif{n}{K[x_1,\dots,x_e]}{K})$}. In this case, we compute {$\displaystyle \ModDif{n}{K[x_1,\dots,x_e]}{K}\cong \frac{K[x_1,\dots,x_e,z_1,\dots,z_e]}{(z_1,\dots,z_e)^{n+1}}$} (see \S\ref{SubSecJacobi} below), from which the claim follows. \end{proof} \begin{definition} Let $(R,\mathfrak{m},\Bbbk)$ be a complete local ring, and $A\subseteq R$ be a subring. The \emph{complete module of $n$-differentials} or \emph{complete module of principal parts} of $R$ over $A$ is $\wModDif{n}{R}{A}$\index{$\wModDif{n}{R}{A}$}, the $\mathfrak{m}$-adic completion of $\ModDif{n}{R}{A}$. \end{definition} \begin{lemma}\label{separatedquot} Let $(R,\mathfrak{m},\Bbbk)$ be a complete local ring, and $K\cong \Bbbk$ be a coefficient field. Then, \begin{enumerate} \item $\wModDif{n}{R}{K}$ is finitely generated, and \item $\wModDif{n}{R}{K}\cong \sep{(\ModDif{n}{R}{K})}$, where $\sep{M}=M/(\cap_{n=1}^\infty\mathfrak{m}^n M)$\index{$\sep{M}$}, the maximal separated quotient of $M$. \end{enumerate} \end{lemma} \begin{proof} Each $\sep{(\ModDif{n}{R}{K})}$ is finitely generated over $R$, hence is complete \cite[Remark~4.7]{Switala}. The isomorphism in (2) then follows from the universal properties of the two modules, and the first statement is then immediate. \end{proof} The following proposition is an analogue of Proposition~\ref{representing-differential} for complete rings.
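The binomial count underlying Proposition~\ref{RankDiff} — the monomials of degree at most $n$ in $e$ variables form a basis of $K[z_1,\dots,z_e]/(z_1,\dots,z_e)^{n+1}$ and number $\binom{e+n}{e}$ — can be checked mechanically. The following Python sketch is our own illustration and not part of the paper; the function name is made up.

```python
from itertools import product
from math import comb

def monomials_up_to(e, n):
    # exponent vectors lambda in N^e with |lambda| <= n, i.e. a K-basis
    # of K[z_1,...,z_e]/(z_1,...,z_e)^(n+1)
    return [lam for lam in product(range(n + 1), repeat=e) if sum(lam) <= n]

# the rank of the module of principal parts P^n over K[x_1,...,x_e]
# should equal binom(e+n, e)
for e in range(1, 5):
    for n in range(5):
        assert len(monomials_up_to(e, n)) == comb(e + n, e)
```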
\begin{proposition}\label{represent-complete} Let $(R,\mathfrak{m},\Bbbk)$ be a complete local ring, and $A\subseteq R$ be a subring. Then $D^n_{R|A} {\cong}\operatorname{Hom}_R(\wModDif{n}{R}{A},R)$. \end{proposition} \begin{proof} By Proposition~\ref{representing-differential}, we have $D^n_{R|A}\cong \operatorname{Hom}_R(\ModDif{n}{R}{A},R)$. Since $R$ is complete, a map from $\ModDif{n}{R}{A}$ to $R$ factors uniquely through $\wModDif{n}{R}{A}$. \end{proof} The analogue of Proposition~\ref{localization2} holds as well. \begin{proposition}[{\cite[2.3.3]{LyuUMC}}]\label{diff-ops-completion} Let $(R,\mathfrak{m},K)$ be an $A$-algebra essentially of finite type, with $A$ Noetherian. Then the natural maps \[ \widehat{R} \otimes_R D^n_{R|A} \rightarrow D^n_{\widehat{R}|A} \] are isomorphisms for all $n$. \end{proposition} We note that for an algebra with a pseudocoefficient field $K$, the modules of principal parts $\ModDif{n}{R}{K}$ are all finitely presented, so Proposition~\ref{localization2} applies. \subsection{The Jacobi-Taylor matrices}\label{SubSecJacobi} In this subsection we introduce a family of matrices that give a presentation for the modules of principal parts and a computationally easy description of differential operators. We use these matrices for algorithmic aspects of the differential signature in Subsection~\ref{SubAlg}. In particular, we compute the differential signature for quadrics in Subsection~\ref{SubQuadrics}. We point out that a closely related version of the Jacobi-Taylor matrices was independently and simultaneously introduced by Barajas and Duarte \cite{BarajasDuarte} under the name of higher Jacobian matrices. The hypersurface case was already studied by Duarte \cite{Duarte}.
For polynomials $f_i$, $1 \leq i \leq m$, in $k$ variables the usual Jacobian matrix $J=\left( \partial_j f_i \right)$ provides a presentation \[ R^m \stackrel{J^\text{tr} }{\longrightarrow} R^k \longrightarrow \Omega_{R|K} \longrightarrow 0, \] {where $J^\text{tr}$ denotes the transpose matrix of $J$, } of the module of K\"ahler differentials. In this subsection we provide a similar description of the module of principal parts. For a $k$-tuple $\lambda \in \mathbb{N}^k$ we define the operators $ \frac{ 1 }{ \lambda! } \partial^\lambda $ as in Example~\ref{example-regular-D}. \begin{lemma}\label{principalpartdescription} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots , x_k]$ denote polynomials with residue class ring $ R = K[x_1 , \ldots , x_k]/ \left( f_1 , \ldots , f_m \right) $. Then \[ R \otimes_K R \cong R[y_1 , \ldots , y_k]/ \left( g_1 , \ldots , g_m \right) , \] where $ g_i = \sum_\lambda g_ {i, \lambda } y^\lambda $ and $ g_{i, \lambda } = \frac{ 1 }{ \lambda ! } \partial^\lambda (f_i).$ \end{lemma} \begin{proof} We work with the description \[ \begin{aligned} R \otimes_{ K } R &= { K[x_1 , \ldots , x_k ] / \left( f_1 , \ldots , f_m \right) \otimes_{ K } K[x_1 , \ldots , x_k ] /\left( f_1 , \ldots , f_m \right) } \\ & = { K[x_1 , \ldots , x_k , \tilde{x}_1 , \ldots , \tilde{x}_k ] /\left( f_1 , \ldots , f_m , \tilde{f}_1 , \ldots , \tilde{f}_m\right) } , \end{aligned} \] where $\tilde{f}_i$ comes from $f_i$ by replacing $x_j$ by $\tilde{x}_j$. We put $ y_j = \tilde{x}_j - x_j $ and write the ring as \[ K[x_1 , \ldots , x_k, y_1 , \ldots , y_k ]/ \left( f_1 , \ldots , f_m, g_1 , \ldots , g_m \right) = R[ y_1 , \ldots , y_k ]/ \left( g_1 , \ldots , g_m \right) , \] where \[ g_i = \tilde{f}_i = f_i \left( \tilde{x}_1 , \ldots , \tilde{x}_k \right) = f_i \left( x_1+y_1 , \ldots , x_k+y_k \right). \] Consider a monomial $x_1^{\nu_1} \cdots x_k^{\nu_k}$ in some $f_i$.
This corresponds to a term in $g_i$ of the form \[ ( x_1+y_1)^{\nu_1} \cdots (x_k+y_k)^{\nu_k} .\] Multiplying out yields {\small \[ \sum_{\lambda \leq \nu} \binom { \nu_1 } { \lambda_1} \cdots \binom { \nu_k } { \lambda_k} x_1^{\nu_1- \lambda_1} y_1^{\lambda_1} \cdots x_k^{\nu_k- \lambda_k} y_k^{\lambda_k} = \sum_{\lambda \leq \nu} \binom { \nu_1 } { \lambda_1} \cdots \binom { \nu_k } { \lambda_k} x_1^{\nu_1- \lambda_1} \cdots x_k^{\nu_k- \lambda_k} y_1^{\lambda_1} \cdots y_k^{\lambda_k} . \] } Thus, the contribution of $x^\nu$ to the coefficient of the monomial $y^\lambda$ in $g_i$ is $ {\binom { \nu_1 } { \lambda_1} \cdots \binom { \nu_k } { \lambda_k} x_1^{\nu_1- \lambda_1} \cdots x_k^{\nu_k- \lambda_k}}$, which coincides with $\frac{ 1 }{ \lambda ! } \partial^\lambda (x^\nu) $. \end{proof} \begin{definition} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots ,x_k]$ be polynomials. For $n \in \mathbb{N}$, let \[ \mathcal{A} = \left\{ (\mu,i) \ | \ \mu \in \mathbb{N}^k \text{ such that } \mondeg {\mu} \leq n-1 , \, 1 \leq i \leq m \right\}\] and \[ \mathcal{B} = \left\{ \nu \ | \ \nu \in \mathbb{N}^ k \text{ such that } \mondeg { \nu } \leq n \right\} .\] Then the $\mathcal{A} \times \mathcal{B}$ matrix with entries \[ a_{( \mu,i ; \nu)} = \frac{ 1 }{ (\nu - \mu )! } \partial^{\nu - \mu} (f_i),\] understood to be zero unless $\mu \leq \nu$, is called the $n$-th \emph {Jacobi-Taylor matrix}.\index{Jacobi-Taylor matrix} \end{definition} We denote these matrices by $J_n$\index{$J_n$}. We may consider them over the polynomial ring or over the residue class ring.
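Since the entries are divided partial derivatives of the $f_i$, the Jacobi-Taylor matrices can be generated mechanically. The following Python sketch is our own illustration, not part of the paper (all function names are made up): it stores a polynomial as a dictionary mapping exponent tuples to coefficients and computes the entries $a_{(\mu,i;\nu)}=\frac{1}{(\nu-\mu)!}\partial^{\nu-\mu}(f_i)$ for a single polynomial. Note that the divided derivatives have integer coefficients, since $\frac{1}{\lambda!}\partial^\lambda(x^\nu)=\binom{\nu}{\lambda}x^{\nu-\lambda}$.

```python
from itertools import product
from math import comb

def divided_diff(poly, lam):
    # apply (1/lam!) d^lam to a polynomial {exponent tuple: coefficient};
    # integral, since (1/lam!) d^lam (x^nu) = binom(nu, lam) x^(nu - lam)
    out = {}
    for nu, c in poly.items():
        if all(n >= l for n, l in zip(nu, lam)):
            b = 1
            for n, l in zip(nu, lam):
                b *= comb(n, l)
            e = tuple(n - l for n, l in zip(nu, lam))
            out[e] = out.get(e, 0) + b * c
    return {e: c for e, c in out.items() if c}

def jacobi_taylor(f, k, n):
    # entries a_(mu; nu) of the n-th Jacobi-Taylor matrix of a single
    # polynomial f in k variables; entries with mu not <= nu are omitted
    rows = [m for m in product(range(n), repeat=k) if sum(m) <= n - 1]
    cols = [v for v in product(range(n + 1), repeat=k) if sum(v) <= n]
    entries = {}
    for mu in rows:
        for nu in cols:
            if all(v >= m for v, m in zip(nu, mu)):
                diff = tuple(v - m for v, m in zip(nu, mu))
                entries[(mu, nu)] = divided_diff(f, diff)
    return rows, cols, entries

# f = x^2 y in two variables
rows, cols, J2 = jacobi_taylor({(2, 1): 1}, k=2, n=2)
assert J2[((0, 0), (2, 0))] == {(0, 1): 1}   # (1/2) d_x^2 (f) = y
assert J2[((1, 0), (2, 0))] == {(1, 1): 2}   # d_x (f) = 2xy
```

Over the residue class ring one would further reduce each entry modulo $(f_1,\dots,f_m)$; this is why, in the displayed example below, the entries coming from $f$ itself reduce to $0$.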
To give an example, in three variables and one equation $f$, the transposed second Jacobi-Taylor matrix is given as \[ \begin{blockarray}{ccccc} & 1 & a & b & c \\ \begin{block}{c[cccc]} 1 & 0 & 0 & 0 & 0 \\ a & \partial_x (f) & 0 & 0 & 0 \\ b & \partial_y (f) & 0 & 0 & 0 \\ c & \partial_z (f) & 0 & 0 & 0 \\ a^2 & \frac{ 1 }{ 2 } \partial_x \partial_x (f) & \partial_x (f) & 0 & 0 \\ ab & \partial_x \partial_y (f) & \partial_y (f) & \partial_x (f) & 0 \\ ac & \partial_x \partial_z (f) & \partial_z (f) & 0 & \partial_x (f) \\ b^2 & \frac{ 1 }{ 2 } \partial_y \partial_y (f) & 0 & \partial_y (f) & 0 \\ bc & \partial_y \partial_z (f) & 0 & \partial_z (f) & \partial_y (f) \\ c^2 & \frac{ 1 }{ 2 } \partial_z \partial_z (f) & 0 & 0 & \partial_z (f) \\ \end{block} \end{blockarray} , \] where $1,a,b,c$ and $1,a,\dots, c^2$ indicate which column (respectively, row) corresponds to which indexing element of $\mathcal{A}$ (respectively, $\mathcal{B}$). Note that in all $J_n$, for varying $n$, only a finite number of distinct entries occur, namely the divided partial derivatives of the $f_i$. We now prove that this matrix {gives} a presentation for the module of principal parts. \begin{corollary} \label{principalpartsrepresentation} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots , x_k]$ denote polynomials with residue class ring $R = K[x_1 , \ldots , x_k]/ \left( f_1 , \ldots , f_m \right)$. Then the module of principal parts $P^n_{R | K}$ has the presentation \[ \bigoplus_{\substack{ \mondeg {\mu} \leq n - 1 \\ 1 \leq i \leq m }} R e_{\mu ,i} \stackrel{J_n^\mathrm{tr} } { \longrightarrow }\bigoplus_{ \mondeg {\lambda} \leq n } R e_{\lambda } \longrightarrow P^n_{R | K} \longrightarrow 0 .
\] \end{corollary} \begin{proof} Due to Lemma~\ref{principalpartdescription} we have \[ P^n_{R{{|}} K} = (R \otimes_K R) /\Delta^{n+1} \cong R[y_1 , \ldots , y_k]/ \left( g_1 , \ldots , g_m , y^\lambda , \mondeg {\lambda} \geq n+1 \right) . \] In particular, the monomials $y^\lambda$, $ \mondeg {\lambda } \leq n$, give an $R$-module generating system for $P^n_{R {{|}} K}$ and a surjective mapping \[ \bigoplus_{ \mondeg {\lambda} \leq n } R e_\lambda \longrightarrow P^n_{R | K} ,\, e_\lambda \longmapsto y^\lambda .\] The part of the ideal generated by the $g_i$ of degree $\leq n$ is generated as an $R$-module by \[ y^\mu g_i = y^\mu \left( \sum_\lambda g_{i, \lambda} y^\lambda \right) , \, \mondeg {\mu} \leq n-1, \, 1 \leq i \leq m \, . \] Hence the kernel of the mapping is generated by the tuples { \[ C_{\mu, i} = \left( C_{\nu; \mu,i } \right) \text{ with } C_{\nu; \mu,i} = g_{i, \nu-\mu } ,\, \mondeg {\mu} \leq n-1, \, 1 \leq i \leq m . \]} So the kernel is the image of the map \[ \bigoplus_{\substack{ \mondeg {\mu} \leq n - 1 \\ 1 \leq i \leq m }} R e_{\mu,i} \longrightarrow \bigoplus_{\mondeg {\lambda} \leq n } R e_\lambda , \, e_{ \mu,i} \longmapsto C_{\mu, i} .\] The entry of this matrix in row $\nu$ and column $(\mu,i)$ is \[ g_{i, \nu - \mu} = \frac{ 1 }{ (\nu-\mu)! } \partial^{\nu - \mu} (f_i) , \] so this is the transposed Jacobi-Taylor matrix. \end{proof} \begin{remark} \label{JacobiTaylorrelation} Every Jacobi-Taylor matrix arises from the previous one in the block matrix form \[ J_n^\text{tr} = \begin{pmatrix} J_{n-1}^\text{tr} & 0 \\ S_n & T_n \end{pmatrix} . \] The matrix $T_n$ only involves first partial derivatives of the $f_i$; it sends $e_{\mu,i}$ to $ \sum_j \partial_j(f_i) \cdot e_{\mu +e_j}$.
We have a commutative diagram with exact rows \[\xymatrix{ 0 \ar[r] &\bigoplus\limits_{ \mondeg {\mu} = n-1, i } R e_{ \mu,i } \ar[r]\ar[d]_-{T_n} &\bigoplus\limits_{ \mondeg {\mu} \leq n-1, i } R e_{ \mu,i } \ar[r]\ar[d]_-{J^{\mathrm{tr}}_n} &\bigoplus\limits_{ \mondeg {\mu} \leq n-2, i } R e_{ \mu,i } \ar[r]\ar[d]_-{J^{\mathrm{tr}}_{n-1}} & 0 \\ 0 \ar[r] & \bigoplus\limits_{ \mondeg {\lambda} = n } R {e_\lambda } \ar[r]\ar[d] & \bigoplus\limits_{ \mondeg {\lambda} \leq n } R {e_\lambda } \ar[r]\ar[d] & \bigoplus\limits_{ \mondeg {\lambda} \leq n-1 } R {e_\lambda } \ar[r]\ar[d] & 0\\ 0 \ar[r] & \Delta^{n}/\Delta^{n+1} \ar[r] \ar[d] & \ModDif{n}{R}{K}\ar[r]\ar[d] & \ModDif{n-1}{R}{K}\ar[r]\ar[d] & 0\\ & 0 &0& \, 0 . } \] The first two rows split. The columns in the middle and on the right are also exact. In the column on the left the second map is surjective, and the column is exact provided that $J^{\mathrm{tr}}_{n-1}$ is injective. This is not always the case; in positive characteristic, for example, all first partial derivatives of the $f_i$ may vanish, and then $T_n$ is $0$. \end{remark} \begin{corollary} \label{JacobiTayloroperators} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots , x_k]$ denote polynomials with residue class ring $R = K[x_1 , \ldots , x_k]/ \left( f_1 , \ldots , f_m \right)$. Then differential operators on $R$ of order $\leq n $ correspond to elements in the kernel of the $n$-th Jacobi-Taylor matrix. A $\lambda$-tuple $\left( a_\lambda \right)$ in the kernel corresponds to the operator that is represented on the level of the polynomial ring by \[ \sum_\lambda a_\lambda \frac{ 1 }{ \lambda! } \partial^\lambda .\] \end{corollary} \begin{proof} We work with the presentation \[ \bigoplus_{\substack{ \mondeg {\mu} \leq n - 1 \\ 1 \leq i \leq m }} R e_{\mu ,i} \stackrel{ J_n^{ \mathrm{tr} } } {\longrightarrow} \bigoplus_{ \mondeg {\lambda} \leq n } R e_{\lambda } \longrightarrow P^n_{R {{|}} K} \longrightarrow 0 \] from Corollary~\ref{principalpartsrepresentation}.
A differential operator on $R$ of order $\leq n$ is the same as an $R$-linear form on $ P^n_{R | K}$. This again is the same as an $R$-linear form $\varphi$ on $ \bigoplus_{ \mondeg {\lambda} \leq n } R e_{\lambda }$ (given by an $R$-tuple $\left( a_\lambda \right)$), satisfying $\varphi \circ { J_n^{ \text{tr} } } = 0$. This is equivalent to $ J_n \circ { \varphi^{ \text{tr} } } = 0$. In the notation of Lemma~\ref{principalpartdescription}, the universal operator $d^n: R \rightarrow P^n_{R{{|}} K}$ sends a monomial $x^\nu$ to \[ \begin{aligned} 1 \otimes x^\nu & = 1 \otimes x_1^{\nu_1} \cdots x_k^{\nu_k} \\ & = \tilde{x}_1^{\nu_1} \cdots \tilde{x}_k^{\nu_k} \\ & = (x_1+y_1)^{\nu_1} \cdots (x_k+y_k)^{\nu_k} \\ & = \sum_{\lambda \leq \nu} \binom { \nu_1 } { \lambda_1} \cdots \binom { \nu_k } { \lambda_k} x_1^{\nu_1- \lambda_1} \cdots x_k^{\nu_k- \lambda_k} y_1^{\lambda_1} \cdots y_k^{\lambda_k} . \end{aligned} \] The composition with the linear form on $P^n_{R | K}$ given by $\left( a_\lambda \right)$ yields \[ \sum_{\lambda \leq \nu} \binom { \nu_1 } { \lambda_1} \cdots \binom { \nu_k } { \lambda_k} x_1^{\nu_1- \lambda_1} \cdots x_k^{\nu_k- \lambda_k} a_\lambda \, . \] This coincides with \[ \sum_\lambda a_\lambda \frac{ 1 }{ \lambda! } \partial^\lambda \left( x^\nu \right).\]\qedhere \end{proof} \begin{remark} It follows from the previous corollary that a differential operator $\delta$ on $K[x_1,\dots,x_k]$ of order $n$ descends to a differential operator on $K[x_1,\dots,x_k]/(f_1,\dots,f_m)$ if and only if $\delta(x^{\lambda} f_i) \in (f_1,\dots,f_m)$ for all $i$ and all $\lambda$ of degree at most $n-1$. This fact is known to experts, but we could not find a clear reference. \end{remark} For the universal differential operator $d^n$ we have a canonical lifting \[ \xymatrix{ & R \ar[dl]_-{d^{n'}} \ar[dr]^-{d^n} & \\ \bigoplus\limits_{ \mondeg {\lambda} \leq n } R e_{\lambda } \ar[rr] & & \ModDif{n}{R}{K} .
}\] An element $h$ is sent by $d^{n'}$ to $\sum_{ \mondeg {\lambda} \leq n} \frac{1}{\lambda!} \partial^\lambda (h) e_\lambda$. The commutativity follows from the proof of Corollary~\ref{JacobiTayloroperators}. \begin{remark} Let $R=K[x_1, \ldots, x_k] /(f_1, \ldots , f_m)$ and let $W$ be a multiplicative subset of $R$. Then by Proposition~\ref{diffmod-localize} we have $ P^n_{W^{-1}R|K} \cong W^{-1} P^n_{R|K} \cong P^n_{R|K} \otimes_R W^{-1}R $. Hence the presentation of the module of principal parts by the Jacobi-Taylor matrices given in Corollary~\ref{principalpartsrepresentation} also applies directly to algebras essentially of finite type over $K$, and in particular to localizations. \end{remark} \begin{lemma} \label{JacobiTaylorsymmetric} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots , x_k]$ denote polynomials with residue class ring $R = K[x_1 , \ldots , x_k]/ \left( f_1 , \ldots , f_m \right)$. Let $\delta\in D^n_{R|K}$ be given by the $\lambda$-tuple $\left( a_\lambda \right)$, $\mondeg{\lambda}\leq n$, in the kernel of the $n$-th Jacobi-Taylor matrix in the sense of Corollary~\ref{JacobiTayloroperators}. The image of $\delta$ under the natural $R$-linear map $D^{n}_{R|K} \rightarrow \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K} ) , R )$ is given by the restricted tuple $\left( a_\lambda \right)$, $ \mondeg {\lambda} = n $.
\end{lemma} \begin{proof} From the presentation \[ \bigoplus_{i = 1}^m R e_i \stackrel{J^\text{tr} }{\longrightarrow} \bigoplus_{j = 1}^k R e_j \longrightarrow \Omega_{R|K} \longrightarrow 0 \] we get for the symmetric powers $\operatorname{Sym} ^n (\Omega_{R|K} ) $ the presentation \[ \xymatrix{ \left(\bigoplus\limits_{i = 1}^m R e_i \right) \otimes \operatorname{Sym} ^{n-1}\left( \bigoplus\limits_{j = 1}^k R e_j \right) \ar[r]\ar[d]^{\cong} & \operatorname{Sym} ^n\left( \bigoplus\limits_{j = 1}^k R e_j \right) \ar[r]\ar[d]^{\cong} & \operatorname{Sym} ^n \left(\Omega_{R|K} \right) \ar[r]\ar[d]^{\cong} & 0 \\ \bigoplus\limits_{i, \mondeg {\mu} = n-1 } R e_{ \mu, i} \ar[r] & \bigoplus\limits_{ \mondeg {\lambda} = n} Re_\lambda \ar[r] & \operatorname{Sym} ^n\left(\Omega_{R|K} \right) \ar[r] & 0 } \] sending $e_\lambda \mapsto (dx)^\lambda$ and $e_{\mu, i} \mapsto \sum_j \partial_j (f_i) e_{\mu + e_j} $. This last map is the matrix $T_n$ from Remark~\ref{JacobiTaylorrelation}. A linear form on $\operatorname{Sym} ^n (\Omega_{R|K} )$ is the same as a linear form on $ \bigoplus_{ \mondeg {\lambda} = n} Re_\lambda $ annihilating the image of $T_n$. We work with the commutative diagram \[\xymatrix{ \bigoplus\limits_{ \mondeg {\lambda} = n } R {e_\lambda }\ar[d] \ar[rr] & & \bigoplus\limits_{ \mondeg {\lambda} \leq n } R {e_\lambda }\ar[d] \\ \operatorname{Sym} ^n(\Omega_{R|K} ) \ar[r] & \Delta^{n}/\Delta^{n+1} \ar[r] & \ModDif{n}{R}{K} . } \] A differential operator of order $\leq n$, considered as a linear form on $\ModDif{n}{R}{K}$ via the second row, induces a linear form on $\operatorname{Sym} ^n(\Omega_{R|K} )$. If such a differential operator is given by a $\lambda$-tuple $\left( a_\lambda \right)$, $ \mondeg {\lambda} \leq n$, then both linear forms are given by sending $e_\lambda$ to $a_\lambda$. So the induced linear form is just given by the restricted tuple.
\end{proof} \section{Differential powers and $D$-simplicity}\label{SecDiffPrimes} In this section we recall the definition of differential powers of ideals, and the related notion of $D$-ideals. These notions are essential to define the differential signature. We use these powers to give a criterion for the $D$-simplicity of $R$. \subsection{$D$-ideals} \begin{definition} We say that an ideal of $R$ is a \textit{$D_{R|A}$-ideal} if it is a $D_{R|A}$-submodule of $R$. We say that $R$ is \textit{$D_{R|A}$-simple} (or just \textit{$D$-simple}\index{D-simple} if no confusion is likely) if $R$ has no proper {nonzero} $D_{R|A}$-ideals. Equivalently, $R$ is $D$-simple if it is simple as a $D_{R|A}$-module. \end{definition} We caution the reader that the property that the ring $D_{R|A}$ is a simple ring is also studied in the literature under similar nomenclature. \begin{proposition}\label{PropDidealsLoc} Suppose that $\ModDif{n}{R}{A}$ is finitely presented for all $n$. Let $W\subseteq R$ be a multiplicative system. There is a natural bijection between \[\mathscr{A}=\{I\subseteq R\ |\ I \ \text{is a $D_{R|A}$-ideal and} \ I\cap W=\varnothing\}\] and \[\mathscr{B}=\{ J\subseteq W^{-1}R \ |\ J \ \text{is a $D_{W^{-1}R | A}$-ideal}\,\}.\] \end{proposition} \begin{proof} Let $\phi:\mathscr{A}\to\mathscr{B}$ be given by $I\mapsto I \cdot W^{-1}R$. Since $I$ is a $D_{R|A}$-ideal, $D_{R|A} I\subseteq I$. Then, $W^{-1}R\otimes_R D_{R|A} I\subseteq I \cdot W^{-1} R$ by Proposition~\ref{localization2}. Hence, $\phi$ is well defined. Let $\iota:R\to W^{-1}R$ denote the localization map, and $\varphi:\mathscr{B}\to \mathscr{A}$ be given by $J\mapsto \iota^{-1}(J).$ Let $\delta\in D_{R|A}$, and $f\in \iota^{-1} (J)$. Then, $\delta \iota(f)=\iota(\delta f)\in J$ because $J\in \mathscr{B}.$ As a consequence, $\delta f\in \iota^{-1}(J).$ Since $\varphi\circ \phi(I)=I$ and $\phi\circ\varphi(J)=J,$ we obtain the desired conclusion.
\end{proof} We use the previous proposition to obtain properties of $D$-ideals. \begin{lemma}\label{PropMinimalPrime} Suppose that $\ModDif{n}{R}{A}$ is finitely presented for all $n$. Every minimal primary component of a $D_{R|A}$-ideal is a $D_{R|A}$-ideal. In particular, the minimal primes of a radical $D_{R|A}$-ideal are $D_{R|A}$-ideals. \end{lemma} \begin{proof} { Let $I$ be a $D_{R|A}$-ideal of $R$ and $P$ a minimal prime of $I$, so that $I_P \cap R$ is the corresponding minimal primary component of $I$. It follows from Proposition~\ref{PropDidealsLoc} that $I_P$ is a $D_{R_P|A}$-ideal, and that $I_P \cap R$ is a $D_{R|A}$-ideal. } \end{proof} \begin{remark}\label{rem-radicals-D-ideals} It is not necessarily true that every minimal prime of a $D_{R|A}$-ideal is a $D_{R|A}$-ideal. For example, one can check that the $K$-linear endomorphism of $R=K[x]/(x^2)$ such that $\delta(1)=0$ and $\delta(x)=1$ is a $K$-linear differential operator of order $2$ (and of order 1 if $K$ has characteristic 2) and that $R$ is $D_{R|K}$-simple. Consequently, $(0)$ is a $D_{R|K}$-ideal but $\sqrt{(0)}=(x)$ is not. \end{remark} We end this section with a property that relates $D$-ideals of a quotient to $D$-ideals of the original ring. \begin{lemma}\label{lemma-D-ideal-from-quotient} Let $R$ be a ring and $A$ be a subring. Let $I\subseteq J$ be two ideals of $R$. Set $R'=R/I$, $A'=A/(A\cap I)$, and $J'$ to be the image of $J$ in $R'$. If $I$ is a $D_{R|A}$-ideal and $J'$ is a $D_{R' | A'}$-ideal, then $J$ is a $D_{R|A}$-ideal. \end{lemma} \begin{proof} If $I$ is a $D_{R|A}$-ideal, so that for every differential operator $\delta\in D^n_{R|A}$, $\delta(I)\subseteq I$, then $\delta$ descends to a map on $R'$. It is immediate from the definitions that the map induced by $\delta$ on the quotient lies in $D^n_{R'|A'}$, so that there is a map $D_{R|A}\rightarrow D_{R'|A'}$ of filtered rings. If moreover $J$ is not a $D_{R|A}$-ideal, then there is some $g \in J$ and $\delta\in D_{R|A}$ such that $\delta(g)\notin J$.
This then descends to a map $\bar{\delta}\in D_{R'|A'}$ with $\bar{\delta}(\bar{g})\notin J'$, so that $J'$ is not a $D_{R' | A'}$-ideal. \end{proof} \subsection{Differential powers and cores} Motivated by Zariski's study \cite{ZariskiHolFunct} of symbolic powers for polynomial rings in characteristic zero, differential powers were recently introduced \cite{SurveySP} to extend this study to other rings. \begin{definition} Let $R$ be a ring and $A$ be a subring. Let $I$ be an ideal of $R$, and $n$ be a positive integer. We define the \emph{$A$-linear $n$th differential powers of $I$}\index{differential power}\index{$I\dif{n}{A}$} by $$ I\dif{n}{A}=\{f\in R \, | \, \delta(f)\in I \hbox{ for all } \delta\in D^{n-1}_{R|A}\}. $$ \end{definition} \begin{example}\label{diff-powers-regular} It follows from Example~\ref{example-regular-D} that $\mathfrak{m}\dif{n}{K}=\mathfrak{m}^n$ for a local algebra $(R,\mathfrak{m},\Bbbk)$ with pseudocoefficient field $K$ that is smooth over $K$. This follows from essentially the same argument for polynomial rings \cite{SurveySP} applied in the more general setting of Example~\ref{example-regular-D}, but we reproduce it here for transparency. By Proposition~\ref{properties-diff-powers}~(ii) below, we have that $\mathfrak{m}^n \subseteq \mathfrak{m}\dif{n}{K}$. To see the other containment, let $x_1,\dots,x_d$ be a minimal generating set for $\mathfrak{m}$, and pick $f\notin \mathfrak{m}^n$. Write $f=g + \sum_{\alpha\in S} u_\alpha x^\alpha$, with $S$ a subset of $\mathbb{N}^d$ consisting of elements with sum $<n$, $u_\alpha \notin \mathfrak{m}$, and $g\in \mathfrak{m}^n$. Since $f\notin \mathfrak{m}^n$, $S$ is nonempty, so pick $\alpha\in S$ with $|\alpha|$ minimal. Then, in the notation of Example~\ref{example-regular-D}, the differential operator $D_\alpha$ has order $<n$ and is easily seen to satisfy $D_\alpha(f)\notin \mathfrak{m}$, so $f\notin \mathfrak{m}\dif{n}{K}$. Thus, $\mathfrak{m}^n = \mathfrak{m}\dif{n}{K}$.
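For a polynomial ring over a field of characteristic zero, localized at the origin, this equality can also be checked mechanically: $f\in\mathfrak{m}\dif{n}{K}$ exactly when every divided partial derivative of $f$ of order $<n$ has vanishing constant term. The following Python sketch is our own illustration, not part of the paper (the function names are made up); it encodes a polynomial as a dictionary mapping exponent tuples to coefficients.

```python
from itertools import product
from math import comb

def divided_diff(poly, lam):
    # (1/lam!) d^lam on {exponent tuple: coefficient}; the result is
    # integral, since (1/lam!) d^lam (x^nu) = binom(nu, lam) x^(nu - lam)
    out = {}
    for nu, c in poly.items():
        if all(n >= l for n, l in zip(nu, lam)):
            b = 1
            for n, l in zip(nu, lam):
                b *= comb(n, l)
            e = tuple(n - l for n, l in zip(nu, lam))
            out[e] = out.get(e, 0) + b * c
    return {e: c for e, c in out.items() if c}

def in_diff_power(f, n):
    # f lies in m^<n> iff (1/lam!) d^lam (f) has zero constant term
    # for all |lam| <= n-1, where m = (x_1,...,x_d)
    d = len(next(iter(f)))
    return all(divided_diff(f, lam).get((0,) * d, 0) == 0
               for lam in product(range(n), repeat=d) if sum(lam) <= n - 1)

def in_ordinary_power(f, n):
    # f lies in m^n iff every monomial of f has degree >= n
    return all(sum(e) >= n for e in f)

f = {(1, 2): 3, (3, 0): 1}   # 3xy^2 + x^3
for n in range(1, 6):
    assert in_diff_power(f, n) == in_ordinary_power(f, n)
```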
\end{example} If $f \notin \mathfrak{m}\dif{n}{A}$, then there exists a $\delta \in D^{n-1}_{R|A}$ such that $\delta(f)$ is not in $\mathfrak{m}$, hence is a unit $u$. But then $u^{-1} \circ \delta$ is a differential operator of the same order with $(u^{-1} \circ \delta) (f)=1$. We now recall a few properties of differential powers. { \begin{proposition}[\cite{SurveySP}]\label{properties-diff-powers} Let $R$ be a ring, $A$ be a subring, $I,J_\alpha\subseteq R$ be ideals. \begin{itemize} \item[(i)] $I\dif{n}{A}$ is an ideal. \item[(ii)] $I^{n}\subseteq I\dif{n}{A}$. \item[(iii)] $\left(\bigcap_{\alpha}J_\alpha\right)\dif{n}{A} =\bigcap_{\alpha }(J_\alpha)\dif{n}{A}$. \item[(iv)] If $I$ is $\mathfrak{p}$-primary, then $I\dif{n}{A}$ is also $\mathfrak{p}$-primary. \item[(v)] If $I$ is prime, then $I^{(n)} \subseteq I\dif{n}{A}$. \end{itemize} \end{proposition} \begin{proof} Parts (i)--(iv) are Propositions 2.4 and 2.5, Exercise 2.13, and Proposition 2.6 of \cite{SurveySP}, respectively. The last part is a consequence of \mbox{(ii)} and~\mbox{(iv)}. \end{proof} } We also note that if $A \subseteq B \subseteq R$, and $I$ is an ideal of $R$, then $I\dif{n}{A}\subseteq I\dif{n}{B}$ for all $n$. Differential powers behave well under localization. \begin{lemma}\label{diff-localize} Let $W$ be a multiplicative set in $R$ and $I$ an ideal such that $W\cap I=\varnothing$. Suppose also that $\ModDif{n}{R}{A}$ is finitely presented for all $n$. Then $I\dif{n}{A} = (W^{-1}I)\dif{n}{A} \cap R$. \end{lemma} \begin{proof} $(\subseteq)$: It suffices to show that if $D^{n-1}_{R|A}\cdot f\subseteq I$, then $D^{n-1}_{W^{-1}R | A}\cdot(\frac{f}{1}) \subseteq W^{-1}I$. By Proposition~\ref{localization2}, for any $\delta\in D^{n-1}_{W^{-1}R | A}$, there exists $g\in W^{-1}R$ and $\eta\in D^{n-1}_{R|A}$ such that $\delta(\frac{f}{1})=g \frac{\eta(f)}{1}$ for all $f\in R$. The claim is then clear.
$(\supseteq)$: Suppose that $f\in R$, and $D^{n-1}_{W^{-1}R | A}\cdot(\frac{f}{1}) \subseteq W^{-1}I$. If $\delta\in D^{n-1}_{R|A}$, then it extends to a differential operator $\tilde{\delta}\in D^{n-1}_{W^{-1}R | A}$ such that $\tilde{\delta}(\frac{f}{1})=\frac{\delta(f)}{1}$. By hypothesis, this element is in $W^{-1}I \cap R =I$. Thus, $f$ lies in $I\dif{n}{A}$. \end{proof} \begin{lemma}\label{diff-localize2} Let $W$ be a multiplicative set in $R$ and $I$ an ideal. Suppose also that $\ModDif{n}{R}{A}$ is finitely presented for all $n$. Then $I\dif{n}{A} (W^{-1}R) = (W^{-1} I)\dif{n}{A}$. \end{lemma} \begin{proof} {For any ideal $J\subseteq W^{-1}R$, we have $J=(J \cap R) W^{-1}R$, so this follows from Lemma~\ref{diff-localize}.} \end{proof} As a consequence of Proposition~\ref{diff-ops-completion}, differential powers of the maximal ideal commute with completion. \begin{lemma}\label{diff-powers-completion} Let $(R,\mathfrak{m},\Bbbk)$ be an algebra with pseudocoefficient field $K$. Then $\mathfrak{m}\dif{n}{K}\widehat{R} = (\mathfrak{m}\widehat{R})\dif{n}{K}$. Consequently, if $(S,\mathfrak{n},L)$ is a power series ring over $L$, then $\mathfrak{n}\dif{n}{L}=\mathfrak{n}^n$ for all $n$. \end{lemma} \begin{proof} We first establish the equality $\mathfrak{m}\dif{n}{K}\widehat{R} = (\mathfrak{m}\widehat{R})\dif{n}{K}$. $(\subseteq)$: Let $f\in R$, and assume $D^{n-1}_{R|K}\cdot f\subseteq \mathfrak{m}$. If $\delta\in D^{n-1}_{\widehat{R}|K}$, then by Proposition~\ref{diff-ops-completion}, there is some $r\in \widehat{R}$ and $\eta\in D^{n-1}_{R|K}$ such that $\delta(f)=r\eta(f)$, which, by hypothesis, lies in $\mathfrak{m}\widehat{R}$. $(\supseteq)$: Since $(\mathfrak{m}\widehat{R})\dif{n}{K}$ is $\mathfrak{m}\widehat{R}$-primary, and every $\mathfrak{m}\widehat{R}$-primary ideal of $\widehat{R}$ is expanded from $R$, it suffices to show that if $f\in R$ and $D_{\widehat{R}|K}^{n-1}\cdot f \subseteq \mathfrak{m}\widehat{R}$, then $D^{n-1}_{R|K}\cdot f\subseteq \mathfrak{m}$.
This is clear since every element of $D^{n-1}_{R|K}$ extends to $D_{\widehat{R}|K}^{n-1}$. Now, suppose that $S$ is a power series ring over $L$. Write $S=L \llbracket x_1, \dots, x_d \rrbracket$ and $\mathfrak{n}=(x_1,\dots, x_d)$. Set $R=L[x_1, \dots, x_d]_{\mathfrak{m}}$, with $\mathfrak{m}=(x_1,\dots, x_d)$. Since $S=\widehat{R}$ and $\mathfrak{n}=\mathfrak{m}\widehat{R}$, the second assertion of the lemma follows from the first. \end{proof} We now introduce a differential version of the splitting prime for $F$-pure rings; this notion, however, is valid in any characteristic. \begin{definition} Let $(R,\mathfrak{m})$ be a local ring, $A$ be a subring, and let $J \subseteq R$ be an ideal. We define the \emph{$A$-differential core of $J$}\index{differential core}\index{${\mathcal P}_A(J)$} by $$ {\mathcal P}_A(J) =\bigcap_{n\in\mathbb{N}}J\dif{n}{A}. $$ The \emph{$A$-differential core of $R$} is ${\mathcal P}_A= {\mathcal P}_A(\mathfrak{m})$. \end{definition} \begin{lemma}\label{LemmaDidealDifPower} Let $R$ be a ring and $A$ be a subring. Then, $I$ is a $D_{R|A}$-ideal if and only if $I\dif{n}{A}=I$ for every integer $n\geq 1$. \end{lemma} \begin{proof} We first show that if $I$ is a $D_{R|A}$-ideal, then $I\dif{n}{A}=I$ for every integer $n\geq 1$. We note that $I\dif{n}{A}\subseteq I$ because if $f\in I\dif{n}{A}$, then $1\cdot f\in I$ as $1\in D^{0}_{R|A}$. We now show the other containment. If $f\in I$, then $D_{R|A} f\subseteq I$ as $I$ is a $D_{R|A}$-ideal. Then, in particular, $D^{n-1}_{R|A} f\subseteq I$, and so, $f\in I\dif{n}{A}$. We now focus on the other implication. We suppose that $I\dif{n}{A}=I$ for every integer $n\geq 1$. Let $f\in I$. Then, $f\in I\dif{n}{A}$ for every $n\in\mathbb{N}$, and so $D^{n-1}_{R|A} f\subseteq I$ for every $n\in\mathbb{N}$. Hence, $D_{R|A} f\subseteq I$, and so, $D_{R|A} I\subseteq I$.
\end{proof} \begin{remark} The previous lemma is not true if one replaces the condition ``$I\dif{n}{A}=I$ for every integer $n\geq 1$'' with ``$I\dif{n}{A}=I$ for some integer $n>1$.'' For example, let $K$ be a field of characteristic zero and $R=K[x^2,xy,y^2]$, with homogeneous maximal ideal $\mathfrak{m}=(x^2,xy,y^2)$. Then, $D^1_{R|K}=R \cdot \langle 1, x \frac{\partial}{\partial x}, y \frac{\partial}{\partial x}, x \frac{\partial}{\partial y}, y \frac{\partial }{\partial y}\rangle$, and $D^1_{R|K}(\mathfrak{m})=\mathfrak{m}$, so $\mathfrak{m}\dif{2}{K}=\mathfrak{m}$. However, {$\frac{1}{2} \frac{\partial^2}{\partial x^2} \in D^2_{R|K}$,} so $x^2 \notin \mathfrak{m}\dif{3}{K}$. \end{remark} We summarize a few properties of differential cores. In particular, we characterize $D$-simplicity using differential cores in Corollary~\ref{CorDifPrimeDsimple}. \begin{proposition}\label{PropDiffPrime} Let $(R,\mathfrak{m})$ be a local ring, $J$ an ideal of $R$, and $A$ be a subring. Then, \begin{enumerate} \item ${\mathcal P}_A(J)$ is a $D_{R|A}$-ideal. \item\label{cPAJ-contains} ${\mathcal P}_A(J)$ contains every $D_{R|A}$-ideal of $R$ contained in $J$. \item\label{diff-prime-primary} ${\mathcal P}_A(J)$ is a primary ideal if $J$ is prime. In particular, ${\mathcal P}_A$ is primary. \item $R/{\mathcal P}_A$ is $D$-simple. \end{enumerate} \end{proposition} \begin{proof} We proceed by parts. \begin{enumerate} \item Let {$f\in {\mathcal P}_A(J)$}, and $\delta\in D^n_{R|A}$. For every $\partial \in D^{m-1}_{R|A}$, we have $\partial \circ \delta\in D^{m+n-1}_{R|A}$. Since $f\in J\dif{m+n}{A}$ for every $m\in\mathbb{N}$, we have that $\partial\delta\cdot f\in J$. Then, $\delta\cdot f\in J\dif{m}{A}$ for every $m\in\mathbb{N}$. Hence, $\delta \cdot f\in {\mathcal P}_A(J)$. \item Let $I$ be a $D_{R|A}$-ideal with $I\subseteq J$. We have that $I\dif{n}{A}\subseteq J\dif{n}{A}$. Then, $$ I=\bigcap_{n\in\mathbb{N}} I\dif{n}{A}\subseteq \bigcap_{n\in\mathbb{N}} J\dif{n}{A}={\mathcal P}_A(J), $$ where the first equality follows from Lemma~\ref{LemmaDidealDifPower}.
\item Let $\mathfrak{p}$ be a minimal prime of ${\mathcal P}_A(J)$. Since $J$ is prime, and ${\mathcal P}_A(J) \subseteq J$, we have that $\mathfrak{p} \subseteq J$. Let $\mathfrak{q}$ be the $\mathfrak{p}$-primary component of ${\mathcal P}_A(J)$. We can write ${\mathcal P}_A(J)$ as the intersection of $\mathfrak{q}$ with some other primary ideals; in particular ${\mathcal P}_A(J) \subseteq \mathfrak{q} \subseteq J$. By Lemma~\ref{PropMinimalPrime}, $\mathfrak{q}$ is a $D_{R|A}$-ideal, and by Part~(\ref{cPAJ-contains}), $\mathfrak{q} \subseteq {\mathcal P}_A(J)$. Thus, $\mathfrak{q} = {\mathcal P}_A(J)$. \item Suppose, on the contrary, that there is a nonzero proper ideal of $R/{\mathcal P}_A$ that is stable under its differential operators. By Lemma~\ref{lemma-D-ideal-from-quotient}, there would then exist a proper $D_{R|A}$-ideal properly containing ${\mathcal P}_A$; since a proper ideal is contained in $\mathfrak{m}$, this would contradict Part~(\ref{cPAJ-contains}).\qedhere \end{enumerate} \end{proof} \begin{corollary}\label{CorDifPrimeDsimple} Let $(R,\mathfrak{m})$ be a local ring, and $A$ be a subring. Then, $R$ is a simple $D_{R|A}$-module if and only if ${\mathcal P}_A=0$. \end{corollary} \begin{proof} This follows immediately from Part~(\ref{cPAJ-contains}) of Proposition~\ref{PropDiffPrime}. \end{proof} Thus, $D$-simplicity means that for every $f \neq 0$ in $R$ there exists a differential operator $\delta$ such that $\delta(f)$ is a unit $u$. By taking $u^{-1} \delta$, one also finds an operator sending $f$ to $1$. An ongoing line of research is the comparison, for finitely generated $K$-algebras, of $D_{R|K}$ with the subring generated by $R$ and the derivations. We now show that these algebras must differ for rings that are $D_{R|K}$-simple, but not regular. \begin{remark}\label{rem:der-simple} Let $K$ be a field of characteristic zero, and $R$ be essentially of finite type over $K$. We can consider the subalgebra $\mathscr{D}\subseteq D_{R|K}$ generated by $R$ and the $K$-linear derivations on $R$.
The minimal primes of the singular locus of $R$ are stable under each derivation of $R$ \cite[Theorem~5]{Seidenberg}, hence are stable under the action of $\mathscr{D}$. It follows that if $R$ is $D$-simple and $\mathscr{D}= D_{R|K}$, then $R$ is regular. This proves a special case of Nakai's conjecture, generalizing other known cases \cite{Ishibashi}. This approach to Nakai's conjecture is employed in the work of Traves \cite{Traves}. \end{remark} \subsection{A Fedder/Glassbrenner-type criterion for $D$-simplicity}\label{FGCriterion} In this subsection, we introduce a criterion for $D$-simplicity motivated by Glassbrenner's criterion \cite{Glassbrenner} for strong $F$-regularity. \begin{lemma}[{\cite[Lemma 1.6]{FedderFputityFsing}}]\label{Fedder} Let $A\subseteq B$ be Gorenstein rings, and suppose that $B$ is a finitely generated free $A$-module. \begin{enumerate} \item[(i)] $\operatorname{Hom}_A(B,A) \cong B$ as $B$-modules. \item[(ii)] If $\Phi$ is a generator for $\operatorname{Hom}_A(B,A)$ as a $B$-module, $\a \subseteq A$ and $\b \subseteq B$ are ideals, and $x\in B$, then $(x \Phi)(\b) \subseteq \a$ if and only if $x \in (\a B : \b)$. \end{enumerate} \end{lemma} \begin{setup}\label{fin-type} Let $K$ be a field, $K[x_1,\dots,x_d]$ be a polynomial ring, and $\mathfrak{m}=(x_1,\dots,x_d)$. Let $R=S/I$, where $S=K[x_1,\dots,x_d]_{\mathfrak{m}}$. {Then $\ModDif{}{S}{K} = W^{-1}K[x_1,\dots,x_d,\tilde{x}_1,\dots,\tilde{x}_d]$, where $W$ is the multiplicative set consisting of products of elements of $K[x_1,\dots,x_d]\setminus \mathfrak{m}$ and of the corresponding set in which $x_1,\ldots,x_d$ are replaced by $\tilde{x}_1,\ldots,\tilde{x}_d$}. Set $\Delta^{[n]}_{S|K}=((x_1-\tilde{x}_1)^n,\dots,(x_d-\tilde{x}_d)^n)$, and $\ModDif{[n]}{S}{K}=\ModDif{}{S}{K}/\Delta^{[n]}_{S|K}$. Then $\ModDif{}{R}{K}$ is naturally a quotient of $\ModDif{}{S}{K}$, and we write $\Delta^{[n]}_{R|K}, \ModDif{[n]}{R}{K}$ for the images of $\Delta^{[n]}_{S|K}, \ModDif{[n]}{S}{K}$ under this quotient map.
By abuse of notation, we use $d$ for the universal differential in each of these settings. \end{setup} In the context of Setup~\ref{fin-type}, for an ideal ${J}$ of $R$ we define \[J^{\Fdif{n}} := \{ r \in R \ | \ \delta(r)\in J \ \text{for all} \ \delta\in D^{[n]}_{R|K}\} \]\index{$J^{\Fdif{n}}$} where $D^{[n]}_{R|K}$ is the set of $\Delta^{[n]}_{R|K}$-differential operators of $R$. This definition depends not only on ${J}$, but also on the presentation of $R$. We revisit this definition in Section~\ref{five}. \begin{proposition}\label{fedformula} In the context of Setup~\ref{fin-type}, let $J$ be an ideal of $R$, and let $J'$ be the preimage of $J$ in $S$. Then $$J^{\Fdif{n}}=\operatorname{Im}\left(\displaystyle \big( d(J') \ModDif{}{S}{K} + \Delta^{[n]}_{S|K} \big) :_{\ModDif{}{S}{K}} \Big( \big( d(I) \ModDif{}{S}{K} + \Delta^{[n]}_{S|K} \big) :_{\ModDif{}{S}{K}} I \ModDif{}{S}{K} \Big)\right)$$ and $$J\dif{n}{K}=\operatorname{Im}\left({\displaystyle \big( d(J') \ModDif{}{S}{K} + \Delta^{[n]}_{S|K} \big) :_{\ModDif{}{S}{K}} \Big( \big( d(I) \ModDif{}{S}{K} + \Delta^{[n]}_{S|K} \big) :_{\ModDif{}{S}{K}} \big( I \ModDif{}{S}{K} + \Delta^n_{S|K} \big) \Big)}\right),$$ where the images are taken in $R$. \end{proposition} \begin{proof} For the first part, by Proposition~\ref{representing-differential}, we have that $\delta(f)\in J$ for all $\delta\in D^{[n]}_{R|K}$ if and only if for every ${\phi\in \operatorname{Hom}_R( \ModDif{[n]}{R}{K}, R)}$, we have $\phi( {d(f)})\in J$. We can write $${\ModDif{[n]}{R}{K} = \ModDif{[n]}{S}{K} / \big( I\ModDif{[n]}{S}{K} + d(I)\ModDif{[n]}{S}{K}\big)}.$$ Since $\ModDif{[n]}{S}{K}$ is Gorenstein and free over $S$, Lemma~\ref{Fedder} applies.
If $\Phi$ is a generator for $\operatorname{Hom}_S(\ModDif{[n]}{S}{K},S)$, then \[(r\Phi)\big( I\ModDif{[n]}{S}{K} + d(I)\ModDif{[n]}{S}{K}\big) \subseteq I\] if and only if \[ r \in \big( I\ModDif{[n]}{S}{K} :_{\ModDif{[n]}{S}{K}} \big( I\ModDif{[n]}{S}{K} + d(I)\ModDif{[n]}{S}{K}\big) \big)=: W,\] so $\operatorname{Hom}_R( \ModDif{[n]}{R}{K}, R)$ consists of images of maps $r \Phi$, with $r \in W$. Then $\overline{(r \Phi)}(\overline{d(f)})\in J$ if and only if $\Phi(r \cdot d(f) ) \in J' $. Thus, $f\in J^{\Fdif{n}}$ is equivalent to $\Phi(d(f) \cdot W)\subseteq J'$, which in turn is equivalent to $(d(f) \cdot \Phi)(W) \subseteq J'$. Applying Lemma~\ref{Fedder} again, $d(f)$ satisfies this condition if and only if $d(f) \in \big( J' \ModDif{[n]}{S}{K}:_{\ModDif{[n]}{S}{K}} W \big)$. By taking preimages in $\ModDif{}{S}{K}$, and simplifying, this occurs if and only if \[ d(f) \cdot \Big( \big( I \ModDif{}{S}{K} + \Delta^{[n]}_{S|K} \big) :_{\ModDif{}{S}{K}} d(I) \ModDif{}{S}{K} \Big) \subseteq J' \ModDif{}{S}{K} + \Delta^{[n]}_{S|K}. \] By switching the left inclusion $S \rightarrow \ModDif{}{S}{K}$ and the right inclusion $d: S \rightarrow \ModDif{}{S}{K}$, one obtains the statement of the proposition. The proof of the second part proceeds similarly. We note that $\Delta^{[n]}_{S|K}$ is contained in $\Delta^{n}_{S|K}$, so every element of $H:=\operatorname{Hom}_S(\ModDif{n}{S}{K},S)$ is the image of some element of $\operatorname{Hom}_S(\ModDif{[n]}{S}{K},S)$. In particular, if $\Phi$ is a generator of the latter module, as in the proof of the first part, \[(r\Phi)\big( I\ModDif{[n]}{S}{K} + d(I)\ModDif{[n]}{S}{K} + \Delta^n_{S|K}\ModDif{[n]}{S}{K} \big) \subseteq I\] if and only if \[ r \in \big( I\ModDif{[n]}{S}{K} :_{\ModDif{[n]}{S}{K}} \big( I\ModDif{[n]}{S}{K} + d(I)\ModDif{[n]}{S}{K} + \Delta^n_{S|K}\ModDif{[n]}{S}{K}\big) \big).\] The rest is analogous to the previous part. \end{proof} { We are now ready to state the criterion for $D$-simplicity.
} \begin{theorem}\label{criterion} In the context of Setup~\ref{fin-type}, $R$ is $D_{R|K}$-simple if and only if for every $f \in S$ whose image is nonzero in $R$, there is some $n$ such that \[ f \cdot \Big( \big( d(I) \ModDif{}{S}{K} + \Delta^{[n]}_{S|K} \big) :_{\ModDif{}{S}{K}} I \ModDif{}{S}{K} \Big) \not\subseteq \mathfrak{m}^{[n]} \ModDif{}{S}{K} + d(\mathfrak{m}) \ModDif{}{S}{K} \quad \text{in} \ \ModDif{}{S}{K}. \] \end{theorem} \begin{proof} We note first that $R$ is $D_{R|K}$-simple if and only if for every nonzero element $r$ of $R$, there is a differential operator $\delta$ such that $\delta(r)=1$; equivalently, $\delta(r)\notin \mathfrak{m}$. By Lemma~\ref{I-diff-ops}, $\delta\in \operatorname{Hom}_K(R,R)$ is an element of $D_{R|K}$ if and only if $\delta$ is a $\Delta^{[n]}_{R|K}$-differential operator for some $n$. Thus, $R$ is $D_{R|K}$-simple if and only if for every nonzero $f\in R$, there is some $n$ such that $f\notin \mathfrak{m}^{\Fdif{n}}$. The theorem then follows from Proposition~\ref{fedformula} and the observation that $d(\mathfrak{m})\ModDif{}{S}{K} + \Delta^{[n]}_{S|K} = \mathfrak{m}^{[n]} \ModDif{}{S}{K} + d(\mathfrak{m})\ModDif{}{S}{K}$. \end{proof} \section{Differential signature} In this section, we introduce our main object of study. As noted in the introduction, there are multiple definitions of the differential signature, which all agree in the case of an algebra with a pseudocoefficient field or a complete local ring; each of these definitions provides different insights into this limit. Our first goal below is to establish the equivalence of the definitions. We then proceed to collect some of the basic properties of differential signature. \subsection{Differential signature} \begin{definition} Let $(R,\mathfrak{m})$ be a local ring, $A$ be a subring, and let $d=\dim(R)$.
We define the \textit{differential signature}\index{differential signature}\index{$\dm{A}{R}$} of $R$ by \[ \dm{A}(R)=\limsup\limits_{n\to\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{A})}{n^d / d!} . \] \end{definition} \begin{example}\label{reg-1} Let $K$ be a field, and $(R,\mathfrak{m})$ be a graded or local ring of dimension $d$, essentially of finite type and smooth over $K$. Then, by Example~\ref{diff-powers-regular}, $\mathfrak{m}\dif{n}{K}=\mathfrak{m}^n$ for all $n>0$. Now, we compute \[ \dm{K}(R)=\limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{K})}{n^d/d!} = \limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m}^n)}{n^d/d!} =e(R)= 1.\] \end{example} \begin{example}\label{cubic} Let $R=\mathbb{C}[x,y,z]/(x^3+y^3+z^3)$. Then $\mathfrak{m}\dif{n}{\mathbb{C}}=\mathfrak{m}$ for all $n>0$. Indeed, one always has that $\mathfrak{m}\dif{n}{\mathbb{C}} \subseteq \mathfrak{m}$ for each $n$. If $f\in \mathfrak{m}$ is homogeneous, and $\delta\in D^{n-1}_{R|\mathbb{C}}$ is homogeneous, then $\deg(\delta(f)) \geq \deg(f)>0$ by Example~\ref{example-BGG}, so $\delta(f)\in \mathfrak{m}$. Again by Example~\ref{example-BGG}, $D_{R|\mathbb{C}}$ is graded, so the previous computation implies that $\mathfrak{m}\subseteq \mathfrak{m}\dif{n}{\mathbb{C}}$ as well. Now, we compute \[ \dm{\mathbb{C}}(R)=\limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{\mathbb{C}})}{n^2/2!} = \limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m})}{n^2/2!}= \limsup_{n\rightarrow\infty}\frac{1}{n^2/2} =0.\] \end{example} In Theorem~\ref{Possiganeg}, we generalize this example to show that for cones over smooth curves of genus at least $1$, the differential signature is always $0$. \subsection{Principal parts signature} We now proceed to define a signature in terms of the free ranks of the modules of principal parts.
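As a guiding computation (a sanity check that is not needed in the sequel): if $(R,\mathfrak{m})$ is local or graded, of dimension $d$, essentially of finite type and smooth over $K$, then $\ModDif{n}{R}{K}$ is free of rank $\binom{n+d}{d}$ (local freeness is standard for smooth algebras; see also Proposition~\ref{RankDiff} for the rank), so
\[
\frac{\mathrm{freerank}_R(\ModDif{n}{R}{K})}{\mathrm{rank}_R(\ModDif{n}{R}{K})}=1 \quad \text{for every } n,
\]
and the signature defined below is $1$, in agreement with Example~\ref{reg-1}.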
We recall that the \emph{free rank}\index{free rank} of a module $M$ is the maximal rank $a$ of a free summand in any direct sum decomposition $M=R^{a}\oplus M'$. \begin{definition} Let $A\subseteq R$ be an extension of rings that is essentially of finite type. The \emph{principal parts signature}\index{principal parts signature}\index{$\pps{A}(R)$} of $R$ over $A$ is \[\pps{A}(R):=\limsup_{n \rightarrow \infty} \frac{\mathrm{freerank}_R(\ModDif{n}{R}{A})}{\mathrm{rank}_R(\ModDif{n}{R}{A})}.\] \end{definition} The rank of $\ModDif{n}{R}{A}$ may not always be defined; we say that the principal parts signature is not defined in this case. Of course, this is not an issue when $R$ is a domain. To work with this definition, we use the following two characterizations of free rank. \begin{lemma}\label{freerankinterpretation}\label{freerank-conditions} Let $(R,\mathfrak{m},\Bbbk)$ be a local or graded ring and $M$ be a finitely generated $R$-module. \begin{enumerate} \item Define a submodule \[ \NF{M} := \{ m \in M \ | \ \forall \phi \in \operatorname{Hom}_R(M,R), \ \phi(m) \in \mathfrak{m} \}. \]\index{$\NF{M}$} Then, one has that $\mathrm{freerank}_R(M)=\lambda_R(M/\NF{M}) =\dim_{\Bbbk} (M/\NF{M}) $. \item Consider the short exact sequence of $R$-modules, \[ 0 \longrightarrow \operatorname{Hom}_R(M, {\mathfrak m} ) \longrightarrow \operatorname{Hom}_R(M,R ) \longrightarrow Q \longrightarrow 0 . \] Then the free rank of $M$ is the same as the $\Bbbk$-dimension of the quotient $Q$. \end{enumerate} \end{lemma} \begin{proof} The first part is well known {\cite[Discussion~6.7]{CraigSurvey}}. We now focus on the second part. The quotient $Q$ is a module over $\Bbbk$. Let $M = F \oplus N$ with a free module $F \cong R^s$.
We have \[ \operatorname{Hom}_{ R } \left( M , \mathfrak{m} \right) \cong \operatorname{Hom}_{ R } \left( F , \mathfrak{m} \right) \oplus \operatorname{Hom}_{ R } \left( N , \mathfrak{m} \right) \] and \[ \operatorname{Hom}_{ R } \left( M , R \right) \cong \operatorname{Hom}_{ R } \left( F , R \right) \oplus \operatorname{Hom}_{ R } \left( N , R \right) .\] The quotient is \[ \begin{aligned} Q & = \left( \operatorname{Hom}_{ R } \left( F , R \right) \oplus \operatorname{Hom}_{ R } \left( N , R \right) \right)/ \left( \operatorname{Hom}_{ R } \left( F , {\mathfrak m} \right) \oplus \operatorname{Hom}_{ R } \left( N , {\mathfrak m} \right) \right) \\ & \cong \operatorname{Hom}_{ R } \left( F , R \right) / \operatorname{Hom}_{ R } \left( F , {\mathfrak m} \right) \oplus \operatorname{Hom}_{ R } \left( N , R \right) /\operatorname{Hom}_{ R } \left( N , {\mathfrak m} \right) \\ & \cong R^s/ {\mathfrak m}^{\oplus s} \oplus \operatorname{Hom}_{ R } \left( N , R \right) /\operatorname{Hom}_{ R } \left( N , {\mathfrak m} \right) \\ & \cong \Bbbk^s \oplus \operatorname{Hom}_{ R } \left( N , R \right) /\operatorname{Hom}_{ R } \left( N , {\mathfrak m} \right). \end{aligned} \] So the $\Bbbk$-dimension of $Q$ is at least $s$. If $F$ is a submodule where the free rank of $M$ is attained, then $N$ does not have a nontrivial free summand and there is no surjective homomorphism from $N$ to $R$. Hence $\operatorname{Hom}_{ R } \left( N , R \right) =\operatorname{Hom}_{ R } \left( N , \mathfrak{m} \right)$ and the right summand is $0$. 
\end{proof} \begin{definition} A \emph{unitary differential operator}\index{unitary differential operator} on a local or graded ring $(R,\mathfrak{m})$ is a differential operator $\delta :R \rightarrow R$ such that the image of $\delta$ is not contained in $\mathfrak{m}${; in the graded case, we assume $\delta$ is graded.} \end{definition} {Suppose that $K\subseteq R$.} If the only units of $R$ are the nonzero constants of the base field, this means that the partial differential equation $\delta (f)=1$ has a solution $f \in R$ for some {$\delta \in D_{R\vert K}$}. In what follows, for a local {or graded} ring $R$ and a differential operator $\delta$ on $R$, we denote by $\delta'$ the induced differential operator with values in $R/{\mathfrak m}$. \begin{lemma}\label{unitary} Let $(R,\mathfrak{m},\Bbbk)$ be a local {or graded} ring containing a {coefficient field $K$, so that $K\cong \Bbbk$}. Let $\delta$ be a $K$-linear differential operator from $R$ to $R$ of order at most $n$; {in the graded case, we assume that $\delta$ is graded}. Let $\delta'$ be the induced operator $R \xrightarrow{\delta} R \twoheadrightarrow R/\mathfrak{m}=\Bbbk$. Then the following are equivalent. \begin{enumerate} \item The $R$-linear map $\delta :P^n_{R|K} \rightarrow R$ is surjective. \item There is a unit ({of degree zero in the graded case}) inside the image of the differential operator $\delta$. \item $\delta'$ is surjective. \item There exists an element $f \in R$ such that $\delta'(f)=1$. \item {$\delta$ is unitary.} \end{enumerate} \end{lemma} \begin{proof} The equivalence of (2), (3), (4), {and (5)} is clear. If (2) holds, let $f \in R$ with $\delta(f)=u \notin {\mathfrak m}$. Let $d^n(f) \in \ModDif{n}{R}{K}$ be the image of $f$ under the $n$th universal differential map. Then $\delta (d^n(f))$ is the unit $u$, and since $\delta$ is an $R$-linear form on the module of principal parts, it must be surjective. Suppose now that (2) does not hold.
Then the image of the differential operator $\delta$ is inside the maximal ideal $\mathfrak m$ of $R$. Since $\ModDif{n}{R}{K}$ is generated as an $R$-module by the images $d^n(f)$, $ f \in R$ \cite[Proposition~16.3.8]{EGAIV}, also the image of $\delta$ considered as a linear form on $\ModDif{n}{R}{K}$ lies inside the maximal ideal, and (1) does not hold. \end{proof} Note that there are many surjective differential operators from $R$ to $K$: for example, the space $\operatorname{Der}_K(R,K)$ of $K$-valued derivations is, under certain conditions, just the tangent space. But such an operator is in general not a unitary differential operator, for which we require that it comes from an operator with values in $R$. \begin{lemma}\label{unitarysystem} Let $(R,\mathfrak{m},\Bbbk)$ be a local ring containing a field $K$. Let $\delta_1 , \ldots, \delta_t$ be differential operators from $R$ to $R$ of order at most $n$. Let $\delta'_i$ be the induced operators $R \xrightarrow{\delta_i} R \twoheadrightarrow R/\mathfrak{m}=\Bbbk$. Write $\delta_i=\phi_i \circ d^n$. Then the following are equivalent. \begin{enumerate} \item The $R$-linear map $\Phi=(\phi_1, \ldots, \phi_t) :\ModDif{n}{R}{K} \rightarrow R^t$ is surjective. \item The $\delta'_i$ are linearly independent over $\Bbbk$, where the vector space structure is given by postmultiplication. \end{enumerate} \end{lemma} \begin{proof} We consider the $R$-linear map \[ \Phi':\ModDif{n}{R}{K} \xrightarrow{(\phi_1, \ldots, \phi_t)} R^t \xrightarrow{\pi} \Bbbk^t . \] If (1) holds, then $\Phi$ is surjective and hence also $\Phi'$ is surjective. The maps $\pi \circ \phi_i$ factor through $\Bbbk$-linear maps $\phi'_i:\ModDif{n}{R}{K}\otimes_R \Bbbk \rightarrow \Bbbk$. The map $(\phi'_1,\dots,\phi'_t)$ is surjective, so the component maps are linearly independent.
If the differential operators $\delta'_i$ were linearly dependent, then, since the image of $d^n$ generates $\ModDif{n}{R}{K}$ as an $R$-module, this would contradict the linear independence of the maps $\phi'_i$. Conversely, if (2) holds, then $\Phi'$ is surjective. By Nakayama's Lemma, $\Phi$ is surjective as well. \end{proof} \begin{definition} A set of differential operators forms an \emph{independent system of unitary operators}\index{independent system of unitary operators} if it satisfies the equivalent conditions of the previous lemma. \end{definition} \begin{example} The differential operators $\frac{\partial}{\partial y}$ and $ x \frac{\partial}{\partial x} + \frac{\partial}{\partial y}$ on $K[x,y]_{(x,y)}$ show that the properties from Lemma~\ref{unitarysystem} are not equivalent to the property that the differential operators themselves are $K$-linearly independent, even if they both are unitary. The two operators are independent over $K$, but as derivations to $K$ they are the same. \end{example} \begin{remark} \label{localunitaryoperators} We have the short exact sequences of $R$-modules \[ \xymatrix{ 0 \ar[r] & \operatorname{Hom}_R(\ModDif{n}{R}{K},\mathfrak{m}) \ar[r] \ar[d]^{\cong} & \operatorname{Hom}_R(\ModDif{n}{R}{K},R) \ar[r] \ar[d]^{\cong} & Q_n \ar[r] \ar[d]^{\cong} & 0 \\ 0 \ar[r] & D^n_{R|K}(R,\mathfrak{m}) \ar[r] & D^n_{R|K} \ar[r] & Q_n \ar[r] & 0 } \] which are identified by the universal property of the module of principal parts, and where the $\Bbbk$-dimension of the quotient $Q_n$\index{$Q_n$} is the number of linearly independent unitary operators, which equals the free rank of $ P^n_{R|K}$ by Lemma~\ref{freerankinterpretation} (2). \end{remark} \begin{corollary} Let $(R,\mathfrak{m},\Bbbk)$ be a local ring containing a field $K$. Then the free rank of $\ModDif{n}{R}{K}$ is equal to the maximal number of independent unitary operators of order at most $n$. \end{corollary} \begin{proof} This is clear from Lemma~\ref{unitarysystem}.
\end{proof} \begin{lemma} \label{ppsignaturelocalize} Let $R$ be a domain essentially of finite type over a field $K$ and let $W \subseteq R$ be a multiplicative subset. Then we have the inequality \[\pps{K}(R) \leq \pps{K}(W^{-1} R).\] This holds in particular for a localization of a local $K$-algebra essentially of finite type. \end{lemma} \begin{proof} By Proposition~\ref{diffmod-localize}, we have $P^n_{W^{-1}R|K} \cong W^{-1} P^n_{R|K} $. Free ranks can only increase by localizing, and the rank is preserved under localization, since it coincides with generic rank. \end{proof} We also have the following weak equisingularity statement. \begin{lemma} Let $R$ be a domain of finite type over a field $K$ and let $\mathfrak{p} \subseteq R$ be a prime ideal. Then we have the equality \[\pps{K}(R_\mathfrak{m}) = \pps{K}( R_\mathfrak{p}) \] for a very general maximal ideal $\mathfrak{m} \in V(\mathfrak{p})$; i.e., there exists a countable union of closed subsets of $V(\mathfrak{p})$ of smaller dimension such that all maximal ideals outside of this set have this property. \end{lemma} \begin{proof} For each $n$ we have a decomposition \[ P^n_{ R_{\mathfrak{p} }|K } \cong (P^n_{R|K})_\mathfrak{p} \cong R_\mathfrak{p}^{r_n} \oplus M ,\] where $r_n$ is the free rank of $P^n_{ R_\mathfrak{p}|K} $. Since everything is finitely generated, there exists $f_n \notin \mathfrak{p}$ such that also $ P^n_{ R_{f_n}|K} \cong R_{f_n}^{r_n} \oplus N$ holds. Then $V(\mathfrak{p}) \cap \bigcup_{n \in {\mathbb N } } V(f_n)$ describes the exceptional locus. \end{proof} \subsection{Comparison of the two signatures} We now proceed to relate the differential signature and the principal parts signature. \begin{proposition}\label{freerank} Let $(R,\mathfrak{m},\Bbbk)$ be a local {or graded} ring with dimension $d$.
\begin{enumerate} \item If $(R,\mathfrak{m},\Bbbk)$ is an algebra with pseudocoefficient field $K$, then \[ \lambda_R(R/\mathfrak{m}\dif{n+1}{K}) = \mathrm{freerank}_R(\ModDif{n}{R}{K}). \] \item If $(R,\mathfrak{m},\Bbbk)$ is a complete local ring with coefficient field $K\cong \Bbbk$, then \[ \lambda_R(R/\mathfrak{m}\dif{n+1}{K}) = \mathrm{freerank}_R(\wModDif{n}{R}{K})=\mathrm{freerank}_R(\ModDif{n}{R}{K}).\] \end{enumerate} \end{proposition} \begin{proof} First, we consider Case~(1). Let \[\NF{\ModDif{n}{R}{K}}=\{ p \in \ModDif{n}{R}{K} \ | \ \phi(p)\in \mathfrak{m} \quad \text{for all} \ \ \phi\in \operatorname{Hom}_R(\ModDif{n}{R}{K},R)\}\] where we regard $\ModDif{n}{R}{K}$ as an $R$-module via the left factor. By Lemma~\ref{freerank-conditions} (1), \[{\mathrm{freerank}_R(\ModDif{n}{R}{K})=\lambda_R(\ModDif{n}{R}{K} / \NF{\ModDif{n}{R}{K}})}.\] We analyze the map {of $K$-vector spaces} $\bar{d}^n:R/\mathfrak{m}\dif{n+1}{K}\rightarrow \ModDif{n}{R}{K} / \NF{\ModDif{n}{R}{K}}$ induced by $d^n:R\rightarrow \ModDif{n}{R}{K}$. By Proposition~\ref{universaldifferential}, we have that for $r\in R$, $\delta(r)\in \mathfrak{m}$ for all $\delta\in D^n_{R|K}$ if and only if $\phi(d^n(r))\in \mathfrak{m}$ for all $\phi \in \operatorname{Hom}_R(\ModDif{n}{R}{K},R)$. That is, $r\in \mathfrak{m}\dif{n+1}{K}$ if and only if $d^n(r)\in \NF{\ModDif{n}{R}{K}}$. Thus $\bar{d}^n$ is well defined and injective. Now, notice that $\mathfrak{m} \ModDif{n}{R}{K} \subseteq \NF{\ModDif{n}{R}{K}}$, where, again, multiplication by elements of $R$ occurs via the left factor. We claim that the map induced by $d^n$ from $R$ to $S:=\ModDif{n}{R}{K}/\mathfrak{m} \ModDif{n}{R}{K} \cong (\Bbbk \otimes_K R)/\overline{\Delta^{n+1}_{R|K}}$ is surjective, where $\overline{\Delta^{n+1}_{R|K}}$ is the image of ${\Delta^{n+1}_{R|K}}$ in $\Bbbk \otimes_K R$. It follows from the claim that $\bar{d}^n$ is surjective. The claim is clear when $K=\Bbbk$.
In the general case, there exists a primitive element $u$ for $\Bbbk$ over $K$; let $f$ be the minimal polynomial of $u$ over $K$. Setting $\delta=u\otimes 1 - 1\otimes u$, we have that $\Bbbk \otimes_K R$ is generated as an algebra over $R$ by $\delta$, where we take $R$ to be the image of $1\otimes R$. Then, in $S$, $\delta^{n+1}=0$. By applying the Taylor expansion and the definition of~$f$, \[\begin{aligned}0=f(u\otimes 1)&=f(1\otimes u+\delta)=f(1\otimes u)+\delta \, f'(1\otimes u) + \delta^2 \, f_2(1\otimes u)+\cdots\\ &= \delta \, f'(1\otimes u) + \delta^2 \, f_2(1\otimes u)+\cdots + \delta^n \, f_n(1\otimes u), \end{aligned}\] where $f_i=f^{(i)}/i!$, which is defined in any characteristic. By separability, $f'(1\otimes u)$ is a unit, so \[T=R[\delta]/(\delta^{n+1}, \delta+ \delta^2 (f_2/f')(1\otimes u) + \cdots + \delta^n (f_n/f')(1\otimes u))\] surjects onto $S$. But, $T$ is in fact isomorphic to $R$ itself. Indeed, if not, let $t= r + r_a \delta^a+ \cdots + r_n \delta^n$ be a representative of an element of $T\setminus R$. Then, in $T$, this element is equal to $t-r_a \delta^{a-1}(\delta+ \delta^2 (f_2/f')(1\otimes u) + \cdots + \delta^n (f_n/f')(1\otimes u))$, which can be written as $r + r'_{a+1}\delta^{a+1} + \cdots + r'_n \delta^n$. Applying this at most $n$ times gives a representative for $t$ in $R$. Thus, the map induced by $d^n$ gives a surjection from $R$ to $S$, which establishes the claim, {so $R/\mathfrak{m}\dif{n+1}{K}$ and $\ModDif{n}{R}{K} / \NF{\ModDif{n}{R}{K}}$ are $K$-vector spaces of the same (finite) dimension. Then, \[ \lambda_R(R/\mathfrak{m}\dif{n+1}{K}) = \frac{\dim_K(R/\mathfrak{m}\dif{n+1}{K})}{\dim_K(\Bbbk)} = \frac{\dim_K(\ModDif{n}{R}{K} / \NF{\ModDif{n}{R}{K}})}{\dim_K(\Bbbk)} = \lambda_R(\ModDif{n}{R}{K} / \NF{\ModDif{n}{R}{K}}).\]} The argument for the first equality in Case~(2) is entirely analogous, with the use of Proposition~\ref{representing-differential} replaced by Proposition~\ref{represent-complete}. 
It remains to show that $\mathrm{freerank}_R(\wModDif{n}{R}{K})=\mathrm{freerank}_R(\ModDif{n}{R}{K})$ for each $n$. We now show that any free $R$-summand $F$ of $\wModDif{n}{R}{K}$ is a free summand of $\ModDif{n}{R}{K}$ and vice versa. Let $F$ be a free summand of $\wModDif{n}{R}{K}$. By Lemma~\ref{separatedquot}, $\wModDif{n}{R}{K}\cong \sep{\ModDif{n}{R}{K}}$, so the completion map $\ModDif{n}{R}{K}\rightarrow \wModDif{n}{R}{K}$ is surjective. It follows that the inclusion of $F$ into $ \wModDif{n}{R}{K}$ factors through $\ModDif{n}{R}{K}$, so $F$ is a free summand of $\ModDif{n}{R}{K}$. Conversely, let $F$ be a free summand of $\ModDif{n}{R}{K}$. Since $F$ is a complete module, the splitting map from $\ModDif{n}{R}{K}$ to $F$ factors through $\wModDif{n}{R}{K}$, so $F$ is a free summand of $\wModDif{n}{R}{K}$. \end{proof} \begin{remark} \label{equalityvariant} In the case of a localization $R$ of a finitely generated $K$-algebra over a field $K$ at a maximal ideal with residue class field $K$, one may prove the previous proposition slightly differently by showing that the natural map \[D^{n}_{R|K} /D^n_{R|K} (R,\mathfrak{m}) \rightarrow \operatorname{Hom}_K(R/ \mathfrak{m}\dif{n+1}{K} , K) ,\, E \longmapsto \eta \circ E , \] is an isomorphism of $K$-vector spaces. Here $\eta:R \rightarrow R/\mathfrak{m} \cong K$ denotes the projection and the left hand side is $Q_n$ in the notation of Remark \ref{localunitaryoperators}, whose $K$-dimension is the free rank of the module of principal parts. The map is well defined, since $E \in D^{n}_{R|K}$ sends $ \mathfrak{m}\dif{n+1}{K} $ to $\mathfrak{m}$ and $E \in D^n_{R|K} (R,\mathfrak{m})$ is sent to $0$. If $E \notin D^n_{R|K} (R,\mathfrak{m}) $, then there exists $f \in R$ with $E(f) \notin \mathfrak{m}$ and so $\eta \circ E \neq 0$, which gives the injectivity. To prove surjectivity, suppose that $U \subseteq \operatorname{Hom}_K(R/ \mathfrak{m}\dif{n+1}{K} , K)$ is the image space. 
Then there exists a subspace $W \subseteq R/ \mathfrak{m}\dif{n+1}{K} $ such that \[U=W^\perp =\{ \varphi:R/ \mathfrak{m}\dif{n+1}{K} \rightarrow K \mid \varphi(W)=0 \}.\] Assume that $U$ is not the full space. Then $W \neq 0$ and there exists $h \in W$, $h \neq 0$. In particular, $h \notin \mathfrak{m}\dif{n+1}{K}$ and so there exists $E \in D^n_{R|K}$ with $E(h) \notin \mathfrak{m}$. But then $\eta \circ E$ does not annihilate $W$ and so it does not belong to $ U$, a contradiction. \end{remark} \begin{remark} The hypothesis that $(R,\mathfrak{m},\Bbbk)$ is an algebra with pseudocoefficient field $K$ can be weakened slightly in Proposition~\ref{freerank}. Suppose that $R$ is a local $K$-algebra essentially of finite type, and the field extension $K \to \Bbbk$ is algebraic and separably generated. The proof of Proposition~\ref{freerank}(1) goes through with only minor modifications: it is no longer true that there exists a primitive element $u$, but the argument for the surjectivity of $\bar{d}^n$ follows in the same way. If $(R,\mathfrak{m},\Bbbk)$ is a local ring essentially of finite type over a field of characteristic zero, then there exists a subfield $K$ of $R$ such that $K \to \Bbbk$ is algebraic and separably generated, and, hence, for which $\lambda_R(R/\mathfrak{m}\dif{n+1}{K}) = \mathrm{freerank}_R(\ModDif{n}{R}{K})$ for all $n$. \end{remark} The previous result allows us to give a different characterization of differential signature, which suggests that the module of principal parts works as a characteristic-free analogue of $R^{1/p^e}$. \begin{theorem}\label{ThmDiffSigRanks} Let $(R,\mathfrak{m},\Bbbk)$ be a domain that is an algebra with pseudocoefficient field $K$, and assume that $\operatorname{Frac}(R)/K$ is separable. Then, $\dm{K}(R)=\pps{K}(R)$.
\end{theorem} \begin{proof} We have that \[ \dm{K}(R)=\limsup\limits_{n\to\infty}\frac{\lambda(R/\mathfrak{m}\dif{n}{K})}{n^d / d!} = \limsup\limits_{n\to\infty}\frac{\mathrm{freerank}_R (\ModDif{n}{R}{K})}{n^d / d!}, \] where the last equality follows by Proposition~\ref{freerank}. By Proposition~\ref{RankDiff}, $\operatorname{rank} (\ModDif{n}{R}{K})=\binom{d+n}{d}$. Since $\lim_{n\to \infty}\frac{n^d/d!}{\binom{d+n}{d}}=1,$ we obtain that \[ \limsup\limits_{n\to\infty}\frac{\mathrm{freerank}_R (\ModDif{n}{R}{K})}{n^d / d!}= \limsup\limits_{n\to\infty}\frac{\mathrm{freerank}_R (\ModDif{n}{R}{K})}{\operatorname{rank}{(\ModDif{n}{R}{K})} }, \] which concludes the proof. \end{proof} We do not know whether the two definitions agree when one relaxes the assumption on the existence of a pseudocoefficient field. There is, however, an inequality that holds under a much weaker assumption. We prepare for this with a straightforward lemma. \begin{lemma}\label{lem:Lunz} Let $K$ be a field, and $(R,\mathfrak{m},\Bbbk)$ be a local $K$-algebra essentially of finite type over $K$, and assume that $\Bbbk$ is separable over $K$. Let $\lambda_1,\dots,\lambda_t$ be units of $R$ such that their images in $\Bbbk$ form a separating transcendence basis for $\Bbbk$ over $K$. Let $T=R[x_1,\dots,x_t]$, and $S=T_{\mathfrak{n}}$, where $\mathfrak{n}=\mathfrak{m}+(x_1-\lambda_1,\dots,x_t-\lambda_t)$. Then, $(S,\mathfrak{n}, \Bbbk)$ is an algebra with pseudocoefficient field $K(x_1,\dots,x_t)$, $${x_1-\lambda_1},\dots,{x_t-\lambda_t}$$ is a regular sequence on $S$, and \[\sum_{s<n}\mathfrak{m}\dif{n-s}{K} (x_1-\lambda_1,\dots,x_t-\lambda_t)^s \subseteq \mathfrak{n}\dif{n}{K(\underline{x})}.\] \end{lemma} \begin{proposition} Let $K$ be a field, and $(R,\mathfrak{m},\Bbbk)$ be a local $K$-algebra essentially of finite type over {$K$}. Assume that $\Bbbk$ is separable over $K$. Then, $\pps{K}(R) \leq \dm{K}(R)$. \end{proposition} \begin{proof} We use the notation of Lemma~\ref{lem:Lunz}.
For the ring $S$, Proposition~\ref{freerank} applies, so $\mathrm{freerank}_S(\ModDif{n}{S}{K(\underline{x})})=\ell_S(S/\mathfrak{n}\dif{n}{K(\underline{x})})$. Since $S$ is obtained from $R$ by base change and localization, using Proposition~\ref{diffmod-localize}, we find that $\mathrm{freerank}_S(\ModDif{n}{S}{K(\underline{x})})=\mathrm{freerank}_R(\ModDif{n}{R}{K})$. Applying Lemma~\ref{lem:Lunz}, we obtain an inequality \[\mathrm{freerank}_R(\ModDif{n}{R}{K})\leq \ell_S\left(\frac{S}{\sum_{s<n}\mathfrak{m}\dif{n-s}{K} (x_1-\lambda_1,\dots,x_t-\lambda_t)^s}\right).\] The function $f(n)$ determining the RHS above is the $t$-iterated sum transform of the function $g(n)=\ell_R(R/\mathfrak{m}\dif{n}{K})$. It is then an elementary analysis fact that one has $\limsup_n \frac{g(n)}{\binom{n+d}{n}} \geq \limsup_n \frac{f(n)}{\binom{n+t+d}{n}}$. The proposition follows. \end{proof} We now turn to the case of complete local rings. We extend the definition of principal parts signature to this case. \begin{definition} Let $(R,\mathfrak{m},\Bbbk)$ be a complete local domain of dimension $d$ with coefficient field $K\cong \Bbbk$. We define the \emph{principal parts signature}\index{principal parts signature}\index{$\pps{K}(R)$} of $R$ over $K$ as \[\pps{K}(R):=\limsup_{n \rightarrow \infty} \frac{\mathrm{freerank}_R(\ModDif{n}{R}{K})}{\binom{n+d}{n}}.\] \end{definition} Using the second part of Proposition~\ref{freerank}, the same proof as Theorem~\ref{ThmDiffSigRanks} establishes the following. \begin{theorem}\label{ThmDiffSigRanksComplete} Let $(R,\mathfrak{m},\Bbbk)$ be a complete local domain with coefficient field $K\cong \Bbbk$. Then $\pps{K}(R) = \dm{K}(R)$. \end{theorem} We mostly work in the situation of algebras with pseudocoefficient fields and complete local rings henceforth, in which case we freely use Theorems~\ref{ThmDiffSigRanks}~and~\ref{ThmDiffSigRanksComplete} to move between the two definitions.
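The asymptotic bookkeeping used in the last two proofs can be checked numerically. The following Python sketch (our illustration, not part of the text) confirms that the normalizations $n^d/d!$ and $\binom{n+d}{d}$ are asymptotically interchangeable, and that one sum transform of $g(n)=\binom{n+d}{n}$ equals $\binom{n+d+1}{n}$, which is the shift of $d$ by $t$ behind the inequality above:

```python
from math import comb, factorial

d = 3

# The two normalizations agree asymptotically: n^d/d! ~ binom(n+d,d),
# so the ratio below tends to 1 as n grows.
for n in (10, 100, 10000):
    print(n, (n**d / factorial(d)) / comb(n + d, d))

# One sum transform of g(n) = binom(n+d,n) is binom(n+d+1,n)
# (the hockey-stick identity), matching the shift d -> d+1 per iteration.
def sum_transform(g, n):
    return sum(g(k) for k in range(n + 1))

assert all(sum_transform(lambda k: comb(k + d, k), n) == comb(n + d + 1, n)
           for n in range(50))
```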
\subsection{Basic properties of differential signature} We first note that the differential signature is also bounded by one. \begin{proposition}\label{leq-1} If $(R,\mathfrak{m},\Bbbk)$ is a domain with pseudocoefficient field $K$ and $\operatorname{Frac}(R) / K$ is separable, then ${\dm{K}(R)\leq 1}$. \end{proposition} \begin{proof} This follows immediately from Theorem~\ref{ThmDiffSigRanks}. \end{proof} Differential signature behaves well under completion. \begin{proposition}\label{diff-sig-completion} Let $(R,\mathfrak{m},\Bbbk)$ be a local algebra with coefficient field $K\cong \Bbbk$; we may identify $K$ with a coefficient field for $\widehat{R}$. Then $\dm{K}{(R)}=\dm{K}{(\widehat{R})}$. The same equality holds when one replaces limits superior with limits inferior in the definition of differential signature. \end{proposition} \begin{proof} This is immediate from Proposition~\ref{diff-powers-completion}. \end{proof} We now give a preparatory lemma to show that the differential signature detects $D$-simplicity. We note that this is one of the properties that $F$-signature has. \begin{lemma}\label{LemmaDimDiffPrimeNotDsimple} Let $(R,\mathfrak{m})$ be a reduced local {or graded} ring, and $A$ be a subring. If $R$ is not a simple $D_{R|A}$-module, then $\dim (R/{\mathcal P}_A)<\dim(R).$ \end{lemma} \begin{proof} If $R$ is a domain, the result follows from Corollary~\ref{CorDifPrimeDsimple}, because ${\mathcal P}_A\neq 0.$ We suppose that $R$ is not a domain. The zero ideal is radical but not prime by hypothesis. Then, by Proposition~\ref{PropMinimalPrime}, the minimal primes of $R$ are $D_{R|A}$-ideals. Thus, the sum $I$ of any set of minimal primes is a $D_{R|A}$-ideal, and for such an ideal $\dim (R/I)<\dim(R)$. Then, since ${\mathcal P}_A \supseteq I$ by Lemma~\ref{PropDiffPrime}, the inequality holds. \end{proof} The following result resembles one of the key features of $F$-signature in prime characteristic for $F$-pure rings.
However, {Theorem~\ref{ThmDifMultDsimple}} holds both in characteristic zero and in prime characteristic. \begin{theorem}\label{ThmDifMultDsimple} Let $(R,\mathfrak{m})$ be a reduced local {or graded} ring, and $A$ be a subring. If $\dm{A}(R)>0,$ then $R$ is a simple $D_{R|A}$-module. \end{theorem} \begin{proof} We set $d=\dim(R).$ We prove the equivalent statement: if $R$ is not a simple $D_{R|A}$-module, then $\dm{A}(R)=0$. Since $ \mathfrak{m}^n\subseteq \mathfrak{m}\dif{n}{A}$ and ${\mathcal P}_A\subseteq \mathfrak{m}\dif{n}{A}$ for every $n\in\mathbb{N},$ we have that \[ \begin{aligned} \dm{A}(R)=\limsup\limits_{n\to\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{A})}{n^d/d!} &=\limsup\limits_{n\to\infty}\frac{\lambda_R(R/({\mathcal P}_A+\mathfrak{m}\dif{n}{A}))}{n^d/d!} \\ &\leq \limsup\limits_{n\to\infty}\frac{\lambda_R(R/({\mathcal P}_A+\mathfrak{m}^n))}{n^d/d!}. \end{aligned} \] By Lemma~\ref{LemmaDimDiffPrimeNotDsimple}, we have that $\dim(R/{\mathcal P}_A)<d,$ and so $\displaystyle \limsup\limits_{n\to\infty}\frac{\lambda_R(R/({\mathcal P}_A+\mathfrak{m}^n))}{n^d/d!}$ is zero. Hence, $\dm{A}(R)=0$. \end{proof} We end this subsection by noticing that if one replaces the differential operators by Hasse-Schmidt differentials in the definition of differential signature, a great deal of information is lost. \begin{remark} Let $K$ be a field of characteristic zero, and $(R,\mathfrak{m})$ be an algebra with pseudocoefficient field $K$. One can define a ``Hasse-Schmidt signature'' $s^{HS}_{K}(R)$ by replacing the $n$-th differential power of $\mathfrak{m}$ with the set of elements of $\mathfrak{m}$ that are sent into $\mathfrak{m}$ by every product of at most $n-1$ derivations. If $R$ is not regular, by Remark~\ref{rem:der-simple}, there exists an ideal stable under the action of all derivations. By an argument analogous to Theorem~\ref{ThmDifMultDsimple}, $s^{HS}_{K}(R)=0$ for any such $R$.
\end{remark} \subsection{Some basic examples} \begin{example} The inequality {$\dm{K}(R)\leq 1$} in Proposition~\ref{leq-1} does not necessarily hold if $R$ is not a domain. For example, let $K$ be a field and $R=K[x]/(x^2)$. By Remark~\ref{rem-radicals-D-ideals}, we have $\mathfrak{m}\dif{n}{K}=(0)$ for $n>2$. Thus, $\dm{K}(R)=2$. The modules of principal parts $\ModDif{n}{R}{K}$ are free $R$-modules of rank 2 for all $n\geq 3$. \end{example} We now prepare to show that $\dm{K}(R)=1$ does not imply that $R$ is regular, even if $R$ is a complete domain. { \begin{lemma}\label{normalization-lemma} Let $(R,\mathfrak{m})\subseteq (S,\mathfrak{n})$ be local domains that are $K$-algebras, and suppose that $K$ is a coefficient field of each. If $R$ and $S$ have the same fraction field, and there is some differential operator $\alpha\in D_{S|K}$ such that $\alpha(S)\subseteq R$ and $\alpha(1)=1$, then $\dm{K}(R)\geq \dm{K}(S)$. \end{lemma} } \begin{proof} Let $\alpha\in D_{S|K}$ be as in the statement of the lemma, and suppose that $\alpha$ has order $\leq t$. If $x\in R\setminus \mathfrak{n} \dif{n}{K}$, then there is some $\delta\in D^{n-1}_{S|K}$ such that $\delta(x)=1$. Then, $\alpha\circ\delta$ restricted to $R$ is a differential operator in $D_{R|K}$ of order at most $t+n-1$, and $\alpha\circ\delta(x)=1$. Therefore, $x\notin \mathfrak{m}\dif{n+t}{K}$. Thus, $\mathfrak{m}\dif{n+t}{K}\subseteq R\cap \mathfrak{n}\dif{n}{K}$. 
Now, consider the short exact sequence \[ 0 \longrightarrow \frac{R}{R\cap \mathfrak{n}\dif{n}{K}} \longrightarrow \frac{S}{\mathfrak{n}\dif{n}{K}} \longrightarrow \frac{S}{R+\mathfrak{n}\dif{n}{K}} \longrightarrow 0.\] {As $S/R$ has dimension strictly less than that of $S$,} and since the image of $\mathfrak{m}^n$ is contained in the image of $\mathfrak{n}\dif{n}{K}$ in $S/R$, by comparison with the Hilbert function one has that \[\limsup_{n\rightarrow\infty}\frac{d!}{n^d}\; \lambda_R \left(\frac{S}{R+\mathfrak{n}\dif{n}{K}}\right)=0,\] so \[\dm{K}(S)=\limsup_{n\rightarrow\infty}\frac{d!}{n^d}\; \lambda_R \left( \frac{R}{R\cap \mathfrak{n}\dif{n}{K}} \right) \leq \limsup_{n\rightarrow\infty}\frac{d!}{n^d}\; \lambda_R \left( \frac{R}{\mathfrak{m}\dif{n+t}{K}} \right) = \dm{K}(R),\] where the last equality follows from shifting indices by $t$ and ${\limsup\limits_{n\to\infty} (n+t)^d / n^d =1}$. \end{proof} \begin{corollary} If $(R,\mathfrak{m})$ is a local domain with a perfect coefficient field $K$, the normalization $R'$ of $R$ is local and regular, and $R'/R$ has finite length, then $\dm{K}(R)=1$. \end{corollary} \begin{proof} {By Example \ref{reg-1} and Proposition~\ref{leq-1},} we have that $\dm{K}(R')=1$ and $\dm{K}(R)\leq 1$. It then suffices to show that $\dm{K}(R)\geq\dm{K}(R')$. By Lemma~\ref{normalization-lemma}, it suffices to show that there is some $\alpha\in D_{R'|K}$ sending $1$ to $1$ and with image in $R$. Under the hypotheses, $R'$ is differentially smooth over $K$ \cite[17.15.5]{EGAIV}. Given finitely many $K$-linearly independent elements $f_1,\dots,f_s$ of $R'$ and equally many elements $g_1,\dots,g_s$ of $R'$, there is some differential operator $\alpha$ such that $\alpha(f_i)=g_i$. In particular, if we choose $f_1,\dots,f_s$ whose images form a $K$-basis of $R'/R$ (so that $1,f_1,\dots,f_s$ are $K$-linearly independent), there is a differential operator that sends $1$ to $1$ and each $f_i$ to $0$, and hence sends $R'$ into $R$.
\end{proof} \begin{example}\label{example-equals-1} If $R=K\llbracket x^a \ | \ a \in \Theta \rrbracket \subseteq K\llbracket x \rrbracket$ for some numerical semigroup $\Theta\subseteq \mathbb{N}$ and some perfect field $K$, then $\dm{K}(R)=1$. One may also compute this example explicitly for $\Theta=\langle 2,3\rangle$ using the description of the differential operators on this ring found in the work of Smith \cite{PaulSmith} and Smith-Stafford \cite{SmithStafford}. \end{example} \begin{example} If $R=K\llbracket x^2,x^3,y,xy\rrbracket\subseteq R'=K\llbracket x,y\rrbracket$, the normalization of $R$ is $R'$, and the quotient has length one. We have that $\dm{K}(R)=1$. We note that, for $K=\mathbb{C}$, $D_{R|\mathbb{C}}$ is not a simple ring \cite[0.13.3]{LS}. Thus positivity of differential signature does not imply simplicity of the ring of differential operators. \end{example} We do not know examples of normal rings with differential signature equal to one that are not regular. We thus pose the following question. \begin{question} If $R$ is a normal domain with coefficient field $K$ and $\dm{K}(R)=1$, must $R$ be regular? \end{question} We do not know whether the differential signature exists as a limit rather than a limit superior. If the differential powers form a \emph{graded family}\index{graded family}, i.e., satisfy the containments $\mathfrak{m}\dif{a}{K}\mathfrak{m}\dif{b}{K}\subseteq \mathfrak{m}\dif{a+b}{K}$ for all $a,b\in \mathbb{N}$, and $R$ is reduced, then this follows from work of Cutkosky. Namely, as an immediate consequence of \cite[Theorem~1.1]{Cutkosky}, we have the following. \begin{proposition} If $(R,\mathfrak{m})$ is local, the differential powers of $\mathfrak{m}$ form a graded family, and the dimension of the nilradical of $R$ as a module is less than the dimension of $R$ (e.g., $R$ is reduced), then the differential signature of $R$ exists as a limit. 
\end{proposition} Alas, it is not always the case that differential powers form a graded family. \begin{example}\label{example-not-D-graded} Let $R=K\llbracket x^2, x^3 \rrbracket$. By the description of the differential operators on this ring found in the work of Smith \cite{PaulSmith} and Smith-Stafford \cite{SmithStafford}, one sees that $x^n \in \mathfrak{m}\dif{n+1}{K} \setminus \mathfrak{m}\dif{n+2}{K}$ for all $n>1$. In particular, $x^2 \in \mathfrak{m}\dif{3}{K}$, but $x^4=(x^2)^2 \notin \mathfrak{m}\dif{6}{K}$. Thus, $\{ \mathfrak{m}\dif{n}{K} \}_{n\in\mathbb{N}}$ does not form a graded family. \end{example} {We do not know examples of normal domains whose differential powers do not form a graded family. We therefore pose the following question.} {\begin{question} If $R$ is a normal domain with coefficient field $K$, do the differential powers $\{ \mathfrak{m}\dif{n}{K} \}_{n\in\mathbb{N}}$ form a graded family? \end{question} } {We give some positive results on convergence and rationality in Section~\ref{sec-duality}.} \subsection{Algorithmic aspects}\label{SubAlg} In this subsection we deal with the question of how the free ranks of the modules of principal parts and the differential powers can be computed algorithmically. Throughout we work with a family of polynomials $\{f_i \mid 1 \leq i \leq m\}$ in $K[x_1, \ldots , x_k]$ and with the Jacobi-Taylor matrices over the residue class ring $R=K[x_1, \ldots, x_k] /\left(f_1, \ldots, f_m\right)$ from Section~\ref{basics}, or localizations thereof. \begin{corollary} \label{JacobiTaylorunitaryoperators} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots , x_k]$ denote polynomials with residue class ring $R = K[x_1 , \ldots , x_k]/ \left( f_1 , \ldots , f_m \right)$. Then a differential operator on $R$ of order $\leq n $ is unitary if and only if the entries of the corresponding tuple (see Corollary~\ref{JacobiTayloroperators}) $\left( a_\lambda \right)$ in the kernel of the $n$-th Jacobi-Taylor matrix generate the unit ideal in $R$.
In the graded case this is true if and only if one $a_\lambda$ is a unit. \end{corollary} \begin{proof} A differential operator is unitary if and only if the corresponding linear form on $P^n_{R|K}$ is surjective by Lemma~\ref{unitary}. It is clear that a linear form on $P^n_{R|K}$ is surjective if and only if the corresponding tuple $\left( a_\lambda \right) $ defines a surjection, and this is the same as the $a_\lambda$ generating the unit ideal. \end{proof} Note that if $a_\nu$ is a unit, then $x^\nu$ is sent to a unit by the corresponding operator, since by Corollary~\ref{JacobiTayloroperators} \[ \sum_\lambda a_\lambda \frac{ 1 }{ \lambda! } \partial^\lambda \left( x^\nu \right) = a_\nu \frac{ 1 }{ \nu ! } \partial^\nu \left( x^\nu \right) + \sum_{\lambda < \nu} a_\lambda \frac{ 1 }{ \lambda! } \partial^\lambda \left( x^\nu \right) \in a_\nu + (x_1, \ldots, x_k) \, .\] In the graded case, the induced operator with values in $K$ depends only on those $a_\nu$ where $\nu$ has minimal degree. \begin{example} For $R=K[x,y,z]/(z^2-xy)$, the transposed second Jacobi-Taylor matrix is \[ \begin{blockarray}{ccccc} & 1 & a & b & c \\ \begin{block}{c[cccc]} 1 & 0 & 0 & 0 & 0 \\ a & -y & 0 & 0 & 0 \\ b & -x & 0 & 0 & 0 \\ c & 2z & 0 & 0 & 0 \\ a^2 & 0 & -y & 0 & 0 \\ ab & -1 & -x & -y & 0 \\ ac & 0 & 2z & 0 & -y \\ b^2 & 0 & 0 & -x & 0 \\ bc & 0 & 0 & 2z & -x \\ c^2 & 1 & 0 & 0 & 2z \\ \end{block} \end{blockarray} .\] From this we get the unitary differential operators $(1,0,0,0, \ldots, 0,0)$ (which always exists and corresponds to the identity) and \[ (0,1,0,0,4x,0,2z,0,0,y), \, (0,0,1,0,0,0,0,4y,2z,x),\, (0,0,0,1,0,4z,2x,0,2y,2z) . \] The first of these corresponds to $\partial_x +2x \partial_x \circ \partial_x + 2z \partial_x \circ \partial_z + y \frac{1}{2} \partial_z \circ \partial_z $ and sends $x$ to $1$.
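The claimed properties of this operator can be spot-checked with a computer algebra system. The following SymPy sketch (our illustration, with the divided-power normalization written out) verifies that the operator sends $x$ to $1$, kills $z^2-xy$, and maps several multiples of $z^2-xy$ back into the ideal, so that it descends to $R$:

```python
from sympy import Rational, diff, div, expand, symbols

x, y, z = symbols('x y z')
f = z**2 - x*y

# Operator corresponding to the kernel tuple (0,1,0,0,4x,0,2z,0,0,y):
# E = d_x + 2x d_x^2 + 2z d_x d_z + (y/2) d_z^2.
def E(g):
    return (diff(g, x) + 2*x*diff(g, x, 2)
            + 2*z*diff(g, x, z) + Rational(1, 2)*y*diff(g, z, 2))

assert E(x) == 1          # E is unitary: it sends x to a unit
assert expand(E(f)) == 0  # E annihilates the defining equation

# Spot check that E maps multiples of f back into (f).
for g in (x, y, z, x**2, x*y*z):
    quotient, remainder = div(expand(E(f*g)), f, x, y, z)
    assert remainder == 0
```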
\end{example} \begin{remark} \label{JacobiTayloralgorithms} For $R=K[x_1, \ldots, x_k]/\left(f_1, \ldots, f_m \right)$ we can compute the free rank of $P^n_{R|K}$ with the help of the Jacobi-Taylor matrix $J_n$. It is the maximal number $r$ of tuples $\left (a_{\lambda}\right)_i $, $i=1, \ldots , r$, inside the kernel of the Jacobi-Taylor matrix such that there exists $\left (c_{\lambda}\right)_i$, $i=1, \ldots , r$, fulfilling the orthogonality relations $a_i \cdot c_j= \delta_{ij}$, since this relation describes the surjectivity of the map from $P^n_{R|K}$ to $R^r$. For a localization $ ( S,\mathfrak{m}) $ of $R$, we interpret the short exact sequence from Remark~\ref{localunitaryoperators} as \[ 0 \longrightarrow \ker (J_n) \cap \mathfrak{m} \,\left(\bigoplus_{ \mondeg {\lambda} \leq n} Re_\lambda\right) \longrightarrow \ker (J_n) \longrightarrow Q_n \longrightarrow 0 .\] This provides a way to compute the free ranks of the modules of principal parts algorithmically. This also applies to the differential powers $ \mathfrak{m}\dif{n}{K}$. An element $h \in R$ belongs to $\mathfrak{m}\dif{n}{K}$ if and only if the following holds: For all elements $\left( a_\lambda \right) $ in the kernel of the Jacobi-Taylor matrix $J_{n-1}$ we have $\sum_{ \mondeg {\lambda} \leq n-1} \frac{1}{\lambda!} \partial^\lambda (h) a_\lambda \in \mathfrak{m}$. As the kernel is a finitely generated module, this is a finite test for one element $h$. In this test, we only have to consider a maximal unitary system for the $\left( a_\lambda \right) $. If we want to know for a fixed element $h$ whether there exists an $n$ such that ${h \notin \mathfrak{m}\dif{n}{K}}$, the situation is more complicated. For $n $ large enough the terms $$\sum_{ \mondeg {\lambda} \leq n-1} \frac{1}{\lambda!} \partial^\lambda (h) e_\lambda$$ do not change anymore.
However, the containment $\sum_{ \mondeg {\lambda} \leq n-1} \frac{1}{\lambda!} \partial^\lambda (h) a_\lambda \in \mathfrak{m}$ has to be checked for all kernel elements of all higher Jacobi-Taylor matrices. The computation of $\mathfrak{m}\dif{n}{K}$ for fixed $n$ is also more complicated. At least over a finite field this is possible. By Proposition~\ref{properties-diff-powers}~(ii) we know that $\mathfrak{m}^n \subseteq \mathfrak{m}\dif{n}{K}$, and since $R/\mathfrak{m}^n$ is finite we can check the containments for all $h$ separately. \end{remark} \begin{corollary} Let $R= \left( K[x_1, \ldots, x_k] /(f_1, \ldots , f_m)\right)_\mathfrak{p} $ be a local ring essentially of finite type. Then $R$ is $D$-simple if and only if for every $h \in R$, $h \neq 0$, there exists an element $(a_\lambda)$ in the kernel of some Jacobi-Taylor matrix such that $ \sum_{\lambda} \frac{1}{\lambda!} \partial^\lambda (h) a_\lambda $ is a unit. \end{corollary} \begin{proof} This follows from Corollaries~\ref{CorDifPrimeDsimple} and \ref{JacobiTayloroperators} in connection with Remark~\ref{JacobiTayloralgorithms}. \end{proof} The Jacobi-Taylor matrices are given by finitely many data, namely all partial derivatives of the defining functions. Therefore it is reasonable to expect that there are certain patterns in them that yield some finitistic results, to put it optimistically: finite determination of the differential signature, its rationality, and that the limsup is in fact a limit. A first result in this direction is the following. \begin{lemma} \label{JacobiTaylorperiodic} Let $f_1 , \ldots , f_m \in K[x_1 , \ldots , x_k]$ denote polynomials and let $\delta_j $ be the maximum of the exponents of $x_j$ in any monomial in any $f_i$.
Set \[\Lambda = \{\lambda \in \mathbb{N}^k \ | \ \lambda_j \geq \delta_j \text{ for all } j \}.\] Let $\sum_{\lambda \in \Lambda} a_\lambda C_\lambda=0$ (over the residue class ring) be a relation among the columns of a Jacobi-Taylor matrix for these data which involves only columns with indices from $\Lambda$. Then for all $\beta \in \mathbb{N}^k$ also $\sum_{\lambda \in \Lambda} a_\lambda C_{\lambda+ \beta}=0$, where the columns may refer to a sufficiently large Jacobi-Taylor matrix. \end{lemma} \begin{proof} The initial relation still holds after passing to a larger Jacobi-Taylor matrix. By induction it is enough to show the statement for $\beta=e_1$. We look at the row given by $(\mu',i)$. If $\mu'=\mu+e_1$ for some $\mu \in \mathbb{N}^k$, then \[ \begin{aligned} \sum_{\lambda \in \Lambda} a_\lambda C_{\lambda +e_1, (\mu',i) } & = \sum_{\lambda \in \Lambda} a_\lambda \frac{1}{ ( \lambda +e_1 - \mu' )!} \partial^{ \lambda +e_1 - \mu' } (f_i) \\ &= \sum_{\lambda \in \Lambda} a_\lambda \frac{1}{ ( \lambda - \mu )!} \partial^{ \lambda - \mu } (f_i) \\ &= 0 . \end{aligned} \] If $\mu'$ is not of this form, then the first component of $\mu'$ is $0$. In this case \[ \sum_{\lambda \in \Lambda} a_\lambda C_{\lambda +e_1, (\mu',i) } = \sum_{\lambda \in \Lambda} a_\lambda \frac{1}{ ( \lambda +e_1 - \mu' )!} \partial^{ \lambda +e_1 - \mu' } (f_i) =0 , \] since the first component is always $\lambda_1+1 > \delta_1$ and so these differential operators annihilate $f_i$. \end{proof} \subsection{The graded case}\label{graded case} We now consider the case of a standard-graded $K$-algebra $R$. In this setting, every differential operator has a decomposition into homogeneous differential operators, and the degree of a homogeneous operator $\delta$ is given as the difference $ \deg (\delta(f)) - \deg (f)$ for every homogeneous element $f$. For example, the degree of $x^\nu \partial^\lambda$ on the polynomial ring is $ \mondeg {\nu} - \mondeg {\lambda} $.
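As a quick illustration of this degree count (our own sketch, not from the text), one can check symbolically that the operator $x\,\partial_y^2$, with $\nu=(1,0)$ and $\lambda=(0,2)$, shifts the degree of every homogeneous polynomial by $\mondeg{\nu}-\mondeg{\lambda}=-1$:

```python
from sympy import Poly, diff, symbols

x, y = symbols('x y')
deg = lambda p: Poly(p, x, y).total_degree()

# delta = x * d_y^2 corresponds to nu = (1,0), lam = (0,2),
# so it is homogeneous of degree |nu| - |lam| = 1 - 2 = -1.
delta = lambda g: x * diff(g, y, 2)

for g in (x**2*y**3, x**3*y**2, y**4):
    assert deg(delta(g)) == deg(g) - 1
```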
A unitary homogeneous operator sending $f$ to $1$ has degree $- \deg (f)$, and the (non)existence of operators of certain negative degrees imposes strong conditions on the differential signature. \begin{lemma} \label{gradedsignature} Let $R$ be a standard-graded ring over $K$ of dimension $d$ and multiplicity $e$. Suppose that there exists $\alpha \in {\mathbb R}_{\geq 0}$ such that $(D^n_{R|K} )_\ell = 0 $ for all $\ell < - \alpha n $. Then $\dm{K}(R) \leq e \alpha^d $. \end{lemma} \begin{proof} We claim that \[ R_{ > \alpha n } \subseteq \mathfrak{m}^{\langle n+1 \rangle} =\{f \in R\;|\; \delta(f) \in \mathfrak{m} \text{ for all operators } \delta \text{ of order } \leq n \} . \] So let $f$ be a homogeneous element of degree $> \alpha n $. By assumption, every nonzero homogeneous operator $\delta$ of order $ \leq n$ has degree at least $ - \alpha n$. Therefore the degree of $ \delta (f) $ is $ > \alpha n - \alpha n = 0$ and so $\delta(f) \in \mathfrak{m}$. It follows that we have a surjection \[ R/ R_{ > \alpha n } = R_{\leq \lfloor \alpha n \rfloor } \longrightarrow R/ \mathfrak{m}^{\langle n+1 \rangle} . \] Hence asymptotically \[ \dim_K ( R/ \mathfrak{m}^{\langle n+1 \rangle} ) \leq \frac{ e}{d!} \alpha^d n^d \] and the result follows. \end{proof} Compare also the proof of Theorem~\ref{ThmDirSumPos} for a bound from below with a similar shape. \begin{corollary} \label{gradedsymdersignature} Let $R$ be a standard-graded ring over $K$ of dimension $d$ and multiplicity $e$. Suppose that there exists $\alpha \in {\mathbb R}_{\geq 0}$ such that $ \operatorname{Hom}_R (\operatorname{Sym} ^n (\Omega_{R|K}), R )_\ell = 0 $ for all $\ell < - \alpha n $. Then $\dm{K}(R) \leq e \alpha^d $. \end{corollary} \begin{proof} We have to show that the assumption implies that $(D^n_{R|K} )_\ell = 0 $ for all $\ell < - \alpha n $; then the result follows from Lemma~\ref{gradedsignature}. This we prove by induction on $n$.
For $n=0$ the statement is true anyway since there is no multiplication operator of negative degree on a standard-graded ring. For the induction step we look at the short exact sequence \[ 0 \longrightarrow (D^{n-1}_{R|K}) _\ell \longrightarrow (D^{n}_{R|K}) _\ell \longrightarrow \operatorname{Hom}_R (\operatorname{Sym} ^n (\Omega_{R|K}), R )_\ell \] which we will discuss in detail in Section~\ref{Comparison}. The homogeneity of the map on the left follows from the beginning of the proof of Theorem~\ref{compareoperatorcomposition}. So suppose that $E$ is an operator of order $\leq n$ and of degree $\ell < - \alpha n$. If $E$ has order $\leq n-1$, then it is $0$ by the induction hypothesis. Hence $E$ does not come from the left and maps to a nonzero element on the right, which contradicts the assumption. \end{proof} \begin{remark} \label{jacobitaylorhomogeneous} If $E$ is a homogeneous differential operator of degree $ \degoperator$ on a $\mathbb{Z}$-graded ring $R$ of finite type and given by a tuple $\left( a_\lambda \right)$ as in Corollary~\ref{JacobiTayloroperators}, then the $a_\lambda$ are homogeneous of degree $\deg (a_\lambda) = \degoperator + \mondeg {\lambda} $. The operator can be decomposed as \[E = \sum_{\mondeg {\lambda} = u } a_\lambda \frac{\partial^\lambda}{ \lambda! } + \sum_{ \mondeg {\lambda} = u+1 } a_\lambda \frac{\partial^\lambda}{ \lambda! } + \cdots + \sum_{\mondeg {\lambda} = v } a_\lambda \frac{\partial^\lambda}{ \lambda! } , \] where we suppose that the sums on the very left and on the very right are not $0$. The left sum alone determines the induced operator with values in $K$, and this induced operator can only be nonzero if $u=- \degoperator $. The number $v$ is the order of the operator, and by Lemma~\ref{JacobiTaylorsymmetric} this last sum determines the corresponding element in $\operatorname{Hom} (\operatorname{Sym} ^v (\Omega_{R|K}) , R)_\degoperator $.
\end{remark} If $R$ is a normal standard-graded domain over a field of characteristic $0$ and $U \subseteq \operatorname{Spec} R=X$ is smooth and contains all points of codimension one, then we have \[ \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R ) \cong \Gamma(U, \operatorname{Hom}_X(\operatorname{Sym} ^n(\Omega_{X|K}), {\mathcal O}_X ) ) \cong \Gamma(U, \operatorname{Sym} ^n (\operatorname{Der}_K {\mathcal{O}}_X)) . \] This holds in every degree. If $R$ has an isolated singularity, then we can take $U$ to be the punctured spectrum and we can compute $ \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R )_\ell$ on the smooth projective variety $\operatorname{Proj} R$. \begin{corollary} \label{gradedhypersurfacesymdersignature} Let $K$ be a field of characteristic $0$ and let $f \in K[x_1 , \ldots , x_{d+1}]$ ($d \geq 2$) be a homogeneous polynomial of degree $e$. Suppose that $R=K[x_1 , \ldots , x_{d+1}]/(f)$ has an isolated singularity and set $Y=\operatorname{Proj} R$. Suppose that there exists $\alpha \in {\mathbb R}_{\geq 0}$ such that $ \Gamma(Y, \operatorname{Sym} ^ n (\operatorname{Syz} (\partial_1 f, \ldots , \partial_{d+1} f) ) (m) ) = 0 $ for all $m < (e- \alpha ) n $. Then $\dm{K}(R) \leq e \alpha^d $. \end{corollary} \begin{proof} Since we have an isolated singularity we have on $Y$ short exact sequences of locally free sheaves of the form \[ 0 \longrightarrow \operatorname{Syz} (\partial_1 f, \ldots , \partial_{d+1} f) (m) \longrightarrow \bigoplus_{ d+1} {\mathcal O}_Y (m-e+1) \xrightarrow{\partial_1 f, \ldots , \partial_{d+1} f } {\mathcal O}_Y(m) \longrightarrow 0 \, \] for all twists $m$. On the right we have the Jacobi matrix.
Hence we get a correspondence \[ \operatorname{Der}_K(R)_{m-e} = \Gamma(Y, \operatorname{Syz} (\partial_1 f, \ldots , \partial_{d+1} f) (m ) ) \, \] because of the following: A global section of $ \operatorname{Syz} (\partial_1 f, \ldots , \partial_{d+1} f ) $ over $Y$ in total degree $m $ is a syzygy $(s_1, \ldots,s_{d+1})$ where the $s_i$ are homogeneous elements of degree $m -e+1$. This corresponds via $x_i \mapsto dx_i \mapsto s_i$ to a derivation on $R$ of degree $ m -e$. On a sheaf level we can write this as $ \widetilde {(\operatorname{Der}_KR)} ( m -e) \cong \operatorname{Syz} ( \partial_1 f, \ldots , \partial_{d+1} f )(m) $, where the tilde denotes taking the corresponding sheaf of a graded module. From this we get \[ \operatorname{Sym} ^ n \widetilde {(\operatorname{Der}_KR)} \cong \operatorname{Sym} ^ n ( \operatorname{Syz} ( \partial_1 f, \ldots , \partial_{d+1} f ) (e) ). \] We translate this back to the punctured cone to get \[ \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R )_{ m - n e} = \Gamma(Y, \operatorname{Sym} ^ n (\operatorname{Syz} ( \partial_1 f, \ldots , \partial_{d+1} f ) ) (m ) ), \] because $ \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R ) $ is reflexive. By assumption we know the nonexistence of nonzero global sections for the twists $m < (e- \alpha) n $. Hence we deduce $ \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R )_{ \ell } =0 $ for $\ell < - \alpha n $ and Corollary~\ref{gradedsymdersignature} gives the result.
\end{proof} \begin{remark} \label{Symsyzcomputation} The global sections of $\operatorname{Sym} ^ n ( \operatorname{Syz} (\partial_1 f, \ldots , \partial_{d+1} f )) $ and its twists can be computed with the short exact sequences {\small \[ 0 \to \operatorname{Sym} ^ n ( \operatorname{Syz} (\partial_1 f, \ldots , \partial_{d+1} f )) \to \operatorname{Sym} ^ n ( \bigoplus_{ d+1} {\mathcal{O}}_Y (-e+1) ) \to \operatorname{Sym} ^{ n -1} ( \bigoplus_{ d+1} {\mathcal{O}}_Y (-e+1) ) \to 0. \]} With the identifications $ \operatorname{Sym} ^ n ( \mathcal{O}_{Y}(-e+1)^{\oplus d+1} ) \cong \mathcal{O}_{Y}( n (-e+1))^{\oplus \binom{ n +d}{d} } $, the map on the right hand side is given as $e_\nu \mapsto \sum_j \partial_j f \, e_{\nu -e_j}$. A section in the symmetric power of the syzygy bundle (and its twists) is a tuple in the kernel of this map. The matrices describing these maps also appear as the submatrices $T_q^\text{tr}$ of the Jacobi-Taylor matrices $J_q$, see Remark~\ref{JacobiTaylorrelation} and Lemma~\ref{JacobiTaylorsymmetric}. \end{remark} \begin{example} Let $f$ be a homogeneous polynomial of degree $3$ in $3$ variables over an algebraically closed field $K$ of characteristic $0$ such that $Y=\operatorname{Proj} K[x,y,z]/(f)$ is an elliptic curve. Because of the Euler relation $x\partial_1f +y\partial_2f+z\partial_3f = 3f = 0$ on $Y$, the Euler derivation determines a short exact sequence \[0 \longrightarrow \mathcal{O}_Y \longrightarrow \operatorname{Syz} (\partial_1f,\partial_2f,\partial_3f) (3) \longrightarrow \mathcal{O}_Y \longrightarrow 0 . \] This does not split since the space of global sections in the middle has dimension $1$. Therefore the syzygy bundle is the bundle $F_2$ in Atiyah's classification of bundles on an elliptic curve \cite{atiyahelliptic}.
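As an aside, the Euler relation used here can be confirmed symbolically; for instance, for the Fermat cubic $f=x^3+y^3+z^3$ (our choice of a smooth cubic), a short SymPy check:

```python
from sympy import symbols, diff, expand

x, y, z = symbols('x y z')
f = x**3 + y**3 + z**3  # a smooth plane cubic, so Proj K[x,y,z]/(f) is elliptic

# Euler's identity: x f_x + y f_y + z f_z = (deg f) * f, hence = 0 modulo (f).
euler = x*diff(f, x) + y*diff(f, y) + z*diff(f, z)
assert expand(euler - 3*f) == 0
```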
Hence for the symmetric powers we get \[ \operatorname{Sym} ^ n ( \operatorname{Syz} (\partial_1f,\partial_2f,\partial_3f) ) \cong \operatorname{Sym} ^ n (F_2 (-3) ) \cong (\operatorname{Sym} ^ n (F_2))(-3n) \cong F_{n+1} (-3n) , \] where $F_{n+1}$ is again from Atiyah's classification, i.e., the $F_r$ are the (semistable) bundles of rank $r$ which are the unique nontrivial extensions of $F_{r-1}$ by $\mathcal{O}_Y$. For $m < 3 n $ the space of global sections $ \Gamma(Y, \operatorname{Sym} ^ n (\operatorname{Syz} (\partial_1 f, \partial_{2} f, \partial_3 f) ) (m) ) \cong \Gamma(Y, F_{n+1} ( m -3 n ) ) $ is zero. So this reproves known facts \cite{DiffNonNoeth} mentioned in Example~\ref{example-BGG}, and Corollary~\ref{gradedhypersurfacesymdersignature} with $e=3$ and $\alpha =0$ shows that the differential signature is $0$. \end{example} The following theorem shows that, in the graded case, positive differential signature is related to many other relevant notions for a singularity. \begin{theorem} \label{Possiganeg} Let $K$ be an algebraically closed field of characteristic zero and let $R$ be an $\mathbb{N}$-graded $K$-algebra of dimension at least two that is generated in degree one. Assume that $R$ is a Gorenstein ring and has an isolated singularity at the homogeneous maximal ideal. Suppose that $R$ has positive differential signature. Then, the $a$-invariant of $R$ is negative. \end{theorem} \begin{proof} Let $ X = \operatorname{Proj} R $ be the smooth projective variety corresponding to $R$, let ${\mathcal O}_X(1)$ be its very ample line bundle and let $\omega ={\mathcal O}_X( a ) $ be the canonical line bundle. For this interpretation of the $a$-invariant, see \cite[Section 3.6]{BrHe}. Assume that $a \geq 0$ and that the differential signature is positive. By Theorem~\ref{ThmDifMultDsimple}, the ring $R$ is simple as a module over the ring of differential operators. This implies that the tangent bundle $T_X$ is big \cite[Theorem 1.2]{Hsiaobigness}.
We have $\bigwedge^{\dim(X)} T_X = {\mathcal O}_X(-a)$. If $a \geq 0$, then $T_X$ is semistable with respect to ${\mathcal O}_X(1)$ \cite[Theorem 3.1]{Peternell}. Then also the restriction of $T_X$ to a generic (smooth) complete intersection curve $C$ of sufficiently high degree is semistable \cite{Flennerrestriction}, and its degree is still $\leq 0$. Then also its symmetric powers are semistable \cite[Theorem I.10.5]{ Hartshorneamplesubvarieties} and of nonpositive degree. On the other hand, since $T_X$ is big, its restriction to $C$ is also big \cite[Corollary 2.2.11]{LazBook1}. But this contradicts the Riemann-Roch theorem for curves. \end{proof} This also means that the smooth projective variety corresponding to an isolated graded Gorenstein singularity with positive differential signature is a Fano variety and has in particular negative Kodaira dimension \cite[Definition V.1.1]{Kollarrationalcurves}. It follows for example that a graded hypersurface $R=K[x_1, \ldots , x_n]/(f)$ with an isolated singularity and positive differential signature must have degree $\operatorname{deg} (f) \leq \operatorname{dim} (R)$. We conjecture, in analogy with the relations in positive characteristic between strong $F$-regularity, positive $F$-signature, and negative $a$-invariant (\cite{Harainjectivity}, \cite[Section 5.3]{Hararational}, \cite[Theorem~0.2]{AL}), that the converse is true, but the first open case is already that of cubics in four variables. The following corollary uses singularity notions from the minimal model program (see \cite{kollarsingularities}) and $F$-singularities, which are explained in the next section. \begin{corollary}\label{CorRedCharP} Let $K$ be an algebraically closed field of characteristic zero and let $R$ be a standard-graded normal $K$-domain with an isolated singularity that is a Gorenstein ring. Suppose that $R$ has positive differential signature. Then $R$ has a rational singularity, it is log-terminal, and it is of strongly $F$-regular type.
\end{corollary} \begin{proof} The rationality of the singularity follows from work of Flenner and Watanabe \cite{Flennerrational, Watanaberational}, which says that under the hypotheses a negative $a$-invariant implies rationality. A Gorenstein rational singularity is also log-terminal. Then, the rationality implies that $R$ has $F$-rational type \cite[Theorem~1.1]{HaraSing}. Under the Gorenstein condition this means that $R$ has strongly $F$-regular type, which again implies that $R$ is log-terminal \cite[Theorem~5.2]{HaraSing}. \end{proof} We discuss the notions mentioned in Corollary \ref{CorRedCharP} and other connections to $F$-singularities in the next section. \section{Differential signature in prime characteristic}\label{five} In this section we focus on positive characteristic. In particular, we compare the $F$-signature and differential signature in the case where both invariants can be defined. \subsection{Differential Frobenius powers} \begin{setup} Unless otherwise specified, in this section $R$ denotes an $F$-finite Noetherian ring of prime characteristic $p>0$. \end{setup} \begin{definition} Let $R$ be a Noetherian ring of prime characteristic $p>0$. \begin{enumerate} \item[(i)] We note that $R$ acquires an $R$-module structure by restriction of scalars via the $e$-th iteration of the Frobenius map, $F^e$. We denote $R$ with this module structure by $F^e_* R$. {To make explicit which structure is considered, we write $F^e_* f$ for an element of $F^e_* R$.} \item[(ii)] We say that $R$ is $F$-finite if $F^e_* R$ is a finitely generated $R$-module. \item[(iii)] {If $R$ is $F$-finite, we say that $R$ is \textit{$F$-pure} if the natural map $R\to F^1_* R$ splits.} \item[(iv)] If $R$ is a domain, we say that $R$ is \textit{strongly $F$-regular} if for every $r\in R$, $r \neq 0$, there exists $e\in\mathbb{N}$ such that the map $\varphi:R\to F^e_* R$ defined by $1\mapsto F^e_* r$ splits.
\item[(v)] We denote {$\operatorname{End}_{R^{p^e}}(R)$} by $D^{(e)}_R$.\index{$D^{(e)}_R$} \item[(vi)] An additive map $\psi:R\to R$ is a \textit{$p^{e}$-linear map} if $\psi(r f)=r^{p^e}\psi(f).$ Let $\mathcal{F}^e_R$ be the set of all the $p^{e}$-linear maps. \item[(vii)] An additive map $\phi:R\to R$ is a \textit{$p^{-e}$-linear map} if $\phi(r^{p^e} f)=r\phi(f).$ Let ${\mathcal C}^e_R$\index{${\mathcal C}^e_R$} be the set of all the $p^{-e}$-linear maps. \end{enumerate} \end{definition} \begin{remark}\label{RemCartFrobDmod} Let $R$ be a reduced $F$-finite ring. We note that $$ \mathcal{F}^e_R \cong \operatorname{Hom}_R(R, F^e_* R),\; {\mathcal C}^e_R \cong \operatorname{Hom}_R(F^e_* R,R),\; \text{and} \; D^{(e)}_R\cong \operatorname{Hom}_R(F^e_* R,F^e_* R). $$ If $R$ is $F$-pure and $\pi: F^e_* R\to R$ is a splitting of the inclusion, then the map that sends $\phi\in \operatorname{Hom}_R(F^e_* R,F^e_* R)$ to $\pi\circ\phi\in \operatorname{Hom}_R(F^e_* R,R)$ is a surjection. Furthermore, this surjection splits. \end{remark} We now recall a definition of $F$-signature. \begin{definition}[{\cite{SmithVDB,HLMCM,TuckerFSig,WatanabeYoshida}}] Let $(R,\mathfrak{m},K)$ be either a local ring or a standard graded $K$-algebra {of dimension $d$}. Suppose that $R$ is $F$-finite. Let $a_e$ denote the largest rank of an $R$-free direct summand of $F^e_* R$, and {$\alpha=\log_p[K:K^p]$}. Note that $a_e=0$ for all $e$ if $R$ is not $F$-pure. The \textit{$F$-signature} is defined by $$s(R)=\lim\limits_{e\to\infty}\frac{a_e}{p^{d(e+\alpha)}}.$$\index{$s(R)$} \end{definition} \begin{remark} [\cite{AE,YaoObsFsig}] Let $(R,\mathfrak{m},K)$ be an $F$-finite $F$-pure ring {of dimension~$d$}. We define \[ I_e=\{r\in R\;|\; \varphi(r)\in\mathfrak{m}\;\; \forall \varphi\in{\mathcal C}^e_R \}. \]\index{$I_e$} Then, if $R$ is $F$-finite, one has the equality $s(R)=\lim_{e\rightarrow\infty} \lambda(R/I_e)/p^{ed}$.
In general, if $R$ is not $F$-finite, we define the $F$-signature by this formula. \end{remark} We recall a well-known description of the differential operators in prime characteristic. We include Part~(i) for comparison with Lemma~\ref{I-diff-ops}. \begin{proposition}\label{RemYek} Let $(R,\mathfrak{m},K)$ be an $F$-finite local ring of prime characteristic $p$. \begin{enumerate} \item[(i)]\label{De-Delta-bracket} $D^{(e)}_{R|\mathbb{Z}}$ is the set of $\Delta_{R|\mathbb{Z}}^{[p^e]}$-differential operators of $R$.\index{$D^{(e)}_{R}$} \item[(ii)] $D_{R|\mathbb{Z}}=\bigcup_{e\in\mathbb{N}} D^{(e)}_R$. \item[(iii)] Set $\mu=\dim_{K^p} (R/\mathfrak{m}^{[p]})$. Then $D^{p^e-1}_{R|\mathbb{Z}} \subseteq D^{(e)}\subseteq D^{\mu(p^{e}-1)}_{R|\mathbb{Z}}$. \item[(iv)] Suppose that $R$ is the localization of an algebra of finite type and let $\mu$ denote its global embedding dimension. Then $ D^{p^e-1} \subseteq D^{(e)} \subseteq D^{\mu ( p^e -1)} $. \end{enumerate} \end{proposition} \begin{proof} For (i), (ii), (iii), we refer to previous work {\cite[Lemma~1.4.8,~Theorem~1.4.9]{Ye}} {(see also \cite[Subsection~2.5]{SmithVDB})}. For (iv), we write $R=S_{\mathfrak m}$ with $S=K[X_1, \ldots, X_\mu]/ {\mathfrak a} $. The ideal $\Delta$ in $ S \otimes_K S $ is generated by $ (X_1-Y_1, ... ,X_ \mu-Y_\mu)$ and thus has $\mu$ generators. This is also true for $R$. As for any ideal in positive characteristic we have the containments \[ \Delta^{ \mu (p^e-1)+1} \subseteq \Delta^{[p^e]} \subseteq \Delta^{p^e} .\] By looking at the $R$-linear forms on $R \otimes_K R$ modulo these powers we get the inclusions \[ D^{p^e-1} \subseteq D^{(e)} \subseteq D^{\mu (p^e -1)} . \] \end{proof} \begin{definition} Let $R$ be an $F$-finite ring of characteristic $p>0$. Let $I$ be an ideal of $R$, and $e$ be a positive integer.
We define the \textit{differential Frobenius powers} of $I$ by \[ I^{\Fdif{p^e}}= \{f\in R \, | \, \delta(f)\in I \hbox{ for all } \delta\in \operatorname{End}_{R^{p^e}}(R)\}.\]\index{$I^{\Fdif{n}}$} \end{definition} This notion enjoys many of the nice properties that differential powers enjoy. For example: \begin{lemma}\label{properties-Fdiff-powers} Let $R$ be a ring of prime characteristic $p>0$ and $I,J_\alpha\subseteq R$ be ideals. \begin{itemize} \item[(i)] $I^\Fdif{p^e}$ is an ideal. \item[(ii)] $I^{[p^e]}\subseteq I^\Fdif{p^e}.$ \item[(iii)] $\left(\bigcap_{\alpha}J_\alpha\right)^\Fdif{p^e} =\bigcap_{\alpha }(J_\alpha)^\Fdif{p^e}.$ \item[(iv)] If $I$ is $\mathfrak{p}$-primary, then $I^\Fdif{p^e}$ is also $\mathfrak{p}$-primary. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[(i)] This follows from the observation that if $\delta\in D^{(e)}$, then $\delta\circ f\in D^{(e)}$ for any $f\in R$. \item[(ii)] This is immediate from $I^{[q]}\subseteq (I\cap R^q)R$, where $q=p^e$, and the previous part. \item[(iii)] This follows from the definition. \item[(iv)] This is analogous to the proof for differential powers {\cite[Proposition 2.6]{SurveySP}.} \qedhere \end{itemize} \end{proof} We also have the following analogue of Lemma~\ref{diff-localize2}. \begin{lemma}\label{Fdiff-localize} Let $W$ be a multiplicative set in $R$ and $I$ an ideal. Suppose that $R$ is $F$-finite. Then $I^\Fdif{p^e} (W^{-1}R) = (W^{-1} I)^\Fdif{p^e}$. \end{lemma} \begin{proof} Since $R$ is $F$-finite, we have that \[D^{(e)}_{W^{-1}R}=\operatorname{Hom}_{W^{-1}R^{p^e}}(W^{-1}R, W^{-1}R)=W^{-1}\operatorname{Hom}_{R^{p^e}}(R, R)=W^{-1} D^{(e)}_R\] via the natural map. We first show $I^\Fdif{p^e} (W^{-1}R) \subseteq (W^{-1} I)^\Fdif{p^e}$. Let {$r\in I^\Fdif{p^e}$}, $w\in W$, and $\delta\in D^{(e)}_{W^{-1}R}$. Write $\delta=\frac{1}{v} \cdot \eta$, with $v\in W$ and $\eta\in D^{(e)}_R$. Then $\delta(\frac{r}{w})=\frac{\eta(rw^{p^e-1})}{vw^{p^e}}$.
Because $I^\Fdif{p^e}$ is an ideal containing $r$, we have $\eta(rw^{p^e-1})\in I$, so $\delta(\frac{r}{w})\in W^{-1}I$. Since $\delta$ was arbitrary, $\frac{r}{w}\in (W^{-1} I)^\Fdif{p^e}$. We now focus on the other containment. Let $\frac{r}{w}$ lie in $ (W^{-1} I)^\Fdif{p^e}$, fix $v\in W$ such that $v(W^{-1}I \cap R)\subseteq I$, and take some $\delta \in D^{(e)}_R$. We have that $\delta(w^{p^e-1} r) = w^{p^e} \delta(\frac{r}{w}) \in W^{-1}I \cap R$. Then, $\delta(v^{p^e}w^{p^e-1} r)=v^{p^e} \delta(w^{p^e-1} r) \in I$. Since $\delta$ was arbitrary, $v^{p^e} w^{p^e-1} r \in I^\Fdif{p^e}$. Thus, $\frac{r}{w}\in I^\Fdif{p^e} W^{-1}R$, as required. \end{proof} We now give a result that is a key ingredient to compare both signatures. \begin{proposition}\label{PropDvsCartIdeals} Let $(R,\mathfrak{m},K)$ be an $F$-finite $F$-pure local ring. Then, $\mathfrak{m}^{\Fdif{p^e}}=I_e$. \end{proposition} \begin{proof} We show the equivalent statement $R\setminus \mathfrak{m}^{\Fdif{p^e}}=R\setminus I_e$. Let $f\not\in \mathfrak{m}^{\Fdif{p^e}}.$ Then, there exists $\delta\in D^{(e)}$ such that $\delta(f)=1.$ By Remark~\ref{RemCartFrobDmod}, there exists a map $\tilde{\delta}\in\operatorname{Hom}_R(F^e_*R,F^e_*R)$ such that $\tilde{\delta}(F^e_*f)=1.$ Let $\beta:F^e_*R\to R$ be a splitting. Then, $\beta(\tilde{\delta}(F^e_*f))=1$. Since $\beta\circ \tilde{\delta}\in \operatorname{Hom}_R(F^e_*R,R)$, there exists {$\phi\in \mathcal{C}^e_R$} such that $\phi(f)=1$ by Remark~\ref{RemCartFrobDmod}. Hence, $f\not\in I_e.$ {Let $f\not\in I_e$. Then, there exists $\phi\in {\mathcal C}^{e}_R$ such that $\phi(f)=1.$ By Remark~\ref{RemCartFrobDmod}, there exists a map $\tilde{\phi}\in\operatorname{Hom}_R(F^e_*R,R)$ such that $\tilde{\phi}(F^e_* f)=1.$ Let $\iota:R\to F^e_*R$ be the inclusion. Then, $\iota(\tilde{\phi}(F^e_* f))=1$. Since $\iota\circ \tilde{\phi}\in \operatorname{Hom}_R(F^e_*R,F^e_* R)$, there exists $\delta\in D^{(e)}_R$ such that $\delta(f)=1$ by Remark~\ref{RemCartFrobDmod}.
Hence, $f\not\in \mathfrak{m}^{\Fdif{p^e}}.$ } \end{proof} \subsection{Differential signature and $F$-signature} Using Proposition~\ref{PropDvsCartIdeals}, we observe that the $F$-signature can be defined in terms of differential Frobenius powers. \begin{corollary}[{see~\cite[Corollary 2.8]{AE}}]\label{RemFsigDifPowers} Suppose that $(R,\mathfrak{m},K)$ is an $F$-finite $F$-pure {local} ring. Let $d=\dim(R)$. Then, \[s(R)=\lim_{e\rightarrow\infty} \frac{\lambda(R/\mathfrak{m}^{\Fdif{p^e}})}{p^{ed}}.\] \end{corollary} \begin{proof} This follows from the fact that $\mathfrak{m}^{\Fdif{p^e}}=I_e$ for every $e\in\mathbb{N}$ by Proposition~\ref{PropDvsCartIdeals}. \end{proof} \begin{remark}\label{analogy} Let $K$ be a perfect field of positive characteristic, and $(R,\mathfrak{m})$ be an algebra with pseudocoefficient field $K$. Set $\ModDif{[p^e]}{R}{K}=(R\otimes_K R)/\Delta^{[p^e]}_{R|K}$. By the same argument as in Proposition~\ref{freerank}, using Proposition~\ref{RemYek}~(i) and Proposition~\ref{representing-differential}, one can show that $\lambda_R(R/\mathfrak{m}^{\Fdif{p^e}})=\mathrm{freerank}_R(\ModDif{[p^e]}{R}{K})$. Thus, if $R$ is $F$-pure, \[s(R)=\lim_{e\rightarrow\infty} \frac{\mathrm{freerank}_R\big(\ModDif{[p^e]}{R}{K}\big)}{p^{ed}}.\] We note also that \[\mu_R\big(\ModDif{[p^e]}{R}{K}\big)=\lambda_R\left( \frac{R/\mathfrak{m} \otimes_K R}{\Delta^{[p^e]}_{R|K}}\right)=\lambda_R \big(R/\mathfrak{m}^{[p^e]}\big),\] so that $e_{HK}(R)=\lim_{e\rightarrow\infty} \mu_R\big(\ModDif{[p^e]}{R}{K}\big)/p^{ed}$. For comparison, \[\mu_R\big(\ModDif{n}{R}{K}\big) =\lambda_R\left( \frac{R/\mathfrak{m} \otimes_K R}{\Delta^{n+1}_{R|K}}\right)=\lambda_R \big(R/\mathfrak{m}^{n+1}\big),\] where the right hand equation comes from \cite[Corollaire~16.4.12]{EGAIV}, so that $e(R)= d! \lim_{n\rightarrow\infty} \mu_R\big(\ModDif{n}{R}{K}\big)/n^d$.
This motivates the analogy that the differential signature is to the $F$-signature as the Hilbert-Samuel multiplicity is to the Hilbert-Kunz multiplicity. The function $\mu_R\big(\ModDif{n}{R}{K}\big)$ is studied by Kunz~\cite{KunzDiff} under the name of \emph{differential Hilbert series}, without the assumption that $R$ is an algebra with a pseudocoefficient field. \end{remark} \begin{remark} Continuing with the previous remark, one may speculate what the analogues of the Hilbert-Kunz function for an $\mathfrak{m}$-primary ideal $I$ or an Artinian $R$-module $M$, and of tight closure, might be in the setting of principal parts. Since the Hilbert-Kunz numerator function is given as $e \mapsto \lambda_R ( R/I^{[p^e]})= \lambda_R (R/I \otimes_R {}^e R)$, the analogous function is $n \mapsto \lambda_R\big( M \otimes_R \ModDif{n}{R}{K} \big) $ for an Artinian $R$-module $M$. Tight closure can be reduced to the tight closure of $0$ in an Artinian module $M$, by declaring $v \in 0^*$ if the normalized Hilbert-Kunz functions (divided by $p^{e d}$) of $M$ and of $M/(v)$ agree asymptotically (see \cite[Theorem 5.4]{CraigBookTC}). Hence the condition that $ \frac{\lambda_R\big( M \otimes_R \ModDif{n}{R}{K} \big) }{n^d} $ and $ \frac{\lambda_R\big( M/(v) \otimes_R \ModDif{n}{R}{K} \big) }{n^d} $ coincide asymptotically defines a closure operation. If the differential signature is positive, then the substantial free part of $\ModDif{n}{R}{K} $ should imply that this closure is trivial for such rings. \end{remark} \begin{lemma} \label{LemmaCofinalDif} {Let $(R,\mathfrak{m},K)$ be a ring of positive characteristic $p$, and let $\mu$ be one of the numbers of Proposition~\ref{RemYek} (iii), (iv).} Then, for any ideal $I \subseteq R$ and any $e>0$ one has the containments { \[ I\dif{\mu (p^e-1)}{\mathbb{Z}} \subseteq I^{\Fdif{p^e}} \subseteq I\dif{p^e-1}{\mathbb{Z}}.
\] } \end{lemma} \begin{proof} This follows directly from the definition of $\mathbb{Z}$-linear differential powers, differential Frobenius powers, and Proposition~\ref{RemYek}. \end{proof} We are ready to compare both signatures. In particular, we obtain that, in the $F$-pure case, one is positive if and only if the other is. As a consequence, we have that the differential signature also detects strong $F$-regularity when the ring is $F$-pure. \begin{lemma}\label{LemmaCompDifMultFsig} Let $(R,\mathfrak{m},K)$ be a local $F$-pure ring of prime characteristic $p$ and let $\mu$ be one of the numbers of Proposition \ref{RemYek} (iii), (iv). Then, $$ s(R)\leq \frac{\mu^d}{d!} \dm{\mathbb{Z}}(R). $$ \end{lemma} \begin{proof} By Lemma~\ref{LemmaCofinalDif}, we have {$\mathfrak{m}\dif{\mu(p^e-1)}{\mathbb{Z}} \subseteq \mathfrak{m}^{\Fdif{p^e}}$}. As a consequence, $$ \lambda_R(R/ \mathfrak{m}^{\Fdif{p^e}})\leq \lambda_R(R/\mathfrak{m}\dif{\mu(p^e-1)}{\mathbb{Z}} ). $$ Dividing by $p^{e\dim(R)}$ and passing to the limit, we obtain that $ s(R)\leq \frac{\mu^d}{d!} \dm{\mathbb{Z}}(R).$ \end{proof} \begin{remark} Under the same hypotheses, a similar argument yields the inequality $\liminf\limits_{n\to\infty} \lambda_R(R/\mathfrak{m}\dif{n}{\mathbb{F}_p})/n^d \leq s(R).$ In particular, when the sequence defining the differential signature converges, we have that {$\frac{1}{d!} \dm{\mathbb{Z}}(R)\leq s(R)$}. \end{remark} \begin{theorem}\label{ThmFregPos} Let $(R,\mathfrak{m})$ be an $F$-finite $F$-pure local ring of prime characteristic $p$. Then, $\dm{\mathbb{Z}}(R)>0$ if and only if $R$ is a strongly $F$-regular ring. \end{theorem} \begin{proof} We first assume that $\dm{\mathbb{Z}}(R)>0$. Since every $F$-pure ring is reduced, we can apply Theorem~\ref{ThmDifMultDsimple}. We conclude that $R$ is $D_{R|\mathbb{Z}}$-simple. Then, $R$ is strongly $F$-regular \cite[Theorem~2.2]{DModFSplit}. We now assume that $R$ is strongly $F$-regular. Then, $s(R)>0$ \cite[Theorem 0.2]{AL}.
Then, the claim follows from Lemma~\ref{LemmaCompDifMultFsig}. \end{proof} The previous theorem does not hold without the assumption that the ring is $F$-pure; see Example~\ref{palmostfermat} below. \begin{remark} We point out that if $(R,\mathfrak{m},K)$ is a complete local ring with $K$ perfect, then $D_{R|\mathbb{Z}}=D_{R|K}$, and the relation in Lemma~\ref{LemmaCofinalDif} still holds when $\mathbb{Z}$ is replaced by $K$ \cite{Ye}. As a consequence, we can replace $\dm{\mathbb{Z}}(R)$ by $\dm{K}(R)$ in the previous theorem. \end{remark} \begin{proposition} \label{sandwichpositive} Let $(R,\mathfrak{m},K)$ be a local ring. Let $(S,\mathfrak{n},K)$ be a regular local ring of positive characteristic $p$ and let $\iota: S\subseteq R $ be a module-finite extension of local rings. Suppose that there is an embedding $\rho:R \subseteq S^{1/p^t}$ of local rings such that the composition $\rho\circ \iota: S \to S^{1/p^t} $ is the inclusion. Then, $\dm{\mathbb{Z}}(R)>0.$ \end{proposition} \begin{proof} Let $d=\dim(R)=\dim(S)$. Let $f \in R\setminus \{0\}$ be such that $f^{p^t}\not\in \mathfrak{n}^{{\Fdif{p^e}}}$. Then, there exists $\delta\in \operatorname{Hom}_{S^{p^e}} (S,S)$ such that $\delta(f^{p^t})=1$. By extracting $p^t$-th roots, we get $\widetilde{\delta}\in \operatorname{Hom}_{S^{p^{e-t}}} (S^{1/p^t},S^{1/p^t})$ such that $\widetilde{\delta}(f)=1$. Let $\beta: S^{1/p^t}\to S$ be an $S$-linear splitting. Observe that $R^{p^e} \subseteq R \cap S^{p^{e-t}} \cap S$. Thus, $\beta,\iota,\rho$, and $\widetilde{\delta}$ are all $R^{p^e}$-linear, hence the map $\iota\circ\beta \circ \widetilde{\delta} \circ \rho: R\to R$ is as well. Since this composition sends $f$ to $1$, we conclude that $f\not\in \mathfrak{m}^{{\Fdif{p^e}}}$.
Thus, \[\mathfrak{m}^{{\Fdif{p^e}}} \subseteq (\mathfrak{n}^{{\Fdif{p^e}}})^{1/p^t} \cap R = (\mathfrak{n}^{1/p^t})^{[p^e]} \cap R.\] It follows that there is a surjection \[R/\mathfrak{m}^{\Fdif{p^e}} \twoheadrightarrow R/((\mathfrak{n}^{1/p^t})^{[p^e]}\cap R),\] and hence $\lambda_R(R/\mathfrak{m}^{\Fdif{p^e}}) \geq \lambda_R(R/((\mathfrak{n}^{1/p^t})^{[p^e]}\cap R))$. Also, there is an injective map \[S/((\mathfrak{n}^{1/p^t})^{[p^e]}\cap S)\hookrightarrow R/((\mathfrak{n}^{1/p^t})^{[p^e]}\cap R),\] hence the inequality $\lambda_S(S/((\mathfrak{n}^{1/p^t})^{[p^e]}\cap S)) \leq \lambda_R(R/((\mathfrak{n}^{1/p^t})^{[p^e]}\cap R))$: we have used that the $R$-length of the latter module equals its $S$-length, since the residue fields agree. Now, $(\mathfrak{n}^{1/p^t})^{[p^e]}\cap S = \mathfrak{n}^{[p^{e-t}]}$ for $e\geq t$, so we obtain the inequality \[\dm{\mathbb{Z}}(R)=\limsup_{e\to\infty}\frac{\lambda_R( R/ \mathfrak{m}^{\Fdif{p^{e}}})}{p^{ed}}\geq \limsup_{e\to\infty}\frac{\lambda_S(S/ \mathfrak{n}^{[p^{e-t}]})}{p^{ed}}=\limsup_{e\to\infty}\frac{p^{(e-t)d}}{p^{ed}}=\frac{1}{p^{td}}>0.\] \end{proof} \subsection{Characteristic zero and reduction to prime characteristic}\label{SubSecrelative} We want to compare the differential signature of an algebra over a field of characteristic zero with the differential signature of its reductions modulo a prime number, and hence also with the $F$-signature of the reductions. We fix the following situation. \begin{setup} \label{relativesetup} Let $K$ be a field of characteristic zero, $X$ a scheme of finite type over $K$, and $x$ a $K$-point in $X$. There exists a finitely generated $\mathbb{Z}$-subalgebra $A\subseteq K$ together with a scheme $X_A$ of finite type over $A$ and an $A$-point $x_A$ of $ X_A$ such that $(X_A,x_A)\times_{\operatorname{Spec} A} K \cong (X,x)$. We can assume that $X_A$ and $x_A$ are flat over $\operatorname{Spec} A$ by generic freeness.
For a closed point $ s \in \operatorname{Spec} A$ let $X_s$ denote the fiber of $X_A$ over $s$, which is a scheme of finite type over the residue field $\kappa(s)$ of $s$. We observe that $\kappa(s)$ is finite, and so $\kappa(s)$ and $X_s$ are $F$-finite. We take $x_s \in X_s$ to be the fiber of $x \in X$ over $s$. Under these conditions we say that {$(X_A,x_A)$ is a \emph{model}} of $x\in X$. \index{model} { If $X=\operatorname{Spec} R$, then $X_A$ and $X_s$ are affine. We denote the corresponding rings by $R_A$ and $R_{\kappa(s)}$ (or just $R_s$).} Moreover, $Q$ denotes the quotient field of $A$ and $R_Q$ the ring over $Q$. If $R$ is the localization of an algebra of finite type, then we can also find $A$ and a prime ideal $\mathfrak{m}_A$ of $R_A$ which extends to the maximal ideal of $R$. \end{setup} We want to compare the free ranks of $P^n_{R_K|K} $, $P^n_{R_Q|Q} $, $P^n_{R_A|A} $, and $P^n_{R_{\kappa(s)} |\kappa(s)} $, and the differential signatures of these algebras. The basis for such a comparison is the fact that $P^n_{R'|A'} \cong P^n_{R|A} \otimes_R R' $ for any base change $A \rightarrow A'$ and $R'=R \otimes_AA'$ \cite[Proposition~16.4.5]{EGAIV}. The following lemma implies that we can choose $A$ such that the free rank of $P^n_{R_K|K} $ equals the free rank of $P^n_{R_Q|Q} $. \begin{lemma} \label{freerankflatextend} Let $(R,\mathfrak{m},\Bbbk)$ and $(R',\mathfrak{m}',\Bbbk')$ be local rings and let $R \rightarrow R'$ be a flat ring homomorphism such that $\mathfrak{m} R'=\mathfrak{m}'$. Let $M$ be a finitely generated $R$-module and $M'=M \otimes_RR'$. Then \[\mathrm{freerank}_R M = \mathrm{freerank}_{R'}M' .
\] \end{lemma} \begin{proof} We look at the short exact sequence \[0 \longrightarrow \operatorname{Hom}_R(M,\mathfrak{m}) \longrightarrow \operatorname{Hom}_R(M,R) \longrightarrow N \longrightarrow 0 , \] where the {$\Bbbk$-dimension} of $N$ gives the free rank of $M$ by {Proposition~\ref{freerank-conditions}~(2).} We tensor with $R'$ and get by flatness \[0 \longrightarrow \operatorname{Hom}_R(M,\mathfrak{m}) \otimes_RR' \longrightarrow \operatorname{Hom}_R(M,R) \otimes_RR' \longrightarrow N \otimes_RR' \longrightarrow 0 . \] By the assumptions we have $\operatorname{Hom}_R(M,R) \otimes_RR' \cong \operatorname{Hom}_{R'} (M', R' ) $ \cite[Proposition 2.10]{Eisenbud} and \[ \operatorname{Hom}_R(M,\mathfrak{m}) \otimes_RR' \cong \operatorname{Hom}_R (M', \mathfrak{m} \otimes_R R' ) \cong \operatorname{Hom}_{R'} (M', \mathfrak{m} R' ) \cong \operatorname{Hom}_{R'} (M', \mathfrak{m}' ) . \] So the quotient of $ \operatorname{Hom}_{R' }(M', \mathfrak{m}' ) \rightarrow \operatorname{Hom}_{R'} (M', R' ) $ is $N \otimes_RR'$. Suppose that {$N\cong \Bbbk^r$}. Then, \[ N \otimes_RR' = (R/\mathfrak{m})^r \otimes_RR' = (R/\mathfrak{m} \otimes_RR' )^r = (R'/\mathfrak{m} R')^r =\Bbbk'^r .\]\qedhere \end{proof} \begin{lemma} \label{freerankrelative} Let $R$ be a local $K$-algebra essentially of finite type over a field $K$ of characteristic $0$ and let $R_A$ be a model for $R=R_K$ in the sense of Setup~\ref{relativesetup}. Let $M_K$ be a finitely generated $R_K$-module and $M_A$ be a finitely generated $R_A$-module with $M_A \otimes_A K=M_K$. Then for all points $s \in \operatorname{Spec} A$ in an open nonempty subset we have \[\mathrm{freerank} M_K = \mathrm{freerank} M_{{\kappa}(s)} \, \] \end{lemma} \begin{proof} Let $Q=Q(A)$. The free ranks of {$M_K$ and $M_Q$} coincide by Lemma~\ref{freerankflatextend}. Note that the condition $\mathfrak{m}_Q R_K = \mathfrak{m}_K$ can be obtained by enlarging $A$. 
By further enlarging $A$ the short exact sequence \[0 \longrightarrow \operatorname{Hom}_{R_Q} (M_Q,\mathfrak{m}_Q) \longrightarrow \operatorname{Hom}_{R_Q} (M_Q,R_Q) \longrightarrow N \cong Q^r \longrightarrow 0 , \] which describes the free rank of $M_Q$, descends to a short exact sequence \[0 \longrightarrow \operatorname{Hom}_{R_A} (M_A,\mathfrak{m}_A) \longrightarrow \operatorname{Hom}_{R_A} (M_A,R_A) \longrightarrow N_A \cong A^r \longrightarrow 0 \, \] of $R_A$-modules. We tensor this sequence with $\otimes_A \kappa(s)$ (which is the same as $\otimes_{R_A} R_{\kappa(s)}$) and get a short exact sequence \[0 \longrightarrow \operatorname{Hom}_{R_A} (M_A,\mathfrak{m}_A)\otimes_A \kappa(s) \longrightarrow \operatorname{Hom}_{R_A} (M_A,R_A)\otimes_A \kappa(s) \longrightarrow N_{\kappa(s)} \cong \kappa(s)^r \longrightarrow 0 ,\] since $N_A \cong A^r $ is a flat $A$-module (see \cite[Observation 2.1.5]{HHCharZero}). Moreover, we have \[\operatorname{Hom}_{R_A} (M_A, L_A) \otimes_A \kappa(s) \cong \operatorname{Hom}_{R_{\kappa(s)} } (M_{\kappa(s)}, L_{\kappa(s)}) \] for $L_A=R_A,\mathfrak{m}_A$ after enlarging $A$ again \cite[Theorem~2.3.5 (e)]{HHCharZero}. Then, the tensored sequence is the sequence which computes the free rank of $M_{\kappa(s)}$ to be $r$. \end{proof} \begin{corollary} \label{freerankprincipalrelative} Let $R$ be a local ring essentially of finite type over a field $K$ of characteristic $0$ and let $R_A$ be a model for $R=R_K$ in the sense of Setup~\ref{relativesetup}. Then, for any fixed $n$, for all points $s \in \operatorname{Spec} A$ in a nonempty open subset we have \[\mathrm{freerank} P^n_{R_K|K} = \mathrm{freerank} P^n_{R_s|{\kappa}(s)} . \] \end{corollary} \begin{proof} We have $ P^n_{R_A|A} \otimes_{R_A} R_K \cong P^n_{R_Q|Q} \otimes_{R_Q} R_K \cong P^n_{R_K|K} $ and $P^n_{R_A|A} \otimes_{R_A} R_{\kappa(s)} \cong P^n_{R_{\kappa(s)}|\kappa(s) }$ \cite[Proposition 16.4.5]{EGAIV}. So, this follows from Lemma~\ref{freerankrelative}.
\end{proof} If $A$ has dimension $1$, then the existence of the open subset means that the statement is true for all sufficiently large prime characteristics. In this situation we have \[ \dm{\mathbb{Q}}(R_\mathbb{Q}) = \lim_{n \rightarrow \infty } \left( \lim_{p \rightarrow \infty} \frac{ \mathrm{freerank}(P^n_{R_{\kappa(p)}|\kappa(p) } ) }{ n^{d} /d! } \right) . \] \begin{remark} In Corollary~\ref{freerankprincipalrelative} the elements $f$ which describe the shrinking to $D(f)$ depend on $n$. It is not clear in general whether there exists an $f$ which works for all $n$. However, Lemma~\ref{JacobiTaylorperiodic} provides an instance where one unitary operator produces many unitary operators, so extending the operator over $A_f$ extends all its companions as well. In addition, if $R_K$ has positive differential signature, and if we can find $A \subseteq K$ such that $R_A$ also has positive differential signature, then we get that almost all reductions $R_s$ have positive differential signature bounded from below by $\dm{A}(R_A) $, since the estimate $\mathrm{freerank} P^n_{R_{\kappa(s)}|\kappa(s) } \geq \mathrm{freerank} P^n_{R_A|A} $ holds without further shrinking. Hence, under the assumption that the reductions are $F$-pure, they also have positive $F$-signature by Lemma~\ref{LemmaCofinalDif}, with a common bound from below. \end{remark} We now present a corollary that gives an instance of how the differential signature in characteristic zero affects the behavior in varying positive characteristic. \begin{corollary} \label{freerankprincipalrelativesequence} Suppose that $R_\mathbb{Z}$ is a generically flat $\mathbb{Z}$-algebra essentially of finite type of relative dimension $d$ such that $R_\mathbb{Q}$ is local. Then there exists a sequence of prime numbers $p_n$, $n \in \mathbb{N}$, such that {\[ \dm{\mathbb{Q}}(R_\mathbb{Q}) = \limsup_{n \rightarrow \infty } \frac{ \mathrm{freerank}(P^n_{R_{\kappa(p_n)}|\kappa(p_n) } ) }{ n^{d} /d!
} \]} \end{corollary} \begin{proof} This follows from Corollary \ref{freerankprincipalrelative}. \end{proof} \begin{definition} We retain the situation described in Setup~\ref{relativesetup}. We say that $x \in X$ is of {\it strongly $F$-regular type } (resp.\,\,{\it $F$-pure type}) if there exists a model of $x\in X$ over a finitely generated $\mathbb{Z}$-subalgebra $A$ of $K$ and a dense open subset $U\subseteq \operatorname{Spec} A$ such that $x_s\in X_s$ is strongly $F$-regular (resp.\,\,$F$-pure) for all closed points $s\in U$. We say that $x\in X$ is of {\it dense strongly $F$-regular type} or {\it dense $F$-pure type} if there exists a model and a dense (not necessarily open) set as above. \end{definition} We note that the previous definitions do not depend on the choice of the model (see \cite[Chapter~2]{HHCharZero} and \cite[Section~3.2]{MustataSrinivas}). Hara showed that strongly $F$-regular type is equivalent to log-terminal singularities \cite[Theorem~5.2]{HaraSing} (see also \cite{KarenFrational}). Hara and Watanabe extended this result to dense strongly $F$-regular type \cite[Theorem~3.3]{HaraWatanabe}. Aberbach and Leuschke \cite[Theorem 0.2]{AL} established that $R$ is strongly $F$-regular if and only if $s(R)>0$. The following result gives a partial analogue for the differential signature in characteristic zero. We recall that the \emph{anticanonical cover}\index{anticanonical cover} of a normal local ring $R$ is the symbolic Rees algebra of the inverse of the canonical module (in the class group of $R$). The condition that the anticanonical cover of $R$ is finitely generated (as an $R$-algebra) is a weakening of the condition that $R$ is $\mathbb{Q}$-Gorenstein. \begin{theorem}\label{ThmKLTPos} Let $K$ be a field of characteristic zero and $X$ be a scheme of finite type over $K$. Let $x\in X$ be a normal singularity.
Let $R={\mathcal O}_{X,x}$ be the germ of functions at $x$ and $\mathfrak{m}$ its maximal ideal. If $x\in X$ is of dense $F$-pure type, the anticanonical cover of $R$ is finitely generated, and $\dm{K}(R)>0$, then $x\in X$ is log-terminal. \end{theorem} \begin{proof} Since $x\in X$ is of dense $F$-pure type, there exists a model of $x\in X$ over a finitely generated $\mathbb{Z}$-subalgebra $A$ of $K$ and a dense subset of closed points $W\subseteq \operatorname{Spec} A$ such that $x_s\in X_s$ is $F$-pure for all $s\in W$. Since $\dm{K}(R)>0$, $R$ is $D$-simple by Theorem~\ref{ThmDifMultDsimple}. Then, {$ R_s:={\mathcal O}_{X_s,x_s}=(R_A)_\eta\otimes_A A/\mathfrak{m}_s$} is a simple $D_{R_s| \kappa(s)}$-module for every closed point $s\in U$, where $U$ is a dense open subset $U\subseteq \operatorname{Spec} A$ \cite[Theorem 5.2.1]{SmithVDB}. Since $\kappa(s)$ is $F$-finite, $R_s$ is strongly $F$-regular for every $s\in U\cap W$ \cite[Theorem 2.2]{DModFSplit}. Since $U\cap W$ is dense, $x\in X$ is of dense strongly $F$-regular type. Hence, $x\in X$ is log-terminal \cite[Theorem~D]{CEMS}. \end{proof} We recall a conjecture that relates $F$-purity with log-canonical singularities. Let $K$ be a field of characteristic zero and $X$ be a scheme of finite type over $K$. Let $x\in X$ be an $n$-dimensional normal $\mathbb{Q}$-Gorenstein singularity. Then $x\in X$ is log-canonical if and only if it is of dense $F$-pure type. The direction ``$F$-pure implies log-canonical'' of this conjecture is already known \cite{HaraWatanabe}. There has been intense research regarding the other direction; {see \cite{FujTak,TakagiANT,Her} for some positive results.} Assuming this conjecture, Theorem~\ref{ThmKLTPos} states that if $\dm{K}(R)>0$ and $x\in X$ is log-canonical, then $x\in X$ is log-terminal. \section{Extending and restricting operators} We devote this section to establishing concepts and results that allow us to compute the differential signature for several examples.
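As a toy illustration of the restriction direction of these questions (this example is only meant for orientation and is not used in the sequel), consider the second Veronese ring $R=K[x^2,xy,y^2]\subseteq S=K[x,y]$ over a field $K$ of characteristic zero. Any vector field on $S$ with linear coefficients preserves the parity of the degree and therefore maps $R$ into itself; for instance,
\[ x\partial_y(x^2)=0, \quad x\partial_y(xy)=x^2, \quad x\partial_y(y^2)=2xy . \]
Hence $x\partial_x$, $x\partial_y$, $y\partial_x$, $y\partial_y$ restrict to derivations of $R$. Whether, conversely, every differential operator on $R$ extends to an operator on $S$ is the question of differential extensibility studied below.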
\subsection{Definitions and examples} \begin{definition}\label{def:extensible}$ $ \begin{enumerate} \item An inclusion of $A$-algebras $R \subseteq S$ is \emph{differentially extensible}\index{differentially extensible} over $A$ if for every $\delta \in D_{R|A}$ there exists an element $\tilde{\delta}\in D_{S|A}$ such that $\tilde{\delta}|_{R} = \delta$. \item We say $R\subseteq S$ as above is \emph{differentially extensible with respect to the order filtration} or \emph{order-differentially extensible} if for {every $n\in\mathbb{N}$ and every} $\delta \in D^n_{R|A}$ there exists an element $\tilde{\delta}\in D^n_{S|A}$ such that $\tilde{\delta}|_{R} = \delta$. \item If $A$ has characteristic $p>0$, we say $R\subseteq S$ is \emph{differentially extensible with respect to the level filtration} or \emph{level-differentially extensible} if {for every $e\in\mathbb{N}$ and every $\delta \in D^{(e)}_{R|A}$} there exists an element $\tilde{\delta}\in D^{(e)}_{S|A}$ such that $\tilde{\delta}|_{R} = \delta$. \end{enumerate} \end{definition} Even though these notions are subtle, they apply to many interesting examples, e.g., the inclusions of many classical invariant rings in their ambient polynomial rings. We discuss these examples further in this and the next section. Clearly, if $R\subseteq S$ is differentially extensible with respect to the order or the level filtration, it is differentially extensible. However, there are inclusions that are level-differentially extensible but not order-differentially extensible. \begin{example} Let $R=\mathbb{F}_p[xy] \subseteq S=\mathbb{F}_p[x,y]$. We have that $D^{(e)}_R=\operatorname{End}_{R^q}(R)$ and $D^{(e)}_S=\operatorname{End}_{S^q}(S)$, where $q=p^e$. Since $R$ is free over $R^q$, any map in $D^{(e)}_R$ corresponds to a choice of images for $1, xy, \dots, (xy)^{q-1}$.
As these elements form part of a free basis for $S$ over $S^q$, one can extend the operator to an element of $D^{(e)}_S$ by choosing arbitrary values for the rest of the free basis. However, the map $\delta\in D^{p-1}_{R|\mathbb{F}_p}$ that sends $xy \mapsto 1$ and the rest of the free basis to zero does not extend to an element of $D^{p-1}_{S|\mathbb{F}_p}$, since any such operator decreases degrees by at most $p-1$. We also note that in characteristic zero, the analogous inclusion $R=\mathbb{C}[xy] \subseteq S=\mathbb{C}[x,y]$ is not differentially extensible. Indeed, the derivation $\theta=\frac{d}{d(xy)}$ on $R$ does not extend to a differential operator of any order on $S$. To see this, observe that we may assume any extension $\tilde{\theta}$ of $\theta$ to be of the form {$\tilde{\theta}=\sum_{a,b\geq 1} c_{a,b} x^{a-1} y^{b-1} \frac{1}{a!b!} \partial^{(a,b)}$ for some constants $c_{a,b}\in \mathbb{C}$, with $c_{a,b}=0$ for all but finitely many pairs $a,b$. Plugging in $(xy)^n$ and extracting the $(xy)^{n-1}$ coefficient yields the equality $\sum_{a,b\geq 1} c_{a,b} \binom{n}{a}\binom{n}{b} =n$ for all $n$. But, for fixed integers $a,b\geq 1$, we have $n \mid \binom{n}{a}$ and $n\mid \binom{n}{b}$ in $\mathbb{C}[n]$, so} the left-hand side is a polynomial in $n$ divisible by $n^2$; hence the difference of the two sides of this equality is a nonzero polynomial with roots at all positive integers $n$, a contradiction. \end{example} The notion of differential extensibility has been studied earlier in the literature, though not under this name \cite{LS,Schwarz}. \begin{remark} By Proposition~\ref{localization1}, localization maps are order-differentially extensible over a field $K$. \end{remark} Part of the following proposition is well-known; see, e.g., {\cite[Proposition~3.2]{Knop}}. \begin{proposition}\label{prop-extensible-finite} Let $R$ and $S$ be algebras essentially of finite type over a field $K$.
Suppose that both $R$ and $S$ are normal, and that $S$ is a module-finite extension of $R$ that is \'etale in codimension one and such that the inclusion $R\subseteq S$ splits as a map of $R$-modules. Then the inclusion of $R$ into $S$ is order-differentially extensible over $K$. If $K$ has characteristic $p>0$, then the inclusion of $R$ into $S$ is also level-differentially extensible over $K$. \end{proposition} \begin{proof} By Lemma~\ref{etalemap} and the hypotheses, there is an $S$-module homomorphism $\alpha:S\otimes_R \ModDif{n}{R}{K} \to \ModDif{n}{S}{K}$ that is an isomorphism in codimension one. Then, we obtain a map { \[\operatorname{Hom}_R(\ModDif{n}{R}{K},S)\cong\operatorname{Hom}_S(S\otimes_R \ModDif{n}{R}{K},S) \leftarrow \operatorname{Hom}_S(\ModDif{n}{S}{K},S) \cong D^n_{S|K} \] } that is an isomorphism in codimension one. We observe that this map agrees with restriction of functions. We obtain that $D^n_{R|K}(S) \cong D^n_{S|K}$ \cite[{Lemma 0AV6 and Lemma 0AV9}]{stacks-project}. Since $R$ is a direct summand of $S$ as an $R$-module, we find that $D^n_{R|K} \subseteq D^n_{R|K}(S)$. The statement about level-extensibility in positive characteristic follows in the same way. \end{proof} Much of the literature on extending differential operators is in the context of invariant rings. If $S$ is a $K$-algebra with an action of a linearly reductive group $G$, any $G$-invariant differential operator on $S$ yields a differential operator on $S^G$. The question of whether this restriction homomorphism $\pi: (D_{S|K})^G \rightarrow D_{S^G|K}$ is surjective, and whether it is surjective for each filtered piece $\pi_n: (D^n_{S|K})^G \rightarrow D^n_{S^G|K}$, has been studied (see~\cite{LS, Schwarz} among others) in connection with the question of when rings of differential operators on invariant rings are simple algebras. For example, one has the following result.
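As a consistency check, the criterion in the theorem below can be tested against the inclusion $\mathbb{C}[xy]\subseteq \mathbb{C}[x,y]$ treated in the example above; the following sketch records this verification (the numbering of the conditions refers to the theorem that follows).

```latex
\begin{example}
Let $L=\{(t,t)\mid t\in\mathbb{R}\}\subseteq \mathbb{R}^2$, so that
$\Lambda=L\cap\mathbb{N}^2=\{(n,n)\mid n\in\mathbb{N}\}$ and
$K[\Lambda]=K[xy]\subseteq K[\mathbb{N}^2]=K[x,y]$. Here
\[ L\cap\{x_1\geq 0\}=L\cap\{x_2\geq 0\}=\{(t,t)\mid t\geq 0\}, \]
so the spaces $L\cap\{x_i\geq 0\}$ are not distinct and condition (i) fails.
The criterion therefore asserts that this inclusion is not
order-differentially extensible, in agreement with the failure of the
derivation $\frac{d}{d(xy)}$ to extend observed above.
\end{example}
```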
\begin{theorem}[\cite{Musson}]\label{Musson} Let $K$ be an algebraically closed field of characteristic zero, and $\Lambda$ be a semigroup of the form $L\cap \mathbb{N}^d$ for some linear space $L\subseteq \mathbb{R}^d$. Then the inclusion $K[\Lambda]\subseteq K[\mathbb{N}^d]$ is order-differentially extensible if and only if the following hold. \begin{itemize} \item[(i)] The spaces $L\cap \{x_i \geq 0\}$ are distinct for distinct $i$. \item[(ii)] The spaces $L\cap \{x_i = 0\}$ are facets of $L\cap \mathbb{R}_{\geq 0}^d$. \item[(iii)] The image of $\Lambda$ under each coordinate function generates $\mathbb{Z}$ as a group. \end{itemize} \end{theorem} For contrast, we note the following. \begin{proposition}\label{charp-toric-extend} Let $K$ be a field of characteristic $p>0$, and $\Lambda$ be a semigroup of the form $L\cap \mathbb{N}^d$ for some linear space $L\subseteq \mathbb{R}^d$. Then $K[\Lambda]\subseteq K[\mathbb{N}^d]$ is level-differentially extensible. \end{proposition} \begin{proof} Note first that, from the $\mathbb{N}^d$-grading, $D^{(e)}_{K[\Lambda]}$ is generated by maps that send monomials to monomials, so it suffices to show that such a map extends. Let $\phi: K[\Lambda] \rightarrow K[\Lambda]$ be a $K[\Lambda]^q$-linear map that sends monomials to monomials. Define a map $\tilde{\phi}:K[\mathbb{N}^d]\rightarrow K[\mathbb{N}^d]$ as follows. For a monomial $x^\alpha$, set $\tilde{\phi}(x^\alpha)=x^{q\gamma}\phi(x^\beta)$ if $\alpha$ can be written as $\beta + q \gamma$ with $\beta \in \Lambda$ and $\gamma\in \mathbb{N}^d$, and set $\tilde{\phi}(x^\alpha)=0$ otherwise; extend by $K$-linearity. To see that this map is well-defined, write $\alpha=\beta+q\gamma=\beta' + q\gamma'$ for $\beta, \beta'\in \Lambda$ and $\gamma,\gamma'\in\mathbb{N}^d$. Then, $(q-1)\beta + \beta' = q(\beta + \gamma-\gamma')\in \Lambda \cap q\mathbb{Z}^d = q\Lambda$.
Thus, \[ \frac{x^{q\gamma}\phi(x^\beta) }{ x^{q\gamma'} \phi(x^{\beta'})} = \frac{x^{q(\beta + \gamma - \gamma')}\phi(x^{\beta}) }{ x^{q\beta} \phi(x^{\beta'})} =\frac{ \phi(x^{q(\beta + \gamma - \gamma')+\beta})}{\phi(x^{q\beta +\beta'})} = \frac{ \phi(x^{(q-1)\beta + \beta' +\beta}) }{\phi(x^{q\beta +\beta'})}=1. \] It follows from the definition that $\tilde{\phi}$ is $K[\mathbb{N}^d]^q$-linear and agrees with $\phi$ on $K[\Lambda]$. \end{proof} We now introduce a condition dual to differential extensibility, which we find useful to state in a bit more generality than the setting of Definition~\ref{def:extensible}. \begin{definition} Let $R$ be an $A$-algebra, and $S$ be a $B$-algebra. Suppose we have a commutative diagram \[ \xymatrix{R \ar[r]^{\varphi} & S \\ A \ar[u]\ar[r] & B, \ar[u]}\] which we refer to as a map of algebras $(R,A)\to (S,B)$ for short. A map of algebras is \emph{differentially retractable}\index{differentially retractable} if for every $\delta \in D_{S|B}$ there exists an element $\hat{\delta}\in D_{R|A}$ such that $\delta \circ \varphi = \varphi \circ \hat{\delta}$; it is \emph{order-differentially retractable} if we require the order of $\hat{\delta}$ to be no greater than that of $\delta$. \end{definition} The following fact was used systematically by \`Alvarez-Montaner, Huneke, and the third author \cite{AMHNB} to obtain results on local cohomology and Bernstein-Sato polynomials of direct summands of polynomial rings. \begin{proposition}[{\cite[Lemma~3.1]{AMHNB}}] Let $R\subseteq S$ be an inclusion of $A$-algebras such that $R$ is an ($R$-linear) direct summand of $S$. Then the inclusion $(R,A)\to (S,A)$ is order-differentially retractable. \end{proposition} \begin{lemma} Let $R$ be a polynomial ring over a field $K$, and $S=R/I$ for some ideal $I$. Then the surjection $(R,K)\to (S,K)$ is order-differentially retractable.
\end{lemma} \begin{proof} This is a well-known fact; see \cite[Section 15.5.6]{MCR}. \end{proof} \subsection{Applications to Bernstein-Sato polynomials and differential powers} The property of differential extensibility has interesting consequences for differential invariants of rings. For example, this notion has strong implications for Bernstein-Sato polynomials. We first recall the following definition. \begin{definition}\label{def-BS-poly} Let $R$ be a ring, $A$ a subring, and $f$ be an element of $R$. We say that the polynomial $b^{R|A}_{f}(s)\in A[s]$ is the \emph{Bernstein-Sato polynomial of $f$}\index{Bernstein-Sato polynomial}\index{$b^{R \vert A}_{f}(s)$} {if $b^{R|A}_{f}(s)$ is monic,} there exists a differential operator $\delta\in D_{R|A}$ such that, for $b(t)=b^{R|A}_{f}(t)$, \begin{equation}\tag{$\dagger$}\label{BS-equation} \delta\cdot f^{t+1}=b(t) f^t \quad \text{ for all } \quad t\in \mathbb{Z}, \end{equation} and for any pair $\delta$ and $b$ that satisfy~(\ref{BS-equation}), $b^{R|A}_{f}(s)$ divides $b$. If there do not exist $\delta$ and {nonzero} $b$ that satisfy~(\ref{BS-equation}), we say that $f$ has no Bernstein-Sato polynomial. \end{definition} It is well-known that Bernstein-Sato polynomials exist for every element of $S$ when $A=K$ is a field of characteristic zero and $S=K[x_1,\dots,x_n]$ \cite{BernsteinPoly,SatoPoly}. Recently, it was shown that when $R$ is a direct summand of a polynomial ring $S$, then $b^{R|K}_{f}(s)$ exists and $b^{R|K}_{f}(s)\;\big|\;b^{S|K}_{f}(s)$ for any $f\in R$, and examples were given where {$b^{R|K}_{f}(s)\neq b^{S|K}_{f}(s)$ \cite[Example~3.17]{AMHNB}}. The following result establishes a case where equality holds. \begin{theorem}\label{BS-Thm} Let $A\subseteq R\subseteq S$ be rings, $f$ be an element of $R$, and suppose that $R$ is a direct summand of $S$ and that the inclusion $R\subseteq S$ is differentially extensible over $A$. If $b^{S|A}_{f}(s)$ exists, then $b^{R|A}_{f}(s)$ exists and $b^{R|A}_{f}(s)=b^{S|A}_{f}(s)$.
\end{theorem} \begin{proof} We know that the Bernstein-Sato polynomial $b^{R|A}_{f}(s)$ exists and $b^{R|A}_{f}(s)\;\big|\;b^{S|A}_{f}(s)$ \cite[Theorem~3.14]{AMHNB}, so it suffices to show that $b^{S|A}_{f}(s)\;\big|\;b^{R|A}_{f}(s)$. Choose a differential operator $\delta\in D_{R|A}$ that satisfies the equation~(\ref{BS-equation}) for $b=b^{R|A}_{f}$ as an equation in $R$. Replacing $\delta$ by an extension to $D_{S|A}$, we may view this as an equation in $S$. It then follows from the definition of $b^{S|A}_{f}(s)$ that $b^{S|A}_{f}(s)\;\big|\;b^{R|A}_{f}(s)$. \end{proof} \begin{remark}\label{Rem-com-BS} Let $K$ be a field of characteristic zero, $S=K[x_1,\ldots,x_d]$, and $R\subseteq S$ be a $K$-subalgebra that is a direct summand of $S$. Suppose that the inclusion of $R$ into $S$ is differentially extensible over $K$. Given the previous result, one can compute $b^{R|K}_f(s)$ by using the existing methods \cite{Noro,Oaku} to compute $b^{S|K}_f(s)$. \end{remark} The following propositions show the utility of the notion of extensibility in computing differential powers. \begin{proposition}\label{behave-diff-powers} Let $A \subseteq R \subseteq S$ be rings. Let $I\subseteq R$ and $J\subseteq S$ be ideals. \begin{enumerate} \item[(i)] For any $R$-linear map $\pi:S\rightarrow R$, if $\pi^{-1}(I)\subseteq J$, one has $I\dif{n}{A} \subseteq J\dif{n}{A} \cap R$. \item[(ii)] If the inclusion of $R$ into $S$ is order-differentially extensible and $I \supseteq J\cap R$, then $I\dif{n}{A} \supseteq J\dif{n}{A} \cap R$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item[(i)] We show that if $f \in R\setminus J\dif{n}{A}$, then $f \notin I\dif{n}{A}$. Suppose that $f\in R$ is not in $J\dif{n}{A}$. Then there is a differential operator $\delta\in D^{n-1}_{S|A}$ such that $\delta(f)\notin J$.
By precomposing with the inclusion and postcomposing with $\pi$, one gets a differential operator $\delta' \in D^{n-1}_{R|A}$; see \cite[Proof of Proposition 3.1]{DModFSplit}. Then, since $\delta'(f)=\pi(\delta(f))\notin I$, it follows that $f \notin I\dif{n}{A}$. \item[(ii)] Suppose that $f \notin I \dif{n}{A}$. Then, there is a differential operator $\delta\in D^{n-1}_{R|A}$ with $\delta(f)\notin I$. Since the inclusion is order-differentially extensible over $A$, there is a differential operator $\delta'\in D^{n-1}_{S|A}$ with $\delta'(f)\in (R\setminus I) \subseteq (S\setminus J)$. Thus, $f\notin J\dif{n}{A}$.\qedhere \end{enumerate} \end{proof} \begin{proposition}\label{intersect-maximal} Let $A \subseteq R \subseteq S$ be rings, with $(R,\mathfrak{m})$ and $(S,\mathfrak{n})$ local. Suppose that the inclusion of $R$ into $S$ is local, admits an $R$-linear splitting, and is order-differentially extensible over $A$. Then, $\mathfrak{m}\dif{n}{A}=\mathfrak{n}\dif{n}{A}\cap R$. If one removes the hypothesis that the inclusion is order-differentially extensible, then one still has that $\mathfrak{m}\dif{n}{A}\subseteq \mathfrak{n}\dif{n}{A}\cap R$. \end{proposition} \begin{proof} If $\pi$ is a splitting of the inclusion and the inclusion is order-differentially extensible, the hypotheses of both parts of the previous proposition are satisfied for $I=\mathfrak{m}$, $J=\mathfrak{n}$. For the second statement, the first part of the previous proposition applies. \end{proof} Since direct summands of regular rings are strongly $F$-regular \cite{HHStrongFreg}, they also have positive $F$-signature \cite{AL}. We show a similar result for differential signature. \begin{theorem}\label{ThmDirSumPos} Let $(R,\mathfrak{m},K)$ and $(S,\mathfrak{n},K)$ be algebras with pseudocoefficient field $K$ such that $S$ is regular and $R$ is a direct summand of $S$. Then $\dm{K}(R)>0$.
\end{theorem} \begin{proof} By Proposition~\ref{intersect-maximal}, we have that $\mathfrak{m}\dif{n}{K} \subseteq \mathfrak{n}\dif{n}{K}\cap R$, and $\mathfrak{n}\dif{n}{K}=\mathfrak{n}^n$ since $S$ is regular; thus $\mathfrak{m}\dif{n}{K} \subseteq \mathfrak{n}^n\cap R$. There is a constant $C$ such that $\mathfrak{n}^n \cap R \subseteq \mathfrak{m}^{\lfloor n/C \rfloor}$ \cite[Theorem~1]{Hubl-completions}. Then, setting $d=\dim(R)$, \begin{align*} \dm{K}(R)&=\limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{K})}{n^d / d!}\\ & \geq \limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m}^{\lfloor n/C \rfloor})}{n^d / d!}\\ & = \limsup_{n\rightarrow\infty}\frac{\lambda_R(R/\mathfrak{m}^n)}{(Cn)^d / d!} = \frac{e(R)}{C^d}, \end{align*} which is positive, as required. \end{proof} We conclude this section by noting that differential signature has a very simple interpretation for order-differentially extensible direct summands of polynomial rings. \begin{theorem}\label{COMPUT-prop} Let $S$ be a standard graded polynomial ring over a field $K$, and $R$ be a graded subring such that $R\rightarrow S$ is split and order-differentially extensible. Then, the differential powers of $\mathfrak{m} =R_+$ form a graded system and $R\cong \bigoplus_{n\in \mathbb{N}} \frac{\mathfrak{m}\dif{n}{K}}{\mathfrak{m}\dif{n+1}{K}}$. In particular, the gradings agree. Consequently, the differential signature of $R$ exists as a limit and is a rational number, namely the degree of the graded ring $R$. If, in addition, $R$ is generated in a single degree $t$, then $\displaystyle \dm{K}(R)=\frac{e(R)}{t^{\dim(R)}}$. \end{theorem} \begin{proof} By Proposition~\ref{intersect-maximal}, we have that $(R_+)\dif{n}{K}=(S_+) \dif{n}{K}\cap R = [R]_{\geq n}$ for all $n$. Thus, ${\mathcal G}_n=[R]_{\geq n} / [R]_{\geq n+1}$ for all $n$. Then, since $R$ is a Noetherian graded ring, its degree is a rational number. In the case $R$ is generated in a single degree $t$, the stated formula is the degree of $R$ as a graded ring.
\end{proof} For the most general results on which inclusions of invariant rings are differentially extensible, we refer the reader to the work of Schwarz \cite{Schwarz}. \section{Differential signature of certain invariant rings} In this section, we apply Theorem~\ref{COMPUT-prop} to compute differential signature for certain rings of invariants. \subsection{Formula for invariant rings under the action of a finite group} We now compute the differential signature of invariant rings of finite groups. In this case, we obtain that $\dm{K}(R^G)=s(R^G).$ \begin{theorem}\label{group-formula} Let $G$ be a finite group such that $|G| \in K^\times$, and let $G$ act linearly on a polynomial ring $R$ over $K$. Suppose that $G$ contains no nonidentity elements that fix a hyperplane in the space of one-forms $[R]_1$. Then $\dm{K}(R^G)=1/|G|$. \end{theorem} \begin{proof} The ramification locus of a finite group action corresponds to the union of the fixed spaces of the nonidentity elements of $G$. Consequently, the assumption that no nonidentity element fixes a hyperplane ensures that the extension is unramified in codimension one. The inclusion is order-differentially extensible over $K$ by Proposition~\ref{prop-extensible-finite}. Furthermore, since $|G| \in K^\times$, there exists a Reynolds operator that splits the inclusion map. By Theorem~\ref{COMPUT-prop}, the differential signature is just the degree, which, by Molien's formula, is \[ \dm{K}(R^G)=\limsup_{n \rightarrow \infty} \frac{d!}{n^d}\; \lambda_{R^G}\left( {R^G}/{[R^G]_{\geq n}}\right) = \frac{1}{|G|}, \] where $d=\dim(R^G)$. \qedhere \end{proof} \subsection{Volume formulas for toric rings}\label{SubSecToric} We also obtain a formula for pointed normal affine semigroup rings. We simply refer to such rings as \emph{toric} rings\index{toric ring}, although different authors mean different things by this term. \begin{setup}\label{toric-setup} Let $C$ be a pointed rational cone in $\mathbb{R}^d$.
That is, the rays of $C$ each contain a nonzero point in $\mathbb{Z}^d$, and $C$ contains no lines. Let $K$ be a field, and $R$ be the semigroup ring $K[\mathbb{Z}^d \cap C]\subseteq K[\mathbb{Z}^d]=K[x_1^{\pm1} , \dots, x_d^{\pm 1}]$; that is, the subring of the {Laurent polynomials} with $K$-basis given by monomials whose exponents lie in $\mathbb{Z}^d \cap C$. Given such a realization, let $F_1,\dots,F_r$ be the facets of $C$. For each facet $F_i$ there exists a unique linear form $\ell_i$ with $F_i \subseteq \ker \ell_i$ that has integral coprime coefficients and is positive on the interior of the cone. Let $\ell =(\ell_1, \ldots, \ell_r)$ be the (injective) map from $\mathbb{R}^d$ to $\mathbb{R}^r$. Note that $\ell(C)=\mathbb{R}_{\geq 0}^r\cap \ell(\mathbb{R}^d)$. This gives an embedding of $R$ into $S=K[\mathbb{N}^r]=K[y_1,\dots,y_r]$, and $R$ is the degree-zero part of $K[y_1,\dots,y_r]$ under the grading given by the group $\mathbb{Z}^r/\ell(\mathbb{Z}^d)$. \end{setup} The grading group $\mathbb{Z}^r/\ell(\mathbb{Z}^d)$ is known to be the divisor class group of the monoid ring. We also use the standard grading of $S=K[y_1,\dots,y_r]$, under which a monomial $x^\mu$ has degree $ |\ell(\mu)|=\ell_1(\mu) + \cdots + \ell_r(\mu) $ in $S$. The linear form $|\ell|=\ell_1 + \cdots + \ell_r$ defines in $\mathbb{R}^d$ the compact polytope \[ \{P \in C\ |\ (\ell_1 + \cdots+ \ell_r ) (P) \leq 1 \} .\] Its volume is directly related to the differential signature of the normal monoid ring $K[\mathbb{Z}^d \cap C]$. \begin{lemma}[{\cite[Proof of Theorem~3.4]{Hsiao-Matusevich}},\cite{Musson}]\label{toric-extl} If $K$ is algebraically closed of characteristic zero, the inclusion of $R$ into $S$ is order-differentially extensible over $K$. \end{lemma} Hsiao and Matusevich gave a construction for an order-preserving lift of an arbitrary differential operator from $R$ to $S$ \cite[Proof of Theorem~3.4]{Hsiao-Matusevich}.
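Before stating the volume formula, we record a quick sanity check; the following sketch applies the formula of the next theorem to the full orthant, recovering the expected value for a polynomial ring.

```latex
\begin{example}
Let $C=\mathbb{R}_{\geq 0}^d$, so that $R=K[\mathbb{Z}^d\cap C]=K[x_1,\dots,x_d]$
is a polynomial ring. The facets of $C$ are the coordinate hyperplanes, with
linear forms $\ell_i=x_i$ for $i=1,\dots,d$, and $r=d$. The polytope
\[ \{P\in C \mid (\ell_1+\cdots+\ell_d)(P)\leq 1\} \]
is the standard simplex, of volume $1/d!$. The formula of the theorem below
thus gives $\dm{K}(R)=d!\cdot\tfrac{1}{d!}=1$, consistent with the value of
the differential signature of a regular ring.
\end{example}
```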
\begin{theorem}\label{ThmToric} In the context of Setup~\ref{toric-setup}, if $K$ has characteristic zero, then $\dm{K}(R)= d! \ \mathrm{vol} \{P \in C \ | \ (\ell_1 + \cdots+ \ell_r ) (P) \leq 1 \}$. \end{theorem} \begin{proof} By extending the base field, we may assume that $K$ is algebraically closed. It follows from Lemma~\ref{toric-extl} that the inclusion of $R$ into $S$ is order-differentially extensible over $K$. Since $R$ is the degree-zero part of $S$ under a grading (equivalently, the ring of invariants of a linearly reductive group acting on $S$), the inclusion map is split. We can then apply Theorem~\ref{COMPUT-prop} to get \[ \dim_K(R/ (R_+)\dif{n+1}{K} ) = \dim_K(R/[R]_{\geq n+1}) = \# \{ \mu \in \mathbb{Z}^d \cap C\ |\ (\ell_1 + \cdots+ \ell_r ) (\mu ) \leq n \} . \] This number, divided by $n^d$, converges to the volume of the polytope \[ \{P \in C|\, (\ell_1 + \cdots+ \ell_r ) (P) \leq 1 \} .\qedhere\] \end{proof} We note that a simpler version of the argument for Theorem~\ref{ThmToric} can be used to obtain the description of the $F$-signature of toric varieties due to Watanabe--Yoshida and Von Korff \cite{WatanabeYoshida,VonKorff} (see also \cite{SinghSemigroup}). Our statement of their result, which is somewhat simpler than but closely parallels our description of differential signature, differs from that of the original sources. The key difference is that less specific realizations of toric rings as direct summands of polynomial rings suffice, so it is not necessary to appeal to Setup~\ref{toric-setup}. \begin{theorem}[Watanabe--Yoshida, Von Korff, Singh]\label{WYVKS} Let $K$ be a field of positive characteristic, $L$ be a rational linear subspace of $\mathbb{R}^r$, and $R=K[\mathbb{N}^r \cap L] \subseteq S=K[\mathbb{N}^r]$. Then $s(R)=\mathrm{vol}_L(\cube \cap L)$, where $\cube$ is the unit cube in $\mathbb{R}^r$.
\end{theorem} \begin{proof} The inclusion of $R$ into $S$ is split, so every element of $D^{(e)}_S$ restricts to an element of $D^{(e)}_R$ by composition with a retraction. By Proposition~\ref{charp-toric-extend}, every element of $D^{(e)}_R$ extends to an element of $D^{(e)}_S$. The same proof as that of Proposition~\ref{intersect-maximal} shows that $\mathfrak{m}^\Fdif{p^e}=\mathfrak{n}^\Fdif{p^e}\cap R$, where $\mathfrak{m}$ and $\mathfrak{n}$ are the homogeneous maximal ideals of $R$ and $S$, respectively. As these ideals are generated by monomials, the length of $R/I_e=R/\mathfrak{m}^\Fdif{p^e}$ may be computed as $\# (q\cube \cap L \cap \mathbb{N}^r) = \# (\cube \cap \frac{1}{q}(L \cap \mathbb{N}^r))$, where $q=p^e$. The formula again follows from the definition of the Riemann integral. \end{proof} This also shows that for a simplicial cone, i.e., a cone where the number of facets coincides with the dimension ($d=r$ in Setup~\ref{toric-setup}), the $F$-signature and the differential signature are the same. In fact, both are the inverse of the order of the finite group $D= \mathbb{Z}^d/\ell(\mathbb{Z}^d)$. This follows since in the simplicial case the determinant of the ray vectors normalized by the condition $|\ell|=1$ determines the volume of the relevant polytope as well as the order of this group. Since the $D$-grading of the polynomial ring corresponds to an action of the dual group, and the invariant ring is the degree-zero part, this also follows from Theorem~\ref{group-formula}. We give some examples to illustrate how to use Theorem~\ref{ThmToric}. \begin{example} Let $R$ be the $d$-th Veronese subring of $K[x,y]$. One may realize this ring as $K[C \cap \mathbb{Z}^2]$, where $C$ is the cone bounded by the rays $\mathbb{R}_{\geq 0} (1,0)$ and $\mathbb{R}_{\geq 0} (1,d)$. Then, the linear forms are $\ell_1=y$ and $\ell_2=dx-y$, and the generators of the monoid, namely $(1,0),(1,1), \ldots , (1,d-1), (1,d)$, are sent under $\ell$ to $(i,d-i)$, $i=0, \ldots, d$.
The condition $dx=1$ determines the points $(\frac{1}{d},0)$ and $(\frac{1}{d},1 )$ on the rays, and the area of the given triangle is $1/(2d)$, which gives $\dm{K}(R)=1/d$ when multiplied by $2!=2$. \end{example} \begin{example} Let $R$ be the hypersurface ring defined by the equation $ac = b^2d$. This can be realized as $K[C \cap \mathbb{Z}^3]$, where $C$ is the cone generated by $ (1,0,1), (1,0,0), (1,1,0)$ and $(0,1,1)$. The primitive linear forms for the facets of $C$ are $y,z, x-y+z $ and $x+y-z $. The sum of these linear forms is $2x+y+z$, and the condition $2x+y+z=1$ yields the points on the rays $ ( \frac{1}{3},0,\frac{1}{3} ), (\frac{1}{2},0,0), (\frac{1}{3},\frac{1}{3},0)$ and $(0,\frac{1}{2},\frac{1}{2})$. We triangulate the polytope using these points and get $\dm{K}(R)=1/6$. \end{example} \subsection{Determinantal rings}\label{SubSecDet} We are now ready to compute the differential signature for rings obtained from matrices of variables. We point out that the $F$-signature is not known in the following examples. \begin{theorem}\label{ThmDet} Let $K$ be a field of characteristic zero, $Y=[y_{ij}]$ be an $m\times r$ matrix of indeterminates and $Z=[z_{ij}]$ be an $r\times n$ matrix of indeterminates, with $r \leq m \leq n$. Let $X=YZ$ be the $m\times n$ matrix obtained as the product of $Y$ and $Z$. Let $S=K[Y,Z]$ be the polynomial ring in the entries of $Y$ and $Z$, and $R=K[X]$ be the subring of $S$ generated by the entries of $X$. We note that $R$ is isomorphic to the quotient of the polynomial ring on an $m \times n$ matrix of indeterminates by the ideal generated by the $(r+1) \times (r+1)$ minors. Then, \[\displaystyle\dm{K}(R)=\frac{1}{2^{r(m+n-r)}} \det\left[ \binom{m+n-i-j}{m-i} \right]_{i,j=1,\dots,r}.\] \end{theorem} \begin{proof} By base change, we can reduce to the case where $K=\mathbb{C}$. Then, the inclusion map of $R$ into $S$ is order-differentially extensible over $\mathbb{C}$ \cite[Case~A, Main Theorem 0.3, 0.7]{LS}.
As this is an invariant ring of a linearly reductive group, the inclusion splits. Theorem~\ref{COMPUT-prop} then applies. The ring $R$ is generated in degree 2, and the formulas for the dimension and multiplicity are classical; see, e.g., \cite[Theorem~3.5]{HerzogTrung}. \end{proof} \begin{example} \label{signaturesegre} In the special case $r=1$, we obtain the analogue of Singh's formula \cite[Example~7]{SinghSemigroup} for the $F$-signature of the homogeneous coordinate ring for the Segre embedding of $\mathbb{P}^{m-1} \times \mathbb{P}^{n-1}$. The differential signature of this ring is $\displaystyle \frac{\binom{m+n-2}{m-1}}{2^{m+n-1}}$. For comparison, the $F$-signature is $\displaystyle \frac{A(m+n-1,n)}{(m+n-1)!}$, where $A$ denotes the Eulerian numbers. \end{example} \begin{theorem} Let $K$ be a field of characteristic zero, $Y=[y_{ij}]$ be a $k\times n$ matrix of indeterminates, with $n > k$. Let $X=Y^t Y$, $S=K[Y]$ be the polynomial ring in the entries of $Y$, and $R=K[X]$ be the subring of $S$ generated by the entries of $X$. We note that $R$ is isomorphic to the quotient of the polynomial ring on an $n \times n$ symmetric matrix of indeterminates by the ideal generated by the $(k+1)\times (k+1)$ minors. Then, \[\displaystyle\dm{K}(R)=\frac{1}{2^{kn-\binom{k}{2}}} \sum_{1\leq \ell_1 < \dots < \ell_k \leq n} \det\left[ \binom{n-i}{n-\ell_j}\right]_{i,j=1,\dots,k}.\] \end{theorem} \begin{proof} By base change, we can reduce to the case where $K=\mathbb{C}$. Then, the inclusion map of $R$ into $S$ is order-differentially extensible over $\mathbb{C}$ \cite[Case~B, Main Theorem 0.3, 0.7]{LS}. Since $R$ is the invariant ring of $S$ under a natural action of the orthogonal group \cite[II~3.3]{LS}, the inclusion splits. Theorem~\ref{COMPUT-prop} then applies.
The ring $R$ is generated in degree 2, and the multiplicity of these rings is known by the work of Conca; see \cite[Theorem~3.6]{ConcaSym}. \end{proof} \begin{theorem} Let $K$ be a field of characteristic zero, $Y=[y_{ij}]$ be a $2k\times n$ matrix of indeterminates, with $n > 2(k+1)$. Let $Q$ be the $2k \times 2k$ antisymmetric matrix $\begin{bmatrix} 0_{k \times k} & I_{k \times k} \\ -I_{k \times k} & 0_{k \times k} \end{bmatrix}$. Let $X=Y^t Q Y$, $S=K[Y]$ be the polynomial ring in the entries of $Y$, and $R=K[X]$ be the subring of $S$ generated by the entries of $X$. We note that $R$ is isomorphic to the quotient of the polynomial ring on an $n \times n$ antisymmetric matrix of indeterminates by the ideal generated by the $2(k+1) \times 2(k+1)$ minors. Then, \[\displaystyle\dm{K}(R)=\frac{1}{2^{k(2n-2k-1)}} \det\left[ \binom{2n-4k-2}{n-2k-i+j-1}-\binom{2n-4k-2}{n-2k-i-j-1} \right]_{i,j=1,\dots,k}.\] \end{theorem} \begin{proof} By base change, we can reduce to the case where $K=\mathbb{C}$. Then, the inclusion map of $R$ into $S$ is order-differentially extensible over $\mathbb{C}$ \cite[Case~C, Main Theorem 0.3, 0.7]{LS}. Since $R$ is the invariant ring of $S$ under the action of the symplectic group \cite[II~4.3]{LS}, the inclusion splits. Theorem~\ref{COMPUT-prop} then applies. The ring $R$ is generated in degree 2, and the multiplicity of these rings is known by the work of Herzog and Trung; see \cite[Theorem~5.6]{HerzogTrung}. \end{proof} We end with one more related example. \begin{example} \label{Grassmann} Let $\displaystyle R=\frac{\mathbb{C}[u,v,w,x,y,z]}{(ux+vy+wz)}$. If each variable has degree two, this is isomorphic to the coordinate ring of the Grassmannian $\operatorname{Gr}(2,4)$, which is an invariant ring of an action by $SL_2(\mathbb{C})$. The inclusion map of this invariant ring is order-differentially extensible \cite[Theorem~11.9]{Schwarz}. Since $R$ has dimension $5$ and multiplicity $2$, and is generated in the single degree $t=2$, we find $\dm{\mathbb{C}}(R)=2/2^5=1/16$.
\end{example} \subsection{Quadrics}\label{SubQuadrics} We now deal with the quadric hypersurface \[ R=K[x_1, \ldots, x_{d+1}]/(x_1^2 + \cdots + x_{d+1}^2) .\] Over an algebraically closed field of characteristic $0$, all nondegenerate quadrics can be brought into this form. Nondegeneracy is equivalent to the property that $\operatorname{Proj} R$ is smooth. We show that in this case the differential signature is $ \left( \frac{1}{2} \right)^{d-1}$ provided $d \geq 2$. The arguments are quite involved and need several preparations. The first lemma explicitly establishes the existence of sufficiently many unitary operators, from which we deduce the estimate $\geq \left( \frac{1}{2} \right)^{d-1} $. For the other estimate we then study global sections of symmetric powers of syzygy bundles on the quadric $\operatorname{Proj} R$. The cases $d=2,3,5$ are covered by examples in previous subsections; see Theorem~\ref{group-formula}, Theorem~\ref{ThmToric}, Example~\ref{signaturesegre}, and Example~\ref{Grassmann}. \begin{lemma} \label{quadriclemma} Let $f=x_1^2 + \cdots + x_{d+1}^2$, $d \geq 2$, and $R=K[x_1, \ldots, x_{d+1}]/(f)$ over a field $K$ of characteristic $0$. Then the following hold. \begin{enumerate} \item There exists a differential operator $\delta_1$ of order $2$ and homogeneous of degree $-1$ given by \[\delta_1 =(d-1) \partial_1 +x_1 \partial_1^2- \sum_{j \neq 1} x_1 \partial_j^2 + 2 \sum_{ j \neq 1} x_j \partial_1 \partial_j .
\] \item A monomial $x^\lambda$ is sent by $\delta_1$ to \[ \delta_1 (x^\lambda) =\lambda_1 \left( (d-1 ) + (\lambda_1-1) + 2 \sum_{j \neq 1} \lambda_j \right) x^{\lambda -e_1 } - \sum_{j\neq 1} \lambda_j ( \lambda_j-1) x^{ \lambda + e_1-2e_j} .\] \item We have the identity (as operators on the polynomial ring) \[\partial^\nu \circ \delta_1 = \left( d-1+ \nu_1 +2 \sum_{j \neq 1} \nu_j \right) \partial^{\nu+e_1} - \nu_1 \sum_{j \neq 1 } \partial^{\nu -e_1+2e_j } + \theta , \] where $\theta$ is a sum of operators of the form $f_\lambda \partial^\lambda $ with $f_\lambda$ homogeneous of positive degree. \item For every monomial $x^\nu$ of degree $n$ with $\nu_{d+1} \leq 1$ there exists a differential operator $\xi_\nu$ of order $ \leq 2n$ and homogeneous of degree $-n$ of the form \[\xi_\nu = \frac{1}{\nu!} \partial^\nu + \zeta+\theta \, \] where $\zeta$ is a linear combination of $\partial^\mu$ with $ \mu_{d+1} \geq 2 $ and where $\theta$ is a sum of operators of the form $f_\lambda \partial^\lambda $ with $f_\lambda$ homogeneous of positive degree. \item The induced $K$-valued operators $\tilde{\xi}_\nu$ have the property that \[\tilde{\xi}_\nu (x^\nu) =1 \text{ and } \tilde{\xi}_\nu (x^\mu) = 0 \text{ for all monomials } \mu \neq \nu \text{ with } \mu_{d+1} \leq 1 . \] \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item We claim that the tuple $a_\lambda$ indexed by monomials of degree $\leq 2$ given by \[ a_{ e_1 } = d-1,\, a_{ 2e_1} =2x_1,\, a_{2e_j }= -2x_1, a_{e_1+e_j} = 2 x_j \text{ for } j \neq 1, \] and all other entries $0$, gives a relation between the columns of the second Jacobi-Taylor matrix. From this relation, the corresponding differential operator arises via Corollary~\ref{JacobiTayloroperators}. To prove the claim we have to establish the relations \[\sum_\lambda a_\lambda \frac{\partial^{\lambda - \mu } }{ (\lambda - \mu)!}(x_1^2 + \cdots + x_{d+1}^2) =0\] in $R$ for all $\mu$ of degree $\leq 1$. 
For $\mu =0$ we get \[ \begin{aligned} \sum_\lambda a_\lambda \frac{\partial^{\lambda } }{ \lambda !}(f) & = (d-1) \partial_1 (f) + 2x_1 \frac{ \partial_1^2}{2} (f) - 2 x_1 \sum_{j \neq 1} \frac{\partial_j^2 }{2} (f) \\ & = (d-1) 2x_1 + 2x_1 - \sum_{j \neq 1} 2x_1 \\ & = 0 , \end{aligned}\] for $\mu =e_1$ we get \[ \begin{aligned} \sum_\lambda a_\lambda \frac{\partial^{\lambda - e_1 } }{ ( \lambda -e_1) !}(f) & = (d-1) f + 2x_1 \partial_1 (f) + 2 \sum_{j \neq 1} x_j \partial_j (f) \\ & = (d-1) f + 4x_1^2 + \sum_{j \neq 1} 4x_j^2 = (d+3) f \\ & = 0 , \end{aligned}\] and for $\mu =e_k$, $k \neq 1$, we get \[ \begin{aligned} \sum_\lambda a_\lambda \frac{\partial^{\lambda -e_k } }{ ( \lambda-e_k) !}(f) & = - 2x_1 \partial_k (f) + 2 x_k \partial_1 (f) \\ & = 0 .\end{aligned}\] \item This is a direct computation. \item A direct computation using $ \partial^\nu \circ x_1 \partial_j^2 = x_1 \partial^{\nu+2e_j} + \nu_1 \partial^{\nu-e_1+2e_j} $ gives \[ \begin{aligned} \partial^\nu \circ \delta_1 & = { \partial^\nu \left( (d-1) \partial_{1} +x_1 \partial_{1}^2 - \sum _{j \neq 1} x_1 \partial_{j}^2 + 2 \sum_{j \neq 1} x_j \partial_{1} \partial_{j} \right) } \\ & = (d-1) \partial^{\nu+e_1} + \nu_1 \partial^{\nu+e_1} - \nu_1 \sum_{j \neq 1} \partial^{\nu -e_1+2e_j } +2 \sum_{j \neq 1} \nu_j \partial^{\nu+e_1} + \theta \\ & = \left( d-1+ \nu_1 +2 \sum_{j \neq 1} \nu_j \right) \partial^{\nu+e_1} - \nu_1 \sum_{j \neq 1} \partial^{\nu -e_1+2e_j } + \theta . \end{aligned} \] \item We do induction on $n$. For $n=0$ the statements are clear and for $n=1$ the operators $\delta_1, \ldots, \delta_{d+1}$ have the required properties. We construct the operators $\xi_\nu$ inductively using compositions of the $\delta_i$. So assume that we have already constructed the operators $\xi_\nu$ for all $\nu$ of degree $n$. With the help of (3) we get (with some $\theta$ as in (3)) { \[ \begin{aligned} & \frac{ 1 }{ \nu! } \partial^\nu \circ \delta_1 - \frac{ \nu_1 }{ (\nu_2+1) \nu!
} \partial^{\nu -e_1+e_2} \circ \delta_2 \\ & = \frac{ d-1+ \nu_1 + 2 \sum_{j \neq 1} \nu_j }{ \nu! } \partial^{\nu+e_1} - \frac{ \nu_1 }{ \nu! } \sum_{j \neq 1} \partial^ {\nu-e_1 +2e_j} \\ & \qquad - \frac{ \nu_1 (d-1+ \nu_2 +1 + 2 \sum_{j \neq 2} \nu_j -2 ) }{ (\nu_2+1) \nu! } \partial^{\nu-e_1+2e_2} \\ & \qquad + \frac{ \nu_1 }{ (\nu_2+1) \nu! }(\nu_2+1) \sum_{j \neq 2} \partial^{\nu-e_1 +2e_j} +\theta \\ & = \frac{ d-1+ \nu_1 + 2 \sum_{j \neq 1} \nu_j }{ \nu! } \partial^{\nu+e_1} - \frac{ \nu_1 (d-2+ \nu_2 + 2 \sum_{j \neq 2} \nu_j ) }{ (\nu_2+1) \nu! }\partial^{\nu-e_1+2e_2} \\ & \qquad - \frac{ \nu_1 }{ \nu! } \partial^{\nu-e_1+2e_2} + \frac{ \nu_1 }{ \nu! } \partial^{\nu-e_1 +2e_1} +\theta \\ & = \frac{ d-1+ 2 \sum_{j} \nu_j }{ \nu! } \partial^{\nu+e_1} - \frac{ \nu_1 (d-1+ 2 \sum_{j } \nu_j ) }{ (\nu_2+1) \nu! }\partial^{\nu-e_1+2e_2} +\theta \\ & = \left( d-1+ 2 \sum_{j} \nu_j \right) \left( \frac{ \nu_1 +1 }{ (\nu+e_1)! } \partial^{\nu+e_1} - \frac{ \nu_2+2 }{ (\nu-e_1+2e_2)! } \partial^{\nu-e_1+2e_2} \right) +\theta . \end{aligned}\]} From this we get operators \[ \begin{aligned} \xi_\nu \circ \delta_1 &- \xi_ { \nu-e_1 + e_2 } \circ \delta_2 \\ & = \left( \frac{ \partial^\nu }{ \nu! } +\zeta_\nu +\theta_\nu \right) \circ \delta_1 - \left(\frac{ \partial^{\nu-e_1 + e_2 } } { ( \nu-e_1 + e_2 )! } +\zeta_{\nu-e_1 + e_2 } +\theta_{\nu-e_1 + e_2 } \right) \circ \delta_2 \\ & = a \partial^{\nu+e_1} +b \partial^{\nu-e_1+2e_2} +\zeta+\theta \end{aligned}\] with certain coefficients $a,b \neq 0$. Summing up such operators we get for each $\mu$ of degree $n+1$ an operator of the form \[ \frac{ \partial^\mu}{\mu!} + c \partial^\lambda +\zeta+\theta \] where $\lambda_j \leq 1$ for $j=2, \ldots, d+1$ (we shift as much as possible to the first exponent). If, say, $\lambda_2=1$, then $\xi_{\lambda-e_2} \circ \delta_2$ shows the existence of an operator of the required form: the summand $\zeta_{\lambda-e_2}$ contributes the new summand of this kind; this does not work for $\lambda_{d+1}=1$.
So assume that $ \lambda_2=\cdots= \lambda_{d}=0$ and that either $\lambda_{d+1} =0$ and $\lambda_1=n+1$, or $\lambda_{d+1} =1$ and $\lambda_1=n$. We have operators of the form $\partial_1^{n+1} +c \partial_{d+1}^{n+1} +\zeta+\theta$ or $\partial_1^{n+1} + c \partial_1 \partial_{d+1}^{n}+\zeta+\theta$ or $\partial_1^{n}\partial_{d+1} + c \partial_1 \partial_{d+1}^{n}+\zeta+\theta$ or $\partial_1^{n} \partial_{d+1} + c \partial_{d+1}^{n+1}+\zeta+\theta$. For $n \geq 2$ the second summand can be moved into $\zeta$; for $n=1$ the operators can be written down directly. \item This follows from (4). Indeed, the operator $\theta$ induces the zero operator after base change to $K$, and the operator $\zeta$ annihilates all monomials $x^\mu$ with ${\mu_{d+1} \leq 1}$.\qedhere \end{enumerate} \end{proof} The previous lemma shows the existence of many unitary differential operators on a quadric. In order to get an upper bound for the differential signature we apply the methods from Subsection~\ref{graded case}, in particular Corollary~\ref{gradedhypersurfacesymdersignature} and Remark~\ref{Symsyzcomputation}. Note that in the quadric case the bundle $\operatorname{Syz} ( \partial_1f, \ldots, \partial_{d+1} f)$ is (up to the scalar $2$) just the syzygy bundle $\operatorname{Syz} (x_1, \ldots,x_{d+1})$ on $Q_d=\operatorname{Proj} R$, which is the restriction of the cotangent bundle on $\mathbb{P}^d$. \begin{lemma} \label{quadricsymmetricsection} Let $K$ be an algebraically closed field of characteristic $0$ and let $f$ be an irreducible quadric in $d+1$ variables, $d \geq 2$. Then $\operatorname{Sym} ^q(\operatorname{Syz} (x_1, \ldots, x_{d+1} )) (m )$ on $Q_d = \operatorname{Proj} K[x_1, \ldots,x_{d+1}]/(f)$ has no nontrivial section for $m < \frac{3}{2}q$. \end{lemma} \begin{proof} We do induction on the dimension $d$. For $d=2$, the quadric $Q$ is a projective line $\mathbb{P}^1$ as an abstract variety, but embedded as a conic.
Let $\mathcal L$ be the unique ample line bundle of degree $1$ on $Q$, so that ${\mathcal O}_Q(1) \cong \mathcal L^2$. It is known that $\operatorname{Syz} (x_1,x_2, x_3)$ splits as $\mathcal L^{-3} \oplus \mathcal L^{-3}$ on $Q$. Hence $\operatorname{Sym} ^q( \operatorname{Syz} (x_1,x_2, x_3)) \cong \bigoplus_{q+1} \mathcal L^{-3q}$. So \[ \operatorname{Sym} ^q( \operatorname{Syz} (x_1,x_2, x_3)) (m) \cong \bigoplus_{q+1} \mathcal L^{-3q} (m) \cong \bigoplus_{q+1} \mathcal L^{-3q+2m} \] has no nontrivial section for $m < \frac{3}{2}q$. So suppose now that $d \geq 3$ and that the statement is true for smaller $d$. A generic hyperplane section of a smooth quadric is again a smooth quadric. The restriction of the syzygy bundle $\mathcal F_{d} = \operatorname{Syz} (x_1, \ldots, x_{d+1} )$ on $Q_d$ to $Q_{d-1}$ (say, given by $x_{d+1}=0$) is isomorphic to \[\mathcal F_{d} |_{Q_{d-1} }\cong \mathcal F_{d-1} \oplus {\mathcal O}_{Q_{d-1} } (-1) . \] Therefore, the restriction of the symmetric powers of $\mathcal F_d$, which are the symmetric powers of the restriction, $\operatorname{Sym} ^q ( \mathcal F_{d-1} \oplus {\mathcal O}_{Q_{d-1} } (-1) )$, is \[ \operatorname{Sym} ^q (\mathcal F_{d-1} ) \oplus \operatorname{Sym} ^{q-1} (\mathcal F_{d-1} ) (-1) \oplus \operatorname{Sym} ^{q-2} (\mathcal F_{d-1} ) (-2) \oplus \cdots \oplus {\mathcal O}_{Q_{d-1} } (-q) . \] By the induction hypothesis, we have information about the global sections of the summand on the left, but not about the other summands. This decomposition is compatible with the decomposition of $\operatorname{Sym} ^q\( \mathcal{O}_{Q_d } (-1)^{\oplus d+1} \)$ coming from $ \mathcal{O}_{Q_d }^{\oplus d+1} \cong ( \mathcal{O}_{Q_{d} }^{\oplus d} ) \oplus \mathcal{O}_{Q_d} $. 
Therefore, if a section of $\operatorname{Sym} ^q(\mathcal F_d) (m)$ on $Q_d$ is given as a tuple $(\alpha_\nu)$ in the kernel of the map $ \bigoplus \mathcal{O}_{Q_d}(m -q) \rightarrow \bigoplus \mathcal{O}_{Q_d}(m -q+1) $, then its restriction to $Q_{d-1}=V_+(X_{d+1})$ is directly given with respect to the decomposition $\bigoplus_{k =0}^q \operatorname{Sym} ^{q-k} ( \mathcal F_{d-1}) (-k) $ as the family of the kernel elements of \[ \operatorname{Sym} ^{q-k} \(\bigoplus_d \mathcal{O}_{Q_{d-1} }(-1)\) \rightarrow \operatorname{Sym} ^{q-k-1} \(\bigoplus_d \mathcal{O}_{Q_{d-1} }(-1) \) .\] So assume now that there is a nonzero section of $\operatorname{Sym} ^q(\mathcal F_{d}) (m)$ with $m <\frac{3}{2}q$ given by a tuple $(\alpha_\nu)$, which are homogeneous elements of $K[x_1, \ldots,x_{d+1}]/(f)$ of degree $ m -q $. We look at linear coordinate changes given by an invertible $(d+1) \times (d+1)$-matrix $M$ over $K$ (or field extensions of it) and giving rise to the commutative diagram \[ \xymatrixcolsep{5pc}\xymatrix{ \mathcal{O}_{Q_{d} }(-1)^{\oplus d+1} \ar[r]^-{x_1, \ldots ,x_{d+1} }\ar[d]^-M & \mathcal{O}_{Q_{d} } \ar[d]^-{=} \\ \mathcal{O}_{Q_{d} }(-1)^{\oplus d+1} \ar[r]^-{ y_1, \ldots, y_{d+1} } & \mathcal{O}_{Q_{d} }, } \] and to \[ \xymatrix{ \operatorname{Sym} ^q ( \mathcal{O}_{Q_{d} }(-1)^{\oplus d+1} ) \ar[r] \ar[d]^{\operatorname{Sym} ^q(M) } & \operatorname{Sym} ^{q-1} ( \mathcal{O}_{Q_{d} }(-1)^{\oplus d+1} ) \ar[d]^{ \operatorname{Sym} ^{q-1}(M) } \\ \operatorname{Sym} ^q ( \mathcal{O}_{Q_{d} }(-1)^{\oplus d+1} ) \ar[r] & \operatorname{Sym} ^{q-1} ( \mathcal{O}_{Q_{d} }(-1)^{\oplus d+1} ), } \] and also $m$-twists thereof. Here $\operatorname{Sym} ^q(M) $ is the $q$th symmetric power of $M$. Now, we look at the field extension $K \subseteq K'=K(t_{ij})$ (or its algebraic closure) with new algebraically independent elements $t_{ij}$, $1 \leq i,j \leq d+1$, and we consider the matrix $ M=(t_{ij} )$ which gives corresponding diagrams over $K'$. 
Corollary~\ref{matrixtranscendent} below applied to the vector space of forms of degree $m-q$ shows that in $\operatorname{Sym} ^q(M) (\alpha_\nu) $ all entries are nonzero. As the transformed representation is as good as the starting one, we may assume that all entries of $\alpha_\nu$ are nonzero. Now we restrict to $Q_{d-1}=V_+(L)$ for a linear form $L$. As the polynomials $\alpha_\nu$ only have finitely many linear factors altogether (since $K[x_1, \ldots, x_{d+1}]/(f) $ is factorial for $d \geq 4$; for $d=3$ the argument is slightly more involved), we find $L$ such that the restrictions of all $\alpha_\nu$ to $V_+(L)$ are nonzero. This contradicts the induction hypothesis. \end{proof} \begin{lemma} \label{matrixsymmetric} Let $K$ be a field of characteristic $0$ and let $ M = [t_{ij}]$ be an $n \times n$ matrix of indeterminates. Then the entries in the symmetric powers $\operatorname{Sym}^{ q } \left( M \right) $ are linearly independent over $K$. \end{lemma} \begin{proof} The symmetric power $\operatorname{Sym}^{ q } \left( M \right)$ of the matrix describes the induced map on the polynomial ring $K(t_{ij})[x_1 , \ldots , x_n]$ in degree $q$ given by the linear map $ x_i \mapsto \sum_{j = 1}^n t_{ji} x_j$. The entry in the $\mu$th row and the $\nu$th column is the coefficient of $x^\mu$ of \[ \left( t_{11}x_1 + \cdots + t_{n1}x_n \right)^{\nu_1} \cdots \left( t_{1n}x_1 + \cdots + t_{nn} x_n \right)^{\nu_n} \, . \] The $i$th factor is \[ \sum_{ \mondeg {\lambda_i } = \nu_i} { { \nu_i } \choose \lambda_i} \cdot t_i^{\lambda_i} x^{\lambda_i} . \] To determine the coefficient of $x^\mu$ in the product, we have to consider the products of the form \[ t_1^{\lambda_1} x^{\lambda_1} \cdots t_n^{\lambda_n} x^{\lambda_n} = t_1^{ \lambda_1} \cdots t_n^{\lambda_n} x^{ \lambda_1 + \cdots + \lambda_n} \] with $\lambda_1 + \cdots + \lambda_n = \mu $.
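The bookkeeping here, namely that each monomial $t_1^{\lambda_1} \cdots t_n^{\lambda_n}$ determines $\mu$ and $\nu$ and hence can occur in only one entry of $\operatorname{Sym}^q(M)$, can be verified directly for small $n$ and $q$; the following sketch (ad hoc dict-based arithmetic, not tied to any library) computes the entries as polynomials in the $t_{ij}$ and checks that their supports are pairwise disjoint.

```python
from itertools import product

def sym_entries(n, q):
    # entries of Sym^q(M) for M = (t_ij): the ((mu, nu)) entry is the coefficient
    # of x^mu in prod_i (t_1i x_1 + ... + t_ni x_n)^{nu_i}; values are polynomials
    # in the t_ij as {t-exponent tuple (row-major, length n*n): coefficient}
    def mul(p, q2):
        r = {}
        for (xe1, te1), c1 in p.items():
            for (xe2, te2), c2 in q2.items():
                k = (tuple(a + b for a, b in zip(xe1, xe2)),
                     tuple(a + b for a, b in zip(te1, te2)))
                r[k] = r.get(k, 0) + c1 * c2
        return r
    zero_x = (0,) * n
    zero_t = (0,) * (n * n)
    def linform(i):                          # the image of x_i: sum_j t_ji x_j
        r = {}
        for j in range(n):
            xe = list(zero_x); xe[j] = 1
            te = list(zero_t); te[j * n + i] = 1
            r[(tuple(xe), tuple(te))] = 1
        return r
    entries = {}
    for nu in product(range(q + 1), repeat=n):
        if sum(nu) != q:
            continue
        img = {(zero_x, zero_t): 1}
        for i in range(n):
            for _ in range(nu[i]):
                img = mul(img, linform(i))
        for (xe, te), c in img.items():
            d = entries.setdefault((xe, nu), {})
            d[te] = d.get(te, 0) + c
    return entries

def supports_disjoint(n, q):
    # each t-monomial may appear in at most one entry; this forces the
    # entries to be linearly independent over K
    owner = {}
    for key, poly in sym_entries(n, q).items():
        for te in poly:
            if te in owner and owner[te] != key:
                return False
            owner[te] = key
    return True

assert supports_disjoint(2, 2)
assert supports_disjoint(2, 3)
assert supports_disjoint(3, 2)
```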
Since the $t_{ij}$ are variables, we get from the monomial $t_1^{ \lambda_1} \cdots t_n^{\lambda_n} $ the multi-tuple $(\lambda_1 , \ldots , \lambda_n)$. This determines $\mu$ as the sum and it determines $\nu$ via $\nu_i =\mondeg {\lambda_i }$. This means that each $t_1^{ \lambda_1} \cdots t_n^{\lambda_n} $ occurs only in one entry of the symmetric power matrix. Since the binomial coefficients are not $0$ in characteristic zero, the entries are linearly independent. \end{proof} \begin{corollary} \label{matrixtranscendent} Let $V$ be a finite dimensional $K$-vector space, let $t_{ij}$, $1 \leq i,j \leq n$, be variables with corresponding field extension $K \subseteq K'=K(t_{ij})$ and let $M=[t_{ij}]$. Let $ \operatorname{Sym} ^q (M): V^{q+n-1 \choose n-1 } \otimes_K K' \rightarrow V^{ q +n-1\choose n-1 } \otimes_KK' $. Then every nonzero element in $ V^{ q +n-1\choose n-1 } $ is sent by this map to an element such that all its entries are nonzero. \end{corollary} \begin{proof} Let $V=K^m$ and let $\beta=(\alpha_\nu) =(\alpha_{\nu,j}) \neq 0$. Then $\alpha_{\nu,j} \neq 0 $ for some $\nu,j$. Assume that $(\operatorname{Sym} ^q(M) (\beta))_\mu = 0$. Writing $ \operatorname{Sym} ^q(M) = (c_{\mu , \nu} ) $, this means that $\sum_\nu c_{\mu, \nu} \alpha_\nu =0$ and in particular $\sum_\nu c_{\mu, \nu} \alpha_{\nu,j} =0$ for all $j=1, \ldots, m$. But this means that there exists a nontrivial $K$-linear relation between the entries in the $\mu$-row of $\operatorname{Sym} ^q(M)$ contradicting Lemma~\ref{matrixsymmetric}. \end{proof} \begin{theorem} \label{quadricsignature} Let $R=K[x_1, \ldots, x_{d+1}]/(x_1^2 + \cdots + x_{d+1}^2) $, $ d \geq 2$, over an algebraically closed field $K$ of characteristic $0$. Then the differential signature of $R$ is $\left( \frac{1}{2} \right)^{d-1}$. 
\end{theorem} \begin{proof} Lemma~\ref{quadriclemma}~(5) tells us that for every monomial $x^\nu$ of degree $\leq n$ with $\nu_{d+1} \leq 1$ there exists an operator $F_\nu$ of order $\leq 2n$, homogeneous of degree $-\deg (x^\nu)$, sending $x^\nu$ to a unit and sending the other such monomials to $0$. This means that these operators form an independent system of unitary operators of order $\leq 2n$ and of cardinality { \[\sum_{j=0}^n \dim_K( R_j) = \dim_K R/\mathfrak{m}^{n+1} \sim 2 \frac{n^{d} }{d!}. \] } Therefore the ratio between the number of independent unitary operators of order up to $2n$ and the rank of the module of all operators of order up to $2n$ is asymptotically $\geq \frac{2n^d/d!}{ (2n)^d/d!} $, and the differential signature is $\geq (1/2)^{d-1}$. From Lemma~\ref{quadricsymmetricsection} and Corollary~\ref{gradedhypersurfacesymdersignature} we get (with $e=2$ and $\alpha =1/2$) $ \dm{K}(R) \leq (1/2)^{d-1} $. \end{proof} \begin{remark} For $d=1$ the equation is $x^2+y^2=0$. In this case the situation is completely different. On one hand, there are no unitary operators besides the identity at all (the operators constructed in Lemma~\ref{quadriclemma} exist, but are not unitary). On the other hand, there are many global sections of $\operatorname{Sym} ^q (\operatorname{Syz} (x,y) ) \cong \operatorname{Sym} ^q ( \mathcal{O} (-2)) $ of low degree. \end{remark} \begin{remark} The operators $\xi_\nu$ of order $\leq 2n$ from Lemma~\ref{quadriclemma} yield sections in $\operatorname{Sym} ^{2n} (\operatorname{Syz} (x_1, \ldots, x_{d+1} )) (3n) $ on $Q_d $, hence Lemma~\ref{quadricsymmetricsection} is best possible. For example, $\delta_1$ yields a section in $\operatorname{Sym} ^{2} (\operatorname{Syz} (x_1, \ldots, x_{d+1} )) (3) $ given by variables. \end{remark} \begin{remark} With the methods of this section we can also compute the differential powers of the maximal ideal of a quadric; the result is \[ \mathfrak{m}\dif{2n-1}{K}= \mathfrak{m}\dif{2n}{K} = \mathfrak{m}^n .\] We restrict ourselves to monomials.
If $x^\nu \notin \mathfrak{m}^n$, then the degree of $x^\nu$ is $ \leq n-1$ and then there exists by Lemma~\ref{quadriclemma} a unitary operator of order $\leq 2n-2$ sending it to a unit. Hence $x^\nu \notin \mathfrak{m}\dif{2n-1}{}$. If $x^\nu \in \mathfrak{m}^n$, then its degree is $\geq n$. Lemma~\ref{quadricsymmetricsection} and the proof of Theorem~\ref{quadricsignature} shows that there does not exist an operator of order $< 2n$ sending $x^\nu$ to a unit. Hence $x^\nu \in \mathfrak{m}\dif{2n}{K}$. \end{remark} \begin{remark} The limit of the $F$-signature of the quadrics $R_{d,p}=\mathbb{F}_p[x_1, \ldots, x_{d+1}]/(x_1^2+ \cdots + x_{d+1}^2)$ as $p$ goes to infinity can be computed via \cite[Example~2.3]{WatanabeYoshida} from results of Gessel and Monsky \cite[Theorem~3.8]{GesselMonsky}. The result is that the limit is one minus the coefficient of $z^d$ in the power series expansion of $\operatorname{tan}(z) + \operatorname{sec}(z)$. This gives the values \ \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $ d $ & 2 & 3 &4&5&6&7 \\ \hline $ \lim_{p \rightarrow \infty} s(R_{d,p} )$ & $\frac{ 1 }{ 2 }$ & $ \frac{ 2 }{ 3 }$ &$ \frac{ 19 }{ 24 }$ &$\frac{ 13 }{ 15 }$ &$ \frac{ 659 }{ 720 }$ & $\frac{ 298 }{ 315 }$ \\ \hline \end{tabular} \end{center} \ \noindent which look much wilder than the differential signature. \end{remark} \begin{remark} For $d=2c+1$ odd, and $K$ algebraically closed of characteristic zero, the quadric hypersurface $R=K[x_1, \ldots, x_{d+1}]/(x_1^2+ \cdots + x_{d+1}^2) \cong K[y_1, \ldots, y_{d+1}]/(y_1 y_2+ y_3 y_4 + \cdots + y_d y_{d+1})$ can be realized as a ring of invariants of an action of $\mathrm{SL}_{c}(K)$: if $V$ is the standard representation, then the invariant ring of the representation $V^{\oplus c-1} \oplus V^*$ is isomorphic to $R$ \cite[\S6,\S14]{Weyl}. 
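The values in the table of the previous remark can be reproduced directly: the Taylor coefficient of $z^d$ in $\tan(z)+\sec(z)$ is $E_d/d!$, where $E_d$ is the $d$th zigzag (up/down) number, computable by the Seidel--Entringer--Arnold boustrophedon recurrence. The short check below uses a standard implementation of that recurrence (not taken from the cited sources).

```python
from fractions import Fraction
from math import factorial

def zigzag(m):
    # Entringer numbers E(n,k) = E(n,k-1) + E(n-1,n-k), E(0,0) = 1, E(n,0) = 0;
    # the zigzag number E_n = E(n,n), and tan(z) + sec(z) = sum_n E_n z^n / n!
    T = [[1]]
    for n in range(1, m + 1):
        row = [0]
        for k in range(1, n + 1):
            row.append(row[k - 1] + T[n - 1][n - k])
        T.append(row)
    return [T[n][n] for n in range(m + 1)]

E = zigzag(7)                        # [1, 1, 1, 2, 5, 16, 61, 272]
limits = {d: 1 - Fraction(E[d], factorial(d)) for d in range(2, 8)}

expected = {2: Fraction(1, 2), 3: Fraction(2, 3), 4: Fraction(19, 24),
            5: Fraction(13, 15), 6: Fraction(659, 720), 7: Fraction(298, 315)}
assert E == [1, 1, 1, 2, 5, 16, 61, 272]
assert limits == expected            # matches the tabulated limits of F-signatures
```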
However, the inclusion of this invariant ring into the ambient polynomial ring is not differentially extensible \cite[Theorem~11.15]{Schwarz}, so the methods of the previous subsections of this section do not apply. \end{remark} \section{Comparison with differential symmetric signature}\label{Comparison} In this section we compare the differential signature and the differential symmetric signature recently introduced by Caminata and the first author \cite{SymSig,BCHigh}. Before recalling the definition of this signature, we make a few observations. We have the short exact sequence \[ 0 \longrightarrow \Delta^{n}/\Delta^{n+1} \longrightarrow P^n_{R|A} = (R \otimes_A R)/\Delta^{n+1} \longrightarrow P^{n-1}_{R|A} = (R \otimes_A R)/\Delta^{n} \longrightarrow 0 \] of $R$-modules. The direct sum $ \operatorname{gr}_{\bullet}(\ModDif{\bullet}{R}{A})=\bigoplus _{n \in \mathbb{N}} \Delta^n/\Delta^{n+1}$ is the \emph{associated graded ring} (for the diagonal embedding). There is a surjective graded $R$-linear map \cite[16.3.1.1]{EGAIV}: \[ \bigoplus_{n \in \mathbb{N}} \operatorname{Sym} ^n (\Omega_{R|A} ) \longrightarrow \bigoplus_{n \in \mathbb{N}} \Delta^{n}/\Delta^{n+1}. \] The algebra on the left is called the \emph{tangent algebra} as its spectrum gives the tangent scheme over $\operatorname{Spec} R$. This map is induced in degree one by the canonical isomorphism $\Omega_{R|A} \rightarrow \Delta/\Delta^2$. If $R$ is differentially smooth in the sense of \cite[D\'{e}finition~16.10.1]{EGAIV}, then by definition $\Omega_{R|A}$ is locally free and this canonical homomorphism is an isomorphism. In this case also the modules of principal parts are locally free. So in the affine smooth case we have a decomposition \[ P^n_{R|A} = \bigoplus_{k \leq n} \operatorname{Sym} ^k (\Omega_{R|A} ) .
\] The \emph{symmetric signature}\index{symmetric signature} \cite{SymSig,BCHigh} is defined as the limit (if it exists) for $n \rightarrow \infty$ of \[ \frac{ \mathrm{freerank} \left( \bigoplus_{k \leq n} \operatorname{Sym} ^k (\Omega_{R|A} )^{**} \right) }{ \operatorname{rank} \left( \bigoplus_{k \leq n} \operatorname{Sym} ^k (\Omega_{R|A} ) \right) } = \frac{ \mathrm{freerank} \left( \bigoplus_{k \leq n} \operatorname{Sym} ^k (\Omega_{R|A} )^{**} \right) }{ \binom{d+n}{n} } , \] where $\, ^{**}$ denotes the double dual functor $\operatorname{Hom}_R (\operatorname{Hom}_R (-,R),R)$. This gives the reflexive hull of the module, which is also the evaluation of the (sheaf) module over an open subset $U$ containing all points of codimension one. If $U$ is also smooth, and such subsets exist in the normal case, then there is an exact sequence \[ 0 \longrightarrow \operatorname{Sym} ^n (\Omega_{R|K})|_U \longrightarrow P^n_{R|K}|_U \longrightarrow P^{n-1}_{R|K}|_U \longrightarrow 0 \, \] of locally free sheaves, which does not split in general. We describe a situation where the module of principal parts on $U$ splits and is isomorphic to the direct sum of the symmetric powers of the K\"ahler differentials. \begin{theorem} \label{differentialsymmetriccompare} Let $S=K[x_1, \ldots, x_d] $ be a polynomial ring and $G$ a finite group acting on $S$, with order coprime to the characteristic of the field $K$, with invariant ring $R=S^G$. Suppose that $G$ contains no elements that fix a hyperplane in the space of one-forms $[S]_1$. Let $U$ denote the smooth locus of $\operatorname{Spec} R$. 
Then there exists the following commutative diagram \[ \xymatrix{ \operatorname{Sym} ^n (\Omega_{R|K}) \ar[r]\ar[dr] & \Delta^{n}/\Delta^{n+1} \ar[r]\ar[d] & \ModDif{n}{R}{K} \ar[r]\ar[d] & \ModDif{n-1}{R}{K} \ar[r]\ar[d] & 0 \\ 0 \ar[r] & \operatorname{Sym} ^n (\Omega_{R|K}) (U) \ar[r]\ar[d]^-{\cong} & \ModDif{n}{R}{K}(U) \ar[r]\ar[d]^-{\cong} \ar@/_/@{-->}[l] & \ModDif{n-1}{R}{K}(U) \ar[r]\ar[d]^-{\cong}\ar@/_/@{-->}[l] & 0\\ 0 \ar[r] & (\operatorname{Sym} ^n (\Omega_{S|K}))^G \ar[r] & (\ModDif{n}{S}{K})^G \ar[r]\ar@/_/@{-->}[l] & (\ModDif{n-1}{S}{K})^G \ar[r]\ar@/_/@{-->}[l] & 0 }\] where the dotted arrows indicate splittings. The free ranks of $P^n_{R|K} $ and of $(P^{n}_{ S|K} )^G$ coincide, and they equal the sum of the free ranks of $\operatorname{Sym} ^k (\Omega_{R|K}) (U)$ for $k \leq n$. In particular, the differential signature equals the symmetric signature, namely $1/ |G|$. \end{theorem} \begin{proof} The first row exists for every $K$-algebra $R$. The first downarrows on the right are the restrictions for the open subset $U$. The exact row in the middle comes from the smoothness of $U$ (without the splittings). The first downarrows on the left are induced by the exactness we have so far. The smooth locus $U$ contains, by the smallness assumption on the group action, all points of codimension one, and the same is true for its preimage $V \subseteq {\mathbb A}^d_K$. Hence we have natural maps $ \ModDif{n}{R}{K}(U) \rightarrow \ModDif{n}{S}{K}(V) = \ModDif{n}{S}{K}$ where the identity comes from the freeness of $ \ModDif{n}{S}{K} $ and the codimension property of $V$. The image lies in the invariant subspace of the induced action on $P^n_{S|K}$. This gives the second downarrows. For the polynomial ring we have $P^n_{S|K} = \bigoplus_{k \leq n} \operatorname{Sym} ^k(\Omega_{S|K} ) $ and hence the invariants of the induced action on $P^n_{S|K}$ are $(P^n_{S|K})^G = \bigoplus_{k \leq n} ( \operatorname{Sym} ^k(\Omega_{S|K} ))^G $.
Hence the splitting in the last row is clear. The induced map $V \rightarrow U=V/G$ is \'{e}tale, hence the second downarrows are isomorphisms, as they are locally isomorphisms on the affine smooth (invariant) subsets. Therefore we get the splitting in the second rows. The differential operators on $R$ correspond to the invariant differential operators on $S$. This is true for the quotient fields $Q(R) \subseteq Q(S)$ (which is a Galois extension) and so it is also true for the rings as every operator on $R$ has an extension to $S$ by Proposition \ref{prop-extensible-finite} which must be the invariant one. A free summand of $P^n_{R|K}$ is the same as a surjection $P^n_{R|K} \rightarrow R $ which gives also a surjection $P^n_{R|K} (U) \rightarrow R$. On the other hand, such a map corresponds to a differential operator $\delta$ on $U$ and on $R$. Let $\tilde{\delta}$ be the corresponding invariant differential operator on $S$. Suppose now that $P^n_{R|K} (U) \cong (P^n_{S|K} )^G \rightarrow R$ is surjective. Then also $\tilde{\delta}: P^n_{S|K} \rightarrow S$ is surjective and so there exists $ f \in S$ such that $\tilde{\delta} (f) =1$. Then by the invariance of the operator $\tilde{\delta}$, $\tilde{\delta} \left(\sum_{\varphi \in G} \varphi(f) \right) = |G| $, which is a unit, and since $\sum_{\varphi \in G} \varphi(f) \in R$, also the operator $\delta$ is unitary. Hence $\delta $ defines a surjection $P^n_{R|K} \rightarrow R$ by Lemma~\ref{unitary}. This argument works also for a family of unitary operators and shows that the free rank of $P^n_{R|K}$ and of $P^n_{R|K} (U) = (P^{n}_{ S|K} )^G $ coincide. Therefore the free rank of $P^n_{R|K}$ equals by the splitting of the second row the sum of the free ranks of $\operatorname{Sym} ^k (\Omega_{R|K}) (U)$ for $k \leq n$. 
By the codimension property of $U$, $\operatorname{Sym} ^k (\Omega_{R|K}) (U)$ is the reflexive hull of $ \operatorname{Sym} ^k (\Omega_{R|K}) $, and the sum of these free ranks enters the numerator in the definition of the symmetric signature. Hence the signatures must be the same in the current setting. The symmetric signature was already computed in \cite[Theorem 2.8]{BCHigh} and the differential signature in Theorem~\ref{group-formula}. \end{proof} \begin{example} The first downarrows in Theorem \ref{differentialsymmetriccompare} are not isomorphisms, not even for $n=1$. We consider the invariant ring $R=K[x,y,z]/(xy -z^2)\cong K[s^2,t^2,st] \subseteq K[s,t]$ with the group action of $\mathbb{Z}/2$ given by sending the variables to their negatives. The module $\Omega_{R|K}$ is generated by $dx=2sds$, $dy=2tdt$ and $dz=sdt+tds$, whereas $(\Omega_{S|K})^G$ also contains $sdt$ and $tds$. \end{example} Theorem~\ref{differentialsymmetriccompare} cannot be extended to more general situations. \begin{example} For the toric (and determinantal) hypersurface given by the equation $ux-vy $, the symmetric signature is $0$ \cite[Example 3.9]{BCHigh}, but the differential signature is $1/4$ by Example~\ref{signaturesegre}. \end{example} We expect that, at least under some conditions, the symmetric signature gives a lower bound for the differential signature. The following considerations deal with this point. See also Example~\ref{palmostfermat} below for what can go wrong. \begin{lemma} Let $(R,\mathfrak{m},\Bbbk)$ be a local $K$-algebra essentially of finite type over a field $K$. Suppose that $R$ is an isolated singularity and that $\operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R)$ has depth $\geq 3$ for all $n$. Then the natural map $D^n_{R|K} \rightarrow \operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R)$ is surjective. \end{lemma} \begin{proof} Let $U=\operatorname{Spec} R \setminus \{ {\mathfrak m} \}$.
We prove by induction the statement that the map is surjective and that $H^1(U, D^n_{R|K} )=0$. For $n=1$ the statement is clear since $D^1_{R|K}= R \oplus \operatorname{Der}_{R|K}$ and $H^1(U,R)=H^2_{\mathfrak m}(R)=0$ due to the depth assumption (for $n=0$). Let now the statement be known for $n-1$ and look at the commutative diagram \[ \xymatrix{ 0 \ar[r] & D^{n-1}_{R|K} \ar[r]\ar[d]^-{\cong} & D^{n}_{R|K} \ar[r]\ar[d]^-{\cong} & \operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R) \ar[d]^-{\cong} & \\ 0 \ar[r] & D^{n-1}_{R|K}(U) \ar[r] & D^{n}_{R|K}(U) \ar[r] & \operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R) (U)\ar[r] & H^1(U,D^{n-1}_{R|K}). } \] The downarrow maps are isomorphisms because of reflexivity. On the smooth locus $U$ we have a short exact sequence of sheaves and so the second row is exact. By the induction hypothesis, $H^1(U, D^{n-1}_{R|K} )=0$, and hence the map is surjective. The second statement follows from \[ \cdots \rightarrow H^1(U, D^{n-1}_{R|K}) \rightarrow H^1(U, D^n_{R|K}) \rightarrow H^1(U,\operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R) ) \rightarrow \cdots \, \] and the depth assumption. \end{proof} \begin{remark} There are many results on depth properties for $\operatorname{Sym} ^n(\Omega_{R|K})$ and on conditions for $ \operatorname{Sym} ^n(\Omega_{R|K}) \rightarrow \Delta^n/\Delta^{n+1} $ to be a bijection in the literature. For instance, the module of K\"ahler differentials of a complete intersection ring has projective dimension $\leq 1$ and one can deduce that the depth of $\operatorname{Sym} ^n(\Omega_{R|K})$ is $ \geq \dim(R)-1$ \cite[Proposition 3 (3)]{Avramovcomplete}; see also \cite[Propositions~2.10 \&~3.4]{SimisUlrichVasconcelostangentalgebrastangentstar}. It is however more difficult to find depth conditions for $\operatorname{Hom}_R (\operatorname{Sym} ^n (\Omega_{R|K} ),R )$ and even for $\operatorname{Der}_{R|K} $.
If $\Omega_{R|K}$ itself is a maximal Cohen-Macaulay module, then for Gorenstein rings the dual is also a maximal Cohen-Macaulay module \cite[Proposition 3.3.3]{BrHe}. This can be applied to certain determinantal rings \cite[Proposition 14.7]{BrunsVetter}. In addition, the derivation module for Pl\"ucker algebras of Grassmannians $\neq G(2,4)$ has depth $\geq \dim(R) -2$ \cite[Proposition 3.4]{ChristophersenIlten}. It would be interesting to know whether these results extend to depth conditions on $\operatorname{Hom}_R (\operatorname{Sym} ^n (\Omega_{R|K} ) ,R)$. \end{remark} \begin{lemma} \label{differentialdualsymmetric} Let $(R,\mathfrak{m},\Bbbk)$ be local and essentially of finite type over a field $K$. Suppose that the natural maps $D^n_{R|K} \rightarrow \operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R)$ are surjective and that the free ranks of $ P^n_{R|K}$ and of $D^n_{R|K}$ coincide. Then the sum of the free ranks of $(\operatorname{Sym} ^k( \Omega_{R|K}))^{**}$ for $k \leq n$ is bounded above by the free rank of $ P^n_{R|K}$, and the symmetric signature is bounded above by the principal parts signature. \end{lemma} \begin{proof} Suppose by induction that we already have a free summand $N$ of $D^{n-1}_{R|K}$ of rank equal to $\sum_{k = 0}^{n-1} \mathrm{freerank} ( (\operatorname{Sym} ^k( \Omega_{R|K}))^{**} )$. By assumption, there exists a free summand $V$ of $P^{n-1}_{R|K}$ such that $N=\operatorname{Hom}_R(V,R)$. As $V$ is also a free summand of $P^{n}_{R|K}$, $N$ is also a free summand of $D^n_{R|K}$. Let $M$ be a free direct summand of $(\operatorname{Sym} ^n( \Omega_{R|K}))^{**} $. This defines a corresponding free direct summand of its dual, which is isomorphic to $(\operatorname{Sym} ^n( \Omega_{R|K}))^{*} $.
By assumption we have the short exact sequence \[ 0 \longrightarrow D^{n-1}_{R|K} \longrightarrow D^n_{R|K} \longrightarrow \operatorname{Hom}_R(\operatorname{Sym} ^n( \Omega_{R|K}),R) \longrightarrow 0 \] hence we get a free direct summand $M$ of $ D^n_{R|K}$. As the free summand $N$ of $D^n_{R|K}$ maps to $0$, we have $N \cap M= 0$. Hence $N\oplus M$ is a free summand of $D^n_{R|K}$. \end{proof} The following theorem says that a significant part of $\operatorname{Hom}_R ( \operatorname{Sym}_R^n( \Omega_{R|K} ),R)$ is always inside the image of the map from $D^n_{R|K}$. \begin{theorem} \label{compareoperatorcomposition} Let $K$ be a field, $R$ be a $K$-algebra, and let $\delta_1 , \ldots , \delta_n$ denote derivations. Then the composition $\delta_n \circ \cdots \circ \delta_1$ is mapped under the natural mapping \[ D^n_{R|K} \longrightarrow \operatorname{Hom}_R ( \operatorname{Sym}_R^n(\Omega_{R{{|K}} } ),R) \] to the image of the symmetric product $\delta_n \cdots \delta_1$ under the natural map \[ {\operatorname{Sym}_R^n( \operatorname{Der}_{R|K}) \cong \operatorname{Sym}^n_R (\operatorname{Hom}_R ( \Omega_{R{{|K}} },R)) \longrightarrow \operatorname{Hom}_R ( \operatorname{Sym}_R^n(\Omega_{R{{|K}} } ),R) } . \] \end{theorem} \begin{proof} The homomorphisms in $\operatorname{Hom}_R ( \operatorname{Sym}_R^n(\Omega_{R{{|K}} } ),R) $ are determined by their values on the symmetric products of the differential forms $df$, as these generate this module. Let $ f_1 , \ldots , f_n \in R$. The $df_i \in \Omega_{R|K} \cong \Delta/\Delta^2$ are $f_i \otimes 1 -1 \otimes f_i$ and their product $df_1 \cdots df_n$ is sent to $ \sum_{I \subseteq \{1 , \ldots , n\} } (-1)^ { \#( I ) } \left( \prod_{i \notin I} f_i \right) \otimes \left( \prod_{i \in I} f_i \right) $ in $P^n_{R|K}$. Under a differential operator $\eta$ this is sent to \[ \sum_{I \subseteq \{1, \ldots, n\} } (-1)^{ \#( I ) } \left( \prod_{i \notin I} f_i \right) \eta \left( \prod_{i \in I} f_i \right) .
\] In the case $\eta= \delta_n \circ \cdots \circ \delta_1$ we have for $I=\{i_1, \ldots, i_m\}$ \[ (\delta_n \circ \cdots \circ \delta_1) \left( \prod_{i \in I} {f_i} \right) = \left( \sum_{ \{1 , \ldots , n \} = A_1 \uplus \cdots \uplus A_m } \delta_{A_1} (f_{i_1}) \cdots \delta_{A_m} (f_{i_m}) \right) , \] where $\delta_{A_j}(f_{i_j})$ denotes the composition of the derivations given by $A_j$ in the given order applied to $f_{i_j}$ and where the sum runs over all ordered partitions of $\{1, \ldots, n\}$. Hence the evaluation yields \begin{eqnarray*} & \, & \sum_{I \subseteq \{1 , \ldots , n\} } (-1)^{ \#( I ) } \left( \prod_{i \notin I} f_i \right) (\delta_n \circ \cdots \circ \delta_1 ) \left( \prod_{i \in I} f_i \right) \\ & =& \sum_{I \subseteq \{1 , \ldots , n\} } (-1)^{ \#( I ) } \left( \prod_{i \notin I} f_i \right) \left( \sum_{ \{1 , \ldots , n \} = A_1 \uplus \cdots \uplus A_m } \delta_{A_1} (f_{i_1}) \cdots \delta_{A_m} (f_{i_m}) \right) \\ & =& \sum_{I \subseteq \{1, \ldots, n\} } (-1)^{ \#( I ) } \sum_{ \{1 , \ldots , n \} = B_1 \uplus \cdots \uplus B_n \text{ with } B_i = \emptyset \text{ for } i \notin I } \delta_{B_1} (f_{1}) \cdots \delta_{B_n} (f_{n}) \\ &= & \sum_{ \{1 , \ldots , n \} = B_1 \uplus \cdots \uplus B_n } \left( \sum_{I \subseteq I(B) } (-1)^{ \#( I ) } \right) \delta_{B_1} (f_{1}) \cdots \delta_{B_n} (f_{n}) , \end{eqnarray*} where here $I(B)$ denotes for an ordered partition $B=(B_1 , \ldots , B_n)$ the set of indices $i$ for which $B_i$ is empty. Note that in the first equation we can omit the summand corresponding to $ I = \emptyset$, since every derivation annihilates $1$. For $ I(B) \neq \emptyset$ the inner sum is $0$, and for $I(B) = \emptyset $ the inner sum is $1$. Hence only those partitions are relevant, where no subset is empty, thus all subsets contain just one element. 
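The resulting cancellation can be tested on concrete derivations of a polynomial ring. The following sketch checks the $n=2$ instance: the pairing of $\eta = \delta_2 \circ \delta_1$ with $df_1 df_2$ equals $\delta_1(f_1)\delta_2(f_2)+\delta_2(f_1)\delta_1(f_2)$. The derivations and ring elements chosen below (on $\mathbb{Q}[x,y]$, with noncommuting $\delta_1,\delta_2$) are ad hoc test data.

```python
# polynomials in x, y as {(i, j): coefficient}
def mul(p, q):
    r = {}
    for (a, b), c in p.items():
        for (u, v), w in q.items():
            r[(a + u, b + v)] = r.get((a + u, b + v), 0) + c * w
    return {e: c for e, c in r.items() if c != 0}

def add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
        if r[e] == 0:
            del r[e]
    return r

def dx(p):
    return {(a - 1, b): c * a for (a, b), c in p.items() if a > 0}

def dy(p):
    return {(a, b - 1): c * b for (a, b), c in p.items() if b > 0}

def derivation(ax, ay):                 # the derivation ax * d/dx + ay * d/dy
    return lambda p: add(mul(ax, dx(p)), mul(ay, dy(p)))

one = {(0, 0): 1}

# ad hoc, noncommuting test derivations and ring elements
d1 = derivation({(0, 1): 1}, {})        # y d/dx
d2 = derivation({(1, 0): 1}, one)       # x d/dx + d/dy
f1 = {(2, 0): 1, (0, 1): 1}             # x^2 + y
f2 = {(1, 1): 1, (0, 0): 3}             # xy + 3

eta = lambda p: d2(d1(p))               # delta_2 o delta_1

# pairing of eta with df_1 df_2, written out for n = 2;
# the summand for I = {} vanishes since eta(1) = 0
lhs = add(add(eta(mul(f1, f2)),
              {e: -c for e, c in mul(f1, eta(f2)).items()}),
          {e: -c for e, c in mul(f2, eta(f1)).items()})

# symmetrized product: sum over the two permutations
rhs = add(mul(d1(f1), d2(f2)), mul(d2(f1), d1(f2)))

assert lhs == rhs
```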
These correspond to the permutations on $ \{ 1 , \ldots , n \}$, so this is the same as \[ \sum_{\pi \in S_n} \delta_{\pi (1)}(f_1) \cdots \delta_{\pi (n)}(f_n). \] The symmetric product \[\delta_n \cdots \delta_1 \in \operatorname{Sym}_R^n( \operatorname{Der}_{R|K}) \cong \operatorname{Sym}^n_R (\operatorname{Hom}_R ( \Omega_{R|K},R)) \] is sent under the natural map $\operatorname{Sym}^n_R (\operatorname{Hom}_R ( \Omega_{R|K},R)) \rightarrow \operatorname{Hom}_R ( \operatorname{Sym}_R^n(\Omega_{R|K} ),R) $ to \[ \omega_1 \cdots \omega_n \longmapsto \sum_{\pi \in S_n} \delta_{\pi(1)} (\omega_1) \cdots \delta_{\pi(n)} (\omega_n) .\] For $ \omega_i = df_i $ this coincides with the above result. \end{proof} This theorem says that we have a commutative diagram \[ \xymatrix{ & \operatorname{Sym}^n(\operatorname{Der}_{R|K} ) \ar[d] \\ \mathscr{D}_n \ar[d]\ar[r] & \operatorname{im} \big(\operatorname{Sym}^n(\operatorname{Der}_{R|K}) \rightarrow \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R) \big) \ar[d] \\ D^n_{R|K} \ar[r] & \operatorname{Hom}_R(\operatorname{Sym} ^n(\Omega_{R|K}),R) }\] where the vertical maps in the second row are injective, $ \mathscr{D}_n$ denotes the submodule generated by the compositions of $n$ derivations as in Remark~\ref{rem:der-simple}, and the first horizontal map is surjective. \begin{example} \label{palmostfermat} Fix a prime number $p$, let $K$ be a field of characteristic $p$, and consider $f=x^p+y^{p+1}+z^{p+1}$ and the ring $R=K[x,y,z]/(f)$. As $f$ is irreducible, $R$ is a domain. The partial derivatives are $ \frac{\partial f}{\partial x} =0$, $ \frac{\partial f}{\partial y} =y^p$, and $ \frac{\partial f}{\partial z} =z^p$. Hence, on the singular locus $y$ and $z$ vanish, and then also $x$ has to vanish, so we have an isolated singularity and $R$ is a normal domain. Because of $x^p=-yy^p -zz^p$ we have $x^p \in (y^p,z^p)$, so $x$ is in the Frobenius closure of the ideal $(y,z)$, but $x \notin (y,z)$.
Hence $R$ is not $F$-pure and thus not strongly $F$-regular. Then, the $F$-signature of $R$ is $0$ \cite[Theorem~0.2]{AL}. We compute the other signatures considered in this paper. We first show that the differential signature is positive. As a consequence, Theorem~\ref{ThmFregPos} does not hold for non-$F$-pure rings. We have the following sandwich situation \[ K[y,z] \subseteq R \subseteq K[y^{1/p},z^{1/p}]\cong K[y,z] , \] where $R$ is a free module over $K[y,z]$ of rank $p$. In this situation it follows from Proposition~\ref{sandwichpositive} that $R$ has positive differential signature. The ratios start in characteristic $2$ with $1/1$, $2/3$, $4/6$, $7/10$, but we do not know the value of the signature. The module of K\"ahler differentials is given by the exact sequence \[0 \longrightarrow R \stackrel{(0,y^p,z^p)}{\longrightarrow} R^3 \longrightarrow \Omega_{R|K} \cong R(dx,dy,dz)/df \longrightarrow 0 .\] Hence \[\Omega_{R|K} \cong R \oplus R^2/(y^p,z^p) \cong R \oplus I , \] where $I=(y^p,z^p)$. The second isomorphism comes from the fact that $I$ is a parameter ideal. Hence the symmetric powers of the K\"ahler differentials themselves are \[\operatorname{Sym} ^n(\Omega_{R|K} ) \cong R \oplus I \oplus I^{\otimes 2} \oplus \cdots \oplus I^{\otimes n} , \] with just one free summand. The derivation module $\operatorname{Der}_{R|K} \cong \operatorname{Hom}_R (\Omega_{R|K},R)$ is free (so this is another example showing that for the Zariski-Lipman conjecture we need characteristic $0$; see also \cite[Section~7]{LipmanfreeDerivation}). A basis for the derivations is given by $\delta= \frac{ \partial}{\partial x}$ and $\epsilon = z^p \frac{ \partial}{\partial y}-y^p \frac{ \partial}{\partial z}$. The two derivations commute, and $\delta$ is a unitary derivation but $\epsilon$ is not. From that we get that $\operatorname{Sym} ^n(\operatorname{Der}_K(R,R)) \cong R^{n+1}$ with the basis $\delta^i\epsilon^j$, $i+j=n$.
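As a quick sanity check (an added verification, not needed for the argument), both $\delta$ and $\epsilon$ annihilate $f$, using the partial derivatives computed above, and hence descend to derivations on $R$:

```latex
\[
\delta(f) = \frac{\partial f}{\partial x} = 0, \qquad
\epsilon(f) = z^p\, \frac{\partial f}{\partial y} - y^p\, \frac{\partial f}{\partial z}
            = z^p \cdot y^{p} - y^p \cdot z^{p} = 0 .
\]
```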
Therefore also the double duals $(\operatorname{Sym} ^n(\Omega_{R|K} ))^{**} $ are free, and hence the symmetric signature is $1$, though $R$ is normal and not regular. To set up the Jacobi-Taylor matrices, only the following entries are relevant (and those with $z$ instead of $y$). \[ \frac{1}{p!} \left( \partial_x \right)^p (f) =1,\, \partial_y (f) = y^p ,\, \frac{1}{p!} \left( \partial_y \right)^{p} (f) =y,\, \frac{1}{(p+1)!} \left( \partial_y \right)^{p+1} (f) =1 . \] The unitary derivation $\delta$ sends the element $x$ to $1$. But for $x^2$, in characteristic $2$ we have to go up to order $8$ to find an operator sending $x^2$ to $1$. A computation with the Jacobi-Taylor matrices yields \[ a^2+b^3+yb^4+y^3a^2b^3 +y^3a^4+y^4a^4b+y^5a^4b^2+y^7a^6b+y^{9} a^8 . \] This operator is homogeneous of degree $-6$ and involves only partial derivatives with respect to $x$ and $y$. We claim that $\Delta^n/\Delta^{n+1} \cong \bigoplus_{\ell =0 }^n I^\ell$. This rests on the fact that the matrices $T_n$ in the sense of Remark~\ref{JacobiTaylorrelation} have the block matrix form \[ T_n = \begin{pmatrix} 0 & 0& 0 & \cdots \\ M_1 & 0 &0 & \cdots \\ 0 & M_2 &0 & \cdots \\ 0& 0 & M_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} ,\, \text{ where } M_\ell = \begin{pmatrix} y^p & 0& \cdots& 0 \\ z^p & y^p & 0 & \vdots \\ 0 & z^p & y^p & \vdots \\ \vdots & \ddots & \ddots & \vdots \\ 0& 0 & z^p & y^p \\ 0 & \cdots & 0 & z^p \end{pmatrix} \] with $\ell +1$ rows. The $T_n$ and hence the Jacobi-Taylor matrices define injective maps, and so $\Delta^n/\Delta^{n+1}$ is the cokernel of $T_n$. The cokernel of every matrix $M_\ell$ is $I^\ell =(y^{\ell p}, y^{(\ell -1)p } z^p, \ldots , z^{\ell p})$. The natural surjection \[ \operatorname{Sym} ^n( \Omega_{R|K}) \cong \bigoplus_{\ell =0}^n I^{\otimes \ell} \longrightarrow \Delta^n/\Delta^{n+1} \cong \bigoplus_{\ell =0 }^n I^\ell \] is naturally given by $ I^{\otimes \ell} \rightarrow I^\ell $.
From this it also follows that in the exact complex \[ 0 \longrightarrow D^{n-1}_{R|K} \longrightarrow D^n_{R|K} \longrightarrow \operatorname{Hom}_R (\Delta^n/\Delta^{n+1}, R ) = \operatorname{Hom}_R (\operatorname{Sym} ^n( \Omega_{R|K}) ,R ) \cong R^{n+1} \] the last map is not surjective. The relation $(1,0, \ldots ,0)$ for the rows of the matrix $T_n$ cannot, for $n \geq p$, be extended to a relation on $J_n^\text{tr} $. \end{example} \section{Duality and convergence}\label{sec-duality} In this section we discuss the differential signature for rings such that the associated graded of $D_{R|K}$ is a finitely generated $R$-algebra. Our main goal is to show that the differential signature is a limit rather than a limsup and that it is a rational number. Our approach uses Matlis duality for $D$-modules. This idea first appears in work of Yekutieli \cite{YekDuality}, and was developed further by Switala~\cite{Switala} and by Switala and Zhang~\cite{SwiZha}. We recall some facts about this duality; see \cite[\S 3 and \S 4]{Switala}. For an algebra $(R,\mathfrak{m},K)$ with coefficient field $K$, and $R$-modules $M,N$, we use the notation $\operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(M,N):=\varinjlim \operatorname{Hom}_K(M/\mathfrak{m}^n M,N)$.\index{$\operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(M,N)$} We denote by $E$ the injective hull of the residue field. \begin{proposition}\label{MatlisD} Let $(R,\mathfrak{m},K)$ be a complete or graded ring with coefficient field $K$. \begin{enumerate} \item There is an exact functor $(-)^{\vee}$\index{$M^{\vee}$} from $R$-modules to $R$-modules that sends left $D$-modules to right $D$-modules and vice versa, such that $(-)^{\vee}$ agrees with Matlis duality up to $R$-isomorphism for $R$-modules that are Noetherian or Artinian. \item For $M$ Noetherian, one has $M^{\vee}=\operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(M,K)\cong\operatorname{Hom}_R(M,E)$.
The last isomorphism comes from composition with a fixed $K$-linear projection onto the socle. \item The right $D$-action on $E=R^{\vee}=\operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(R,K)$ comes from precomposition with a differential operator. \end{enumerate} \end{proposition} \begin{remark}\label{RemJ} We set ${\mathcal J}_{R|K}=\{\delta \in D_{R|K} \;|\; \delta(R)\subseteq \mathfrak{m}\}= \bigcup_{n \in \mathbb{N}} D^n_{R|K} (R,\mathfrak{m}) $, i.e., the collection of all nonunitary operators.\index{${\mathcal J}_{R \vert K}$} Then, ${\mathcal J}_{R|K}$ is a right ideal of $D_{R|K}$ and $\mathfrak{m} D_{R|K}\subseteq {\mathcal J}_{R|K}$. \end{remark} The following lemma is an immediate consequence of Proposition~\ref{MatlisD}. \begin{lemma}\label{LemmaESimple} Let $(R,\mathfrak{m},K)$ be a complete or graded ring with coefficient field $K$, and $E$ be the injective hull of $K$. Suppose that $R$ is a simple $D_{R|K}$-module. Then, $E\cong R^\vee$ is a simple right $D_{R|K}$-module. \end{lemma} \begin{setup}\label{Setup9} Let $(R,\mathfrak{m},K)$ be a complete or graded ring with coefficient field $K$, and $E$ be the injective hull of $K$. As in Proposition~\ref{MatlisD}, we identify $E=\operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(R,K)$ and pick a generator $\eta\in \operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(R,K)$ for its socle. Let $\displaystyle G_{R|K}=\bigoplus_{n=0}^{\infty} \frac{D^n_{R|K}}{D^{n-1}_{R|K}}$ be the associated graded ring of $D_{R|K}$ with respect to the order filtration. \end{setup} We now present a few preparatory lemmas in order to reduce, in some cases, the study of the differential signature to classical Hilbert-Samuel theory. \begin{lemma}\label{LemmaLenDeta} In the situation of Setup~\ref{Setup9}, we have the equality $\lambda_R(R/\mathfrak{m}\dif{n}{K})=\lambda_R(D^{n-1}_{R|K}\cdot \eta)$. \end{lemma} \begin{proof} Let $\eta:R\rightarrow K$ be the quotient map. We identify $\eta$ as a generator of $R^{\vee}$.
Evidently, $f\in \mathfrak{m}$ if and only if $\eta(f)=0$. We claim that $f\in \mathfrak{m}\dif{n}{K}$ if and only if $\mu(f)=0$ for all $\mu\in D^{n-1}_{R|K}\cdot \eta$. Indeed, this is immediate from $D^{n-1}_{R|K}\cdot \eta = \{ \eta \circ \delta \ | \ \delta \in D^{n-1}_{R|K}\}$. To conclude the proof of the lemma, it suffices to show that, given a finite length submodule $N\subseteq R^{\vee}=\operatorname{Hom}^{\mathfrak{m}\mathrm{-cts}}_{K}(R,K)$, the ideal $I=\{r\in R \ | \ \psi(r)=0 \ \text{for all} \ \psi \in N\}$ satisfies $\lambda_R(N)=\lambda_R(R/I)$. To see this, write $\widetilde{N}$ for the image of $N$ in $\operatorname{Hom}_R(R,E)$ via Proposition \ref{MatlisD}(2), and set $J=\{r\in R \ | \ \rho(r)=0 \ \text{for all} \ \rho \in \widetilde{N} \}$. It is evident that $J\subseteq I$. If $r\notin J$, there is some $\rho\in \widetilde{N}$ and $\theta \in E\setminus\{0\}$ with $\rho(r)=\theta$. Since $E$ is divisible, there is some $s\in R$ such that $s\theta$ is nonzero in the socle. Then $s\rho$ is a map in $\widetilde{N}$ that corresponds to a map in $N$ that sends $r$ to a nonzero element, so $r\notin I$. Thus $I=J$, so $\lambda_R(R/I)=\lambda_R(R/J)=\lambda_R(\widetilde{N})=\lambda_R(N),$ where the middle equality is a standard fact from Matlis duality. \end{proof} \begin{remark} In the situation of Setup~\ref{Setup9}, the cyclic $D^{n-1}_{R|K}$-module $D^{n-1}_{R|K} \cdot \eta \subseteq \operatorname{Hom}_K(R,K) $ is isomorphic to $D^{n-1}_{R|K}/D^{n-1}_{R|K}(R,\mathfrak{m}) $. Therefore, if $R$ is essentially of finite type over $K$ with residue class field $K$, the equality of Lemma \ref{LemmaLenDeta} also follows directly from Proposition~\ref{freerank} or Remark~\ref{equalityvariant}. \end{remark} \begin{lemma}\label{LemmaGradedE} Suppose that $R$ is $D_{R|K}$-simple. Then, the map $\psi:D_{R|K} \to E$ defined by $\psi(\delta)=\eta\circ\delta$ is a surjective morphism of right $D_{R|K}$-modules with kernel ${\mathcal J}_{R|K}$.
As a consequence, $$ \bigoplus_{n=0}^{\infty} \frac{D^n_{R|K}\cdot \eta}{D^{n-1}_{R|K}\cdot \eta}\cong \bigoplus_{n=0}^{\infty} \frac{D^n_{R|K}}{ {\mathcal J}_{R|K} \cap D^{n}_{R|K}+D^{n-1}_{R|K}} $$ as graded $G_{R|K}$-modules. \end{lemma} \begin{proof} Since $R$ is a simple left $D_{R|K}$-module, we have that $E$ is a simple right $D_{R|K}$-module by Lemma~\ref{LemmaESimple}. Since $\eta\neq 0,$ we have that $E$ is generated by $\eta$ as a $D_{R|K}$-module. Then, $\psi$ is a surjective map. We now show that $\operatorname{Ker}(\psi)={\mathcal J}_{R|K}.$ We have that \begin{align*} \delta \in \operatorname{Ker}(\psi) &\Longleftrightarrow \eta\circ \delta=0\\ &\Longleftrightarrow \eta( \delta(f))=0\; \;\forall f\in R\\ &\Longleftrightarrow \delta(f)\in\operatorname{Ker}(\eta)\;\;\forall f\in R\\ &\Longleftrightarrow \delta(f)\in \mathfrak{m}\; \;\forall f\in R. \end{align*} We conclude that $E$ is isomorphic to $D_{R|K}/{\mathcal J}_{R|K}$. The last claim follows from giving $D_{R|K}/{\mathcal J}_{R|K}$ the filtration by the image of the filtration $\{D^n_{R|K}\}$ and passing to the associated graded. \end{proof} The following theorem establishes the existence and rationality of the differential signature for rings such that $G_{R|K}$ is a finitely generated $R$-algebra. \begin{theorem}\label{existenceandraionality} Let $(R,\mathfrak{m},\Bbbk)$ be an algebra with coefficient field $K$, and suppose that $G_{R|K}$ is a finitely generated $R$-algebra. Then the sequence $\displaystyle \frac{\lambda_R(R/\mathfrak{m}\dif{n}{K})}{n^d/d!}$ converges to $\dm{K}(R)=\dm{K}(\widehat{R})$, and the limit is rational. \end{theorem} \begin{proof} By Proposition~\ref{diff-ops-completion}, $G_{\widehat{R}|K} \cong \widehat{R}\otimes_R G_{R|K}$ is a finitely generated $\widehat{R}$-algebra. If $R$ is not $D_{R|K}$-simple, then $\dm{K}(R)=0$. Now, we can assume that $R$ is a complete local ring and a simple $D_{R|K}$-module.
Let $\operatorname{gr}(E)=\bigoplus_{n \in \mathbb{N} } \frac{D^n\cdot\eta}{D^{n-1}\cdot\eta}.$ We note that $D^n_{R|K}/\mathfrak{m} D^n_{R|K}$ surjects onto $D^n_{R|K}/({\mathcal J}_{R|K}\cap D^n_{R|K}).$ Then, $\displaystyle G_{R|K}/\mathfrak{m} G_{R|K}\to \operatorname{gr}(E)$ is a surjection of graded $G_{R|K}$-modules. As a consequence, we have that $\operatorname{gr}(E)$ is cyclic. Therefore, it is a finitely generated, graded $G_{R|K}/\mathfrak{m} G_{R|K}$-module. Then, $$ \dm{K}(R)=\lim\limits_{n\to\infty}\frac{\lambda_R(R/\mathfrak{m}\dif{n}{K})}{n^d/d!} =\lim\limits_{n\to\infty}\frac{\dim_K \operatorname{gr}(E)_{\leq n-1} }{n^d/d!} $$ by Lemmas~\ref{LemmaLenDeta} and \ref{LemmaGradedE}. The convergence and rationality statements follow from the last description by standard Hilbert function theory. \end{proof} \section{Applications to symbolic powers} In this section, we discuss some connections between differential operators, symbolic powers, and singularities. A classical theorem of Zariski and Nagata \cite{ZariskiHolFunct,Nagata} characterizes the symbolic powers of primes in $\mathbb{C}[x_1,\dots,x_n]$ as differential powers: $\mathfrak{p}\dif{n}{\mathbb{C}}=\mathfrak{p}^{(n)}$. More generally, if $K$ is a perfect field, $R=K[x_1,\ldots,x_n]$, and $I\subseteq R$ is a radical ideal, then $I\dif{n}{K}=I^{(n)}$. We point out that there is a recent extension of this result to mixed characteristic using $p$-derivations \cite{DSGJ}. With the general notion of differential powers, one may ask to what extent the Zariski-Nagata theorem holds over other $K$-algebras $R$. It turns out that this is very closely tied to the singularities of $R$. The first two results below show that in reasonably geometric situations, the Zariski-Nagata theorem actually characterizes smoothness. \begin{proposition}\label{ZN-reg-primes} Let $K$ be a perfect field, and $R$ be a ring essentially of finite type over $K$.
Let $\mathfrak{p} \in \operatorname{Spec}(R)$ be a prime such that $R_{\mathfrak{p}}$ is regular. Then, $\mathfrak{p}\dif{n}{K}=\mathfrak{p}^{(n)}$. \end{proposition} \begin{proof} By Proposition~\ref{properties-diff-powers}, we have that $\mathfrak{p}\dif{n}{K} \supseteq \mathfrak{p}^{(n)}$, and that both ideals are $\mathfrak{p}$-primary. It suffices to check the equality after localizing at $\mathfrak{p}$ and completing. Then, by Lemma~\ref{diff-localize2} and Lemma~\ref{diff-powers-completion}, $\mathfrak{p}\dif{n}{K} \widehat{R_{\mathfrak{p}}} = (\mathfrak{p} \widehat{R_{\mathfrak{p}}} )\dif{n}{K}$. Since $K$ is perfect, the residue field of $R_{\mathfrak{p}}$ is separable over $K$. Then, there exists a coefficient field $L$ for $\widehat{R_{\mathfrak{p}}}$ containing $K$ \cite[Theorem~28.3~(iii,iv)]{MatsumuraRing}. Since $R_{\mathfrak{p}}$ is regular, we have $\widehat{R_{\mathfrak{p}}}\cong L\llbracket y_1,\dots, y_e \rrbracket$ for some $e$. Under this isomorphism, $\mathfrak{p}=(y_1,\dots, y_e)$. By Lemma~\ref{diff-powers-completion}, it follows that $(\mathfrak{p} \widehat{R_{\mathfrak{p}}} )\dif{n}{L} = \mathfrak{p}^n \widehat{R_{\mathfrak{p}}}$. We obtain containments \[ \mathfrak{p}^n \widehat{R_{\mathfrak{p}}} = (\mathfrak{p} \widehat{R_{\mathfrak{p}}})^n \subseteq (\mathfrak{p} \widehat{R_{\mathfrak{p}}} )\dif{n}{K} \subseteq (\mathfrak{p} \widehat{R_{\mathfrak{p}}} )\dif{n}{L} = \mathfrak{p}^n \widehat{R_{\mathfrak{p}}}, \] so that equality must hold throughout. \end{proof} For maximal ideals in algebras with pseudocoefficient fields, the converse holds. We thank Mel Hochster for helping us complete the proof below. \begin{theorem}\label{Mel} Let $(R,\mathfrak{m},\Bbbk)$ be a domain with pseudocoefficient field $K$ such that $\operatorname{Frac}(R)$ is separable over $K$. Then, $\mathfrak{m}\dif{n}{K}=\mathfrak{m}^n$ for some $n \geq 2$ if and only if $R$ is smooth over $K$. 
Furthermore, if $\Bbbk$ is perfect, then the previous statements are equivalent to the property that $\ModDif{n}{R}{K}$ is free for some $n \geq 1$. \end{theorem} \begin{proof} Since we already know that if $R$ is regular, then $\mathfrak{m}\dif{n}{K}=\mathfrak{m}^n$, we focus on the other implication. We have that $\lambda_R(R/\mathfrak{m}^n) =\lambda_R(R/\mathfrak{m}\dif{n}{K})\leq \binom{n+d-1}{d}$ by Proposition~\ref{RankDiff}. We now show that this inequality forces $R$ to be regular. For this purpose, after a flat base change, we can reduce to the case where $K$ is infinite and perfect. Then there exists a $d$-generated minimal reduction of $\mathfrak{m}$, say $J=(x_1,\dots,x_d)$. Let $G=\operatorname{gr}_{\mathfrak{m}}(R)$ be the associated graded ring of $R$. Set $T=K \otimes_R \operatorname{gr}_{J}(R) \cong K[\overline{x_1},\dots,\overline{x_d}]$; since $J$ is generated by a system of parameters, this is a $d$-dimensional polynomial ring over $K$. We claim that $T$ is a graded subring of $G$. Since $J$ is a minimal reduction of $\mathfrak{m}$, we have that $J^n \cap \mathfrak{m}^{n+1} = \mathfrak{m} J^n$ \cite[Corollary~8.3.6]{SwansonHuneke}. Suppose that $R$ is not regular. Then $\dim_K G_1 >d$. Since $\dim_K G_i \geq \dim_K T_i = \binom{i+d-1}{d-1}$ for every $i$ and $n \geq 2$, we get \[ \lambda_R(R/\mathfrak{m}^n) = \sum_{i=0}^{n-1} \dim_K G_i \geq \sum_{i=0}^{n-1} \dim_K T_i + 1 > \binom{n+d-1}{d},\] contradicting the inequality above. Thus, $R$ is regular. The last statement follows from Propositions~\ref{freerank} and \ref{RankDiff}. \end{proof} The previous result was independently and simultaneously proven for hypersurfaces by Barajas and Duarte \cite{BarajasDuarte}. We also have an analogue of the Zariski-Nagata theorem (Proposition~\ref{ZN-reg-primes}) that describes the differential Frobenius powers of primes outside of the singular locus.
\begin{proposition}\label{ZN-frobenius} Let $K$ be a perfect field, and $R$ be a ring essentially of finite type over $K$. Let $\mathfrak{p} \in \operatorname{Spec}(R)$ be a prime such that $R_{\mathfrak{p}}$ is regular. Then $\mathfrak{p}^\Fdif{p^e}$ is the $\mathfrak{p}$-primary component of $\mathfrak{p}^{[p^e]}$. In particular, if $R$ is regular, $\mathfrak{p}^\Fdif{p^e}=\mathfrak{p}^{[p^e]}$. \end{proposition} \begin{proof} The proof is the same as that of Proposition~\ref{ZN-reg-primes}, using Lemmas~\ref{properties-Fdiff-powers}~and~\ref{Fdiff-localize}. The second claim follows from the first because $\mathfrak{p}^{[p^e]}$ is $\mathfrak{p}$-primary in a regular ring. \end{proof} \begin{remark}\label{Kunz} A result of Kunz~\cite{KunzReg} combined with Propositions~\ref{ZN-frobenius}~and~\ref{PropDvsCartIdeals} gives a characterization of regularity for a local $F$-pure $F$-finite ring $(R,\mathfrak{m})$. Namely, the following are equivalent: \begin{itemize} \item $R$ is a regular ring; \item $\mathfrak{m}^\Fdif{p^e}=\mathfrak{m}^{[p^e]}$ for every $e\in \mathbb{N};$ \item $\mathfrak{m}^\Fdif{p^e}=\mathfrak{m}^{[p^e]}$ for some $e\in\mathbb{N}.$ \end{itemize} Thus, we can think of the Zariski-Nagata theorem and Theorem~\ref{Mel} as a differential analogue of Kunz's theorem. We point out that, unlike Kunz's result, Theorem~\ref{Mel} is characteristic-free. \end{remark} The comparison between symbolic powers and differential operators also reflects finer qualities of singularities, beyond smoothness versus nonsmoothness. In strongly $F$-regular rings, the Zariski-Nagata theorem fails, but the topologies defined by symbolic powers and differential powers are linearly equivalent. \begin{theorem}[Linear Zariski-Nagata Theorem] Let $R$ be an $F$-finite $F$-pure $K$-algebra, where $K$ is a perfect field.
Then, for any $\mathfrak{p}\in \operatorname{Spec}(R)$, \[ R_{\mathfrak{p}} \text{ is strongly $F$-regular } \Longleftrightarrow \exists C>0 : \mathfrak{p}\dif{Cn}{K}\subseteq \mathfrak{p}^{(n)}\text{ for all }n>0. \] \end{theorem} \begin{proof} Since the ideals $\mathfrak{p}^{(n)}$ and $\mathfrak{p}\dif{n}{K}$ are $\mathfrak{p}$-primary for all $n$, by Proposition~\ref{diff-localize}, the condition on the right-hand side is equivalent to the existence of some $C$ such that $(\mathfrak{p} R_{\mathfrak{p}})\dif{Cn}{K}=\mathfrak{p}\dif{Cn}{K} R_{\mathfrak{p}} \subseteq \mathfrak{p}^{n}R_{\mathfrak{p}}$ for all $n>0$. Thus, since $F$-finiteness and $F$-purity localize, we can assume that $(R,\mathfrak{m})$ is local and $\mathfrak{p}=\mathfrak{m}$. If $R$ is not strongly $F$-regular, then $R$ is not $D$-simple \cite{DModFSplit}, so ${\mathcal P}_K\neq 0$ by Corollary~\ref{CorDifPrimeDsimple}. Since $\bigcap_{n\in \mathbb{N}} \mathfrak{m}^n =0$, the condition on the right-hand side fails. Now, assume that $R$ is strongly $F$-regular. For an integer $n$, set $l(n)=\lceil \log_p(n) \rceil$: this is the smallest integer $e$ such that $p^e\geq n$. Observe that $n \leq p^{l(n)} \leq pn$. If $\mu$ is the embedding dimension of $R$, then $D^{(e)}_{R | K} \subseteq D_{R|K}^{\mu(p^e-1)}$. We obtain that $\mathfrak{m}\dif{\mu p^e}{K} \subseteq I_e(R)$. There is a constant $e_0$ such that $I_{e+e_0}(R)\subseteq \mathfrak{m}^{[p^e]}$ for all $e$ \cite{AL}. Putting this together, we obtain \[ \mathfrak{m}\dif{\mu p^{e_0+1} n}{K} \subseteq \mathfrak{m}\dif{\mu p^{l(n)+e_0}}{K} \subseteq I_{l(n) + e_0}(R) \subseteq \mathfrak{m}^{[p^{l(n)}]} \subseteq \mathfrak{m}^n, \] so the constant $C=\mu p^{e_0+1}$ suffices. \end{proof} This result should be compared with the linear comparison between ordinary and symbolic powers \cite{SwansonLinear}; see also \cite{HKV,HKV2}. Connections between strong $F$-regularity and symbolic powers are not new; we refer the reader to \cite{HH-Symb,GrifoHuneke,Smolkin,JavierSmolkin}.
We end this section with an algorithm to compute symbolic powers of radical ideals. For algorithmic aspects of the computation of differential powers for other rings, see Remark~\ref{JacobiTayloralgorithms}. \begin{proposition}\label{PropAlgSymbPowers} Let $S=K[x_1,\ldots,x_d]$ be a polynomial ring over a field $K$, and let $J$ be an ideal. Let $T=K[x_1,\ldots,x_d,\tilde{x}_1 , \ldots , \tilde{x}_d]\cong \ModDif{}{S}{K}$, and $\Delta=(x_1-\tilde{x}_1,\dots,x_d-\tilde{x}_d)$. Then, $$ J\dif{n}{K}=\Big( \tilde{J} + \Delta^n \Big)\cap S, $$ where $\tilde{J}$ denotes the ideal in $T$ generated by the elements of $J$ written in the variables $\{\tilde{x}_i\}$. As a consequence, if $K$ is perfect and $J$ is radical, then $$J^{(n)}=\Big( \tilde{J} + \Delta^n \Big)\cap S.$$ \end{proposition} \begin{proof} Let $\tilde{S}$ be the polynomial subring of $T$ generated by the variables $\{\tilde{x}_i\}$. We know that $T\cong {\ModDif{}{S}{K}}$ and that $d$ corresponds to the inclusion $S\to T.$ We note that $T/\Delta^n \cong {\ModDif{n-1}{S}{K}}$. We have then that $T/\Delta^n$ is a free $S$-module. As a consequence, $\Big( \tilde{J} + \Delta^n \Big)=\bigcap_{\phi} \phi^{-1}(\tilde{J})$, where $\phi$ runs over all $\tilde{S}$-module morphisms $\phi:T/\Delta^n\to \tilde{S}/\tilde{J}$. Then, by Proposition~\ref{universaldifferential}, we have $J\dif{n}{K}=\Big( \tilde{J} + \Delta^n \Big)\cap S$. The claim about symbolic powers follows from the characterization of differential powers in this case \cite{SurveySP}. \end{proof} The formula in Proposition \ref{PropAlgSymbPowers} is similar in spirit to the characterization of symbolic powers in terms of joins of ideals \cite{Sullivant}. The key advantage of the formula above is that the computation involves only twice (not three times) as many variables.
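To illustrate Proposition~\ref{PropAlgSymbPowers} computationally, here is a minimal sketch in Python (SymPy) that computes the contraction $(\tilde{J} + \Delta^n)\cap S$ by eliminating the tilde variables with a lexicographic Gr\"obner basis; the example ideal $J=(x)$ with $n=2$ is our own illustration, not taken from the text:

```python
from sympy import symbols, groebner

# S = Q[x, y]; xt, yt span the "tilde" copy of S inside T.
x, y, xt, yt = symbols('x y xt yt')

# Illustrative choice (ours): J = (x) and n = 2, so J^(2) = (x^2).
Jt = [xt]                                        # J in the tilde variables
Delta = [x - xt, y - yt]                         # the diagonal ideal
Delta_n = [a * b for a in Delta for b in Delta]  # generators of Delta^2

# Contract to S by elimination: a lex order listing the tilde variables
# first; the basis elements free of xt, yt generate (J~ + Delta^n) ∩ S.
G = groebner(Jt + Delta_n, xt, yt, x, y, order='lex')
contraction = [g for g in G.exprs if not g.has(xt) and not g.has(yt)]
print(contraction)  # [x**2], matching J^(2) = (x^2)
```

The same elimination with any radical $J$ and any $n$ computes $J^{(n)}$ over a perfect field, per the proposition.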
\section*{Acknowledgments} We thank Josep \`Alvarez Montaner, Alessio Caminata, Elo\'isa Grifo, Daniel Hern\'andez, Mel Hochster, Craig Huneke, Luis Narv\'aez-Macarro, Claudia Miller, Mircea Musta\c{t}\u{a}, Julio Moyano, Eamon Quinlan, Anurag K.~Singh, Ilya Smirnov, Karen E.~Smith, Jonathan Steinbuch, Robert Walker, Wenliang Zhang, {and the anonymous referee} for helpful conversations and comments. We are very thankful for the work of Gennady Lyubeznik, which introduced us to $D$-module theory. \printindex \bibliographystyle{alpha}
\section*{QUBO reformulation of the genome assembly problem} The growing interest in the use of quantum computing devices (and in particular quantum annealers) is related to their potential in solving combinatorial optimization problems. It is widely discussed that the potential of quantum annealing is rooted in the quantum effects that allow one to efficiently explore the cost-function landscape in ways unavailable to classical methods. Therefore, an important stage is to map the problem of interest to a Hamiltonian that assigns to the binary representation of a graph path a corresponding energy value. The existing physical implementation of quantum annealing is the D-Wave processor, which can be described by an Ising spin Hamiltonian. The Ising Hamiltonian can be transformed into a QUBO problem. Thus, we have to map the problem that we would like to solve on the D-Wave quantum processor to the QUBO form. However, establishing correspondence between a problem of interest and the QUBO form may require additional overhead. In particular, in our case the transformation of the OLC graph to a QUBO problem requires the use of additional variables (see below). In addition, the D-Wave quantum processor has its own native structure (the Chimera topology). That is why, after the problem of interest is formulated in the QUBO form, an additional stage of embedding the problem in the native structure of the quantum device is required. This embedding introduces additional overhead in the number of physical qubits, since logical variables are represented by several physical qubits of the processor in accordance with the native structure (see below). \subsection*{Formulation of the Hamiltonian path problem} Along the lines of Ref.~\cite{Lucas2014}, we reformulate the task of finding the Hamiltonian path in the OLC graph as a QUBO problem.
Let a directed OLC graph be given in the form $G = (V, E)$, where $V=\{1,2,\ldots,N\}$ is a set of vertices, and $E$ is a set of edges consisting of pairs $(u,v)$ with $u,v\in V$. The solution of the Hamiltonian path problem is represented in the form of an $N\times N$ permutation matrix ${\cal X}=(x_{v,i})$, whose unit elements $x_{v,i}$ represent the path going through the $v$th node at the $i$th step. Then, we assign to {\it each} element $x_{v,i}$ of the matrix ${\cal X}$ a separate logical variable (spin) within an optimization problem. Note that this representation results in polynomial overhead in the number of logical variables of the QUBO problem: the solution for an $N$-vertex graph requires $N^{2}$ logical variables. The resulting Hamiltonian of the corresponding QUBO problem takes the following form: \begin{equation}\label{eq:QUBO} \begin{split} \mathcal{H}&=A \sum_{v=1}^{N}\left(1- \sum_{j=1}^{N}x_{v,j}\right)^2 \\ & + A \sum_{j=1}^{N}\left(1- \sum_{v=1}^{N}x_{v,j} \right)^2 \\ & + A \sum_{(u,v)\notin E}\sum_{j=1}^{N-1}(x_{u,j}x_{v,j+1}), \end{split} \end{equation} where $A>0$ is a penalty coefficient. The first two terms in Eq.~\eqref{eq:QUBO} ensure that each vertex appears exactly once in the path and that there is a single vertex at each step of the path. The third term provides a penalty for connections within the path that are beyond the allowed ones. With this QUBO formulation, we are able to run the genome assembly task using quantum annealers and quantum-inspired algorithms. We note that the applicability of the method requires the existence of a Hamiltonian path in the corresponding graph, which is not universally the case for arbitrary genetic data given by an OLC graph. \subsection*{Formulation of the Hamiltonian path problem for acyclic graphs} In general, the Hamiltonian path mapping is suitable for both cyclic and acyclic directed graphs. However, it is often the case that the OLC graph contains no cycles.
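As an illustration of the mapping in Eq.~\eqref{eq:QUBO}, the following sketch (our own, with an arbitrary penalty $A=1$) assembles the QUBO matrix for a small directed graph; constant terms from expanding the squares are dropped, so a feasible Hamiltonian path attains energy $-2AN$ instead of $0$:

```python
import numpy as np

def hamiltonian_path_qubo(n, edges, A=1.0):
    """QUBO matrix for Eq. (1); variable q = v*n + j is x_{v,j},
    i.e. 'the path visits vertex v at step j'."""
    Q = np.zeros((n * n, n * n))
    idx = lambda v, j: v * n + j

    # Expanding A*(1 - sum x)^2 for each row (vertex) and column (step)
    # constraint: after dropping constants, -A on the diagonal per
    # constraint and +2A for each pair of variables in the same constraint.
    for v in range(n):
        for j in range(n):
            Q[idx(v, j), idx(v, j)] -= 2 * A      # -A (row) - A (column)
            for k in range(j + 1, n):
                Q[idx(v, j), idx(v, k)] += 2 * A  # same vertex, two steps
            for u in range(v + 1, n):
                Q[idx(v, j), idx(u, j)] += 2 * A  # same step, two vertices

    # Penalize consecutive steps that would use a non-edge.
    edge_set = set(edges)
    for u in range(n):
        for v in range(n):
            if u != v and (u, v) not in edge_set:
                for j in range(n - 1):
                    Q[idx(u, j), idx(v, j + 1)] += A
    return Q

def energy(Q, x):
    return float(x @ Q @ x)

# Path graph 0 -> 1 -> 2: the correct visiting order is optimal.
Q = hamiltonian_path_qubo(3, [(0, 1), (1, 2)])
good = np.zeros(9)
bad = np.zeros(9)
for j, v in enumerate([0, 1, 2]):
    good[v * 3 + j] = 1
for j, v in enumerate([0, 2, 1]):
    bad[v * 3 + j] = 1
print(energy(Q, good), energy(Q, bad))  # -6.0 -4.0
```

The valid ordering attains $-2AN=-6$, while the infeasible ordering pays the non-edge penalty twice.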
It is then possible to further simplify the transformation and reduce the qubit overhead. Here, we demonstrate a more compact mapping that requires only $M$ logical variables, where $M=|E| < N^{2}$ is the number of edges. For the acyclic OLC graph $G = (V, E)$, let us define a set of binary variables $\{x_{u,v}\}_{(u,v)\in E}$ that indicate whether an edge $(u,v)$ is included in the Hamiltonian path. Then the corresponding Hamiltonian should include the following two components: \begin{equation}\label{formula:hamiltonian_long} \begin{split} \mathcal{H}= A \sum\limits_{\substack{u\in V}}&\left(1-\sum_{\substack{(u,v)\in E}}x_{u,v} \right)^2 \\ &+A\sum\limits_{\substack{v\in V}}\left(1-\sum_{\substack{(u,v)\in E}}x_{u,v} \right)^2. \end{split} \end{equation} The first and second terms in Eq.~\eqref{formula:hamiltonian_long} ensure that each vertex is incident (if possible) with a single outgoing and a single incoming path edge, respectively. Although this realization is helpful and can be used for solving genome assembly problems on quantum annealers without polynomial qubit overhead (the encoding requires $M$ variables, where $M$ is the number of edges in the corresponding OLC graph), the asymptotic computational speed-up versus classical algorithms is not exponential (since there exist efficient classical algorithms for solving this problem). \subsection*{Embedding to the processor native structure} As mentioned before, the logical variables in Hamiltonians~\eqref{eq:QUBO} and \eqref{formula:hamiltonian_long} are not necessarily equivalent to physical qubits of a quantum processor. This is due to the fact that quantum processors have their own native structure, i.e., the topology of physical qubits and the couplers between them. For example, each D-Wave 2000Q QPU is fabricated with 2048 qubits and 6016 couplers in a C16 Chimera topology~\cite{Boixo2013,Boixo2014,Ronnow2014}.
Within a C16 Chimera graph, physical qubits are logically mapped into a $16\times16$ matrix of unit cells of $8$ qubits, where each qubit is connected with at most 6 other qubits. In order to realize Hamiltonians~\eqref{eq:QUBO} and~\eqref{formula:hamiltonian_long}, whose connection structure may differ from that of the processor, we employ an additional embedding stage, described in~\cite{Zbinden2020}. It allows obtaining a desired effective Hamiltonian by assigning several physical qubits of the processor to a single logical variable of the original Hamiltonian (see Fig.~\ref{fig:scheme}c-d). The embedding to the native structure introduces considerable overhead in qubit number relative to the fully-connected model, yet allows solving problems with existing quantum annealers. \section*{Results} Here we apply our method for the experimental realization of {\it de novo} genome assembly using quantum and quantum-inspired annealers. As a figure of merit we use the time-to-solution (TTS), which is the total time required by the solver to find the optimal solution (ground state) at least once with a probability of 0.99. We first define $R_{99}$ as the number of runs required by the solver to find the ground state at least once with a probability of 0.99. Using the binomial distribution one can calculate $R_{99}$ as follows: \begin{equation} R_{99} = \frac{\log(1-0.99)}{\log(1-\theta)}, \end{equation} where $\theta$ is the estimated success probability of each run. Then we define the TTS in the following way: \begin{equation} {\rm TTS} = {t_{a}}R_{99}, \label{formula:tts} \end{equation} where the annealing time $t_{a}$ for D-Wave is $20\,\mu$s (the default value). Quantum-inspired optimization algorithms can also be used for solving QUBO problems. In our experiments, we employ the SimCIM quantum-inspired optimization algorithm~\cite{Tiunov2019}, which is based on a differential approach to simulating a specific quantum processor called the Coherent Ising Machine (CIM; see Methods).
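The two formulas above combine into a one-line computation; a small sketch (the per-run success probability $\theta=0.1$ is an arbitrary illustrative value, not a measured one):

```python
import math

def r99(theta):
    """R_99: number of runs needed to find the ground state at least
    once with probability 0.99, given per-run success probability theta."""
    return math.log(1 - 0.99) / math.log(1 - theta)

def tts(theta, t_a=20e-6):
    """Time-to-solution, TTS = t_a * R_99, with t_a = 20 us by default."""
    return t_a * r99(theta)

print(r99(0.1))  # about 43.7 runs
print(tts(0.1))  # about 8.7e-4 s
```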
SimCIM runs on conventional hardware and is easily parallelizable on graphics processing units (GPUs). For SimCIM, $t_{a}$ is the time for simulating a single annealing run using our implementation, measured on an Intel Core i7-6700 Quad-Core with 64GB DDR4 and a GeForce GTX 1080. \subsection*{$\phi$X~174 bacteriophage genome} We start with the paradigmatic example of the $\phi$X~174 bacteriophage genome~\cite{Sanger1977}. In order to realize {\it de~novo} genome assembly, we construct the adjacency matrix for the OLC graphs and use pre-processing for packing this graph into the D-Wave processor. We then transform each adjacency graph into the QUBO matrix according to Eq.~(\ref{formula:hamiltonian_long}). When using the D-Wave system, we embed the QUBO problem into the native structure of the annealing device (which naturally adds overhead in the number of qubits; see Methods). In order to embed the $\phi$X 174 graph into the D-Wave processor, we first conducted manual graph partitioning using classical algorithms implemented in the METIS tool~\cite{METIS}. The problem can then be solved with the use of quantum annealing hardware by D-Wave and a quantum-inspired optimization algorithm. For each instance, a total of $10^3$ anneals (runs) were collected from the processor, with each run having an annealing time of 20~$\mu$s. The total number of instances is 1000 (the process of their generation is described in Methods). The results are presented in Table~\ref{tab:bacteriophage}. To the best of our knowledge, this is the first realistic-size {\it de novo} genome assembly employing quantum computing devices and quantum-inspired algorithms. The presented time is that required for finding the optimal solution, since only the optimal solution has the correct interpretation. We note that the time required for the data pre-processing is not included in Table~\ref{tab:bacteriophage}.
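The $R_{99}$ and TTS figures of merit defined above are straightforward to evaluate; the following minimal sketch (the helper names are ours) illustrates the computation:

```python
import math

def r99(theta: float) -> float:
    """Number of runs needed to observe the ground state at least once
    with probability 0.99, given the per-run success probability theta."""
    return math.log(1 - 0.99) / math.log(1 - theta)

def tts(theta: float, t_a: float = 20.0) -> float:
    """Time-to-solution: annealing time per run (in microseconds;
    20 us is the D-Wave default) multiplied by R_99."""
    return t_a * r99(theta)

# A per-run success probability of 0.5 gives r99(0.5) ~ 6.64 runs,
# i.e. tts(0.5) ~ 133 microseconds at the default annealing time.
```

For SimCIM, the same formula applies with $t_{a}$ taken as the time of simulating a single annealing run.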
Details of the graph size for each part after manual graph partitioning are presented in Table~\ref{tab:graph} in Methods. Statistics are presented for 1000 simulated $\phi$X 174 bacteriophage OLC graphs. We use the D-Wave hybrid computing mode, which employs further graph decompositions with parallel computing on both classical and quantum backends. D-Wave gives access to the following timing information: $T_{\rm run}$ (run time), $T_{\rm charge}$ (charge time), and $T_{\rm QPU}$ (QPU access time). We assume CPU time $=T_{\rm run}-T_{\rm QPU}$. We summarize the QPU access time and the CPU time for the obtained OLC graphs. For the case of SimCIM, we use the TTS. \begin{table}[] \begin{tabular}{|l|l|l|l|l|l|} \hline & & Mean, $\mu$s & Min, $\mu$s & Max, $\mu$s & 90th percentile, $\mu$s \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Quantum\\ annealer\end{tabular}} & CPU & 8483 & 8314 & 8619 & 8579 \\ \cline{2-6} & QPU & 535 & 369 & 672 & 600 \\ \hline \begin{tabular}[c]{@{}l@{}}Quantum-\\ inspired\\ annealer \\ (SimCIM)\end{tabular} & & 262 & 9.9 & 7212 & 1061 \\ \hline \end{tabular} \caption{Genome assembly time for the $\phi$X 174 bacteriophage for 1000 instances. For the data based on experiments with quantum annealers, we separate the required classical processing unit (CPU) time and quantum processing unit (QPU) time.} \label{tab:bacteriophage} \end{table} \subsection*{Benchmarking quantum-assisted {\it de novo} genome assembly using the synthetic dataset} In order to perform a complete analysis of the suggested approach, we realize quantum-assisted {\it de novo} genome assembly for a synthetic dataset. The dataset consists of 60 random reads of length from 5 to 10 (for details, see Methods); 10 problems are generated for every sequence length. We then split each read into $k$-mers of length 3 and compute the adjacency matrix for the corresponding OLC graph.
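As a concrete sketch of this pipeline ($k$-mer splitting, OLC-graph construction, and the edge-based penalty of Eq.~(\ref{formula:hamiltonian_long})), consider the following minimal example; the function names and the toy read are ours, and the sketch assumes exact $(k-1)$-overlaps between $k$-mers:

```python
import itertools

def kmers(read, k=3):
    """Split a read into its overlapping k-mers."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def olc_edges(nodes):
    """Directed OLC edge u -> v whenever the suffix of u (all but the
    first symbol) equals the prefix of v (all but the last symbol)."""
    return [(u, v) for u, v in itertools.permutations(nodes, 2)
            if u[1:] == v[:-1]]

def edge_qubo(nodes, edges, A=1.0):
    """Edge-based QUBO: one binary variable per edge and, for every vertex,
    the penalties (1 - sum of outgoing x)^2 + (1 - sum of incoming x)^2.
    Returns Q as a dict over variable-index pairs and the constant offset."""
    idx = {e: i for i, e in enumerate(edges)}
    Q, offset = {}, 0.0
    for v in nodes:
        outgoing = [idx[e] for e in edges if e[0] == v]
        incoming = [idx[e] for e in edges if e[1] == v]
        for group in (outgoing, incoming):
            offset += A  # the constant 1 in (1 - ...)^2
            for i in group:
                Q[(i, i)] = Q.get((i, i), 0.0) - A  # A*x^2 - 2*A*x = -A*x for binary x
            for i, j in itertools.combinations(group, 2):
                Q[(i, j)] = Q.get((i, j), 0.0) + 2 * A  # cross terms of the square
    return Q, offset

# Toy read: its 3-mers chain into the path ATG -> TGG -> GGC.
nodes = sorted(set(kmers("ATGGC")))
edges = olc_edges(nodes)
Q, offset = edge_qubo(nodes, edges)
```

In this toy instance, selecting both edges minimizes the penalty (the two path endpoints each contribute an unavoidable constant), so the ground state recovers the Hamiltonian path and hence the original read.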
Finally, we transform each adjacency graph into the QUBO matrix according to Eq.~(\ref{formula:hamiltonian_long}) and minimize it using quantum annealing hardware by D-Wave and quantum-inspired optimization algorithms. Our goals are to check the applicability of existing quantum annealers to the task of genome assembly, to evaluate the upper bound on the input problem size (particularly, the length of the original genome), and to compare the performance of the D-Wave quantum annealer with the software annealing simulator SimCIM. The choice of tools is motivated by their maturity in terms of quantum dimensionality and their compatibility with the original formulation of the optimization problem. A similar routine is realized with the use of quantum-inspired annealing. We first test the suggested approach on the simulated data with the D-Wave quantum annealer (see Fig.~\ref{fig:comparison}) and compare our results with the quantum-inspired optimization algorithm SimCIM. We note that the D-Wave annealer shows an advantage in genome assembly for short sequences, while it cannot be applied to sequences of length 6 and more due to the fact that the decoherence time becomes comparable with the annealing time. What we observe is that the performance of the annealing system depends on the properties of the input data. While an exhaustive investigation of the nature of the D-Wave performance with respect to the input data goes beyond the scope of our research, we consider the connectivity of the input graph to be one of the critical factors. The D-Wave 2000Q processor is based on the Chimera topology, in which each physical qubit within the basic cells is natively connected to at most 6 other qubits. During the experiments on the synthetic dataset, we were able to embed problems with a maximum sequence length of up to 10 nucleotides. This length corresponds to the fully connected graph with 8 nodes ($K_{8,8}$), which is the maximum possible graph size that can be embedded into the Chimera lattice using the clique embedding tool from the D-Wave Ocean SDK.
While the Ocean SDK allows using other types of embedding (e.g., minorminer), we observed that clique embedding demonstrates more stable results due to the deterministic nature of the embedding, in comparison to the minorminer tool, which is intrinsically based on randomized heuristics that contribute to larger deviations across experiments. However, no viable solution that could reconstruct the original sequence was found for sequences longer than 6. \begin{figure} \includegraphics[width=1\linewidth]{fig2.pdf} \vskip -4mm \caption{Comparison of the performance of quantum and quantum-inspired methods for {\it de novo} genome assembly based on synthetic data (10 problems were generated for every sequence length): we compare the TTS for the D-Wave quantum device and the quantum-inspired optimization algorithm SimCIM.} \label{fig:comparison} \end{figure} \section*{Discussions} In our work, we have demonstrated the possibility of solving the simplified bioinformatics problem of reconstructing genome sequences using quantum annealing hardware and quantum-inspired algorithms. We have implemented the experimental quantum-assisted assembly of the $\phi$X 174 bacteriophage genome. On the basis of synthetic data, we have shown that the existing quantum hardware allows reconstructing short sequences of up to 7 nucleotides. In order to use quantum optimization for realistic tasks, the ratio of the decoherence time to the annealing time should be considerably improved. We note that while the decoherence time is not a fundamental limitation of the technology, the realization of quantum annealers with a sufficient decoherence time remains a challenge. While D-Wave machines use superconducting quantum circuits, setups based on ultracold Rydberg atom arrays~\cite{Lukin2017,Lukin2018,Browaeys2020} and trapped ions~\cite{Monroe2017,Blatt2018} can also be used for the efficient implementation of quantum annealing and other quantum optimization algorithms.
Specifically, the system of Rydberg atom arrays has been studied in the context of solving the maximum independent set problem~\cite{Lukin2018,Browaeys2020}, which is NP-hard. For longer sequences, as we have demonstrated, it is possible to use quantum-inspired algorithms that are capable of solving more complex problems using classical hardware. We note that our work is a proof-of-principle demonstration of the possibility of using existing quantum devices for solving the genome assembly problem. The problem scale considered in this paper is still far from real sequences ($\sim$130 kilo-base pairs for primitive bacteria) and lacks numerous complications, such as errors in sequence reads and the handling of repeating sequences. However, the proposed method demonstrates that newly evolving computing techniques based on quantum computers and quantum-inspired algorithms are developing quickly and may soon be applied in new areas of science. Limitations of existing quantum hardware do not allow one to universally outperform existing solutions for {\it de novo} genome assembly. At the same time, one of the most interesting practical questions is when one can expect computational advantages from the use of quantum computing in genome assembly tasks. We note that in real-life conditions a number of additional challenges arise. Examples include errors (random insertions and deletions, repeats, etc.), genome contaminants (pieces of the genome not related to the subject of interest), and polymerase chain reaction artifacts, all of which require additional post-processing steps. These problems are beyond the scope of our proof-of-principle demonstration, and they should be considered in the future. Another complication comes from the fact that temperature and other noise effects play a significant role in realistic quantum devices. Thermal excitation and relaxation processes affect the performance.
Our further directions include the optimization of the QUBO model for a more compact spin representation and the integration of an error model into our algorithm. Solving these two issues can enable the reconstruction of real sequences using the quantum approach. \section*{Methods} \subsection*{Quantum annealing protocol} The initial Hamiltonian of the D-Wave processor is a transverse magnetic field of the following form: \begin{equation} \mathcal{H}_0=\sum_{i\in{V}}h_i\sigma_i^x, \end{equation} where $\sigma_i^x$ is the Pauli $x$-matrix acting on the $i$th qubit. The problem Hamiltonian can be encoded in the following Ising form: \begin{equation} \mathcal{H}_{\rm P}=\sum_{i\in{V}}h_i\sigma_i^z+\sum_{(i,j)\in{E}}J_{ij}\sigma_i^z\sigma_j^z, \end{equation} where the $h_i$ describe local fields, the $J_{ij}$ stand for couplings, the $\sigma_i^z$ are the Pauli $z$-matrices, and $E$ is the set of edges. One can see that $\mathcal{H}_{\rm P}$ is of diagonal form, so the $\sigma_i^z$ can be treated as spin values $\{\sigma_i^z=\pm1\}$. For a given spin configuration $\{\sigma_i^z\}$ the total energy of the system is given by $\mathcal{H}_{\rm P}$, so by measuring the energy one can find a solution to the problem of interest. Quantum annealing can be applied to any optimization problem that can be expressed in the QUBO form. The idea is then to reduce the problem of interest to the QUBO form. \subsection*{QUBO transformation} The Ising Hamiltonian can be directly transformed into a quadratic unconstrained binary optimization (QUBO) problem. The following change of variables can be applied for this purpose: \begin{equation} w_i=\frac{\sigma_i^z+1}{2} \in \{0,1\}, \end{equation} where $\{\sigma_i^z=\pm1\}$. For solving the problem on the D-Wave quantum processor, all $h_i$ and $J_{ij}$ values are scaled to lie between $-1$ and $1$.
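The substitution $w_i=(\sigma_i^z+1)/2$ can be carried out term by term; the following sketch (our helper names, not part of any D-Wave API) converts Ising parameters $(h_i, J_{ij})$ into a QUBO matrix plus a constant offset:

```python
import numpy as np

def ising_to_qubo(h, J):
    """Map sum_i h_i s_i + sum_{(i,j)} J_ij s_i s_j with s_i = +/-1 to
    w^T Q w + offset with w_i = (s_i + 1)/2 in {0, 1}."""
    n = len(h)
    Q = np.zeros((n, n))
    offset = -float(np.sum(h))  # constant from h_i * (2 w_i - 1)
    for i in range(n):
        Q[i, i] += 2.0 * h[i]
    for (i, j), Jij in J.items():
        # J_ij (2 w_i - 1)(2 w_j - 1) = 4 J_ij w_i w_j - 2 J_ij (w_i + w_j) + J_ij
        Q[i, j] += 4.0 * Jij
        Q[i, i] -= 2.0 * Jij
        Q[j, j] -= 2.0 * Jij
        offset += Jij
    return Q, offset
```

Energies agree configuration by configuration; e.g. for $h=(1,-2)$ and $J_{01}=3$, the spin state $(+1,-1)$ and the binary state $(1,0)$ both have energy $0$.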
After the annealing, the processor outputs a set of spin values $\{\sigma_i^z=\pm1\}$ that attempts to minimize the energy; a lower energy indicates a better solution of the optimization problem. We note that Ref.~\cite{Lucas2014} provides QUBO/Ising formulations of many NP problems. \subsection*{Quantum-inspired annealing using SimCIM} SimCIM is an example of a quantum-inspired annealing algorithm, which works in an iterative manner. It can be used for sampling low-energy spin configurations of the classical Ising model. The algorithm treats each spin value $s_i$ as a continuous variable lying in the range $[-1, 1]$. Each iteration of the SimCIM algorithm starts with calculating the mean field \begin{equation} \Phi_i = \sum_{j \neq i}J_{ij}s_j + b_i, \end{equation} which acts on each spin due to all other spins ($b_i$ is an element of the bias vector). Then the gradients for the spin values are calculated according to $\Delta s_i = p_t s_i + \zeta \Phi_i + N(0,\sigma)$, where $p_t$ and $\zeta$ are the annealing control parameters and $N(0,\sigma)$ is Gaussian noise. The spin values are then updated according to $s_i \leftarrow \phi(s_i + \Delta s_i)$, where $\phi(x)$ is the activation function \begin{equation}\label{acti} \phi(x)=\begin{cases} x & \textrm{for } |x|\leq 1,\\ x/|x| & \textrm{otherwise.} \end{cases} \end{equation} After multiple updates, the spins tend to either $-1$ or $+1$, and the final discrete spin configuration is obtained by taking the sign of each $s_i$. \subsection*{Bacteriophage simulations} \begin{figure}[t] \includegraphics[width=0.65\linewidth]{fig3.jpeg} \vskip -4mm \caption{Experimental scheme for the synthetic dataset.} \label{fig:dataset} \end{figure} We use {\it Grinder}~\cite{Angly2012} to simulate raw reads from the $\phi$X 174 bacteriophage complete genome (NCBI Reference Sequence: NC\_001422.1).
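The SimCIM iteration described in the previous subsection can be condensed into a few lines; the following is a schematic re-implementation (the linear pump schedule and the parameter values are our illustrative choices, not those of Ref.~\cite{Tiunov2019}):

```python
import numpy as np

def simcim_step(s, J, b, p_t, zeta, noise_sigma, rng):
    """One SimCIM update: mean field, gradient with Gaussian noise,
    and the clipping activation phi(x)."""
    phi = J @ s + b  # mean field acting on each spin (J has zero diagonal)
    ds = p_t * s + zeta * phi + rng.normal(0.0, noise_sigma, size=s.shape)
    return np.clip(s + ds, -1.0, 1.0)

def simcim(J, b, n_steps=1000, zeta=0.05, noise_sigma=0.01, seed=0):
    """Run the iteration with a simple linear ramp of the pump parameter
    p_t and read out the spins as the signs of the final amplitudes."""
    rng = np.random.default_rng(seed)
    s = np.zeros(len(b))
    for t in range(n_steps):
        p_t = -1.0 + 2.0 * t / n_steps  # illustrative annealing schedule
        s = simcim_step(s, J, b, p_t, zeta, noise_sigma, rng)
    return np.where(s >= 0.0, 1.0, -1.0)
```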
To simplify the task and make it feasible for quantum computing, we generate 50 reads in each run of simulations. In our proof-of-concept research, we focus on finding the Hamiltonian path in the OLC graph using quantum and quantum-inspired annealing. We generate the raw reads with no sequencing errors, and the length of each read is equal to 600 base pairs. We build the OLC graph using the pairwise alignment of the raw reads implemented in the {\it minimap2} package~\cite{Li2018_2}. We run {\it minimap2} with the predefined set of parameters {\sf ava-ont} and $k=10$. We apply {\it miniasm}~\cite{Li2016} to the same data as a benchmark assembler that uses OLC graphs. \medskip \begin{table}[] \begin{tabular}{|c|c|c|c|} \hline Sequence length & Graph size & QUBO size & Physical qubits \\ \hline 5 & 3 & 9 & 36 \\ \hline 6 & 4 & 16 & 80 \\ \hline 7 & 5 & 25 & 200 \\ \hline 8 & 6 & 36 & 360 \\ \hline 9 & 7 & 49 & 686 \\ \hline 10 & 8 & 64 & 1088 \\ \hline \end{tabular} \caption{Problem sizes for the synthetic dataset: sequence length, size of the OLC graph, size of the QUBO problem, and the number of physical qubits after embedding.}\label{tab:graph} \end{table} For the experiments with quantum annealing, we use public access to the D-Wave 2000Q via the Leap SDK. We evaluated the impact of tunable parameters (particularly, the annealing time) on the final solution quality; however, no significant improvement was discovered over the default values, so the annealing time was set to 20~$\mu$s (the default value). The number of annealing runs is set to $10^3$ (the maximum possible value). During our experiments we mostly use the standard configuration of the D-Wave processor, so we do not have any specific requirements on the weights/couplers in the model. Synthetic dataset graphs, which consist of reads no longer than 7 nucleotides (25 logical variables), are small enough to fit into the quantum annealer, so we can use the DW\_2000Q\_5 backend (pure quantum mode of operation; see the following section). However, the size of the $\phi$X 174 bacteriophage graph (248 vertices) is too large.
In order to embed the $\phi$X 174 graph into the D-Wave processor, we first conducted manual graph partitioning using classical algorithms implemented in the METIS tool~\cite{METIS}. It allows splitting the original graph into 3 sub-parts, which are carefully selected so that only a single edge remains between them. The longest path is then calculated separately for each part, and the paths are concatenated into a single path of the original graph. Each part is still too large to be computed using the purely quantum mode, so we use the D-Wave hybrid computing mode --- the hybrid\_v1 backend. The D-Wave hybrid computing mode employs further graph decompositions with parallel computing on both classical and quantum backends. Specifics of such decomposition are not publicly available, and the physical qubit count is also not shown to the end user. Details of the graph size for each part after manual graph partitioning are presented in Table~\ref{tab:graph}. According to the D-Wave Leap specification, the hybrid\_v1 backend automatically combines the power of classical and quantum computation. \subsection*{Simulations with synthetic dataset} In order to evaluate the performance of the algorithm in a controlled setup, we generated several hundred random nucleotide sequences of variable length and performed the corresponding transformations, as shown in Fig.~\ref{fig:dataset}. Further, we eliminated graph duplicates and other trivial cases, where the graph structure contained no auxiliary edges. Synthetic dataset graphs up to a length of 7 nucleotides (25 logical variables) are small enough to fit into the quantum annealer, so we can use the DW\_2000Q\_5 backend (pure quantum mode of operation). Finally, we selected 60 sequences that produce unique OLC graphs with comparable complexity. {\it Note Added}.
Recently, we became aware of a work reporting studies of quantum acceleration using gate-based and annealing-based quantum computing~\cite{Sarkar2020}. Corresponding author: A.K.F. ([email protected]). \section*{Acknowledgements} We are grateful to A.I. Lvovsky for fruitful discussions as well as to A.S. Mastiukova and D.V. Kurlov for useful comments. We also thank the anonymous referee for carefully reading our manuscript and for insightful comments that helped to improve the paper. We thank E.S. Tiunov for providing information about the SimCIM algorithm and A.E. Ulanov for the discussion of various quantum-inspired optimization algorithms. This work is supported by the Russian Science Foundation (19-71-10092). We also thank D-Wave Systems (the research is conducted within the program of global response to COVID-19). \section*{Competing interests} Owing to the employments and consulting activities of A.S.B., S.R.U., E.O.K., and A.K.F., they have financial interests in the commercial applications of quantum computing. A.S.R., I.V.P., and V.V.I. are employees of Genotek Ltd; they declare that they have no other competing interests. A.N.K. declares no competing interests.
Thus, we have to find the mapping to a problem that we would like to solve on the D-Wave quantum processor to the QUBO form. However, establishing correspondence between a problem of interest and the QUBO form may require additional overhead. In particular, in our case the transformation of the OLC graph to a QUBO problem requires the use of additional variables (see below). In addition, the D-Wave quantum processor has its native structure (the chimera structure). That is why after the formulation of the problem of interest in the QUBO form an additional stage of embedding problem in the native structure of the quantum device is required. So additional overhead in the number of physical qubits, which is related to the representation of logical variables by physical qubits of the processor (that takes into account the native structure), is required (see below). \subsection*{Formulation of the Hamiltonian path problem} Along the lines of Ref.~\cite{Lucas2014}, we reformulate the task of finding the Hamiltonian path in the OLC graph as a QUBO problem. Let a directed OLC graph be given in the form $G = (V, E)$, where $V=\{1,2,\ldots,N\}$ is a set of vertices, and $E$ is set of edges consisting of pairs $(u,v)$ with $u,v\in V$. The solution of the Hamiltonian path problem is represented in the form of $N\times N$ permutation matrix ${\cal X}=(x_{v,i})$, whose unit elements $x_{v,i}$ represent the path going through the $v$th node at the $i$th step. Then, we assign {\it each} element $x_{v,i}$ of the matrix ${\cal X}$ a separate logical variable (spin) within an optimization problem. Note, that this representation results in polynomial overhead in the number of logical variables of the QUBO problem: The solution for $N$-vertex graph requires $N^{2}$ logical variables. 
The resulting Hamiltonian of the corresponding QUBO problem takes the following form: \begin{equation}\label{eq:QUBO} \begin{split} \mathcal{H}&=A \sum_{v=1}^{N}\left(1- \sum_{j=1}^{N}x_{v,j}\right)^2 \\ & + A \sum_{j=1}^{N}\left(1- \sum_{v=1}^{N}x_{v,j} \right)^2 \\ & + A \sum_{(u,v)\notin E}\sum_{j=1}^{N-1}(x_{u,j}x_{v,j+1}), \end{split} \end{equation} where $A>0$ is a penalty coefficient. The first two terms in Eq.~\eqref{eq:QUBO} ensure the fact that each vertex appears only once in the path, and there is a single vertex at each step of the path. The third term provides a penalty for connections within the path that are beyond the allowed ones. With this QUBO formulation, we are able to run the genome assembly task using quantum annealers and quantum-inspired algorithms. We note that the applicability of the method requires the existence of the Hamiltonian path in the corresponding graph, which is not universally the case for arbitrary genetic data given by an OLC-graph. \subsection*{Formulation of the Hamiltonian path problem for acyclic graphs} In general, Hamiltonian path mapping is suitable both for cyclic and acyclic directed graphs. However, it is often the case that the OLC graph contains no cycles. It is then possible to further simplify transformation and reduce the qubit overhead. Here, we demonstrate more compact mapping that requires only $M$ logical variables, where $M=|E| < N^{2}$ is the number of edges. For the acyclic OLC graph $G = (V, E)$, let us define a set of binary variables $\{x_{u,v}\}_{(u,v)\in E}$ that indicate whether an edge $(u,v)$ is included in the Hamiltonian path. Then the corresponding Hamiltonian should include the following two components: \begin{equation}\label{formula:hamiltonian_long} \begin{split} \mathcal{H}= A \sum\limits_{\substack{u\in V}}&\left(1-\sum_{\substack{(u,v)\in E}}x_{u,v} \right)^2 \\ &+A\sum\limits_{\substack{v\in V}}\left(1-\sum_{\substack{(u,v)\in E}}x_{u,v} \right)^2. 
\end{split} \end{equation} The first and the second terms in Eq.~\eqref{formula:hamiltonian_long} assure that each vertex is incident (if possible) with a single incoming and outgoing path edges correspondingly. Although this realization is helpful and can be used for solving genome assembly problems on quantum annealers without polynomial qubit overhead (the encoding requires $M$ variables, where $M$ is the number of edges in the corresponding OLC-graph), the asymptotic computational speed-up versus classical algorithm is not exponential (since there exist efficient classical algorithms for solving this problem). \subsection*{Embedding to the processor native structure} As it is mentioned before, the logical variables in Hamiltonians~\eqref{eq:QUBO} and \eqref{formula:hamiltonian_long} are not necessarily equivalent to physical qubits of a quantum processor. This is due to the fact that quantum processors have its native structure, i.e. topology of physical qubits and couplers between them. For example, each D-Wave 2000Q QPU is fabricated with 2048 qubits and 6016 couplers in a C16 Chimera topology~\cite{Boixo2013,Boixo2014,Ronnow2014}. Within a C16 Chimera graph, physical qubits are logically mapped into a $16\times16$ matrix of unit cells of $8$ qubits, where each qubit is connected with at most 6 other qubits. In order to realize Hamiltonians~\eqref{eq:QUBO} and~\eqref{formula:hamiltonian_long}, whose connections structure may differ from the one of the processor, we employ an additional embedding stage, described in~\cite{Zbinden2020}. It allows obtaining a desired effective Hamiltonian by assigning several physical qubits of the processor to a single logical variable of the original Hamiltonian (see Fig.~\ref{fig:scheme}c-d). The embedding to the native structure introduces considerable overhead in qubit number relative to the fully-connected model, yet allows solving problems with existing quantum annealers. 
\section*{Results} Here we apply our method for the experimental realization of {\it de novo} genome assembly using quantum and quantum-inspired annealers. As a figure of merit we use the time-to-solution (TTS), which is the total time required by the solver to find the optimal solution (ground state) at least once with a probability of 0.99. We first define $R_{99}$ as the number of runs required by the solver to find the ground state at least once with a probability of 0.99. Using binomial distribution one can calculate $R_{99}$ as follows: \begin{equation} R_{99} = \frac{\log(1-0.99)}{\log(1-\theta)}, \end{equation} where $\theta$ is an estimated success probability of each run. Then we define TTS it in the following way: \begin{equation} {\rm TTS} = {t_{a}}R_{99}, \label{formula:tts} \end{equation} where $t_{a}$ for D-Wave is 20$\mu{s}$ (default value). Quantum-inspired optimization algorithms can be also used for solving QUBO problems. In our experiments, we employ SimCIM quantum-inspired optimization algorithm~\cite{Tiunov2019}, which is based on the differential approach to simulating specific quantum processors called Coherent Ising Machine (CIM; see Methods). SimCIM runs on conventional hardware and is easily parallelizable on graphical processing units (GPU). This is the time for simulating a single annealing run using our implementation of SimCIM, measured on Intel core i7-6700 Quad-Core, 64GB DDR4, GeForce GTX 1080. \subsection*{$\phi$X~174 bacteriophage genome} We start with the paradigmatic example of the $\phi$X~174 bacteriophage genome~\cite{Sanger1977}. In order to realize {\it de~novo} genome assembly, we construct the adjacency matrix for OLC graphs and use pre-processing for packing this graph into D-Wave processor. We then transform each adjacency graph into the QUBO matrix according to Eq.~(\ref{formula:hamiltonian_long}). 
For the case of using the D-Wave system we embed the QUBO problem in the native structure of the annealing device (which naturally adds overhead in the number of qubits; see Methods). In order to embed $\phi$X 174 graph into the D-Wave processor, we have preliminary conducted manual graph partitioning using classical algorithms implemented in the METIS tool~\cite{METIS}. Then the problem can be solved with the use of quantum annealing hardware by D-Wave and quantum-inspired optimization algorithm. For each instance, a total of 10$^3$ anneals (runs) were collected from the processor, with each run having an annealing time of 20 $\mu$s. The total number of instances is 1000 (the process of their generation is described in Methods). The results are presented in Table~\ref{tab:bacteriophage}. Up to our best knowledge, this is the first realistic-size {\it de novo} genome assembly employing the use of quantum computing devices and quantum-inspired algorithms. The presented time is required for finding the optimal solution since only one solution has a right interpretation. We note that the time required for the data pre-processing is not included in Table~\ref{tab:bacteriophage}. Details of graph size for each part after manual graph partitioning is presented in Table~\ref{tab:graph} in Methods. Statistics presented for 1000 simulated Phi-X 174 bacteriophage OLC graphs. We use D-Wave hybrid computing mode, which employs further graph decompositions with parallel computing on both classical and quantum backends. D-Wave gives access to the following timing information in the information system: $T_{\rm run}$ (run time), $T_{\rm charge}$ (charge time), and $T_{\rm QPU}$ (QPU access time). We assume CPU time $=T_{\rm run}-T_{\rm QPU}$. We summarize QPU access time and CPU time for obtained OLC graphs. For the case of SimCIM, we use TTS. 
\begin{table}[] \begin{tabular}{|l|l|l|l|l|l|} \hline & & Mean, $\mu$s & Min, $\mu$s & Max, $\mu$s & 90\% \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Quantum\\ annealer\end{tabular}} & CPU & 8483 & 8314 & 8619 & 8579 \\ \cline{2-6} & QPU & 535 & 369 & 672 & 600 \\ \hline \begin{tabular}[c]{@{}l@{}}Quantum-\\ inspired\\ annealer \\ (SimCIM)\end{tabular} & & 262 & 9.9 & 7212 & 1061 \\ \hline \end{tabular} \caption{Genome assembly time for $\phi$X 174 bacteriophage for 1000 instances. For the data based on experiments with quantum annealers we highlight required classical processor unit (CPU) time and quantum processor unit (QPU) time.} \label{tab:bacteriophage} \end{table} \subsection*{Benchmarking quantum-assisted {\it de novo} genome assembly using the synthetic dataset} In order to perform a complete analysis of the suggested approach, we realize the quantum-assisted {\it de novo} genome assembly for the synthetic dataset. We generate a synthetic dataset, which consists of 60 random reads of length from 5 to 10 (for details, see Methods), 10 problems are generated for every sequence length. We then split each read into $k$-mers of length 3 and compute adjacency matrix for the corresponding OLC graph using Eq.~(\ref{formula:hamiltonian_long}). Finally, we transform each adjacency graph into the QUBO matrix according to our algorithm and minimize it using quantum annealing hardware by D-Wave and quantum-inspired optimization algorithms. Our goal is to check the applicability of existing quantum annealers to the task of genome assembly, evaluate the upper bound on the input problem size (particularly, the length of the original genome), compare the performance of the D-Wave quantum annealer with a software annealing simulator SimCIM. The choice of tools is motivated by their maturity in terms of quantum dimensionality and compatibility with the original formulation in terms of the optimization problem. 
The similar routine is realized with the use of quantum-inspired annealing. We test the suggested approach with the simulated data first with the D-Wave quantum annealer (see Fig.~\ref{fig:comparison}) and compare our results with quantum-inspired optimization algorithm SimCIM. We note that the D-Wave annealer shows an advantage in genome assembly for short-length sequences, while it cannot be applied for sequences of length 6 and more due to the fact that the decoherence time becomes comparable with the annealing time. What we observe is that the performance of the annealing system is dependent on the properties of input data. While the exhaustive investigation of the nature of D-Wave performance with respect to input data goes beyond the scope of our research, we consider the connectivity of the input graph as one of the critical factors. The D-Wave 2000Q processor is based on Chimera Topology with native physical connectivity of 6 basic qubit cells. During the experiments on the synthetic dataset, we were able to embed the problems with a maximum sequence of up to 10 nucleotides. This length corresponds to the fully connected graph with 8 nodes (K8,8) and this is the maximum possible graph size, which can be embedded to the chimera lattice using clique embedding tool from DWave Ocean SDK. While Ocean SDK allows using other types of embedding (e.g., minor miner) we observed that clique embedding demonstrates more stable results due to the deterministic nature of embedding in comparison to the minor-miner tool, which is intrinsically based on randomized heuristics contributing to larger deviations across experiments. However, no viable solution that could reconstruct the original sequence was found for sequences longer than 6. 
\begin{figure} \includegraphics[width=1\linewidth]{fig2.pdf} \vskip -4mm \caption{Comparison of the performance of quantum and quantum-inspired methods for {\it de novo} genome assembly based on synthetic data (10 problems were generated for every sequence length): we compare the time-to-solution (TTS) for the quantum device D-Wave and the quantum-inspired optimization algorithm SimCIM.} \label{fig:comparison} \end{figure} \section*{Discussions} In our work, we have demonstrated the possibility of solving the simplified bioinformatics problem of reconstructing genome sequences using quantum annealing hardware and quantum-inspired algorithms. We have implemented the experimental quantum-assisted assembly of the $\phi$X 174 bacteriophage genome. On the basis of synthetic data, we have shown that the existing quantum hardware allows reconstructing short sequences of up to 7 nucleotides. In order to use quantum optimization for realistic tasks, the ratio of the decoherence time to the annealing time should be considerably improved. We note that while the decoherence time is not a fundamental limitation of the technology, the realization of quantum annealers with sufficient decoherence time remains a challenge. While D-Wave machines use superconducting quantum circuits, setups based on ultracold Rydberg atom arrays~\cite{Lukin2017,Lukin2018,Browaeys2020} and trapped ions~\cite{Monroe2017,Blatt2018} can also be used for the efficient implementation of quantum annealing and other quantum optimization algorithms. Specifically, the system of Rydberg atom arrays has been studied in the context of solving the maximum independent set problem~\cite{Lukin2018,Browaeys2020}, which is NP-hard. For longer sequences, as we have demonstrated, it is possible to use quantum-inspired algorithms that are capable of solving more complex problems using classical hardware. We note that our work is a proof-of-principle demonstration of the possibility of using existing quantum devices for solving the genome assembly problem.
The problem scale considered in this paper is still far from real sequences ($\sim$130 kilo-base pairs for primitive bacteria) and does not yet account for numerous complications, such as errors in sequencing reads and the handling of repeated sequences. However, the proposed method demonstrates that newly evolving computing techniques based on quantum computers and quantum-inspired algorithms are quickly developing and can soon be applied in new areas of science. Limitations of existing quantum hardware do not yet allow it to universally outperform existing solutions for {\it de novo} genome assembly. At the same time, one of the most interesting practical questions is when one can expect computational advantages from the use of quantum computing in genome assembly tasks. We note that in real-life conditions a number of additional challenges arise. Examples include errors (random insertions and deletions, repeats, etc.), genome contaminants (pieces of the genome not related to the subject of interest), polymerase chain reaction artifacts, and other effects that require additional post-processing steps. These problems are beyond the scope of our proof-of-principle demonstration and should be considered in the future. Another complication comes from the fact that temperature and other noise effects play a significant role in the case of realistic quantum devices. Thermal excitation and relaxation processes affect performance. Our further directions include the optimization of the QUBO model for a more compact spin representation and the integration of an error model into our algorithm. Solving these two issues can enable the reconstruction of real sequences using the quantum approach. \section*{Methods} \subsection*{Quantum annealing protocol} The initial Hamiltonian of the D-Wave processor is a transverse magnetic field of the following form: \begin{equation} \mathcal{H}_0=\sum_{i\in{V}}h_i\sigma_i^x, \end{equation} where $\sigma_i^x$ is the Pauli $x$-matrix, which acts on the $i$th qubit.
The problem Hamiltonian can be encoded in the following Ising Hamiltonian: \begin{equation} \mathcal{H}_{\rm P}=\sum_{i\in{V}}h_i\sigma_i^z+\sum_{(i,j)\in{E}}J_{ij}\sigma_i^z\sigma_j^z, \end{equation} where the $h_i$ describe local fields, the $J_{ij}$ stand for couplings, $\sigma_i^z$ are the Pauli $z$-matrices, and $E$ is the set of edges. One can see that $\mathcal{H}_{\rm P}$ is of diagonal form, so $\sigma_i^z$ can be treated as spin values $\{\sigma_i^z=\pm1\}$. For a given spin configuration ${\sigma_i^z}$ the total energy of the system is given by $\mathcal{H}_{\rm P}$, so by measuring the energy one can find a solution to the problem of interest. Quantum annealing can be applied to any optimization problem that can be expressed in the QUBO form; the idea is thus to reduce the problem of interest to this form. \subsection*{QUBO transformation} The Ising Hamiltonian can be directly transformed into a quadratic unconstrained binary optimization (QUBO) problem. The following transformation can be applied for this purpose: \begin{equation} w_i=\frac{\sigma_i^z+1}{2} \in \{0,1\}, \end{equation} where $\{\sigma_i^z=\pm1\}$. For solving the problem on the D-Wave quantum processor, all $h_i$ and $J_{ij}$ values are scaled to lie between $-1$ and $1$. As a result, the processor outputs a set of spin values $\{\sigma_i^z=\pm1\}$ that attempts to minimize the energy, and a lower energy indicates a better solution of the optimization problem. We note that Ref.~\cite{Lucas2014} provides a method for QUBO/Ising formulations of many NP problems. \subsection*{Quantum-inspired annealing using SimCIM} SimCIM is an example of a quantum-inspired annealing algorithm, which works in an iterative manner. It can be used for sampling low-energy spin configurations in the classical Ising model. The algorithm treats each spin value $s_i$ as a continuous variable, which lies in the range $[-1, 1]$.
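The change of variables $w_i=(\sigma_i^z+1)/2$ from the QUBO transformation above can be verified numerically. The following sketch (with an arbitrary two-spin toy instance of our own choosing) converts Ising coefficients $(h_i, J_{ij})$ into a QUBO matrix and checks that the two energies agree up to a constant offset:

```python
import itertools

def ising_to_qubo(h, J):
    """Map Ising (h, J) with s_i = +/-1 to QUBO (Q, offset) with
    w_i in {0, 1}, using s_i = 2*w_i - 1, i.e., w_i = (s_i + 1)/2."""
    n = len(h)
    Q = [[0.0] * n for _ in range(n)]
    offset = -sum(h)
    for i in range(n):
        Q[i][i] += 2.0 * h[i]
    for (i, j), Jij in J.items():
        Q[i][j] += 4.0 * Jij
        Q[i][i] -= 2.0 * Jij
        Q[j][j] -= 2.0 * Jij
        offset += Jij
    return Q, offset

def ising_energy(h, J, s):
    return sum(h[i] * s[i] for i in range(len(h))) + \
           sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def qubo_energy(Q, w):
    n = len(w)
    return sum(Q[i][j] * w[i] * w[j] for i in range(n) for j in range(n))

# Arbitrary toy instance (our own example, not from the paper).
h = [0.5, -1.0]
J = {(0, 1): 0.75}
Q, offset = ising_to_qubo(h, J)
for s in itertools.product([-1, 1], repeat=2):
    w = [(si + 1) // 2 for si in s]
    assert abs(ising_energy(h, J, s) - (qubo_energy(Q, w) + offset)) < 1e-12
```

Since the two energy landscapes differ only by the constant offset, minimizing the QUBO form is equivalent to minimizing the original Ising Hamiltonian.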
Each iteration of the SimCIM algorithm starts with calculating the mean field \begin{equation} \Phi_i = \sum_{j \neq i}J_{ij}s_j + b_i, \end{equation} which acts on each spin due to all other spins ($b_i$ is an element of the bias vector). Then the gradients for the spin values are calculated according to $\Delta s_i = p_t s_i + \zeta \Phi_i + N(0,\sigma)$, where $p_t, \zeta$ are the annealing control parameters and $N(0,\sigma)$ is Gaussian noise. Then the spin values are updated according to $s_i \leftarrow \phi(s_i + \Delta s_i)$, where $\phi(x)$ is the activation function \begin{equation}\label{acti} \phi(x)=\begin{cases} x \textrm{ for } |x|\leq 1;\\ x/|x| \textrm{ otherwise.} \end{cases} \end{equation} After multiple updates, the spins will tend to either $-1$ or $+1$, and the final discrete spin configuration is obtained by taking the sign of each $s_i$. \subsection*{Bacteriophage simulations} \begin{figure}[t] \includegraphics[width=0.65\linewidth]{fig3.jpeg} \vskip -4mm \caption{Experimental scheme for the synthetic dataset.} \label{fig:dataset} \end{figure} We use {\it Grinder} \cite{Angly2012} to simulate raw reads from the $\phi$X 174 bacteriophage complete genome (NCBI Reference Sequence: NC\_001422.1). To simplify the task and make it feasible for quantum computing, we generate 50 reads in each run of simulations. In our proof-of-concept research, we are focused on finding the Hamiltonian path in the OLC graph using quantum and quantum-inspired annealing. We generate the raw reads with no sequencing errors, and the length of each read is equal to 600 base pairs. We build the OLC graph using the pairwise alignment of the raw reads implemented in the {\it minimap2} package~\cite{Li2018_2}. We run {\it minimap2} with the predefined set of parameters {\sf ava-ont} and $k=10$. We apply {\it miniasm}~\cite{Li2016} to the same data as the benchmark assembler, which uses OLC graphs.
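For concreteness, the SimCIM update rule described above can be sketched as follows. The linear ramp for $p_t$, the parameter values, and the toy two-spin instance are our own illustrative assumptions; note that, with the update rule as written, the dynamics aligns the spins with the mean field, so the signs of $J_{ij}$ and $b_i$ must be chosen accordingly.

```python
import random

def simcim(J, b, n_iter=2000, zeta=0.5, sigma=0.01, seed=1):
    """Minimal SimCIM-style sketch: continuous spins s_i in [-1, 1] are
    driven by the mean field Phi_i = sum_j J_ij s_j + b_i with Gaussian
    noise, then rounded to +/-1. The linear ramp of the gain p_t from -1
    to +1 is an illustrative annealing schedule, not the published one."""
    rng = random.Random(seed)
    n = len(b)
    s = [0.0] * n
    for t in range(n_iter):
        p_t = -1.0 + 2.0 * t / n_iter
        phi = [sum(J[i][j] * s[j] for j in range(n) if j != i) + b[i]
               for i in range(n)]
        for i in range(n):
            ds = p_t * s[i] + zeta * phi[i] + rng.gauss(0.0, sigma)
            s[i] = max(-1.0, min(1.0, s[i] + ds))  # activation function phi(x)
    return [1 if si >= 0 else -1 for si in s]

# Toy instance: a coupled pair with a small positive bias, whose
# favoured configuration under these dynamics is both spins up.
spins = simcim(J=[[0.0, 1.0], [1.0, 0.0]], b=[0.2, 0.2])
```

Early in the anneal ($p_t<0$) the self-term damps the spins and the mean field dominates, so correlated configurations grow first; late in the anneal the gain amplifies whichever configuration has emerged until the clipping saturates it.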
\medskip \begin{table} \begin{tabular}{|c|c|c|c|} \hline Sequence length & Graph size & QUBO size & Physical qubits \\ \hline 5 & 3 & 9 & 36 \\ \hline 6 & 4 & 16 & 80 \\ \hline 7 & 5 & 25 & 200 \\ \hline 8 & 6 & 36 & 360 \\ \hline 9 & 7 & 49 & 686 \\ \hline 10 & 8 & 64 & 1088 \\ \hline \end{tabular} \caption{Problem sizes for the synthetic dataset: sequence length, OLC graph size, QUBO matrix size, and number of physical qubits after embedding.}\label{tab:graph} \end{table} For experiments with quantum annealing, we use public access to the D-Wave 2000Q via the Leap SDK. We evaluated the impact of tunable parameters (particularly, the annealing time) on the final solution quality; however, no significant improvement was discovered over the default values, so the annealing time was set to 20 $\mu$s (the default value). The number of annealing runs is set to $10^3$ (the maximum possible value). During our experiments we mostly use the standard configuration of the D-Wave processor, so we do not have any specific requirements on the weights/couplers in the model. Synthetic dataset graphs, which consist of reads no longer than 7 nucleotides (25 graph nodes), are small enough to fit into the quantum annealer, so we can use the DW\_2000Q\_5 backend (pure quantum mode of operation; see the following section). However, the size of the $\phi$X 174 bacteriophage graph (248 vertices) is too large. In order to embed the $\phi$X 174 graph into the D-Wave processor, we first conducted manual graph partitioning using classical algorithms implemented in the METIS tool~\cite{METIS}. This allows splitting the original graph into 3 sub-parts, which are carefully selected so that only a single edge remains between them. The longest path is then calculated separately for each part and concatenated into a single path of the original graph. Each part is still too large to be computed using the purely quantum mode, so we use the D-Wave hybrid computing mode --- the hybrid\_v1 backend.
D-Wave hybrid computing mode employs further graph decompositions with parallel computing on both classical and quantum backends. The specifics of such decomposition are not publicly available, and the physical qubit count is also not shown to the end user. Details of the graph size for each part after manual graph partitioning are presented in Table~\ref{tab:graph}. According to the D-Wave Leap specification, the hybrid\_v1 backend automatically combines the power of classical and quantum computation. \subsection*{Simulations with synthetic dataset} In order to evaluate the performance of the algorithm in a controlled setup, we generated several hundred random nucleotide sequences with variable length and performed the corresponding transformations as shown in Fig.~\ref{fig:dataset}. Further, we eliminated graph duplicates and other trivial cases, where the graph structure contained no auxiliary edges. Finally, we selected 60 sequences that produce unique OLC graphs with comparable complexity. {\it Note Added}. Recently, we became aware of a work reporting studies of quantum acceleration using gate-based and annealing-based quantum computing~\cite{Sarkar2020}. Corresponding author: A.K.F. ([email protected]). \section*{Acknowledgements} We are grateful to A.I. Lvovsky for fruitful discussions as well as to A.S. Mastiukova and D.V. Kurlov for useful comments. We also thank the anonymous referee for carefully reading our manuscript and for insightful comments that helped to improve the paper. We thank E.S. Tiunov for providing information about the SimCIM algorithm and A.E. Ulanov for the discussion of various quantum-inspired optimization algorithms. This work is supported by the Russian Science Foundation (19-71-10092).
We also thank D-Wave Systems (the research is conducted within the program of global response to COVID-19). \section*{Competing interests} Owing to the employment and consulting activities of A.S.B., S.R.U., E.O.K., and A.K.F., they have financial interests in the commercial applications of quantum computing. A.S.R., I.V.P., and V.V.I. are employees of Genotek Ltd; they declare that they have no other competing interests. A.N.K. declares no competing interests.
\section{Introduction} The estimation of the impact flux of near-Earth objects (NEOs) is important not only for the protection of human civilization, but also for the protection of space assets, which could be damaged or at least perturbed even by small, mm- to cm-size impactors. A recent example that reveals this necessity was the Chelyabinsk event in 2013, which was caused by a 20-m impactor and member of the NEO population. This collision was responsible for 1,500 injured civilians and a few thousand damaged human assets in the area \citep{popova2013}. The precise flux density of objects in this size range is not yet well known. A promising method for constraining NEO flux densities in this size range is via the detection of impact flashes on the Moon. A plethora of laboratory impact experiments have been conducted over the last 40 years, initiated primarily to study spacecraft shielding and using mainly metallic materials \citep[and references therein]{2002holsappleast3}. Apart from these technical experiments, hypervelocity impacts (impact speeds $v>1$~km~s$^{-1}$) are also studied at small scales. One important goal is to extrapolate the results to larger size and velocity scales, towards an understanding of the collisions on planetary surfaces by asteroids and comets or even among the small bodies, for example, the inter-asteroid collisions in the Main Belt. It has been clearly shown that several impactor parameters, such as the impactor's size, density, velocity, and impact angle, affect the collision outcome, for example,\ the crater formation and the size and speed of the ejecta plume \citep{ryan1998,1999housen,2003housen,2011housen,me2016,me2017}. Observations have shown that, for example, highly porous objects tend to be destroyed tens of km above the surface of the Earth, as was the case with 2008~TC$_3$ \citep{2009jenniskens, 2010MAPS...45.1638B} and the Benesov bolide \citep{borovicka1998}.
Telescopic surveys, such as the Catalina Sky Survey \citep{drake2009} and Pan-STARRS \citep{chambers2016}, are continuously discovering new objects, which are verified by follow-up observations from observers all around the world. Space missions such as WISE and the {\it Spitzer} Space Telescope, along with spectroscopic observations, provide valuable data to start characterizing the physical properties of NEOs, such as their diameters, albedos, and spectral types. Over 16,300 NEOs have been identified \citep[as of August 2017]{jpl}, of which only 1,142 have known diameters $d$ \citep{delbo2017}, with the smallest ones being less than ten meters. The NEO population consists of small bodies delivered from the source regions of the Main Belt via mean motion and secular resonances with the planets \citep{bottke2000,bottke2002}. Currently, the completeness of the observed sample is at $d\sim1$~km, as surveys are not able to massively detect NEOs that are smaller than a few tens of meters in diameter. In fact, the very small bodies are usually detected only when their orbits bring them in close proximity to Earth. For example, the 4-m near-Earth asteroid 2008~TC$_3$ was discovered only 19~h prior to its impact on Earth, and immediate radar observations provided its size \citep{2009jenniskens}. During the last decades, advances have been made by several groups, leading to a better estimation of the sizes of small impactors and their flux on the Moon, correcting for the Earth as a target. This was done by calculating the luminous efficiency $\eta$ of the detected flashes, which is defined as the fraction of the impactor's kinetic energy ($KE$) that is emitted as light at visible wavelengths ($L$), that is,\ $L = \eta \times KE$. Great uncertainties occur during these calculations when the events originate from sporadic NEOs (those not associated with meteor streams), since the collisional velocity of the meteoroids on the Moon is unknown.
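Given the definition $L = \eta \times KE$ above, an assumed luminous efficiency and impact speed immediately yield an impactor mass via $m = 2L/(\eta v^{2})$. A short sketch, with purely illustrative numbers, is:

```python
def impactor_mass(L, eta, v):
    """Impactor mass (kg) from the luminous energy L (J), the luminous
    efficiency eta, and an assumed impact speed v (m/s),
    via L = eta * (1/2) * m * v**2."""
    return 2.0 * L / (eta * v**2)

# Illustrative values only: a flash radiating 1e6 J in the visible,
# eta = 1.5e-3, and a nominal sporadic impact speed of 20 km/s.
m = impactor_mass(1.0e6, 1.5e-3, 20e3)  # ~3.3 kg
```

Because the mass scales as $1/(\eta v^{2})$, the order-of-magnitude spread in published $\eta$ values and the unknown speed of sporadic impactors propagate directly into the derived masses, which is the point made in the text.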
Several authors adopt average impact speeds for the lunar surface spanning a wide range between 16 and 24~km~s$^{-1}$ \citep{ortiz2000,ortiz2002,suggs2014}. Uncertainties in the speed estimation lead to uncertainties in the luminous efficiency value $\eta$. The current estimations of the luminous efficiency of the lunar impactors range over an order of magnitude, resulting in weakly constrained masses. However, when the impact events are linked to a known meteoroid stream, this unknown parameter can be constrained \citep[e.g.,][]{bellotrubio2000A}, yielding masses that can be appended to the currently known impactors' size distribution \citep{harris_ast4} and can also be used for further studies. Apart from the NEO flux and size distribution, the lunar surface serves as a large-scale impact laboratory to study impact events. The term ``large-scale'' refers to both the impactor sizes and speeds when comparing them to laboratory-based hypervelocity experiments, where the sizes of impactors are typically a few mm and the speeds below 8~km~s$^{-1}$ \citep[e.g.,][]{1999MeScT..10...41B}. The collisions of NEOs on the Moon give rise to several phenomena that can be detected and further studied, such as impact cratering \citep{speyerer2016}, seismic waves \citep{olberst1991}, and the enhancement of the lunar atmosphere with sodium \citep{verani1998,smith1999}. The light flash produced by an impact depends on several parameters, including the mass and speed of the impactor. Even when the mass and speed are known, the different combinations of mineralogical compositions of both the target and impactor will affect the result. Pioneering laboratory experiments were conducted more than 40 years ago, using dust accelerators and photomultipliers with filters at several wavelengths, allowing the estimation of the plasma temperature \citep{eichhorn1975,eichhorn1976, burchell1996B, burchell1996A}.
Therefore, the study of impact flashes could provide insight into the complex problem of energy partitioning during an impact event, when the majority of physical parameters are constrained or measured (e.g.,\ mass and speed of the impactor, crater size, ejecta speed). The NELIOTA project\footnote{https://neliota.astro.noa.gr} (Xilouris et al., in prep.) provides the first lunar impact flash observations performed simultaneously in more than one wavelength band. In Section~2 we present the instrumentation, the observation strategy, and the first ten lunar impact flashes from NELIOTA, providing their durations and magnitudes. In Section~3, we focus on the first ever measurement of impact flash temperatures using our two-colour observation technique, while in Section~4 we present a new approach to estimate the impactors' masses. The discussion and conclusions are given in Section~5. \section{Observations} \label{observations} NELIOTA has upgraded the 1.2-m Kryoneri telescope\footnote{http://kryoneri.astro.noa.gr/} and converted it to a prime-focus instrument with a focal ratio of f/2.8 for lunar monitoring observations. The telescope has been equipped with two identical Andor Zyla scientific CMOS cameras, which are installed at the prime focus and are thermoelectrically cooled to 0$\degr$C. A dichroic beam-splitter with a cut-off at 730~nm directs the light onto the two cameras ($2560\times2160$~pixels$^2$, $6.48~\mu$m per pixel), which observe in visible and near-infrared wavelengths using Cousins $R$ and $I$ filters, respectively. The maximum transmittance of each filter is at $\lambda_{R}=641$~nm and $\lambda_{I}=798$~nm, corresponding to a maximum quantum efficiency of $\sim50\%$ and $\sim40\%$, respectively. The field-of-view of this setup is $16.0' \times 14.4'$. We use the $2\times2$ binning mode, which yields a pixel-scale of $0.8''$, as it best matches the $1.2-1.5''$ average seeing and results in a lower volume of data.
Currently, the NELIOTA system has the largest telescope with the most evolved configuration that performs dedicated monitoring of the Moon in search of faint lunar impact flashes. Observations are conducted on the dark side of the Moon, between lunar phases $\sim$0.1 and 0.4. The maximum lunar phase during which observations can be obtained is set by the strength of the glare coming from the sun-lit side of the Moon. The observations begin $\sim$20~min after sunset or end $\sim$20~min before sunrise and last for as long as the Moon is above an altitude of 20$\degr$. The altitude limit is set by limitations of the dome slit. The cameras simultaneously record at a frame rate of 30 frames per second, that is, one frame every 33~ms, in $2\times2$ binning mode. The exposure time of each frame is 23~ms, followed by a read-out time of 10~ms. The observations are split into ``chunks'' that are $15$~min in duration. At the end of each chunk, a standard star is observed for calibration purposes. The standard stars have been carefully selected a) to be as close as possible to the altitude of the Moon and b) to have similar color indices to the expected colors of the flashes (i.e.,\ $0.3<R-I<1.5$~mag). Flat-field images are taken on the sky before or after the lunar observations, while dark frames are obtained directly after the end of the observations. The duration of the observations varies between $\sim$25~min and $\sim$4.5~h, depending on the lunar phase and the time of year. The novelty of the NELIOTA instrumentation setup is that it simultaneously acquires data from two detectors at two different wavelengths. This setup enables the validation of a flash from a single telescope and site, since a real event that is bright enough will be detected by both cameras at the same position and at the same time, whereas cosmic-ray artefacts will only be detected by one camera at any given position and time. Although satellites are also common artefacts, they are typically recorded as streaks.
Satellites moving at low enough speeds so as not to show up as streaks in our 23-ms exposure time have to be far away: assuming, in the worst case, an object with a perigee of 300~km, the apogee has to be at least at 17,500~km to result in an apparent movement of less than 1 pixel per 23~ms. At this distance, an object with a reflectivity of 0.5 would need to be at least 2~m in size to be detected as a magnitude-11 flash. Geosynchronous satellites could also produce artefacts due to the reflection of sunlight off their solar panels. However, since their positions are well known and clustered around a declination of approximately zero, they pose no major concern as artefacts, as they can be ruled out by using available catalogs of geosynchronous satellite positions. This paper presents and analyzes the first ten flashes that were validated during the testing phase and the first months of the NELIOTA campaign, from February to July 2017. These flashes originate from sporadic NEOs. We checked various orbital catalogs of satellites and could not find any objects in front of the Moon at the times of the detected flashes. We note that the synchronization of the cameras during the frame acquisition for these flashes is better than 6~ms. All validated flashes are made available on the NELIOTA website within 24 hours of the observations. \subsection*{Photometry} The data reduction is performed automatically by the NELIOTA pipeline (described in Xilouris et al., in prep.) using the median images of the respective calibration files (flat and dark images). These master-images are used to calibrate the data of the Moon as well as those of the standard stars. The pipeline searches for flashes on the images, after computing and subtracting a running, weighted average image, which removes the lunar background.
Due to the nonuniform background around a flash, which is caused by surface features of the Moon (e.g.,\ craters, maria) and earthshine, we performed photometry of the flashes on background-subtracted images. We created these images by subtracting a median lunar image based on the five frames before and five frames after the event. Aperture photometry with the AIP4WIN software \citep{Berry00} was then performed for both the flashes and the standard stars observed nearest in time to each flash. Optimal apertures corresponding to the maximum in the signal-to-noise ratio (S/N) of the flux measurement were used for the flashes to avoid adding noise from the subtracted background, while large apertures were used for the standards. Since the standard stars are observed at approximately the same airmass as the lunar surface, we can compute the flash magnitudes in each filter as: \begin{equation} m_{flash} = m_{star} + 2.5 \log \left(\frac{S} {F}\right), \label{magnitudes} \end{equation} where m$_{star}$ and m$_{flash}$ are the calibrated magnitude of the standard star and the magnitude of the flash, respectively, and $S$ and $F$ are the fluxes of the star and flash for the same integration time. All photometric measurements and error determinations were independently computed using the IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.} \textit{apphot} package and were found to agree within errors with the results from AIP4WIN. Table~\ref{table1} presents the date and universal time at the start of the observation for each impact flash detection, its $R$ and $I-$band magnitude and error, and the duration recorded in $I$, as well as the temperature and mass measurements described in the following sections.
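Equation~(\ref{magnitudes}) can be evaluated directly; the following sketch uses made-up count values (the real calibration, of course, uses the measured fluxes and the catalog magnitude of the observed standard star):

```python
import math

def flash_magnitude(m_star, star_flux, flash_flux):
    """Calibrated flash magnitude from a standard star observed at
    approximately the same airmass: m_flash = m_star + 2.5 log10(S/F)."""
    return m_star + 2.5 * math.log10(star_flux / flash_flux)

# Made-up example: a 9.0-mag standard star measured at 40000 counts and
# a flash at 1600 counts over the same integration time.
m_flash = flash_magnitude(9.0, 40000.0, 1600.0)  # ~12.49 mag
```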
The flash durations are estimated by multiplying the 33-ms frame interval by the number of frames the flash was detected on and are thus upper limits to the real flash durations. The durations range between 33 and 165 ms, in agreement with previously reported values \citep[e.g.,][]{Yanagisawa02}. We note that Flashes 2, 6, 7, and 10 were detected over multiple, consecutive frames. Flashes 2 and 10 are the brightest flashes in the current dataset and had simultaneous detections in both bands in consecutive frames. They are used below to measure the temperature evolution of the flashes. \begin{table*} \centering \caption{Dates, universal times (UT), magnitudes in each filter, the duration recorded in the $I-$band (listed for the first entry of each flash), temperatures, and impactor masses of the first ten NELIOTA flashes. Multiple entries per flash correspond to the consecutive frames they were detected on. Masses are calculated for both $\eta_{1}$ and $\eta_{2}$ values (see text for details).}\label{table1} \begin{tabular}{l|ccrrclrr} \hline\hline Flash & Date & UT& R $\pm$ $\sigma_R$ & I $\pm$ $\sigma_I$ & Duration & T $\pm$ $\sigma_T$ & Mass ($\eta_1$) $\pm$ $\sigma_M$& Mass ($\eta_2$) $\pm$ $\sigma_M$\\ &&& (mag) & (mag) & (ms) & (K) & (kg) & (kg) \\ \hline 1\tablefootmark{a} & 2017-02-01 & 17:13:57.863 &10.15 $\pm$ 0.12 & 9.05 $\pm$ 0.05 & 33 & 2350 $\pm$ 140 & 0.6 $\pm$ 0.3 & 0.2 $\pm$ 0.1\\ \hline 2\_1\tablefootmark{b} & 2017-03-01 & 17:08:46.573 & 6.67 $\pm$ 0.07 & 6.07 $\pm$ 0.06 & 132 & 3100 $\pm$ 30 & 4.4 $\pm$ 0.5 & 1.6 $\pm$ 0.2\\ 2\_2 & 2017-03-01 & 17:08:46.606 & 10.01 $\pm$ 0.17 & 8.26 $\pm$ 0.07 & $-$ & 1775 $\pm$ 100 & $-$ & $-$\\ 2\_3 & 2017-03-01 & 17:08:46.639 & $-$ & 9.27 $\pm$ 0.10 & $-$ & $-$ & $-$ & $-$\\ 2\_4 & 2017-03-01 & 17:08:46.672 & $-$ & 10.57 $\pm$ 0.15 & $-$ & $-$ & $-$ & $-$\\ \hline 3\tablefootmark{b} & 2017-03-01 & 17:13:17.360 & 9.15 $\pm$ 0.11 & 8.23 $\pm$ 0.07 & 33 & 2568 $\pm$ 130 & 0.9 $\pm$ 0.4 & 0.4 $\pm$ 0.1\\ \hline
4\tablefootmark{c} & 2017-03-04 & 20:51:31.853 & 9.50 $\pm$ 0.14 & 8.79 $\pm$ 0.06 & 33 & 2900 $\pm$ 270 & 0.4 $\pm$ 0.3 & 0.2 $\pm$ 0.1\\ \hline 5\tablefootmark{d} & 2017-04-01 & 19:45:51.650 &10.18 $\pm$ 0.13 & 8.61 $\pm$ 0.03 & 33 & 1910 $\pm$ 100 & 2.3 $\pm$ 1.0 & 0.8 $\pm$ 0.4\\ \hline 6\_1\tablefootmark{e} & 2017-05-01 & 20:30:58.137 &10.19 $\pm$ 0.18 & 8.84 $\pm$ 0.05 & 66 & 2070 $\pm$ 170 & 1.3 $\pm$ 0.9 & 0.5 $\pm$ 0.3\\ 6\_2 & 2017-05-01 & 20:30:58.170 & $-$ & 10.44 $\pm$ 0.21 & $-$ & $-$ & $-$ & $-$\\ \hline 7\_1\tablefootmark{f} & 2017-06-27 & 18:58:26.680 &11.07 $\pm$ 0.32 & 9.27 $\pm$ 0.06 & 66 & 1730 $\pm$ 210 & 2.5 $\pm$ 2.4 & 0.9 $\pm$ 0.9\\ 7\_2 & 2017-06-27 & 18:58:26.713 & $-$ & 10.80 $\pm$ 0.21 & $-$ & $-$ & $-$ & $-$\\ \hline 8\tablefootmark{g} & 2017-07-28 & 18:42:58.027 &10.72 $\pm$ 0.24 & 9.63 $\pm$ 0.10 & 33 & 2340 $\pm$ 310 & 0.4 $\pm$ 0.5 & 0.2 $\pm$ 0.2\\ \hline 9\tablefootmark{g} & 2017-07-28 & 18:51:41.683 &10.84 $\pm$ 0.24 & 9.81 $\pm$ 0.09 & 33 & 2410 $\pm$ 310 & 0.3 $\pm$ 0.3 & 0.1 $\pm$ 0.1\\ \hline 10\_1\tablefootmark{g} & 2017-07-28 & 19:17:18.307 & 8.27 $\pm$ 0.04 & 6.32 $\pm$ 0.01 & 165 & 1640 $\pm$ 20 &55 $\pm$ 19 & 20 $\pm$ 7\\ 10\_2 & 2017-07-28 & 19:17:18.340 & 9.43 $\pm$ 0.12 & 7.44 $\pm$ 0.02 & $-$ & 1620 $\pm$ 70 & $-$ & $-$\\ 10\_3 & 2017-07-28 & 19:17:18.373 & $-$ & 8.89 $\pm$ 0.07 & $-$ & $-$ & $-$& $-$\\ 10\_4 & 2017-07-28 & 19:17:18.406 & $-$ & 9.38 $\pm$ 0.11 & $-$ & $-$ & $-$& $-$\\ 10\_5 & 2017-07-28 & 19:17:18.439 & $-$ & 10.29 $\pm$ 0.23 & $-$ & $-$ & $-$& $-$\\ \hline \end{tabular} \tablefoot{The standard stars used for the calibration of each flash are: \tablefoottext{a}{SA 92$-$263,} \tablefoottext{b}{SA 93$-$333,} \tablefoottext{c}{SA 97$-$345,} \tablefoottext{d}{LHS 1858,} \tablefoottext{e}{2MASS J09212193$+$0247282,} \tablefoottext{f}{GSC 04932$-$00246,} \tablefoottext{g}{GSC 00362$-$00266.}} \end{table*} \section{Temperature estimation of the impact flashes} \label{temperature} The NELIOTA observations
provide the first observational evidence for the temperature of impact flashes. Since we measure the emitted flux density in two different filters ($R$ and $I$), we can determine the flash temperature by comparing the intensities in the two wavelength bands. Assuming black-body emission \citep{eichhorn1975, burchell1996A,ernst2004,suggs2014}, a given temperature will result in a specific ratio between the measured intensities in the $R$ and $I-$bands. The ratio of the energies $E_{1}/E_{2}$ released at two different wavelengths depends only on the temperature $T$. Here we present an analytical method for calculating the temperatures of the NELIOTA flashes. The Planck formula is given by: \begin{equation} B(\lambda,T) = \frac{2hc^{2}}{\lambda^{5}}\frac{1}{\exp(\frac{hc}{\lambda k_{B}T}) - 1}, \end{equation} where $h=6.62 \times 10^{-34}$\;kg\;m$^{2}$\;s$^{-1}$ is the Planck constant, $c=3 \times 10^{8}$\;m\;s$^{-1}$ the speed of light, $k_{B}$\;=\;1.38$\times$10$^{-23}$\;kg\;m$^{2}$\;s$^{-2}$\;K$^{-1}$ the Boltzmann constant, $T$ and $\lambda$ the temperature of the flash and the wavelength of the photons, respectively. Dividing the Planck formula by the energy $E=h c/\lambda$ per photon, we obtain the photon radiance per wavelength $L_P(\lambda,T)$: \begin{equation} L_P(\lambda,T) = \frac{2c}{\lambda^{4}}\frac{1}{\exp(\frac{hc}{\lambda k_{B}T}) - 1} .\end{equation} Equation~3 is now linked to the absolute flux, $f_{\lambda}$, of the flash as: \begin{equation} f_{R} = \Omega L_P(R,T) \quad\text{and}\quad f_{I} = \Omega L_P(I,T) \quad\text{for each filter,}\\ \end{equation} where $\Omega$ is a constant. Since the observations are performed simultaneously at two different wavelengths, $R$ and $I$, we measure the two instrumental fluxes for the flash ($F_{R}$ and $F_{I}$) and for the standard star ($S_{R}$ and $S_{I}$).
These measured fluxes are linked to the absolute ones ($f_{R},f_{I}$ and $s_{R},s_{I}$) via the factors $\xi_{R}$ and $\xi_{I}$, which depend on the instrument and atmospheric transmission. Therefore, for each $\lambda$ we get: \begin{subequations} \begin{equation} F_{R} = \xi_{R} f_{R} \quad\text{and}\quad F_{I} = \xi_{I} f_{I}\quad \text{for the flash,} \end{equation} \begin{equation} S_{R} = \xi_{R} s_{R} \quad\text{and}\quad S_{I} = \xi_{I} s_{I}\quad\text{for the star.} \end{equation} \end{subequations} Using the color of the standard star ($R-I$), which is known from the literature, and the ratio of Eq.~5b, we obtain the value of the ratio $\xi_{I}/\xi_{R}$, \begin{subequations} \begin{equation} R-I= -2.5 \log \left(\frac{s_{R}}{s_{I}}\right) = -2.5 \log \left(\frac {\xi_{I}} {\xi_{R}}\frac{S_{R}}{S_{I}}\right) ,\end{equation} \begin{equation} \xi=\frac {\xi_{I}} {\xi_{R}} = \frac{S_{I}}{S_{R}} 10^{-0.4 \;(R-I)}. \end{equation} \end{subequations} \begin{figure}[!hb] \includegraphics[width=0.35 \textwidth, angle=270]{lc.eps} \includegraphics[width=0.35 \textwidth, angle=270]{temperatures.eps} \caption{\textit{Upper panel:} Light curves of the four multi-frame events in the $I$ (filled circles; solid line) and $R$ (open circles; dotted line) bands. \textit{Lower panel:} Temperature evolution for Flashes 2 and 10.} \label{flash} \end{figure} The $\xi$ value is now used to find the ratio of the flash flux in both filters $f_{R}/f_{I}$ using Eq.~5a. From the ratio of Eq.~4, substituting the $L_P(R,T)$/$L_P(I,T)$ expressions from Eq.~3 and the $f_{R}/f_{I}$ using Eq.~5a, we have: \begin{equation} \frac {L_P(R,T)}{L_P(I,T)} = \xi \frac {F_{R}} {F_{I}} ,\end{equation} and thus the temperature $T$ becomes the only unknown parameter, which is calculated numerically using Eq.~7. For each event, we performed 10$^{5}$ Monte Carlo simulations in order to compute the standard deviation of each temperature measurement.
At each iteration, random numbers were obtained from the observed flux distribution. The values of $f_{R}$ and $f_{I}$ were extracted from a Gaussian distribution centered at the nominal value of each flux, while adopting the standard deviation that resulted from the photometry. All temperatures and their uncertainties are presented in Table~\ref{table1}. The multi-frame Flashes 2 and 10 enable us to calculate the temperature drop for the first time, as they have simultaneous detections in both bands in consecutive frames. We find a temperature decrease of 1,325~$\pm$~104~K for Flash 2 and 20~$\pm$~73~K for Flash 10, that is,\ between the first detection and the subsequent one 33~ms later. The temperature evolution appears very different for each case and indicates a large difference in the impactor size, as a larger and heavier object will take longer to cool. A larger sample of multi-frame flashes from NELIOTA will allow us to determine the cooling behavior of the flashes and its relation to the impactor mass. Figure~\ref{flash} illustrates the light curve evolution for the four multi-frame flashes and temperature evolution for Flashes 2 and 10. The data are plotted at the end of the frame read-out of the corresponding measurement. All $I-$band light curves have a similar slope. Flash 2 presents a steeper decrease in the $R-$band than in the $I-$band. \section{Mass estimation of the impactors} \label{mass} The first step for the mass estimation is to derive the luminous energy $L$ of the impact event. Given that observations up to now were mostly carried out using a single $R-$band filter, the value of $L$ was not well constrained \citep{bellotrubio2000A, bouley2012, ortiz2015, madiedo2015, suggs2014, suggs2017}. In this paper, we are able to estimate $T$ for the first time from the two wavelength bands provided by NELIOTA, and therefore can directly derive the luminous energy. 
Assuming black-body radiation from a spherical area, the bolometric energy is expressed in Joules as: \begin{equation} L = \sigma A T^4 t, \end{equation} where $\sigma=5.67 \times 10^{-8}$\;W\;m$^{-2}$\;K$^{-4}$ is the Stefan-Boltzmann constant, $A=2 \pi r^{2}$ the emitting area of radius $r$ for a flash near the lunar surface, $T$ the flash temperature derived above, and $t$ the exposure time of the frame when the photons were integrated. However, this calculation is not straightforward since we do not know the size of the radiating plume. A reasonable assumption is that the flashes are not resolved and thus the area is smaller than the pixel-scale ($0.8''$), which corresponds to a linear distance of $\sim$1,500~m at the center of the Moon's disk. The flux of the event at a specific wavelength, $f_{\lambda}$, is related to Planck's law expressed in photon radiance per wavelength (as described in Eq.~3 \& 4): \begin{equation} f_{\lambda} = \frac{L_P(\lambda, T)\epsilon \pi r^{2}}{D^{2}}, \end{equation} where $r$ is the radius of the radiative area, $D$ the Earth-Moon distance at the time of the observation and $\epsilon$ the emissivity, which we assume to be 1. The monochromatic flux of the flash $f_{\lambda}$ can be calculated from: \begin{equation} m_{\lambda}-m_{Vega(=0)} = -2.5 \log \left(\frac{f_{\lambda}}{f_{Vega,\lambda}}\right) ;\end{equation} therefore Eq.~9 can be solved for the unknown area of radius $r$. For the error estimation in $r$, we followed the approach described for the $T$ error estimation. We performed Monte Carlo simulations for the absolute flux estimation using Eq.~10, by randomly selecting flash magnitudes ($m_{\lambda}$) from their Gaussian distribution, with centers and standard deviations from the values of Table~\ref{table1}. This procedure was repeated for each filter and returned the absolute fluxes with their 1$\sigma$ values. 
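Solving Eq.~9 for the radius of the radiating area is a direct inversion, sketched below. The Vega zero-point flux, the Earth-Moon distance, and the wavelength used in the round-trip check are placeholders for illustration, not values quoted in this work.

```python
import math

H, C, KB = 6.62e-34, 3.0e8, 1.38e-23

def photon_radiance(lam, T):
    """Photon radiance per wavelength, L_P(lambda, T) (Eq. 3)."""
    return (2.0 * C / lam**4) / math.expm1(H * C / (lam * KB * T))

def mag_to_flux(m, f_zeropoint):
    """Eq. 10: magnitude to absolute flux, given the zero-point (Vega)
    flux of the band; the zero point itself is an input, not a value
    quoted in the text."""
    return f_zeropoint * 10.0 ** (-0.4 * m)

def emitting_radius(f_lam, lam, T, D, eps=1.0):
    """Invert Eq. 9, f = L_P(lam,T) * eps * pi * r^2 / D^2, for r."""
    return math.sqrt(f_lam * D**2 / (photon_radiance(lam, T) * eps * math.pi))
```

A simple consistency check is the round trip: generate a flux from an assumed radius with Eq.~9 and recover the same radius by inversion.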
In turn, these values were used as input for new Monte Carlo simulations for the calculation of $r$. Since the differences between the two filters were small, the final value for a single event is the simple average of the two areas derived (one for each filter). The luminous energy of the flash $L$, which can now be easily derived from Eq.~8, is just a fraction $\eta$ (the luminous efficiency) of the impactor's initial kinetic energy $KE$: \begin{equation} KE = \frac{L} {\eta}= \frac{1}{2} m v^2, \end{equation} where $m$ in kg is the mass of the impactor and $v$ in m~s$^{-1}$ the impact speed. In this work we use the formula derived by \citet{swift2011} and also used by \citet{suggs2014}: \begin{equation} \eta = 1.5\times 10^{-3} e^{-(v_o/v)^2 } ,\end{equation} where $v_o=9.3$~km s$^{-1}$, in order to estimate the luminous efficiencies $\eta_{1}$ and $\eta_{2}$ for two extreme impact velocities, 16~km~s$^{-1}$ and 24~km~s$^{-1}$ \citep{steel1996,mcnamara2004}, respectively. Table~\ref{table1} presents the resulting masses, which range between $0.3-55$~kg for $\eta_{1}=1.07 \times 10^{-3}$ and $0.1-20$~kg for $\eta_{2}=1.29 \times 10^{-3}$. \section{Discussion and Conclusions} NELIOTA is the first lunar monitoring system that enables the direct temperature measurement of observed lunar impact flashes, thanks to its unique twin camera and two-filter observation setup. Until now, the temperature could only be estimated, as it was based on modeling or experimental work. For example, \citet{suggs2014} used $T$=2,800~K from \citet{nemtchinov1998}. \citet{cintala1992} suggested that the flash temperatures, which depend on the type of material on the lunar surface, should range between 1,700~K and 3,800~K. The agreement of the values we obtain for the first NELIOTA flashes ($\sim$1,600--3,100~K) with the theoretical range is of great importance for estimating the luminosity of an impact flash and therefore its mass and size. 
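The luminous-efficiency relation and the mass estimate above (Eqs.~11 and~12) translate directly into code. The sketch below reproduces the two quoted efficiencies; the luminous energy passed to the mass estimate is a free input, not a measured value.

```python
import math

def luminous_efficiency(v, v0=9.3e3):
    """Eq. 12: eta = 1.5e-3 * exp(-(v0/v)^2), speeds in m/s."""
    return 1.5e-3 * math.exp(-((v0 / v) ** 2))

def impactor_mass(L, v):
    """Eqs. 11-12: KE = L/eta = m v^2/2, hence m = 2 L / (eta v^2)."""
    return 2.0 * L / (luminous_efficiency(v) * v**2)
```

Because a faster impactor is both more efficient and more energetic per unit mass, the same luminous energy implies a smaller mass at 24~km~s$^{-1}$ than at 16~km~s$^{-1}$.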
The estimation of the masses of the impactors is a challenging procedure because many factors contribute to the mass uncertainty. The uncertainties of the observed fluxes contribute to the temperature estimation, which propagates to the calculation of the radiating area $A$ and then to the calculation of the bolometric luminosity $L$. However, the parameter that has the most important effect on the mass estimation is the luminous efficiency. Luminous efficiencies derived from laboratory experiments \citep{ernst2005} tend to be smaller by a few orders of magnitude compared to the ones derived from observations. Previous studies have proposed various values for the luminous efficiency, for example,\ $\eta\sim 2 \times 10^{-3}$ from observations of lunar Leonids \citep{bellotrubio2000A}. While we compute $\eta$ from Eq.~12 to be $\sim1.1-1.3 \times 10^{-3}$, other extreme values have been used for the sporadic impactor population. Specifically, when values in the range $10^{-4}<\eta<10^{-3}$ are used, the mass of the same impactor can differ by an order of magnitude, and can be even larger than the one we calculate here. Since large uncertainties exist in the calculation of the mass due to the unknown impact velocity, any estimation of the size will also be uncertain. Despite these uncertainties, our mass estimates (100 g to 55 kg) are at least an order of magnitude higher than the values (0.4 g to 3.5 kg) reported by \citet{suggs2014}. The impactors can be either asteroidal or cometary in origin, implying a difference in the density. Even if we consider the scenario that the bodies are near-Earth asteroids, their densities can span a large range. Bulk densities of asteroids differ according to their mineralogy and macroporosity \citep{2002BRITTast3, carry2012}. However, there now exists a large collection of meteorites, pieces of asteroids, and an advanced knowledge of their densities. 
Average bulk densities of meteorites are between 1,600 and 7,370~kg~m$^{-3}$, where these extremes correspond to carbonaceous and iron meteorites, respectively \citep{consolmagno1998, britt2003, consolmagno2008, macke2010, macke2011a, macke2011b}. For all these reasons, new laboratory experiments using several types of materials will be very important for understanding impacts and the flash-generation mechanism, as they will provide a database of the impact parameters and their correlations (mass, impact speed, composition, flash duration, etc.). In summary, we report the first ten lunar impact flashes detected by the NELIOTA project, using the 1.2~m Kryoneri telescope. The multi-band capability of the NELIOTA cameras enables us to directly measure the temperatures of the impact flashes for the first time and to estimate the impactor masses. We find the measured temperature values ($\sim$1,600--3,100~K) to agree with previously published theoretical estimates, as discussed above. Furthermore, our sample contains four multi-frame flashes, two of which offer the opportunity for the estimation of the temperature evolution of the flash. We find a decrease of $1,325\pm104$~K for Flash 2 in 33~ms, while the decrease in the same time interval for Flash 10 ($20\pm73$~K) is consistent with zero. This difference is likely related to the fact that the impactor producing Flash 10 has a mass that is an order of magnitude larger than that of the impactor producing Flash 2. We also note that Flash 10 does not appear as a point source. We expect future detections of multi-frame and multi-band flashes with NELIOTA to provide a sample large enough to determine the temperature evolution properties of impact flashes. Furthermore, our mass estimations rely on direct measurement of the luminous energy, given the directly measured temperature. 
The mass estimates that we report (100 g to 55 kg) are higher than previous estimations, despite the range of values assumed for the impact velocities and the resulting values of $\eta$. Obtaining NEO flux densities requires increasing the number of measurements of lunar impact flashes made during meteor showers. These will be important for estimating the impactor sizes, since their impact velocity will be constrained. NELIOTA is expected to contribute to detections of stream impact flashes, which will also constrain the critical, yet uncertain, value of $\eta$. The multi-band capability of NELIOTA will generate valuable statistics on the temperatures of impact flashes and their evolution. The comparison of these measurements with the laboratory results will provide insight into the physics of impact flashes. \begin{acknowledgements} AL, EMX, AD, IBV, PB, AF and AM acknowledge financial support by the European Space Agency under the NELIOTA program, contract No. 4000112943. This work has made use of data from the European Space Agency NELIOTA project, obtained with the 1.2-m Kryoneri telescope, which is operated by the Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, Greece. Thanks to Danielle Moser, Robert Suggs and Steven Ehlert (NASA Marshall Space Flight Center) for discussions and comments on the manuscript. CA would like to thank Regina Rudawska and Elliot Sefton-Nash (ESTEC/ESA) for input. \end{acknowledgements}
\section{Introduction} The nearly simultaneous detection of gravitational waves GW170817 and the $\gamma$-ray burst GRB 170817A~\cite{TheLIGOScientific:2017qsa,% Monitor:2017mdv,GBM:2017lvd} places a very tight constraint on the speed of gravitational waves, $c_{\rm GW}$. The deviation of $c_{\rm GW}$ from the speed of light is less than 1 part in $10^{15}$. This can be translated to constraints on modified gravity such as scalar-tensor theories, vector-tensor theories, massive gravity, and Ho\v{r}ava gravity~\cite{Creminelli:2017sry,% Sakstein:2017xjx,Ezquiaga:2017ekz,Baker:2017hug,Gumrukcuoglu:2017ijh,% Oost:2018tcv}. In particular, in the context of the Horndeski theory (the most general scalar-tensor theory having second-order equations of motion\footnote{The original form of the Horndeski action is different from its modern expression obtained by extending the Galileon theory~\cite{Deffayet:2011gz}. The equivalence of the two apparently different theories was shown in~\cite{Kobayashi:2011nu}.})~\cite{Horndeski:1974wa}, two of the four free functions in the action are strongly constrained, leaving only the simple, traditional form of nonminimal coupling of the scalar degree of freedom to the Ricci scalar, {\em i.e.,} the ``$f(\phi){\cal R}$''-type coupling. However, it has been pointed out that there still remains an interesting, nontrivial class of scalar-tensor theories beyond Horndeski that can evade the gravitational wave constraint as well as solar-system tests~\cite{Creminelli:2017sry,Ezquiaga:2017ekz,Sakstein:2017xjx,% Crisostomi:2017lbg,Langlois:2017dyl,Babichev:2017lmw,% Dima:2017pwp,Crisostomi:2017pjs,Bartolo:2017ibw}. Such theories have higher-order equations of motion as they are more general than the Horndeski theory, but the system is degenerate and hence is free from the dangerous extra degree of freedom that causes Ostrogradski instability. 
Earlier examples of degenerate higher-order scalar-tensor (DHOST) theory are found in~\cite{Zumalacarregui:2013pma,Gleyzes:2014dya}, and the degeneracy conditions are systematically studied and classified at quadratic order in second derivatives of the scalar field in~\cite{Langlois:2015cwa,Crisostomi:2016czh} and at cubic order in~\cite{BenAchour:2016fzp}. Degenerate theories can also be generated from nondegenerate ones via noninvertible disformal transformation~\cite{Takahashi:2017pje,Langlois:2018jdg}. One of the most interesting phenomenologies of DHOST theories is efficient Vainshtein screening outside matter sources and its partial breaking inside them~\cite{Kobayashi:2014ida}. The partial breaking of screening modifies, for instance, stellar structure~\cite{Koyama:2015oma,Saito:2015fza}. This fact was used to test DHOST theories, or, more specifically, the Gleyzes-Langlois-Piazza-Vernizzi (GLPV) subclass~\cite{Gleyzes:2014dya}, against astrophysical observations~\cite{Sakstein:2015zoa, Sakstein:2015aac,Jain:2015edg,Sakstein:2016ggl,Sakstein:2016lyj,Salzano:2017qac}. Going beyond the weak-field approximation, relativistic stars in the GLPV theory have been studied in~\cite{Babichev:2016jom,Sakstein:2016oel}. In this paper, we consider relativistic stars in DHOST theories that are more general than the GLPV theory but evade the constraint on the speed of gravitational waves~\cite{Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz}. So far, this class of theories has been investigated by employing the weak-field approximation~\cite{Crisostomi:2017lbg,Langlois:2017dyl,Dima:2017pwp} and in a cosmological context~\cite{Crisostomi:2017pjs}. Very recently, compact objects including relativistic stars in the GLPV theory with $c_{\rm GW}=1$ have been explored in Ref.~\cite{Chagoya:2018lmv}. This paper is organized as follows. 
In the next section, we introduce the DHOST theories with $c_{\rm GW}=1$ and derive the basic equations describing a spherically symmetric relativistic star. To check the consistency with the previous results, we linearize the equations and see the gravitational potential in the weak-field approximation in Sec.~III. Then, in Sec.~IV, we give boundary conditions imposed at the stellar center and in the exterior region. Our numerical results are presented in Sec.~V. We draw our conclusions in Sec.~VI. Since some of the equations are quite messy, their explicit expression is shown in Appendix~A. \section{Field equations} The action of the quadratic DHOST theory we study is given by \begin{align} S=\int {\rm d}^4x\sqrt{-g}\left[f(X){\cal R}+\sum_{I=1}^5{\cal L}_I+{\cal L}_{\rm m}\right], \label{eq:Lagrangian} \end{align} where ${\cal R}$ is the Ricci scalar, $X:=\phi_\mu \phi^\mu$, and \begin{align} {\cal L}_1&:=A_1(X)\phi_{\mu\nu}\phi^{\mu\nu}, \\ {\cal L}_2&:=A_2(X)(\Box\phi)^2, \\ {\cal L}_3&:=A_3(X)\Box\phi\phi^\mu\phi_{\mu\nu}\phi^\nu, \\ {\cal L}_4&:=A_4(X)\phi^\mu\phi_{\mu\rho}\phi^{\rho \nu}\phi_\nu, \\ {\cal L}_5&:=A_5(X)(\phi^\mu\phi_{\mu\nu}\phi^\nu)^2, \end{align} with $\phi_\mu=\nabla_\mu\phi$ and $\phi_{\mu\nu}=\nabla_\mu\nabla_\nu\phi$. The functions $A_I(X)$ must be subject to certain conditions in order for the theory to be degenerate and satisfy $c_{\rm GW}=1$, as explained shortly. Here, shift symmetry is assumed and the other possible terms of the form $G_2(X)$ and $G_3(X)\Box\phi$ are omitted. In particular, we do not include the usual kinetic term $-X/2$ in this paper.\footnote{The inclusion of the usual kinetic term $-X/2$ gives rise to an interesting branch of asymptotically locally flat solutions with a deficit angle, as has been recently explored in Ref.~\cite{Chagoya:2018lmv}. We leave this possibility for future research. 
The appearance of a deficit angle in the GLPV theory was pointed out earlier in Refs.~\cite{DeFelice:2015sya,Kase:2015gxi}. As we will see, our ansatz for the scalar field is different from that assumed in Refs.~\cite{DeFelice:2015sya,Kase:2015gxi} (see Eq.~(\ref{eq:vt}) below). Probably this is the reason why we can obtain spherically symmetric solutions that are regular at the center.} These assumptions have nothing to do with the degeneracy conditions and the $c_{\rm GW}=1$ constraint to be imposed below. However, with this simplification one can concentrate on the effect of Vainshtein breaking. Note in passing that the DHOST theories are equivalent to the Horndeski theory with disformally coupled matter. Therefore, the setup we are considering is in some sense similar to that explored in Ref.~\cite{Minamitsuji:2016hkk}. However, the crucial difference is that, in contrast to Ref.~\cite{Minamitsuji:2016hkk}, our theory corresponds to the case where the conformal and disformal coupling functions depend on the first derivative of the scalar field. We require that the speed of gravitational waves, $c_{\rm GW}$, is equal to the speed of light~\cite{Monitor:2017mdv}. In our theory $c_{\rm GW}$ is given by $c_{\rm GW}^2=f/(f-XA_1)$~\cite{deRham:2016wji}, so that \begin{align} A_1=0. \end{align} The degeneracy conditions read \begin{align} A_2&=-A_1=0, \\ A_4&=-\frac{1}{8f}\left(8A_3f-48f_X^2-8A_3f_XX+A_3^2X^2\right),\label{eq:degn1} \\ A_5&=\frac{A_3}{2f}\left(4f_X+A_3X\right).\label{eq:degn2} \end{align} We thus have two free functions, $A_3$ and $f$, in the quadratic DHOST sector with $c_{\rm GW}=1$. This theory has been explored recently in Refs.~\cite{Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz, Baker:2017hug,Crisostomi:2017lbg,Langlois:2017dyl,Dima:2017pwp,Crisostomi:2017pjs}. (See Ref.~\cite{Bartolo:2017ibw} for a different subclass with $f=\;$const and $A_2=-A_1\neq 0$.) 
Following Ref.~\cite{Crisostomi:2017pjs}, we introduce \begin{align} B_1:=\frac{X}{4f}(4f_X+XA_3), \end{align} and use this instead of $A_3$. In the special case with $B_1=0=A_5$, the action reduces to that of the GLPV theory. We consider a static and spherically symmetric metric, \begin{align} {\rm d} s^2=-e^{\nu(r)}{\rm d} t^2+e^{\lambda(r)}{\rm d} r^2 +r^2{\rm d}\Omega^2. \end{align} The scalar field is taken to be \begin{align} \phi(t,r)=vt+\psi(r),\label{eq:vt} \end{align} where $v \,(\neq0)$ is a constant. Even though $\phi$ is linearly dependent on the time coordinate, it is consistent with the static spacetime because the action~(\ref{eq:Lagrangian}) possesses a shift symmetry, $\phi\to\phi+c$, and $\phi$ without derivatives does not appear in the field equations. This ansatz was also used to obtain black hole solutions in the Galileon and Horndeski theories in Refs.~\cite{Babichev:2013cya,Kobayashi:2014eva,Babichev:2016rlq, Babichev:2016fbg,Babichev:2017guv}. The field equations are given by \begin{align} {\cal E}_\mu^\nu&=T_\mu^\nu,\label{eq:grav-eq} \\ \nabla_\mu J^\mu& =0,\label{eq:dJ=0} \end{align} where ${\cal E}_{\mu\nu}$ is obtained by varying the action with respect to the metric and $J^\mu$ is the shift current defined by $\sqrt{-g}J^\mu=\delta S/\delta\phi_\mu$. The energy-momentum tensor is of the form \begin{align} T_\mu^\nu={\rm diag}(-\rho, P,P,P). \end{align} The radial component of the conservation equations, $\nabla_\mu T^{\mu}_\nu=0$, reads \begin{align} P'=-\frac{\nu'}{2}(\rho+P),\label{eq:fluid} \end{align} where ${}':={\rm d}/{\rm d} r$. By a direct calculation we find that \begin{align} J^r\propto {\cal E}_{tr}. \end{align} Therefore, the gravitational field equation ${\cal E}_{tr}=0$ requires that $J^r$ vanishes. Then, the field equation for the scalar field~(\ref{eq:dJ=0}) is automatically satisfied. 
To write Eq.~(\ref{eq:grav-eq}) and $J^r=0$ explicitly, it is more convenient to use $X=-e^{-\nu}v^2+e^{-\lambda}\psi'^2$ instead of $\psi$. In terms of $X$, we have \begin{align} {\cal E}_t^t&=b_1 \nu''+b_2X''+\widetilde{{\cal E}}_t(\nu,\nu',\lambda, \lambda', X, X'), \label{eq:Ett} \\ {\cal E}_r^r&=c_1 \nu''+c_2X''+\widetilde{{\cal E}}_r(\nu,\nu',\lambda, \lambda', X, X'), \label{eq:Err} \\ \psi' J^r&=c_1 \nu''+c_2X''+\widetilde{{\cal E}}_J(\nu,\nu',\lambda, \lambda', X, X'), \label{eq:Jr} \end{align} where \begin{align} b_1&=2fB_1e^{-\lambda}\left(\frac{e^{-\nu}v^2}{X}\right), \\ b_2&=b_1 \left[\frac{e^\nu}{v^2}+\frac{B_1(3X+4e^{-\nu}v^2)}{X^2} -\frac{4e^{-\nu}v^2f_X}{Xf}\right], \\ c_1&=-2fB_1e^{-\lambda}\left( \frac{e^{-\nu}v^2+X}{X}\right), \\ c_2&=c_1 \left[ \frac{B_1(3X+4e^{-\nu}v^2)}{X^2} -\frac{4e^{-\nu}v^2f_X}{Xf} \right], \end{align} but the explicit expressions of $\widetilde{{\cal E}}_t$, $\widetilde{{\cal E}}_r$, and $\widetilde{{\cal E}}_J$ are messy. We see that ${\cal E}_r^r$ and $\psi' J^r$ have the same coefficients $c_1$ and $c_2$. Moreover, we find by an explicit computation that ${\cal E}_r^r$ and $\psi' J^r$ depend linearly on $\lambda'$ and their coefficients are also the same. Therefore, by taking the combination ${\cal E}_r^r-\psi' J^r$ one can remove $\nu''$, $X''$, and $\lambda'$. Then, the field equation ${\cal E}_r^r-\psi'J^r=P$ can be solved for $\lambda$ to give \begin{align} e^\lambda = {\cal F}_\lambda (\nu,\nu',X,X',P), \label{eq:elambda} \end{align} where \begin{align} {\cal F}_\lambda=\frac{2X+B_1rX'}{2X^3(2f+r^2P)} \bigl\{4e^{-\nu}v^2(fB_1-Xf_X)rX'& \notag \\ +Xf\left[3B_1rX'+2X(1+r\nu')\right] \bigr\}.& \end{align} Using Eq.~(\ref{eq:elambda}), we can eliminate $\lambda$ and $\lambda'$ from Eqs.~(\ref{eq:Ett}) and~(\ref{eq:Jr}). In doing so we replace $P'$ with $\nu'$, $\rho$, and $P$ by using Eq.~(\ref{eq:fluid}). 
We then obtain \begin{align} \psi' J^r=k_1\nu''+k_2X''+{\cal J}_1(\nu,\nu',X,X',\rho,P)=0, \label{eq:cJ} \end{align} where $k_{1,2}=k_{1,2}(\nu,\nu',X,X',P)$. The field equation ${\cal E}_t^t+\rho=0$ can also be written in the form \begin{align} k_1\nu''+k_2X''+{\cal J}_2(\nu,\nu',X,X',\rho,P)=0. \end{align} Note that we have the same coefficients $k_1$ and $k_2$. This is due to the degeneracy conditions. We thus arrive at a first-order equation, ${\cal J}_1={\cal J}_2$, which can be solved for $X'$ as \begin{align} X'={\cal F}_1(\nu,X,\rho, P)\nu'+\frac{{\cal F}_2(\nu,X,\rho,P)}{r}, \label{eq:dX} \end{align} where ${\cal F}_1$ and ${\cal F}_2$ are complicated. Their explicit form is presented in Appendix~\ref{App:f1f2f3}. Finally, we use Eq.~(\ref{eq:dX}) to eliminate $X'$ and $X''$ from Eq.~(\ref{eq:cJ}). This manipulation also removes $\nu''$, as it should be because the theory is degenerate. We thus arrive at \begin{align} \nu'={\cal F}_3(\nu,X,\rho,\rho',P),\label{eq:dnu} \end{align} where the explicit expression of ${\cal F}_3$ is extremely long and is presented in Appendix~\ref{App:f1f2f3}. We have thus obtained our basic equations describing the Tolman-Oppenheimer-Volkoff system in DHOST theories. Given the equation of state relating $\rho$ and $P$, one can integrate Eqs.~(\ref{eq:fluid}), (\ref{eq:dX}), and~(\ref{eq:dnu}) to determine $P=P(r)$, $\nu=\nu(r)$, and $X=X(r)$. Equation~(\ref{eq:elambda}) can then be used to obtain $\lambda=\lambda(r)$. \section{Nonrelativistic, weak-field limit} Since our procedure to obtain spherically symmetric solutions is different from that of previous works~\cite{Crisostomi:2017lbg,% Langlois:2017dyl,Dima:2017pwp,Crisostomi:2017pjs}, it is a good exercise to check here that one can reproduce the previous result in a nonrelativistic, weak-field limit. 
We write \begin{align} \nu=\delta\nu,\quad \lambda = \delta \lambda,\quad X=-v^2+\delta X, \end{align} and derive linearized equations for a nonrelativistic source with $P=0$. It is straightforward to derive \begin{align} {\cal F}_\lambda &\simeq 1+r\left(\delta\nu'+\frac{2f_X}{f} \delta X'\right), \\ {\cal F}_1&\simeq -\frac{v^2 f}{2v^2f_X+fB_1}, \\ {\cal F}_2&\simeq \frac{f(\delta X-v^2\delta\nu)}{2v^2f_X+fB_1} -\left(\frac{v^2}{v^2f_X+fB_1}\right)\frac{r^2\rho}{8}, \\ {\cal F}_3&\simeq \frac{\delta X-v^2\delta\nu}{rv^2}-4\pi G_Nr\rho \notag \\ &\quad +2\pi G_N\left[-\frac{12v^2f_X}{f} +\frac{(1-3B_1)B_1f}{v^2f_X+fB_1} \right]r\rho \notag \\ &\quad +2\pi G_N \Upsilon_1 r^2\rho', \end{align} where we introduced \begin{align} 8\pi G_N:=\left.\frac{1}{2f(1-3B_1)+4Xf_X}\right|_{X=-v^2},\label{def:GNinf} \end{align} and \begin{align} \Upsilon_1:=\left. -\frac{(-2Xf_X+fB_1)^2}{f(-Xf_X +fB_1)}\right|_{X=-v^2}.\label{def:Up1} \end{align} We will see below that $G_N$ can indeed be regarded as the Newton constant. We then solve the following set of equations: \begin{align} \delta \lambda&={\cal F}_\lambda -1,\label{eq:Newton1} \\ \delta X'&={\cal F}_1\delta\nu'+\frac{{\cal F}_2}{r},\label{eq:Newton2} \\ \delta \nu'&={\cal F}_3.\label{eq:Newton3} \end{align} Combining Eqs.~(\ref{eq:Newton2}) and~(\ref{eq:Newton3}), the following second-order equation for $\delta\nu$ can be derived, \begin{align} \delta\nu''+\frac{2}{r}\delta\nu'= 2G_N\left[ \frac{M'}{r^2}+\Upsilon_1\left(\frac{M''}{2r}+\frac{M'''}{4}\right)\right], \label{eq:ddnu} \end{align} where $M(r)$ is the enclosed mass defined as \begin{align} M(r):=4\pi\int^r\rho(s) s^2{\rm d} s. \end{align} Equation~(\ref{eq:ddnu}) can be integrated to give \begin{align} \delta\nu'=\frac{C_0}{r^2} +2G_N\left(\frac{M}{r^2}+\frac{\Upsilon_1}{4}M''\right), \end{align} where $C_0$ is an integration constant. 
Combining Eqs.~(\ref{eq:Newton2}) and~(\ref{eq:Newton3}) again, we obtain \begin{align} \delta X &=v^2\delta \nu+ \frac{v^2C_0}{r}+\frac{2v^2G_N M}{r} \notag \\ &\quad + v^2G_N\left[ 1+\frac{2v^2f_X}{f}-\frac{(1-B_1)fB_1}{2(v^2f_X+fB_1)} \right]M'. \end{align} Finally, we use Eq.~(\ref{eq:Newton1}) to get \begin{align} \delta \lambda = \frac{C_0}{r}+2G_Nr\left( \frac{M}{r^2}-\frac{5\Upsilon_2}{4}\frac{M'}{r} +\Upsilon_3 M'' \right), \end{align} where \begin{align} \Upsilon_2&:=\left.\frac{8Xf_X}{5f}\right|_{X=-v^2},\label{def:Up2} \\ \Upsilon_3&:=\left.-\frac{B_1}{4}\left( \frac{-2Xf_X+fB_1}{-Xf_X +fB_1}\right)\right|_{X=-v^2}.\label{def:Up3} \end{align} Imposing regularity at the center, we take $C_0=0$. We may set $M'=0$ and $M''=0$ outside the source, and then we have $\delta\nu= - \delta\lambda = -2G_NM/r$, which coincides with the solution in general relativity if $G_N$ is identified as the Newton constant. Gravity is modified only inside the matter source, and we have seen that there are three parameters, $\Upsilon_{1,2,3}$, that characterize the deviation from the standard result. They are subject to \begin{align} 2\Upsilon_1^2-5\Upsilon_1\Upsilon_2-32\Upsilon_3^2=0, \end{align} so that actually only two of them are independent. Note that in the case of the GLPV theory, one has $B_1=0$, and hence $\Upsilon_2=2\Upsilon_1/5$ and $\Upsilon_3=0$. To see that the previous result is correctly reproduced, we perform a small coordinate transformation \begin{align} \varrho=r\left[1+\frac{1}{2}\int^r\frac{\delta\lambda(r')}{r'}{\rm d} r'\right]. \end{align} The metric then takes the form \begin{align} {\rm d} s^2=-(1+2\Phi){\rm d} t^2 + (1-2\Psi)\left({\rm d}\varrho^2+\varrho^2{\rm d}\Omega^2\right), \end{align} where \begin{align} \Phi =\frac{\delta\nu}{2}, \quad \Psi = \frac{1}{2}\int^r\frac{\delta\lambda(r')}{r'}{\rm d} r'. 
\label{eq:wfpsiphi} \end{align} Thus, we can confirm that Eq.~(\ref{eq:wfpsiphi}) reproduces the previous result found in the literature~\cite{Crisostomi:2017lbg,Langlois:2017dyl,Dima:2017pwp,Crisostomi:2017pjs}. Constraints on the $\Upsilon$ parameters have been obtained from astrophysical observations in the case of $\Upsilon_3=0$~\cite{Koyama:2015oma, Sakstein:2015zoa,Sakstein:2015aac,Jain:2015edg, Sakstein:2016ggl,Sakstein:2016lyj,Salzano:2017qac,Babichev:2016jom,Sakstein:2016oel}. For example, the mass-radius relation of white dwarfs yields the constraint $-0.18<\Upsilon_1<0.27$~\cite{Jain:2015edg}. This is valid even in the case of $\Upsilon_3\neq 0$, because the constraint comes only from $\Phi$ in such nonrelativistic systems. To probe $\Psi$, one needs relativistic systems and/or observations based on propagation of light rays such as gravitational lensing, and a tighter constraint has been imposed on $\Upsilon_1$ by combining the information on $\Psi$~\cite{Sakstein:2016ggl}. However, this relies on the assumption that $\Upsilon_3=0$. For this reason, it is important to study relativistic stars in theories with $\Upsilon_3\neq 0$. Another constraint can be obtained from the Hulse-Taylor pulsar, which limits the difference between $G_N$ and the effective gravitational coupling for gravitational waves, $G_{\rm GW}=1/16\pi f(-v^2)$~\cite{Jimenez:2015bwa}. The constraint reads~\cite{Dima:2017pwp} \begin{align} \frac{G_{\rm GW}}{G_N}-1 \left.= \frac{2Xf_X}{f}-3B_1\right|_{X=-v^2} <{\cal O}(10^{-3}).\label{Ggwconstraint} \end{align} Note, however, that this constraint is based on the assumption that the scalar radiation does not contribute to the energy loss, whose validity must be ascertained in the Vainshtein-breaking theories. 
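The algebraic relation among the $\Upsilon$ parameters quoted above, $2\Upsilon_1^2-5\Upsilon_1\Upsilon_2-32\Upsilon_3^2=0$, can be checked numerically by treating $f$, $f_X$, $B_1$, and $X$ as independent numbers in the definitions of $\Upsilon_{1,2,3}$. The sketch below does this at arbitrary sample points (the sample ranges are arbitrary choices for the check).

```python
import random

def upsilons(f, fX, B1, X):
    """Upsilon_1,2,3 as defined in the text, with a = X f_X, b = f B_1."""
    a, b = X * fX, f * B1
    u1 = -(b - 2.0 * a) ** 2 / (f * (b - a))
    u2 = 8.0 * a / (5.0 * f)
    u3 = -(B1 / 4.0) * (b - 2.0 * a) / (b - a)
    return u1, u2, u3

def max_identity_residual(n=200, seed=0):
    """Largest relative residual of 2 U1^2 - 5 U1 U2 - 32 U3^2 over
    random sample points; the identity should hold for any values."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n):
        f = rng.uniform(0.5, 2.0)
        fX = rng.uniform(-1.0, 1.0)
        B1 = rng.uniform(-0.5, 0.5)
        X = -rng.uniform(0.5, 2.0)
        if abs(f * B1 - X * fX) < 1e-3:   # avoid the denominator zero
            continue
        u1, u2, u3 = upsilons(f, fX, B1, X)
        res = 2.0 * u1**2 - 5.0 * u1 * u2 - 32.0 * u3**2
        scale = abs(2.0 * u1**2) + abs(5.0 * u1 * u2) + abs(32.0 * u3**2) + 1.0
        worst = max(worst, abs(res) / scale)
    return worst
```

The residual vanishes to machine precision, confirming that only two of the three parameters are independent, and that $B_1=0$ forces $\Upsilon_3=0$ with $\Upsilon_2=2\Upsilon_1/5$.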
\section{Boundary conditions} \subsection{Boundary conditions at the center}\label{subsec:center} To derive the boundary conditions at the center of a star, we expand \begin{align} &\nu=\nu_c+\frac{\nu_2}{2}r^2+\cdots, \quad X=X_c\left(1+\frac{X_2}{2}r^2+\cdots\right), \notag \\ &\rho=\rho_c+\frac{\rho_2}{2}r^2+\cdots, \quad P=P_c+\frac{P_2}{2}r^2+\cdots, \end{align} where \begin{align} X_c:=-e^{-\nu_c}v^2.\label{eq:Xc} \end{align} In deriving the relation~(\ref{eq:Xc}) we used regularity at the center, $\phi'(t,0)=\psi'(0)=0$. We then expand ${\cal F}_\lambda$, ${\cal F}_1$, ${\cal F}_2$, and ${\cal F}_3$ around $r=0$ to obtain \begin{align} &{\cal F}_\lambda\simeq 1+a_\lambda r^2, \quad {\cal F}_1\simeq a_1, \notag \\ & {\cal F}_2\simeq a_2 r^2,\quad {\cal F}_3\simeq a_3r, \end{align} where \begin{align} a_\lambda&:=\nu_2+\frac{4X_cX_2f_X(X_c)-P_c}{2f(X_c)}, \\ a_1&:=\frac{X_cf(X_c)}{f(X_c)B_1(X_c)-2X_cf_X(X_c)}, \end{align} while $a_2$ and $a_3$ are similar but slightly more messy. Equation~(\ref{eq:elambda}) implies $e^\lambda\simeq 1+a_\lambda r^2$, so that we find \begin{align} \lambda=a_\lambda r^2+\cdots. \end{align} Equations~(\ref{eq:dX}) and~(\ref{eq:dnu}) reduce to the following algebraic equations \begin{align} X_cX_2=a_1 \nu_2+a_2,\quad \nu_2=a_3, \end{align} leading to \begin{align} \nu_2&=\frac{8\pi G_c}{3}\left(\rho_c+3P_c\right) \notag \\ &\quad +4\pi G_c\left[\eta_1 \rho_c+\left(5\eta_2+12\eta_3\right) P_c\right], \\ X_2&=-8\pi G_c \left[ 2\rho_c+3P_c-\frac{4\eta_3}{\eta_1+4\eta_3}(\rho_c-3P_c) \right], \end{align} where \begin{align} \eta_1&:=\left.-\frac{(-2Xf_X+fB_1)^2}{f(-Xf_X+fB_1)}\right|_{X=X_c}, \\ \eta_2&:=\left.\frac{8Xf_X}{5f}\right|_{X=X_c}, \\ \eta_3&:=\left.-\frac{B_1}{4}\left( \frac{-2Xf_X+fB_1}{-Xf_X +fB_1}\right)\right|_{X=X_c}, \end{align} and we defined the effective gravitational constant at the center as \begin{align} 8\pi G_c:=\left.\frac{1}{2f(1-3B_1)+4Xf_X}\right|_{X=X_c}. 
\end{align} The above quantities are defined following Eqs.~(\ref{def:GNinf}), (\ref{def:Up1}), (\ref{def:Up2}), and~(\ref{def:Up3}), but now they are evaluated at the center, $X=X_c$. If gravity is sufficiently weak and the Newtonian approximation is valid, we have $|\nu_c|\ll 1$ and hence $X_c\simeq -v^2$, leading to $G_c\simeq G_N$ and $\eta_i\simeq \Upsilon_i$. In this case, corrections to the standard expression for the metric near $r=0$ are small as long as $\Upsilon_i\ll 1$. However, if the system is in the strong gravity regime, $X_c$ may differ significantly from $-v^2$ and therefore the internal structure of relativistic stars may be modified notably even in theories where only a tiny correction is expected in the Newtonian regime. Finally, from Eq.~(\ref{eq:fluid}) we get \begin{align} P_2=-\frac{\nu_2}{2}(\rho_c+P_c). \end{align} Now, given $\nu_c$ and $\rho_c$ (or $P_c$), Eqs.~(\ref{eq:fluid}), (\ref{eq:dX}), and~(\ref{eq:dnu}) can be integrated from the center outward. Let us move to the boundary conditions outside the star. \subsection{Exterior solution} For $\rho=P=0$, Eqs.~(\ref{eq:dX}) and~(\ref{eq:dnu}) reduce to the following simple set of equations: \begin{align} X'=0,\quad \nu'=-\frac{X+e^{-\nu}v^2}{rX}. \end{align} This can be integrated to give \begin{align} X=-v^2,\quad e^{\nu}=1-\frac{{\cal C}}{r}, \end{align} where ${\cal C}$ is an integration constant and we imposed that $\psi'\to 0$ as $r\to \infty$. We then have \begin{align} e^\lambda &= {\cal F}_\lambda(\nu,\nu',-v^2,0,0)=1+\nu' r \notag \\ &=\left(1-\frac{{\cal C}}{r} \right)^{-1}. \end{align} The exterior metric is thus obtained exactly without linearizing the equations, which coincides with the Schwarzschild solution in general relativity. The stellar interior must be matched to this exterior solution. It will be convenient to write \begin{align} {\cal C}=2G_N\mu, \end{align} because then $\mu$ is regarded as the mass of the star. 
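That the exterior solution solves the vacuum equations can also be verified numerically. The sketch below (with arbitrary values of ${\cal C}$, $r$, and $v$, chosen only for illustration) checks that $e^{\nu}=1-{\cal C}/r$ with $X=-v^2$ satisfies the vacuum equation for $\nu'$, and that $e^{\lambda}=1+\nu' r$ then equals the Schwarzschild form $(1-{\cal C}/r)^{-1}$.

```python
def exterior_check(r, Ccal, v=1.0):
    """For X = -v^2 and e^nu = 1 - C/r, return the mismatch in the
    vacuum equation nu' = -(X + e^{-nu} v^2)/(r X), and the mismatch
    between e^lambda = 1 + nu' r and the Schwarzschild (1 - C/r)^{-1}."""
    e_nu = 1.0 - Ccal / r
    nu_prime = (Ccal / r**2) / e_nu        # derivative of ln(1 - C/r)
    X = -v**2
    rhs = -(X + v**2 / e_nu) / (r * X)     # right-hand side of nu' equation
    e_lam = 1.0 + nu_prime * r             # F_lambda evaluated in vacuum
    return abs(nu_prime - rhs), abs(e_lam - 1.0 / e_nu)
```

Both mismatches are zero up to floating-point rounding for any $r>{\cal C}$, consistent with the exact exterior solution quoted above.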
\subsection{Matching at the stellar surface} \label{subsec:match} The stellar surface, $r=R$, is defined by $P(R)=0$. The induced metric is required to be continuous across the surface, so that $\nu$ must be continuous there. Since $X=\phi_\mu\phi^\mu$ is a spacetime scalar, it is reasonable to assume that this quantity is also continuous across the surface. We thus have the two conditions imposed at $r=R$: \begin{align} e^{\nu(R)}&= 1-\frac{2G_N\mu}{R},\label{jc:2} \\ X(R)&=-v^2.\label{jc:1} \end{align} We tune the central value $\nu_c$ in order for the solution to satisfy Eq.~(\ref{jc:1}). The other condition, Eq.~(\ref{jc:2}), is used to determine the integration constant $\mu$. As we have seen in the previous section, $\mu=M(R)$ in the nonrelativistic, weak-field limit. In the present case, however, $\mu$ does not necessarily coincide with $M(R)$ because the nonrelativistic and weak-field approximations are not justified in general for our interior solutions. Note that in general we may have $\rho_-':=\rho'(R_-)\neq 0$ while $\rho'(R_+)=0$, where $R_\pm:=\lim_{\varepsilon\to 0} R(1\pm\varepsilon)$. As a particular feature of the DHOST theories with partial breaking of the Vainshtein mechanism, the right-hand side of Eq.~(\ref{eq:dnu}) depends on $\rho'$~\cite{Babichev:2016jom}. This implies that $\nu'$ is discontinuous across the stellar surface. Then, from Eq.~(\ref{eq:dX}) we see that $X'$ is also discontinuous across the surface. Furthermore, since the right-hand side of Eq.~(\ref{eq:elambda}) depends on $\nu'$ and $X'$, $\lambda$ is also discontinuous there in general. With some manipulation we see that \begin{align} &1-e^{[\lambda(R_-)-\lambda(R_+)]/2} \notag \\ &=\pi G_N\rho_-'R^3 B_1\left.\left[2e^{-\nu(R)}-\frac{B_1f}{B_1f-Xf_X}\right] \right|_{X=-v^2}, \end{align} which shows that $\lambda$ is nevertheless continuous in theories with $B_1=0$.
However, it is found that \begin{align} &X'(R_+)-X'(R_-) \notag \\ & = 2\pi G_N\rho_-'R^2X \left.\left[2e^{-\nu(R)}-\frac{B_1f}{B_1f-Xf_X}\right] \right|_{X=-v^2}, \end{align} and therefore $X'$ is discontinuous even if $B_1=0$. This is also the case for $\nu'$. In the next section, we present numerical results in which these discontinuities can be seen. \section{Numerical results} As a specific example, we study the model of Ref.~\cite{Crisostomi:2017pjs}: \begin{align} f=\frac{M_{\rm Pl}^2}{2}+\alpha X^2,\quad A_3=-8\alpha-\beta, \end{align} where $\alpha$ and $\beta$ are constants. Note that we use conventions in which $8\pi G_N\neq M_{\rm Pl}^{-2}$. We have \begin{align} B_1=-\frac{\beta X^2}{2(M_{\rm Pl}^2+2\alpha X^2)}, \end{align} and hence the model with $\beta\neq 0$ is more general than the GLPV theory. For this choice of the functions, the degeneracy conditions~(\ref{eq:degn1}) and~(\ref{eq:degn2}) lead to \begin{align} A_4&=\frac{M_{\rm Pl}^2(8\alpha+\beta)+(16\alpha^2-6\alpha\beta-\beta^2/4)X^2}{M_{\rm Pl}^2+2\alpha X^2}, \\ A_5&=\frac{\beta(8\alpha+\beta)X}{M_{\rm Pl}^2+2\alpha X^2}. \end{align} This model, with the addition of the lower-order Horndeski terms, admits viable self-accelerating cosmological solutions~\cite{Crisostomi:2017pjs}, and is therefore of particular interest. Hereafter we will use the dimensionless parameters defined as \begin{align} \overline{\alpha}:=\frac{\alpha v^4}{M_{\rm Pl}^2},\quad \overline{\beta}:=\frac{\beta v^4}{M_{\rm Pl}^2}. \end{align} The parameters that characterize Vainshtein breaking in the Newtonian regime, $\Upsilon_i$, can then be estimated as \begin{align} \Upsilon_i \sim \overline{\alpha},\overline{\beta}. \end{align} From Eq.~(\ref{Ggwconstraint}) we also estimate \begin{align} \frac{G_{\rm GW}}{G_N}-1\sim \overline{\alpha},\overline{\beta}. \end{align} Therefore, by taking sufficiently small $\overline{\alpha}$ and $\overline{\beta}$ (say, ${\cal O}(10^{-3})$), current constraints can be evaded.
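The structure of the model functions can be checked symbolically. A small sketch (assuming sympy; `a` and `b` denote $\alpha$ and $\beta$) verifying, directly from the expressions quoted above, that the $\beta\to 0$ limit switches off $B_1$ and $A_5$, i.e., the beyond-GLPV part, and reduces $A_4$ to $8\alpha$:

```python
import sympy as sp

X, Mpl, a, b = sp.symbols('X M_Pl alpha beta')

f  = Mpl**2 / 2 + a * X**2
B1 = -b * X**2 / (2 * (Mpl**2 + 2 * a * X**2))
A4 = (Mpl**2 * (8*a + b) + (16*a**2 - 6*a*b - sp.Rational(1, 4)*b**2) * X**2) \
     / (Mpl**2 + 2*a*X**2)
A5 = b * (8*a + b) * X / (Mpl**2 + 2*a*X**2)

# beta -> 0: B1 and A5 vanish, so the beyond-GLPV effects are controlled by beta
assert B1.subs(b, 0) == 0
assert A5.subs(b, 0) == 0
# ... and A4 collapses: 8a M_Pl^2 + 16 a^2 X^2 = 8a (M_Pl^2 + 2a X^2)
assert sp.simplify(A4.subs(b, 0) - 8*a) == 0
```

This is merely algebra on the quoted formulas, not an independent check of the degeneracy conditions themselves.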
In the following numerical calculations, we will employ such small values of the parameters. The equation of state we use is given by \begin{align} \rho=\left(\frac{P}{K}\right)^{1/2}+P, \end{align} with $K=7.73\times 10^{-3}\, (8\pi G_N)^3M_{\odot}^2$ ($K=123\,M_{\odot}^2$ in the units where $G_N=1$), which has been used frequently in the modified gravity literature~\cite{Cisterna:2015yla,Cisterna:2016vdx,Maselli:2016gxk,Silva:2016smx,Babichev:2016jom}. With this simple equation of state we focus on the qualitative nature of the solutions. \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_valpha_rhoM.eps} \end{center} \caption{The mass ($\mu$) versus central density diagram for $\overline{\beta}=0$. The parameters are given by $\overline{\alpha} = 2\times 10^{-3}, 10^{-3}, 4\times 10^{-4}, 2\times 10^{-4}, 10^{-4}, 0$ (GR) (solid lines, from top to bottom), and $\overline{\alpha}=-10^{-4}, -2\times 10^{-4}, -4\times 10^{-4}, -10^{-3}, -2\times 10^{-3}$ (dashed lines, from top to bottom).} \label{fig:plot_valpha_rhoM.eps} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_valpha_RM.eps} \end{center} \caption{The mass ($\mu$) versus radius diagram for $\overline{\beta}=0$. The parameters are given by $\overline{\alpha} = 2\times 10^{-3}, 10^{-3}, 4\times 10^{-4}, 2\times 10^{-4}, 10^{-4}, 0$ (GR) (solid lines, from right to left), and $\overline{\alpha}=-10^{-4}, -2\times 10^{-4}, -4\times 10^{-4}, -10^{-3}, -2\times 10^{-3}$ (dashed lines, from top to bottom).} \label{fig:plot_valpha_RM.eps} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_vbeta_rhoM.eps} \end{center} \caption{The mass ($\mu$) versus central density diagram for $\overline{\alpha}=0$.
The parameters are given by $\overline{\beta} = 2\times 10^{-3}, 10^{-3}, 4\times 10^{-4}, 2\times 10^{-4}, 10^{-4}, 0$ (GR) (solid lines, from top to bottom), and $\overline{\beta}=-10^{-4}, -2\times 10^{-4}, -4\times 10^{-4}, -10^{-3}, -2\times 10^{-3}$ (dashed lines, from top to bottom).} \label{fig:plot_vbeta_rhoM.eps} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_vbeta_RM.eps} \end{center} \caption{The mass ($\mu$) versus radius diagram for $\overline{\alpha}=0$. The parameters are given by $\overline{\beta} = 2\times 10^{-3}, 10^{-3}, 4\times 10^{-4}, 2\times 10^{-4}, 10^{-4}, 0$ (GR) (solid lines, from right to left), and $\overline{\beta}=-10^{-4}, -2\times 10^{-4}, -4\times 10^{-4}, -10^{-3}, -2\times 10^{-3}$ (dashed lines, from top to bottom).} \label{fig:plot_vbeta_RM.eps} \end{figure} We start with the theories with $\overline{\beta}=0$ and focus on the effect of $\overline{\alpha}$. Figures~\ref{fig:plot_valpha_rhoM.eps} and~\ref{fig:plot_valpha_RM.eps} show the mass ($\mu$) versus central density relation and the mass versus radius relation, respectively. In all cases $\overline{\alpha}$ is taken to be very small so that the Vainshtein-breaking effect is not significant in the Newtonian regime. It can be seen that for fixed $\rho_c$ or $R$ the mass is larger (smaller) for $\overline{\alpha}>0$ ($\overline{\alpha}<0$) than in the case of general relativity (GR). Interestingly, there is a maximum central density, $\rho_{c,{\rm max}}$, above which no solution can be found. This property is similar to what was found in Ref.~\cite{Cisterna:2015yla}, where a subclass of the Horndeski theory with derivative coupling to the Einstein tensor was studied. For $\overline{\alpha}\lesssim 2\times 10^{-4}$, we see that $\mu<\infty$ as $\rho_c\to\rho_{c,{\rm max}}$, but for $\overline{\alpha}\gtrsim 2\times 10^{-4}$ we find that $\mu\to \infty$ and $R\to\infty$ as $\rho_c\to\rho_{c,{\rm max}}$.
Therefore, in the latter case there are solutions at high densities that are very different from relativistic stars in GR. Note that this occurs even for a tiny value of $\overline{\alpha}$. \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_valpha_metric.eps} \end{center} \caption{Metric components $e^{\nu}$ and $e^{-\lambda}$ as functions of $r$. The parameters are given by $\overline{\alpha}=2\times 10^{-3}, \overline{\beta}=0$ (dashed lines) and $\overline{\alpha}=-2\times 10^{-3}, \overline{\beta}=0$ (dotted lines). Solid lines represent the result of GR.} \label{fig:plot_valpha_metric.eps} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_valpha_x.eps} \end{center} \caption{$X/v^2$ as a function of $r$. The parameters are given by $\overline{\alpha}=2\times 10^{-3}, \overline{\beta}=0$ (dashed line) and $\overline{\alpha}=-2\times 10^{-3}, \overline{\beta}=0$ (dotted line).} \label{fig:plot_valpha_x.eps} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_vbeta_metric.eps} \end{center} \caption{Metric components $e^{\nu}$ and $e^{-\lambda}$ as functions of $r$. The parameters are given by $\overline{\alpha}=0, \overline{\beta}=2\times 10^{-2}$ (dashed lines) and $\overline{\alpha}=0, \overline{\beta}=-2\times 10^{-2}$ (dotted lines). Solid lines represent the result of GR.} \label{fig:plot_vbeta_metric.eps} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=80mm]{plot_vbeta_x.eps} \end{center} \caption{$X/v^2$ as a function of $r$. The parameters are given by $\overline{\alpha}=0, \overline{\beta}=2\times 10^{-2}$ (dashed line) and $\overline{\alpha}=0, \overline{\beta}=-2\times 10^{-2}$ (dotted line).} \label{fig:plot_vbeta_x} \end{figure} Next, we fix $\overline{\alpha}=0$ and draw the same diagrams for different (small) values of $\overline{\beta}$.
The results are presented in Figs.~\ref{fig:plot_vbeta_rhoM.eps} and~\ref{fig:plot_vbeta_RM.eps}, which are seen to be qualitatively similar to Figs.~\ref{fig:plot_valpha_rhoM.eps} and~\ref{fig:plot_valpha_RM.eps}, respectively. Therefore, although $\overline{\beta}$ is supposed to signal the ``beyond GLPV'' effects, such effects are not manifest here, and the roles of the two parameters $\overline{\alpha}$ and $\overline{\beta}$ in relativistic stars are qualitatively similar. We have performed numerical calculations for more general cases with $\overline{\alpha}\neq 0$ and $\overline{\beta}\neq 0$, and confirmed that they also lead to qualitatively similar results. For reference, some examples of radial profiles of the metric and $X$ in the stellar interior are presented in Figs.~\ref{fig:plot_valpha_metric.eps} and~\ref{fig:plot_valpha_x.eps} for $(\overline{\alpha},\overline{\beta})=(\pm 2\times 10^{-3},0)$ and in Figs.~\ref{fig:plot_vbeta_metric.eps} and~\ref{fig:plot_vbeta_x} for $(\overline{\alpha},\overline{\beta})=(0,\pm 2\times 10^{-3})$. As mentioned in Sec.~\ref{subsec:match}, $X'$ is nonvanishing just inside the surface while it vanishes outside, so that $X'$ is discontinuous there. Moreover, $e^{-\lambda}$ does not agree with $e^{\nu}$ at the surface in Fig.~\ref{fig:plot_vbeta_metric.eps}, indicating that $e^{-\lambda}$ is discontinuous, since $e^{\nu}$ must be continuous. \section{Conclusions} In this paper, we have studied the Tolman-Oppenheimer-Volkoff system in a degenerate higher-order scalar-tensor (DHOST) theory that is consistent with the GW170817 constraint on the speed of gravitational waves. Although the field equations are apparently of higher order, we have reduced them to a first-order system by combining the different components. This is possible because the theory we are considering is degenerate.
In DHOST theories that are more general than Horndeski, breaking of the Vainshtein screening mechanism occurs inside matter~\cite{Kobayashi:2014ida,Crisostomi:2017lbg,Langlois:2017dyl,Dima:2017pwp}, which would modify the interior structure of stars. Assuming a simple concrete DHOST model with two parameters and a simple equation of state, we have solved the field equations numerically. The parameters were chosen so that the Vainshtein-breaking effect in the Newtonian regime is suppressed by the factor $\Upsilon_i\lesssim 10^{-3}$. Nevertheless, we have found possibly large modifications in the mass-radius relation. These are significant in particular at densities close to the maximum central density above which no solutions can be obtained. In this paper, we have focused on the rather qualitative nature of relativistic stars in the DHOST theory, but it would be important to employ more realistic equations of state for testing the theory against astrophysical observations. This is left for further study. It would also be interesting to explore to what extent the modification to the stellar structure depends on the concrete form of the DHOST Lagrangian. We hope to come back to this question in a future study. \acknowledgments This work was supported in part by MEXT KAKENHI Grant Nos.~JP15H05888, JP16H01102 and JP17H06359 (T.K.); JSPS KAKENHI Grant Nos.~JP16K17695 (T.H.) and JP16K17707 (T.K.); and MEXT-Supported Program for the Strategic Research Foundation at Private Universities, 2014-2018 (S1411024) (T.H. and T.K.). \clearpage \begin{widetext}
\section{Introduction} The low-energy behavior of non--Abelian gauge field theories is an open problem of quantum field theory. At low energies the effective interactions become strong and break down the usual perturbation theory, which works very well in the high-energy limit. This means that the vacuum at low energies has a far more complicated structure than the perturbative high--energy one. To analyze the infra-red behavior of an $SU(2)$ Yang-Mills theory, Savvidy \cite{Savvidy} put forward an explicit ansatz for the vacuum gauge fields in the form of a covariantly constant gauge field strength. Such a field configuration is necessarily Abelian, i.e., it takes values in the Cartan subalgebra, and, hence, since ${\rm rank }\, SU(2)=1$, has only one nonvanishing color component. On the other hand, this component has the form of a constant magnetic field. Therefore such a configuration is usually called a {\it constant chromomagnetic field}. Savvidy showed that due to the quantum fluctuations of the gauge fields the energy of such a field configuration is below the perturbative vacuum level, which leads to an infra--red instability of the perturbative vacuum against the creation of a constant chromomagnetic field. Further investigations \cite{Pagels,Nielsen78,Nielsen81,Olesen,Abbot,Consoli,Preparata} showed that the constant chromomagnetic vacuum itself is unstable too, meaning that the real physical nonperturbative vacuum has an even more complicated structure. In our recent papers \cite{Avr-jmp95a,Avr-leipz} we extended Savvidy's investigation by considering more complicated gauge groups and spacetimes of dimensions higher than four. We showed that there exist more general nontrivial field configurations that turn out to be stable. In the present paper we consider an explicit example of such background field configurations for which the problem can be solved {\it exactly}.
\section{Infra-red regularization of the low-energy effective action} We consider the pure Yang--Mills model with an arbitrary simple compact gauge group in an {\it Euclidean} $d$-dimensional flat spacetime. The classical action of the model is \begin{equation} S=\int dx\,{1\over 2g^2}{\rm tr}_{Ad}\,{\cal F}_{\mu\nu}{\cal F}^{\mu\nu}, \end{equation} where $g$ is the coupling constant, ${\cal F}_{\mu\nu}$ is the gauge field strength taking values in the Lie algebra of the gauge group and ${\rm tr}_{Ad}$ means the trace in adjoint representation. In the case of covariantly constant background fields, \begin{equation} \nabla_\mu{\cal F}_{\alpha\beta}=0, \end{equation} the Euclidean effective action does not depend on the gauge \cite{Avr-jmp95a} and is usually given in one-loop approximation by (see e.g. \cite{DeWitt}) \begin{equation} \Gamma=S+\hbar\Gamma_{(1)}+O(\hbar^2), \end{equation} \begin{equation} \Gamma_{(1)} ={1\over 2}\log\,{\rm Det}\,{\Delta\over\mu^2} -\log\,{\rm Det}\,{D\over\mu^2}. \label{1} \end{equation} \noindent Here `Det' means the functional determinant, $\mu$ is a dimensionful renormalization parameter and $\Delta$ and $D$ are elliptic second-order differential operators of Laplace type \begin{equation} \Delta^\mu_{\ \nu} =-\Box\delta^\mu_{\ \nu} -2{\cal F}^\mu_{\ \,\nu}, \label{2} \end{equation} \begin{equation} D=-\Box, \label{3} \end{equation} \noindent where $\Box=\nabla_\mu\nabla^\mu$ is the covariant Laplacian, the covariant derivative $\nabla_\mu$ being taken in adjoint representation. In practice, to study the infra--red behavior of the system one has to be a bit more careful. Namely, to be able to carry out the intermediate calculations one should impose some infra--red regularization by introducing, say, a regularization mass parameter $M$ and take it off at the very end. The regularization parameter $M$ must be sufficiently large to ensure the infra--red convergence. 
Then one has to perform the analytical continuation to a neighborhood of the point $M=0$ and take the limit $M\to 0$. If there are no infra--red divergences then this procedure is harmless. In the case of non--trivial low-energy behavior there appears an imaginary part of the effective action or some infra-red logarithmic singularities, so that in addition to the ultra-violet renormalization it may also be necessary to perform an infra-red renormalization. The infra--red regularized effective action $\Gamma(M)$ is not analytic, in general, in the vicinity of the point $M=0$. Therefore, one should assume the infra--red regularization parameter to have an infinitesimal negative imaginary part, i.e., $M^2\to M^2-i\varepsilon$. This enables one to choose the correct way of analytical continuation, leading to the correct sign of the imaginary part of the effective action. Thus, strictly speaking, the effective action should be defined by \begin{equation} \Gamma\stackrel{\rm def}{=} \lim_{M\to 0-i\varepsilon}\Gamma(M). \label{4} \end{equation} \noindent In the one-loop approximation one can define the infra-red regularized effective action by \begin{equation} \Gamma_{(1)}(M) ={1\over 2}\log\,{\rm Det}\,{(\Delta+M^2)\over\mu^2} -\log\,{\rm Det}\,{(D+M^2)\over\mu^2}, \label{5} \end{equation} \noindent and suppose $M^2$ to be large enough, so that the operators $\Delta+M^2$ and $D+M^2$ are positive, i.e., do not have negative modes. The functional determinants are also known to be ultra--violet divergent. To regularize the ultra--violet divergences one can use any standard scheme, e.g., dimensional regularization or the zeta-function regularization. One should stress that the meaning of the infra--red regularization is very different from that of the ultraviolet one.
Although the ultraviolet regularization is a purely formal method for obtaining physically meaningful results, the infra--red regularization parameter can acquire, in principle, a {\it direct physical meaning}, similar to $\Lambda_{QCD}$. In any case, the infra-red regularization should be taken off at the very end, {\it after} the ultraviolet regularization. \section{Zeta--function and heat kernel} Within the zeta-function regularization \cite{Dowker,Hawking,Elizalde} the effective action has the form \begin{equation} \Gamma_{(1)}(M) =-{1\over 2}\zeta'_{\rm YM}(0, M), \label{6} \end{equation} \noindent where \begin{equation} \zeta_{\rm YM}(p, M)\stackrel{\rm def}{=} \zeta_{\Delta+M^2}(p) - 2\zeta_{D+M^2}(p) \label{7} \end{equation} \noindent is the infra-red regularized total Yang-Mills zeta-function. The zeta-function of an elliptic differential operator $L$ can be defined in terms of the associated heat kernel as follows \cite{Dowker,Hawking,Elizalde} \begin{equation} \zeta_L(p)=\mu^{2p}{\rm Tr}L^{-p} =\int d\,x\,{\mu^{2p}\over \Gamma (p)} \int\limits_0^\infty d\,t\,t^{p-1}{\rm tr}\,U_L(t), \label{8} \end{equation} \noindent where `Tr' means the functional trace, `tr' is the usual matrix trace and \begin{equation} U_L(t)=\exp(-t L)\delta(x,x')\Big\vert_{x=x'} \label{10} \end{equation} \noindent is the heat kernel diagonal. Therefore, the Yang--Mills zeta--function is expressed in terms of the heat kernel diagonals of the operators $\Delta$ and $D$ \begin{equation} \zeta_{\rm YM}(p,M) =\int d\,x\,{\mu^{2p}\over \Gamma (p)} \int\limits_0^\infty d\,t\,t^{p-1}e^{-t M^2}U_{\rm YM}(t), \label{11} \end{equation} \noindent where \begin{equation} U_{\rm YM}(t)={\rm tr}_{Ad}\,\{{\rm tr}_{T}\,U_\Delta(t)-2U_D(t)\} \label{12} \end{equation} \noindent and `${\rm tr}_{T}$' means the trace over the vector indices.
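The Mellin-transform relation (\ref{8}) between the zeta-function and the heat trace can be illustrated numerically on a toy operator with spectrum $\{n^2\}_{n\ge 1}$, for which $\zeta_L(p)=\zeta(2p)$. The following is a sketch only (unrelated to the specific Yang-Mills operators); the small-$t$ form of the heat trace uses Poisson resummation:

```python
import math

def heat_trace(t):
    """Tr e^{-tL} for the toy spectrum {n^2 : n >= 1}."""
    if t < 0.3:
        # Poisson resummation: sum_{n>=1} e^{-t n^2} = (sqrt(pi/t) - 1)/2 + O(e^{-pi^2/t})
        return 0.5 * (math.sqrt(math.pi / t) - 1.0)
    return sum(math.exp(-t * n * n) for n in range(1, 60))

def zeta_via_heat_kernel(p, npts=20000):
    # zeta_L(p) = (1/Gamma(p)) * int_0^inf t^{p-1} Tr e^{-tL} dt, cf. eq. (8);
    # trapezoidal rule on a geometric grid in t (extra factor t from dt = t d log t)
    x0, x1 = math.log(1e-10), math.log(60.0)
    h = (x1 - x0) / npts
    total = 0.0
    for i in range(npts + 1):
        t = math.exp(x0 + i * h)
        w = 0.5 if i in (0, npts) else 1.0
        total += w * t**p * heat_trace(t)
    return h * total / math.gamma(p)

# For p = 3/2 this reproduces zeta(3) = 1.2020569...
print(zeta_via_heat_kernel(1.5))
```

The heat-trace small-$t$ behavior $\sim \frac{1}{2}\sqrt{\pi/t}$ is the one-dimensional analogue of the $(4\pi t)^{-d/2}$ prefactor appearing below.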
The heat kernel diagonals of Laplace type operators $L=-\Box+P$ on covariantly constant backgrounds were calculated in \cite{Avr-jmp95a} using an algebraic approach developed in \cite{Avr-plb93}. This approach was developed further, including the case of curved manifolds, in \cite{Avr-jmp95b,Avr-plb94,Avr-jmp96a}. (For a review see \cite{Avr-win,Avr-qgr6,Avr-thess}.) By using the results of \cite{Avr-jmp95a} we have \begin{equation} U_{\rm YM}(t)=(4\pi t)^{-d/2}{\rm tr}_{Ad}\left\{ {\rm det}_{T}\left({t\hat {\cal F}\over \sinh(t\hat {\cal F})}\right)^{1/2} \Biggl({\rm tr}_{T}\exp(2t\hat{\cal F})-2\Biggr)\right\}, \label{13} \end{equation} \noindent where $\hat{\cal F}=\{{\cal F}^\mu_{\ \,\nu}\}$ is a matrix with vector indices and `${\rm det}_{T}$' means the determinant over the vector indices. To calculate the trace over the group indices we observe first that covariantly constant Yang--Mills fields are necessarily Abelian, \begin{equation} [{\cal F}_{\mu\nu}, {\cal F}_{\alpha\beta}]=0, \label{14} \end{equation} \noindent and, therefore, take their values in the Cartan subalgebra. Thus the maximal number of independent fields is equal to the dimension of the Cartan subalgebra, i.e., to the rank of the group $r$. Using the property (\ref{14}) one obtains \cite{Avr-jmp95a} \begin{equation} U_{\rm YM}(t)=(4\pi t)^{-d/2} \Biggl\{r(d-2) +2\sum_{\alpha>0} {\rm det}_{T}\left({t \hat F\over \sin(t\hat F)}\right)^{1/2} \Biggl({\rm tr}_{T}\cos(2t \hat F)-2\Biggr)\Biggr\}, \label{15} \end{equation} \noindent where \begin{equation} \hat F=\{ F^{a\,\mu}_{\ \ \ \nu}\alpha_a\}, \label{16} \end{equation} \noindent $\alpha_a$ are the positive roots of the Lie algebra of the gauge group and the sum runs over all positive roots. The number of positive roots is $p=(n-r)/2$, where $n$ is the dimension of the gauge group.
Further, by orthogonal transformations one can always put the antisymmetric $d\times d$ matrix $\hat F$ in the block-diagonal form with $q$ two-dimensional antisymmetric blocks on the diagonal, all others entries being zero \begin{equation} \hat F=\left( \begin{array}{cccccccccc} 0 &H_1 &0 &0 &\cdots &0 &0 &0 &\cdots &0 \\ -H_1 &0 &0 &0 &\cdots &0 &0 &0 &\cdots &0 \\ 0 &0 &0 &H_2 &\cdots &0 &0 &0 &\cdots &0 \\ 0 &0 &-H_2 &0 &\cdots &0 &0 &0 &\cdots &0 \\ \vdots &\vdots &\vdots &\vdots &\ddots &\vdots &\vdots &\vdots &\cdots &\vdots \\ 0 &0 &0 &0 &\cdots &0 &H_q &0 &\cdots &0 \\ 0 &0 &0 &0 &\cdots &-H_q &0 &0 &\cdots &0 \\ 0 &0 &0 &0 &\cdots &0 &0 &0 &\cdots &0 \\ \vdots &\vdots &\vdots &\vdots &\cdots &\vdots &\vdots &\vdots &\ddots &\vdots \\ 0 &0 &0 &0 &\cdots &0 &0 &0 &\cdots &0 \end{array} \right). \label{17} \end{equation} \noindent Visually one can think of such field configuration as a number of `magnetic fields' in different directions and call the invariants $H_i$ the amplitudes of `magnetic fields'. They can be expressed in terms of the basic invariants of the matrix $\hat F$, \begin{equation} I_k = {1\over 2}(-1)^k{\rm tr}_{T} \hat F^{2k}, \label{20} \end{equation} \noindent by solving the equations \begin{eqnarray} \left\{ \begin{array}{lllllcl} H_1^2 &+ &\cdots & + &H_q^2 &= &I_1\\ \vdots & & & &\vdots &&\vdots\\ H_1^{2[d/2]}&+&\cdots &+ &H_q^{2[d/2]} &= &I_{[d/2]} \end{array}. \right. \label{19} \end{eqnarray} One should note that, although the matrix $\hat F$ depends linearly on the roots, its invariants $H_i$ depend on the roots in a very nontrivial way. By making use of (\ref{17}) one can compute the trace and the determinant over the vector indices in eq. (\ref{15}) and obtain finally \begin{eqnarray} \lefteqn{U_{\rm YM}(t)=(4\pi t)^{-d/2}\Biggl\{r(d-2)} \nonumber\\ &&+ 2\sum_{\alpha>0;} \prod_{1\le i\le q}\left({tH_i\over \sinh(tH_i)}\right) \left(d-2 + 4\sum_{1\le j\le q}\sinh^2(tH_j)\right)\Biggr\}. 
\label{21} \end{eqnarray} \noindent Hence the total zeta-function for the gauge fields, (\ref{11}), equals \begin{eqnarray} \lefteqn{\zeta_{YM}(p,M) = \int dx (4\pi)^{-d/2}{\mu^{2p}\over \Gamma(p)} \int\limits_0^\infty d t\,t^{p-d/2-1}e^{-t M^2} \Biggl\{r(d-2)} \qquad\ \ \nonumber\\ &&+2\sum_{{ \alpha}>0} \prod_{1\le i\le q}\left({tH_i\over \sinh(tH_i)}\right) \left(d-2 + 4\sum_{1\le j\le q}\sinh^2(tH_j)\right)\Biggr\}. \label{22} \end{eqnarray} For ${\rm Re}\, p>d/2$ the integral (\ref{22}) over $t$ converges at $t\to 0$. For sufficiently large $M$ it also converges at $t\to\infty$ for arbitrary values of the amplitudes of the `magnetic fields' $H_i$. It is easy to see, however, that for $M=0$ there are some field configurations that spoil the convergence of this integral at $t\to\infty$. The simplest example is the case of only one `magnetic field', i.e., $q=1$. In \cite{Avr-jmp95a} a criterion of infra--red stability was found, i.e., a criterion for the convergence of the integral (\ref{22}) over $t$ at $t\to\infty$ in the limit $M\to 0$. Roughly speaking, the integral (\ref{22}) converges when the amplitudes of the `magnetic fields' $H_i$ do not differ much from each other. More precisely, the vacuum is infra--red stable, i.e., the integral (\ref{22}) converges at $t\to\infty$ for $M=0$, when the background `magnetic fields' satisfy the {\it condition of stability}, \begin{equation} \max_{1\le i\le q} H_i< \sum_{1\le i\le q}^{\ \quad\prime} H_i \qquad {\rm for\ any\ root\ } \alpha, \label{25} \end{equation} \noindent where the prime at the sum means that the summation does not include the maximal invariant. Obviously, the condition (\ref{25}) can be fulfilled only when the number of `magnetic fields' is not less than two, $q\ge 2$, and, hence, only in dimensions not less than four, $d\ge 4$. The case $q=2$ in $d=4$ is the critical one (see the detailed discussion below). That is why we also study in the present paper the higher--dimensional case ($d\ge 5$) in detail.
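The stability condition (\ref{25}) is easy to test numerically. A sketch (assuming numpy): the amplitudes $H_j$ are read off from the eigenvalues $\pm iH_j$ of the antisymmetric matrix $\hat F$ of the form (\ref{17}), and the condition is then checked per configuration:

```python
import numpy as np

def amplitudes(F, tol=1e-12):
    """'Magnetic' amplitudes H_j: a real antisymmetric matrix F has
    eigenvalues +/- i H_j (plus zeros), so each H_j appears twice."""
    H = sorted(abs(np.linalg.eigvals(F).imag))
    H = [h for h in H if h > tol]
    return H[1::2]  # keep one copy of each +/- pair

def is_stable(H):
    """Condition (25): max_i H_i < sum of the remaining amplitudes."""
    H = sorted(H, reverse=True)
    return H[0] < sum(H[1:])

def skew(Hs, d):
    """Block-diagonal antisymmetric matrix of the form (17)."""
    F = np.zeros((d, d))
    for j, h in enumerate(Hs):
        F[2*j, 2*j + 1], F[2*j + 1, 2*j] = h, -h
    return F

assert np.allclose(amplitudes(skew([1.0, 2.0], 5)), [1.0, 2.0])
assert not is_stable([1.0])           # q = 1: Savvidy's case, unstable
assert not is_stable([3.0, 1.0])      # q = 2, very unequal amplitudes
assert not is_stable([1.0, 1.0])      # q = 2, equal amplitudes: marginal
assert is_stable([1.0, 1.0, 1.0])     # q = 3, equal amplitudes: stable
```

Note that for $q=2$ with equal amplitudes the strict inequality in (\ref{25}) is saturated rather than satisfied, in accordance with the critical character of this case.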
Even in this case the vacuum can be unstable. Since the stability is supported by approximately \hbox{equal} amplitudes of `magnetic fields', we study in this paper the `most stable' configuration when all the `magnetic fields' have equal amplitudes: \begin{equation} H_i=H, \qquad (i=1,\dots, q). \label{26} \end{equation} \noindent This assumes certain algebraic relations between the basic invariants $I_k$ of the matrix $\hat F$ \begin{equation} H=\left({{I_1\over q}}\right)^{1/2}=\left({I_2\over q}\right)^{1/4} =\cdots=\left({I_{[d/2]}\over q}\right)^{1/(2[d/2])}. \end{equation} \noindent In this case the Yang-Mills zeta-function takes the form \begin{eqnarray} \zeta_{YM}(p,M) &=& \int dx (4\pi)^{-d/2}{\mu^{2p}\over \Gamma(p)} \int\limits_0^\infty d t\,t^{p-d/2-1}e^{-t M^2} \Biggl\{r(d-2)\nonumber\\ &&+2\sum_{{ \alpha}>0} \left({tH\over \sinh(tH)}\right)^q \left(d-2 + 4q\,\sinh^2(tH)\right)\Biggr\}. \label{27} \end{eqnarray} For sufficiently large $M$ the integral (\ref{27}) converges in the region ${\rm Re}\, p>d/2$ and defines an analytic function. Therefore, changing the integration variable $t\to t/H$ we get \begin{eqnarray} \zeta_{YM}(p,M)&=&\int dx (4\pi)^{-d/2}\mu^{2p} {\Gamma(p-d/2)\over\Gamma(p)} \Biggl\{r(d-2)M^{d-2p} \nonumber\\&& +2\sum_{{ \alpha}>0}H^{d/2-p}J_{d/2-p,\,q}\left({M^2\over H}\right)\Biggr\}, \label{28} \end{eqnarray} \noindent where \begin{equation} J_{s,\,q}(z)=(d-2)b_{s,\,q}(z)+4q\,s(s-1)b_{s-2,\,q-2}(z), \label{29} \end{equation} \begin{equation} b_{s,\,q}(z)={1\over\Gamma(-s)}\int\limits_0^\infty d t\,t^{-s-1+q}{e^{-tz}\over \sinh^q t}. \label{29d} \end{equation} Thus the regularized zeta-function is expressed in terms of the functions $b_{s,\,q}(z)$. These functions are studied in detail in the Appendix. It is well known that the coefficient at $(-\log\,\mu)$ in the effective action is determined by the zeta-function at zero. From eq. 
(\ref{28}) we immediately get \begin{equation} \zeta_{\rm YM}(0,M)=0 \qquad {\rm for \ odd\ } d, \label{32} \end{equation} \noindent and \begin{equation} \zeta_{\rm YM}(0,M)= \int dx (4\pi)^{-d/2} {(-1)^{d/2}\over\Gamma(d/2+1)} \Biggl\{r(d-2)M^{d} +2\sum_{{\alpha}>0}J_{d/2,\,q}\left({M^2\over H}\right)H^{d/2} \Biggr\} \label{34} \end{equation} for even $d$. Let us take the limit $M\to 0-i\varepsilon$ here. The contribution of the `free' fields is proportional to $M^d$ and vanishes in this limit. Defining \begin{equation} J_{s,\,q}\stackrel{\rm def}{=}\lim_{z\to 0-i\varepsilon}J_{s,\,q}(z) \label{33aa} \end{equation} and using the formulas of the Appendix we find that $J_{s,\,q}$ for odd positive $s=2k+1$ vanishes, \begin{equation} J_{2k+1,\,q}=0, \qquad (k=0,1,2,\dots). \label{34aa} \end{equation} whereas for even positive $s=2k$ \begin{equation} J_{2k,\,q}=(d-2)a_{2k,\,q}+8q\,k(2k-1)a_{2k-2,\,q-2} \label{34aaa} \end{equation} with (see Appendix) \begin{equation} a_{2k,\,q}=\left.\left({\partial\over\partial t}\right)^{2k}\left({t\over\sinh t}\right)^q\right|_{t=0}. \end{equation} Therefore, we have from (\ref{34}) \begin{equation} \zeta_{\rm YM}(0,0) =0 \qquad{\rm for\ odd}\ d\ge 3\ {\rm and\ odd}\ d/2\ge 1, \label{36} \end{equation} and \begin{equation} \zeta_{\rm YM}(0,0) =\int dx \sum_{{\alpha}>0}(4\pi)^{-d/2} {2\over\Gamma(d/2+1)}J_{d/2,\,q} H^{d/2} \quad{\rm for\ even}\ d/2\ge 2. 
\end{equation} \section{One-loop effective action} Using the zeta-function (\ref{28}) we then obtain from (\ref{6}) the effective action: \begin{equation} \Gamma_{(1)}(M)=\int dx {1\over 2}(4\pi)^{-d/2} {\pi(-1)^{(d-1)/2}\over\Gamma(d/2+1)} \Biggl\{r(d-2)M^{d} +2\sum_{{ \alpha}>0}J_{d/2,\,q}\left({M^2\over H}\right) H^{d/2}\Biggr\} \label{33} \end{equation} \centerline{for odd $d$} \noindent and \begin{eqnarray} \lefteqn{ \Gamma_{(1)}(M)=\int dx {1\over 2}(4\pi)^{-d/2} {(-1)^{d/2}\over\Gamma(d/2+1)} } \nonumber\\ && \times\Biggl\{r(d-2)M^{d} \Biggl(\log\,{M^2\over\mu^2}-\Psi(d/2+1)-{\bf C}\Biggr) \nonumber\\ && +2\sum_{{ \alpha}>0}H^{d/2}\left[J'_{d/2,\,q}\left({M^2\over H}\right) +J_{d/2,\,q}\left({M^2\over H}\right)\left(\log\,{H\over\mu^2}-\Psi(d/2+1)-{\bf C}\right) \right]\Biggr\} \nonumber\\ && \label{35} \end{eqnarray} \centerline{for even $d$,} \noindent where \begin{equation} J'_{d/2,\,q}(z)= \left.{\partial J_{s,\,q}(z)\over \partial s}\right|_{s=d/2}, \end{equation} $\Psi(z)\equiv d \log \Gamma(z)/d\,z$ and ${\bf C}=-\Psi(1)$ is the Euler constant. Very similar formulas for the zeta-function and the effective action were obtained in the general case in \cite{Avr-npb91} in a somewhat different context (see especially eqs. (2.25), (2.28) and (2.29) in Sect. 2 therein). Thus the effective action is determined by two quantities --- $J_{d/2,\,q}(M^2/H)$ and $J'_{d/2,\,q}(M^2/H)$, i.e., by the values of the function $J_{s,\,q}(z)$ at the positive integer and half-integer points, $s=k/2$, and its derivative at the integer points, $s=k$. Using eq.~(\ref{29}) and the properties of the function $b_{s,\,q}(z)$ studied in the Appendix, it is not difficult to obtain all these coefficients. By taking off the infra-red regularization and using eq.
(\ref{34aa}) we obtain finally \begin{equation} \Gamma_{(1)}=\int dx \sum_{{ \alpha}>0} (4\pi)^{-d/2} {\pi(-1)^{(d-1)/2}\over\Gamma(d/2+1)}J_{d/2,\,q} H^{d/2} \ \ {\rm for\ odd}\ d\ge 3, \label{37} \end{equation} \begin{equation} \Gamma_{(1)}=\int dx\sum_{{ \alpha}>0} (4\pi)^{-d/2} {-1\over\Gamma(d/2+1)}J'_{d/2,\,q}H^{d/2} \qquad {\rm for\ odd}\ d/2\ge 1, \label{39} \end{equation} and \begin{eqnarray} \Gamma_{(1)}&=&\int dx\sum_{{ \alpha}>0} (4\pi)^{-d/2} {1\over\Gamma(d/2+1)} \nonumber\\ && \times H^{d/2}\left\{J'_{d/2,\,q} +J_{d/2,\,q}\left[\log\,{H\over\mu^2}-\Psi(d/2+1)-{\bf C}\right]\right\} \label{39b} \end{eqnarray} \centerline{for even $d/2\ge 2$.} Thus one has to calculate the values of the function $J_{s,\,q}$ and its derivative $J'_{s,\,q}$ at the half-integer and integer points. We distinguish three essentially different cases: i) $q=1$, ii) $q=2$, and iii) $q\ge 3$. \subsection{One `magnetic' field} The case $q=1$ is realizable in any dimension $d\ge 2$. Therefore, we need to study the function $J_{s,\,1}$ in the region ${\rm Re}\,s\ge 2$. It should be noted, however, that this is the only possible case in a four-dimensional spacetime of {\it Lorentzian} signature \cite{Avr-jmp95a}. It is this case that was studied by Savvidy \cite{Savvidy}. Using the formulas of the Appendix, we obtain from eqs. (\ref{29}) and (\ref{33aa}) in this case \begin{equation} J_{s,\,1}=4(d-2)(2^{s-1}-1)(2\pi)^{-s}\cos\left({\pi \over 2}s\right)\Gamma(s+1)\zeta(s) +2s\left(e^{-i\pi s}+1\right). \label{52a} \end{equation} \noindent Therefrom we obtain for odd dimensions, i.e., for half-integer $s=d/2$, \begin{eqnarray} J_{d/2,\,1}&=&(-1)^{[(d+1)/4]}2(d-2)(2^{d/2-1}-1)\sqrt 2\,(2\pi)^{-d/2}\Gamma(d/2+1)\zeta(d/2) \nonumber\\ && +d-i (-1)^{(d-1)/2}d, \label{47a} \end{eqnarray} $$ {\rm for\ odd}\ d\ge 3, \ {\rm i.e., }\ d=3,5,7,\dots. $$ \noindent For even dimensions, i.e., integer $s=d/2$, we find from (\ref{52a}), in accordance with eqs.
(\ref{34aa}) and (\ref{34aaa}) \begin{equation} J_{d/2,\,1}=0 \qquad {\rm for\ odd}\ d/2\ge 1, {\rm i.e., }\ d=2,6,10, \dots \end{equation} and, by using \cite{Erdelyi} \begin{equation} \zeta(2k)=(-1)^{k+1}(2\pi)^{2k}{B_{2k}\over 2(2k)!}, \end{equation} where $B_k$ are the Bernoulli numbers, \begin{equation} J_{d/2,\,1}=-2(d-2)(2^{d/2-1}-1){B_{d/2}}+2d, \label{48} \end{equation} $$ {\rm for \ even}\ d/2\ge 2, \ {\rm i.e.,}\ d=4,8,12,\dots. $$ \noindent The derivative $J'_{d/2,\,1}$ for even dimensions is \begin{equation} J'_{d/2,\,1} =-(-1)^{(d-2)/4}(d-2)(2^{d/2-1}-1)\Gamma(d/2+1){\zeta(d/2)\over (2\pi)^{d/2-1}} +i\pi d \label{49d} \end{equation} $$ {\rm for \ odd}\ d/2\ge 1,\ {\rm i.e.,}\ d=2,6,10,\dots $$ \begin{eqnarray} J'_{d/2,\,1} &=&-2(d-2)(2^{d/2-1}-1)B_{d/2}\Biggl[{\zeta'(d/2)\over \zeta(d/2)} +\Psi(d/2+1) \nonumber\\ && +{2^{d/2-1}\over 2^{d/2-1}-1}\log 2-\log(2\pi)\Biggr] +4-i\pi d \nonumber\\ && \label{49} \end{eqnarray} $$ {\rm for \ even}\ d/2\ge 2, \ (d=4,8,12,\dots). $$ Substituting (\ref{47a}), (\ref{49d}) and (\ref{49}) in (\ref{37})--(\ref{39b}) we find that the effective action acquires a negative imaginary part \begin{equation} {\rm Im}\, \Gamma_{(1)} =-\int dx \sum_{{ \alpha}>0} (4\pi)^{-d/2}{2\pi\over \Gamma(d/2)}H^{d/2}, \qquad {\rm for }\ q=1, \label{52} \end{equation} \noindent which indicates the instability of the chromomagnetic vacuum with one `magnetic field', i.e., $q=1$. It is remarkable that this form is valid for any dimension. \subsection{Two `magnetic' fields} Now let us consider the most interesting case of two `magnetic' fields, i.e., $q=2$, which is possible in dimensions $d\ge 4$. Again we have to study the region ${\rm Re}\,s\ge d/2\ge 2$. This case is {\it critical} because for $q=1$ there is a strong infra-red instability, whilst for $q\ge 3$ the model is infra-red stable. The limit $z\to 0-i\varepsilon$ of the function $J_{s,\,2}(z)$ is not regular. From eqs.
(\ref{29}) and (\ref{33aa}) and the Appendix we now have \begin{eqnarray} \left.J_{s,\,2}(z)\right|_{z\to 0-i\varepsilon} &=&2^{s+1}(d-2)(s-1)(2\pi)^{-s} \cos\left({\pi \over 2}s\right)\Gamma(s+1)\zeta(s) \nonumber\\ && + 8s(s-1){z}^{s-2}. \label{51} \end{eqnarray} Note that the limit $z\to 0-i\varepsilon$ is well defined only in the region ${\rm Re}\,s> 2$, i.e., for $d\ge 5$. In this region we have \begin{equation} J_{s,\,2}=2^{s+1}(d-2)(s-1)(2\pi)^{-s}\cos\left({\pi \over 2}s\right)\Gamma(s+1)\zeta(s). \label{62c} \end{equation} \noindent Therefore, \begin{equation} J_{d/2,\,2}=(-1)^{[(d+1)/4]}2^{(d-1)/2} (d-2)^2(2\pi)^{-d/2}\Gamma(d/2+1)\zeta(d/2), \end{equation} $$ {\rm for\ odd}\ d\ge 5, \ (d=5,7,9,\dots) $$ and (in accordance with (\ref{34aa}) and (\ref{34aaa})) \begin{equation} J_{d/2,\,2}=0 \qquad {\rm for\ odd}\ d/2\ge 3,\ (d=6,10,14,\dots) \end{equation} \begin{equation} J_{d/2,\,2}=-2^{d/2-1}(d-2)^2B_{d/2} \qquad {\rm for\ even}\ d/2\ge 4,\ (d=8,12,16,\dots). \label{57c} \end{equation} \noindent The derivative $J'_{d/2,\,2}$ for even dimensions is given by \begin{equation} J'_{d/2,\,2}=(-1)^{[(d+2)/4]}2^{d/2-2}(d-2)^2\Gamma(d/2+1){\zeta(d/2)\over (2\pi)^{d/2-1}} \end{equation} $$ {\rm for\ odd}\ d/2\ge 3,\ (d=6,10,14,\dots) $$ and \begin{equation} J'_{d/2,\,2}=-2^{d/2-1}(d-2)^2B_{d/2} \Biggl[{\zeta'(d/2)\over \zeta(d/2)} +\Psi(d/2+1)+\log 2 -\log(2\pi)+{2\over d-2}\Biggr] \label{49e} \end{equation} $$ {\rm for\ even}\ d/2\ge 4,\ (d=8,12,16,\dots). $$ The case $d=4$ is not a regular one because the point $s=2$ is singular in the limit $z\to 0$. The value $J_{2,\,2}$ is still finite, \begin{equation} J_{2,\,2}=-{4\over 3}+16={44\over 3}, \qquad {\rm for}\ d=4, \end{equation} where the second contribution, $+16$, comes from the $z$-dependent term in (\ref{51}). Note that this cannot be obtained from (\ref{57c}) by putting $d=4$, but is in full accordance with eq. (\ref{34aaa}) for $q=2$ and $d=4$.
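These special values can be cross-checked with exact rational arithmetic; a minimal Python sketch (the only hand input is the Bernoulli number $B_2=1/6$; the $q=1$ value is taken from eq. (\ref{48}), the $q=2$ value combines the smooth part of (\ref{57c}) with the $z$-dependent term of (\ref{51}) at $s=2$):

```python
from fractions import Fraction

B2 = Fraction(1, 6)  # Bernoulli number B_2
d = 4

# q=1, even d/2: J_{d/2,1} = -2(d-2)(2^{d/2-1}-1) B_{d/2} + 2d
J_21 = -2 * (d - 2) * (2 ** (d // 2 - 1) - 1) * B2 + 2 * d
assert J_21 == Fraction(22, 3)

# q=2, d=4: smooth part -2^{d/2-1}(d-2)^2 B_{d/2} = -4/3, plus the
# z-dependent contribution 8 s(s-1) z^{s-2} -> 16 at s=2
J_22 = -2 ** (d // 2 - 1) * (d - 2) ** 2 * B2 + 8 * 2 * (2 - 1)
assert J_22 == Fraction(44, 3)
```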
However, the derivative $J'_{d/2,\,2}(z)$ for $d=4$ diverges in the limit $z\to 0$ \begin{equation} \left.J'_{2,\,2}(z)\right|_{z\to 0} =16\log{z}-{4\over 3}\left({\zeta'(2)\over \zeta(2)}-{\bf C}-\log (2\pi)+\log 2+1\right) +22+o(z). \end{equation} Therefore, from eq. (\ref{35}) we find that the effective action $\Gamma(M)$ has a logarithmic infra-red divergency in the limit $M\to 0$ \begin{equation} \Gamma_{(1)}(M)\Big|_{M\to 0}=-\int dx\sum\limits_{\alpha}(4\pi)^{-2}8H^2\log{H\over M^2}+O(1), \qquad (q=2,\ d=4). \label{61aa} \end{equation} \subsection{More than two `magnetic' fields} The case $q\ge 3$ is realizable only in dimensions $d\ge 6$. Therefore, we need, in fact, to know $J_{s,\,q}$ only for ${\rm Re}\,s\ge 3$. In this case $J_{s,\,q}(z)$ is an entire function of $s$ for ${\rm Re}\,z>(2-q)$. This means that it also remains an entire function of $s$ in the limit $z\to 0$. Therefore, we have from eq. (\ref{29}) \begin{equation} J_{d/2,\,q}=(d-2)a_{d/2,\,q}+q\,d(d-2)a_{d/2-2,q-2}, \end{equation} \begin{equation} J'_{d/2,\,q}=(d-2)a'_{d/2,\,q}+q\,d(d-2)a'_{d/2-2,q-2} +4q(d-1)a_{d/2,\,q-2}, \end{equation} where $a_{k,\,q}$, $a_{k+1/2,\,q}$ and $a'_{k,\,q}$ are {\it real numerical} constants listed in the Appendix. \subsection{The total number of negative and zero modes} Let us sum up the results of this section. \paragraph{q=1.} In the case $q=1$ there are negative modes of the operator $\Delta$. Therefore, the effective action becomes complex. The imaginary part of the effective action `counts' the negative modes in the sense that \begin{equation} {\rm Im}\,\Gamma_{(1)}=-{1\over 2}\pi N_{\rm YM}^{(-)}, \label{53} \end{equation} \noindent where $N_{\rm YM}^{(-)}$ is the total `number' of negative modes. Comparing (\ref{52}) and (\ref{53}) we find the number of negative modes to be \begin{equation} N_{\rm YM}^{(-)} =\int dx \sum_{{ \alpha}>0} (4\pi)^{-d/2}{4\over \Gamma(d/2)}H^{d/2}, \qquad (q=1).
\label{52b} \end{equation} This causes, of course, the infra-red instability of the vacuum under small (quantum) fluctuations in the directions of the negative modes. \paragraph{q=2, d=4.} In the case $q=2$, $d=4$ there are no negative modes but there are {\it zero modes}. The effective action is {\it real} but it has a logarithmic {\it infra-red divergency} in the limit $M\to 0$ \begin{equation} \Gamma_{(1)}(M)\stackrel{M\to 0}{=} {1\over 2}N_{\rm YM}^{(0)}\log\,{M^2\over H}+O(1), \label{64aa} \end{equation} where $N_{\rm YM}^{(0)}$ is the `number' of zero modes. Comparing eqs. (\ref{61aa}) and (\ref{64aa}) we find the total number of zero modes \begin{equation} N_{\rm YM}^{(0)} =\int dx \sum_{{ \alpha}>0} (4\pi)^{-2}16 H^{2}, \qquad (q=2,\ d=4) \label{54} \end{equation} The appearance of the logarithmic infra-red divergent term does not lead to serious physical problems. While the zeta-function regularizes the ultraviolet divergences well, one additionally has to renormalize the coupling constants of the classical action in the infra-red region at the very end. This means that there appears an additional {\it infra-red renormalization} parameter $\lambda$. In other words, after we have calculated $\zeta'_{\rm YM}(0,M)$ there are no ultraviolet divergences any longer, but there are logarithmic infra-red ones. Thus one has to renormalize them as well. \paragraph{q=2, d$>$4 and q$\ge$3, d$\ge$ 6.} In the cases $q=2, d>4$ and $q\ge 3, d\ge 6$ there are neither negative nor zero modes. The effective action is real and the vacuum is stable at the quantum level, i.e., under small quantum disturbances. \section{Analysis of the effective potential} We define the total effective potential $V(H)$ by \begin{equation} \Gamma=S+\Gamma_{(1)} =\int dx\sum_{\alpha>0}V(H).
\end{equation} Using the results of the previous section together with the classical Yang-Mills action calculated on the chromomagnetic background with equal magnetic fields \begin{equation} S=\int dx\sum_{\alpha>0}{2q\over g^2}H^2, \label{61} \end{equation} \noindent we find the general form of the effective potential to be \begin{equation} V(H)={2q\over g^2}H^2+H^{d/2}\left(a\,\log\,{H\over \mu^2} +b+c\,\log\,{H\over M^2}\right), \end{equation} where $a$, $b$ and $c$ are numerical constants depending only on the dimension of the spacetime $d$ and the number of `magnetic fields' $q$. The constants $a$, $b$ and $c$ are calculated in the previous section. The form of the effective potential depends on the signs of these constants. The constant `$a$' is not equal to zero only for even $d/2\ge 2$, i.e., in dimensions $d=4 \ ({\rm mod}\, 4)$. The constant `$b$' is always different from zero and is complex in the case $q=1$. The constant $c$ is not equal to zero only in the case $q=2, d=4$. Since in the case $q=1$ the effective potential becomes complex we do not study it in detail but concentrate our attention on the first nontrivial case $q=2$, which can be investigated analytically to the end. \subsection{q=2, d=4} In the case $q=2, d=4$ we have from the previous section \begin{equation} a={22\over 3(4\pi)^{2}}, \end{equation} \begin{equation} c=-{8\over (4\pi)^{2}}, \end{equation} \begin{equation} b={2\over 3(4\pi)^{2}}\left[{\zeta'(2)\over \zeta(2)}-{\bf C} +\log 2-\log (2\pi)+1\right]. \end{equation} Now one should absorb the infra-red divergency by renormalizing the coupling $g$. We first define \begin{equation} M=\lambda \varepsilon, \end{equation} where $\lambda$ is an arbitrary finite dimensionful parameter and $\varepsilon\to 0$.
Since $V(H)$ cannot depend on the arbitrary parameters $\mu$ and $\lambda$ one usually assumes that the coupling constant depends on them in such a way that the effective potential effectively depends neither on $\mu$ and $\lambda$ nor on $\varepsilon$, viz. \begin{equation} V(H)={4\over \alpha(\Lambda)}H^2 +\beta H^2 \log\,{H\over \Lambda}, \end{equation} where \begin{equation} {1\over \alpha(\Lambda)}={1\over \bar g^2(\mu, \lambda)}+{c\over 4}\log{\Lambda\over \lambda^2} +{a\over 4}\log{\Lambda\over \mu^2} \end{equation} \begin{equation} {1\over \bar g^2(\mu, \lambda)}={1\over g^2(\mu, \lambda)}-{c\over 2}\log{\varepsilon} +b, \end{equation} \begin{equation} \beta=a+c=-{2\over 3(4\pi)^2}, \end{equation} and $\Lambda$ is a constant scale of the `magnetic field'. As a consequence we have {\it two} different renormalization group equations for $\bar g$ with respect to $\mu$ and $\lambda$ \begin{equation} \mu{\partial\over\partial\mu}\bar g^2=\beta_{\rm UV}(\bar g), \label{75a} \end{equation} \begin{equation} \lambda{\partial\over\partial \lambda}\bar g^2=\beta_{\rm IR}(\bar g), \end{equation} where \begin{equation} \beta_{\rm UV}(\bar g)=-{a\over 2}\bar g^4=-{11\over 3(4\pi)^2}\bar g^4, \end{equation} \begin{equation} \beta_{\rm IR}(\bar g)=-{c\over 2}\bar g^4={4\over (4\pi)^2}\bar g^4, \end{equation} and an equation for $\alpha(\Lambda)=\bar g^2(\Lambda, \Lambda)$ \begin{equation} \Lambda{\partial\over\partial\Lambda}\alpha =\left.\left(\mu{\partial\over\partial\mu} +\lambda{\partial\over\partial \lambda}\right)\bar g^2\right|_{\mu=\lambda=\Lambda} =\beta_{\rm tot}(\alpha), \end{equation} \begin{equation} \beta_{\rm tot}(\alpha)=\beta_{\rm UV}+\beta_{\rm IR}=-{1\over 2}\beta\alpha^2={1\over 3(4\pi)^2}\alpha^2. \end{equation} Thus we find important properties of the beta-functions \begin{equation} \beta_{\rm UV}<0, \qquad \beta_{\rm IR}>0.
\end{equation} We see that according to the usual theory the effective coupling $\bar g(\mu, \lambda)$ is asymptotically free in the high-energy limit $\mu\to\infty$, $\lambda$ being fixed. Moreover, it is asymptotically free in the limit $\lambda\to 0$, $\mu$ fixed. One should note that the behavior of the effective constant $\alpha(\Lambda)$ for $\Lambda\to\infty$ is {\it different} from the usual one. The contribution of the infra-red divergences (zero modes) {\it changes the sign} of the total beta function \begin{equation} \beta_{\rm tot}>0. \end{equation} This means that the effective coupling $\alpha(\Lambda)$ is {\it not asymptotically free}! This effect occurs whenever the constant $c$ is different from zero, i.e., {\it only} in the case $q=2, d=4$. In the standard case $q=1, d=4$ studied in the literature $\beta_{\rm IR}=c=0$ (there is no logarithmic infra-red divergency). Therefore, one has only one renormalization parameter $\mu$ and only one standard renormalization group equation (\ref{75a}) with $\beta_{\rm UV}<0$. This then automatically leads to asymptotic freedom. The behavior of the effective potential is affected by the zero modes in a similar way. In our case, since $\beta<0$, it is immediately seen (see Fig. 1) that although the vacuum $H=0$ is perturbatively stable the effective potential is unbounded from below and, hence, the perturbative vacuum is, in fact, {\it metastable}. One should note again that without the contribution of the zero modes one would have $\beta=a>0$ and, hence, the perturbative vacuum would be absolutely stable (see Fig. 2). It is the zero modes that break the stability of the vacuum at the nonperturbative level --- the perturbative vacuum decays because of the zero modes.
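The numerical coefficients quoted for the beta functions follow from $a$ and $c$ by elementary arithmetic; a short Python sanity check, working in units of $(4\pi)^{-2}$:

```python
from fractions import Fraction

# constants for q=2, d=4 in units of (4*pi)^{-2}
a = Fraction(22, 3)
c = Fraction(-8)

beta = a + c
assert beta == Fraction(-2, 3)        # beta = a + c = -2/(3(4pi)^2) < 0
assert -a / 2 == Fraction(-11, 3)     # beta_UV coefficient, negative
assert -c / 2 == Fraction(4)          # beta_IR coefficient, positive
assert -beta / 2 == Fraction(1, 3)    # beta_tot coefficient: positive!
```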
\begin{figure}[h] \begin{center} \begin{picture}(220,170) \put(0,0){\vector(0,1){150}} \put(-14,140){$V$} \put(0,80){\vector(1,0){200}} \put(190,60){$H$} \bezier{400}(0,80)(20,81)(50,100) \bezier{400}(50,100)(88,130)(100,130) \bezier{400}(100,130)(145,130)(180,0) \end{picture} \end{center} \caption{q=2, \ d=4, 5, 6, 7 \ ({\rm mod}\ 8)} \end{figure} \subsection{q=2, d$>$4.} \subsubsection{d=2k+1, 4k+2} Consider now the case of odd dimension and odd $d/2\ge 3$, i.e., $d=5,7,9,\dots$ and $d=6,10,14,\dots$. Using the results of the previous section we find that in both cases the constants `$a$' and `$c$' vanish \begin{equation} a=c=0, \end{equation} and the constant `$b$' can be written in the form \begin{equation} b={2^{d/2-2}\over \sin(\pi d/4)} (d-2)^2 {2\pi\over (8\pi^2)^{d/2}}\zeta(d/2). \label{80aa} \end{equation} \paragraph{d=8k+1, 8k+2, 8k+3.} We see from (\ref{80aa}) that for $d=9,10,11$ (mod~8), the coefficient $b$ is positive and, therefore, the effective potential is positive and monotone (see Fig. 2). The vacuum $H=0$ gives the absolute minimum of the effective potential and is stable. \begin{figure}[h] \begin{center} \begin{picture}(120,110) \put(0,0){\vector(0,1){100}} \put(-14,90){$V$} \put(0,17){\vector(1,0){100}} \put(90,3){$H$} \bezier{400}(0,17)(76,18)(90,87) \end{picture} \end{center} \caption{q=2, \ d=9, 10, 11\ ({\rm mod}\ 8)} \end{figure} \paragraph{d=8k+5, 8k+6, 8k+7.} For $d=5,6,7$ (mod~8), the constant $b$ is negative. Qualitatively this case is similar to the case $d=4$, $q=2$ (see Fig. 1). Again the effective potential is unbounded from below and the perturbative vacuum $H=0$ is {\it metastable}. \subsubsection{d=4k} Finally let us consider the case of even $d/2\ge 4$, i.e., $d=8 \ ({\rm mod}\, 4)$.
The constant `$c$' vanishes \begin{equation} c=0 \end{equation} and the constants `$a$' and `$b$' read \begin{equation} a=(-1)^{d/4}{2^{d/2-1}(d-2)^2\over (4\pi)^{d/2}\Gamma(d/2+1)}|B_{d/2}|, \end{equation} \begin{equation} b=a\left({\zeta'(d/2)\over\zeta(d/2)}+{2\over d-2}-{\bf C} +\log 2-\log(2\pi)\right). \end{equation} \paragraph{d=8k+4.} We see therefrom that for $d=8k+4$, i.e., $d=12\ ({\rm mod}\, 8)$, the constant $a$ is negative. In this case the behavior of the effective potential repeats qualitatively that of the case $d=4$ (Fig. 1). \subsection{q=2, d=8k.} The only case left is that of $d=8\ ({\rm mod}\, 8)$. This is the most interesting one. In all other cases there are no stable nontrivial, $H\ne 0$, vacuum states. Now the constant $a$ is {\it positive} and this radically changes the behavior of the effective potential. First, it is bounded from below, i.e., there are {\it some} stable vacua, at least the perturbative one, $H=0$. It is not difficult to see that for some values of the parameters there is indeed some nontrivial local minimum that can also provide the {\it absolute} minimum of the effective potential. The form of the effective potential is governed by the coupling constant (see Fig. 3). One can study the behavior of the effective potential in detail and find the following.
\begin{figure}[h] \begin{center} \begin{picture}(200,200) \put(0,0){\vector(0,1){200}} \put(-14,190){$V$} \put(0,100){\vector(1,0){200}} \put(190,80){$H$} \put(40,140){I: $g<g_1$} \bezier{60}(0,100)(6,100)(60,127) \bezier{40}(60,127)(94,139)(110,169) \bezier{500}(0,100)(10,100)(40,110) \bezier{500}(40,110)(59,113)(80,112) \bezier{500}(80,112)(110,112)(128,169) \put(118,170){$g_1$} \bezier{500}(0,100)(10,101)(24,104) \bezier{500}(24,104)(50,113)(70,104) \bezier{500}(70,104)(112,84)(136,169) \put(138,170){$g_{\rm crit}$} \put(82,102){II} \bezier{60}(0,100)(50,108)(80,88) \bezier{60}(80,88)(126,61)(146,169) \bezier{500}(0,100)(46,100)(76,61) \bezier{800}(76,61)(140,-35)(164,169) \put(80,60){III:$g>g_{\rm crit}$} \put(166,170){$g=\infty$} \end{picture} \end{center} \caption{q=2, \ d=8 \ ({\rm mod}\ 8)} \end{figure} There are three essentially different regions of the coupling constant \vbox{ \begin{eqnarray} {\rm I}&:&\qquad 0<g^2<g^2_1 \nonumber\\ {\rm II}&:&\qquad g^2_1<g^2<g^2_{\rm crit} \\ {\rm III}&:&\qquad g^2>g^2_{\rm crit} \nonumber \end{eqnarray} } \noindent where \begin{equation} g^2_1={8(d-4)\over d a}\exp\left[{2\over d}(d-2)\right]H_0^{-(d-4)/2}, \end{equation} \begin{equation} g^2_{\rm crit}=2(d-4){e\over a}H_0^{-(d-4)/2}, \end{equation} with \begin{equation} H_0=\mu^2e^{-{b\over a}}. \end{equation} In region I the effective potential is a monotone increasing function for $H>0$ and has only one minimum at $H=0$ --- the perturbative vacuum. In region II there appears an additional {\it local} minimum at some \begin{equation} H_{\rm min}=H_0 h_{\rm min}, \end{equation} where $h_{\rm min}$ is the solution of the equation \begin{equation} h^{(d-4)/2}\left(\log h+{2\over d}\right)+{4\over d\gamma}=0, \end{equation} with \begin{equation} \gamma={a\over 4}g^2H_0^{(d-4)/2}.
\end{equation} \noindent The values of $H_{\rm min}$ lie in the region \begin{equation} H_1<H_{\rm min}<H_{\rm crit}, \end{equation} where \begin{equation} H_1=H_0\exp\left[-{4(d-2)\over d(d-4)}\right], \end{equation} \begin{equation} H_{\rm crit}=H_0\exp\left(-{2\over d-4}\right). \end{equation} This local minimum indicates that there is an additional metastable vacuum state with the energy $V_{\rm min}$ larger than that of the perturbative level, $V_{\rm min}>0$. Such a metastable state must decay into the usual perturbative vacuum state $H=0$. At the critical value of the coupling constant $g=g_{\rm crit}$ the effective potential has the minimum at $H_{\rm min}=H_{\rm crit}$ and the energy of this additional state becomes equal to that of the perturbative vacuum, $V_{\rm min}=0$. This means that both states, $H=0$ and $H=H_{\rm crit}$, are physically equivalent. The true vacuum state in this case is a mixture of the perturbative vacuum and the nonperturbative one. In other words, at the critical value of the coupling constant a phase transition occurs: a new stable vacuum state appears and the system decays into it. In region III the perturbative vacuum is either metastable or absolutely unstable. The minimum of the effective potential is provided by the new nonperturbative stable vacuum state $H=H_{\rm vac}$. The values of $H_{\rm vac}$ lie in the region \begin{equation} H_{\rm crit}<H_{\rm vac}<H_2, \end{equation} where \begin{equation} H_2=H_0 e^{-{2\over d}}. \end{equation} It is interesting to note that the values of the nontrivial vacuum fields are bounded from above by $H_2$ --- there are no vacuum states with $H_{\rm vac}>H_2$. Besides, as the coupling constant $g$ increases, the vacuum energy decreases and reaches its absolute minimum \begin{equation} \min_{0<g<\infty}V_{\rm vac}=-{2a\over d e}H_0^{d/2}.
\end{equation} \section{Concluding remarks} In this paper we continued the study of the low-energy behavior of non-Abelian gauge theories initiated in \cite{Avr-jmp95a}. We applied the covariant algebraic method for the calculation of the heat kernel of Laplace type operators developed in \cite{Avr-plb93,Avr-jmp95b} to the case of covariantly constant background fields, (\ref{2}), with $q$ `magnetic fields' of equal amplitudes (see eqs. (\ref{16}), (\ref{17}), (\ref{26})). The heat kernels, the zeta-function and the one-loop effective action are calculated explicitly in terms of the function $b_{s,\,q}(z)$, which is studied in detail in the Appendix. The cases $q=1$ and $q=2$ for any spacetime dimension $d$ are studied analytically to the end --- the zeta-functions and the effective action are calculated exactly. We confirmed the old result that in the case of only one `magnetic field', $q=1$, the vacuum is {\it essentially unstable}, i.e., unstable under small (quantum) disturbances. In other words, there are {\it negative} modes that destroy the vacuum immediately --- there are no stable vacuum states with only one magnetic field. In the case of two `magnetic fields' of equal amplitudes the effective potential is studied in detail. We distinguish two different cases, $d=4$ and $d\ge 5$. For $d=4$ there are {\it zero} modes of the gauge fields. These zero modes do contribute to the renormalization group beta functions and {\it radically change} the asymptotic behavior of the effective coupling constant --- it is not asymptotically free any longer. The perturbative vacuum $H=0$ is stable under small (quantum) disturbances but is unstable under classical variations of the background field. For $d=5,6,7,12\, ({\rm mod}\, 8)$ the vacuum is again metastable (Fig. 1). For $d=9,10,11\, ({\rm mod}\, 8)$ the perturbative vacuum $H=0$ is absolutely stable (Fig. 2).
For $d=8\, ({\rm mod}\, 8)$ and sufficiently large coupling constant there appears some nonperturbative stable vacuum state $H=H_{\rm vac}\ne 0$ with negative vacuum energy $V_{\rm vac}<0$ (Fig. 3). One should note that our analysis is applicable directly to Euclidean field theory. In the case of a spacetime of Lorentzian signature there is a stronger restriction $d\ge 2q+1$ --- there must be enough space to put $q$ magnetic fields. That is why our results are not applicable to the case of Lorentzian spacetimes of dimension $d=2q$ \cite{Avr-jmp95a}, e.g. for $q=2, d=4$. \section*{Acknowledgements} This work was supported in part by the Alexander von Humboldt Foundation and the Deutsche Forschungsgemeinschaft. Financial support of the Naples Section of the INFN, where a part of this work was done, is gratefully acknowledged. I would also like to thank R. Musto for many fruitful discussions. I am especially indebted to G. Esposito for arranging my visit to Naples and for his hospitality. \section*{Appendix. \ The function $b_{s,\,q}(z)$} In this appendix we study the function $b_{s,\,q}(z)$ defined by \begin{equation} b_{s,\,q}(z)={1\over\Gamma(-s)}\int\limits_0^\infty d t\,t^{-s-1+q}{e^{-tz}\over \sinh^q t}. \label{a1} \end{equation} It is not difficult to show that for ${\rm Re}\, z>-q$, the integral (\ref{a1}) converges in the region ${\rm Re}\,s<0$ and defines an {\it entire} function of $s$ by analytical continuation. In particular, in the region ${\rm Re}\, s<N$, $N$ being some natural number, it is defined by \begin{equation} b_{s,\,q}(z)={(-1)^N\over \Gamma(-s+N)} \int\limits_0^\infty d t\,t^{-s-1+N} {\partial^N\over\partial t^N} \left[e^{-tz}\left({t\over \sinh t}\right)^q\right]. \label{a30} \end{equation} For $s\ne 0,1,2,\dots$ the function $b_{s,\,q}(z)$, as a function of $z$, has a branch point at $z=-q$. It is an analytic function of $z$ in the complex plane cut along the real axis from $z=-q$ to $-\infty$.
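As a quick numerical sanity check of the definition (\ref{a1}): for $q=0$ the integral reduces to a Gamma function, so that $b_{s,\,0}(z)=z^s$ for ${\rm Re}\,s<0$; a Python sketch using a simple midpoint rule (step size and cutoff chosen by hand):

```python
import math

# b_{s,0}(z) = (1/Gamma(-s)) * int_0^inf t^{-s-1} e^{-t z} dt = z^s, Re s < 0
s, z = -1.5, 2.0
N, T = 200000, 20.0          # midpoint rule with N steps on (0, T]
dt = T / N
integral = sum(((i + 0.5) * dt) ** (-s - 1) * math.exp(-(i + 0.5) * dt * z)
               for i in range(N)) * dt
b = integral / math.gamma(-s)
assert abs(b - z ** s) < 1e-4
```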
At non-negative integer points $s=0,1,2,\dots$, exactly in the same manner as in \cite{Avr-npb91}, we find that the values of the entire function $b_{s,\,q}(z)$ are determined by the Taylor expansion of the integrand \begin{equation} b_{k,\,q}(z)=\left.\left(-{\partial\over\partial t}\right)^{k} \left[e^{-tz}\left({t\over\sinh t}\right)^q\right] \right\vert_{t=0}, \qquad (k=0,1,2,\dots) \label{a31} \end{equation} For half-integer $s=k+1/2$, by choosing $N=k+1$ in (\ref{a30}) we get \begin{equation} b_{k+1/2,\,q}(z)={1\over\sqrt\pi}\int\limits_0^\infty {dt\over \sqrt t} \left(-{\partial\over\partial t}\right)^{k+1} \left[e^{-tz}\left({t\over\sinh t}\right)^q\right], \qquad (k=0,1,2,\dots) \label{a41c} \end{equation} \noindent By differentiating eq. (\ref{a30}) and choosing $N=k+1$ we also obtain the derivative $$ b'_{s,\,q}(z)\equiv{\partial \over \partial s}b_{s,\,q}(z) $$ at integer points $s=k$ \begin{equation} b'_{k,\,q}(z)= -{\bf C} b_{k,\,q}(z) -\int\limits_0^\infty d t\,\log\, t \left(-{\partial\over\partial t}\right)^{k+1} \left[e^{-tz}\left({t\over\sinh t}\right)^q\right] \label{a43c} \end{equation} For nonpositive integer $q$ the function $b_{s,\,q}(z)$ is expressed in terms of elementary functions \begin{equation} b_{s,\,q}(z)=2^q{\Gamma(-q+1)\Gamma(-s+q)\over\Gamma(-s)} \sum\limits_{0\le n\le -q}{(-1)^n\over n!}{(z+q+2n)^{s-q}\over \Gamma(-q-n+1)} \end{equation} $$ (q=0,-1,-2,\dots). $$ In particular, \begin{equation} b_{s,\,0}(z)=z^s, \end{equation} \begin{equation} b_{s,\,-1}(z)={1\over 2(s+1)}\left[(z+1)^{s+1}-(z-1)^{s+1}\right], \end{equation} etc. For $q\ge 1$ one can expand $1/\sinh^q t$ in powers of exponentials to get an infinite series representation \begin{equation} b_{s,\,q}(z)=2^q(-1)^q{\Gamma(s+1)\over\Gamma(s-q+1)\Gamma(q)} \sum\limits_{n\ge 0}{\Gamma(q+n)\over n!}(z+q+2n)^{s-q} \label{aa20} \end{equation} Using the function (see \cite[p.
27]{Erdelyi}) \begin{equation} \Phi(\lambda, p, v)={1\over\Gamma(p)}\int\limits_0^\infty dt\,t^{p-1} {e^{-vt}\over 1-\lambda e^{-t}}=\sum\limits_{n\ge 0}{\lambda^n\over (n+v)^p} \end{equation} one finally has \begin{equation} b_{s,\,q}(z)=(-1)^q2^s{\Gamma(s+1)\over\Gamma(s-q+1)\Gamma(q)} \left.\left({\partial\over\partial\lambda}\right)^{q-1}\left[\lambda^{q-1} \Phi\left(\lambda,q-s,{z+q\over 2}\right)\right]\right|_{\lambda=1} \end{equation} Some properties of the function $\Phi(\lambda, p, v)$ as well as its connection with the Riemann and Hurwitz zeta-functions are described in \cite{Erdelyi}. Let us now study the boundary value of the function $b_{s,\,q}(z)$ and its derivatives as $z\to 0-i\varepsilon$ \begin{equation} a_{s,\,q}\stackrel{\rm def}{=}\lim_{z\to 0-i\varepsilon}b_{s,\,q}(z), \qquad a'_{s,\,q}\stackrel{\rm def}{=}\lim_{z\to 0-i\varepsilon}b'_{s,\,q}(z). \end{equation} The values of the function $b_{s,\,q}(z)$ at non-negative integer points given by (\ref{a31}) are polynomials in $z$ \begin{equation} b_{k,\,q}(z)=\sum\limits_{0\le n\le k} {k\choose n}z^{k-n}a_{n,\,q}, \end{equation} where \begin{equation} a_{k,\,q}=\left.\left(-{\partial\over\partial t}\right)^{k} \left({t\over\sinh t}\right)^q \right\vert_{t=0}, \qquad (k=0,1,2,\dots) \label{a31a} \end{equation} Taking the limit $z\to 0$ and observing that the Taylor series of the function $(t/\sinh t)^q$ is a power series in $(-t^2)$ with {\it positive} coefficients we find \begin{equation} a_{2k+1,\,q}=0 \qquad (k=0,1,2,\dots) \label{a38b} \end{equation} and the constants $a_{2k,\,q}$ have the property \begin{equation} a_{2k,\,q}=(-1)^k|a_{2k,\,q}|. \end{equation} The values of the function $a_{s,\,q}$ at half-integer points $s=k+1/2$ and its derivative $a'_{s,\,q}$ at integer points $s=k$ depend crucially on $q$. For nonpositive $q$ the function $b_{s,\,q}(z)$ is not analytic at the point $z=0$. Therefore, one has to take into account the infinitesimal imaginary part in $z\to 0-i\varepsilon$.
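The constants $a_{k,\,q}$ can be generated by elementary power-series arithmetic; a small Python sketch (series truncated at order $t^8$, cross-checked against $t/\sinh t = 1 - t^2/6 + 7t^4/360 - \dots$):

```python
from fractions import Fraction
from math import factorial

N = 8  # truncate all series at order t^N
# Taylor coefficients of sinh(t)/t = sum_m t^{2m}/(2m+1)!
sh = [Fraction(1, factorial(k + 1)) if k % 2 == 0 else Fraction(0)
      for k in range(N)]
# series inversion gives t/sinh t (the leading coefficient of sh is 1)
inv = [Fraction(0)] * N
inv[0] = Fraction(1)
for n in range(1, N):
    inv[n] = -sum(sh[k] * inv[n - k] for k in range(1, n + 1))

def coeffs(q):
    # Taylor coefficients of (t/sinh t)^q via repeated Cauchy products
    out = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for _ in range(q):
        out = [sum(out[k] * inv[n - k] for k in range(n + 1)) for n in range(N)]
    return out

def a(k, q):
    # a_{k,q} = (-d/dt)^k (t/sinh t)^q at t=0
    return (-1) ** k * factorial(k) * coeffs(q)[k]

assert a(1, 1) == 0 and a(3, 2) == 0       # odd coefficients vanish
assert a(2, 1) == Fraction(-1, 3)          # from t/sinh t = 1 - t^2/6 + ...
assert a(2, 2) == Fraction(-2, 3)
```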
We have for $q=-1$ \begin{equation} a_{k+1/2,\,-1}={1\over 2k+3} \left[1+i(-1)^{k+1}\right], \end{equation} \begin{equation} a'_{k,\,-1}=-{1\over 2(k+1)^2} \left[1+(-1)^k\right]+(-1)^{k+1}i\pi {1\over 2(k+1)}. \end{equation} and for $q=0$ \begin{equation} a_{k+1/2,\,0}=a'_{k+1,\,0}=0, \qquad (k=0,1,2,\dots) \end{equation} For $k=0$, $a'_{0,\,0}$ is not well defined because the derivative $b'_{s,\,0}(z)$ at $s=0$ is singular \begin{equation} b'_{0,\,0}(z)=\log z+o(z). \end{equation} As we already mentioned above, $b_{s,\,q}(z)$ is an entire function of $s$ for ${\rm Re}\,z>-q$. This means that for $q\ge 1$ the function $b_{s,\,q}(z)$ also remains analytic in the limit $z\to 0$, i.e., $a_{s,\,q}$ is an entire function of $s$ for $q\ge 1$. Therefore, for $q\ge 1$ we can just put $z=0$ in eqs. (\ref{a41c}) and (\ref{a43c}) to obtain \begin{equation} a_{k+1/2,\,q}={1\over\sqrt\pi}\int\limits_0^\infty {dt\over \sqrt t} \left(-{\partial\over\partial t}\right)^{k+1}\left({t\over \sinh t}\right)^q \qquad (k=0,1,2,\dots), \label{a41b} \end{equation} \noindent and \begin{equation} a'_{k,\,q}= -{\bf C} a_{k,\,q} -\int\limits_0^\infty d t\,\log\, t \left(-{\partial\over\partial t}\right)^{k+1}\left({t\over \sinh t}\right)^q \label{a43d} \end{equation} $$ (k=0,1,2,\dots). $$ We see that $a_{k+1/2,\,q}$ and $a'_{k,\,q}$ are {\it real} numerical constants. There are two important particular cases when the integral \begin{equation} a_{s,\,q}={1\over\Gamma(-s)}\int\limits_0^\infty d t\,t^{-s-1+q} {1\over \sinh^q t} \label{aa1} \end{equation} can be calculated analytically for $q\ge 1$ too, namely, $q=1$ and $q=2$. Substituting $z=0$ in eq. (\ref{aa20}) we obtain for $q=1$ \begin{equation} a_{s,\,1}=-2s(1-2^{s-1})\zeta(-s+1) \end{equation} and for $q=2$ \begin{equation} a_{s,\,2}=2^s s(s-1)\zeta(-s+1).
\end{equation} Using the property \cite{Erdelyi} \begin{equation} \zeta(1-s)=2(2\pi)^{-s}\cos\left({\pi\over 2} s\right)\Gamma(s)\zeta(s) \end{equation} we obtain the values of the functions $a_{s,\,1}$ and $a_{s,\,2}$ and their derivatives at half-integer points and integer points \begin{equation} a_{k+1/2,\,1}=(-1)^{[(k+1)/2]}2^{3/2}(2^{k-1/2}-1) (2\pi)^{-k-1/2}\Gamma(k+3/2)\zeta(k+3/2) \end{equation} \begin{equation} a'_{2k+1,\,1}=(-1)^{k+1}(2^{2k}-1)(2k+1)!(2\pi)^{-2k}\zeta(2k+1) \end{equation} \begin{eqnarray} a'_{2k,\,1}&=&(-1)^{k}4(2^{2k-1}-1)(2k)!(2\pi)^{-2k}\zeta(2k) \nonumber\\ && \times \left[{\zeta'(2k)\over\zeta(2k)}+\Psi(2k+1) +{2^{2k-1}\over 2^{2k-1}-1}\log 2-\log(2\pi) \right] \end{eqnarray} \begin{equation} a_{k+1/2,\,2}=(-1)^{[(k+1)/2]}(2k-1)2^{k} (2\pi)^{-k-1/2}\Gamma(k+3/2)\zeta(k+3/2) \end{equation} \begin{equation} a'_{2k+1,\,2}=(-1)^{k+1}2^{2k}2k(2k+1)!(2\pi)^{-2k}\zeta(2k+1) \end{equation} \begin{eqnarray} a'_{2k,\,2}&=& (-1)^{k}2^{2k+1}(2k-1)(2k)!(2\pi)^{-2k}\zeta(2k) \nonumber\\ && \times \left[{\zeta'(2k)\over\zeta(2k)}+\Psi(2k+1) +\log 2-\log(2\pi)+{1\over 2k-1} \right]. \end{eqnarray}
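Both the reflection formula and the resulting special values can be verified numerically; a Python sketch using a truncated Dirichlet series for $\zeta(s)$ and the classical values $\zeta(-1)=-1/12$, $\zeta(-3)=1/120$:

```python
import math

def zeta(s, terms=100000):
    # truncated Dirichlet series, adequate for s >= 2
    return sum(n ** -s for n in range(1, terms + 1))

# reflection formula zeta(1-s) = 2 (2 pi)^{-s} cos(pi s/2) Gamma(s) zeta(s)
known = {2: -1.0 / 12, 4: 1.0 / 120}   # zeta(-1), zeta(-3)
for s in (2, 4):
    rhs = 2 * (2 * math.pi) ** (-s) * math.cos(math.pi * s / 2) \
          * math.gamma(s) * zeta(s)
    assert abs(rhs - known[s]) < 1e-5

# cross-check a_{s,1} = -2 s (1 - 2^{s-1}) zeta(1-s) at s=2 against the
# Taylor value a_{2,1} = -1/3 of (t/sinh t)
a21 = -2 * 2 * (1 - 2) * known[2]
assert abs(a21 - (-1.0 / 3)) < 1e-12
```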
\section{Introduction} An open topic in machine learning is the transferability of a trained model to a new set of prediction categories without retraining efforts, in particular when some classes have very few samples. Few-shot learning algorithms have been proposed to address this, where prediction and training are based on the concept of an episode. Unlike other setups, the prediction in few-shot models is relative to the support set classes of an episode \cite{MAML:finn2017model,SNAIL:mishra2018a,MTL:sun2019meta}. The label categories vary in each episode and training is performed by drawing randomised sets of classes, thus iterating over varying prediction tasks when learning model parameters. Effectively, this learns a class-agnostic similarity metric which generalises to novel categories \cite{RN:sung2018learning,CAN:hou2019cross}. Unfortunately, the adversarial susceptibility of models under the few-shot paradigm remains relatively unexplored, albeit gaining traction \cite{xu2020meta,goldblum2019robust}. This is in contrast to models under the standard classification setting, where this phenomenon has been widely explored \cite{szegedy2013intriguing,madry2018towards}. The relative nature of predictions in few-shot setups allows going beyond crafting adversarial test samples. The attacker could craft adversarial perturbations for all $n$-shot support samples of the attacked class and insert them into the deployment phase of the model. The goal is to misclassify test samples of the attacked class regardless of the samples drawn in the other classes. In this work, we consider the impact on the few-shot accuracy of the attacked class, in the presence of adversarial perturbations, even when different samples were drawn for the non-attacked classes. This is a highly realistic scenario, as the victim could unknowingly draw such adversarial support sets during the evaluation phase once they were inserted by the attacker.
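To make the episodic setup concrete, here is a minimal sketch of episode sampling (the data layout and all names are illustrative, not tied to any particular few-shot framework):

```python
import random

def sample_episode(dataset, n_way, k_shot, n_query, rng):
    """Draw one few-shot episode: n_way classes, with k_shot support and
    n_query query samples per class; class identities vary per episode."""
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):   # labels are episode-relative
        picks = rng.sample(dataset[cls], k_shot + n_query)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query

# toy usage: 10 classes with 20 samples each
data = {f"class{c}": [(c, i) for i in range(20)] for c in range(10)}
s, q = sample_episode(data, n_way=5, k_shot=5, n_query=3, rng=random.Random(0))
assert len(s) == 25 and len(q) == 15
```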
The use of adversarial samples to attack settings other than the one they were generated for is known as a transferability attack. Prior methods proposed to mitigate such adverse effects through the lenses of detection \cite{xu2017feature,cintasdetecting} and model robustness \cite{folz2020adversarial,zhang2020interpreting}. Though these methods work well for neural networks under the conventional classification setting, they will fail on few-shot classifiers due to limited data. Furthermore, these defences were not trained to transfer their pre-existing knowledge towards a novel distribution of class samples, contrary to few-shot classifiers. With the aforementioned drawbacks in mind, we propose a conceptually simple method for performing attack-agnostic detection of adversarial support samples in this setting. We exploit the concept of support and query sets of few-shot classifiers to measure the similarity of samples within a support set after filtering. We perform this by randomly splitting the original support set into auxiliary support and query sets, followed by filtering the auxiliary support and predicting on the query. If the samples are not self-similar, we will flag the support set as adversarial. To this end, we make the following contributions in our work: \begin{enumerate} \item We propose a novel attack-agnostic detection mechanism against adversarial support sets in the domain of few-shot classification. This is based on self-similarity under randomised splitting of the support set and filtering, and is the first, to the best of our knowledge, for the detection of adversarial support sets in few-shot classifiers. \item We investigate the effects of a unique white-box adversary against few-shot frameworks, through the lens of transferability attacks.
Rather than crafting adversarial query samples as in standard machine learning setups, we optimise adversarial support sets in a setting where all non-target classes are varying. \item We provide further analysis on the detection performance of our algorithm when using differing filtering functions and also different formulation variants of the aforementioned self-similarity quantity. \end{enumerate} The remainder of our paper is structured as follows: Section~\ref{related} discusses prior literature and Section~\ref{background} provides readers with the background to this study. We dive into our method in Section~\ref{method} and describe our experimental settings and evaluation results in Section~\ref{exps}. We provide further in-depth discussion in Section~\ref{discuss} and we conclude in Section~\ref{conclude} with a summary and future work. \section{Related Works} \label{related} \textbf{Poisoning of Support Sets}: There is limited literature examining the poisoning of support sets in meta-learning. \cite{xu2020meta} proposed an attack routine, Meta-Attack, extending from the highly explored Projected Gradient Descent (PGD) attack \cite{madry2018towards}. They assumed a scenario where the attacker is unable to obtain feedback from the classification of the query set. Hence, the authors used the empirical loss on the support set to generate adversarial support samples that induce misclassification on unseen queries. \subsection{Autoencoder-based and Feature Preserving-based Defences} \cite{cintasdetecting} performs detection of such attacks using Non-parametric Scan Statistics (NPSS), based on hidden node activations from an autoencoder. \cite{folz2020adversarial} proposed using an autoencoder to reconstruct input samples such that only the necessary signals remain for classification. Their method requires fine-tuning the decoder based on the classification loss of the input with respect to the ground truth.
However, under the few-shot setting, such fine-tuning based on the classification loss should be avoided as we would require sufficiently many samples from each class for this step. \cite{zhang2020interpreting} attempts to stabilise sensitive neurons which might be more prone to the effects of adversarial perturbations, by enforcing similar behaviours of these neurons between clean and adversarial inputs. Their method requires adversarial samples during the training process, which potentially makes defending against novel attacks challenging. Hence, we propose a detection approach that does not make use of any adversarial samples. Though we employed the concept of feature preserving as one of our various filtering functions, our approach is different from \cite{zhang2020interpreting} as it does not suffer from this limitation. Thus, in our work, we adopted an approach that does not require any labelled data. \section{Background} \label{background} \subsection{Few-shot Classifiers Used} A majority of few-shot classifiers are trained with \emph{episodes} sampled from the training set. Each episode consists of a support set $S=\{x_s, y_s\}_{s=1}^{K*N}$ with $N$ labelled samples for each of $K$ classes, and a query set $Q=\{x_q\}_{q=1}^{N_q}$ with $N_q$ unlabelled samples from the same $K$ classes to be classified, denoted as a $K$-way $N$-shot task. The metric-based classifiers learn a distance metric that compares the features of support samples $x_s$ and query sample $x_q$ and generates similarity scores for classification. During inference, the episodes are sampled from the test set, which has no overlapping categories with the training set. In this work, we explored two known metric-based few-shot classifiers, namely the RelationNet (RN) \cite{RN:sung2018learning} and a state-of-the-art model, the cross-attention network (CAN) \cite{CAN:hou2019cross}.
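The episodic structure described above can be sketched as follows; function and variable names are illustrative placeholders, not taken from the models' actual implementations:

```python
import random

def sample_episode(data_by_class, k_way=5, n_shot=5, n_query=3, rng=None):
    """Draw one K-way N-shot episode: a labelled support set S and a query set Q.

    `data_by_class` maps each class label to its pool of samples (assumed
    structure for this sketch).
    """
    rng = rng or random.Random(0)
    classes = rng.sample(sorted(data_by_class), k_way)   # the K episode classes
    support, query = [], []
    for c in classes:
        picks = rng.sample(data_by_class[c], n_shot + n_query)
        support += [(x, c) for x in picks[:n_shot]]      # K*N labelled supports
        query += picks[n_shot:]                          # unlabelled queries
    return support, query
```

Training iterates over many such episodes with freshly drawn class sets, which is what yields the class-agnostic similarity metric.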
As illustrated in Figure~\ref{fig:few_shot_classifiers}, the support and query samples are first encoded by a backbone CNN to get the image features \{$f^c_s \ | \ c=1, \dots , K$\} and $f_q$, respectively. The feature vectors $f_s^c$ and $f_q$ lie in $\mathbbm{R}^{d_f \times h_f \times w_f}$, where $d_f$, $h_f$, and $w_f$ are the channel dimension, height, and width of the image features. If $N>1$, $f^c_s$ will be the averaged feature of the support samples from class $c$. \begin{figure} \centering \includegraphics[trim={6cm 98.4cm 13cm 5.3cm},clip,width = 0.45\textwidth]{few-shot-models} \caption{RelationNet and CAN few-shot classifiers.} \label{fig:few_shot_classifiers} \end{figure} To measure the similarity between $f_s^c$ and $f_q$, the RN model concatenates $f_q$ and $f_s^c$ along the channel dimension pairwise and uses a \emph{relation module} to calculate the similarities. The CAN model adopts a \emph{cross-attention} module that generates attention weights for every $\{f_s^c, f_q\}$ pair. The attended image features are further classified with cosine similarity in the spirit of dense classification \cite{DENSECLASSIFICATION:lifchitz2019dense}. \subsection{Adversarial Attacks} Here, we describe the base adversarial attacks used in our experiments. The PGD attack \cite{madry2018towards} applies the sign of the gradient of the loss function to the input data as adversarial perturbations. It initialises an adversarial candidate with a small noise injection; the update then repeats for a number of iterations. For an input $x_i$ at the $i^{th}$ iteration: \begin{gather} x_0 = x_{original} + Uniform(-\epsilon, \epsilon), \\ x_i = Clip_{x, \epsilon}\{x_{i-1} + \eta \, sign(\nabla_{x} L(h(x_{i-1}), y_{truth}))\}, \label{pgd} \end{gather} where $h(\cdot)$ denotes the prediction logits of classifier $h$ for some input sample, $y_{truth}$ is the ground truth label, $L$ is the loss used during training (i.e.
cross entropy with softmax), $\eta$ is the step size and $\epsilon$ is the adversarial strength which limits the adversarial candidate $x_i$ within an $\epsilon$-bounded $\ell_\infty$ ball. The Carlini-Wagner (CW) attack \cite{carlini2017towards} finds the smallest $\delta$ that successfully fools a target model using the Adam optimiser. Their attack solves the following objective function: \begin{equation} \begin{split} &\min_\delta ||\delta||_2 + const\cdot L(x+\delta, \kappa),\\ s.t.~L(x', \kappa) &= \max(-\kappa, \max_i(h(x')_{i \neq t}) - h(x')_t). \end{split} \label{cweqn} \end{equation} The first term penalises $\delta$ from being too large while the second term ensures misclassification. The value $const$ is a weighting factor that controls the trade-off between finding a low $\delta$ and having a successful misclassification. $h(\cdot)_i$ refers to the logits of prediction index $i$ and $t$ refers to the target prediction. $\kappa$ is the confidence value that influences the logits score differences between the target prediction $t$ and the next best prediction $i$. \subsection{Threat Model} We assume that the attacker wants to destroy the few-shot classifier's notion of a targeted class, $t$, unlike conventional machine learning frameworks where one optimises single test samples to be misclassified. The attacker wants to find an adversarially perturbed set of support images such that misclassification of most query samples from class $t$ occurs, regardless of the class labels of the other samples. The attacker then replaces the defender's support set for class $t$ with the adversarial support. We assume that the attacker has white-box access to the few-shot model (i.e. weights, architecture, support set). The adversarial support samples would classify themselves as self-similar, that is, they classify among each other as belonging to the same class and visually appear as class $t$, but cause true query images of class $t$ to be classified as belonging to another class.
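The PGD update in Eq.~\eqref{pgd} can be sketched in a one-dimensional toy setting as follows; `grad_fn` is an assumed stand-in for $\nabla_x L(h(x), y_{truth})$ and all names are ours, not the attack's reference implementation:

```python
import random

def pgd_attack(x0, grad_fn, eps=12 / 255, eta=0.05, steps=10, rng=None):
    """Sketch of PGD for one scalar feature: random start inside the eps-ball,
    repeated signed-gradient steps, and projection back into the ball."""
    rng = rng or random.Random(0)
    x = x0 + rng.uniform(-eps, eps)              # noisy initialisation
    for _ in range(steps):
        g = grad_fn(x)
        sign = 1.0 if g > 0 else -1.0 if g < 0 else 0.0
        x = x + eta * sign                       # ascend the loss
        x = min(max(x, x0 - eps), x0 + eps)      # clip to the l_inf eps-ball
    return x
```

With a constantly positive gradient the candidate is pushed to, and held at, the upper boundary $x_0+\epsilon$ of the ball, mirroring the clipping in Eq.~\eqref{pgd}.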
We now clarify our definition of $x$ used in our attacks. The attacks are applied on a fixed support set candidate $(x_1^{t}, \ldots, x_{n_{shot}}^t)$ for the target class. In every iteration of the gradient-based optimisation, we sample all classes randomly except for the target class. Specifically, we sample the support sets $S^{-t}$ and query sets $Q^{-t}$ of all the other classes randomly, and we randomly sample the query samples of the target $Q^t$, as illustrated in the equations below. These sets are redrawn in every iteration of the optimisation. \begin{equation} \begin{split} \mathcal{C}^{-t} &\sim Uniform(\mathcal{C}~\backslash~\{t\}) \\ S^{-t}, Q^{-t} &\sim Uniform(x | c \in \mathcal{C}^{-t}), \ Q^t \sim Uniform(x | c = t)\\ x &=(x_1^{t}, \ldots, x_{n_{shot}}^t), \\ h(x)&= h(x_1^{t}, \ldots, x_{n_{shot}}^t, S^{-t}, Q^t, Q^{-t}), \end{split} \label{threatmodel} \end{equation} where $\mathcal{C}$ is the set of all classes, and $\mathcal{C}^{-t}$ the random set of classes used in the episode together with class $t$. The last line in \eqref{threatmodel} indicates that the few-shot classifier $h$ takes in a support set made up of $x$ and $S^{-t}$ and a query set made up of $Q^t$ and $Q^{-t}$; this is a simplified expression relating it to \eqref{pgd} and \eqref{cweqn}. The adversarial perturbation $\delta$ and the underlying gradients are computed only for each of the support samples $x$ of the target class. \section{Defence Methodology} \label{method} The defence is based on three components: sampling of auxiliary query and support sets, filtering the auxiliary support sets, and measuring the accuracy on the unfiltered auxiliary query set. We refer to a statistic, computed either as an average over all possible splits or for a single randomly drawn split of a support set into auxiliary sets with filtering of the auxiliary supports, as self-similarity. We elaborate further on auxiliary sets below.
\subsection{Auxiliary Sets} \label{auxset} Few-shot classifiers' support and query sets can be freely chosen, implying that any sample can be used as either a support or a query. Given a support set for class $c$, we randomly split it into auxiliary sets, where $S^c$ might be clean or adversarial: \begin{gather} \nonumber S^{c}_{aux} \cup Q^{c}_{aux} = S^c~and~S^{c}_{aux} \cap Q^{c}_{aux} = \emptyset,\\ s.t.~|S^{c}_{aux}| = n_{shot}-1~and~|Q^{c}_{aux}| = 1. \end{gather} The few-shot learner is now faced with a randomly drawn ($n_{shot}-1$)-shot problem, evaluating on one query sample per way, with the option to average over the $n_{shot}$ possible splits. \subsection{Detection of Adversarial Support Sets} \label{detectsec} Our detection mechanism flags a support set as adversarial when support samples within a class are highly different from each other, as shown in Figure~\ref{fig:detectionl1}. Given a support set of class $c$, $S^c$, we split it randomly into two auxiliary sets $S^{c}_{aux}$ and $Q^{c}_{aux}$. We filter $S^{c}_{aux}$ using a function $r(\cdot)$ and use the resultant samples as the new auxiliary support set to evaluate $Q^{c}_{aux}$. Following this, we obtain the logits of $Q^{c}_{aux}$ both before and after the filtering of the auxiliary support (i.e. using $S^{c}_{aux}$ and $r(S^{c}_{aux})$ respectively) and compute the $\ell_1$ norm difference between them. The adversarial score $U_{adv}$ is given in Eq.~\eqref{ssrate}, where $h$ is the few-shot classifier \begin{equation} \begin{split} U_{adv} =&\| h(r(S^c_{aux}),Q^c_{aux} ) - h(S^c_{aux},Q^c_{aux} ) \|_1 , \label{ssrate} \end{split} \end{equation} and $r$ is any filtering function which maps a support set onto its own space.
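The score of Eq.~\eqref{ssrate} can be sketched as follows for a single split; `classifier` and `filter_fn` are assumed placeholders standing in for the few-shot model $h$ and the filter $r$:

```python
def adversarial_score(support, classifier, filter_fn):
    """Sketch of the l1 gap between query logits before and after filtering
    the auxiliary support. `classifier(s_aux, q_aux)` is assumed to return a
    list of logits; here we take the last sample as the one-element Q_aux."""
    s_aux, q_aux = support[:-1], support[-1]        # one split of S^c
    before = classifier(s_aux, q_aux)               # logits with S_aux
    after = classifier(filter_fn(s_aux), q_aux)     # logits with r(S_aux)
    return sum(abs(a - b) for a, b in zip(after, before))

def is_adversarial(support, classifier, filter_fn, threshold):
    # Flag the support set when the score exceeds the chosen threshold T.
    return adversarial_score(support, classifier, filter_fn) > threshold
```

A clean support set should yield a small score (filtering barely changes the logits), while an adversarial one should produce a large gap.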
\changemarker{The filter $r$ is chosen such that it causes smaller impact to clean samples, while inducing larger dissimilarity between the auxiliary support and query sets under adversariality.} We observe already very high AUROC detection scores when computing $U_{adv}$ without averaging over $n_{shot}$ draws, on which we elaborate later. We flag a support set $S^c$ as adversarial if the adversarial score goes above a certain threshold\footnote{\changemarker{$T$ can be chosen by examining the $U_{adv}$ on clean support samples, according to a desired threshold based on False Positive Rates (e.g. @5\% FPR).}} (i.e.~$U_{adv}>T$). Different statistics can be used to compute $U_{adv}$, with Eq.~\eqref{ssrate} being one of many. Our main contribution lies rather in the proposal of using the self-similarity of a support set for such detection. \begin{figure}[htb] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip,width = 0.45\textwidth]{detectionfigurel1norm} \caption{Illustration of our detection mechanism based on self-similarity, by partitioning $S^c$ into two auxiliary sets $S^{c}_{aux}$ and $Q^{c}_{aux}$ and filtering. Best viewed in colour.} \label{fig:detectionl1} \end{figure} \subsection{Feature-space Preserving Autoencoder (FPA) for Auxiliary Support Set Filtering} \label{autoencoder} We explored using an autoencoder (AE) as a filtering function $r(\cdot)$ for the detection of adversarial samples in the support set, motivated by \cite{folz2020adversarial}. Initially, we trained a standard autoencoder to reconstruct the clean samples in the image space using the MSE loss. However, the standard autoencoder performed poorly in detecting adversarial supports since it did not learn to preserve the feature-space representation of image samples.
Therefore, we switched to a feature-space preserving autoencoder which additionally reconstructs the images in the feature space of the few-shot classifier, contrary to prior work where the AE was fine-tuned on the classification loss. We argue that using the classification loss for fine-tuning is inapplicable in few-shot learning due to having very few labelled samples. We minimise the following objective function for the feature-space preserving autoencoder: \begin{equation} \mathcal{L_{FPA}} =\frac{1}{N'}\sum^{N'}_{i=1} 0.01 \cdot \ \frac{\| x_i - \hat{x_i} \|_{2}^{2}}{dim(x_i)^{1/2}} \ + \frac{\| f_i - \hat{f_i} \|_{2}^{2}}{dim(f_i)^{1/2}} \label{aeeqn} \end{equation} where $x_i$ and $\hat{x_i}$ are the original and reconstructed image samples, respectively, and $f_i$ and $\hat{f_i}$ are the feature representations of the original and reconstructed image obtained from the few-shot model before any metric module (i.e. features from the CNN backbone). The second loss term ensures that the reconstructed image features are similar to those of the original image in the feature space of the few-shot models. We train the feature-space preserving autoencoder by fine-tuning the weights from the standard autoencoder. \subsection{Median Filtering (FeatS)} \label{feats} In our work, we also explored an alternative filtering function. We adopted a feature squeezing (FeatS) filter from \cite{xu2017feature}, where it was used in a conventional classifier. It essentially performs local spatial smoothing of images by having the centre pixel take the median value among its neighbours within a 2x2 sliding window. As their detection performance was reasonably high using this filter, we decided to use it as an alternative to FPA as an explorative step. However, their approach performs filtering on each individual test sample, whereas we use it on the auxiliary support set.
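A minimal sketch of this 2x2 median smoothing on a single-channel image might look as follows; the handling of edge pixels via truncated windows is our own assumption for the sketch:

```python
import statistics

def median_filter_2x2(img):
    """Replace each pixel by the median of the 2x2 window anchored at it.
    `img` is a list of lists of pixel values; border pixels use whatever part
    of the window lies inside the image (an assumed convention)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[r][c]
                      for r in range(i, min(i + 2, h))
                      for c in range(j, min(j + 2, w))]
            out[i][j] = statistics.median(window)   # smooths isolated spikes
    return out
```

A single perturbed pixel surrounded by consistent neighbours is pulled back towards the local median, which is the smoothing effect FeatS relies on.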
Since we would like to demonstrate our detection principle and the FPA already performs very well, we leave further filtering functions to future research. \section{Experiments and Results} \label{exps} \subsection{Experimental Settings} \textbf{Datasets}: MiniImagenet (MI) \cite{miniImageNet:vinyals2016matching} and CUB \cite{CUB:wah2011caltech} datasets were used in our experiments. We prepared them following prior benchmark splits \cite{LSTMoptimizer:ravi2016optimization,FeaturewiseTranslayer:tseng2020cross}, with 64/16/20 categories for the train/val/test sets of MI and 100/50/50 categories for the train/val/test sets of CUB. In our attack and detection evaluation, we chose an exemplary set of 10 and 25 classes from the test set for MI and CUB respectively, and we report the average metrics across them. This is purely for computational efficiency. For the RN model, we used image sizes of 224, while we used image sizes of 96 for the CAN model, across both datasets. We shrank the image size for the CAN model due to memory usage issues. \textbf{Attacks:} In our work, we used two different attack routines, one being PGD and the other a slight variant of the CW attack. This variant uses a standard Stochastic Gradient Descent optimiser instead of Adam, as the latter did not yield well-performing adversarial samples. We still used the objective function defined in Eq.~\eqref{cweqn} to optimise our CW adversarial samples, while using Eq.~\eqref{pgd} to perform a perturbation step without the clipping and sign functions. We name this attack CW-SGD. For our PGD attack, we limit the $\ell_\infty$ norm of the perturbation to $12/255$ and use a step size of $\eta=0.05$ (see Eq.~\eqref{pgd}). For our CW-SGD attack, clipping was not used due to the optimisation over $||\delta||_2$, while $\kappa=0.1$ and $\eta=50$.
We would like to stress that optimising for the best set of hyperparameters for generating attacks is not the main focus of our work as we are more interested in obtaining viable adversarial samples. In both settings, we generate 50 sets of adversarial perturbations for each of the 10 and 25 exemplary classes for MI and CUB respectively. We also attack all $n$ support samples for the targeted class $t$. \textbf{Autoencoder}: We used a ResNet-50 \cite{he2016deep} architecture for the autoencoders\footnote{Autoencoder architecture adapted from GitHub repository https://github.com/Alvinhech/resnet-autoencoder.}. For the MI dataset, we trained the standard autoencoder from scratch with a learning rate of 1e-4. For CUB, we trained the standard encoder initialised from ImageNet with a learning rate of 1e-4, and the standard decoder from scratch with a learning rate of 1e-3. For fine-tuning of the feature-space preserving autoencoder, we used a learning rate of 1e-4. We employed a decaying learning rate with a step size of 10 epochs and $\gamma=0.1$. We used the Adam \cite{kingma2014adam} optimiser with a weight decay of 1e-4. In both settings, we used the train split for training and the validation split for selecting our best performing set of autoencoder weights out of 150 epochs of training. It is implemented in PyTorch \cite{paszke2017automatic}. \subsection{Baseline Accuracy of Few-shot Classifiers} We evaluated our classifiers by taking the average and standard deviation accuracy over 2000 episodes across all models and datasets, reported in Table~\ref{clf:baseline}, to show that we were attacking reasonably performing few-shot classifiers. 
\begin{table}[htb] \centering \begin{tabular}{lcc} \hline & RN - 5 shot & CAN - 5 shot \\ \hline MI & 0.727 $\pm$ 0.0037 & 0.787 $\pm$ 0.0033 \\ \hline CUB & 0.842 $\pm$ 0.0032 & 0.890 $\pm$ 0.0026 \\ \hline \end{tabular} \caption{Baseline classification accuracy of the chosen models on the two datasets, under a 5-way 5-shot setting, computed across 2000 randomly sampled episodes. We report the mean with 95\% confidence intervals for the accuracy. } \label{clf:baseline} \end{table} \subsection{Attack Evaluation Metrics} We evaluated the success of our attacks by computing the Attack Success Rate (ASR), which measures the proportion of query samples misclassified due to the generated adversarial candidates. We only considered samples from the targeted class when measuring ASR: \begin{align} ASR=\mathbb{E}_{S^t,Q^t \sim D}\{P(\mathrm{argmax}_j(h_j(S^t+\delta^t, Q^t)) \neq t)\}. \end{align} The remaining $(K-1)$ classes were sampled randomly. In the evaluation of the detection performances, we used the Area Under the Receiver Operating Characteristic (AUROC) metric, since detection problems are binary (whether an adversarial sample is present or not), and true and false positives can be collected at various predefined threshold values ($T$). \subsection{Transferability Attack Results} We conducted transferability experiments to evaluate how well the attacker's generated adversarial perturbation generalises under two unique scenarios: \textit{i) transfer with fixed supports} and \textit{ii) transfer with new supports}. Setting (i) assumes that we have the same adversarial support set for class $t$, and we evaluated the ASR over newly drawn query sets. Setting (ii) relaxes this assumption, and we instead applied the generated adversarial perturbation, stored during the attack phase, to newly drawn support sets for class $t$, similarly evaluating over newly drawn query sets.
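For a fixed episode, the ASR above reduces to a simple count over the predicted labels of target-class queries; this is an assumed minimal form, since in the paper the expectation additionally runs over sampled episodes:

```python
def attack_success_rate(pred_labels, target_class):
    """Fraction of target-class query samples whose predicted label differs
    from the target class t, i.e. that were successfully misclassified."""
    assert pred_labels, "need at least one target-class query prediction"
    misses = sum(1 for p in pred_labels if p != target_class)
    return misses / len(pred_labels)
```

Averaging this quantity over many randomly drawn episodes approximates the expectation in the ASR definition.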
Contrary to transferability attacks in conventional setups, where a sample is generated on one model and evaluated on another, we performed transferability to new tasks, by randomly drawing sets of non-target classes together with their support sets, and new query sets, for the few-shot paradigm. As illustrated in Figure~\ref{fig:transfer}, the PGD generated adversarial samples showed higher transferability than the CW-SGD attack, across both models and under both scenarios. The exceptionally high transfer ASR we observed under scenario (i) implies that once the attacker has obtained an adversarial support set targeting a specific class, successful attacks can be carried out on new tasks for which the target class is present. This further reinforces the motivation to investigate defence methods for few-shot classifiers. Under scenario (ii), where the support set of the target class is also randomised, we see lower transfer ASR across the chosen classes. We would like to remind readers that the adversarial samples were optimised explicitly using setting (i) and not for (ii). Even though the ASR in scenario (ii) is lower than in (i), there still exist classes where undesirably high ASR occurs. \begin{figure} \centering \includegraphics[trim={5.2cm 52cm 9.3cm 7.6cm},clip,width=0.47\textwidth]{transfer_vanilla} \caption{Transferability attack results under scenarios (i) Fixed Supports and (ii) New Supports, against our two explored attacks on RN and CAN models across both datasets. Reported ASRs were averaged across the chosen exemplary classes and across the 50 generated sets of adversarial perturbations as bar charts. Standard deviation represented as the whiskers. ASR metric reported.} \label{fig:transfer} \end{figure} \subsection{Detection of Adversarial Supports} We compared our explored approaches against a simple filtering function for $r(\cdot)$, since prior detection methods for adversarial samples in few-shot classifiers do not exist.
We experimented with normally distributed noise as a filter, computing the channel-wise variance for drawing the noise to be added to the images. Being in the context of detection, we report AUROC scores to evaluate the effectiveness of our detection algorithm. \begin{figure}[htb] \centering \begin{subfigure}{0.47\textwidth} \includegraphics[trim={8.cm 53.2cm 9cm 5.5cm},clip,width=\textwidth]{RNauc} \caption{RN (5-shot) model.} \label{fig:rn} \end{subfigure} ~ \begin{subfigure}{0.47\textwidth} \includegraphics[trim={8.cm 53.2cm 9cm 5.5cm},clip,width=\textwidth]{CANauc} \caption{CAN (5-shot) model.} \label{fig:can} \end{subfigure} \caption{Area Under the Receiver Operator Characteristic (AUROC) scores for the various filter functions (normal distributed noise, median filtering from Feature Squeezing (FeatS), FPA) across our experiment settings, for RN and CAN models. Higher is better.} \label{fig:aucdetection} \end{figure} Our results in Figure~\ref{fig:aucdetection} show that FPA exhibits good detection performance. \changemarker{This indicates that the self-similarity of clean samples under filtering of the auxiliary support set is preserved to a degree, which allows discrimination against adversarial samples.} Though "FeatS" already exhibits good detection performance, our FPA approach consistently outperforms it across all settings. The "Noise" approach, however, does not detect well. We see highly varied detection performances across the different settings, which makes this approach highly unreliable\footnote{Cases with an AUROC score less than 0.5 indicate that more favourable detection effectiveness can be achieved by flipping the detection threshold (i.e. $U_{adv} > T$ to $U_{adv} < T$). However, it will not be experimentally consistent. This is also a clear indication of the lack of reliability of using "Noise" as a filtering function.}.
This result is hardly surprising since such methods require substantial manual fine-tuning of their noise parameters. This is not ideal, as newer attacks can be introduced in the future; moreover, in a few-shot framework, the optimal noise parameters might not be consistent between different task instances as the data might differ. However, our FPA filter approach remains robust in such scenarios, as it still achieved favourable AUROC scores. For clean samples, our FPA managed to reconstruct $S_{aux}^c$ such that the logits of $Q_{aux}^c$ before and after filtering remained consistent, even when the FPA did not encounter classes from the novel split during training. \section{Discussion} \label{discuss} \subsection{Study of Self-Similarity Computation Methods} In Section~\ref{detectsec}, we described one of the possible detection mechanisms based on logits differences. An alternative would be to use hard label predictions. Thus, we investigate the effect of a differing scheme as a justification for our choice of $U_{adv}$. For the case of hard label predictions, we perform the following: we compute the average accuracy of $Q^{c}_{aux}$ across the different permuted partitions of $S^{c}$, illustrated in Figure~\ref{fig:partition}. This results in the statistic $U_{adv}'$: \begin{equation} U_{adv}' = \frac{1}{n_{shot}} \sum^{n_{shot}}_{i=1} \mathbbm{1} [argmax(h(r(S^{c}_{i, aux}), Q^{c}_{i, aux})) \neq c] , \label{ssrate2} \end{equation} where $h$ is the few-shot classifier, $r$ is the filtering function, and $\mathbbm{1}$ is the indicator function. Similarly, we flag the support set as adversarial when $U_{adv}'>T$, i.e. when it exceeds a certain threshold. \begin{figure}[htb] \centering \includegraphics[trim={48.3cm 101.5cm 4.5cm 4.5cm},clip,width = 0.35\textwidth]{partitions} \caption{Illustration of how partitioning $S^c$ into two auxiliary sets $S^{c}_{aux}$ and $Q^{c}_{aux}$ is performed.
Best viewed in colour.} \label{fig:partition} \end{figure} \begin{table}[htb] \centering \resizebox{0.8\columnwidth}{!}{% \begin{tabular}{|c|c|cc|cc|} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{PGD} & \multicolumn{2}{c|}{CW-SGD} \\ \cline{3-6} & & $U_{adv}$ & $U_{adv}'$ & $U_{adv}$ & $U_{adv}'$ \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}RN\\ (5-shot)\end{tabular}} & MI & \textbf{0.999} & 0.451 & \textbf{0.979} & 0.723 \\ & CUB & \textbf{0.997} & 0.326 & \textbf{0.974} & 0.524 \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}CAN\\ (5-shot)\end{tabular}} & MI & \textbf{0.999} & 0.991 & \textbf{0.999} & 0.931 \\ & CUB & \textbf{0.999} & 0.998 & \textbf{0.988} & 0.821 \\ \hline \end{tabular} } \caption{Area Under the Receiver Operator Characteristic (AUROC) scores for the two detection mechanisms ($U_{adv}$ and $U_{adv}'$) using our FPA across our experiment settings. Higher is better.} \label{tab:aucdetectionDM} \end{table} Table~\ref{tab:aucdetectionDM} shows our AUROC scores comparing the two detection mechanisms, $U_{adv}$ and $U_{adv}'$, when using the FPA filtering function. It is evident that using logits scores to calculate differences as in $U_{adv}$ can be more informative than using hard label predictions to match class labels, as $U_{adv}$ consistently outperforms $U_{adv}'$, with the difference being bigger for RN. Differences in logits can be pronounced also in cases when the prediction label does not switch. \subsection{Varying Degrees of Regularisation of FPA} We observe lower AUROC scores for the RN model than the CAN model in Figure~\ref{fig:aucdetection}. As such, we question if this difference can be attributed to the FPA's ability to reconstruct clean samples effectively. Recalling from Eq.~\eqref{aeeqn}, we define an additional regularisation term to enforce stricter reconstruction requirements to also include class distribution reconstruction. 
More specifically, we minimise the following objective function: \begin{equation} \begin{split} \mathcal{L_{FPA'}} =\frac{1}{N'}\sum^{N'}_{i=1} 0.01 &\cdot \ \frac{\| x_i - \hat{x_i} \|_{2}^{2}}{dim(x_i)^{1/2}} \ + \frac{\| f_i - \hat{f_i} \|_{2}^{2}}{dim(f_i)^{1/2}}\\ &+ \frac{\| z_i - \hat{z_i} \|_{2}^{2}}{dim(z_i)^{1/2}} \ , \end{split} \label{aeeqn2} \end{equation} where $x_i$ and $\hat{x_i}$ are the original and reconstructed image samples, respectively, and $f_i$ and $\hat{f_i}$ are the feature representations of the original and reconstructed image obtained from the few-shot model before any metric module, and $z_i$ and $\hat{z_i}$ are the logits of the original and reconstructed image. We refer to this variant as $FPA'$. Similarly, we train $FPA'$ by fine-tuning the weights from the standard autoencoder. \begin{table}[htb] \centering \resizebox{0.6\columnwidth}{!}{% \begin{tabular}{|c|cc|cc|} \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{PGD} & \multicolumn{2}{c|}{CW-SGD} \\ \cline{2-5} & $FPA$ & $FPA'$ & $FPA$ & $FPA'$ \\ \hline MI & 0.999 & 0.999 & 0.979 & 0.950 \\ \hline CUB & 0.997 & 0.997 & 0.974 & 0.971 \\ \hline \end{tabular} } \caption{AUROC results comparing $FPA$ and $FPA'$ for the RN. We computed the results across the 10 and 25 exemplary classes for MI and CUB respectively, and 50 sets of adversarial perturbations. } \label{tab:scratedirtycompare} \end{table} Our results in Table~\ref{tab:scratedirtycompare} show that, surprisingly, imposing a higher degree of regularisation marginally lowers the detection performance of our algorithm rather than improving it. This implies that $FPA$ is already sufficient to induce a large enough divergence in classification behaviours in the presence of an adversarial support set. \section{Conclusion} \label{conclude} Adversarial attacks against the support sets can be damaging to few-shot classifiers.
To this end, we propose a novel adversarial attack detection algorithm for support sets in the few-shot framework, which, to the best of our knowledge, has not been explored before. Our algorithm works by using the concept of self-similarity among samples in the support set and filtering. We obtained high detection AUROC scores for the CAN and RN models, across the MI and CUB datasets, with the FPA and FeatS filtering functions, though FPA is superior. We have also found that using differences of the logits scores yields better detection performance, and that a higher degree of regularisation of the FPA does not guarantee better detection results. Future work can explore the efficacy of our detection for black-box attack settings and the detection performance with different filtering functions. \section{Acknowledgements} This research is supported by both ST Engineering Electronics and National Research Foundation, Singapore, under its Corporate Laboratory @ University Scheme (Programme Title: STEE Infosec-SUTD Corporate Laboratory). \bibliographystyle{named}
\section{Introduction} Many real-world optimization problems are formulated as $$\underset{x \in \Omega}{\text{min}}\,f(x)$$ where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a nonlinear function, and $\Omega=\{x\in \mathbb{R}^n:l_i \leq x_i \leq u_i, \; i=1,\ldots,n\}$. In many cases, the gradient information required to solve the optimization problem is either unavailable or too expensive to compute using standard approaches. Such problems fall under the purview of box constrained \emph{derivative free optimization} (DFO), and usually arise from complex simulations or physical experiments where the cost of computing the function value is fairly high. The main objective of algorithms for solving these problems is to obtain the optimum while being highly frugal with the number of function evaluations. Hence, gradient based approaches using finite differences or automatic differentiation, as well as metaheuristic methods, are often ineffective in this domain because of the high cost of function evaluations. Further, the presence of noise or nonsmoothness of the function poses additional limitations. Two fundamental classes of algorithms \cite{connBook,audet2017derivative} for solving derivative free optimization problems with guaranteed convergence to a local optimum or a Clarke-Jahn stationary point \cite{clarke1990optimization} (for nonsmooth cases) are the direct search and trust region methods. Direct-search methods \cite{connBook} explore the feasible search space by generating new points satisfying conditions such as well-poisedness. These conditions are needed to ensure proper exploration of the search space around the current iterate. These methods generally comprise three steps: a search step to explore the space for better solutions, a poll step to ensure convergence to some stationary point, and a subsequent parameter update.
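The three-step template above can be illustrated with a toy coordinate-search loop. This is an illustrative sketch only, with hypothetical names, an empty search step, and a simple (not sufficient) decrease test; it is not the BCSCG-DS method developed later in this paper:

```python
import numpy as np

def direct_search(f, x0, step=1.0, tol=1e-8, max_evals=10_000):
    """Toy direct search: empty search step, coordinate poll, halving update."""
    x = np.asarray(x0, dtype=float)
    fx, evals = f(x), 1
    n = x.size
    while step > tol and evals < max_evals:
        improved = False
        # Poll step: try +/- each coordinate direction scaled by `step`.
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            trial = x + step * d
            ftrial = f(trial)
            evals += 1
            if ftrial < fx:  # simple decrease test (toy version)
                x, fx, improved = trial, ftrial, True
                break
        # Parameter update: shrink the step after an unsuccessful poll.
        if not improved:
            step /= 2.0
    return x, fx

xmin, fmin = direct_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                           [5.0, 5.0])
```

On this smooth quadratic the loop walks to the minimizer $(1,-2)$ and then shrinks the step until the tolerance is met; on nonsmooth or noisy functions the convergence theory requires the denser direction sets and sufficient decrease conditions discussed below.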
Mesh adaptive direct search (MADS) \cite{audet_dennis_jr_mesh, abramson_audet_ea_orthomads} is a well-known direct search approach. MADS was designed to address derivative free optimization problems which are nonsmooth to a large extent and, sometimes, have hidden constraints. It generates a set of directions which, when rounded to a hypothetical mesh (integer lattice), are dense on the unit sphere. For handling constraints, the application of the extreme barrier approach \cite{audet_dennis_jr_mesh} has been suggested. \citet{vicente2012analysis} proposed a replacement for the integrality requirements of direct search methods like MADS and suggested a condition of sufficient decrease in the function value during the poll step. This line of work was further extended \cite{fasano2014linesearch} to handle constraints using a projected line search and a penalty approach. Since these algorithms do not rely on the function nature, they are quite robust and have the additional advantage of parallel implementation \cite{gray2006algorithm, Le09b,audet2008parallel}. However, because of their inability to explore and utilize curvature information, they often require a large number of function evaluations for convergence. Further, direct search methods \cite{custodio2007using,custodio2008using} using simplex gradients have been proposed, but their use was limited to reordering the poll directions instead of enhancing the search step. The other prominent method for derivative free optimization is the trust region approach, where curvature information about the function is utilized by building models over the explored points. In this approach, a chosen model is fitted across already evaluated points at each iteration to obtain a close approximation to the original function. The ability of the trust region approach to capture approximate curvature information is credited with providing fast solutions to derivative free optimization problems.
Several algorithms utilizing models based on quadratics \cite{powell2006newuoa,wang2016conjugate}, kriging \cite{jones1998efficient,gramacy_le_digabel_mesh} and radial basis functions \cite{wild_regis_ea_orbit,regis2013combining} have been proposed. Notably, these algorithms require fewer function evaluations than direct search methods when the underlying function is well behaved. The effectiveness of these algorithms for solving smooth, piecewise-smooth and noisy problems in derivative free optimization is outlined in \cite{more_wild_benchmarking}. These approaches, however, are sequential in nature and their performance is affected when the function lacks good structural properties. Recently, approaches \cite{custodio2010incorporating,conn_le_digabel_use,amaioua2018efficient} based on the blending of quadratic models with direct search methods have been reported, which utilize the curvature information of the function to find a better solution. However, when the function lacks a good structure, these methods revert to direct search. Many optimization problems in practice involve a large number of variables, and solving them under derivative free conditions poses significant challenges. Such problems usually have a large number of local optima and lack good structural properties. Consequently, finding the global optimum for these problems is generally unrealistic and non-trivial because of the substantial number of function evaluations needed. Also, these problems are often plagued with noise (stochastic or non-stochastic), which generally leads to high- and low-frequency oscillations \cite{more_wild_benchmarking}; for instance, when solving a differential equation with multiple parameters to a specified accuracy. Further, the presence of even simple constraints like bound ones introduces additional limitations and increases the overall complexity of the problem.
Thus, a good local optimization algorithm which can provide sufficiently good solutions with a small computational budget is desirable. Further, the required algorithm should have the ability to exploit certain structural properties such as convexity and smoothness, wherever possible, in order to enhance its speed and accuracy. In this work, we propose an elegant approach based on direct search for solving box constrained derivative free optimization problems. In the proposed approach, we suggest a new strategy for integrating quadratic models into the direct search framework, achieving high performance through the synergy of the curvature information retrieval ability of quadratic models with the search directions provided by the scaled conjugate gradient. The structure of the paper is as follows. In section \ref{nota}, we give an overview of a few definitions required for proving the convergence results. In section \ref{Background}, we provide background information about direct search, quadratic models, the simplex gradient and the scaled conjugate gradient. In section \ref{Theory}, we outline the proposed approach and discuss the convergence proofs. We report our numerical results in section \ref{Results}, followed by conclusions in section \ref{Conclusion}. \section{Notation and definitions} \label{nota} For vectors $u,v\in \mathbb{R}^n$, we define max$\{u,v\}=x$ and min$\{u,v\}=y$ where $x,y \in \mathbb{R}^n$ such that $x_i=\text{max}\{u_i,v_i\}$ and $y_i=\text{min}\{u_i,v_i\}$ for $i=1,2,\ldots,n$. Let $\Omega=\{x\in \mathbb{R}^n:l_i \leq x_i \leq u_i,\; i=1,\ldots,n\}$. We denote a ball with center $c \in \mathbb{R}^n$ and radius $r \in \mathbb{R}_+$ as $\mathcal{B}(c,r)$. We now present a few definitions and lemmas from \cite{fasano2014linesearch}, which are required for proving the convergence results.
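As a small aside, the componentwise max/min just defined is exactly the operation that clips a point back into the box $\Omega$, i.e. $\text{max}\{l,\text{min}\{u,x\}\}$; a one-line numpy illustration (the numerical values are hypothetical):

```python
import numpy as np

l = np.array([0.0, -1.0, 2.0])   # lower bounds
u = np.array([1.0,  1.0, 5.0])   # upper bounds
x = np.array([1.5, -2.0, 3.0])   # a trial point, partly outside the box

# Componentwise max{l, min{u, x}}: project x onto the box [l, u].
proj = np.maximum(l, np.minimum(u, x))
# proj == [1.0, -1.0, 3.0]
```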
\begin{defn} For a point $x \in \Omega$, the cone of feasible directions $D(x)$ is defined as: \begin{equation*} D(x)=\{d\in \mathbb{R}^n:\, d_i \geq 0 \text{ if } x_i=l_i,\; d_i \leq 0 \text{ if } x_i=u_i,\; d_i \in \mathbb{R} \text{ if } l_i<x_i<u_i, \; i=1,\ldots,n\} \end{equation*} \end{defn} \begin{lemma}\label{coneLemma} Let $\{x_k\} \subset \Omega$ be a sequence such that $\underset{k \rightarrow \infty}{\text{lim}}x_k = \bar{x}$. There exists $m \in \mathbb{N}$ such that for all $k>m$, \begin{equation*} D(\bar{x}) \subseteq D(x_k). \end{equation*} \end{lemma} \begin{defn} The Clarke-Jahn generalized directional derivative (\cite{clarke1990optimization}) of a Lipschitz function $f$ at $x \in \Omega$ along direction $d \in D(x)$ is defined as \begin{equation} f^{\circ}(x;d)=\underset{y\rightarrow x,\; t \downarrow 0}{\text{lim sup }}\frac{f(y+td)-f(y)}{t}, \end{equation} where $y\in \Omega$ and $y+td \in \Omega$. Also, $\hat{x} \in \Omega$ is a Clarke-Jahn stationary point for $f$ if \begin{equation} f^{\circ}(\hat{x};d) \geq 0 \qquad \forall d \in D(\hat{x}). \end{equation} \end{defn} \begin{defn} Let $D=\{d_k\}_K$ be a sequence of normalized directions, where $K$ is a set of indices, i.e. $K=\{0,1,\ldots\}$. Given any direction $d \in \mathbb{R}^n$ such that $\|d\|_2=1$, i.e. $d \in \mathcal{B}(0,1)$, and any $\epsilon >0$, if there exists a direction $d_k \in D$ such that $\|d-d_k\|_2 < \epsilon$, then $D$ is said to be dense \cite{fasano2014linesearch} in $\mathcal{B}(0,1)$. \end{defn} \section{Background}\label{Background} \subsection{Direct Search} A typical direct search iteration is composed of the following steps \begin{description} \item[Search Step:] In this step, trial points are generated using some user defined approach. Implementation of problem specific procedures in the search step can greatly enhance the performance of the algorithm.
The literature suggests the use of approaches like variable neighbourhood search \cite{audet_bechard_ea_nonsmooth}, quadratic models \cite{conn_le_digabel_use}, and Treed Gaussian Processes (TGP) \cite{gramacy_le_digabel_mesh} in the search step of MADS. If a new point with a function value better than the incumbent solution is found, the search step is declared successful. \item[Poll Step:] Unlike the search step, this step does not offer any flexibility, but ensures convergence to a local optimum. In this step, a finite number of new directions is constructed to create new trial points at each iteration. These directions are in fact vectors of some positive basis (different at each iteration), and the distance of the generated trial points is bounded above by a poll size parameter. The set of such directions generated over infinite iterations, when normalized, is required to be dense on the unit sphere. If any of the trial points along these directions has a function value better than the incumbent solution, then the poll step is termed successful. \item[Parameter Update:] If the poll step is successful, then the poll size parameter is either increased or kept constant, but in case of failure it is reduced by some positive factor $\tau$ where $\tau >1$. A systematic reduction of the poll size parameter by a positive factor during consecutive poll step failures and the choice of a different positive basis at each iteration for direction generation ensure convergence to some local optimum. \end{description} \subsection{Generation of directions} An important component in the poll step procedure of direct search is the generation of directions which, after normalization, are dense on the unit sphere. These directions are generated from a positive spanning set using approaches based on $n+1$ equiangular directions \cite{alberto_nogueira_ea_pattern} in $\mathbb{R}^n$ and $2n$ orthogonal directions \cite{abramson_audet_ea_orthomads} in $\mathbb{R}^n$.
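One standard way to obtain $n+1$ equiangular unit directions is from the vertices of a regular simplex centered at the origin, after which a Householder reflection rotates the whole set to contain any prescribed unit direction. The sketch below is illustrative only: the helper names are ours, and this particular simplex construction is an assumption, not necessarily the one used in the cited works:

```python
import numpy as np

def equiangular_directions(n):
    """n+1 unit directions in R^n with pairwise cosine -1/n,
    taken from the centroid of a regular simplex to its vertices."""
    # Regular simplex vertices: e_1, ..., e_n and ((1 - sqrt(n+1))/n) * ones.
    verts = np.vstack([np.eye(n), (1 - np.sqrt(n + 1)) / n * np.ones(n)])
    centered = verts - verts.mean(axis=0)
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)

def rotate_to_contain(D, u):
    """Householder-reflect the set so its first direction becomes the unit u."""
    w = D[0] - u
    if np.linalg.norm(w) < 1e-12:    # already aligned
        return D.copy()
    H = np.eye(len(u)) - 2.0 * np.outer(w, w) / (w @ w)
    return D @ H.T                   # apply H to every row

D = equiangular_directions(4)
u = np.array([1.0, 0.0, 0.0, 0.0])
Dr = rotate_to_contain(D, u)         # Dr[0] == u; angles are preserved
```

Since the Householder matrix is orthogonal, the rotated set keeps the pairwise cosines of $-1/n$, so it remains a minimal positive basis.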
The dense set, however, is generated by rotating these directions about a new direction at each iteration. This new direction can be generated using a Halton sequence \cite{halton1960efficiency}, a Sobol sequence \cite{sobol1976uniformly, fasano2014linesearch}, a simplex division approach \cite{edelsbrunner2000edgewise} or a random operation \cite{audet_dennis_jr_mesh}. The rotation can be carried out using the Householder transformation \cite{golub2012matrix}. We now give a brief overview of quadratic models, the simplex gradient and the scaled conjugate gradient, which will be needed for the design of our approach. \subsection{Quadratic Models}\label{quadModels} Consider a sample set $Y$ of $p+1$ interpolation points, i.e. $Y=\{y^0,y^1,\ldots,y^p\}$, and a polynomial basis $\phi$ of degree less than or equal to $2$ in $\mathbb{R}^n$. So $\phi$ can be expressed as the natural basis of monomials, i.e. \begin{align} \phi &= \{\phi_0(x),\phi_1(x),\ldots,\phi_q(x)\}\notag\\ &= \{1,x_1,x_2,\ldots, x_n, \frac{x^2_1}{2},\frac{x^2_2}{2},\ldots,\frac{x^2_n}{2}, x_1x_2, x_1x_3,\ldots,x_{n-1}x_n\}, \end{align} where $q+1=\frac{(n+1)(n+2)}{2}$ is the number of elements in the polynomial basis. A quadratic model $m(y)=\alpha^T\phi(y)$ built over the set $Y$ should satisfy the interpolation condition: \begin{equation}\label{quadInterpolateEqn} M(\phi,Y)\alpha=f(Y), \end{equation} where $f(Y)=(f(y^0),f(y^1),\ldots,f(y^p))$ is the vector of function values of the points in $Y$ and \begin{equation*} M(\phi,Y)= \begin{bmatrix} \phi_0(y^0) & \phi_1(y^0) & \ldots & \phi_q(y^0)\\ \phi_0(y^1) & \phi_1(y^1) & \ldots & \phi_q(y^1)\\ \vdots & \vdots & \ddots & \vdots \\ \phi_0(y^p) & \phi_1(y^p) & \ldots & \phi_q(y^p) \end{bmatrix}.
\end{equation*} Based on the number of sample points $p+1$ and $q$, there are three possible scenarios for solving the above system of linear equations (\ref{quadInterpolateEqn}): \begin{enumerate} \item When $p>q$, we have an overdetermined system, which can be solved in the least squares sense, i.e. \begin{equation}\label{quadLSeqn} \underset{\alpha \in \mathbb{R}^{q+1}}{\text{min}} ||M(\phi,Y)\alpha-f(Y)||^2, \end{equation} \item When $p=q$, we have a determined system, which can be solved directly. \item When $p<q$, we have an underdetermined system. \end{enumerate} In practice, the first two scenarios rarely occur for large dimension problems because of the limited function evaluation budget. Accordingly, there exist infinitely many solutions to the underdetermined system. A possible solution can be derived \cite{connBook,custodio2010incorporating} by using the Minimum Frobenius Norm (MFN). For the underdetermined case, it was shown \cite{connBook} that the errors between $f$ and $m$, and between $\nabla f$ and $\nabla m$, are upper bounded by terms dependent on the norm of the Hessian of the model $m$. Hence, a model $m$ with the least Hessian norm is desirable. The elements of the Hessian, which are essentially the quadratic terms of $\alpha$, can be minimized by building models based on the MFN. So, splitting $\alpha$ into linear terms $\alpha_L \in \mathbb{R}^{n+1}$ and quadratic terms $\alpha_Q \in \mathbb{R}^{n_Q}$ where $n_Q=\frac{n(n+1)}{2}$, we have $m(y)=\alpha_L^T \phi_L+\alpha_Q ^T \phi_Q$ where $\phi_L=\{1,x_1,\ldots,x_n\}$ and $\phi_Q=\{\frac{x^2_1}{2},\frac{x^2_2}{2},\ldots,\frac{x^2_n}{2}, x_1x_2, x_1x_3,\ldots,x_{n-1}x_n\}$. The solution $\alpha$ is then obtained by solving the optimization problem: \begin{align}\label{MFN} &\underset{\alpha_Q\in \mathbb{R}^{n_Q}}{\text{min }} \frac{1}{2}||\alpha_Q||^2\notag\\ & \text{s.t. } M(\phi_L,Y)\alpha_L+M(\phi_Q,Y)\alpha_Q=f(Y).
\end{align} The quadratic minimization problem reduces to solving a linear system \begin{equation} F(\phi,Y)\begin{bmatrix} \mu \\ \alpha_L \end{bmatrix} = \begin{bmatrix} f(Y)\\ 0 \end{bmatrix}, \end{equation} where \begin{equation*} F(\phi,Y)=\begin{bmatrix} M(\phi_Q,Y)M(\phi_Q,Y)^T & M(\phi_L,Y) \\ M(\phi_L,Y)^T & 0 \end{bmatrix}. \end{equation*} The linear term $\alpha_L$ is obtained directly by solving the above linear system, while the quadratic term $\alpha_Q$ is obtained by computing $\alpha_Q=M(\phi_Q,Y)^T\mu$. \subsection{Simplex Gradient} A sample set $Y=\{y^1,\ldots,y^q\}$ of $q$ points in $\mathbb{R}^n$, all lying within a ball $\mathcal{B}(y^0,\Delta)$ of radius $\Delta \in \mathbb{R}_+$ with center $y^0 \in \mathbb{R}^n$, is said to be poised \cite{custodio2007using} if rank$(Y)=\text{min}\{n,q\}$, i.e. $Y$ has full column rank. The simplex gradient \cite{custodio2007using,Bortz98thesimplex} $\nabla_sf(y^0)$ of $f$ at $y^0$ is the solution to the linear system \begin{equation}\label{simplexGradEqn} Y^T\nabla_sf=b \end{equation} where $b=[f(y^1)-f(y^0),\ldots,f(y^q)-f(y^0)]^T$. If $q>n$, then the simplex gradient is obtained by solving a least squares problem for the above system. The error between the simplex gradient and the actual gradient is upper bounded in terms of $\Delta$ \cite{custodio2007using} such that \begin{equation} \|\nabla f(y^0)- \nabla_s f(y^0)\| \leq \eta_{eg} \Delta, \end{equation} where $\eta_{eg}$ is some constant dependent upon $Y$ and $\Delta$.\\ Simplex gradients are, in a broad sense, linear interpolation models. \subsection{Scaled Conjugate Gradient} The conjugate gradient algorithm \cite{hager_zhang_survey} is a prominent approach for solving nonlinear optimization problems. For our work, we use a particular variant \cite{andrei2008unconstrained} of the conjugate gradient approach.
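The simplex gradient of the previous subsection amounts to a single (least-squares) linear solve. A minimal sketch follows, where the sample set, the test function and the convention of stacking the displacements $y^i-y^0$ as rows are our illustrative assumptions:

```python
import numpy as np

def simplex_gradient(f, y0, Y):
    """Solve the system S g = b in the least-squares sense, where the rows
    of S are the displacements y^i - y^0 (regression form when q > n)."""
    S = np.asarray(Y, dtype=float) - np.asarray(y0, dtype=float)  # q x n
    b = np.array([f(y) for y in Y]) - f(y0)
    g, *_ = np.linalg.lstsq(S, b, rcond=None)
    return g

# On a linear function the simplex gradient recovers the gradient exactly.
f = lambda x: 3.0 * x[0] - 2.0 * x[1] + 1.0
y0 = np.array([0.5, 0.5])
Y = y0 + 0.1 * np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 1]])
g = simplex_gradient(f, y0, Y)       # ~ [3.0, -2.0]
```

For nonlinear $f$, the error bound above says the result degrades gracefully with the sampling radius $\Delta$.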
For minimizing a nonlinear function $f(x)$ with a known initial point $x_0$, this algorithm generates a sequence of points $x_i$ such that: \begin{align}\label{scg} x_{i+1}&=x_i+\alpha_i d_i \notag\\ d_{i+1}&=-\theta_{i+1}g_{i+1}+\beta_i s_i \notag\\ d_0 &=-g_0 \end{align} where $\alpha_i$ is a positive step size obtained by line search along the direction $d_i \in \mathbb{R}^n$, $g_i=\nabla f(x_i)$ is the gradient of $f$ at $x_i$, and $s_i=x_{i+1}-x_{i}$. $\theta_{i+1}$ is a matrix or scalar parameter and $\beta_i$ is a scalar. A value for $\beta_i$ \cite{birgin2001spectral} in terms of $\theta_{i+1}, \, s_i$ and $y_i=g_{i+1}-g_i$ is: \begin{equation}\label{betaCG} \beta_i=\frac{(\theta_{i+1}y_i-s_i)^Tg_{i+1}}{y_i^Ts_i} \end{equation} \section{Theory}\label{Theory} We now state our method for solving box constrained derivative free optimization problems, which is composed of novel poll step and search step methods. \subsection{Poll Step} Our poll step comprises the following steps: \begin{description} \item [Handling Box Constraints:] We use the extreme barrier approach \cite{audet_dennis_jr_mesh} to handle the box constraints. In this approach, if an infeasible point is encountered, its function value is set to infinity. However, for practical purposes, we assign a large value to $f(x)$, e.g. $1.79\times10^{308}$. \item [Equiangular Directions:] We generate a set $Y_0$ of $n+1$ equiangular directions about the origin. These directions are later translated to the current iterate $x_k$ to create a new set $Y_k$. \item [Sufficient Decrease and Parameter Update:] New trial points are generated along the equiangular directions, translated to the incumbent solution $x_k$ and scaled appropriately to the direct search radius $r_k$, where $r_k$ is upper bounded by the poll size parameter $\Delta^p_k$, i.e. $r_k \leq \Delta^p_k$. The poll step is considered successful if there is a sufficient decrease in the function value at one of these points compared to $f(x_k)$, i.e.
\begin{equation}\label{suffdecrease} f(x_{k+1}) < f(x_k)-\gamma(r_k) \end{equation} where the function $\gamma:\mathbb{R}_+ \rightarrow \mathbb{R}_+$ is defined as $\gamma(r)=\rho r^2$ and $r_k$ is the direct search radius at iteration $k$. $\rho$ is a positive scalar parameter such that $\rho <1$. In the event of failure of the poll step, $\Delta^p_k$ is scaled down to $\frac{\Delta^p_k}{\tau_l}$ where $\tau_l>1$. Consequently, $r_k$ is also updated as $r_{k+1}=\frac{r_k}{\tau_l}$. \item [Rotation of directions:] If the poll step is unsuccessful, a new direction $u_k$ is generated using the Halton sequence. A new set $Y_{k+1}$ of $n+1$ equiangular direction vectors is generated by rotating the directions in $Y_0$ using the Householder transformation so as to contain the direction $u_k$. The rotation of a minimal positive basis using Householder transformations was first suggested in \citet{alberto_nogueira_ea_pattern}. This is accomplished by a matrix vector multiplication, i.e. $d_{k+1}=H_{k}d_k$ where $d_k \in Y_k$, $d_{k+1} \in Y_{k+1}$ and the matrix $H_k \in \mathbb{R}^{n\times n}$ is given by: \begin{equation}\label{householder} H_k=I-2\frac{(d_k-u_k)(d_k-u_k)^T}{||d_k-u_k||^2}. \end{equation} These directions are then scaled to length $r_{k+1}$ and new trial points are generated along them. Subsequently, function values are evaluated at these points and checked against the sufficient decrease condition in Eq.(\ref{suffdecrease}). \end{description} \begin{algorithm}[t] \caption{Poll Step}\label{pollStep} \begin{algorithmic}[1] \STATE \textbf{INPUT} \STATE The incumbent solution $x_k$, direct search radius $r_k$, $0< \rho <1$, $\tau_l>1$, poll size parameter $\Delta^p_k$ and threshold $\epsilon >0$. \STATE A generated set $Y_0$ of $n+1$ equiangular directions of unit length about the origin. \WHILE {$r_k > \epsilon$} \STATE Translate the directions in $Y_0$ by $x_k$ to create $Y_k$, i.e. set $y_i \in Y_k$ as $y_i=y_i+x_k$ for $i=1,\ldots,n+1$.
\STATE Compute the set of $n+1$ points $A_k=\{x_1,\ldots,x_{n+1}\}$ along these directions, and scale them such that they lie on the ball $\mathcal{B}(x_k,r_k)$, where $r_k \leq \Delta^p_k$. \STATE Evaluate function values at these points. Set $f(x)=\infty$ if $x\notin \Omega$. \IF {Sufficient decrease condition (\ref{suffdecrease}) is satisfied} \STATE \textbf{RETURN:} Best point $\underset{x_i \in A_k}{\text{argmin}}\{f(x_i)\}$, poll step radius $r_k$ and $\Delta^p_k$. \ELSE \STATE Using the Halton sequence, generate a new direction $u_k$. \STATE Using the Householder transformation (\ref{householder}), rotate the equiangular directions of set $Y_0$ about $u_k$. \STATE Let $Y_k$ be the set of these directions. \STATE $k \gets k+1$ \STATE Update $\Delta^p_{k+1}=\frac{\Delta^p_k}{\tau_l}$ and $r_{k+1}=\frac{r_k}{\tau_l}$ \ENDIF \ENDWHILE \IF {$r_k < \epsilon$} \STATE Stationary point achieved. \STATE Terminate. \ENDIF \end{algorithmic} \end{algorithm} Algorithm \ref{pollStep} terminates whenever the sufficient decrease condition is satisfied, returning the best point and the direct search radius. \subsection{Search Step} Our approach consists of the following important steps: \begin{description} \item [Quadratic Model Step:] We build a quadratic model around the trial points and the incumbent solution $x_k$ generated in the poll step. This includes all points generated during consecutive failures of the poll step about $x_k$. Least squares or MFN models are chosen based on the number of points available. If the Hessian of the quadratic model is positive definite, we compute its unique minimizer $y_k$ by solving a linear equation. If this minimizer is infeasible, i.e. outside the bound constraints, we try to obtain a new solution by carrying out a line search along $y_k-x_k$ such that the solution lies within the bound constraints. If the Hessian of the model is not positive definite, no attempt is made to compute its minimizer.
\item [Simplex Gradient:] We compute the simplex gradient $\nabla_sf_k$ around $x_k$. We include all available feasible points within a ball of radius $\Delta^p_k$ and center $x_k$. If the number of points is greater than $n+1$, the gradient is computed in the regression sense, i.e. by solving a least squares problem. We evaluate the function value at a point, say $x_g$, in the direction $-\nabla_sf_k$ at a distance $\Delta^p_k$ from $x_k$. \item [Vicinity Search:] We sort all the points from the last poll step in ascending order of their function values. We choose the first $l$ points from the list. We then take the average of each of these points with the point having the best function value. We generate $l$ points along the directions from $x_k$ to these averages such that their distance from $x_k$ is $r_k$, and evaluate function values at them. \item [Scaled Conjugate Gradient Step:] Let $x_b$ be the point with the best function value at iteration $k$. Then compute the positive semidefinite matrix $\theta=-\frac{(x_b-x_k)(x_b-x_k)^T}{(x_b-x_k)^T\nabla_sf_k}$. We now compute $\beta$ from equation \ref{betaCG} and the corresponding new search direction for the conjugate gradient. We then carry out a line search along the new direction inside the feasible region $\Omega$ to obtain a point with a better function value. \end{description} \subsection{Final Algorithm} As the algorithm is constituted of the Scaled Conjugate Gradient (SCG) and direct search (DS) for solving box constrained (BC) DFO, we name our algorithm BCSCG-DS. We now summarize our approach in Algorithm \ref{cscgmadsAlgo}. \begin{algorithm} \caption{BCSCG-DS algorithm}\label{cscgmadsAlgo} \begin{algorithmic}[1] \STATE \textbf{Initialization}: \STATE Set $x^0 \in \Omega$ as the center or starting point. \STATE $0< \rho <1$, $\tau_p>1$, $0<\epsilon_2<1$, $\tau_u>1$ and threshold $\epsilon >0$. \STATE Initial poll size parameter $\Delta_0^p >0$ and direct search radius $0<r_0 \leq \Delta_0^p$.
\STATE Set $l$, the number of points to be considered for the vicinity search step. \STATE Generate a set $Y_0$ of $n+1$ equiangular directions of unit length about the origin. \STATE \textbf{Iteration $k$} \WHILE {Termination criterion of the poll step is not met} \STATE Do \textbf{Poll Step} using Algorithm \ref{pollStep}. \STATE Begin \textbf{Search Step} \STATE \textbf{Quadratic Model} \IF {$x_i \in \Omega$ for all $x_i \in A_k$} \STATE Fit a quadratic model $m$ over all feasible points generated during the current poll step. \STATE If the Hessian of $m$ is positive definite, find its minimizer $y_k$. \STATE If $y_k\notin \Omega$, do a line search along $y_k-x_k$ to find a good feasible solution. \STATE Update the best point $x_b$. \ENDIF \STATE \textbf{Vicinity Search} \IF {$x_i \in \Omega$ for all $x_i \in A_k$} \STATE Collect all previously evaluated feasible points within the ball $\mathcal{B}(x_k,r_k(1+\epsilon_2))$. \STATE Compute the simplex gradient ``$\nabla_sf$'' at $x_k$ from these points using the linear system (\ref{simplexGradEqn}). \STATE Compute the point $x_g$ along $-\nabla_sf$ such that $\|x_g-x_k\|_2=r_k$ and add it to the set $A_k$. \ELSE \STATE Set $x_g=x_b$. \ENDIF \STATE Sort the points in $A_k$ in ascending order of their function values. \STATE Select the first $l$ points. Let their set be $V$. \STATE Compute the averages of the best point $x_b$ with the points in $V$ and store them in $\bar{V}$. \STATE Evaluate function values for the points in $\bar{V}$ and update the best point $x_b$. \STATE \textbf{Scaled Conjugate Gradient} \STATE Compute the matrix $\theta_k=-\frac{(x_b-x_k)(x_b-x_k)^T}{(x_b-x_k)^T\nabla_sf_k}$. \STATE Evaluate a new direction using equations \eqref{scg} and \eqref{betaCG}. \IF {The new direction is a descent direction} \STATE Do a line search along it within the box constraints. \ELSE \STATE Do a line search along $x_b-x_k$ within the box constraints. \ENDIF \IF {Steplength $>\Delta^p_k$} \STATE Update $\Delta^p_{k+1}=$Steplength.
\ENDIF \IF {Steplength $>2\Delta^p_k$} \STATE Update $\Delta^p_{k+1}=\tau_u\Delta^p_k$. \ENDIF \STATE Update the best point $x_b$ and set the center to the best point, i.e. $x_k \gets x_b$ \STATE $k \gets k+1$ and go to the poll step \ENDWHILE \end{algorithmic} \end{algorithm} For practical purposes, the termination criterion for Algorithm \ref{pollStep} is generally set to a budget on the number of function evaluations. We will now give some results required to show the convergence of the proposed algorithm to some Clarke-Jahn stationary point. \begin{lemma} \label{finiteSearchStep} The number of function evaluations performed during the search step of Algorithm \ref{cscgmadsAlgo} is finite. \end{lemma} \begin{proof} The search step is activated when the poll step is successful. So there exist $\alpha_k \in \mathbb{R}_+$ and $d_k \in \mathbb{R}^n$ such that $f(x_k+\alpha_k d_k) < f(x_k)-\rho \alpha_k^2$, where $x_k$ is the current incumbent solution. The next step requires the use of a quadratic model over the points generated during the latest poll step, and it requires at most one new function evaluation. By the construction of the algorithm, the vicinity search also requires a finite number of function evaluations. The next step involves the computation of the simplex gradient and one function evaluation along the direction opposite to it. In case the negative of the gradient is a descent direction, the scaled conjugate gradient method is invoked, which uses a line search. In case it is not a descent direction, the algorithm follows a greedy approach and does a line search along some descent direction, which necessarily exists since the last poll step was successful. The line search is done using Brent's algorithm, which, by construction, is restricted to a finite number of iterations or function evaluations. Thus all intermediate steps involved in the search step require a finite number of function evaluations.
\end{proof} We define $[x+\alpha d]_{[l,u]}=\text{max}\{l,\text{min}\{u,(x+\alpha d)\}\}$ where $x \in \mathbb{R}^n$, $d\in \mathbb{R}^n$, $\alpha \in \mathbb{R}_+$ and $l$ and $u$ represent the lower and upper bounds on $x$, i.e. $l_i \leq x_i \leq u_i \; \text{for }i=1,2,\ldots,n$. Let $t\in \mathbb{N}\cup \{0\}$ and $\tau_\beta \in \mathbb{R}$ such that $\tau_\beta \geq 1$. Also, let $\alpha$ be a steplength such that $x+\alpha d \in \Omega$ and $f(x+\alpha d) \leq f(x)-\rho \alpha^2$. We define the projection parameter $\beta \in \mathbb{R}$ as: \begin{equation} \beta= \tau_\beta^{t_\beta} \alpha \text{ where } t_\beta=\{\underset{t \in \mathbb{N}}{\text{arg max }}f([x+\tau_\beta^t\alpha d]_{[l,u]}) \; : \; f([x+\tau_\beta^t\alpha d]_{[l,u]}) \leq f(x)-\rho (\tau_\beta^t \alpha)^2\},\\ \end{equation} and the extended projection parameter $\eta \in \mathbb{R}$ as: \begin{equation} \eta= \tau_\beta^{t_\eta} \alpha \text{ where } t_\eta=\{\underset{t \in \mathbb{N}}{\text{arg min }}f([x+\tau_\beta^t\alpha d]_{[l,u]}) \; : \; f([x+\tau_\beta^t\alpha d]_{[l,u]}) > f(x)-\rho (\tau_\beta^t \alpha)^2\}.\\ \end{equation} Clearly, $\beta \geq 0$ and $\eta \geq 0$ since $\alpha > 0$, and they are related as \begin{equation} \label{betaEtaRelation} \eta=\tau_\beta \beta. \end{equation} \begin{lemma} \label{betaEtaFiniteness} At any iteration $k$ of Algorithm \ref{cscgmadsAlgo}, the projection parameter $\beta_k$ and the extended projection parameter $\eta_k$ are finite. \end{lemma} \begin{proof} Assume that $\beta_k$, and thus $t_\beta$, is not finite. Since $\Omega$ is compact, by the Weierstrass theorem, there exists $\bar{x}\in \Omega$ such that $\bar{x}=\underset{x \in \Omega}{\text{arg min }}f(x)$ and $f(\bar{x}) > -\infty$.
Since $\rho > 0$ and $\tau_\beta \geq 1$, there exists $\bar{t}\in \mathbb{N}$ such that \begin{equation*} f(x_k)-\rho (\tau_\beta^{\bar{t}}\alpha_k)^2 < f(\bar{x}), \end{equation*} or \begin{equation*} f([x_k+\tau_\beta^{\bar{t}}\alpha_k d_k]_{[l,u]})-\rho (\tau_\beta^{\bar{t}}\alpha_k)^2 < f(\bar{x}), \end{equation*} which is a contradiction, since $\bar{x}$ is the minimum. So, $t_{\beta}$ and hence $\beta_k$ are finite. Similarly, from equation \ref{betaEtaRelation}, $\eta_k$ is finite. \end{proof} \begin{lemma} \label{PollStepRadiusConverge} The sequence of poll size parameters $\{\Delta_k^p\}$ generated by Algorithm \ref{cscgmadsAlgo} converges to $0$ in the limit, i.e. $\underset{k \rightarrow \infty}{\text{lim }}\Delta_k^p=0$. Also, \begin{align} \underset{k \rightarrow \infty}{\text{lim }}\beta_k &=0 \\ \underset{k \rightarrow \infty}{\text{lim }}\eta_k &=0. \end{align} where $\beta_k$ and $\eta_k$ are the projection parameter and the extended projection parameter at iteration $k$. \end{lemma} \begin{proof} Let $P_1$ be the sequence of iterations of Algorithm \ref{cscgmadsAlgo} during which the poll size parameter is increased, i.e. $\Delta^p_{k+1}=\tau_u \Delta^p_{k}$ where $\tau_u > 1$. Similarly, let $P_2$ be the sequence of iterations during which the poll size parameter is decreased, i.e. $\Delta^p_{k+1}=\frac{\Delta^p_{k}}{\tau_l}$ where $\tau_l >1$. Now, by construction of the algorithm, $P_1 \cup P_2$ cannot be a finite sequence. From lemma \ref{finiteSearchStep}, all intermediate directions and step lengths during the search step can be replaced by a single direction and step length. The following two situations arise: \begin{description} \item[Case-1: $P_1$ is infinite:] From the sufficient decrease condition of Algorithm \ref{cscgmadsAlgo}, \begin{equation*} f(x_{k+1}) = f(x_k+\alpha_k d_k) \leq f(x_k)-\rho \alpha^2_k.
\end{equation*} Since $\Omega$ is compact and $f$ is continuous, by the Weierstrass theorem $f$ is bounded below on $\Omega$; as the sequence $\{f(x_k)\}$ is nonincreasing, we have \begin{equation*} \underset{k\rightarrow \infty}{\text{lim }}f(x_k)=\bar{f}. \end{equation*} Thus, $\underset{k \rightarrow \infty}{\text{lim }}\rho \alpha^2_k=0$, i.e. $\underset{k \rightarrow \infty,\,k\in P_1}{\text{lim }} \alpha_k=0$. Since $\Delta_k^p$ is always upper bounded by the steplength $\alpha_k$, we have $\underset{k \rightarrow \infty,\,k\in P_1}{\text{lim }}\Delta_k^p=0$. Also, since $\beta_k=\tau_\beta^t \alpha_k$ for some $t\in \mathbb{N}$ and $\eta_k=\tau_\beta^{t+1} \alpha_k$ (from equation (\ref{betaEtaRelation})), \begin{equation*} \underset{k \rightarrow \infty,\,k\in P_1}{\text{lim }}\beta_k=0 \qquad\text{ and }\qquad \underset{k \rightarrow \infty,\,k\in P_1}{\text{lim }}\eta_k=0. \end{equation*} \item[Case-2: $P_1$ is finite:] Clearly, $P_2$ is an infinite sequence. Let $|P_1|=s$ where $s \in \mathbb{N}\cup\{0\}$. So, for $k > s$, \begin{equation*} \alpha_{k+1}=\frac{\alpha_s}{\tau_l^{(k-s)}}. \end{equation*} Since $\tau_l > 1$, we have $\underset{k \rightarrow \infty,\,k\in P_2}{\text{lim }}\alpha_k=0$ and hence $\underset{k \rightarrow \infty,\,k\in P_2}{\text{lim }}\Delta_k^p=0$. Also, $\underset{k \rightarrow \infty,\,k\in P_2}{\text{lim }}\beta_k=0$ and $\underset{k \rightarrow \infty,\,k\in P_2}{\text{lim }}\eta_k=0$. \end{description} \end{proof} We now state a lemma from \cite{fasano2014linesearch} (see Lemma 2.6 therein). \begin{lemma} \label{limitPointDirectionSL} Let $\{x_k\}_P$, $\{d_k\}_P$ and $\{\alpha_k\}_P$ be sequences with index set $P \subseteq \mathbb{N}$ such that $x_k \in \Omega$, $d_k \in D(x_k)$, $\alpha_k \in \mathbb{R}_+$ and $x_{k+1}=[x_k+\alpha_k d_k]_{[l,u]}$.
Further, let us assume that there exists a subset $\bar{P} \subseteq P$ such that \begin{align} \underset{k\rightarrow \infty,k\in \bar{P}}{\text{lim }} x_k &=\bar{x},\\ \underset{k\rightarrow \infty,k\in \bar{P}}{\text{lim }} d_k &=\bar{d},\\ \underset{k\rightarrow \infty,k\in \bar{P}}{\text{lim }} \alpha_k &=0, \end{align} where $\bar{x} \in \Omega$ and $\bar{d} \in D(\bar{x})$, $\bar{d} \neq 0$. Then, \begin{enumerate} \item there exists $m \in \mathbb{N}$ such that for all $k > m$, \begin{equation*} [x_k+\alpha_k d_k]_{[l,u]} \neq x_k; \end{equation*} \item for $w_k=\frac{[x_k+\alpha_k d_k]_{[l,u]}-x_k}{\alpha_k}$, we have \begin{equation*} \underset{k\rightarrow \infty,k\in \bar{P}}{\text{lim }} w_k = \bar{d}. \end{equation*} \end{enumerate} \end{lemma} We now present the main result of this work. Note that the approach presented here is similar to the one in \cite{fasano2014linesearch} (see Proposition 2.7 therein). \begin{proposition} Let $\{x_k\}$ be the sequence generated by Algorithm \ref{cscgmadsAlgo}, and assume that there exists $\bar{x}\in \mathbb{R}^n$ such that $\underset{k \rightarrow \infty,\, k \in P}{\text{lim }}x_k=\bar{x}$, where $P$ is some set of indices. Then $\bar{x}$ is a Clarke-Jahn stationary point provided that $\{d_k\}_P$, the sequence of normalized directions generated by the algorithm, is dense in the unit sphere. \end{proposition} \begin{proof} The Clarke-Jahn stationarity condition at $\bar{x}$ states that for every normalized direction $d\in D(\bar{x})$, $f^\circ(\bar{x};d) \geq 0$, i.e. \begin{equation*} \underset{\underset{t \downarrow 0,\,y+td \in \Omega}{y \rightarrow \bar{x},\, y \in \Omega}}{\text{lim sup }} \frac{f(y+td)-f(y)}{t} \geq 0.
\end{equation*} We prove the result by contradiction, assuming the existence of a direction $\bar{d} \in D(\bar{x}) \cap \mathcal{B}(0,1)$ such that \begin{equation} \label{ClarkeJahnIneq} \underset{\underset{t \downarrow 0,\,y+t\bar{d} \in \Omega}{y \rightarrow \bar{x},\, y \in \Omega}}{\text{lim sup }} \frac{f(y+t\bar{d})-f(y)}{t} < 0. \end{equation} Let $\eta_k$ be the extended projection parameter at iteration $k$, and let $w_k=\frac{[x_k+\eta_k d_k]_{[l,u]}-x_k}{\eta_k}$. By Lemma \ref{PollStepRadiusConverge}, we have $\underset{k\rightarrow \infty}{\text{lim }}\eta_k=0$. Since $\{d_k\}_P$ is dense in the unit sphere, there exists a subset of indices $\bar{P} \subseteq P$ such that $\underset{k\rightarrow \infty,\, k \in \bar{P}}{\text{lim }}d_k=\bar{d}$. Additionally, $\underset{k\rightarrow \infty,\, k \in \bar{P}}{\text{lim }}x_k=\bar{x}$ and $\underset{k\rightarrow \infty,\, k \in \bar{P}}{\text{lim }}\eta_k=0$. Hence all the assumptions of Lemma \ref{limitPointDirectionSL} are satisfied (with $\eta_k$ in place of $\alpha_k$), so there exists $m \in \mathbb{N}$ such that for all $k > m$ with $k \in \bar{P}$, $w_k \neq 0$ and $\underset{k\rightarrow \infty,k\in \bar{P}}{\text{lim }} w_k = \bar{d}$. By the definition of the extended projection parameter, $f(x_k+\eta_k w_k) = f([x_k+\eta_k d_k]_{[l,u]}) > f(x_k)-\rho \eta_k^2$, and since $\eta_k > 0$, \begin{equation} \label{rhoEta} \frac{f(x_k+\eta_k w_k)-f(x_k)}{\eta_k} > -\rho \eta_k, \end{equation} for all $k > m$. Let $L$ be the Lipschitz constant of $f$.
Now, \begin{align*} & \underset{\underset{t \downarrow 0,\,x_k+t\bar{d} \in \Omega}{x_k \rightarrow \bar{x},\, x_k \in \Omega}}{\text{lim sup }} \frac{f(x_k+t\bar{d})-f(x_k)}{t} \\ & \geq \underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim sup }} \frac{f(x_k+\eta_k \bar{d})-f(x_k)}{\eta_k}\\ & = \underset{k\rightarrow \infty,\,k\in \bar{P}}{\text{lim sup }}\frac{f(x_k+\eta_k \bar{d})+f(x_k+\eta_k w_k)-f(x_k+\eta_k w_k)-f(x_k)}{\eta_k}\\ & \geq \underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim sup }} \frac{f(x_k+\eta_k w_k)-f(x_k)}{\eta_k}-L\underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim }} \|\bar{d}-w_k\|\\ & > -\rho \underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim }} \eta_k-L\underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim }} \|\bar{d}-w_k\|, \end{align*} where the second inequality uses the Lipschitz continuity of $f$, i.e. $|f(x_k+\eta_k \bar{d})-f(x_k+\eta_k w_k)| \leq L \eta_k \|\bar{d}-w_k\|$, and the last inequality follows from (\ref{rhoEta}). From Lemma \ref{limitPointDirectionSL}, we have $\underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim }} \|\bar{d}-w_k\|=0$. Since $\underset{k\rightarrow \infty,\,k \in \bar{P}}{\text{lim }} \eta_k =0$, we have \begin{equation*} \underset{\underset{t \downarrow 0,\,x_k+t\bar{d} \in \Omega}{x_k \rightarrow \bar{x},\, x_k \in \Omega}}{\text{lim sup }} \frac{f(x_k+t\bar{d})-f(x_k)}{t} \geq 0, \end{equation*} which contradicts our assumption (\ref{ClarkeJahnIneq}). Thus, $\bar{x}$ is a Clarke-Jahn stationary point for $f$. \end{proof} \section{Numerical Results}\label{Results} We performed comparative testing of our method BCSCG-DS against the state-of-the-art solver Bobyqa \cite{powell2009bobyqa}, which is fast and able to handle high dimensional problems. We compared the algorithms on the basis of the number of function evaluations required to reach a local optimum within the given budget. For Bobyqa, we set RHOBEG$=0.2$, RHOEND$=10^{-6}$, WORKSPACE$=10^8$ and other parameters to their respective defaults. The computational budget option MAXFUNC was set to $40(n+1)$ function evaluations.
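The box projection $[\,\cdot\,]_{[l,u]}$ and the projection-parameter search that underlie BCSCG-DS can be sketched compactly. This is an illustrative Python sketch only, not our C implementation; the function names, the choice $\tau_\beta=2$ and the iteration cap t\_max are assumptions made for the example.

```python
import numpy as np

def project_box(x, l, u):
    """Componentwise box projection [x]_{[l,u]} = max(l, min(u, x))."""
    return np.maximum(l, np.minimum(u, x))

def projection_parameter(f, x, d, alpha, l, u, rho=0.25, tau_beta=2.0, t_max=60):
    """Return beta = tau_beta**t_beta * alpha, the largest scaled step whose
    projected point still satisfies f(p) <= f(x) - rho * step**2.
    The first failing step, tau_beta * beta, plays the role of eta."""
    fx, beta = f(x), alpha
    for t in range(1, t_max + 1):  # illustrative cap; beta is finite by the lemma above
        step = tau_beta**t * alpha
        if f(project_box(x + step * d, l, u)) <= fx - rho * step**2:
            beta = step
        else:
            break
    return beta
```

For $f(x)=\|x\|^2$, $x=1$, $d=-1$, $\alpha=0.25$ and bounds $[-2,2]$, the sketch returns $\beta=1$, and the extended projection parameter is then $\eta=\tau_\beta\beta=2$.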
We implemented our algorithm BCSCG-DS in C using the gcc compiler (version 4.9.2). For the linear algebra computations, we used the BLAS and LAPACK libraries \cite{laug}. The tolerance, i.e. the minimum allowed poll size parameter, was set to $10^{-6}$, and the computational budget to $40(n+1)$ function evaluations. We set the initial poll size $\Delta_0^p$ and the direct search radius $r_0$ to $0.1\,\underset{i=1,\ldots,n}{\text{min}}\{u_i-l_i\}$, and the scalar parameter $\rho$ to $0.25$. The parameter $\epsilon_2$ was set to $0.01$. For the vicinity search, we chose $l=\lfloor0.1n\rfloor$ points. We used the Brent algorithm for performing line searches, with the maximum number of iterations set to $20$ and the tolerance to $10^{-5}$. Our test suite comprised two classes of box constrained optimization problems: noisy smooth problems and noisy piecewise-smooth problems, taken from the set of least square problems reported by \citet{lukvsan2018sparse}, who outline a diverse collection of problems. We considered 55 least square problems to test the effectiveness of our proposed approach. Table \ref{tab:testProblems} lists the least square functions considered for our computational experiments. The lower and upper bounds on each variable were set to $-50$ and $50$, respectively, for the entire test collection. We now describe the approach, suggested in \cite{more_wild_benchmarking}, for generating piecewise-smooth problems from least square problems and adding noise to them. A typical least square problem is of the form: \begin{equation} f(x)=\sum_{i=1}^m f_i(x)^2, \end{equation} where $f_i:\mathbb{R}^n \rightarrow \mathbb{R}$, $i=1,\ldots,m$, are continuous functions. The piecewise-smooth problems were generated from the least square problems by replacing each squared term with the corresponding absolute value, i.e. \begin{equation} f(x)=\sum_{i=1}^m | f_i(x)|.
\end{equation} To add noise, we first define the cubic Chebyshev polynomial $U_3$ as \begin{equation} U_3(\alpha)=\alpha(4\alpha^2-3), \end{equation} where $\alpha \in \mathbb{R}$. Let $\psi:\mathbb{R}^n\rightarrow [-1,1]$ be the function defined as \begin{equation} \psi(x)=U_3(\psi_0(x)), \end{equation} where \begin{equation} \psi_0(x)=0.9\, \text{sin}\,(100\|x\|_1)\, \text{cos}\,(100\|x\|_\infty)+0.1\, \text{cos}\,(\|x\|_2) \end{equation} is continuous and piecewise continuously differentiable, with $2^n n!$ continuously differentiable regions. We define the noisy problem $f(x)$ for the smooth case as \begin{equation} f(x)=(1+\epsilon_f\psi(x))\sum_{i=1}^m f_i(x)^2 \end{equation} and for the piecewise-smooth case as \begin{equation} f(x)=(1+\epsilon_f\psi(x))\sum_{i=1}^m |f_i(x)| \end{equation} where $\epsilon_f$ is the relative noise level. For our experiments, we set $\epsilon_f=10^{-3}$ as suggested in \cite{more_wild_benchmarking}. Since the performance of the algorithms is affected by the problem dimension, we studied each problem at the large dimensions $200$, $250$ and $300$. Hence, $165$ smooth and $165$ piecewise-smooth problem instances were studied. In order to remove the effect of the starting point on the algorithm, we generated $10$ different feasible starting points for each problem, and inferences were drawn on the basis of their median. The performance of a method depends on the final solution and the number of function evaluations needed to achieve it. We define one normalized function evaluation as $n+1$ function evaluations. In our comparison, a method is considered to perform better if it computes a lower function value within the computational budget of $40$ normalized function evaluations, i.e. $40(n+1)$ function evaluations.
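The noise construction above is straightforward to reproduce. The following is a Python sketch under our own naming conventions, where `residuals` stands for the vector of terms $f_i$:

```python
import numpy as np

def chebyshev_u3(a):
    """Cubic Chebyshev polynomial U3(a) = a*(4a^2 - 3); maps [-1,1] to [-1,1]."""
    return a * (4.0 * a**2 - 3.0)

def psi(x):
    """Deterministic noise function psi(x) = U3(psi0(x)) with values in [-1, 1]."""
    psi0 = (0.9 * np.sin(100.0 * np.linalg.norm(x, 1))
                * np.cos(100.0 * np.linalg.norm(x, np.inf))
            + 0.1 * np.cos(np.linalg.norm(x, 2)))
    return chebyshev_u3(psi0)

def noisy_objective(residuals, x, eps_f=1e-3, piecewise=False):
    """Noisy objective (1 + eps_f*psi(x)) * sum of f_i^2 (or |f_i| if piecewise)."""
    fi = np.asarray(residuals(x))
    base = np.sum(np.abs(fi)) if piecewise else np.sum(fi**2)
    return (1.0 + eps_f * psi(x)) * base
```

Since $|\psi_0(x)| \leq 1$ and $U_3$ maps $[-1,1]$ onto itself, the relative perturbation of the objective never exceeds $\epsilon_f$.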
\subsection{Performance Profiles} For comparative analysis, we computed performance profiles \cite{more_wild_benchmarking,dolan2002benchmarking}, which are frequently used for the quantitative assessment of derivative free optimization solvers. Let $\mathcal{P}$ be the collection of test problems and $\mathcal{S}$ the set of solvers or algorithms under consideration. For a particular instance of a test problem, let $x_0$ be the starting point for all the solvers and $f(x_0)$ its corresponding function value. Let $f^*_s$ be the best solution obtained by a solver $s \in \mathcal{S}$ within the given computational budget. Then, the convergence condition for the solver $s$ is defined as: \begin{equation}\label{solverConverge} f(x_0)-f^*_s \geq (1-\tau)(f(x_0)-f^*_L) \end{equation} where $0<\tau \leq 1$ is a user defined tolerance and $f^*_L=\text{min}\{f^*_s:s \in \mathcal{S}\}$ is the minimum function value obtained by any solver. Let $w_{s,p}$ be the performance measure for a solver $s\in \mathcal{S}$ on a particular problem instance $p \in \mathcal{P}$, which can be the amount of time taken or the number of function evaluations performed by the solver to obtain the result. In this work, we take $w_{s,p}$ to be the number of function evaluations performed. Then, the performance of a particular solver $s$ relative to the best solver, on a given problem instance, is given by the \emph{performance ratio}, defined as: \begin{equation*} \nu_{s,p}=\frac{w_{s,p}}{\text{min}\{w_{s,p}:s\in \mathcal{S}\}}. \end{equation*} For a solver $s$ which is unable to satisfy condition (\ref{solverConverge}), we set $\nu_{s,p}=\infty$. The performance profile for a solver $s\in \mathcal{S}$ is defined as the fraction of problems solved with respect to an upper bound $\alpha$ on $\nu$, i.e., \begin{equation*} \rho_s(\alpha)=\frac{1}{|\mathcal{P}|}\left|\{p \in \mathcal{P}:\nu_{s,p} \leq \alpha\}\right|.
\end{equation*} Performance profiles illustrate the relative performance of the solvers on the given problem set. Thus, $\rho_s(1)$ represents the fraction of problems on which solver $s$ showed the best performance. We constructed the performance profiles for the smooth and piecewise-smooth problems with noise using tolerances $\tau\in\{10^{-2},10^{-4}\}$ in Eq.(\ref{solverConverge}). Almost all problems utilized the complete budget of $40$ normalized function evaluations, which is expected since the dimension of the test problems is quite high. Tables \ref{tab:perfSmoothNoisy} and \ref{tab:perfPiecewiseSmoothNoisy} show the percentage of problems on which each solver performed better than the other for the smooth and piecewise-smooth test problems with varying dimensions. For both smooth and piecewise-smooth problems with noise, BCSCG-DS found better solutions than Bobyqa, in accordance with the test condition given in Eq.(\ref{solverConverge}), for dimensions $200$, $250$ and $300$. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Dimension} & \multicolumn{2}{c|}{$\tau=10^{-2}$} & \multicolumn{2}{c|}{$\tau=10^{-4}$}\\\cline{2-5} &Bobyqa&BCSCG-DS&Bobyqa&BCSCG-DS\\ \hline 200&25.45&90.91&25.45&89.09\\ 250&29.09&85.45&29.09&85.45\\ 300&30.91&85.45&30.91&83.64\\ \hline \end{tabular} \end{center} \caption{Percentage of noisy smooth problems solved by each solver} \label{tab:perfSmoothNoisy} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Dimension} & \multicolumn{2}{c|}{$\tau=10^{-2}$} & \multicolumn{2}{c|}{$\tau=10^{-4}$}\\\cline{2-5} &Bobyqa&BCSCG-DS&Bobyqa&BCSCG-DS\\ \hline 200&22.22&92.59&22.22&90.74\\ 250&27.77&85.18&27.77&83.33\\ 300&29.63&83.33&29.63&74.07\\ \hline \end{tabular} \end{center} \caption{Percentage of noisy piecewise-smooth problems solved by each solver} \label{tab:perfPiecewiseSmoothNoisy} \end{table} The competitive performance of BCSCG-DS is
clearly evident from the results of extensive computational experimentation. In many cases, the gap between the solutions attained by BCSCG-DS and Bobyqa is significant. The above results clearly validate the applicability of BCSCG-DS for solving smooth and piecewise-smooth derivative free optimization problems. They also show its effectiveness and robustness on problems with varying dimensions for different tolerances $\tau$. \subsection{Progress Curves} \begin{figure*}[t] \centering \includegraphics[scale=0.8]{smoothNoisy1.eps} \caption{Progress curves for smooth problems with noise and varying dimensions for (a-c) Generalized Broyden Tridiagonal Problem (d-f) Chained and Modified problem HS53 (g-i) Attracting-Repelling Problem (j-l) Modified Countercurrent Reactors Problem-2} \label{fig:smoothnoisy1} \end{figure*} \begin{figure*}[t] \centering \includegraphics[scale=0.8]{smoothNoisy2.eps} \caption{Progress curves for smooth problems with noise and varying dimensions for (a-c) Singular Broyden Problem (d-f) Flow in a Channel Problem (g-i) Swirling Flow Problem (j-l) Driven Cavity Problem} \label{fig:smoothnoisy2} \end{figure*} \begin{figure*}[t] \centering \includegraphics[scale=0.8]{piecewiseSmoothNoisy1.eps} \caption{Progress curves for piecewise-smooth problems with noise and varying dimensions for (a-c) Generalized Broyden Tridiagonal Problem (d-f) Chained and Modified problem HS53 (g-i) Attracting-Repelling Problem (j-l) Modified Countercurrent Reactors Problem-2} \label{fig:piecewiseSmoothnoisy1} \end{figure*} \begin{figure*}[t] \centering \includegraphics[scale=0.8]{piecewiseSmoothNoisy2.eps} \caption{Progress curves for piecewise-smooth problems with noise and varying dimensions for (a-c) Singular Broyden Problem (d-f) Flow in a Channel Problem (g-i) Swirling Flow Problem (j-l) Driven Cavity Problem} \label{fig:piecewiseSmoothnoisy2} \end{figure*} For a quantitative comparison between the proposed method and Bobyqa, we also show the progress
curves \cite{regis2013combining}, depicting the local optimum value versus the number of normalized function evaluations. For a given function, the local minimum function value is computed for a specific starting point (initial guess) and a particular number of normalized function evaluations. For better statistical relevance, this computation is performed for ten different starting points and the median of the minima obtained over the ten trials is calculated. This process is repeated for each solver while varying the number of normalized function evaluations. Subsequently, the median optimum function value (y-axis, log scale) is plotted against the number of normalized function evaluations (x-axis) for the given function. The resulting plots for the two solvers, obtained by considering eight noisy smooth and piecewise-smooth functions with three different dimensions (200, 250, and 300) for each function, are shown in figures \ref{fig:smoothnoisy1} to \ref{fig:piecewiseSmoothnoisy2}. Figures \ref{fig:smoothnoisy1} and \ref{fig:smoothnoisy2} illustrate the progress curves for 24 (eight functions, three dimensions) smooth problems. To gain a better understanding of the plots, consider figure \ref{fig:smoothnoisy1}(a), where the median of the optimum function values computed over ten trials versus the number of normalized function evaluations is shown for the Generalized Broyden Tridiagonal function with dimension 200. As the number of normalized function evaluations increases, the median optimum values obtained by the proposed BCSCG-DS method improve markedly, characterized by decreasing values on the y-axis. The median of the function values evaluated at the different starting points for both solvers is approximately $9.438\times10^8$. The performance of the two solvers is as follows: \begin{enumerate} \item \textbf{Bobyqa}: After 4 normalized function evaluations, the median of function values is approximately $9.407\times10^8$.
It converges to approximately $9.403\times10^8$ after 40 normalized function evaluations. \item \textbf{BCSCG-DS}: After 4 normalized function evaluations, the median of function values is $9.100\times10^8$. It converges to approximately $3.412\times10^8$ after 40 normalized function evaluations. \end{enumerate} The plots corresponding to dimensions 250 and 300 for the Generalized Broyden Tridiagonal function are shown in figures \ref{fig:smoothnoisy1}(b) and \ref{fig:smoothnoisy1}(c). Similarly, progress curves are shown in figures \ref{fig:smoothnoisy1}(d-f) for the Chained and Modified problem HS53, figures \ref{fig:smoothnoisy1}(g-i) for the Attracting-Repelling Problem, and figures \ref{fig:smoothnoisy1}(j-l) for the Modified Countercurrent Reactors Problem-2. The progress curve plots corresponding to dimensions 200, 250 and 300 are shown for the Singular Broyden Problem in figures \ref{fig:smoothnoisy2}(a-c), the Flow in a Channel Problem in figures \ref{fig:smoothnoisy2}(d-f), the Swirling Flow Problem in figures \ref{fig:smoothnoisy2}(g-i) and the Driven Cavity Problem in figures \ref{fig:smoothnoisy2}(j-l). Similarly, figures \ref{fig:piecewiseSmoothnoisy1} and \ref{fig:piecewiseSmoothnoisy2} illustrate the progress curves for the 24 piecewise-smooth problems. For further insight, tabular results are also presented, which enable a quantitative analysis of the performance of the two solvers. Tables \ref{tab:smoothBobyqaCSCG} and \ref{tab:pwsmoothBobyqaCSCG} show the median (over 10 instances) of the optimal function values attained by the two solvers for the smooth and piecewise-smooth problems. Results are shown in three column groups, $200$, $250$ and $300$, which refer to the dimensions considered for experimentation.
For each dimension, the first two subcolumns refer to the median (over 10 instances) of the optimal solutions found by Bobyqa and BCSCG-DS after $40$ normalized function evaluations, while the third subcolumn refers to the median of the function values at the 10 different starting points. From the progress curves and tabular results, we can infer that the proposed method attains comparatively better optimum values than Bobyqa. \subsection{Computational Time} We now analyse the computational time taken by the two solvers. We compared our approach with Bobyqa on 4 different noisy smooth problems over 9 different dimensions varying from 100 to 500, summing up to a total of 36 instances. We used 10 different starting points for each instance. In order to reduce any ambiguity attributed to computational noise, we repeated the experiment five times. We used the gcc compiler (version 4.9.2) for both solvers on an Intel Xeon(R) E5-2670 processor running a GNU/Linux operating system. The same test codes \cite{lukvsan2018sparse} were used for function evaluations by both solvers.
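For concreteness, the convergence test of Eq.(\ref{solverConverge}) and the resulting performance profiles used throughout the comparisons above can be computed as in the following Python sketch; the array layout and function names are our own choices:

```python
import numpy as np

def converged(f0, f_star_s, f_star_best, tau):
    """Convergence test: f(x0) - f*_s >= (1 - tau) * (f(x0) - f*_L)."""
    return f0 - f_star_s >= (1.0 - tau) * (f0 - f_star_best)

def performance_profile(work, f0, f_star, tau, alphas):
    """work[s, p] and f_star[s, p] hold the evaluations used and the best
    value found by solver s on problem p; f0[p] is the starting value.
    Returns rho_s(alpha) for each solver s and each alpha in alphas."""
    n_solvers, n_probs = f_star.shape
    ratios = np.full((n_solvers, n_probs), np.inf)  # nu_{s,p}; inf if not converged
    for p in range(n_probs):
        f_best = f_star[:, p].min()
        ok = [s for s in range(n_solvers)
              if converged(f0[p], f_star[s, p], f_best, tau)]
        if ok:
            w_min = min(work[s, p] for s in ok)
            for s in ok:
                ratios[s, p] = work[s, p] / w_min
    return np.array([[np.mean(ratios[s] <= a) for a in alphas]
                     for s in range(n_solvers)])
```

Here `ratios` holds the performance ratios $\nu_{s,p}$, with $\infty$ assigned to solvers failing the convergence test, and the returned matrix holds $\rho_s(\alpha)$ for each requested $\alpha$.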
\begin{table}[h] \begin{center} \begin{tabular}{|p{5cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}|} \hline Problem&Dimension&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value\\ \hline \multirow{9}{5cm}{Chained Rosenbrock Function} &100&8.822e+05&6.002e+02&8.835e+05\\ &150&1.267e+06&9.064e+05&1.269e+06\\ &200&1.692e+06&1.642e+06&1.695e+06\\ &250&2.041e+06&2.007e+06&2.041e+06\\ &300&2.454e+06&2.333e+06&2.457e+06\\ &350&2.946e+06&2.931e+06&2.948e+06\\ &400&3.280e+06&3.273e+06&3.284e+06\\ &450&3.800e+06&3.801e+06&3.803e+06\\ &500&4.148e+06&4.136e+06&4.151e+06\\\hline \multirow{9}{5cm}{Chained and Modified HS47} &100&1.672e+06&1.284e+09&2.703e+10\\ &150&3.446e+10&3.185e+10&5.198e+10\\ &200&3.600e+10&5.654e+10&6.801e+10\\ &250&6.372e+10&3.744e+10&7.984e+10\\ &300&1.024e+11&8.663e+10&1.054e+11\\ &350&1.224e+11&1.019e+11&1.236e+11\\ &400&1.392e+11&1.254e+11&1.413e+11\\ &450&1.477e+11&1.321e+11&1.502e+11\\ &500&1.588e+11&1.544e+11&1.604e+11\\\hline \multirow{9}{5cm}{Chained and Modified HS48} &100&7.010e+05&3.434e+05&7.028e+05\\ &150&1.083e+06&1.054e+06&1.084e+06\\ &200&1.561e+06&1.450e+06&1.562e+06\\ &250&1.799e+06&1.769e+06&1.802e+06\\ &300&2.224e+06&2.210e+06&2.228e+06\\ &350&2.646e+06&2.605e+06&2.650e+06\\ &400&2.843e+06&2.827e+06&2.847e+06\\ &450&3.293e+06&3.293e+06&3.297e+06\\ &500&3.660e+06&3.661e+06&3.664e+06\\\hline \multirow{9}{5cm}{Modified Countercurrent Reactors Problem-1} &100&2.410e+05&1.395e+05&2.415e+05\\ &150&3.679e+05&3.319e+05&3.685e+05\\ &200&5.046e+05&4.629e+05&5.056e+05\\ &250&6.059e+05&5.910e+05&6.069e+05\\ &300&7.547e+05&7.546e+05&7.554e+05\\ &350&8.637e+05&8.483e+05&8.650e+05\\ &400&9.979e+05&9.974e+05&9.989e+05\\ &450&1.114e+06&1.115e+06&1.115e+06\\ &500&1.226e+06&1.225e+06&1.228e+06\\\hline \end{tabular} \end{center} \caption{Test Results for Problems with Varying Dimensions} \label{tab:timeOptimValues} \end{table} In table \ref{tab:timeOptimValues}, a comparison between the solutions attained by the two solvers after 40 normalized function 
evaluations is shown. The first and second columns give the function name and its corresponding dimension. The columns under the headers "Bobyqa Optimal Value" and "BCSCG-DS Optimal Value" report the median of the solution values obtained from 10 different starting points. The last column, with header "Initial Function Value", reports the median of the initial function values evaluated at the ten different starting points. In figure \ref{fig:timePlot}, a comparison of the total computational time, in seconds, taken by both solvers is shown. We observe that at dimensions close to 100, both solvers take nearly the same time. However, as the dimension increases, the computational burden of the Bobyqa solver grows significantly faster than that of the proposed solver. Notably, BCSCG-DS becomes approximately five times faster than Bobyqa when the dimension of the problem increases to $500$. From table \ref{tab:timeOptimValues} and figure \ref{fig:timePlot}, it is clear that BCSCG-DS is competitive with respect to Bobyqa in terms of both solution quality and computational time. \begin{figure*}[h] \centering \includegraphics[scale=1]{timesPlot.eps} \caption{Computational time taken by Bobyqa and BCSCG-DS for problems (a) Chained Rosenbrock (b) Chained and Modified HS47 (c) Chained and Modified HS48 (d) Modified Countercurrent Reactors Problem-1} \label{fig:timePlot} \end{figure*} \section{Conclusion}\label{Conclusion} In this paper, we proposed a new approach for solving high dimensional box constrained derivative free optimization problems with noise. We integrated the scaled conjugate gradient algorithm with a direct search approach, whose performance is further enhanced by the inclusion of quadratic models into the framework. The computational results clearly demonstrate its effectiveness and competitive performance when compared to a standard solver.
Importantly, our approach offers three distinct advantages: \begin{enumerate} \item Guaranteed convergence to a local optimum. \item Extensibility towards larger dimensions for high dimensional optimization. \item Low computational time when compared to other solvers. \end{enumerate} In addition to its suitability for high dimensional derivative free optimization, the proposed approach is also effective for solving noisy piecewise-smooth problems, as is evident from the extensive and rigorous numerical experiments performed. This class of problems is conventionally considered to be more challenging than its smooth counterpart because of differentiability issues. Consequently, our algorithm BCSCG-DS holds promising potential for solving high dimensional box constrained derivative free optimization problems. \clearpage \begin{table}[h] \begin{center} \begin{tabular}{|c|l|} \hline S.No.&Function Name\\ \hline 1&Chained Rosenbrock function\\ 2&Chained Wood function\\ 3&Chained Powell singular function\\ 4&Chained Cragg and Levy function\\ 5&Generalized Broyden tridiagonal function\\ 6&Generalized Broyden banded function\\ 7&Chained Freudenstein and Roth function\\ 8&Wright and Holt zero residual problem\\ 9&Toint quadratic merging problem\\ 10&Chained serpentine function\\ 11&Chained and modified problem HS47\\ 12&Chained and modified problem HS48\\ 13&Sparse signomial function\\ 14&Sparse trigonometric function\\ 15&Countercurrent reactors problem 1 (modified)\\ 16&Tridiagonal system\\ 17&Structured Jacobian problem\\ 18&Modified discrete boundary value problem\\ 19&Chained and modified problem HS53\\ 20&Attracting-Repelling problem\\ 21&Countercurrent reactors problem 2 (modified)\\ 22&Trigonometric system\\ 23&Trigonometric - exponential system (trigexp 1)\\ 24&Singular Broyden problem\\ 25&Five-diagonal system\\ 26&Seven-diagonal system\\ 27&Extended Freudenstein and Roth function\\ 28&Broyden tridiagonal problem\\ 29&Extended Powell badly scaled function\\ 30&Extended
Wood problem\\ 31&Tridiagonal exponential problem\\ 32&Discrete boundary value problem\\ 33&Brent problem\\ 34&Flow in a channel\\ 35&Swirling flow\\ 36&Bratu problem\\ 37&Poisson problem 1\\ 38&Poisson problem 2\\ 39&Porous medium problem\\ 40&Convection-diffusion problem\\ 41&Nonlinear biharmonic problem\\ 42&Driven cavity problem\\ 43&Problem 2.47 of \cite{lukvsan2018sparse}\\ 44&Problem 2.48 of \cite{lukvsan2018sparse}\\ 45&Problem 2.49 of \cite{lukvsan2018sparse}\\ 46&Problem 2.50 of \cite{lukvsan2018sparse}\\ 47&Problem 2.51 of \cite{lukvsan2018sparse}\\ 48&Problem 2.52 of \cite{lukvsan2018sparse}\\ 49&Problem 2.53 of \cite{lukvsan2018sparse}\\ 50&Problem 2.54 of \cite{lukvsan2018sparse}\\ 51&Broyden banded function\\ 52&Ascher and Russell boundary value problem\\ 53&Potra and Rheinboldt boundary value problem\\ 54&Modified Bratu problem\\ 55&Nonlinear Dirichlet problem\\ \hline \end{tabular} \end{center} \caption{Test Problems} \label{tab:testProblems} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|p{3mm}|p{1.3cm}|p{1.3cm}|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.4cm}|} \hline \multicolumn{1}{|c|}{\textbf{}} & \multicolumn{3}{|c|}{\textbf{Dimension=200}} & \multicolumn{3}{|c|}{\textbf{Dimension=250}} & \multicolumn{3}{|c|}{\textbf{Dimension=300}}\\\hline S No.&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value\\ \hline 1&2.34e+10&2.01e+06&2.41e+10&3.17e+10&2.43e+10&3.19e+10&3.72e+10&3.62e+10&3.73e+10\\ 2&1.87e+10&1.21e+10&2.46e+10&2.87e+10&2.54e+10&2.89e+10&3.62e+10&3.60e+10&3.77e+10\\ 3&8.71e+09&6.80e+09&9.43e+09&1.39e+10&1.22e+10&1.41e+10&1.64e+10&1.50e+10&1.64e+10\\ 4&3.20e+73&4.09e+48&3.89e+85&9.96e+72&1.07e+45&7.21e+84&1.00e+74&1.29e+62&1.08e+86\\ 5&9.40e+08&3.41e+08&9.44e+08&1.23e+09&1.07e+09&1.23e+09&1.50e+09&1.43e+09&1.50e+09\\
6&1.13e+13&3.60e+08&1.14e+13&1.31e+13&9.63e+08&1.48e+13&1.64e+13&1.46e+13&1.65e+13\\ 7&8.20e+11&9.05e+06&8.61e+11&1.09e+12&2.15e+07&1.12e+12&1.24e+12&1.16e+12&1.27e+12\\ 8&6.10e+60&1.24e+59&6.28e+84&3.36e+66&1.55e+73&1.15e+85&7.95e+61&3.65e+79&1.85e+85\\ 9&2.05e+13&2.46e+13&4.01e+13&1.50e+13&2.28e+13&5.52e+13&1.27e+13&5.04e+13&7.94e+13\\ 10&1.67e+07&1.46e+07&1.67e+07&2.04e+07&2.04e+07&2.04e+07&2.46e+07&2.45e+07&2.46e+07\\ 11&1.23e+13&4.23e+19&3.64e+20&1.92e+14&1.46e+20&4.68e+20&1.89e+16&2.40e+20&4.35e+20\\ 12&1.87e+10&1.48e+10&1.88e+10&2.16e+10&2.07e+10&2.21e+10&2.61e+10&2.44e+10&2.62e+10\\ 13&4.52e+26&7.50e+29&7.56e+32&3.90e+26&4.53e+31&1.08e+33&2.37e+26&1.32e+32&1.34e+33\\ 14&1.08e+04&4.14e+06&6.89e+06&1.31e+04&4.98e+06&8.18e+06&1.94e+04&8.41e+06&9.88e+06\\ 15&2.19e+09&1.97e+09&2.20e+09&2.74e+09&2.66e+09&2.75e+09&3.13e+09&3.10e+09&3.15e+09\\ 16&2.67e+13&6.24e+06&2.88e+13&2.95e+13&1.37e+13&3.31e+13&4.21e+13&1.15e+13&4.23e+13\\ 17&9.79e+08&9.63e+08&9.84e+08&1.24e+09&1.16e+09&1.27e+09&1.48e+09&1.26e+09&1.49e+09\\ 18&1.09e+06&9.25e+05&1.09e+06&1.10e+06&1.09e+06&1.10e+06&1.44e+06&1.43e+06&1.44e+06\\ 19&1.58e+10&1.18e+10&1.60e+10&2.00e+10&1.63e+10&2.07e+10&2.36e+10&2.29e+10&2.36e+10\\ 20&2.41e+10&3.10e+09&2.42e+10&3.10e+10&2.82e+10&3.11e+10&3.61e+10&3.51e+10&3.62e+10\\ 21&2.60e+09&2.17e+09&2.79e+09&2.99e+09&2.81e+09&3.17e+09&3.46e+09&3.28e+09&3.80e+09\\ 22&1.17e+01&5.59e+04&1.31e+05&1.97e+01&1.21e+05&2.73e+05&3.91e+01&2.52e+05&4.50e+05\\ 23&5.05e+71&3.09e+40&2.20e+83&4.54e+70&7.27e+56&1.22e+85&7.77e+70&1.37e+63&6.13e+84\\ 24&1.18e+16&3.37e+10&1.22e+16&1.48e+16&4.93e+12&1.67e+16&1.94e+16&1.48e+15&2.04e+16\\ 25&2.69e+13&2.71e+10&2.78e+13&3.87e+13&3.70e+13&3.89e+13&4.37e+13&4.21e+13&4.39e+13\\ 26&2.83e+13&7.75e+07&2.85e+13&3.38e+13&1.10e+09&3.56e+13&4.91e+13&4.46e+13&4.94e+13\\ 27&2.74e+11&2.59e+11&4.37e+11&4.51e+11&4.13e+11&5.20e+11&6.38e+11&5.69e+11&6.48e+11\\ 28&6.40e+07&5.77e+07&6.41e+07&8.10e+07&8.11e+07&8.11e+07&8.88e+07&8.76e+07&9.17e+07\\ 
29&1.29e+25&2.94e+28&3.45e+43&1.58e+25&2.50e+35&2.78e+43&9.07e+24&7.79e+34&2.99e+43\\ 30&7.48e+15&5.53e+15&7.87e+15&9.35e+15&7.05e+15&1.00e+16&1.19e+16&1.03e+16&1.20e+16\\ 31&1.62e+05&1.66e+05&1.73e+05&1.99e+05&2.04e+05&2.11e+05&2.34e+05&2.41e+05&2.49e+05\\ 32&9.63e+05&9.32e+05&9.65e+05&1.25e+06&1.25e+06&1.25e+06&1.53e+06&1.47e+06&1.53e+06\\ 33&1.00e+10&7.59e+09&1.01e+10&1.33e+10&1.26e+10&1.33e+10&1.56e+10&1.56e+10&1.63e+10\\ 34&2.40e+09&2.15e+09&2.42e+09&1.74e+09&1.25e+09&1.85e+09&1.76e+09&1.47e+09&1.76e+09\\ 35&6.69e+09&5.42e+09&6.73e+09&4.21e+09&3.90e+09&4.22e+09&3.91e+09&3.19e+09&3.92e+09\\ 36&3.84e+20&6.08e+05&2.70e+40&6.25e+20&7.79e+07&2.71e+40&3.35e+21&9.20e+08&1.89e+40\\ 37&1.39e+07&1.39e+07&1.40e+07&1.26e+07&1.23e+07&1.26e+07&1.21e+07&1.13e+07&1.22e+07\\ 38&3.17e+06&2.98e+06&3.18e+06&3.38e+06&3.14e+06&3.39e+06&4.78e+06&4.73e+06&4.79e+06\\ 39&8.39e+12&4.79e+12&9.16e+12&9.43e+12&8.83e+12&9.51e+12&9.91e+12&8.36e+12&1.05e+13\\ 40&7.84e+08&7.29e+08&8.21e+08&8.96e+08&8.14e+08&8.99e+08&8.27e+08&7.65e+08&8.29e+08\\ 41&1.04e+08&1.00e+08&1.04e+08&1.13e+08&1.05e+08&1.14e+08&1.70e+08&1.70e+08&1.70e+08\\ 42&4.06e+13&1.97e+13&4.16e+13&4.90e+13&4.40e+13&4.92e+13&6.36e+13&5.68e+13&6.45e+13\\ 43&8.89e+08&8.88e+08&8.90e+08&4.58e+08&4.58e+08&4.59e+08&2.67e+08&2.67e+08&2.67e+08\\ 44&1.17e+25&4.97e+17&1.16e+44&5.00e+24&3.27e+39&1.22e+44&3.99e+24&1.61e+21&6.15e+43\\ 45&2.50e+06&1.97e+06&2.50e+06&3.47e+06&2.98e+06&3.48e+06&3.96e+06&3.70e+06&3.97e+06\\ 46&1.85e+15&5.75e+07&1.25e+34&5.19e+15&5.89e+11&5.75e+33&2.20e+16&3.50e+22&4.74e+33\\ 47&4.48e+06&3.26e+06&4.50e+06&6.09e+06&6.10e+06&6.11e+06&7.03e+06&7.03e+06&7.05e+06\\ 48&4.48e+11&2.14e+07&4.51e+11&5.29e+11&2.87e+07&5.41e+11&6.32e+11&1.87e+08&6.40e+11\\ 49&1.67e+05&1.60e+05&1.67e+05&2.08e+05&2.05e+05&2.09e+05&2.48e+05&2.48e+05&2.49e+05\\ 50&9.41e+05&8.31e+05&9.43e+05&1.18e+06&1.13e+06&1.18e+06&1.47e+06&1.47e+06&1.47e+06\\ 51&1.01e+13&1.15e+09&1.13e+13&1.27e+13&8.52e+09&1.37e+13&1.69e+13&1.58e+13&1.70e+13\\ 
52&1.04e+06&1.02e+06&1.04e+06&1.27e+06&1.25e+06&1.28e+06&1.57e+06&1.51e+06&1.57e+06\\ 53&1.00e+06&8.83e+05&1.01e+06&1.23e+06&1.19e+06&1.23e+06&1.51e+06&1.48e+06&1.51e+06\\ 54&2.16e+20&1.53e+11&3.54e+38&1.51e+19&1.67e+15&3.22e+38&9.57e+18&1.99e+26&3.84e+38\\ 55&3.23e+06&2.92e+06&3.24e+06&3.75e+06&3.57e+06&3.75e+06&4.93e+06&4.58e+06&4.94e+06\\ \hline \end{tabular} \end{center} \vspace{-3mm} \caption{Test Results for High Dimension Smooth Problems with Noise} \label{tab:smoothBobyqaCSCG} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|p{3mm}|p{1.3cm}|p{1.3cm}|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.4cm}|} \hline \multicolumn{1}{|c|}{} & \multicolumn{3}{|c|}{\textbf{Dimension=200}} & \multicolumn{3}{|c|}{\textbf{Dimension=250}} & \multicolumn{3}{|c|}{\textbf{Dimension=300}}\\\hline S No.&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value&Bobyqa Optimal Value&BCSCG-DS Optimal Value&Initial Function Value\\ \hline 1&1.67e+06&1.61e+06&1.68e+06&2.08e+06&1.99e+06&2.08e+06&2.55e+06&2.55e+06&2.56e+06\\ 2&1.62e+06&1.56e+06&1.63e+06&2.05e+06&2.00e+06&2.05e+06&2.41e+06&2.36e+06&2.41e+06\\ 3&9.14e+05&8.21e+05&9.16e+05&1.19e+06&1.19e+06&1.19e+06&1.37e+06&1.33e+06&1.37e+06\\ 4&1.36e+22&8.48e+24&1.70e+43&9.96e+20&2.99e+29&8.78e+42&6.58e+20&9.56e+34&6.30e+42\\ 5&3.25e+05&2.82e+05&3.26e+05&4.13e+05&3.98e+05&4.13e+05&5.05e+05&5.00e+05&5.06e+05\\ 6&3.26e+07&1.16e+07&3.26e+07&3.98e+07&3.40e+07&3.99e+07&4.77e+07&4.53e+07&4.78e+07\\ 7&1.26e+07&1.08e+07&1.26e+07&1.53e+07&1.51e+07&1.53e+07&1.82e+07&1.76e+07&1.82e+07\\ 8&1.49e+28&2.66e+39&7.16e+42&1.95e+32&2.94e+34&6.28e+42&5.75e+28&2.29e+41&7.60e+42\\ 9&3.68e+07&3.14e+07&3.75e+07&4.51e+07&5.07e+07&5.21e+07&5.71e+07&5.10e+07&5.73e+07\\ 10&5.24e+04&5.23e+04&5.24e+04&6.76e+04&6.75e+04&6.78e+04&8.27e+04&8.16e+04&8.27e+04\\ 11&5.90e+10&3.46e+10&6.58e+10&8.73e+10&6.22e+10&9.48e+10&9.87e+10&9.57e+10&1.05e+11\\ 
12&1.43e+06&1.40e+06&1.43e+06&1.82e+06&1.72e+06&1.83e+06&2.24e+06&2.24e+06&2.24e+06\\ 13&1.34e+15&6.55e+15&7.80e+16&6.44e+16&4.87e+16&1.04e+17&6.85e+16&5.30e+16&1.04e+17\\ 14&8.85e+02&3.32e+04&4.16e+04&1.60e+03&4.70e+04&5.21e+04&1.67e+03&5.26e+04&6.18e+04\\ 15&4.93e+05&4.55e+05&4.93e+05&6.13e+05&6.02e+05&6.13e+05&7.59e+05&7.41e+05&7.61e+05\\ 16&4.95e+07&6.67e+06&4.96e+07&6.22e+07&5.78e+07&6.30e+07&7.26e+07&6.96e+07&7.27e+07\\ 17&3.41e+05&3.18e+05&3.41e+05&4.19e+05&4.04e+05&4.20e+05&4.97e+05&4.97e+05&4.97e+05\\ 18&1.17e+04&1.16e+04&1.17e+04&1.43e+04&1.41e+04&1.44e+04&1.80e+04&1.79e+04&1.80e+04\\ 19&1.15e+06&1.11e+06&1.16e+06&1.39e+06&1.38e+06&1.39e+06&1.78e+06&1.74e+06&1.78e+06\\ 20&1.73e+06&1.57e+06&1.73e+06&2.02e+06&1.98e+06&2.02e+06&2.58e+06&2.57e+06&2.58e+06\\ 21&5.43e+05&5.05e+05&5.46e+05&6.22e+05&6.46e+05&6.49e+05&7.91e+05&7.88e+05&7.93e+05\\ 22&2.81e+03&2.69e+03&3.40e+03&1.04e+02&4.39e+03&5.56e+03&7.05e+03&7.17e+03&8.08e+03\\ 23&2.64e+21&6.19e+21&2.66e+42&4.75e+21&1.33e+33&3.51e+41&3.17e+23&1.61e+33&1.99e+43\\ 24&9.84e+08&7.95e+08&1.02e+09&1.24e+09&4.58e+08&1.24e+09&1.48e+09&1.37e+09&1.48e+09\\ 25&4.92e+07&4.73e+07&4.93e+07&6.38e+07&5.80e+07&6.40e+07&7.33e+07&7.22e+07&7.35e+07\\ 26&5.08e+07&4.33e+07&5.09e+07&6.09e+07&5.90e+07&6.10e+07&7.20e+07&6.87e+07&7.21e+07\\ 27&6.00e+06&5.57e+06&6.03e+06&7.95e+06&7.56e+06&7.97e+06&9.46e+06&9.68e+06&9.76e+06\\ 28&8.72e+04&8.22e+04&8.74e+04&1.02e+05&1.00e+05&1.02e+05&1.27e+05&1.26e+05&1.27e+05\\ 29&1.92e+10&3.81e+14&8.00e+21&4.83e+10&6.72e+18&1.29e+22&1.19e+12&2.26e+18&1.84e+22\\ 30&6.61e+08&6.08e+08&6.63e+08&7.40e+08&7.15e+08&7.42e+08&9.05e+08&8.88e+08&9.07e+08\\ 31&4.81e+03&4.91e+03&5.01e+03&5.85e+03&5.94e+03&6.08e+03&7.17e+03&7.32e+03&7.47e+03\\ 32&1.17e+04&1.16e+04&1.17e+04&1.38e+04&1.36e+04&1.38e+04&1.76e+04&1.75e+04&1.76e+04\\ 33&9.94e+05&9.40e+05&9.96e+05&1.22e+06&1.19e+06&1.22e+06&1.56e+06&1.54e+06&1.56e+06\\ 34&5.26e+05&4.88e+05&5.27e+05&5.30e+05&5.18e+05&5.31e+05&5.08e+05&5.02e+05&5.09e+05\\ 
35&7.40e+05&7.12e+05&7.43e+05&7.93e+05&7.57e+05&7.94e+05&7.74e+05&7.63e+05&7.76e+05\\ 36&1.08e+09&6.20e+03&2.73e+20&4.14e+08&1.14e+04&2.78e+20&5.51e+09&1.77e+18&2.86e+20\\ 37&3.98e+04&3.65e+04&3.99e+04&4.14e+04&4.09e+04&4.15e+04&4.98e+04&4.97e+04&4.99e+04\\ 38&2.07e+04&2.01e+04&2.08e+04&2.48e+04&2.48e+04&2.49e+04&3.15e+04&3.15e+04&3.16e+04\\ 39&2.54e+07&2.00e+07&2.60e+07&2.80e+07&2.55e+07&2.81e+07&3.22e+07&3.20e+07&3.23e+07\\ 40&2.80e+05&2.56e+05&2.81e+05&3.15e+05&3.15e+05&3.15e+05&3.71e+05&3.71e+05&3.72e+05\\ 41&1.18e+05&1.15e+05&1.21e+05&1.37e+05&1.35e+05&1.37e+05&1.75e+05&1.74e+05&1.75e+05\\ 42&7.27e+07&6.74e+07&7.27e+07&7.16e+07&6.85e+07&7.18e+07&9.94e+07&9.86e+07&9.95e+07\\ 43&6.26e+04&6.25e+04&6.27e+04&5.49e+04&5.49e+04&5.49e+04&5.13e+04&5.11e+04&5.14e+04\\ 44&1.88e+10&3.96e+08&2.09e+22&1.07e+11&1.76e+06&2.09e+22&2.15e+11&2.30e+18&1.91e+22\\ 45&1.75e+04&1.73e+04&1.75e+04&2.24e+04&2.24e+04&2.25e+04&2.66e+04&2.66e+04&2.66e+04\\ 46&1.07e+06&2.07e+12&3.00e+17&6.11e+06&4.55e+15&2.09e+17&6.45e+06&5.84e+13&1.73e+17\\ 47&2.22e+04&2.09e+04&2.23e+04&2.85e+04&2.83e+04&2.86e+04&3.49e+04&3.48e+04&3.50e+04\\ 48&6.67e+06&9.23e+05&6.69e+06&8.15e+06&7.80e+06&8.17e+06&9.60e+06&9.50e+06&9.62e+06\\ 49&5.03e+03&4.99e+03&5.03e+03&6.19e+03&6.17e+03&6.19e+03&7.48e+03&7.44e+03&7.49e+03\\ 50&1.13e+04&1.13e+04&1.13e+04&1.47e+04&1.46e+04&1.47e+04&1.71e+04&1.70e+04&1.71e+04\\ 51&3.17e+07&1.39e+07&3.17e+07&3.87e+07&3.86e+07&3.89e+07&4.73e+07&4.73e+07&4.74e+07\\ 52&1.18e+04&1.18e+04&1.18e+04&1.48e+04&1.45e+04&1.48e+04&1.76e+04&1.76e+04&1.76e+04\\ 53&1.17e+04&1.17e+04&1.18e+04&1.41e+04&1.39e+04&1.41e+04&1.75e+04&1.75e+04&1.75e+04\\ 54&9.04e+07&1.98e+10&4.72e+19&1.86e+08&4.05e+15&5.60e+19&1.29e+09&4.18e+17&4.71e+19\\ 55&2.08e+04&2.01e+04&2.08e+04&2.37e+04&2.34e+04&2.37e+04&3.06e+04&3.06e+04&3.06e+04\\ \hline \end{tabular} \end{center} \vspace{-3mm} \caption{Test Results for Piecewise-smooth Problems with Noise} \label{tab:pwsmoothBobyqaCSCG} \end{table} \clearpage
\section{Introduction} Gravitational instantons are usually defined as complete nonsingular solutions of the vacuum Einstein field equations in Euclidean space \cite{h}, \cite{gh}, \cite{yau}-\cite{hit}. Among other things, they play an important role in the path-integral formulation of quantum gravity \cite{h1}, \cite{GibPerry} forming a privileged class of stationary phase metrics that provide the dominant contribution to the path integral and mediate tunneling phenomena between topologically inequivalent vacua. The first examples of gravitational instanton metrics were obtained by complexifying the Schwarzschild, Kerr and Taub-NUT spacetimes through analytically continuing them to the Euclidean sector \cite{h},\cite{gh}. The Euclidean Schwarzschild and Euclidean Kerr solutions do not have self-dual curvature though they are asymptotically flat at spatial infinity and periodic in imaginary time, while the Taub-NUT instanton is self-dual. However, there exists another type of Taub-NUT instanton which, unlike the first one, is not self-dual and possesses an event horizon ("bolt") \cite{page}. The generalization of this Taub-bolt metric to the rotating case was given in \cite{gperry}. Another class of gravitational instanton solutions consists of the Eguchi-Hanson metric \cite{eh} and the multi-centre metrics of \cite{gh}, which include the former as a special case. These metrics are asymptotically locally Euclidean with self-dual curvature and admit a hyper-K\"ahler structure. (For a review see \cite{egh}). The hyper-K\"ahler structure of gravitational instantons and some properties of gravitational instantons which are derivable from minimal surfaces in $3$-dimensional Euclidean space were examined in \cite{an1}, \cite{an2} using the Newman-Penrose formalism for Euclidean signature. 
A fundamental difference between manifolds that have Euclidean $(++++)$ and Lorentzian $(-+++)$ signatures is that the former can harbor self-dual gauge fields that have no effect on the metric, while in the latter external fields serve as source terms in the field equations. In other words, since the energy-momentum tensor vanishes identically for self-dual gauge fields, solutions of Einstein's equations automatically satisfy the system of coupled Einstein-Maxwell and Einstein-Yang-Mills equations. The corresponding self-dual gauge fields are inherent in the given instanton metric. Furthermore, in Euclidean signature, Weyl spinors also have vanishing energy-momentum tensor and vanishing vector and axial-vector bilinear covariants. Hence they too cannot appear as source terms in the field equations. The explicit solutions for different configurations of some "stowaway" gauge fields and spinors living on well-known Euclidean-signature manifolds have been obtained in a number of papers (see \cite{HawkPope}-\cite{tekin}). In recent years, motivated by Sen's $S$-duality conjecture \cite{sen}, there has been renewed interest in self-dual gauge fields living on well-known Euclidean-signature manifolds. These gauge fields were studied by constructing self-dual square integrable harmonic forms on the given spaces. For instance, the square integrable harmonic $2$-form in the self-dual Taub-NUT metric was constructed in \cite{gibbons}; its generalization to the case of complete noncompact hyper-K\"ahler spaces was given in \cite{hitchin}. However, a similar square integrable harmonic form on manifolds with non-self-dual metrics was found only for the simple case of the Euclidean-Schwarzschild instanton \cite{etesi}. In this note we shall give a new exact solution describing the Abelian "stowaway" gauge fields harbored by the Kerr-Taub-bolt instanton, which is a generalized example of an asymptotically flat instanton with non-self-dual curvature. 
This is achieved by explicit construction of the corresponding square integrable harmonic form on the space. The Euclidean Kerr-Taub-bolt instanton was discovered by Gibbons and Perry \cite{gperry} as a rotating generalization of the earlier Taub-bolt solution \cite{page} with non-self-dual curvature. This Ricci-flat metric is still asymptotically flat, and in Boyer-Lindquist-type coordinates it has the form \begin{equation} ds^{2}=\Xi \left( \frac{dr^{2}}{\Delta }+d\theta ^{2}\right) +\frac{\sin ^{2}\theta }{\Xi }\left( \alpha \,dt+P_{r}\,d\varphi \right) ^{2}+\frac{\Delta }{\Xi }\left( dt+P_{\theta }\,d\varphi \right) ^{2}\,\,, \label{ktb} \end{equation} where the metric functions are given by \begin{eqnarray} \Delta &=& r^{2}-2Mr-\alpha ^{2}+N^{2}\,,\\ \Xi &=& P_{r}-\alpha P_{\theta }=r^{2}-(N+\alpha \cos \theta )^{2}\,\,, \\ P_{r}&=&r^{2}-\alpha ^{2}-\frac{N^{4}}{N^{2}-\alpha ^{2}}\,\,\,,\\ P_{\theta }&=& -\alpha \sin ^{2}\theta +2N\cos \theta -\frac{\alpha N^{2}}{N^{2}-\alpha ^{2}}\,\,. \end{eqnarray} The parameters $\, M ,\,\, N ,\,\, \alpha \,\,$ represent the "electric" mass, the "magnetic" mass and the "rotation" of the instanton, respectively. When $\,\alpha=0\,$ this metric reduces to the Taub-bolt instanton solution found in \cite{page}, with an event horizon and non-self-dual curvature. If $\,N=0\,$, we have the Euclidean Kerr metric. Thus one can say that the metric (\ref{ktb}) generalizes the Taub-bolt solution of \cite{page} in the same manner as the Kerr metric generalizes the Schwarzschild solution. The coordinate $\,t\,$ in the metric behaves like an angular variable, and in order to have a complete nonsingular manifold at the value of $\,r\,$ defined by the equation $\,\Delta=0\,$, $\,t\,$ must have period $\,2\pi/\kappa\,$. 
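As a quick consistency check of the metric functions above, the identity $\Xi = P_{r}-\alpha P_{\theta}=r^{2}-(N+\alpha\cos\theta)^{2}$ can be verified numerically. The following Python sketch (the helper name `metric_functions` is ours, not from the paper) evaluates both sides at random points:

```python
import math
import random

def metric_functions(r, theta, M, N, alpha):
    """Metric functions of the Kerr-Taub-bolt metric (assumes N**2 != alpha**2)."""
    Delta = r * r - 2 * M * r - alpha * alpha + N * N
    Pr = r * r - alpha * alpha - N**4 / (N * N - alpha * alpha)
    Ptheta = (-alpha * math.sin(theta) ** 2 + 2 * N * math.cos(theta)
              - alpha * N * N / (N * N - alpha * alpha))
    Xi = r * r - (N + alpha * math.cos(theta)) ** 2
    return Delta, Pr, Ptheta, Xi

random.seed(1)
M, N, alpha = 2.0, 1.0, 0.3
for _ in range(1000):
    r = random.uniform(3.0, 20.0)
    theta = random.uniform(0.0, math.pi)
    _, Pr, Ptheta, Xi = metric_functions(r, theta, M, N, alpha)
    # Xi should coincide with P_r - alpha * P_theta for all r, theta
    assert abs(Xi - (Pr - alpha * Ptheta)) < 1e-9
```

The agreement is exact up to floating-point rounding, since $-\alpha^{2}+\alpha^{2}\sin^{2}\theta=-\alpha^{2}\cos^{2}\theta$ and the two $N$-dependent fractions combine to $-N^{2}$.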
The coordinate $\,\varphi\,$ must also be periodic with period $\,2\pi \,(1-\Omega /\kappa )\,$, where the "surface gravity" $\,\kappa\,$ and the "angular velocity" of rotation $\,\Omega\,$ are defined as \begin{eqnarray} \kappa & = & \frac{r_{+}-r_{-}}{2\,r_{0}^{2}}\,\,,~~~~~~~ \Omega =\frac{\alpha }{r_{0}^{2}}\,\,, \label{sgav} \end{eqnarray} with \begin{eqnarray} r_{\pm }&=& M \pm \sqrt{M^{2}-N^{2}+\alpha ^{2}}\,\,\,,~~~~~~~ r_{0}^{2} = r_{+}^{2}-\alpha ^{2}-\frac{N^{4}}{N^{2}-\alpha ^{2}}\,\,\,. \end{eqnarray} As a result, one finds that the condition $$\kappa =\frac{1}{4\mid N\mid }$$ along with $\,\Xi \geq 0 \,$ for $\,r>r_{+}\,$ and $\,0\leq \theta \leq \pi \,$ guarantees that $\,r=r_{+}\,$ is a regular bolt in the nonsingular manifold of (\ref{ktb}). We shall also need the basis one-forms for the metric (\ref{ktb}), which can be chosen as \begin{eqnarray} e^{1} &=&\left(\frac{\Xi }{\Delta }\right)^{1/2}dr\,\,,~~~~~e^{2}=\Xi ^{1/2}d\theta \,,\nonumber \\[2mm] e^{3} &=&\frac{\sin \theta }{\Xi ^{1/2}}\left(\alpha \,dt+P_{r}\,d\varphi\right)\,\,,\nonumber \\[2mm] e^{4} &=&\left(\frac{\Delta }{\Xi }\right)^{1/2}(dt+P_{\theta }\,d\varphi )\,\,. \label{bforms} \end{eqnarray} The isometry properties of the Kerr-Taub-bolt instanton with respect to a $\,U(1)\,$-action in imaginary time imply the existence of the Killing vector field \begin{eqnarray} \frac{\partial}{\partial t}&=&\xi^{\mu}_{(t)}\,\frac{\partial}{\partial x^{\mu}}\,\,. \label{killing} \end{eqnarray} We recall that the fixed point sets of this Killing vector field describe a two-surface, or bolt, in the metric. We shall use the Killing vector to construct a square integrable harmonic $2$-form on the Kerr-Taub-bolt space. It is well known that for a Ricci-flat metric a Killing vector can serve as a vector potential for the associated Maxwell fields in this metric \cite{papa}. 
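The regularity condition $\kappa = 1/(4\mid N\mid)$ can be explored numerically. In the static limit $\alpha\to0$ it should reproduce the Taub-bolt mass relation $M=5N/4$ of \cite{page}. The Python sketch below (the helpers `surface_gravity` and `solve_bolt_mass` are ours; the bisection bracket is an assumption valid for the parameters shown) solves the condition for $M$:

```python
import math

def surface_gravity(M, N, alpha):
    """kappa = (r_+ - r_-) / (2 r_0^2) for the Kerr-Taub-bolt metric."""
    root = math.sqrt(M * M - N * N + alpha * alpha)
    r_plus, r_minus = M + root, M - root
    r0_sq = r_plus ** 2 - alpha ** 2 - N ** 4 / (N * N - alpha * alpha)
    return (r_plus - r_minus) / (2.0 * r0_sq)

def solve_bolt_mass(N, alpha):
    """Solve kappa(M) = 1/(4 N) for M by bisection; kappa decreases in M here."""
    target = 1.0 / (4.0 * N)
    lo, hi = N * (1.0 + 1e-9), 5.0 * N
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if surface_gravity(mid, N, alpha) > target:
            lo = mid   # kappa still too large: increase M
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_bolt = solve_bolt_mass(N=1.0, alpha=0.0)
# Static limit reproduces Page's Taub-bolt value M = 5N/4
assert abs(M_bolt - 1.25) < 1e-8
```

For $N=1$, $\alpha=0$ one checks directly that $r_{+}=2$, $r_{-}=1/2$, $r_{0}^{2}=3$, so $\kappa=1/4$, in agreement with the condition.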
Since our Kerr-Taub-bolt instanton is also Ricci-flat, it is a good strategy to start with the Killing one-form field \begin{equation} \xi = {\xi}_{(t) \mu}\,dx^{\mu}\,, \end{equation} which is obtained by lowering the index of the Killing vector field in (\ref{killing}). Taking the exterior derivative of this one-form in the metric (\ref{ktb}), we have \begin{eqnarray} \label{2form} d\xi &=&\frac{2}{\Xi^2}\left\{ \,\left[M r^2 + \left(\alpha M \cos\theta-2 N r+M N\right)\left( N+\alpha \cos\theta \right)\right] e^{1} \,\wedge e^{4} \right. \\[2mm] & & \left. \nonumber -\left[N\,\left(\Delta + \alpha^2 +\alpha^2\, \cos^2 \theta\right) + 2\, \alpha (N^2-M r) \cos\theta\right] e^{2} \,\wedge e^{3}\,\right\}\,. \end{eqnarray} In this expression we have used the basis one-forms (\ref{bforms}) in order to facilitate the calculation of its Hodge dual, which is based on the simple relations \begin{eqnarray} ^{\star}\left(e^{1} \,\wedge e^{4}\right)&=& e^{2} \,\wedge e^{3}\,\,,~~~~ ^{\star}\left(e^{2} \,\wedge e^{3}\right)= e^{1} \,\wedge e^{4}\,\,.~~~~~~~~~ \label{duals} \end{eqnarray} Straightforward calculations using the above expressions show that the two-form (\ref{2form}) is both closed and co-closed, that is, it is a harmonic form. However, the Kerr-Taub-bolt instanton does not admit a hyper-K\"ahler structure, and the two-form given by (\ref{2form}) is not self-dual. Instead, we define the (anti-)self-dual two-form \begin{equation} F=\frac{\lambda}{2}\,(d\xi \pm \,^{\star} d\xi)\,, \label{sdual} \end{equation} where $\,\lambda\,$ is an arbitrary constant related to the dyon charges carried by the fields, and the minus sign refers to the anti-self-dual case. 
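It is worth noting that the combinations in (\ref{sdual}) collapse to perfect squares. Writing $c_{14}$ and $c_{23}$ for the $e^{1}\wedge e^{4}$ and $e^{2}\wedge e^{3}$ components of $d\xi$ in (\ref{2form}), one finds $(c_{14}\pm c_{23})/2=(M\mp N)(r\pm N\pm\alpha\cos\theta)^{2}/\Xi^{2}$. A Python spot-check of this algebra (function name ours) at random points:

```python
import math
import random

def dxi_components(r, theta, M, N, alpha):
    """e1^e4 and e2^e3 components of d(xi), read off from the exterior derivative."""
    c = math.cos(theta)
    Xi = r * r - (N + alpha * c) ** 2
    Delta = r * r - 2 * M * r - alpha * alpha + N * N
    c14 = 2.0 * (M * r * r + (alpha * M * c - 2 * N * r + M * N) * (N + alpha * c)) / Xi**2
    c23 = -2.0 * (N * (Delta + alpha**2 + alpha**2 * c**2)
                  + 2 * alpha * (N * N - M * r) * c) / Xi**2
    return c14, c23

random.seed(2)
M, N, alpha = 2.0, 1.0, 0.3
for _ in range(500):
    r = random.uniform(3.0, 15.0)
    theta = random.uniform(0.0, math.pi)
    c = math.cos(theta)
    Xi = r * r - (N + alpha * c) ** 2
    c14, c23 = dxi_components(r, theta, M, N, alpha)
    # self-dual part collapses to (M - N)(r + N + alpha cos(theta))^2 / Xi^2
    assert abs(0.5 * (c14 + c23) - (M - N) * (r + N + alpha * c) ** 2 / Xi**2) < 1e-10
    # anti-self-dual part collapses to (M + N)(r - N - alpha cos(theta))^2 / Xi^2
    assert abs(0.5 * (c14 - c23) - (M + N) * (r - N - alpha * c) ** 2 / Xi**2) < 1e-10
```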
Taking equations (\ref{2form}) and (\ref{duals}) into account in (\ref{sdual}), we obtain the harmonic self-dual two-form \begin{equation} F=\frac{\lambda (M-N)}{\Xi^2}\,\left(r+N +\alpha \cos\theta\right)^2 \left( e^{1} \,\wedge e^{4} + e^{2} \,\wedge e^{3}\right)\,\,, \label{sd2form} \end{equation} which implies the existence of the potential one-form \begin{equation} A=- \lambda\,(M-N)\,\left[\cos\theta\, d\varphi + \frac{r+N +\alpha \cos\theta}{\Xi} \,\left(dt+ P_{\theta}\, d\varphi\right)\right] \,\,. \label{spotform} \end{equation} After an appropriate re-scaling of the parameter $\,\lambda\,$, which includes the electric coupling constant as well, a string singularity at $\theta =0$ or $\theta =\pi $ in this expression is avoided as usual by demanding the familiar Dirac magnetic-charge quantization rule. From equation (\ref{sdual}) we also find the corresponding anti-self-dual two-form \begin{equation} F=\frac{\lambda (M+N)}{\Xi^2}\,\left(r-N -\alpha \cos\theta\right)^2 \left( e^{1} \,\wedge e^{4} - e^{2} \,\wedge e^{3}\right)\,\,. \label{asd2form} \end{equation} The associated potential one-form is given by \begin{equation} A=- \lambda\,(M+N)\,\left[-\cos\theta\, d\varphi + \frac{r-N -\alpha \cos\theta}{\Xi} \,\left(dt+ P_{\theta}\, d\varphi\right)\right] \,\,. \label{aspotform} \end{equation} For $\,\alpha=0\,$, the above expressions describe self-dual or anti-self-dual Abelian gauge fields living on the space of a Taub-NUT instanton with a horizon \cite{page}. In the absence of the "magnetic" mass $\,(N=0)\,$ we have the gauge fields harbored by the Euclidean-Kerr metric. The latter can also be obtained from the potential one-form in the Kerr-Newman dyon metric after appropriately Euclideanizing it and setting the electric and magnetic charges equal to each other (see \cite{carter}). Next, we shall show that these self-dual and anti-self-dual harmonic two-forms are square integrable on the Kerr-Taub-bolt space. 
This can be shown by explicitly integrating the Maxwell action. For the self-dual two-form we have \begin{equation} \frac{1}{4\pi ^{2}}\int F\wedge F=\frac{\lambda^2}{2\,\pi ^{2}}\, (M-N)\,\int_{0}^{t_0} dt \int_{0}^{\varphi _{0}}d\varphi =\frac{2\lambda ^{2}}{\kappa }\,(M-N)\,\left( 1-\frac{2 \,\alpha }{r_{+}-r_{-} }\right)\,\,, \label{maxact} \end{equation} where $\, t_0=2\pi/\kappa\,$ and $\,\varphi_0=2\,\pi(1-\Omega /\kappa)\,$. Since this integral, which represents the second Chern class $\,C_2\,$ of the $\,U(1)$-bundle, is finite, the self-dual two-form $\,F\,$ is square integrable on the Kerr-Taub-bolt space. For an anti-self-dual $\,F\,$, the factor $(M-N)$ in (\ref{maxact}) is replaced by $(M+N)$. It is also useful to calculate the total magnetic flux $\Phi$, which is obtained by integrating the self-dual two-form $\,F\,$ over a closed $2$-sphere $\Sigma $ of infinite radius; dividing this by $2\pi $ gives the first Chern class with a minus sign: \begin{equation} -C_{1}=\frac{\Phi }{2\pi }=\frac{1}{2\pi }\int_{\Sigma }F = 2\lambda \,(M-N)\,\left( 1- \frac{2 \,\alpha }{r_{+}-r_{-} }\right)\,, \end{equation} which must be equal to an integer $\,n\,$ because of the Dirac quantization condition. We see that the periodicity of the angular coordinate in the Kerr-Taub-bolt metric affects the magnetic-charge quantization rule in a non-linear way: it involves the "electric" mass, the "magnetic" mass and the "rotation" parameter. \vspace{5mm} We would like to thank M. J. Perry for helpful discussions.
\section{Introduction\label{sec:intro}} Given \begin{equation} 0<q<1,\quad a\in\mathbb{C},\label{eq:1.1}\end{equation} we define \cite{Andrews4,Gasper,Ismail2,Koekoek} \begin{equation} (a;q)_{\infty}:=\prod_{k=0}^{\infty}(1-aq^{k}),\label{eq:1.2}\end{equation} and the $q$-Gamma function \begin{equation} \Gamma_{q}(z):=\frac{(q;q)_{\infty}}{(q^{z};q)_{\infty}}(1-q)^{1-z},\quad z\in\mathbb{C}.\label{eq:1.3}\end{equation} The Euler Gamma function $\Gamma(z)$ is defined as \cite{Andrews4,Gasper,Ismail2,Koekoek} \begin{equation} \frac{1}{\Gamma(z)}=z\prod_{k=1}^{\infty}\left(1+\frac{z}{k}\right)\left(1+\frac{1}{k}\right)^{-z},\quad z\in\mathbb{C}.\label{eq:1.4}\end{equation} The Gamma function satisfies the reflection formula \begin{equation} \Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z},\quad z\in\mathbb{C},\label{eq:1.5}\end{equation} and admits the integral representation\begin{equation} \Gamma(z)=\int_{0}^{\infty}e^{-t}t^{z-1}dt,\quad\Re(z)>0.\label{eq:1.6}\end{equation} The Gamma function is a very important function in the theory of special functions, since all the hypergeometric series are defined in terms of the shifted factorials $(a)_{n}$, which are quotients of two Gamma functions \begin{equation} (a)_{n}:=\frac{\Gamma(a+n)}{\Gamma(a)},\quad a\in\mathbb{C},\quad n\in\mathbb{Z}.\label{eq:1.7}\end{equation} Similarly, the $q$-Gamma function is also very important in the theory of basic hypergeometric series, because all the basic hypergeometric series are defined in terms of the $q$-shifted factorials $(a;q)_{n}$, which are scaled quotients of $q$-Gamma functions\begin{equation} (q^{\alpha};q)_{n}=\frac{(1-q)^{n}\Gamma_{q}(\alpha+n)}{\Gamma_{q}(\alpha)},\quad\alpha\in\mathbb{C},\quad n\in\mathbb{Z}.\label{eq:1.8}\end{equation} W. 
Gosper heuristically argued that \cite{Andrews4,Gasper,Ismail2,Koekoek}\begin{equation} \lim_{q\to1}\frac{1}{\Gamma_{q}(z)}=\frac{1}{\Gamma(z)},\quad z\in\mathbb{C}.\label{eq:1.9}\end{equation} For a rigorous proof of the case $z\in\mathbb{R}$, see \cite{Andrews4}. In this short note we are going to derive some asymptotic formulas for $\Gamma_{q}(z)$ as $q\to1$ in two different modes. In the first mode we let $z\to\infty$ and $q\to1$ simultaneously, while in the second mode we let $q\to1$ for a fixed $z$. \begin{lem} \label{lem:1}Given any complex number $a$, assume that\begin{equation} 0<\frac{\left|a\right|q^{n}}{1-q}<\frac{1}{2}\label{eq:1.10}\end{equation} for some positive integer $n$. Then, for any positive integer $K$, we have \begin{equation} \frac{(a;q)_{n}}{(a;q)_{\infty}}=\frac{1}{(aq^{n};q)_{\infty}}=\sum_{k=0}^{K-1}\frac{(aq^{n})^{k}}{(q;q)_{k}}+r_{1}(a,n,K)\label{eq:1.11}\end{equation} with\begin{equation} \left|r_{1}(a,n,K)\right|\le\frac{2\left(\left|a\right|q^{n}\right)^{K}}{(q;q)_{K}},\label{eq:1.12}\end{equation} and \begin{equation} \frac{(a;q)_{\infty}}{(a;q)_{n}}=(aq^{n};q)_{\infty}=\sum_{k=0}^{K-1}\frac{q^{k(k-1)/2}}{(q;q)_{k}}(-aq^{n})^{k}+r_{2}(a,n,K)\label{eq:1.13}\end{equation} with \begin{equation} \left|r_{2}(a,n,K)\right|\le\frac{2q^{K(K-1)/2}(\left|a\right|q^{n})^{K}}{(q;q)_{K}}.\label{eq:1.14}\end{equation} \end{lem} \begin{proof} From the $q$-binomial theorem \cite{Andrews4,Gasper,Ismail2,Koekoek}\begin{equation} \frac{(az;q)_{\infty}}{(z;q)_{\infty}}=\sum_{k=0}^{\infty}\frac{(a;q)_{k}}{(q;q)_{k}}z^{k},\quad a\in\mathbb{C},\quad\left|z\right|<1,\label{eq:1.15}\end{equation} we obtain \begin{eqnarray*} r_{1}(a,n,K) & = & \sum_{k=K}^{\infty}\frac{\left(aq^{n}\right)^{k}}{(q;q)_{k}}=\frac{(aq^{n})^{K}}{(q;q)_{K}}\sum_{k=0}^{\infty}\frac{\left(aq^{n}\right)^{k}}{(q^{K+1};q)_{k}}.\end{eqnarray*} Since \[ (q^{K+1};q)_{k}\ge(1-q)^{k}\] for $k=0,1,\dots$, we have \begin{align*} &
\left|r_{1}(a,n,K)\right|\le\frac{(\left|a\right|q^{n})^{K}}{(q;q)_{K}}\sum_{k=0}^{\infty}\left(\frac{\left|a\right|q^{n}}{1-q}\right)^{k}\le\frac{2(\left|a\right|q^{n})^{K}}{(q;q)_{K}}.\end{align*} Applying a limiting case of \eqref{eq:1.15}, \begin{equation} (z;q)_{\infty}=\sum_{k=0}^{\infty}\frac{q^{k(k-1)/2}}{(q;q)_{k}}(-z)^{k},\quad z\in\mathbb{C},\label{eq:1.16}\end{equation} we get\begin{align*} r_{2}(a,n,K) & =\frac{q^{K(K-1)/2}(-aq^{n})^{K}}{(q;q)_{K}}\sum_{k=0}^{\infty}\frac{(-aq^{n})^{k}q^{k(k+2K-1)/2}}{(q^{K+1};q)_{k}}.\end{align*} From the inequalities\[ \frac{1-q^{k}}{1-q}\ge kq^{k-1},\quad\frac{(q^{K+1};q)_{k}}{(1-q)^{k}}\ge k!q^{k(k+2K-1)/2},\quad\mbox{for }k=0,1,\dots\] we obtain \begin{align*} & \left|r_{2}(a,n,K)\right|\le\frac{q^{K(K-1)/2}(\left|a\right|q^{n})^{K}}{(q;q)_{K}}\sum_{k=0}^{\infty}\frac{1}{k!}\left(\frac{\left|a\right|q^{n}}{1-q}\right)^{k}\\ & \le\frac{q^{K(K-1)/2}(\left|a\right|q^{n})^{K}}{(q;q)_{K}}\exp(1/2)<\frac{2q^{K(K-1)/2}(\left|a\right|q^{n})^{K}}{(q;q)_{K}}.\end{align*} \end{proof} The Jacobi theta functions are defined as \begin{align} \theta_{1}(z;q):=\theta_{1}(v|\tau) & :=-i\sum_{k=-\infty}^{\infty}(-1)^{k}q^{(k+1/2)^{2}}e^{(2k+1)\pi iv},\label{eq:1.17}\\ \theta_{2}(z;q):=\theta_{2}(v|\tau) & :=\sum_{k=-\infty}^{\infty}q^{(k+1/2)^{2}}e^{(2k+1)\pi iv},\label{eq:1.18}\\ \theta_{3}(z;q):=\theta_{3}(v|\tau) & :=\sum_{k=-\infty}^{\infty}q^{k^{2}}e^{2k\pi iv},\label{eq:1.19}\\ \theta_{4}(z;q):=\theta_{4}(v|\tau) & :=\sum_{k=-\infty}^{\infty}(-1)^{k}q^{k^{2}}e^{2k\pi iv},\label{eq:1.20}\end{align} where\begin{equation} z=e^{2\pi iv},\quad q=e^{\pi i\tau},\quad\Im(\tau)>0.\label{eq:1.21}\end{equation} The Jacobi triple product identities are\begin{align} \theta_{1}(v|\tau) & =2q^{1/4}\sin\pi v(q^{2};q^{2})_{\infty}(q^{2}e^{2\pi iv};q^{2})_{\infty}(q^{2}e^{-2\pi iv};q^{2})_{\infty},\label{eq:1.22}\\ \theta_{2}(v|\tau) & =2q^{1/4}\cos\pi v(q^{2};q^{2})_{\infty}(-q^{2}e^{2\pi iv};q^{2})_{\infty}(-q^{2}e^{-2\pi 
iv};q^{2})_{\infty},\label{eq:1.23}\\ \theta_{3}(v|\tau) & =(q^{2};q^{2})_{\infty}(-qe^{2\pi iv};q^{2})_{\infty}(-qe^{-2\pi iv};q^{2})_{\infty},\label{eq:1.24}\\ \theta_{4}(v|\tau) & =(q^{2};q^{2})_{\infty}(qe^{2\pi iv};q^{2})_{\infty}(qe^{-2\pi iv};q^{2})_{\infty}.\label{eq:1.25}\end{align} The theta functions satisfy the transformation formulas \begin{align} \theta_{1}\left(\frac{v}{\tau}\mid-\frac{1}{\tau}\right) & =-i\sqrt{\frac{\tau}{i}}e^{\pi iv^{2}/\tau}\theta_{1}\left(v\mid\tau\right),\label{eq:1.26}\\ \theta_{2}\left(\frac{v}{\tau}\mid-\frac{1}{\tau}\right) & =\sqrt{\frac{\tau}{i}}e^{\pi iv^{2}/\tau}\theta_{4}\left(v\mid\tau\right),\label{eq:1.27}\\ \theta_{3}\left(\frac{v}{\tau}\mid-\frac{1}{\tau}\right) & =\sqrt{\frac{\tau}{i}}e^{\pi iv^{2}/\tau}\theta_{3}\left(v\mid\tau\right),\label{eq:1.28}\\ \theta_{4}\left(\frac{v}{\tau}\mid-\frac{1}{\tau}\right) & =\sqrt{\frac{\tau}{i}}e^{\pi iv^{2}/\tau}\theta_{2}\left(v\mid\tau\right).\label{eq:1.29}\end{align} The Dedekind eta function $\eta(\tau)$ is defined as \cite{Rademarcher}\begin{equation} \eta(\tau):=e^{\pi i\tau/12}\prod_{k=1}^{\infty}(1-e^{2\pi ik\tau}),\label{eq:1.30}\end{equation} or\begin{equation} \eta(\tau)=q^{1/12}(q^{2};q^{2})_{\infty},\quad q=e^{\pi i\tau},\quad\Im(\tau)>0.\label{eq:1.31}\end{equation} It satisfies the transformation formula\begin{equation} \eta\left(-\frac{1}{\tau}\right)=\sqrt{\frac{\tau}{i}}\eta(\tau).\label{eq:1.32}\end{equation} \begin{lem} \label{lem:2}For \begin{equation} 0<a<1,\quad n\in\mathbb{N},\quad\gamma>0,\label{eq:1.33}\end{equation} and\begin{equation} q=e^{-2\pi\gamma^{-1}n^{-a}},\label{eq:1.34}\end{equation} we have\begin{equation} (q;q)_{\infty}=\sqrt{\gamma n^{a}}\exp\left\{ \frac{\pi}{12}\left((\gamma n^{a})^{-1}-\gamma n^{a}\right)\right\} \left\{ 1+\mathcal{O}\left(e^{-2\pi\gamma n^{a}}\right)\right\} ,\label{eq:1.35}\end{equation} and\begin{equation} \frac{1}{(q;q)_{\infty}}=\frac{\exp\left\{ \frac{\pi}{12}\left(\gamma n^{a}-(\gamma n^{a})^{-1}\right)\right\} }{\sqrt{\gamma n^{a}}}\left\{ 
1+\mathcal{O}\left(e^{-2\pi\gamma n^{a}}\right)\right\} \label{eq:1.36}\end{equation} as $n\to\infty$. \end{lem} \begin{proof} From formulas \eqref{eq:1.30}, \eqref{eq:1.31} and \eqref{eq:1.32} we get \begin{align*} & (q;q)_{\infty}=\exp\left(\pi\gamma^{-1}n^{-a}/12\right)\eta\left(\gamma^{-1}n^{-a}i\right)\\ & =\sqrt{\gamma n^{a}}\exp\left(\pi\gamma^{-1}n^{-a}/12\right)\eta(\gamma n^{a}i)\\ & =\sqrt{\gamma n^{a}}\exp\left(\pi\gamma^{-1}n^{-a}/12-\pi\gamma n^{a}/12\right)\prod_{k=1}^{\infty}(1-e^{-2\pi\gamma kn^{a}})\\ & =\sqrt{\gamma n^{a}}\exp\left(\pi\gamma^{-1}n^{-a}/12-\pi\gamma n^{a}/12\right)\left\{ 1+\mathcal{O}\left(e^{-2\pi\gamma n^{a}}\right)\right\} ,\end{align*} and\[ \frac{1}{(q;q)_{\infty}}=\frac{\exp\left(\pi\gamma n^{a}/12-\pi\gamma^{-1}n^{-a}/12\right)}{\sqrt{\gamma n^{a}}}\left\{ 1+\mathcal{O}\left(e^{-2\pi\gamma n^{a}}\right)\right\} \] as $n\to\infty$. \end{proof} \section{Main Results\label{sec:results}} For $\Re(z)>-\frac{1}{2}$, we write\begin{equation} \frac{\Gamma_{q}(z+1/2)}{(q;q)_{\infty}(1-q)^{1/2-z}}=\frac{1}{(q^{z+1/2};q)_{\infty}},\label{eq:2.1}\end{equation} then,\begin{equation} \Gamma_{q}(z+\frac{1}{2})=\frac{(q;q)_{\infty}}{(1-q)^{z-1/2}}\sum_{k=0}^{\infty}\frac{q^{k(z+1/2)}}{(q;q)_{k}}.\label{eq:2.2}\end{equation} Formula \eqref{eq:2.1} implies\begin{equation} \Gamma_{q}(\frac{1}{2}+z)\Gamma_{q}(\frac{1}{2}-z)=\frac{(1-q)(q;q)_{\infty}^{3}}{(q,q^{1/2-z},q^{1/2+z};q)_{\infty}},\label{eq:2.3}\end{equation} or\begin{equation} \Gamma_{q}(\frac{1}{2}+z)\Gamma_{q}(\frac{1}{2}-z)=\frac{(1-q)(q;q)_{\infty}^{3}}{\theta_{4}(q^{z};q^{1/2})}.\label{eq:2.4}\end{equation} Thus, \begin{equation} \Gamma_{q}(\frac{1}{2}-z)=\frac{(q;q)_{\infty}^{2}(1-q)^{z+1/2}}{\theta_{4}(q^{z};q^{1/2})}(q^{z+1/2};q)_{\infty}\label{eq:2.5}\end{equation} or\begin{equation} \Gamma_{q}(\frac{1}{2}-z)=\frac{(q;q)_{\infty}^{2}(1-q)^{z+1/2}}{\theta_{4}(q^{z};q^{1/2})}\sum_{k=0}^{\infty}\frac{q^{k(k-1)/2}(-q^{z+1/2})^{k}}{(q;q)_{k}}\label{eq:2.6}\end{equation} for 
$\Re(z)>-\frac{1}{2}$. \subsection{Case $q\to1$ and $z\to\infty$:} \begin{thm} \label{thm:2.3}For \begin{equation} 0<a<\frac{1}{2},\quad n\in\mathbb{N},\quad u\in\mathbb{R},\quad q=\exp(-2n^{-a}\pi),\label{eq:2.7}\end{equation} we have\begin{align} \frac{1}{\Gamma_{q}\left(\frac{1}{2}-n-n^{a}u\right)} & =\frac{2\exp\left(\pi n^{-a}(n^{a}u+n)^{2}\right)\cos\pi(n^{a}u+n)\left\{ 1+\mathcal{O}\left(e^{-2\pi n^{a}}\right)\right\} }{\sqrt{n^{a}}\exp\left(\pi n^{a}/12+\pi n^{-a}/6\right)\left(1-\exp(-2\pi n^{-a})\right)^{n+n^{a}u+1/2}},\label{eq:2.8}\end{align} and\begin{align} \frac{1}{\Gamma_{q}\left(\frac{1}{2}+n+n^{a}u\right)} & =\frac{\exp(\pi n^{a}/12-\pi n^{-a}/12)\left\{ 1+\mathcal{O}\left(e^{-2\pi n^{a}}\right)\right\} }{\sqrt{n^{a}}(1-e^{-2\pi n^{-a}})^{1/2-n-n^{a}u}}\label{eq:2.9}\end{align} as $n\to\infty$, and the big-O term is uniform with respect to $u$ for $u\in[0,\infty)$. \end{thm} \subsection{Case $q\to1$ and $z$ fixed:} \begin{thm} \label{thm:2.4}Assume that \begin{equation} q=e^{-2\pi\tau},\quad\tau>0,\quad x\in\mathbb{R}.\label{eq:2.10}\end{equation} If \begin{equation} x>-\frac{1}{2},\quad q>1-\exp(-2^{2x+1}),\label{eq:2.11}\end{equation} then\begin{equation} \Gamma_{q}(\frac{1}{2}+x)=\Gamma(x+1/2)\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} ,\label{eq:2.12}\end{equation} and\begin{equation} \frac{1}{\Gamma_{q}(\frac{1}{2}-x)}=\frac{\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} }{\Gamma(\frac{1}{2}-x)},\label{eq:2.13}\end{equation} where the implicit constants of the big-O terms are independent of $x$ under the condition \eqref{eq:2.11}. \end{thm} \section{Proofs\label{sec:proofs}} \subsection{Proof for Theorem \ref{thm:2.3}} \begin{proof} We first observe that\[ \frac{1}{(q;q)_{\infty}}=\frac{\exp\left(\pi n^{a}/12-\pi n^{-a}/12\right)}{\sqrt{n^{a}}}\left\{ 1+\mathcal{O}\left(e^{-2\pi n^{a}}\right)\right\} ,\] and \begin{align*} & \frac{1}{\Gamma_{q}\left(1/2-n-n^{a}u\right)}=\frac{(q^{1/2-n}e^{2\pi 
u};q)_{\infty}}{(q;q)_{\infty}(1-q)^{n+n^{a}u+1/2}}\\ & =\frac{(q,q^{1/2}e^{-2\pi u},q^{1/2}e^{2\pi u};q)_{\infty}q^{-n^{2}/2}e^{2\pi nu}}{(-1)^{n}(1-q)^{n+n^{a}u+1/2}(q,q,q^{n+1/2}e^{-2\pi u};q)_{\infty}}\end{align*} as $n\to\infty$. Then we have\begin{align*} \frac{1}{(q,q,q^{n+1/2}e^{-2\pi u};q)_{\infty}} & =n^{-a}\exp\left(\pi n^{a}/6-\pi n^{-a}/6\right)\left\{ 1+\mathcal{O}\left(e^{-2\pi n^{a}}\right)\right\} ,\end{align*} and \begin{align*} & (q,q^{1/2}e^{-2\pi u},q^{1/2}e^{2\pi u};q)_{\infty}=\theta_{4}(ui\mid n^{-a}i)=n^{a/2}e^{\pi n^{a}u^{2}}\theta_{2}(n^{a}u\mid n^{a}i)\\ & =2n^{a/2}\exp\pi n^{a}(u^{2}-1/4)\cos(n^{a}u\pi)\left\{ 1+\mathcal{O}\left(e^{-2\pi n^{a}}\right)\right\} \end{align*} as $n\to\infty$. Thus,\begin{align*} \frac{1}{\Gamma_{q}\left(1/2-n-n^{a}u\right)} & =\frac{2\exp\pi\left(n^{-a}(n^{a}u+n)^{2}\right)\cos(\pi n^{a}u+n\pi)\left\{ 1+\mathcal{O}\left(e^{-2\pi n^{a}}\right)\right\} }{n^{a/2}\left(1-e^{-2\pi n^{-a}}\right)^{n+n^{a}u+1/2}\exp\left(\pi n^{a}/12+\pi n^{-a}/6\right)}\end{align*} as $n\to\infty$, and it is clear that the big-O term is uniform with respect to $u\ge0$. Similarly, formula \eqref{eq:2.9} follows from Lemma \ref{lem:1} and \ref{lem:2}. 
\end{proof} \subsection{Proof for Theorem \ref{thm:2.4}} \begin{proof} From \eqref{eq:1.27}, \eqref{eq:1.32} and\[ \Gamma_{q}(\frac{1}{2}+x)\Gamma_{q}(\frac{1}{2}-x)=\frac{\eta(\tau i)^{3}e^{\pi\tau/4}(1-e^{-2\pi\tau})}{\theta_{4}(x\tau i\vert\tau i)},\] we get\[ \Gamma_{q}(\frac{1}{2}+x)=\frac{(1-e^{-2\pi\tau})\exp\left(-\pi\tau(x^{2}-1/4)\right)}{\tau\Gamma_{q}(\frac{1}{2}-x)}\frac{\eta^{3}(i/\tau)}{\theta_{2}(x\vert i/\tau)},\] and \eqref{eq:1.8} and \eqref{eq:1.15} imply that\begin{align*} & \Gamma_{q}(\frac{1}{2}+x)=\frac{(1-e^{-2\pi\tau})\exp\left(-\pi\tau(x^{2}-1/4)\right)}{2\tau\cos(\pi x)\Gamma_{q}(\frac{1}{2}-x)}\left\{ 1+\mathcal{O}\left(\exp(-\frac{2\pi}{\tau})\right)\right\} \\ & =\frac{\pi\exp\left(-\pi\tau x^{2}\right)\left\{ 1+\mathcal{O}\left(\tau\right)\right\} }{\cos(\pi x)\Gamma_{q}(\frac{1}{2}-x)}\\ & =\frac{\pi\exp\left(-\pi\tau\log^{2}(1-q)\frac{x^{2}}{\log^{2}(1-q)}\right)\left\{ 1+\mathcal{O}\left(\tau\right)\right\} }{\cos(\pi x)\Gamma_{q}(\frac{1}{2}-x)}\end{align*} as $\tau\to0^{+}$, and the big-O term is independent of $x$. The condition \eqref{eq:2.11} implies that\[ 0<\frac{x^{2}}{\log^{2}(1-q)}<1,\] then,\[ \Gamma_{q}(\frac{1}{2}+x)=\frac{\pi\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} }{\cos(\pi x)\Gamma_{q}(\frac{1}{2}-x)}\] as $q\to1$, and the implicit constant above is independent of $x$. 
It is well-known that a $q$-analogue of \eqref{eq:1.6} is \cite{Andrews4,Gasper,Ismail2,Koekoek} \[ \int_{0}^{\infty}\frac{t^{x-1/2}}{(-t;q)_{\infty}}dt=\frac{\pi}{\cos\pi x}\frac{(q^{1/2-x};q)_{\infty}}{(q;q)_{\infty}},\quad x>-\frac{1}{2},\] or\[ \int_{0}^{\infty}\frac{t^{x-1/2}dt}{(-(1-q)t;q)_{\infty}}=\frac{\pi}{\cos(\pi x)\Gamma_{q}(\frac{1}{2}-x)},\quad x>-\frac{1}{2}.\] Consequently,\[ \Gamma_{q}(\frac{1}{2}+x)=\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} \int_{0}^{\infty}\frac{t^{x-1/2}dt}{(-(1-q)t;q)_{\infty}}\] as $\tau\to0^{+}$ and the implicit constant of the big-O term here is independent of $x$ with $x>-\frac{1}{2}$. Write \[ \int_{0}^{\infty}\frac{t^{x-1/2}dt}{(-(1-q)t;q)_{\infty}}:=I_{1}+I_{2},\] where\[ I_{1}:=\int_{0}^{\log(1-q)^{-2}}\frac{t^{x-1/2}dt}{(-(1-q)t;q)_{\infty}},\] and\[ I_{2}:=\int_{\log(1-q)^{-2}}^{\infty}\frac{t^{x-1/2}dt}{(-(1-q)t;q)_{\infty}}.\] In $I_{1}$ we have\begin{eqnarray*} \log(-(1-q)t;q)_{\infty} & = & \sum_{k=0}^{\infty}\log(1+(1-q)tq^{k})\\ & = & \sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n+1}\left((1-q)tq^{k}\right)^{n+1}\\ & = & \sum_{n=0}^{\infty}\frac{(-1)^{n}t^{n+1}}{n+1}\frac{(1-q)^{n+1}}{1-q^{n+1}}\\ & = & t+r(t),\end{eqnarray*} where\[ r(t):=\sum_{n=1}^{\infty}\frac{(-1)^{n}t^{n+1}}{n+1}\frac{(1-q)^{n+1}}{1-q^{n+1}}.\] The condition \eqref{eq:2.11} implies that\[ 0<(1-q)\log(1-q)^{-1}<e^{-1},\] then\begin{eqnarray*} |r(t)| & \le & c(1-q)\log^{2}(1-q),\end{eqnarray*} with\[ c=4\sum_{n=0}^{\infty}\frac{\left[2(1-q)\log(1-q)^{-1}\right]^{n}}{n+2}<4\sum_{n=0}^{\infty}\frac{(2e^{-1})^{n}}{n+2}.\] Therefore,\[ I_{1}=\int_{0}^{2\log(1-q)^{-1}}e^{-t}t^{x-1/2}dt\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} \] as $\tau\to0^{+}$, and the implicit constant of the big-O term is independent of $x$. 
Since \begin{align*} \int_{2\log(1-q)^{-1}}^{\infty}e^{-t}t^{x-1/2}dt & \le e^{-\log(1-q)^{-1}}\int_{2\log(1-q)^{-1}}^{\infty}e^{-t/2}t^{x-1/2}dt<(1-q)\Gamma(x+1/2)2^{x+1/2},\end{align*} clearly, \[ \int_{2\log(1-q)^{-1}}^{\infty}e^{-t}t^{x-1/2}dt=\Gamma(x+1/2)\mathcal{O}\left((1-q)\log^{2}(1-q)\right).\] Thus,\[ I_{1}=\Gamma(x+1/2)\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} ,\] as $\tau\to0^{+}$, and the implicit constant of the big-O term is independent of $x$ under the condition \eqref{eq:2.11}. Recall that \[ (-(1-q)t;q)_{\infty}=\sum_{n=0}^{\infty}q^{n(n-1)/2}t^{n}\frac{(1-q)^{n}}{(q;q)_{n}}>q^{n(n-1)/2}t^{n}\frac{(1-q)^{n}}{(q;q)_{n}},\] for $n=\left\lfloor -\log(1-q)\right\rfloor \ge2^{2x+1}>2x+1$. Then,\begin{align*} I_{2} & \le\frac{q^{n(1-n)/2}(q;q)_{n}(-\log(1-q))^{2x-2n+1}}{(1-q)^{n}(n-x-1/2)}\\ & <\frac{\Gamma(x+1/2)n!q^{n(1-n)/2}(-\log(1-q))^{2x-2n+1}}{\Gamma(x+3/2)}\\ & <\Gamma(x+1/2)n!q^{n(1-n)/2}(-\log(1-q))^{2x-2n+1}.\end{align*} It is clear that\[ q^{n(1-n)/2}=\mathcal{O}(1)\] as $\tau\to0^{+}$. From the Stirling formula\[ n!=\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\left\{ 1+\mathcal{O}\left(\frac{1}{n}\right)\right\} \] as $n\to\infty$, we have\[ I_{2}=\Gamma(x+\frac{1}{2})\mathcal{O}\left((1-q)\log^{1/2}(1-q)^{-1}\right)\] as $\tau\to0^{+}$ and the implicit constant of the big-O term is independent of $x$ under the condition \eqref{eq:2.11}. Therefore, under the condition \eqref{eq:2.11},\[ \int_{0}^{\infty}\frac{t^{x-1/2}dt}{(-(1-q)t;q)_{\infty}}=\Gamma(x+1/2)\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} \] as $\tau\to0^{+}$ and the implicit constant of the big-O term is independent of $x>-\frac{1}{2}$. 
Hence we have proved that, under the condition \eqref{eq:2.11},\[ \Gamma_{q}(\frac{1}{2}+x)=\Gamma(x+\frac{1}{2})\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} ,\quad x>-1/2.\] Then,\[ \Gamma_{q}(\frac{1}{2}-x)=\frac{\pi\sec\pi x}{\Gamma(x+1/2)}\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} ,\] that is, by the reflection formula $\Gamma(\frac{1}{2}+x)\Gamma(\frac{1}{2}-x)=\pi\sec(\pi x)$,\[ \frac{1}{\Gamma_{q}(\frac{1}{2}-x)}=\frac{\left\{ 1+\mathcal{O}\left((1-q)\log^{2}(1-q)\right)\right\} }{\Gamma(\frac{1}{2}-x)},\] where the implicit constant of the big-O term is independent of $x>-\frac{1}{2}$. \end{proof}
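The limiting relation just proved can be illustrated numerically (a sanity check only, not part of the argument): compute $\Gamma_{q}$ directly from its product definition $\Gamma_{q}(z)=(1-q)^{1-z}\prod_{n\ge0}(1-q^{n+1})/(1-q^{n+z})$ and compare with the classical gamma function as $q\to1^{-}$. The truncation level and the test point $x=0.3$ are ad hoc choices.

```python
import math

def gamma_q(z, q, nterms=200000):
    """q-gamma function from its product definition,
    Gamma_q(z) = (1-q)^(1-z) * prod_{n>=0} (1-q^(n+1)) / (1-q^(n+z)),
    truncated after nterms factors (valid for 0 < q < 1)."""
    prod = 1.0
    qn1 = q          # running value of q^(n+1)
    qnz = q ** z     # running value of q^(n+z)
    for _ in range(nterms):
        prod *= (1.0 - qn1) / (1.0 - qnz)
        qn1 *= q
        qnz *= q
    return (1.0 - q) ** (1.0 - z) * prod

# Gamma_q(1/2 + x) should approach Gamma(1/2 + x) as q -> 1^-.
x = 0.3
exact = math.gamma(0.5 + x)
errs = [abs(gamma_q(0.5 + x, q) - exact) / exact for q in (0.9, 0.99, 0.999)]
print(errs)
```

The relative deviation shrinks as $q\to1^{-}$, consistent with the error term obtained above.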
\section*{Funding Information} DF and AF acknowledge support from the UK Engineering \& Physical Sciences Research Council (EPSRC Grant Nos. EP/M01326X/1, EP/M006514/1, and EP/N002962/1).
\section{Introduction} The recent progress in heavy-ion spectroscopy provides good prospects for testing quantum electrodynamics in the regime of strong electric fields. In \cite{Marrs,Stohlker} the two-electron contribution to the ground-state energy of some heliumlike ions was measured directly by comparing the ionization energies of heliumlike and hydrogenlike ions. In such an experiment the dominating one-electron contributions are completely eliminated. Though the accuracy of the experimental results is not high enough at present, it is expected \cite{Stohlker} that the experimental accuracy will be improved by up to an order of magnitude in the near future. This will allow testing of the QED effects of second order in $\alpha$. In this paper we calculate the ground state two-electron self-energy correction of second order in $\alpha$ in the range $Z=20-100$. Calculations of this correction were previously performed for some ions for the case of a point nucleus by Yerokhin and Shabaev \cite{Yerokhin95} and for an extended nucleus by Persson {\it et al.} \cite{Persson96, Persson97}. In contrast to our previous calculation of this correction, the fully covariant scheme, based on an expansion of the Dirac-Coulomb propagator in terms of interactions with the external potential \cite{Snyderman,Schneider_PhD}, is used in the present work. This technique was already applied by the authors to calculate the self-energy correction to the hyperfine splitting in hydrogenlike and lithiumlike ions \cite{Yerokhin97,Shabaev97}. The paper is organized as follows. In Sec. 2 we give a brief outline of the calculation of the two-electron self-energy contribution. In Sec. 3 we summarize all the two-electron contributions to the ground state energy of heliumlike ions. The relativistic units ($\hbar = c =1$) are used in the paper. \section{Self-energy contribution} The two-electron self-energy contribution is represented by the Feynman diagrams in Fig.1. 
The formal expressions for these diagrams can easily be derived by the two-time Green function method \cite{Shabaev90}. Such a derivation was discussed in detail in \cite {Yerokhin95}. The diagrams in Fig.1a are conveniently divided into irreducible and reducible parts. The reducible part is the one in which the intermediate state energy (between the self-energy loop and the electron-electron interaction line) coincides with the initial state energy. The irreducible part is the remaining one. The contribution of the irreducible part can be written in the same form as the first order self-energy \begin{eqnarray} \label{1} \Delta E_{\rm irred} = 2\Bigl[ \langle \xi |\Sigma_{\rm R}(\varepsilon_a)|a\rangle + \langle a|\Sigma_{\rm R}(\varepsilon_a)|\xi \rangle \Bigr]\, , \end{eqnarray} where $\Sigma_{\rm R}(\varepsilon)$ is the regularized self-energy operator, $\varepsilon_a$ is the energy of the initial state $a$, and $|\xi \rangle$ is a perturbed wave function defined by \begin{eqnarray} \label{2} |\xi \rangle = \sum_{\varepsilon_n \neq \varepsilon_a} \frac{|n\rangle \left[ \langle nb| I(0)|ab\rangle - \langle nb|I(0)|ba\rangle \right]} {\varepsilon_a - \varepsilon_n} \, . \end{eqnarray} Here $I(\omega )$ is the operator of the electron-electron interaction. The calculation of the irreducible part is carried out using the scheme suggested by Snyderman \cite{Snyderman} for the first order self-energy contribution. The reducible part is grouped with the vertex part (Fig.1b). 
For the sum of these terms the following formal expression is obtained \begin{eqnarray} \label{3} \Delta E_{\rm vr} & = &{2\alpha} ^2 \sum_P (-1)^P \frac{i}{2\pi} \int_{-\infty}^{\infty} d\omega \int d{\bf x} d{\bf y} d{\bf z} \frac{e^{i|\omega ||{\bf x} -{\bf y} |}}{|{\bf x} -{\bf y} |} \nonumber \\ && \times \Biggl[ \psi ^{\dag}_{Pa}({\bf x} ) \alpha _{\nu} \int d{\bf z} _1 \psi ^{\dag}_{Pb}({\bf z} _1) \frac{\alpha _{\mu}}{|{\bf z} -{\bf z} _1|} \psi _{b}({\bf z} _1) \nonumber \\ && \times G(\varepsilon_a-\omega ,{\bf x} ,{\bf z} ) \alpha ^{\mu} G(\varepsilon_a-\omega ,{\bf z} ,{\bf y} ) \alpha ^{\nu} \psi _{a}({\bf y} ) \nonumber \\ && \mbox{} - \langle PaPb|\frac{1-{\mbox{\boldmath$\alpha$}}_1 {\mbox{\boldmath$\alpha$}}_2}{r_{12}} |ab\rangle \nonumber \\ && \times \psi ^{\dag}_{a}({\bf x} ) \alpha _{\nu} G(\varepsilon_a-\omega ,{\bf x} ,{\bf z} ) G(\varepsilon_a-\omega ,{\bf z} ,{\bf y} ) \alpha ^{\nu} \psi _{a}({\bf y} )\Biggr] \, . \end{eqnarray} Here the first term corresponds to the vertex part, and the second one corresponds to the reducible part. $G(\varepsilon ,{\bf x} ,{\bf z} )$ is the Coulomb Green function, $\alpha$ is the fine structure constant, $\alpha^{\mu}=(1,{\mbox{\boldmath$\alpha$}})$, ${\mbox{\boldmath$\alpha$}}$ are the Dirac matrices, $a$ and $b$ are the $1s$ states with spin projection $m=\pm \frac12$, and $P$ is the permutation operator. According to the Ward identity the counterterms for the vertex and reducible parts cancel each other, and, so, the sum of these terms regularized in the same covariant way is ultraviolet finite. To cancel the ultraviolet divergences analytically we divide $\Delta E_{\rm vr}$ into two parts $\Delta E_{\rm vr} = \Delta E_{\rm vr}^{(0)}+ \Delta E_{\rm vr}^ {\rm many}$. The first term is $\Delta E_{\rm vr}$ with both the bound electron propagators replaced by the free propagators. 
It does not contain the Coulomb Green functions and can be evaluated in the momentum representation, where all the ultraviolet divergences are explicitly cancelled using a standard covariant regularization procedure. The remainder $\Delta E_{\rm vr}^{\rm many}$ does not contain ultraviolet divergent terms and is calculated in the coordinate space. The infrared divergent terms are handled by introducing a small photon mass $\mu$. After these terms are separated and cancelled analytically, the limit $\mu \to 0$ is taken. In practice, the calculation of the self-energy contribution is performed using the shell model of the nuclear charge distribution. Since the finite nuclear size effect is small enough even for high $Z$ (it constitutes about 1.5 percent for uranium), an error due to incompleteness of such a model is negligible. The Green function for the case of the shell nucleus in the form presented in \cite{Gyulassy} is used in the calculation. To calculate the part of $\Delta E_{\rm irred}$ with two or more external potentials, we subtract from the Coulomb-Dirac Green function the first two terms of its potential expansion numerically. To obtain the second term of the expansion it is necessary to evaluate a derivative of the Coulomb Green function with respect to $Z$ at the point $Z=0$. We handle it using algorithms suggested in \cite{Manakov}. The numerical evaluation of $\Delta E_{\rm vr}^{\rm many}$ is the most time consuming part of the calculation. The energy integration is carried out using Gaussian quadratures after rotating the integration contour onto the imaginary axis. To achieve the desired precision it is sufficient to calculate 12-15 terms of the partial wave expansion. The remainder is evaluated by fitting the partial wave contributions to a polynomial in $\frac1l$. 
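The tail estimate just described, fitting partial-wave data to a polynomial in $1/l$ and extrapolating to $1/l\to0$, can be sketched as follows. The series below is a synthetic stand-in with terms decaying like $1/l^{2}$ (exact limit $\pi^{2}/6$), not the actual partial-wave data of the paper; the fit degree and fit window are illustrative choices.

```python
import numpy as np

def extrapolate(ls, S, degree=3):
    """Extrapolate partial sums S_l (given at angular momenta ls) to
    l -> infinity by least-squares fitting a polynomial in 1/l and
    evaluating the fit at 1/l = 0."""
    coeffs = np.polyfit(1.0 / ls, S, degree)
    return np.polyval(coeffs, 0.0)

# Synthetic series: sum the first 15 terms (the number quoted in the
# text), then extrapolate the tail using the last ten partial sums.
l = np.arange(1, 16, dtype=float)
S = np.cumsum(1.0 / l ** 2)
S_inf = np.pi ** 2 / 6  # exact sum of 1/l^2
est = extrapolate(l[5:], S[5:], degree=3)
print(S[-1], est, S_inf)
```

The extrapolated value is far closer to the exact limit than the raw truncated sum, which is the point of the procedure.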
A contribution arising from the intermediate electron states which have the same energy as the initial state is calculated separately using the B-spline method for the Dirac equation \cite{Johnson}. The same method is used for the numerical evaluation of the perturbed wave function $|\xi \rangle $ in equation (1). Table 1 gives the numerical results for the two-electron self-energy contribution to the ground state energy of heliumlike ions expressed in terms of the function $F(\alpha Z)$ defined by \begin{eqnarray}\label{4} \Delta E = \alpha^2 (\alpha Z)^3 F(\alpha Z)\,mc^{2}\,. \end{eqnarray} To the lowest order in $\alpha Z$, $F=1.346\ln{Z}-5.251$ (see \cite{Yerokhin95} and references therein). The results for a point nucleus and an extended nucleus are listed in the third and fourth columns of the table, respectively. In the second column the values of the root-mean-square (rms) nuclear charge radii used in the calculation are given \cite{Fricke,Johnson85}. In the fifth column the results for an extended nucleus expressed in eV are given, to be compared with the ones of Persson {\it et al.} \cite{Persson96} listed in the last column of the table. A comparison of the present results for a point nucleus with the ones from \cite{Yerokhin95} reveals a discrepancy for the contribution which corresponds to the Breit part of the electron-electron interaction. This discrepancy results from a small spurious term arising in the non-covariant regularization procedure used in \cite{Yerokhin95}. \section{The two-electron part of the ground state energy} In Table 2 we summarize all the two-electron contributions to the ground state energy of heliumlike ions. In the second column of the table the energy contribution due to one-photon exchange is given. Its calculation is carried out for the Fermi model of the nuclear charge distribution \begin{eqnarray} \rho(r) = \frac{N}{1+\exp{((r-c)/a)}}\, \end{eqnarray} with the rms charge radii listed in Table 1. 
Following \cite{Fricke}, the parameter $a$ is chosen to be $a = \frac{2.3}{4\ln 3}\,$ fm. The parameters $c$ and $N$ are, to good precision, given by (see, e.g., \cite{Shabaev93}) \begin{eqnarray} &c = \frac1{\sqrt{3}}\left[ \Bigl (4\pi^4a^4-10\langle r^2\rangle \pi^2a^2+ \frac{25}{4} \langle r^2\rangle^2\Bigr )^{\frac12} -5\pi^2a^2+ \frac52\langle r^2\rangle \right]^{\frac12} \,, \\ &N=\frac{3}{4\pi c^{3}}\Bigl(1+\frac{\pi^{2}a^{2}}{c^{2}}\Bigr)^{-1}\,. \end{eqnarray} Except for $Z$=83, 92, the uncertainty of this correction is obtained by a one percent variation of the rms radii. In the case $Z$=92 ($\langle r^{2}\rangle^{1/2}=5.860(2)$ fm \cite{Zumbro84}), the uncertainty of this correction is estimated by taking the difference between the corrections obtained with the Fermi model and the homogeneously charged sphere model of the same rms radius. For $Z=83$, the uncertainty comes from both a variation of the rms radius by 0.020\,fm (it corresponds to a discrepancy between the measured rms values \cite{Fricke}) and the difference between the Fermi model and the homogeneously charged sphere model. The energy contribution due to two-photon exchange is divided into two parts. The first one ("non-QED contribution") includes the non-relativistic contribution and the lowest order ($\sim (\alpha Z)^{2}$) relativistic correction, which can be derived from the Breit equation. This is given by the first two terms of the $\alpha Z$-expansion \cite{Sanders,Palchikov,Drake} \begin{eqnarray} \Delta E_{\rm non-QED} = \alpha^{2}[-0.15766638 - 0.6356 (\alpha Z)^2]mc^{2} \end{eqnarray} and is presented in the third column of Table 2. The second part, which we refer to as the "QED contribution", is the residual and is given in the fourth column of the table. The data for the two-photon contribution for all $Z$, except for $Z=92$, are taken from \cite{Blundell}, with interpolation where needed. For $Z=92$, data from \cite{Lindgren} are taken. 
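The closed-form Fermi-model parameters $c$ and $N$ quoted above can be checked numerically: for a given rms radius, the resulting distribution should be normalized to unity and reproduce $\langle r^{2}\rangle$. A sketch (the rms value 5.860 fm for $Z=92$ is taken from the text; the integration cutoff and grid are ad hoc choices):

```python
import math

def fermi_params(rms, a):
    """Parameters c and N of rho(r) = N / (1 + exp((r - c)/a)) for a
    given rms charge radius, from the approximate closed forms above."""
    r2 = rms ** 2
    c2 = ((4 * math.pi**4 * a**4 - 10 * r2 * math.pi**2 * a**2
           + 6.25 * r2**2) ** 0.5 - 5 * math.pi**2 * a**2 + 2.5 * r2) / 3.0
    c = math.sqrt(c2)
    N = 3.0 / (4 * math.pi * c**3) / (1 + math.pi**2 * a**2 / c**2)
    return c, N

a = 2.3 / (4 * math.log(3))      # skin parameter, fm
c, N = fermi_params(5.860, a)    # Z = 92 rms radius, fm

# Check normalization and <r^2> by direct midpoint-rule integration.
dr = 1e-3
norm = r2mom = 0.0
r = 0.5 * dr
while r < 30.0:
    rho = N / (1 + math.exp((r - c) / a))
    norm += 4 * math.pi * r**2 * rho * dr
    r2mom += 4 * math.pi * r**4 * rho * dr
    r += dr
print(c, norm, math.sqrt(r2mom / norm))
```

The recovered normalization and rms radius agree with the inputs to the stated precision of the closed-form expressions.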
In the fifth column of the table the results of the present calculation of the two-electron self-energy contribution are given. The two-electron vacuum polarization contribution taken from \cite{Artemyev} is listed in the sixth column. In the seventh column the "non-QED part" of the energy correction due to exchange of three or more photons is given. This correction is evaluated by summing the $Z^{-1}$ expansion terms for the ground state energy of heliumlike ions beginning from $Z^{-3}$. The coefficients of such an expansion are taken to zeroth order in $\alpha Z$ from \cite{Sanders} and to second order in $\alpha Z$ from \cite{Drake}. The QED correction due to exchange of three or more photons has not yet been calculated. We assume that the uncertainty due to omitting this correction is of the order of the total second-order QED correction multiplied by a factor of $Z^{-1}$. It is given in the eighth column of the table. The two-electron nuclear recoil correction is estimated by reducing the one-photon exchange contribution by the factor $(1-m/M)$. Such an estimate corresponds to the non-relativistic treatment of this effect and takes into account that the mass-polarization correction is negligible for the $(1s)^2$ state \cite{Drake}. This correction and its uncertainty, which is taken to be 100\% for high $Z$, are included into the total two-electron contribution. The two-electron nuclear polarization effect is expected to be negligible for the ground state of heliumlike ions. In the last column the total two-electron part of the ground state energy of heliumlike ions is given. 
In Table 3 our results are compared with the experimental data \cite{Marrs, Stohlker} and the results of previous calculations based on the unified method \cite{Drake}, the all-order relativistic many body perturbation theory (RMBPT) \cite{Plante}, the multiconfiguration Dirac-Fock treatment \cite{Indelicato}, and RMBPT with the complete treatment of the two-electron QED correction \cite{Persson96,Persson97}. Data in the third column of the table are taken from \cite{Persson96} for $Z=54, 92$ and from \cite{Persson97} for other $Z$. The one-electron contribution from \cite{Johnson85} is subtracted from the total ionization energies presented in \cite{Plante, Drake} to obtain the two-electron part. In Table 4 we present the theoretical contributions to the ground state energy of $^{238}U^{90+}$, based on currently available theory. The uncertainty of the one-electron Dirac-Coulomb value comes from the uncertainty of the Rydberg constant (we use $hcR_{\infty}$=13.6056981(40) eV, $\alpha$=1/137.0359895(61)). The one-electron nuclear size correction for the Fermi distribution with $\langle r^2 \rangle ^{1/2} = 5.860 \,$fm gives $397.62(76)\,$ eV. The uncertainty of this correction is estimated by taking the difference between the corrections obtained with the Fermi model and the homogeneously charged sphere model of the same rms radius \cite{Franosch}. The nuclear recoil correction was calculated to all orders in $\alpha Z$ by Artemyev {\it et al.} \cite{Artemyev2}. The uncertainty of this correction is chosen to include a deviation from the point nucleus approximation used in \cite{Artemyev2}. The one-electron nuclear polarization effect was evaluated by Plunien and Soff \cite{Plunien} and by Nefiodov {\it et al.} \cite{Nefiodov}. The values of the first order self-energy and vacuum polarization corrections are taken from \cite{Mohr} and \cite{Persson93}, respectively. The two-electron corrections are quoted from Table 2. 
The higher order one-electron QED corrections are omitted in this summary since they have not yet been calculated completely. We expect that they may contribute at the level of several electron volts. \section*{Acknowledgments} Valuable conversations with Thomas St\"ohlker are gratefully acknowledged. The research described in this publication was made possible in part by Grant No. 95-02-05571a from the Russian Foundation for Basic Research.
\section{Introduction}\label{sec:Intro} Effective field theory (EFT) is a powerful tool for studying the dynamics of dense, strongly coupled matter, where a microscopic description is intractable. In this approach, one writes down an effective action principle for the degrees of freedom that remain at low energies and matches the predictions of this theory against experiment order by order in a momentum expansion. The true predictive power of an EFT lies in the symmetries it possesses, which constrain the space of action principles, so that in practice experimental results may be fit to relatively few parameters. An EFT for superconductivity was proposed a number of years ago by Greiter, Witten, and Wilczek.\cite{Greiter:1989qb} In this case, the symmetry in question was Galilean invariance, which they imposed by demanding an algebraic relation between the momentum and charge currents \begin{align}\label{GWWconstraint} T^{0i} = \frac{m}{e} j^i . \end{align} The physics of this statement is that one expects that for non-relativistic theories, momentum is carried entirely by the transport of matter. They concluded that to lowest order in a derivative expansion, the most general EFT consistent with this principle is determined by a single function of a single variable \begin{align}\label{GWWaction} S = \int d^{4} x ~ p \left( D_t \varphi - \frac{1}{2m} D_i \varphi D^i \varphi \right) . \end{align} In equilibrium, $p(\mu)$ is the thermodynamic pressure as a function of the chemical potential. Here $m$ is the mass of the superconducting order parameter, $\varphi$ is its phase, and $D_\mu \varphi = \partial_\mu \varphi + q A_\mu$. The fact that the low energy dynamics may be entirely characterized by a single function of a single variable demonstrates the power of Galilean symmetry and the utility of the effective action approach. 
The theory (\ref{GWWaction}) was pushed to higher order in derivatives by Son and Wingate, who were interested in the next-to-leading order (NLO) physics of the unitary Fermi gas.\cite{Son2006} In this work, the authors introduced and demanded a symmetry called non-relativistic general coordinate invariance, which, in particular, implies (\ref{GWWconstraint}). However, both of these approaches can be unwieldy. For instance, the constraint (\ref{GWWconstraint}) amounts to a non-linear PDE for the Lagrangian as a function of the fields. While an explicit solution was found to lowest order in derivatives of those fields, this approach is intractable at higher orders as the order of the PDE increases. Non-relativistic general coordinate invariance is a major improvement in this regard and has seen a number of condensed matter applications,\cite{Son:2005tj,Hoyos:2011ez,Son:2013,Andreev:2013qsa,Golkar:2013gqa,Hoyos:2013eha,Hoyos:2014pba,Andreev:2014gia,Gromov:2014gta,Gromov:2014vla,Wu:2014dha,Wu:2014osa,Jensen:2014aia,Jensen:2014ama,Moroz:2014ska,Moroz:2015jla,Fujii:2016mbc,Auzzi:2016lrq} but often requires a lengthy calculation to confirm invariance in the presence of massive matter. This is particularly true when one lacks, as we do in this work, a preferred velocity field $v^i$ from which one forms the Galilean invariant combination $\tilde A_\mu$ introduced by Son. It would be advantageous to have a means of writing down manifestly invariant actions for superfluid Goldstones that would remove the need for additional calculation. More seriously, both methods are intrinsically single constituent in nature: the condition (\ref{GWWconstraint}) relies on this quite explicitly, while non-relativistic general coordinate invariance was motivated as a symmetry of microscopic single constituent actions and does not hold when fields of multiple distinct charge-to-mass ratios are included. 
On the other hand, multiconstituent superfluid condensates are of great experimental and theoretical\cite{khalatnikov1957hydrodynamics,khalatnikov1973sound,mineev1974theory,andreev1975three} interest. The most well known example is He3/He4, but experimentally realizable superfluid mixtures have proliferated in recent years due to experimental advances in cold atom physics. The first experimental realization of a superfluid mixture in an atom trap was obtained by Myatt et al. in 1997,\cite{Myatt1997} who condensed the $F=2, m=2$ and $F=1, m=-1$ states of Rb87. Mixtures can also be created by condensing all the spin states of a single atomic species such as spin-1 Na23.\cite{stenger1998spin} For an excellent review of weakly coupled BEC mixtures and their experimental realizations we refer the reader to review articles by Kasamatsu, Kawaguchi, and Ueda.\cite{kasamatsu2005,kawaguchi2012spinor} We begin in section \ref{sec:Geometry} with an overview of Galilean geometry, which provides an efficient means of writing down EFT's by making the spacetime transformation properties of physical objects manifest. In section \ref{sec:ParityEFT} we apply this to construct the most general parity invariant EFT to lowest order in a derivative expansion. In the single constituent case, this reduces to (\ref{GWWaction}), but allows for a non-dissipative superfluid drag in the general one, an effect originally considered by Andreev and Bashkin.\cite{andreev1975three} Section \ref{sec:ParityBreakingEFT} extends this analysis to the parity breaking case and contains most of the new results of this paper. In particular, we find a parity odd version of the drag coefficient, which we dub the Hall drag. In two spatial dimensions, this coefficient ``drags'' mass, charge, and energy perpendicular to the relative velocity of two condensates, for instance \begin{align} j^a = c^H \frac{q_1}{m_1} \epsilon^{ab} (v_{2b} - v_{3b} ) + \cdots . 
\end{align} Galilean invariance also admits a ``pinning'' of mass, charge, and energy to relative velocity, which one may think of as the renormalization of these quantities due to a velocity dependent interaction. For example, \begin{align} j^0 = f \frac{q_1}{m_1} \text{Vol}_{234} + \cdots , \end{align} among other effects. Here $\text{Vol}_{234}$ is the volume of the 2-simplex that spans the endpoints of the velocity vectors of the three superfluids $2,3,$ and 4. When $\text{Vol}_{234}$ is nonzero, $f$ leads to a buildup of charge proportional to that of the first superfluid. In higher dimensions, these effects have natural geometric interpretations in terms of the signed volumes and directed areas of certain sub-complexes of the convex hull defined by the endpoints of the fluid velocity vectors. For the general expressions, see equations (\ref{hallDragGeneral}) and (\ref{pinningCurrents}). We conclude in section \ref{sec:Example} with a simple quasi-one-dimensional model which exhibits non-trivial Hall drag and compute $c^H$ in mean field theory. \section{Galilean Geometry}\label{sec:Geometry} In this section we recount an efficient method for generating Galilean invariant action principles. This method is essentially an adaptation of pseudo-Riemannian geometry to the Galilean case.\cite{GPR_geometry,GPR_fluids} For massless fields, it reduces to Cartan's treatment of Newtonian gravity. Once this formalism is established, our treatment proceeds straightforwardly along the lines of the relativistic case.\cite{son2002low} Throughout we shall denote spacetime indices by $\mu, \nu, \dots$, with temporal component $t$ and spatial components $i,j,\dots$. Internal Galilean indices in the vector and covector representation will be denoted $A, B, \dots$, with temporal component $0$ and spatial components $a,b,\dots$, while extended indices will be denoted by $I,J,...$. 
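For concreteness, the geometric quantity $\text{Vol}_{234}$ appearing above is, in $d=2$ spatial dimensions, the signed area of the triangle spanned by the endpoints of the velocity vectors $v_2$, $v_3$, $v_4$; a minimal numerical sketch with hypothetical velocity values (the standard signed-volume determinant formula, with an overall sign convention chosen here):

```python
import math
import numpy as np

def signed_simplex_volume(points):
    """Signed volume of the simplex spanned by n+1 points in R^n:
    (1/n!) * det(p_1 - p_0, ..., p_n - p_0)."""
    p = np.asarray(points, dtype=float)
    edges = p[1:] - p[0]
    return np.linalg.det(edges) / math.factorial(len(edges))

# Endpoints of three superfluid velocities in d = 2 spatial dimensions
# (hypothetical values): a right triangle of signed area 1/2.
v2, v3, v4 = [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]
vol = signed_simplex_volume([v2, v3, v4])
print(vol)
```

When the velocity endpoints are collinear the volume vanishes, so the pinning contribution proportional to $\text{Vol}_{234}$ drops out, as stated in the text.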
We will regularly pass between spacetime indices and internal Galilean indices using the coframe $e^A_\mu$ and its inverse $e^\mu_A$. In this section we will only give a brief review; a more complete treatment may be found elsewhere.\cite{Geracie:2016bkg} \subsection{The Galilean Group} We begin with the Galilean group $Gal(d)$, where $d$ is the spatial dimension. This is the matrix group \begin{align}\label{galDef} \Lambda^A{}_B = \begin{pmatrix} 1 & 0 \\ - k^a & R^a{}_b \\ \end{pmatrix} , \end{align} where $R^a{}_b$ is a rotation matrix and $k^a$ is the relative velocity between Galilean frames. This is simply the action of Galilean transformations on inertial coordinates $(t ~ x^i)^T$. Note however that $Gal(d)$ will be acting as an internal symmetry throughout, since no natural notion of inertial coordinates exists in the curved case. In this paper we shall follow the approach of Son and many subsequent works in formulating the theory on curved spacetime. This is both a convenient means to make the spacetime symmetries of the theory manifest and an efficient way to encode transport, since a generic spacetime provides the theorist with a suite of knobs to turn to study response. The velocity $v^A = (1 ~ v^a)^T$ of a particle transforms under this, the vector, representation. There are however natural objects in non-relativistic physics that are neither Galilean vectors nor singlets. Consider for instance the $(d+2)$-dimensional column vector \begin{align} p^I = \begin{pmatrix} \rho \\ p^a \\ - \epsilon \end{pmatrix} \end{align} where $\rho$ is the mass density, $p^a$ the momentum density, and $\epsilon$ the energy density. 
The well-known transformation laws for momentum and energy \begin{align} \rho \rightarrow \rho, && p^a \rightarrow - \rho k^a + R^a{}_b p^b , && \epsilon \rightarrow \frac 1 2 \rho k_a k^a - k_b R^b{}_a p^a + \epsilon \end{align} may be summarized in the matrix \begin{align}\label{extendedRep} p^I \rightarrow \Lambda^I{}_J p^J , && \Lambda^I{}_J = \begin{pmatrix} 1 & 0 & 0 \\ - k^a & R^a{}_b & 0 \\ - \frac 1 2 k_c k^c & k_c R^c{}_b & 1 \end{pmatrix} . \end{align} This forms a representation of $Gal(d)$ called the extended representation. In contrast to the relativistic case then, in which momentum and energy are naturally collected into a $(d+1)$-vector $p^\mu$, in a Galilean covariant theory, mass, energy, and momentum are naturally collected into the $(d+2)$-vector $p^I$. \subsection{The Extended Derivative}\label{sec:extendedDerivative} The natural derivative operator on massive non-relativistic fields $\psi$ is not of the form $D_\mu$, familiar from relativistic theories, but is also valued in the extended representation \begin{align}\label{galDeriv} D_I \psi, \end{align} as one might guess from the above example by the correspondence $P_i = - i \frac{\partial}{\partial x^i}$, $H = i \frac{\partial}{\partial t}$. Like the energy-momentum $(d+2)$-vector given above, one of the components of this derivative operator is tied to the mass \begin{align}\label{extendedDerivative} D_I \psi = \begin{pmatrix} D_0 \psi & D_a \psi & i m \psi \end{pmatrix} . \end{align} For our purposes, the mass $m$ of a non-relativistic field is its representation under $U(1)_M$ transformations\footnote{That this is the same as the kinematic mass - the mass entering the dispersion relation - is fixed by Galilean invariance as can be seen in equation (\ref{schrodAction}). Indeed, Galilean invariance also fixes both to be equal to the gravitational mass.} \begin{align}\label{U(1)_M} \psi \rightarrow e^{i m \alpha} \psi. 
\end{align} Invariance under $U(1)_M$ ensures the existence of a conserved mass current $\rho^\mu$, common to non-relativistic theories. The derivative operator (\ref{galDeriv}) is both $U(1)_M$ and $Gal(d)$ covariant, and if we tried to do without the final component of (\ref{extendedDerivative}), we would break the latter. $U(1)_M$ covariance requires the existence of a mass gauge field, introduced by Duval and K\"unzle into the connection,\cite{Duval:1976ht,DK} which we will return to in section \ref{sec:currents}. Though it will not be important for this work, Newtonian gravity finds its origin in the mass gauge field. \subsection{Invariant Tensors} The point of this construction is that one may now obtain Galilean invariant action principles straightforwardly by contracting indices with invariant tensors. For the vector representation these are \begin{align}\label{definingInvariants} n_A = \begin{pmatrix} 1 & 0 \end{pmatrix}, &&h^{AB} = \begin{pmatrix} 0 & 0 \\ 0 & \delta^{ab} \end{pmatrix}. \end{align} The former tensor is called the internal clock-form and provides a non-relativistic theory with an absolute notion of space and time\footnote{In the general case, this decomposition may only be local.} while the latter serves as a spatial metric. The extended representation admits a higher dimensional version of these invariants \begin{align}\label{extendedInvariants} n_I = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}, &&g^{IJ} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & \delta^{ab} & 0 \\ 1 & 0 & 0 \end{pmatrix}. \end{align} Indeed, the invariants (\ref{definingInvariants}) of the defining representation may be obtained from these by using the projector \begin{align} \Pi^A{}_I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \delta^{ab} & 0 \end{pmatrix} , \end{align} which the reader may check is itself a Galilean invariant. Importantly, the extended representation of $Gal(d)$ admits an inverse metric $g^{IJ}$ of Lorentzian signature. 
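The invariance of $n_I$ and $g^{IJ}$ under the extended representation (\ref{extendedRep}) is also easy to verify numerically; the following sketch builds $\Lambda^I{}_J$ for $d=2$ with arbitrarily chosen boost and rotation parameters and checks $\Lambda g \Lambda^T = g$ and $n\Lambda = n$ (indices ordered as $(0, a, M)$):

```python
import numpy as np

def extended_rep(k, theta):
    """Extended representation of Gal(2): boost k, rotation angle theta,
    acting on (mass, momentum, -energy) vectors (rho, p^a, -eps)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    k = np.asarray(k, dtype=float)
    L = np.zeros((4, 4))
    L[0, 0] = 1.0
    L[1:3, 0] = -k
    L[1:3, 1:3] = R
    L[3, 0] = -0.5 * k @ k
    L[3, 1:3] = k @ R
    L[3, 3] = 1.0
    return L

# Invariant tensors: clock-form n_I and extended metric g^{IJ}.
n = np.array([1.0, 0.0, 0.0, 0.0])
g = np.zeros((4, 4))
g[0, 3] = g[3, 0] = 1.0
g[1, 1] = g[2, 2] = 1.0

L = extended_rep([0.3, -0.7], 0.4)
print(np.allclose(L @ g @ L.T, g), np.allclose(n @ L, n))
```

The same check applied to the last row of $\Lambda$ reproduces the familiar boost transformation of the energy quoted above.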
We shall denote its inverse by $g_{IJ}$ and use it to raise and lower indices in the usual way. Also note that the form (\ref{extendedDerivative}) of the extended derivative operator may be stated in the invariant way \begin{align} n^I D_I \psi = i m \psi . \end{align} Finally, note that since the defining representation of $Gal(d)$ is a subgroup of $SL(d+1)$ and the extended representation a subgroup of $SL(d+2)$, they also admit the parity and time reversal breaking invariants \begin{align} \epsilon_{A_0 \cdots A_{d}}, &&\epsilon^{A_0 \cdots A_{d}}, &&\epsilon^{I_0 \cdots I_{d+1}}, \end{align} where we have chosen $\epsilon_{0 \cdots d} = \epsilon^{0 \cdots d} = \epsilon^{0 \cdots d+1} = 1$. As an illustration of this approach, the Schr\"odinger action may be written \begin{align}\label{schrodAction} S = - \frac{1}{2m} \int d^{d+1} x | e |D_I \psi^\dagger D^I \psi = \int d^{d+1} x | e |\left( \frac{i}{2} \psi^\dagger \overset \leftrightarrow D_0 \psi - \frac{\delta^{ab} }{2m} D_a \psi^\dagger D_b \psi \right) , \end{align} where $|e|= \det (e^A_\mu)$ is the volume element. \subsection{Currents}\label{sec:currents} In this work we are principally concerned with the currents present in any non-relativistic theory and their response. These currents are most easily defined in a vielbein formalism which we recount here. Since this has been discussed at great length elsewhere, we refer the reader to our references for details.\cite{Geracie:2016bkg} In this formalism, the geometry is defined by an extended vielbein \begin{align} e^I_\mu = \begin{pmatrix} e^A_\mu \\ a_\mu \end{pmatrix} \end{align} which contains a spacetime coframe $e^A_\mu = \Pi^A{}_I e^I_\mu$ in its first $d+1$ components. The final component is the mass gauge field alluded to in section \ref{sec:extendedDerivative} and transforms as $a_\mu \rightarrow a_\mu + \nabla_\mu \alpha$ under (\ref{U(1)_M}).
It is the star of Newton-Cartan geometry and the means by which the geometry encodes Newtonian gravitational effects,\cite{Duval:1976ht,DK} reducing to the familiar Newtonian gravitational potential after fixing an appropriate Galilean frame. However, for our purposes, its principal role is to serve as a source for mass current. The extended derivative operator is determined by this data, and under a variation $\delta e^I_\mu$ and $\delta A_\mu$ we have \begin{align}\label{derivativeVariation} \delta D_I \psi = - \Pi^\mu{}_I \left( \delta e^J_\mu D_J \psi + i q \psi \delta A_\mu \right) , \end{align} where we have also chosen to couple to a background electromagnetic potential $A_\mu$ with charge $q$. The currents are then defined as\footnote{We are specializing to the case of vanishing spin current which will be relevant for our discussion. A general treatment may be found elsewhere.\cite{Geracie:2016bkg} It would be interesting to extend the analysis of this paper to magnetized superfluid mixtures, such as the ferromagnetic or polar phases of condensed Rb87 or Na23 by incorporating spin.} \begin{align} \delta S = \int \left( - \tau^\mu{}_I \delta e^I_\mu + j^\mu \delta A_\mu \right) . \end{align} The extended-valued tensor $\tau^\mu{}_I$ encodes the flow of mass, energy, and stress. In a particular Galilean frame its components have the following interpretation\footnote{Note that $- n_A \tau^A{}_I$ is the mass-momentum-energy current we considered above (\ref{extendedRep}).} \begin{align}\label{stressEnergyComponents} \tau^\mu{}_I = \begin{pmatrix} \epsilon & - \rho^i & - \rho \\ \epsilon^i & - T^i{}_a & - \rho_a \end{pmatrix}, \end{align} where $\epsilon$ is the energy density, $\rho$ the mass density, $\epsilon^i$ and $\rho^i$ their associated currents, and $T^i{}_a$ the stress tensor.
When all indices are converted to the same type using the vielbein, the stress tensor is symmetric on-shell by Ward identities.\cite{GPR_improv} \section{Parity Invariant Superfluid Mixtures}\label{sec:ParityEFT} We now turn to the problem studied by Greiter, Witten, and Wilczek, that of a single component charged superfluid at $T=0$. The only degree of freedom remaining at low temperatures is a single superfluid phase $\varphi$. It is a simple matter to determine how the covariant derivative operator acts on a phase $\psi = e^{- i \varphi} | \psi |$ \begin{align} D_I \varphi = \begin{pmatrix} D_0 \varphi & D_a \varphi & - m \end{pmatrix}, \end{align} where \begin{align} D_A \varphi = e^\mu_A (\partial_\mu \varphi + q A_\mu + m a_\mu ) . \end{align} Here $e^\mu_A$ is the inverse vielbein, $A_\mu$ the electromagnetic vector potential, and $a_\mu$ the mass gauge field. In what follows we will often work with the ``extended velocity'' of the condensate and its projection \begin{align} v_I = - \frac{1}{m} D_I \varphi = \begin{pmatrix} - \frac 1 m D_0 \varphi & - \frac 1 m D_a \varphi & 1 \end{pmatrix}, &&v^A = \Pi^{AI} v_I = \begin{pmatrix} 1 \\ - \frac 1 m D^a \varphi \end{pmatrix}. \end{align} \subsection{Effective Action} Due to the shift symmetry $\varphi \rightarrow \varphi + c$, the phase must always enter the action with derivatives. The natural Galilean covariant action to lowest order in derivatives is then \begin{align}\label{oneComponentAction} S &= \int d^{d+1} x | e | p \left(- \frac{1}{2m} D_I \varphi D^I \varphi \right) \nonumber \\ &= \int d^{d+1} x | e | p \left( D_0 \varphi - \frac{1}{2m}D_a \varphi D^a \varphi \right) , \end{align} with an arbitrary function $p$, generalizing (\ref{GWWaction}) to curved space. It would be interesting to carry out this analysis to higher orders, generalizing the work of Son and Wingate\cite{Son2006} beyond NLO. 
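As a quick symbolic check of the contraction above (an illustrative sketch using sympy, taking $d = 2$), one may confirm that $-\frac{1}{2m} D_I \varphi D^I \varphi$, with $D_I \varphi = (D_0 \varphi, D_a \varphi, -m)$ and indices raised by the extended metric, reproduces the second line of (\ref{oneComponentAction}):

```python
import sympy as sp

m, D0 = sp.symbols('m D0')
Da = sp.symbols('D1 D2')       # spatial components D_a phi, here d = 2

# the extended metric g^{IJ} pairs the time and mass slots and is flat in space,
# so raising the index swaps the first and last entries of D_I phi
DIphi = [D0, *Da, -m]          # D_I phi = (D_0 phi, D_a phi, -m)
DIphi_up = [-m, *Da, D0]       # D^I phi = g^{IJ} D_J phi

lagrangian = -sp.Rational(1, 2) / m * sum(lo * up for lo, up in zip(DIphi, DIphi_up))
expected = D0 - sp.Rational(1, 2) / m * sum(d**2 for d in Da)
print(sp.simplify(lagrangian - expected))  # 0
```

The Galilean-invariant combination thus automatically produces the familiar first-order time derivative plus kinetic term.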
For this work, we are considering backgrounds that are small deviations from flat spacetime with no applied electromagnetic field. That is, our power counting scheme is \begin{align} D_I = \partial_I + \mathcal O (\epsilon), &&A_\mu = \mathcal O (\epsilon), &&a_\mu = \mathcal O (\epsilon), &&e^A_\mu = \delta^A{}_\mu + \mathcal O ( \epsilon ), \end{align} so that additional, background dependent terms do not enter at lowest order. The superfluid velocity $D_I \varphi$ may however be large. It should now be clear how to generalize to arbitrary superfluid mixtures. The natural set of Galilean invariants we may form from a collection of phases $\varphi_i$ with masses $m_i$ and charges $q_i$ is \begin{align} \mu_{ij} &= - D_I \varphi_i D^I \varphi_j \nonumber \\ &= m_j D_0 \varphi_i + m_i D_0 \varphi_j - D_a \varphi_i D^a \varphi_j . \end{align} There is also a single additional set of invariants which must be considered for completeness; however, they do not lead to any distinct effects, so we relegate them to appendix \ref{app:SuperfluidDrag}. The lowest order EFT is then \begin{align} S = \int d^{d+1}x | e | p ( \mu_{ij} ). \end{align} What transport does this encode? Using the variation (\ref{derivativeVariation}) as well as $\delta |e| = | e | \Pi^\mu{}_I \delta e^I_\mu$, we find that \begin{align}\label{covariantCurrents} n^\mu_i &= n_i v^\mu_i + \frac 1 2 \rho \sum\nolimits' c^d_{ij} \frac {1}{m_i} ( v^\mu_i - v^\mu_j ) , \nonumber \\ j^\mu &= \sum q_i n_i v^\mu_i + \frac 1 2 \rho \sum\nolimits' c^d_{ij} \left( \frac{q_i}{m_i} - \frac{q_j}{m_j}\right) ( v^\mu_i - v^\mu_j ) , \nonumber \\ \tau^\mu{}_I &= - p \Pi^\mu{}_I - \sum m_i n_i v^\mu_i v_{iI} - \frac 1 2 \rho \sum\nolimits' c^d_{ij} (v^\mu_i - v^\mu_j) (v_{iI} - v_{jI}) . \end{align} Primed summations denote a sum over all independent components of the tensor structures appearing in the summand, here, $i < j$. $n^\mu_i$ is the Noether current generated by the symmetry $\varphi_i \rightarrow \varphi_i + c$.
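The component expansion of $\mu_{ij}$ quoted above can be checked the same way (again an illustrative sketch in $d = 2$): contracting $D_I \varphi_i = (D_0 \varphi_i, D_a \varphi_i, -m_i)$ with the raised $D^I \varphi_j$ gives

```python
import sympy as sp

mi, mj, D0i, D0j = sp.symbols('m_i m_j D0i D0j')
Dai = sp.symbols('Dxi Dyi')    # D_a phi_i, d = 2
Daj = sp.symbols('Dxj Dyj')    # D_a phi_j

# D_I phi = (D_0 phi, D_a phi, -m); raising with g^{IJ} swaps time and mass slots
lower_i = [D0i, *Dai, -mi]
upper_j = [-mj, *Daj, D0j]

mu_ij = -sum(lo * up for lo, up in zip(lower_i, upper_j))
expected = mj * D0i + mi * D0j - sum(a * b for a, b in zip(Dai, Daj))
print(sp.simplify(mu_ij - expected))  # 0
```

Note that $\mu_{ii} = 2 m_i D_0 \varphi_i - D_a \varphi_i D^a \varphi_i$, so the diagonal invariants reproduce the single-component combination above.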
$n_i$ and $\rho$ are then interpreted as the number density of the $i$th species and the total mass density, respectively. $c^d_{ij}$ encodes a phenomenon unique to the multiconstituent case, on which we will have more to say in a moment. In the above, we have defined \begin{align}\label{dragDef} n_i &= \sum_j N_{ij}, &&\rho = \sum_i m_i n_i , &&c^d_{ij} = - \frac{2}{\rho} m_i N_{ij} . \end{align} We refer to the matrix \begin{align}\label{numberMatrix} N_{ij} = S_{ij} m_j, &&\text{where} &&S_{ij} = 2 p_{ij}, &&\delta p = \sum_{ij} p_{ij} \delta \mu_{ij} \end{align} as the number matrix.\footnote{Note that since the sum in (\ref{numberMatrix}) is unconstrained, there is some double counting. For instance $p_{ij} = \frac 1 2 \frac{\partial p}{\partial \mu_{ij}}$ when $i \neq j$.} Though perhaps notational overkill at this stage, these definitions will prove convenient when we compute effective masses in appendix \ref{app:EffectiveMass}. To understand this better, let's write down these formulas in the more familiar component form of (\ref{stressEnergyComponents}) \begin{align}\label{nonCovariantCurrents} n_i^A &= \begin{pmatrix} n_i \\ n_i v^a_i + \frac 1 2 \rho \sum_j c^d_{ij} \frac{1}{m_i} ( v^a_i - v^a_j ) \end{pmatrix}, \nonumber \\ j^A &= \begin{pmatrix} \sum q_i n_i \\ \sum q_i n_i v^a_i + \frac 1 2 \rho \sum' c^d_{ij} \left( \frac{q_i}{m_i} - \frac{q_j}{m_j}\right) (v^a_i - v^a_j ) \end{pmatrix}, \nonumber \\ \rho^A &= \begin{pmatrix} \sum m_i n_i \\ \sum m_i n_i v^a_i \end{pmatrix}, \nonumber \\ \epsilon^A &= \begin{pmatrix} \sum \mu_i n_i - p \\ \sum \mu_i n_i v^a_i + \frac 1 2 \rho \sum' c^d_{ij} \left( \frac{\mu_i}{m_i} - \frac{\mu_j}{m_j}\right) ( v^a_i - v^a_j ) \end{pmatrix}, \nonumber \\ T^{ab} &= p \delta^{ab} + \sum m_i n_i v^a_i v^b_i + \frac 1 2 \rho \sum\nolimits' c^d_{ij} (v^a_i - v^a_j)(v^b_i - v^b_j) , \end{align} where we have defined the energy per particle $\mu_i = D_0 \varphi_i$ (this is the same as the chemical potential in the homogeneous
case). \subsection{Superfluid Drag}\label{sec:SuperfluidDrag} This decomposition has an obvious interpretation: the fluid of density $n_i$ carries mass $m_i$, charge $q_i$, and energy $\mu_i$ per particle in the direction $v^a_i$. The fluid has pressure $p$ and the standard kinetic contribution to the stress is fixed by Galilean invariance. The coefficients $c^d_{ij}$ are the {\it drag coefficients}. In the presence of a relative velocity of the $i$th and $j$th superfluids, they lead to a force per unit area \begin{align} \frac{dF}{dA} = \frac 1 2 \rho c^d_{ij} (\Delta v_{ij})^2 \end{align} directed along the relative velocity vector, the standard definition of the drag coefficient in hydrodynamics.\cite{landau1987fluid} As originally observed by Mineev,\cite{mineev1974theory} steady-state configurations with independent motion exist even in the presence of superfluid drag. This is immediately seen from the equations of motion \begin{align} \nabla_\mu n^\mu_i = 0, \end{align} which are trivially satisfied with static and homogeneous densities $n_i$ and velocities $v^\mu_i$, regardless of their relative orientations. In particular, superfluid drag does not introduce dissipation in a superfluid mixture with independent motions. This holds regardless of microscopic dynamics and in particular applies as well in the presence of the parity odd generalizations to drag that we will consider in the rest of this paper. In the presence of superfluid drag, the mass, energy, and charge currents are not simple weighted sums of the number current, but there is rather additional transport induced by the mutual interactions of the superfluids. This phenomenon was incorporated into the hydrodynamic description of condensate mixtures by Andreev and Bashkin,\cite{andreev1975three} who anticipated the effect on the following physical grounds. 
In the presence of interactions between two atomic species in mixture, the first species is transformed into a quasi-particle excitation of effective mass $m^\star_1$ greater than its bare mass $m_1$. A flow of the first superfluid must then carry with it some mass of the second, even if the second has no number current. Roughly, mass, charge, and energy are ``dragged'' along the direction of relative velocity. In this discussion we have taken the definition of the velocity to be parallel to the number current, whereas in the rest of this paper we have defined the velocity to be parallel to the momentum of a given superfluid component $v^a_i = - \frac{1}{m_i} D^a \varphi_i$. One may pass from one description to the other by a simple redefinition of variables. The sound velocities may be obtained from the equations of motion $\nabla_\mu n^\mu_i = 0$ at the linearized level \begin{align} 2 \sum_{jkl} S_{ij,kl} \partial_t^2 \varphi_l = \sum_j S_{ij} \nabla^2 \varphi_j , \end{align} with $S_{ij}$ defined in (\ref{numberMatrix}) and $\delta S_{ij} = \sum S_{ij,kl} \delta \mu_{kl}$. This result will not change in subsequent sections, except that the expression for $S_{ij}$ in equation (\ref{sDef}) is modified. In the single component case (\ref{oneComponentAction}) one may check that this reduces to the familiar result \begin{align} \partial_t^2 \varphi = c_s^2 \nabla^2 \varphi , &&c_s = \sqrt{\frac{\partial p}{\partial \rho}}. \end{align} \section{Parity Breaking Superfluid Mixtures}\label{sec:ParityBreakingEFT} The principal results of this paper concern parity and time-reversal breaking transport. This would be relevant in the presence of a background magnetic field, or say, in mixtures of chiral molecules.\footnote{We thank Dam Son for the latter suggestion.} We find the symmetries admit two types of parity odd transport that can only be achieved in the presence of superfluids in mixture.
The first is a parity odd version of the drag coefficient just discussed, which we dub the Hall drag, while the second pins charge, mass, and energy to relative velocity, in addition to other effects. The number of superfluids required to realize each possibility depends on the dimensionality. A simple microscopic model that realizes the Hall drag in $1+1$ dimensions is given in section \ref{sec:Example}. \subsection{The Hall Drag}\label{sec:HallDrag} The first example we consider is a parity odd version of the drag coefficients considered in section \ref{sec:SuperfluidDrag}. For simplicity we begin in $2+1$ dimensions in a tripartite mixture $\varphi_1, \varphi_2, \varphi_3$. We may then form an additional $P$ and $T$ breaking scalar \begin{align} \lambda &= - \epsilon_{ABC} D^A \varphi_1 D^B \varphi_2 D^C \varphi_3 \nonumber \\ &= m_1 \epsilon^{ab} D_a \varphi_2 D_b \varphi_3 + m_2 \epsilon^{ab} D_a \varphi_3 D_b \varphi_1 + m_3 \epsilon^{ab} D_a \varphi_1 D_b \varphi_2 , \end{align} where $D^A \varphi_i = \Pi^{AI} D_I \varphi_i = \begin{pmatrix} - m_i & D^a \varphi_i \end{pmatrix}^T$. Note that this invariant requires the presence of at least three condensates, and the generalization to the case of more than three condensates should be clear. As in the parity invariant case, there is an additional set of scalars that can be constructed that leads to the same effect, which for simplicity of presentation we relegate to appendix \ref{app:Parity}. 
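The equality of the two expressions for $\lambda$ can be verified symbolically (an illustrative check, writing $D^a \varphi_i = (u_i, w_i)$ for the two spatial components):

```python
import sympy as sp

m = sp.symbols('m1:4')
u = sp.symbols('u1:4')   # D_x phi_i
w = sp.symbols('w1:4')   # D_y phi_i

# rows are D^A phi_i = (-m_i, D^a phi_i) in 2+1 dimensions;
# lambda = -eps_ABC D^A phi_1 D^B phi_2 D^C phi_3 = -det of this matrix
M = sp.Matrix([[-m[i], u[i], w[i]] for i in range(3)])
lam = -M.det()

def eps2(a, b):
    """epsilon^{ab} contraction: eps^{ab} a_a b_b."""
    return a[0] * b[1] - a[1] * b[0]

expected = (m[0] * eps2((u[1], w[1]), (u[2], w[2]))
            + m[1] * eps2((u[2], w[2]), (u[0], w[0]))
            + m[2] * eps2((u[0], w[0]), (u[1], w[1])))
print(sp.simplify(lam - expected))  # 0
```

The determinant form also makes the $P$ and $T$ breaking manifest: $\lambda$ changes sign under a spatial reflection, which exchanges the two spatial columns.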
The pressure is then a function of the new variable $\lambda$ in addition to $\mu_{ij}$ \begin{align} S = \int d^3 x |e| p ( \mu_{ij} , \lambda ) \end{align} and the currents are those found in (\ref{covariantCurrents}), plus \begin{align} n^\mu_1 &= c^H \epsilon^\mu{}_{\nu \lambda} \frac{1}{m_1} v^\nu_2 v^\lambda_3 , \nonumber \\ j^\mu &= c^H \epsilon^\mu{}_{\nu \lambda} \left( \frac{q_1}{m_1} v^\nu_2 v^\lambda_3 + \frac{q_2}{m_2} v^\nu_3 v^\lambda_1 + \frac{q_3}{m_3} v^\nu_1 v^\lambda_2 \right) , \nonumber \\ \tau^\mu{}_I &= - c^H \epsilon^\mu{}_{\nu \lambda} ( v_{1I} v^\nu_2 v^\lambda_3 + v_{2I} v^\nu_3 v^\lambda_1 + v_{3I} v^\nu_1 v^\lambda_2) , \end{align} and cyclic permutations for the other $n^\mu_i$. The general formula with an arbitrary number of condensates may be found in equation (\ref{hallCurrents}). Here \begin{align} c^H &= m_1 m_2 m_3 \frac{\partial p}{\partial \lambda}. \end{align} These may be computed using the variations (\ref{oddVariations}). In a fixed ``lab frame'' (\ref{stressEnergyComponents}), these read \begin{align} &n_1^A = \begin{pmatrix} 0 \\ c^H \frac{1}{m_1} \epsilon^{ab}(v_{2b} - v_{3b} ) \end{pmatrix}, \nonumber \\ &j^A = \begin{pmatrix} 0 \\ c^H \frac{q_1}{m_1} \epsilon^{ab} (v_{2b} - v_{3b} ) + \cdots \end{pmatrix}, \nonumber \\ &\rho^A = 0, \nonumber \\ &\epsilon^A = \begin{pmatrix} 0 \\ c^H \frac{\mu_1}{m_1} \epsilon^{ab} (v_{2b} - v_{3b}) + \cdots \end{pmatrix}, \nonumber \\ &T^{ab} = c^H v^{(a}_1 \epsilon^{b )c} (v_{2c} - v_{3c}) + \cdots , \end{align} where the ellipses indicate cyclic permutations. We have symmetrized the stress by hand, since we know that local rotation invariance implies on-shell symmetry of the stress tensor.\cite{GPR_improv} We see that, much like the drag $c^d$, $c^H$ leads to stresses when two fluid components are in relative motion and induces currents that are not the weighted sum of the densities times velocity.
These currents are ``dragged perpendicular'' to relative velocity rather than along it, but are otherwise precisely of the same form as the standard drag currents, so we will refer to $c^H$ as the {\it Hall drag}. This behavior generalizes naturally to higher dimensions and any number of condensates. For each condensate $i$, one forms all $(d-1)$-simplices whose corners are determined by the velocity vectors of any $d$ other condensates $i_1 , \dots , i_d$. The Hall drag then drags the $i$th condensate along the directed area $\text{Area}^a_{i_1 \cdots i_d}$ of this simplex \begin{align}\label{hallDragGeneral} &n_i^A = \begin{pmatrix} 0 \\ \sum\nolimits' c^H_{i i_1\cdots i_d} \frac{1}{m_{i}} \text{Area}^a_{i_1 \cdots i_d} \end{pmatrix} , \nonumber \\ &j^A = \begin{pmatrix} 0 \\ \sum\nolimits' c^H_{i i_1\cdots i_d} \frac{q_{i}}{m_{i}} \text{Area}^a_{i_1 \cdots i_d} \end{pmatrix} , \nonumber \\ &\rho^A = 0 , \nonumber \\ &\epsilon^A = \begin{pmatrix} 0 \\ \sum\nolimits' c^H_{i i_1 \cdots i_d} \frac{\mu_{i}}{m_{i}}\text{Area}^a_{i_1 \cdots i_d} \end{pmatrix} , \nonumber \\ &T^{ab} = \sum \nolimits ' c^H_{i i_1 \cdots i_d} v^{(a}_{i} \text{Area}^{b)}_{i_1 \cdots i_d} , \end{align} where \begin{gather} \text{Area}^a_{i_1 \cdots i_d} = \frac{1}{(d-1)!} \epsilon^{a a_1 \cdots a_{d-1} } (\Delta v_{i_1 i_2})_{a_1} \cdots (\Delta v_{i_{d-1} i_d})_{a_{d-1}} , \nonumber \\ \text{and} \qquad \qquad c^H_{i_0 \cdots i_d} = (d+1)!(d-1)! m_{i_0} \cdots m_{i_d} p_{i_0 \cdots i_d} , \end{gather} with $\delta p = \sum p_{i_0 \cdots i_d} \delta \lambda_{i_0 \cdots i_d} $. This procedure is illustrated in figure \ref{fig:Areas}. In this picture, Galilean invariance is the statement that this procedure is independent of the choice of origin.
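A small numerical illustration (ours, assuming $\Delta v_{ij} = v_i - v_j$) of the directed area in $d = 3$: the general formula reduces to a cross product of relative velocities, and shifting every velocity by a common boost $w$, i.e. changing the choice of origin, leaves it unchanged:

```python
import numpy as np

def directed_area(v1, v2, v3):
    """Area^a_{i1 i2 i3} for d = 3: (1/2!) eps^{a a1 a2} (v1-v2)_{a1} (v2-v3)_{a2}."""
    return 0.5 * np.cross(v1 - v2, v2 - v3)

rng = np.random.default_rng(0)
v1, v2, v3, w = rng.normal(size=(4, 3))

# a common Galilean boost w shifts every velocity but not the relative velocities,
# so the directed area of the 2-simplex is invariant
A = directed_area(v1, v2, v3)
A_boosted = directed_area(v1 + w, v2 + w, v3 + w)
print(np.allclose(A, A_boosted))  # True
```

This is the origin-independence statement of the main text made concrete: only velocity differences enter.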
\begin{figure}[t] \centering \includegraphics[width=.2\linewidth]{3dHallDrag1} \qquad \includegraphics[width=.2\linewidth]{3dHallDrag2} \qquad \includegraphics[width=.2\linewidth]{3dHallDrag3} \qquad \includegraphics[width=.2\linewidth]{3dHallDrag4} \caption{The directed areas contributing to Hall drag in $3+1$ dimensions.} \label{fig:Areas} \end{figure} Note that there is an interesting interplay between dimensionality and the number of overlapping condensates necessary to realize this effect: it only exists when the number of condensates exceeds the spatial dimensionality of the system. For instance in the $(3+1)$-dimensional case relevant in most experiments, one requires at least 4. \subsection{Pinning Charge to Relative Velocity}\label{sec:velocityPinning} There is a single additional set of parity odd scalars one may construct. Again specializing to $2+1$ dimensions, but now in the presence of $4$ condensates, this is \begin{align} \xi &= \epsilon^{IJKL} D_I \varphi_1 D_J \varphi_2 D_K \varphi_3 D_L \varphi_4 \nonumber \\ &= ( m_1 D_0 \varphi_4 - m_4 D_0 \varphi_1 ) \epsilon^{ab} D_a \varphi_2 D_b \varphi_3 + \cdots , \end{align} and the effective action is \begin{align} S = \int d^3 x | e | p ( \mu_{ij} , \lambda_{ijk} , \xi ) . \end{align} As we shall see, this invariant pins charge, energy, and particle number to sites of relative velocity, in addition to other effects. In other words, $\xi$ encodes a velocity dependent interaction that alters the effective mass, charge, and chemical potential so that these densities are not simple weighted sums of $n_i$ with their bare values $m_i, q_i, \mu_i$. 
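Since $\xi$ is, up to sign, the determinant of the matrix whose rows are the extended covectors $D_I \varphi_i$, its Galilean invariance follows from $\det \Lambda = 1$. A numerical sketch of this (our illustration, for $d = 2$):

```python
import numpy as np

def extended_rep(k, theta):
    """Extended representation of Gal(2): the 4x4 matrix Lambda^I_J."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    L = np.eye(4)
    L[1:3, 0] = -k
    L[1:3, 1:3] = R
    L[3, 0] = -0.5 * k @ k
    L[3, 1:3] = k @ R
    return L

rng = np.random.default_rng(1)
D = rng.normal(size=(4, 4))   # rows stand in for the covectors D_I phi_i, i = 1..4

# covectors transform with the inverse matrix; since det Lambda = 1,
# the epsilon contraction xi is unchanged
L = extended_rep(np.array([0.2, 0.5]), 1.1)
xi = np.linalg.det(D)
xi_transformed = np.linalg.det(D @ np.linalg.inv(L))
print(np.allclose(xi, xi_transformed))  # True
```

The same argument shows why four condensates are needed in $2+1$ dimensions: the extended index runs over $d+2 = 4$ values, so the epsilon contraction requires four independent covectors.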
In covariant form, this induces transport \begin{align} &n_1^\mu = \frac{f}{ 2}\epsilon^{\mu}{}_{ I J K} \frac{1}{m_1} v^I_2 v^J_3 v^K_4 , \nonumber \\ &j^\mu = \frac{f}{2}\epsilon^{\mu}{}_{ I J K} \frac{q_1}{m_1} v_2^{I} v_3^{J} v_4^{K} + \cdots , \nonumber \\ &\tau^\mu{}_I = - \frac{f}{2}\epsilon^\mu{}_{JKL} v_{1 I} v_2^J v_3^K v_4^L + \cdots, \end{align} and cyclic permutations to obtain the other $n^\mu_i$'s, where $\epsilon^{\mu JKL} = \Pi^\mu{}_I \epsilon^{IJKL}$ and \begin{align} f = - 2 m_1 m_2 m_3 m_4 \frac{\partial p}{\partial \xi} . \end{align} For the general formula, see (\ref{fCovariant}). Expressing this in the lab frame (\ref{stressEnergyComponents}), we find \begin{align} n_1^A &= \begin{pmatrix} f \frac{1}{m_1} \text{Vol}_{234} \\ \frac{f}{2} \left( \frac{1}{m_1} \frac{\mu_2}{m_2} - \frac{1}{m_2} \frac{\mu_1}{m_1} \right) \epsilon^{ab} (v_{3b} - v_{4b} ) + \cdots \end{pmatrix}, \nonumber \\ j^A &= \begin{pmatrix} f \frac{q_1}{m_1} \text{Vol}_{234} + \cdots\\ \frac{f}{2} \left( \frac{q_1}{m_1} \frac{\mu_2}{m_2} - \frac{q_2}{m_2} \frac{\mu_1}{m_1} \right)\epsilon^{ab} (v_{3b} - v_{4b} ) + \cdots \end{pmatrix}, \nonumber \\ \rho^A &= 0, \nonumber \\ \epsilon^A &= \begin{pmatrix} f \frac{\mu_1}{m_1} \text{Vol}_{234} + \cdots \\ 0 \end{pmatrix}, \nonumber \\ T^{ab} &= - \frac{1}{2} f \left( \frac{\mu_1}{m_1} v^{(a}_2 - \frac{\mu_2}{m_2} v^{(a}_1 \right) \epsilon^{b)c} (v_{3c} - v_{4c} ) + \cdots , \end{align} where \begin{align} \text{Vol}_{234} = \frac 1 2 \epsilon_{ab} \Delta v_{23}^a \Delta v_{34}^b. \end{align} We see that $f$ induces charge and number transport perpendicular to relative velocity as the Hall drag does, however, the magnitude of the effect is proportional to the bare energies per particle $\mu_i$. 
Moreover, as previously mentioned, $f$ pins additional charge, energy, and particle number to sites of relative velocity.\footnote{In the perhaps more natural picture where density is defined to be the zero component of $n_i^\mu$, one would say it pins additional charge, energy, and mass to sites of relative velocity.} The amount is proportional to the signed volume $\text{Vol}_{ijk}$ of the 2-simplex formed by connecting the endpoints of any three velocity vectors. The greater the relative velocities, the stronger the interaction and the more pronounced the effect. As with the Hall drag, these formulas generalize naturally to any dimension and involve the signed volumes and directed areas of various simplices. To find the amount of fluid $i$ pinned, form all $d$-simplices whose corners are determined by the velocity vectors of any $d+1$ other condensates $i_1 , \dots , i_{d+1}$. $f$ pins an amount of $i$ proportional to the signed volume $\text{Vol}_{i_1 \cdots i_{d+1}}$ of this simplex \begin{align} \text{Vol}_{i_1 \cdots i_{d+1}} &= \frac{1}{d!} \epsilon_{a_1 \cdots a_d }\Delta v^{a_1}_{i_1 i_2} \cdots \Delta v^{a_d}_{i_d i_{d+1}} . \end{align} This procedure is pictured in figure \ref{fig:Volumes}. \begin{figure}[t] \centering \includegraphics[width=.2\linewidth]{3dVolumes1} \qquad\qquad \includegraphics[width=.2\linewidth]{3dVolumes2} \qquad\qquad \includegraphics[width=.2\linewidth]{3dVolumes3} \caption{Some of the signed volumes contributing to (\ref{pinningCurrents}) in $3+1$ dimensions.} \label{fig:Volumes} \end{figure} Similarly, to find the currents, select any two condensates $i,j$, and form all $(d-1)$-simplices whose corners are the endpoints of velocity vectors from any $d$ other condensates $i_1, \dots , i_d$. The current is proportional to $\text{Area}^a_{i_1 \cdots i_d}$, weighted by the anti-symmetrized ratios involving the $\mu_i$'s found above. This is illustrated in figure \ref{fig:EnergyAreas}.
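In $d = 2$ the signed volume is just the signed area of the triangle spanned by the velocity endpoints, as a short numerical check illustrates (our sketch, assuming $\Delta v_{ij} = v_i - v_j$):

```python
import numpy as np

def vol(v2, v3, v4):
    """Vol_{234} = (1/2) eps_ab (v2 - v3)^a (v3 - v4)^b in d = 2."""
    d23, d34 = v2 - v3, v3 - v4
    return 0.5 * (d23[0] * d34[1] - d23[1] * d34[0])

def signed_triangle_area(p, q, r):
    """Shoelace formula for the signed area of the triangle with vertices p, q, r."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

rng = np.random.default_rng(2)
v2, v3, v4 = rng.normal(size=(3, 2))
print(np.isclose(vol(v2, v3, v4), signed_triangle_area(v2, v3, v4)))  # True
```

In particular $\text{Vol}$ vanishes whenever the three velocities are collinear, so the pinning effect requires genuinely two-dimensional relative motion.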
\begin{figure}[b] \centering \includegraphics[width=.2\linewidth]{3dChargeEnergyCurrents12} \qquad\qquad \includegraphics[width=.2\linewidth]{3dChargeEnergyCurrents13} \qquad\qquad \includegraphics[width=.2\linewidth]{3dChargeEnergyCurrents14} \caption{Some of the directed areas contributing to (\ref{pinningCurrents}) in $3+1$ dimensions.} \label{fig:EnergyAreas} \end{figure} Concretely, we have in any dimension \begin{align}\label{pinningCurrents} n_i^A &= \begin{pmatrix} \sum\nolimits' f_{i i_1 \cdots i_{d+1}} \frac{1}{m_i} \text{Vol}_{i_1 \cdots i_{d+1}} \\ \frac{1}{d} \sum\nolimits' f_{ij i_1 \cdots i_d } \left( \frac{1}{m_i} \frac{\mu_j}{m_j} - \frac{1}{m_j} \frac{\mu_i}{m_i} \right) \text{Area}^a_{i_1 \cdots i_d} \end{pmatrix}, \nonumber \\ j^A &= \begin{pmatrix} \sum\nolimits' f_{i i_1 \cdots i_{d+1}} \frac{q_i}{m_i} \text{Vol}_{i_1 \cdots i_{d+1}} \\ \frac{1}{d} \sum\nolimits' f_{ij i_1 \cdots i_d } \left( \frac{q_i}{m_i} \frac{\mu_j}{m_j} - \frac{q_j}{m_j} \frac{\mu_i}{m_i} \right) \text{Area}^a_{i_1 \cdots i_d} \end{pmatrix}, \nonumber \\ \rho^A &= 0, \nonumber \\ \epsilon^A &= \begin{pmatrix} \sum\nolimits' f_{i i_1 \cdots i_{d+1}} \frac{\mu_i}{m_i} \text{Vol}_{i_1 \cdots i_{d+1}} \\ 0 \end{pmatrix}, \nonumber \\ T^{ab} &= - \frac{1}{d} \sum\nolimits' f_{ij i_1 \cdots i_d} \left( \frac{\mu_i}{m_i} v^{(a}_j - \frac{\mu_j}{m_j} v^{(a}_i \right) \text{Area}^{b)}_{i_1 \cdots i_d} . \end{align} \section{An Example in 1+1 Dimensions}\label{sec:Example} The utility of an effective action approach is that it bypasses an often intractable microscopic description and allows one to directly write down the most general low energy theory consistent with the symmetries of a problem. However, it is nonetheless instructive to have a microscopic model of the phenomena we've described. We thus conclude with a simple example of a weakly-coupled model that exhibits nonzero Hall drag. 
To keep things simple, we will consider the $(1+1)$-dimensional case, where a Hall drag may be obtained in the presence of a bipartite mixture. Our characterization of the Hall drag in $1+1$ dimensions is somewhat degenerate since there are no directions perpendicular to the relative velocity. However, the formulas (\ref{hallDragGeneral}) still hold formally with $\text{Area}^a_{i_1} \rightarrow 2$, as one may check from the general expressions (\ref{hallCurrents}). We then see that $c^H$ leads to persistent currents whenever two condensates have overlapping density. In a finite system this will lead to buildup of mass, charge, and energy at the edges of a system with overlapping condensates. $(1+1)$-dimensional condensation is famously forbidden for translationally invariant systems. However, experimental realizations invariably involve a trapping potential that modifies the density of states sufficiently to circumvent the usual arguments\cite{pethick2002bose} and Bose condensation has been observed in quasi-one-dimensional systems using highly anharmonic traps.\cite{lowDimensionReview} The condensation temperature of the 1-d free Bose gas in such a trap is\cite{pethick2002bose} \begin{align} T_c \approx \omega \frac{N}{\ln N} \end{align} where $N$ is the number of atoms and $\omega$ is the trapping frequency in the soft direction. We may formally consider the limit where $N$ is very large and $\omega$ very small with finite $T_c$ and so recover the approximately translationally invariant problem. \subsection{The Model}\label{sec:Model} The quasi-one-dimensional problem may be treated as the mean field theory of a three-dimensional system with weak contact interactions in a highly anharmonic trap. 
It has been argued by variational techniques, supported by numerical evidence, that the effective one-dimensional dynamics is that of a Gross-Pitaevskii-like equation with a non-polynomial potential $V(|\psi|^2)$.\cite{salasnich2002effective} A similar analysis should yield a non-polynomial $V(|\psi_1|^2, |\psi_2|^2)$ for the quasi-one-dimensional bipartite mixture, but the precise form will not matter for us. The microscopic model we propose is then \begin{align}\label{model} \mathcal L &= \frac{i}{2} \psi_1^\dagger \overset \leftrightarrow \partial_t \psi_1 - \frac{1}{2 m_1} \partial_x \psi_1^\dagger \partial_x \psi_1 + \frac{i}{2} \psi_2^\dagger \overset \leftrightarrow \partial_t \psi_2 - \frac{1}{2 m_2} \partial_x \psi_2^\dagger \partial_x \psi_2 \nonumber \\ &\qquad \qquad + \frac{i}{2} a \left( m_1 | \psi_1 |^2 \psi_2^\dagger \overset \leftrightarrow \partial_x \psi_2 - m_2 | \psi_2 |^2 \psi^\dagger_1 \overset \leftrightarrow \partial_x \psi_1 \right) - V ( | \psi_1 |^2 , | \psi_2 |^2 ) . \end{align} We have introduced a velocity dependent interaction $a$ that will lead to Hall drag and which is marginal in mean field theory. It is consistent with the symmetries of the problem, as one may see by writing the Lagrangian in a manifestly Galilean invariant form \begin{align} \mathcal L = - \frac{1}{2 m_1} D_I \psi^\dagger_1 D^I \psi_1 - \frac{1}{2 m_2} D_I \psi^\dagger_2 D^I \psi_2 + \frac i 2 a \epsilon_{AB} \psi^\dagger_1 \overset \leftrightarrow D {}^A \psi_1 \psi^\dagger_2 \overset \leftrightarrow D {}^B \psi_2 - V (| \psi_1 |^2 , | \psi_2 |^2 ). \end{align} Perhaps a more familiar way of seeing Galilean invariance is to consider the interaction potential between two particles in the first-quantized picture.
In the two-particle quantum mechanics picture, the field theory (\ref{model}) involves an interaction potential \begin{align} \hat V = \frac 1 2 a m_1 m_2 \left \{ \delta ( \hat X_1 - \hat X_2 ), \frac{1}{m_1} \hat P_1 - \frac{1}{m_2}\hat P_2 \right \} , \end{align} which is Galilean invariant since it references only the relative velocities of the two particles. While we do not currently have a proposal on how this can be done, it would be interesting to try to engineer such an interaction in future cold atom experiments. \subsection{Computing the Hall Drag}\label{sec:ModelHallDrag} In the condensed phase $\psi_i = e^{- i \varphi_i } \sqrt{n_i}$, this reads \begin{align} \mathcal L = n_1 \mu_1 + n_2 \mu_2 + a n_1 n_2 \lambda - V (n_1 , n_2 ) , \end{align} where for this section we are denoting $\mu_i = D_t \varphi_i - \frac{1}{2 m_i} D_x \varphi_i D_x \varphi_i$. The Gross-Pitaevskii equations (GPEs) that follow from varying $n_i$ then read \begin{align}\label{EOM} \mu_1 = \frac{\partial V}{\partial n_1} - a n_2 \lambda , &&\mu_2 = \frac{\partial V}{\partial n_2} - a n_1 \lambda . \end{align} This is the usual form of the equation determining the condensate density in a potential well as a function of the chemical potentials, but now the shape of the well depends on the relative velocities of the condensates. Plugging in $\mu_1$ and $\mu_2$, we find $\mathcal L$ as a function of the condensate densities and $\lambda$ \begin{align} \mathcal L = - a n_1 n_2 \lambda + n_1 \frac{\partial V}{\partial n_1} + n_2 \frac{\partial V}{\partial n_2} - V .
\end{align} The pressure is however a function of the chemical potentials $p(\mu_1 , \mu_2 , \lambda)$, so we have \begin{gather} \frac{\partial p}{\partial \lambda} = - a n_1 n_2 - a \frac{ \partial n_1}{\partial \lambda } n_2 \lambda - a n_1 \frac{ \partial n_2 }{\partial \lambda } \lambda + n_1 \frac{\partial^2 V}{\partial n_1^2} \frac{ \partial n_1 }{\partial \lambda } + n_1 \frac{\partial^2 V}{\partial n_1 \partial n_2} \frac{ \partial n_2 }{\partial \lambda } + n_2 \frac{\partial^2 V}{\partial n_1 \partial n_2} \frac{ \partial n_1 }{\partial \lambda } + n_2 \frac{\partial^2 V}{\partial n_2^2} \frac{ \partial n_2 }{\partial \lambda } . \end{gather} Differentiating the GPEs (\ref{EOM}) with respect to $\lambda$ also gives \begin{align} \frac{\partial^2 V}{\partial n_1^2} \frac{ \partial n_1 }{\partial \lambda } + \frac{\partial^2 V}{ \partial n_1 \partial n_2} \frac{ \partial n_2 }{\partial \lambda } - a \frac{ \partial n_2 }{\partial \lambda } \lambda - a n_2 = 0 , \nonumber \\ \frac{\partial^2 V}{ \partial n_1 \partial n_2} \frac{ \partial n_1 }{\partial \lambda } + \frac{\partial^2 V}{\partial n_2^2} \frac{ \partial n_2 }{\partial \lambda } - a \frac{ \partial n_1 }{\partial \lambda } \lambda - a n_1 = 0 . \end{align} Plugging these into $\frac{\partial p}{\partial \lambda}$ then gives a Hall drag proportional to the product of the mass densities $\rho_i = m_i n_i$ \begin{align} c^H = m_1 m_2 \frac{\partial p}{\partial \lambda}= a \rho_1 \rho_2 . \end{align} \section{Conclusion}\label{sec:Conclusion} In this work we have demonstrated a general procedure to construct EFTs for Galilean invariant superfluid mixtures to any order in a momentum expansion. We have carried out the construction to lowest order and found agreement with the results of Greiter, Witten, and Wilczek\cite{Greiter:1989qb} in the single component case.
It would also be interesting to look at the next order to confirm agreement with Son and Wingate \cite{Son2006} as well as to investigate what new transport is allowed at this order in the presence of multiple condensates, particularly for atoms at unitarity. At lowest order, we have found two new parity-odd transport coefficients: the Hall drag, and a pinning of mass, charge, and energy to relative velocity. Both give rise to currents that have simple geometric interpretations in terms of the volumes and areas of various simplices, and both require a minimum number of condensates to be realized, a number that depends on the spatial dimensionality. We have also furnished a weak coupling example in one spatial dimension that exhibits Hall drag. It would be interesting to try to engineer such an interaction in cold atom traps; in any case, these effects should generically exist in highly dense mixtures with broken $P$ and $T$ and should be observable in experiment. In this paper we have assumed only Galilean invariance and particle number conservation. However, cold atom mixtures can be created in the lab with a variety of (approximate) flavor symmetries by condensing multiple hyperfine states of a single isotope. See \cite{kasamatsu2005} for a review of these so-called ``fictitious spinor'' condensates. It would be interesting to see how these further constrain transport. True spinor condensates are also experimentally accessible (see \cite{kawaguchi2012spinor} for a review). These systems display a complex phase diagram, realizing different types of magnetic order, and it would be interesting to investigate spin transport in these phases within the formalism we have outlined here. \begin{acknowledgments} We are grateful for many fruitful conversations with D. T. Son, M. M. Roberts, and K. Prabhu and input from M. Ueda. This work is supported by the University of California. \end{acknowledgments}
\section{The Linear Algebra Set-Up}\label{linalgsec} We will work over a field $\mathbb K$ of characteristic $0$. Given $m\geq0$, $n\geq 1$, $d\geq1$, there exists a natural sheaf homomorphism $$\Pi(n,d,m):\mathcal J^m(d)\longrightarrow\ocal_{\proj^n}(d)$$ where $\mathcal{J}^m(d)$ is the $m$th jet bundle of $\ocal_{\proj^n}(d)$. Define $V^n(d)=H^0\ocal_{\proj^n}(d)$. Then by $\gamma(n,d,m)$ we denote the natural prolongation map $$\gamma(n,d,m):V^n(d)\longrightarrow H^0\mathcal J^m(d).$$ To aid notation, if $\mathbf a=(a_0,\ldots,a_n)\in\N^{n+1}$, we define $|\mathbf a|=a_0+\cdots+a_n$, $D(d)=\left\{\left.\mathbf a\in\N^{n+1}\right||\mathbf a|=d\right\}$, and $\mathbf{X^a}=X_0^{a_0}\cdots X_n^{a_n}$. We also notice that $D(d)$ can be visualized as the integral points of $d$ times the standard $n$-simplex. For $n=2$, we can illustrate $D(d)$ in a triangle as shown in Figure \ref{d7}. If $n$ is unclear from context, we may write $D^n(d)$. \begin{figure} \centering \includegraphics{D7labeled} \caption{An illustration of $D(7)$. In subsequent illustrations, we will omit the vertex labels.}\label{d7} \end{figure} We can identify $V^n(d)$ with the vector space of homogeneous degree-$d$ polynomials in $\mathbb K[\mathbf X]=\mathbb{K}[X_0,\ldots,X_n]$, a $d+n\choose n$-dimensional space. Note that $V^n(d)$ has a natural basis consisting of monomials $\left\{\mathbf{X^a}\left|\mathbf a\in D(d)\right.\right\}$. Also, we can identify $H^0\mathcal J^{m}(d)$ with a subspace of $$V^n(d-m+1)\otimes_{\mathbb K}\mathbb K^{D(m-1)}$$ by thinking of a section of the jet bundle $\mathcal J^{m}(d)$ as the tuple of all order-$(m{-}1)$ partial derivatives, which are indexed by $D(m-1)$, of a polynomial in $V^n(d)$. Each of these derivatives is a homogeneous degree-$(d{-}m{+}1)$ polynomial, say in $\mathbb K[\mathbf P]$. Given these identifications, $\gamma(n,d,m)$ sends a homogeneous degree-$d$ polynomial to the ordered set of its order-$(m{-}1)$ partial derivatives. 
Specifically, $$\gamma(n,d,m)(f)=\left(\left.\frac{\partial^{|\mathbf b|}f}{\mathbf{\partial X^b}}(\mathbf P)\right|\mathbf b\in D(m-1)\right).$$ Now $\gamma(n,d,m)$ is represented by the matrix $M(m)$ with columns indexed by $D(d)$, rows indexed by $D(m-1)$, and polynomial entries in $V^n(d-m+1)$ via \begin{equation}\label{defineM} M(m)_{[\mathbf a,\mathbf b]}=\frac{\partial^{|\mathbf b|}\mathbf{X^a}}{\mathbf{\partial X^b}}(\mathbf P)=\left(\prod_{j=0}^{b_0-1}(a_0-j)\right)\cdots\left(\prod_{j=0}^{b_n-1}(a_n-j)\right)\mathbf P^{\mathbf{a-b}}. \end{equation} For any $p\in\proj^n$, there is a natural evaluation map $$\nu_{p}:H^0\mathcal J^{m}(d)\longrightarrow J^m_{p}(d)$$ where $J^m_{p}(d)$ is the jet space over $p$---that is, the fiber of $\mathcal J^m(d)$ at $p$. Now, if $r\geq1$ and $\mathbf m=(m_1,\ldots,m_r)\in\N^r$, let $\mathcal{J}=\bigoplus_{i=1}^r\mathcal{J}^{m_i}(d)$, and define $\Pi$ to be the sheaf homomorphism $$\Pi=\sum_{i=1}^r\Pi(n,d,m_i):\mathcal{J}\longrightarrow\ocal_{\proj^n}(d).$$ Therefore we have $H^0\mathcal J=\bigoplus_{i=1}^r H^0\mathcal J^{m_i}(d)$. To keep track of the fact that the sum is direct, we put an additional index on the indeterminates of polynomials in $H^0\mathcal J^{m_i}(d)$. In particular, we think of $H^0\mathcal J^{m_i}(d)$ as consisting of tuples of polynomials in $\mathbb K[P_{i,0},\ldots,P_{i,n}]=\mathbb K[\mathbf P_i]$ so that $M(m_i)$ is a matrix with entries in $\mathbb K[\mathbf P_i]$. We then define $$\gamma=\left(\gamma(n,d,m_1),\ldots,\gamma(n,d,m_r)\right):V^n(d)\longrightarrow H^0\mathcal J.$$ Then $\gamma$ can be represented by the matrix $M=M(\mathbf m)$ defined by $$M=\left[\begin{array}{c} M(m_1)\\ \vdots\\ M(m_r) \end{array}\right].$$ To aid notation, we let $U_i=\{(i,\mathbf b)|\mathbf b\in D(m_i-1)\}$ and $U=\bigcup_{i=1}^r U_i$ so that the rows of $M$ are indexed by $U$.
In particular, we can say $$M_{[\mathbf a,(i,\mathbf b)]}=M(m_i)_{[\mathbf a,\mathbf b]}.$$ Basically, $M$ multiplies the coefficient vector of a polynomial in $V^n(d)$ on the right, and yields a collection of polynomials, which are indexed by $U$, in the variables $\{P_{i,j}\}$. For any $r$-tuple of points $p_1,\ldots,p_r\in\proj^n$, we have an evaluation map $$\nu_{p_1,\ldots,p_r}:H^0\mathcal J\longrightarrow\bigoplus_{i=1}^r J^{m_i}_{p_i}(d)$$ whose components are the evaluation maps $\nu_{p_i}$ on $H^0\mathcal J^{m_i}(d)$. Define $V^n(d,(m_1p_1,\ldots,m_rp_r))$ to be the kernel of $\nu_{p_1,\ldots, p_r}\gamma$---the space of sections of $\ocal_{\proj^n}(d)$ vanishing with multiplicity at least $m_i$ at $p_i$ for each $i$. If the points $p_i$ are taken to be general, we suppress them in the notation as $V^n(d,(m_1,\ldots,m_r))=V^n(d,\mathbf m)$. \begin{definition} We say a section $f=\sum_{\mathbf a\in D(d)}\kappa_{\mathbf a}\mathbf{X^a}\in V^n(d)$ is \emph{supported} on a subset $D$ of $D(d)$ if $\kappa_{\mathbf a}=0$ whenever $\mathbf a\notin D$. We let $V^n_D$ be the subspace of $V^n(d)$ of those sections supported on $D$. We make the analogous definitions for $V^n_D(m_1p_1,\ldots,m_rp_r)$ and $V^n_D(m_1,\ldots,m_r)$. \end{definition} We also define $\gamma_D=\gamma|_{V^n_D(\mathbf m)}$, which is represented by the sub-matrix $M_D(\mathbf m)$ of $M(\mathbf m)$ containing only those columns indexed by elements $\mathbf a\in D$. We then have $$\ker (\nu_{p_1,\ldots,p_r}\gamma_D)=V_D^n(d,(m_1p_1,\ldots,m_rp_r)).$$ The following proposition allows us to study $V^n_D(\mathbf m)$, where the points are taken to be general, by focusing on the matrix $M_D(\mathbf m)$ with polynomial entries. 
\begin{proposition} For general points $p_1,\ldots,p_r\in\proj^n$, $$\rk\gamma_D=\rk (\nu_{{p_1},\ldots,{p_r}}\gamma_D).$$ \end{proposition} If we de-homogenize the system---say, by setting one of the coordinates $X_i$ to $1$---then the $n=2$ case is Dumnicki's Proposition 9 in \cite{MR2289179}. The proof here is essentially the same. \begin{proof} Since $\nu_{p_1,\ldots,p_r}$ is a homomorphism, it suffices to prove that $\rk\gamma_D\leq\rk(\nu_{p_1,\ldots,p_r}\gamma_D)$. The rank of $\gamma_D$ is the size of the largest minor of $M=M_D(\mathbf m)$ which is not identically zero as a polynomial---call this polynomial $\mu$. The evaluation of $\mu$ at general nonzero points $\hat{p_1},\ldots,\hat{p_r}\in\mathbb K^{n+1}$ is then also nonzero. Let $\hat M$ be the matrix with scalar entries obtained by evaluating each entry of $M$ at the points $\hat{p_1},\ldots,\hat{p_r}$. Letting $p_i$ be the point in $\proj^n$ over which $\hat{p_i}$ lies, we can (non-canonically) identify $\bigoplus_{i=1}^r J^{m_i}_{p_i}(d)$ with $\mathbb K^U$ so that $\hat M$ is the matrix representing $\nu_{{p_1},\ldots,{p_r}}\gamma_D$. We then see that the corresponding minor of $\hat M$ is exactly $\mu(\hat{p_1},\ldots,\hat{p_r})$, which is known to be nonzero, and so $\hat M$ has at least the same rank as $M$. \end{proof} \begin{corollary}\label{rankM} For general points $p_1,\ldots,p_r$, we have $$\dim V_D^n(\mathbf m)=\# D-\rk M_D(\mathbf m).$$ \end{corollary} We will use the word \emph{triple} to refer to the data of $(n,D,\mathbf m)$ with the understanding that $n\geq1$, $D\subseteq D^n(d)$ for some $d\geq 1$, and $\mathbf m\in\N^{r}$ for some $r\geq 1$. \begin{definition} We call a triple $(n,D,\mathbf m)$ \emph{non-special} if the following equivalent conditions are met. \begin{enumerate} \item $M_D(\mathbf m)$ has full rank; \item $\dim V^n_D(\mathbf m)$ equals the \emph{expected dimension} $$\edim(n,D,\mathbf m)=\max\left\{\#D-\#U,0\right\}.$$ \end{enumerate} A triple is \emph{special} if it is not non-special.
If $n$ and $\mathbf m$ are understood, we may call $D$ special or non-special as well. \end{definition} Notice that this definition specializes to the one given in the introduction when $D=D(d)$ since $\#D(d)={d+n\choose n}$. \begin{remark} We point out that the definition splits depending on the sign of $\#D-\#U$. In particular, \begin{enumerate} \item if $\#D<\#U$, then we say $(n,D,\mathbf m)$ is \emph{over-determined}, and it is non-special if and only if $V^n_D(\mathbf m)=0$; \item if $\#D>\#U$, then we say $(n,D,\mathbf m)$ is \emph{under-determined}, and it is non-special if and only if $\dim V^n_D(\mathbf m)=\#D-\#U$; \item if $\#D=\#U$, then we say $(n,D,\mathbf m)$ is \emph{well-determined}, and it is non-special if and only if $\dim V^n_D(\mathbf m)=\#D-\#U=0$. \end{enumerate} By definition, we always have \begin{equation}\label{intersection} V^n(d,(m_1p_1,\ldots,m_rp_r))=\bigcap_{i=1}^rV^n(d,m_ip_i). \end{equation} Another characterization of speciality for under- or well-determined triples is that a triple is non-special exactly when there are points $p_i$ general enough so that the codimension of the left-hand side is equal to the sum of the codimensions of the spaces being intersected on the right-hand side. \end{remark} \section{Partitions of Monomials}\label{partitionssec} Here we present a generalization of Dumnicki and Jarnicki's notion of ``reduction'' from \cite{MR2289179, MR2543429, MR2325918}. The content of this generalization is that instead of reducing one point at a time, we may reduce by several at once. Our notation will also differ slightly from the papers cited because we do not de-homogenize our polynomials by choosing an affine chart. Instead we opt to preserve the symmetry afforded by working over all of $\proj^n$, which will be put to use in Section \ref{constructions}.
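Before proceeding, we note that the rank characterization above makes the dimension of $V^n(d,\mathbf m)$ directly computable: build the matrix of partial derivatives from (\ref{defineM}), evaluate at random points, and take the corank. The following Python sketch is purely illustrative and not part of the formal development (all function names are ours); since it uses random rational points, it recovers the generic rank only with high probability.

```python
from fractions import Fraction
from itertools import product
import random

def D(n, d):
    """Exponent vectors of degree-d monomials in X_0, ..., X_n
    (the integral points of d times the standard n-simplex)."""
    return [a for a in product(range(d + 1), repeat=n + 1) if sum(a) == d]

def entry(a, b, p):
    """One matrix entry as in (defineM): the b-th partial derivative
    of X^a, viewed as a polynomial in P and evaluated at P = p."""
    coef = 1
    for ai, bi in zip(a, b):
        for j in range(bi):
            coef *= ai - j          # falling factorial; zero when bi > ai
    if coef == 0:
        return Fraction(0)
    val = Fraction(coef)
    for ai, bi, pi in zip(a, b, p):
        val *= Fraction(pi) ** (ai - bi)
    return val

def rank(rows):
    """Rank over Q via exact Gaussian elimination."""
    rows = [list(r) for r in rows]
    rk = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def dim_V(n, d, ms, trials=3):
    """dim V^n(d, (m_1, ..., m_r)) at random (hence, with high probability,
    general) points, computed as #D - rk M via the rank characterization."""
    cols = D(n, d)
    best_rank = 0
    for _ in range(trials):
        rows = []
        for m in ms:
            p = [random.randint(1, 10 ** 6) for _ in range(n + 1)]
            rows += [[entry(a, b, p) for a in cols] for b in D(n, m - 1)]
        best_rank = max(best_rank, rank(rows))
    return len(cols) - best_rank
```

For instance, `dim_V(2, 2, [1] * 5)` returns $1$ (the unique conic through five general points), while `dim_V(2, 4, [2] * 5)` returns $1$ even though the expected dimension is $0$: the well-determined triple $(2,D(4),2^{\times 5})$ is special, its one section being the double of the conic through the five points.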
As a bit of notation, if $A$ is any matrix with rows indexed by $I$ and columns indexed by $J$, we will write $(I',J')$ to denote the sub-matrix with rows in $I'\subseteq I$ and columns in $J'\subseteq J$. As a convention, we will set $\det(\varnothing,\varnothing)=1$. \begin{lemma}[A Generalized Laplace Rule (GLR)]\label{glr} Let $A$ be any square matrix with rows indexed by (an ordered set) $I$ and columns indexed by (an ordered set) $J$, and let $(I_1,\ldots,I_s)$ be a partition of $I$. Let $\mathcal{P}$ be the set of partitions $(J_1,\ldots,J_s)$ of $J$ with $\#I_i=\#J_i$ for all $i$. Then \begin{equation} \det A=\sum_{(J_1,\ldots,J_s)\in\mathcal{P}}\pm\left(\prod_{i=1}^s\det(I_i,J_i)\right)\label{glreqn} \end{equation} \end{lemma} \begin{proof} Recursively use the Generalized Laplace Rule for $s=2$. \end{proof} Given a triple $(n,D,\mathbf m)$, define $U$, $U_i$, and $M=M_D(\mathbf m)$ as above. Then let $(U',D')$ be some square sub-matrix of $M$, and define $U_i':=U'\cap U_i$. Finally let $\mathcal P(U',D')$ be the set of partitions $\mathbf E=(E_1,\ldots,E_r)$ of $D'$ with $\#E_i=\#U_i'$ for all $i$. Then the GLR gives us \begin{equation}\label{applyglr} \det(U',D')=\sum_{\mathbf E\in\mathcal P(U',D')}\pm\left(\prod_{i=1}^r\det(U_i',E_i)\right). \end{equation} In this situation, we will refer to the summand associated to $\mathbf E$ in (\ref{applyglr}) as $\sigma(\mathbf E)$. For any $\mathbf E\in\mathcal P(U',D')$, we can compute directly from (\ref{defineM}) that, for some scalar $\kappa$, \begin{eqnarray}\label{summandformula} \det(U_i',E_i)=\kappa\mathbf P_i^{\mathbf a_i(\mathbf E)-\mathbf b_i}\\ \nonumber\\ \mathbf a_i(\mathbf E):=\sum_{\mathbf a\in E_i}\mathbf a,\qquad \mathbf b_i:=\sum_{(i,\mathbf b)\in U_i'}\mathbf b\nonumber \end{eqnarray} In particular, (\ref{summandformula}) is either zero or has one term as a polynomial. 
Hence $\sigma(\mathbf E)$ is some scalar multiple of the monomial \begin{equation}\label{theterm} \mathbf P_1^{\mathbf a_1(\mathbf E)-\mathbf b_1}\cdots \mathbf P_r^{\mathbf a_r(\mathbf E)-\mathbf b_r} \end{equation} Notice that the $\mathbf b_i$ depend only on the choice of $U'\subseteq U$, and not on the partition $\mathbf E$. \begin{definition}\label{exceptional} Given a triple $(n,D,\mathbf m)$ and a square sub-matrix $(U',D')$ of $M$, we call a partition $\mathbf E\in\mathcal P(U',D')$ \emph{exceptional} (with respect to $(U',D')$) if it satisfies the properties \begin{enumerate} \item\label{nondegen} $\sigma(\mathbf E)\neq0$. \item\label{unique} If $D''\subseteq D$ has $\#D''=\#D'$, and $\mathbf F\in\mathcal P(U',D'')$ is a different partition with $\mathbf a_i(\mathbf F)=\mathbf a_i(\mathbf E)$ for all $i$, then $\sigma(\mathbf F)=0$. \end{enumerate} If additionally, $(U',D')$ is a maximal square sub-matrix of $M$, then we call $\mathbf E$ a \emph{fully exceptional partition}. In the case where $U'\subseteq U_r$, so that $\mathbf E=(\varnothing,\ldots,\varnothing,E_r)$, we call the partition (or just $E_r$) a \emph{reduction}. \end{definition} \begin{remark} Notice that if $(n,D,\mathbf m)$ is over- (respectively under-, well-) determined, then $(U',D')$ is maximal if and only if $D'=D$ (respectively $U'=U$, $(U',D')=M$). One fact to keep in mind is that if $\#E_i=\#F_i$, then $\mathbf a_i(\mathbf E)=\mathbf a_i(\mathbf F)$ if and only if the centroid of the points in $E_i$ is the same as the centroid of the points in $F_i$. This is a visual trick which may be helpful for looking at examples. \end{remark} We now state the first main theoretical result. \begin{theorem}\label{maintheorem} Suppose $(n,D,\mathbf m)$ admits an exceptional partition with respect to $(U',D')$ with $U'\subseteq U_{k+1}\cup\cdots\cup U_r$ for some $k\leq r$. 
Then $$\dim V^n_D(m_1,\ldots,m_r)\leq\dim V^n_{D\smallsetminus D'}(m_1,\ldots,m_k).$$ In particular, if $k=0$, then $\dim V^n_D(\mathbf m)\leq\#(D\smallsetminus D')$. \end{theorem} We list some special cases in the following corollary. \begin{corollary}\label{consequences}~ \begin{enumerate} \item A triple which admits a fully exceptional partition is non-special. \item If $(n,D(d),\mathbf m)$ is over- or well-determined and admits a fully exceptional partition, then for general points $p_1,\ldots, p_r$, the linear series in $\ocal_{\proj^n}(d)$ of hyper-surfaces with multiplicity $m_i$ at $p_i$ for all $i$ is empty. \end{enumerate} \end{corollary} We will use the abbreviated notation $$m^{\times r}=(\;\underbrace{m,\ldots,m}_{\text{$r$ times}}\;).$$ \begin{example}\label{exceptionalnotreduction} Here we apply Corollary \ref{consequences}.2 to show that no degree-$7$ curve in $\proj^2$ has multiplicity $3$ at each of $6$ general points. That is, we show $V^2(7,3^{\times 6})=0$. We claim that Figure \ref{notredpic} illustrates a fully exceptional partition, call it $\mathbf E$, of $D(7)$. \begin{figure} \centering \includegraphics{exceptionalanums} \caption{An exceptional partition for $(2,D(7),3^{\times 6})$. $E_i$ consists of the points marked with $i$, for $i=1,2,3,4,5,6$.}\label{notredpic} \end{figure} That $\sigma(\mathbf E)\neq0$ follows from Corollary \ref{af} below. That no other partition $\mathbf F$ has $\mathbf a_i(\mathbf E)=\mathbf a_i(\mathbf F)$ for $1\leq i\leq6$ can be checked exhaustively, or eyeballed by observing that no other partition $\mathbf F$ has the same sextuple of centroids of its parts as $\mathbf E$. (Begin by noticing that there are only $3$ possible sets of $6$ points with the same centroid as $E_1$. For each of these, there are $6$ or $7$ possible sets of $6$ points disjoint from the first with the same centroid as $E_2$.
Among the $20$ cases you end up with, only $3$ allow for a set of $6$ points disjoint from the first two sets with the same centroid as $E_3$. Finally, among these $3$ possibilities, only one admits a set of $6$ points disjoint from the other sets with the same centroid as $E_4$, and that is the case that is shown. By symmetry, we have shown uniqueness.) \end{example} \begin{proof}[Proof of Theorem \ref{maintheorem}] We start by noting that, in light of Corollary \ref{rankM}, it suffices to prove \begin{equation}\label{rankstatement} \rk M_D(m_1,\ldots,m_r)\geq\rk M_{D\smallsetminus D'}(m_1,\ldots,m_k)+\#D'. \end{equation} Let $(U'',D'')$ be a maximal nonsingular submatrix of $$M_{D\smallsetminus D'}(m_1,\ldots,m_k)=(U_{1}\cup\cdots\cup U_k,D\smallsetminus D').$$ Then let $\mathcal P$ be the set of partitions $(C',C'')$ of $D'\cup D''$ with $\#C'=\#U'$ and $\#C''=\#U''$. Then by the GLR we have $$\det (U'\cup U'',D'\cup D'')=\sum_{(C',C'')\in\mathcal P}\pm\det(U',C')\det(U'',C'').$$ Again applying the GLR, we have $$\det(U',C')=\sum_{\mathbf F\in\mathcal P(U',C')}\pm\left(\prod_{i=1}^r\det(U_i',F_i)\right).$$ Combining these, we get \begin{equation}\label{expansion} \det(U'\cup U'',D'\cup D'')=\sum_{\substack{(C',C'')\in\mathcal P,\\ \mathbf F\in\mathcal P(U',C')}}\pm\left(\det(U'',C'')\prod_{i=k+1}^r\det(U_i',F_i)\right). \end{equation} We claim that the only summand of (\ref{expansion}) containing nonzero terms divisible by $\sigma(\mathbf E)$ is the one corresponding to $(D',D'')\in\mathcal P,\;\mathbf E\in\mathcal P(U',D')$. Furthermore we note that this summand is nonzero by the assumption that $(U'',D'')$ is nonsingular and that $\sigma(\mathbf E)\neq0$. If the claim is true, these terms cannot cancel with terms from other summands, and so the determinant in (\ref{expansion}) is nonzero as a polynomial. That is, $\rk M_D(\mathbf m)\geq\#(U'\cup U'')$, proving (\ref{rankstatement}).
To prove the claim, first notice that $\det(U'',C'')$ is a polynomial in $\{P_{i,j}|i\leq k\}$ and $\det(U',C')$ is a polynomial in $\{P_{i,j}|i> k\}$. Hence a nonzero summand of (\ref{expansion}) contains terms divisible by $\sigma(\mathbf E)$ if and only if $\prod_{i=k+1}^r\det(U_i',F_i)$ does. And, by the exceptionality of $\mathbf E$, this product is a nonzero multiple of $\sigma(\mathbf E)$ if and only if $C'=D'$ and $\mathbf F=\mathbf E$. \end{proof} \section{Generalized Reduction Algorithms}\label{generalizedreductionalgorithms} In order to obtain sharper results, we can make a slight generalization to Theorem \ref{maintheorem}. \begin{corollary}\label{gaug} Suppose $D\subseteq G\subseteq D(d)$, and $(n,G,\mathbf m)$ admits an exceptional partition with respect to some $(U',D')$ with $D'\supseteq G\smallsetminus D$ and $U'\subseteq U_{k+1}\cup\cdots\cup U_r$, $k\leq r$. Then $$\dim V^n_D(m_1,\ldots,m_r)\leq\dim V^n_{D\smallsetminus D'}(m_1,\ldots,m_k).$$ In particular, if $k=0$, then $\dim V^n_D(\mathbf m)\leq\#(D\smallsetminus D')$. \end{corollary} \begin{proof} Since $G\supseteq D$, we certainly have $$\dim V^n_D(m_1,\ldots,m_r)\leq\dim V^n_G(m_1,\ldots,m_r).$$ Then applying Theorem \ref{maintheorem}, and noting that $G\smallsetminus D'=D\smallsetminus D'$ by assumption, we get that $$\dim V^n_G(m_1,\ldots,m_r)\leq\dim V^n_{D\smallsetminus D'}(m_1,\ldots,m_k).$$ \end{proof} It is a slightly annoying point that we allow for the possibility that $G$ properly contains $D$. It is not even obvious that this allowance provides any additional information because we are essentially adding points to $D$ only to throw them away again. However, Example \ref{gnecessaryex} shows that the generalization is nontrivial. \begin{example}\label{gnecessaryex} In Figure \ref{gnecessaryfig}, we illustrate a reduction $D'$ of $G$ containing $G\smallsetminus D$. However, $D\cap D'$ is not a reduction of $D$.
In particular, $D''$, which is also illustrated, has $\#D'=\#D''$, the same centroid as $D'$, and $(U'',D'')$ nonsingular for suitably chosen $U''$. These facts can be proved using Corollary \ref{nonspeccond} below. \begin{figure} \centering \includegraphics{gnecessary} \caption{A reduction $D'$ of $G$ whose intersection with $D$ is not a reduction of $D$. $D$ is labeled by solid points, $D'$ is labeled by $\times$'s, $G=D\cup D'$, and $D''$ is obtained by replacing the elements of $D'$ at the tails of the two arrows with the elements at the heads.}\label{gnecessaryfig} \end{figure} \end{example} Pushing this generalization further, we may want to use Corollary \ref{gaug} recursively, which leads us to the following definition. \begin{definition}\label{gradef} A \emph{generalized reduction algorithm} for a triple $(n,D,\mathbf m)$ is a sequence of integers $0=r_0<r_1<\cdots<r_s=r$ and nested subsets $D=D_s\supseteq\cdots\supseteq D_1\supseteq D_0$ so that for $1\leq i\leq s$, there exist $G_i\supseteq D_i$ so that $\left(n,G_i,\left(m_1,\ldots,m_{r_i}\right)\right)$ admits an exceptional partition with respect to $\left(U^{(i)},G_i\smallsetminus D_{i-1}\right)$ for some $U^{(i)}\subseteq U_{r_{i-1}+1}\cup\cdots\cup U_{r_i}$. In the case where $s=r$ (so that each exceptional partition is a reduction), we simply call this a \emph{reduction algorithm}. If $\#D_0=\edim(n,D,\mathbf m)$, then we call the (generalized) reduction algorithm \emph{full}. \end{definition} \begin{theorem}\label{genredalgtheorem} If $(n,D,\mathbf m)$ admits a generalized reduction algorithm, then $\dim V^n_D(\mathbf m)\leq\#D_0$, where $D_0$ is as in Definition \ref{gradef}. In particular, a triple which admits a full generalized reduction algorithm is non-special. \end{theorem} \begin{remark}\label{comparedumredalg} This is a generalization of Dumnicki and Jarnicki's notion of ``reduction algorithm'' from \cite{MR2325918}. Our primary innovation here is the case where $s<r$. 
That is, instead of reducing one point at a time, we may reduce by several points at once. Generalized reduction algorithms also generalize applications of Dumnicki's ``diagram-cutting'' method from \cite{MR2289179} for showing non-speciality. The fully exceptional partition given in Example \ref{exceptionalnotreduction} above is a demonstration of why this is a nontrivial generalization; it cannot be produced one point at a time by a reduction algorithm. To see this, notice that no single part of the partition has a centroid which cannot arise as the centroid of another non-special collection of six points. That being said, the triple in question, $(2,D(7),3^{\times6})$, does admit a full reduction algorithm as illustrated by Figure \ref{redpic} (see Example \ref{alg1ex} below). The author is not aware of a triple for which a full generalized reduction algorithm exists but a full reduction algorithm does not. In fact, it is apparently unknown whether any non-special triples exist which do not admit full reduction algorithms (see Conjecture 19 in \cite{MR2325918}). \end{remark} \begin{proof}[Proof of Theorem \ref{genredalgtheorem}] By Corollary \ref{gaug}, we know that for $1\leq i\leq s$, $$\dim V^n_{D_i}\left(m_1,\ldots,m_{r_i}\right)\leq\dim V^n_{D_{i-1}}\left(m_1,\ldots,m_{r_{i-1}}\right).$$ Hence, we get that $$\dim V^n_{D_{s}}(m_1,\ldots,m_r)\leq\dim V^n_{D_0}=\#D_0,$$ which proves the theorem. \end{proof} We note that a generalized reduction algorithm gives rise to a partition of $D\smallsetminus D_0$. As demonstrated by Example \ref{gnecessaryex}, the partition is not necessarily exceptional if $G_i$ properly contains $D_i$ for some $i$. However, if $G_i=D_i$ for all $i$, the resulting partition is necessarily exceptional. This fact implies that the generalization afforded by Theorem \ref{genredalgtheorem} is only useful for proving non-speciality of over-determined triples.
For an under- or well-determined triple, $\#D_0=\edim(n,D,\mathbf m)$ implies $G_i=D_i$ for all $i$. Now that we have established Theorem \ref{genredalgtheorem}, we can focus on techniques for producing exceptional partitions and reductions. Section \ref{constructions} describes some criteria for $\sigma(\mathbf E)\neq0$ and constructions for reduction algorithms, and Section \ref{results} will use these constructions---as well as some \emph{ad hoc} methods---to build full generalized reduction algorithms for some interesting examples. \section{Constructions}\label{constructions} Let $\preceq$ be any monomial ordering on $\mathbb{K}[X_0,\ldots,X_n]$. Notice $\preceq$ induces an ordering on $\N^{n+1}$, which we will also call $\preceq$, via $$\mathbf a\preceq \mathbf b\Longleftrightarrow\mathbf{X^a}\preceq\mathbf{X^b}.$$ For any $D\subseteq D(d)$, $d\geq 2$, and $c\geq1$, define $$\mathcal E(D,c)=\{E\subseteq D|\#E=c\}.$$ \begin{definition} Suppose $E,F\in\mathcal E(D,c)$ with $E=\{\mathbf a_1,\ldots,\mathbf a_c\}$ and $F=\{\mathbf b_1,\ldots,\mathbf b_c\}$ with $\mathbf a_i\prec\mathbf a_{i+1}$ and $\mathbf b_i\prec \mathbf b_{i+1}$ for $1\leq i<c$. Then we define the \emph{$\preceq$-lexicographic ordering} on $\mathcal E(D,c)$, for which we will abuse notation and also call $\preceq$, by $$E\prec F\Longleftrightarrow\text{ there exists $k\geq1$ so that $\mathbf a_k\prec\mathbf b_k$, and $\mathbf a_i=\mathbf b_i$ for all $i<k$.}$$ \end{definition} \begin{lemma} The $\preceq$-lexicographic ordering on $\mathcal E(D,c)$ is a well-ordering for any monomial ordering $\preceq$. \end{lemma} \begin{proof} This is probably standard, and the proof works for the lexicographic ordering of finite subsets of any well-ordered set. In any event, if $\mathcal F\subseteq \mathcal E(D,c)$, then take the minimal element $\mathbf a_1$ appearing in any $E\in \mathcal F$, and let $\mathcal F_1\subseteq \mathcal F$ be the collection of all $E\in\mathcal F$ containing $\mathbf a_1$. 
Then recursively take the minimal element $\mathbf a_{i+1}$ different from $\mathbf a_1,\ldots,\mathbf a_i$ appearing in any $E\in\mathcal F_i$, and let $\mathcal F_{i+1}$ be the collection of $E\in\mathcal F_i$ containing $\mathbf a_{i+1}$. Then $\mathcal F_c$ will contain only the minimal element $E$ of $\mathcal F$. \end{proof} Notice that $E\preceq F$ does not imply $\sum_{\mathbf a\in E}\mathbf a\preceq\sum_{\mathbf b\in F}\mathbf b$. For example, using the standard lexicographic ordering on $\mathbb K[X_0,X_1,X_2]$, we have \begin{equation}\label{smallersum} \{(2,0,0),(0,0,2)\}\prec\{(1,1,0),(1,0,1)\},\quad (2,0,2)\succ(2,1,1). \end{equation} Define $\tilde D(d)$ to be the hyperplane in $\Q^{n+1}$ that contains $D(d)$. \begin{lemma}[Dumnicki]\label{dumnonspeccond} A well-determined triple $(n,E,(m))$ is special if and only if $E$ is contained in a degree-$(m{-}1)$ hypersurface in $\tilde D(d)$ (i.e. iff there exists a nonzero homogeneous polynomial of degree $m-1$ in $\Q[A_0,\ldots,A_n]$ which vanishes at every $\mathbf a\in E$). \end{lemma} \begin{proof} See Lemma 8 in \cite{MR2342565}. \end{proof} Suppose $E\subseteq D(d)$ is contained in a subspace $S$ of $\Q^{n+1}$. Consider $\proj S$ as the projective space of lines in $S$ through the origin, and define $W^S(m-1,E)$ to be the subspace of $H^0(\ocal_{\proj S}(m-1))$ of sections vanishing at all of the points over $E$. When $S$ is all of $\Q^{n+1}$ (the case we will consider most often), we simply write $W^n(m-1,E)$. Notice that we can identify $W^n(m-1,E)$ with the subspace of homogeneous degree-$(m{-}1)$ polynomials in $\Q[A_0,\ldots,A_n]$ which vanish at every point of $E$. Hence we can rephrase Lemma \ref{dumnonspeccond} as saying $E$ with $\#E={m+n-1\choose n}$ is special if and only if $W^n(m-1,E)\neq0$. In fact, a closer inspection of the proof we cited in \cite{MR2342565} gives us the following.
\begin{corollary}\label{nonspeccond} An over- or well-determined triple $(n,E,(m))$ is non-special if and only if $$\dim W^n(m-1,E)={m+n-1\choose n}-\#E.$$ In particular, if a well- or over-determined triple $(n,F,(m))$ is non-special and $E\subseteq F$, then $(n,E,(m))$ is non-special. \end{corollary} Also, the following lemma is elementary, but we will use it frequently. \begin{lemma}\label{addpoint} For any $E\subseteq D(d)$ and $\mathbf a\in D(d)$ we have $$\dim W^n(m-1,E)\geq\dim W^n(m-1,E\cup\{\mathbf a\})\geq\dim W^n(m-1,E)-1.$$ \end{lemma} \begin{proof} Notice that $W^n(m-1,E\cup\{\mathbf a\})$ is the vanishing of a single (possibly zero) linear condition on $W^n(m-1,E)$. Hence, adding a point either reduces the dimension by one or leaves it the same. \end{proof} \begin{example} Consider the subset $E$ of $D(7)$ illustrated in Figure \ref{wspecial}. There is a pencil of quadrics passing through the five points of $E$---the line shown plus any line through the remaining point. Hence $\dim W^2(2,E)=2$, but ${3+2-1\choose 2}-5=1$, and so $(2,E,(3))$ is special. \begin{figure} \centering \includegraphics{Wspecial} \caption{A subset $E$ of $D(7)$ for which $(2,E,(3))$ is special. The filled dots represent the points in $E$.}\label{wspecial} \end{figure} \end{example} Define $$\mathcal F(D,c,m)=\left\{E\in\mathcal E(D,c)\left|\text{$(n,E,(m))$ is non-special}\right.\right\}.$$ The following proposition seems innocuous at first, but in light of examples like (\ref{smallersum}), it should actually be somewhat surprising, and the proof is slightly technical. \begin{proposition}\label{firsts} Suppose $c\leq {m+n-1\choose n}$, and let $E$ be the minimal element of $\mathcal F(D,c,m)$ with respect to the $\preceq$-lexicographic ordering for some monomial ordering $\preceq$. Then every $F\in\mathcal F(D,c,m)$ has the property that $$\sum_{\mathbf a\in E}\mathbf a\preceq\sum_{\mathbf b\in F}\mathbf b$$ with equality holding only if $E=F$.
\end{proposition} \begin{proof} Suppose $E=\{\mathbf a_1,\ldots,\mathbf a_c\}$ with $\mathbf a_i\prec\mathbf a_{i+1}$ is the minimal element of $\mathcal F(D,c,m)$, and $F=\{\mathbf b_1,\ldots,\mathbf b_c\}$ with $\mathbf b_i\prec\mathbf b_{i+1}$ has the minimal sum of any element of $\mathcal F(D,c,m)$. By way of contradiction, suppose $E\neq F$. By Corollary \ref{nonspeccond}, we know $$\dim W^n(m-1,E)=\dim W^n(m-1,F)=M-c,\quad M={m+n-1\choose n}.$$ Let $i$ be minimal with $\mathbf a_i\neq\mathbf b_i$ so that $\mathbf a_i\prec\mathbf b_i$ (since $E\prec F$). Then $F'=\left(F\smallsetminus\{\mathbf b_i\}\right)\cup\{\mathbf a_i\}$ has a sum strictly smaller than $F$, so $F'\notin\mathcal F(D,c,m)$; that is, $F'$ is special. By Corollary \ref{nonspeccond} and Lemma \ref{addpoint} we have $$M-c< \dim W^n(m-1,F')\leq\dim W^n(m-1,F\smallsetminus\{\mathbf b_i\})=M-c+1.$$ Hence $\dim W^n(m-1,F')=M-c+1$. Again using Lemma \ref{addpoint}, we have $$\dim W^n(m-1,F')-1\leq\dim W^n(m-1,F\cup\{\mathbf a_i\})\leq\dim W^n(m-1,F)$$ and so the middle dimension must be $M-c$. Thus $F\cup\{\mathbf a_i\}$ is special. Let $F''$ be any minimal (with respect to containment) subset of $F$ with the property that $F''\cup\{\mathbf a_i\}$ is special. Note that $F''$ cannot contain only $\mathbf b_j$ with $j<i$, else $$F''\cup\{\mathbf a_i\}\subseteq\{\mathbf a_1,\ldots,\mathbf a_i\}\subseteq E.$$ (Remember $E$ is non-special, and by Corollary \ref{nonspeccond}, its subsets are as well). Hence $\left(F''\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf a_i\}$ is non-special for some $j\geq i$, which implies $\mathbf b_j\succ\mathbf a_i$.
Consider the following set of properties that some subset $\hat F\subseteq F$ may have: \begin{equation}\label{hatprops} \text{$F''\subseteq\hat F\subseteq F$;\quad $\hat F\cup\{\mathbf a_i\}$ is special;\quad $(\hat F\smallsetminus\{\mathbf b_j\})\cup\{\mathbf a_i\}$ is non-special.} \end{equation} We claim that if $G$ satisfies (\ref{hatprops}), then for any $\mathbf b_k\in F\smallsetminus G$ with $k\neq j$, $G\cup\{\mathbf b_k\}$ satisfies the properties of (\ref{hatprops}) as well. Noticing that $F''$ satisfies (\ref{hatprops}), this will allow us to apply the claim recursively, adding the points of $F\smallsetminus F''$ one at a time, and at the end conclude that $(F\smallsetminus\{\mathbf b_j\})\cup\{\mathbf a_i\}$ is non-special and hence in $\mathcal F(D,c,m)$. But $(F\smallsetminus\{\mathbf b_j\})\cup\{\mathbf a_i\}$ has a sum strictly less than that of $F$, contradicting the minimality assumption and proving the proposition. To prove the claim, suppose $G$ satisfies (\ref{hatprops}) and let $\mathbf b_k\in F\smallsetminus G$ with $k\neq j$. We then have $F''\subseteq G\cup\{\mathbf b_k\}\subseteq F$, and since $G\cup\{\mathbf a_i,\mathbf b_k\}$ contains $G\cup\{\mathbf a_i\}$, which is special by assumption, it is special as well. So we have only to prove that $\left(G\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf a_i,\mathbf b_k\}$ is non-special.
If it were special, then we would have, from two applications of Lemma \ref{addpoint}, \begin{eqnarray*} W^n(m-1,\left(G\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf a_i,\mathbf b_k\})&=&W^n(m-1,\left(G\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf b_k\})\\ W^n(m-1,\left(G\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf a_i,\mathbf b_k\}) &=&W^n(m-1,\left(G\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf a_i\}). \end{eqnarray*} Since the right-hand sides would then be equal, we could restrict each space to the set of polynomials vanishing at $\mathbf b_j$ and obtain \begin{equation}\label{ws} W^n(m-1,G\cup\{\mathbf b_k\})=W^n(m-1,G\cup\{\mathbf a_i\}). \end{equation} However, we know $G\cup\{\mathbf b_k\}$ is non-special (it is a subset of $F$), and we know $G\cup\{\mathbf a_i\}$ is special (it contains $(G\smallsetminus\{\mathbf b_j\})\cup\{\mathbf a_i\}$ which is special by assumption). Hence the dimensions of the two spaces in (\ref{ws}) are not equal, and we have a contradiction. Therefore $\left(G\smallsetminus\{\mathbf b_j\}\right)\cup\{\mathbf a_i,\mathbf b_k\}$ must be non-special. \end{proof} As a corollary of this proposition, we obtain the following theorem, which will be one of our main tools for constructing reductions. In fact, we can think of the reductions constructed by Dumnicki in \cite{MR2289179} (by diagram-cutting) and \cite{MR2325918} (see Remark \ref{redremark}) as applications of Theorem \ref{corfirst}. \begin{theorem}\label{corfirst} For any triple $(n,D,\mathbf m)$, if $\mathcal F(D,c,m_r)$ is non-empty for some $c>0$, then its minimal element with respect to any monomial ordering $\preceq$ is a reduction. \end{theorem} \begin{proof} Let $E$ be the minimal element of $\mathcal F(D,c,m_r)$.
Then by Proposition \ref{firsts}, we know that for any $F\subseteq D$ with $\#F=\#E$ and $F\neq E$, $$\mathbf a_r(\varnothing,\ldots,\varnothing,F)\succneqq\mathbf a_r(\varnothing,\ldots,\varnothing,E).$$ \end{proof} One useful feature of Theorem \ref{corfirst} is that once we know $\mathcal F(D,c,m_r)$ is non-empty, we may choose any monomial ordering and obtain a reduction. In particular, if we are building a reduction algorithm, we may use a different monomial ordering in each step. We can capitalize on this idea with the following algorithm aimed at proving non-speciality. \begin{algorithm}\label{algorithm0}~ INPUT: A triple $(n,D,\mathbf m)$, and an ordered $r$-tuple of monomial orderings $(\preceq_1,\ldots,\preceq_r)$. OUTPUT: An upper bound on $\dim V^n_D(\mathbf m)$ and either ``non-special'' or ``undecided''. ALGORITHM: Define $D_r=D$. For $i=r,\ldots,1$, recursively take the largest $c_i\leq{m_i+n-1\choose n}$ for which $\mathcal F_i=\mathcal F(D_i,c_i,m_i)$ is non-empty, let $E_i$ be the minimal element of $\mathcal F_i$, and define $D_{i-1}=D_i\smallsetminus E_i$. Output $\dim V^n_D(\mathbf m)\leq\#D_0$. If $\#D_0=\edim(n,D,\mathbf m)$, then output ``non-special''. Otherwise, output ``undecided''. \end{algorithm} Notice that Algorithm \ref{algorithm0} will either prove the non-speciality of a triple, or it will come out inconclusive---it cannot prove speciality. One of the drawbacks of applying Algorithm \ref{algorithm0} is that computing the minimal element of $\mathcal F_i$ may be quite difficult. In particular, the somewhat na\"ive approach of using Corollary \ref{nonspeccond} to test each element of $\mathcal E(D_i,c_i)$ (in the order determined by the well-ordering) for speciality until the minimal $E_i\in\mathcal F_i$ is found (or until $\mathcal F_i$ is found to be empty) may be very computation-heavy. However, the following generalization of a lemma of Dumnicki allows us to obviate the linear algebra test for speciality for a large class of examples.
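For reference, the na\"ive linear-algebra test can be sketched in a few lines, under the simplifying assumption that $W^n(m-1,E)$ is realized as the space of polynomials of degree at most $m-1$ in $n$ variables vanishing at the points of $E$ (identified with points of $\Q^n$); the function names below are ours and are intended only as an illustration.

```python
from fractions import Fraction
from math import comb

def monomials(n, d):
    """Exponent tuples of the monomials of degree <= d in n variables."""
    if n == 0:
        return [()]
    return [(e,) + rest for e in range(d + 1) for rest in monomials(n - 1, d - e)]

def evaluate(point, expo):
    """Evaluate the monomial with exponent tuple `expo` at `point`."""
    v = 1
    for x, k in zip(point, expo):
        v *= x ** k
    return v

def rank(rows):
    """Rank of a matrix with Fraction entries, by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rk = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rk, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            if rows[i][c]:
                f = rows[i][c] / rows[rk][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def dim_W(n, m, points):
    """dim W^n(m-1, E): degree <= m-1 polynomials vanishing on the points E."""
    mons = monomials(n, m - 1)
    eval_matrix = [[Fraction(evaluate(p, e)) for e in mons] for p in points]
    return len(mons) - rank(eval_matrix)

def is_nonspecial(n, m, points):
    """Criterion of the corollary: dim W^n(m-1, E) = C(m+n-1, n) - #E."""
    return dim_W(n, m, points) == comb(m + n - 1, n) - len(points)

# Four collinear points plus one more, and five points with no four collinear:
collinear = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)]
general = [(0, 0), (1, 0), (0, 1), (2, 1), (1, 2)]
```

On the first point set the sketch reproduces the pencil-of-quadrics behavior: $\dim W^2(2,E)=2$ while ${4\choose2}-5=1$, so the triple is detected as special; the second set gives the expected dimension $1$.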
\begin{lemma}\label{anynaf} Suppose $(n,E,(m))$ is an over- or well-determined triple. For some $k\leq m$, let $H_1,\ldots,H_k$ be distinct dimension-$n$ subspaces of $\Q^{n+1}$ which intersect $\tilde D(d)$ in parallel hypersurfaces of $\tilde D(d)$. Let $E'=E\cap(H_1\cup\cdots\cup H_k)$. \begin{enumerate} \item\label{anynno1} Suppose $$\#E'>\sum_{i=1}^k{m-i+n-1\choose n-1}={m+n-1\choose n}-{m-k+n-1\choose n}.$$ Then $(n,E,(m))$ is special. \item\label{anynyes} Suppose that $E=E'$, and that for $i=1,\ldots,k$ \begin{equation}\label{gennonspeccond} \#(E\cap H_i)\leq{m-i+n-1\choose n-1}. \end{equation} If for each $i$, \begin{equation}\label{gennonspeccond2} \dim W^{H_i}(m-i+1,E\cap H_i)={m-i+n-1\choose n-1}-\#(E\cap H_i), \end{equation} then $(n,E,(m))$ is non-special. \end{enumerate} \end{lemma} \begin{proof}~ \begin{enumerate} \item First notice that since $\#E\leq{m+n-1\choose n}$, the assumption implies $k<m$. Now, the product of the linear forms defining $H_1,\ldots,H_k$ is a nontrivial $f\in W^n(k,E')$ vanishing on $H_1\cup\cdots\cup H_k$, and hence on $E'$. Multiplying $f$ by arbitrary polynomials of degree at most $m-1-k$ shows that $$\dim W^n(m-1,E')\geq{m-k+n-1\choose n}>{m+n-1\choose n}-\#E',$$ so by Corollary \ref{nonspeccond}, $(n,E',(m))$ is special. Then since $E\supseteq E'$, $(n,E,(m))$ must be special as well. \item By Corollary \ref{nonspeccond}, it suffices to prove this for the case where $k=m$ and $\#E={m+n-1\choose n}$. In this case, we necessarily have equality in (\ref{gennonspeccond}) for all $i$. Suppose, by way of contradiction, there exists a nontrivial $f\in W^n(m-1,E)$, and let $S$ be its vanishing set. Let $1\leq i<m$ and assume $S\supset H_j$ for all $j<i$ (vacuously if $i=1$). Then $S$ consists of the union of all $H_j$ with $j<i$ together with a degree $m-i+1$ hypersurface. Therefore, since the $H_j$ are disjoint, either $S\cap H_i$ has degree $m-i+1$ or it is all of $H_i$. However, by our assumption, (\ref{gennonspeccond2}) reduces to $W^{H_i}(m-i+1,E\cap H_i)=0$, so we see that the former possibility is prohibited. Hence $S\supset H_i$.
Therefore, by induction $S$ contains $H_1,\ldots,H_{m-1}$, but since $S$ has degree $m-1$, their union must be all of $S$. But $S$ was also supposed to contain the single point in $H_m$, and hence we have a contradiction. \end{enumerate} \end{proof} \begin{example} In Figure \ref{specialfig}, we illustrate a subset $E$ of $D(7)$. Notice that $\#E'=19$ but ${7\choose2}-{3\choose2}=18$. Hence by Condition \ref{anynno1} of Lemma \ref{anynaf}, $(2,E,(6))$ is special. \begin{figure} \centering \includegraphics{special2} \caption{A subset $E$ of $D(7)$ so that $(2,E,(6))$ is special by Condition \ref{anynno1} of Lemma \ref{anynaf}. The intersections $H_i\cap \tilde D(7)$ are illustrated for clarity.}\label{specialfig} \end{figure} \end{example} A powerful aspect of Lemma \ref{anynaf} is that, through Condition \ref{anynyes}, we can leverage our knowledge of non-speciality in low dimensions to determine non-speciality in higher dimensions. This arises from the fact that we can re-phrase (\ref{gennonspeccond2}) as $$\text{$E\cap H_i$ is non-special as a subset of $D^{n-1}(d')\subset H_i\cong\Q^n$.}$$ First we notice that the case $n=1$ is trivial: $(1,E,(m))$ is always non-special. This follows directly from Corollary \ref{nonspeccond}. From here, we apply Lemma \ref{anynaf} in two ways: first, using Condition \ref{anynyes} to construct a large class of non-special triples $(n,E,(m))$ for general $n$; then, describing the case $n=2$ as thoroughly as possible so that we can apply it effectively to specific examples. \begin{definition} Define a \emph{scrambled $1$-simplex of size $m$} to be any set of $m$ collinear points in $D(d)$.
Then, recursively define a \emph{scrambled $k$-simplex of size $m$} to be a set $E\subseteq D(d)$ so that there exist $m$ distinct dimension-$(k{-}1)$ subspaces $H_1,\ldots,H_m$ of $\Q^{n+1}$ whose intersections with $\tilde D(d)$ are parallel, such that $E\subseteq H_1\cup\cdots\cup H_m$, and $E\cap H_i$ is a scrambled $(k{-}1)$-simplex of size $i$. \end{definition} \begin{example}\label{skew2ex} In Figure \ref{skew2fig}, we illustrate a scrambled $2$-simplex of size $4$. Notice that $E\cap H_i$ is a set of $i$ collinear points for $i=1,2,3,4$. \begin{figure} \centering \includegraphics{skew2simplex} \caption{A scrambled $2$-simplex of size $4$ in $D(7)$. The intersections $H_i\cap\tilde D(7)$ are illustrated as well for clarity.}\label{skew2fig} \end{figure} \end{example} \begin{proposition}\label{skewsimplex} Any subset $E$ of a scrambled $n$-simplex of size $m$ has the property that $(n,E,(m))$ is non-special. \end{proposition} \begin{proof} Use Condition \ref{anynyes} of Lemma \ref{anynaf} and induction on $n$. \end{proof} We now specialize to the case of $n=2$. In this case, we define a \emph{row} in $D\subseteq D(d)$ to be the (possibly empty) intersection of $D$ with a dimension-$2$ subspace of $\Q^3$. \begin{proposition}\label{af} Suppose $(2,E,(m))$ is an over- or well-determined triple. \begin{enumerate} \item\label{no} If there are $k<m$ parallel rows $R_1,\ldots,R_k$ so that $$\#\left((R_1\cup\cdots\cup R_k)\cap E\right)>{m+1\choose 2}-{m+1-k\choose2},$$ then $(2,E,(m))$ is special. \item\label{yes} If $E$ is contained in the union of $m$ parallel rows $R_1,\ldots,R_m$, so that $\#(R_i\cap E)\leq i$ for all $i$, then $(2,E,(m))$ is non-special. \end{enumerate} \end{proposition} \begin{proof}~ \begin{enumerate} \item Apply Condition \ref{anynno1} of Lemma \ref{anynaf}. \item This condition is equivalent to saying $E$ is a subset of a scrambled $2$-simplex of size $m$. Apply Proposition \ref{skewsimplex}.
\end{enumerate} \end{proof} Our goal now is to apply Proposition \ref{af} in such a way as to completely avoid the (possibly computation-heavy) linear algebra test of Corollary \ref{nonspeccond}. Roughly speaking, for a given monomial ordering $\preceq$, our strategy for $n=2$ will be to take the largest $c$ we can find for which we can determine the minimal element of $\mathcal F(D,c,m)$ using only Proposition \ref{af}. That is, we take the largest $c$ for which we can show that the minimal $E\in\mathcal E(D,c)$ satisfying Condition \ref{yes} of Proposition \ref{af} is preceded only by elements of $\mathcal E(D,c)$ which satisfy Condition \ref{no}, and are therefore special. This will show \emph{a fortiori} that $E$ is minimal in $\mathcal F(D,c,m)$. Let $\preceq$ be a lexicographic (resp. reverse lexicographic) monomial ordering on $\mathbb K[X_0,X_1,X_2]$, say with the convention that $X_{i_0}\prec X_{i_1}\prec X_{i_2}$. Now we define a \emph{$\preceq$-row} in $D$ to be a row of the form $R(k)=\{(a_0,a_1,a_2)\in D|a_{i_0}=k\}$. Then we can define an ordering of $\preceq$-rows $$R(k)\succeq R(l)\Longleftrightarrow k\leq l\;\text{(resp. $k\geq l$)}.$$ Now for any two $\preceq$-rows $R_i$ and $R_j$, if $\mathbf a\in R_i$, $\mathbf b\in R_j$ and $R_i\succ R_j$, then $\mathbf a\succ\mathbf b$. We can now present a generalization of Dumnicki and Jarnicki's notion of ``weak $m$-reduction'' from \cite{MR2325918}. \begin{definition}\label{reddef} Let $\preceq$ be a lexicographic or reverse lexicographic monomial ordering on $\mathbb K[X_0,X_1,X_2]$ with the convention $$X_{i_0}\prec X_{i_1}\prec X_{i_2}.$$ Suppose $D\subseteq D(d)$. Suppose there are $\bar k$ non-empty $\preceq$-rows in $D$. Then let $k=\min\left\{m,\bar k\right\}$, and name the minimal $k$ $\preceq$-rows in $D$ $$R_1\prec\cdots\prec R_k.$$ Let $\Omega_1=\{1,\ldots,m\}$.
Recursively define for $1\leq j\leq k$, $$u_j=\min\{\max\Omega_j,\#R_j\},\quad u'_j=\min\{s\in\Omega_j|s\geq u_j\},\quad \Omega_{j+1}=\Omega_j\smallsetminus\{u'_j\}.$$ Then define the \emph{$(m,\preceq)$-reduction of $D$} to be $$\red_\preceq(D,m)=\bigcup_{j=1}^k\{\text{minimal $u_j$ elements of $R_j$}\}.$$ \end{definition} \begin{example}\label{reductionex} Figure \ref{reductionfig} shows the $(5,\preceq)$-reduction of a subset $D$ of $D(7)$, where $\preceq$ is the reverse lexicographic ordering with $X_1\prec X_0\prec X_2$. Here is the step-by-step construction: $$\begin{array}{c|c|c|c} i&\Omega_i&u_i&u_i'\\ \hline 1&\{1,2,3,4,5\}&3&3\\ 2&\{1,2,4,5\}&3&4\\ 3&\{1,2,5\}&4&5\\ 4&\{1,2\}&2&2\\ 5&\{1\}&1&1 \end{array}$$ \begin{figure} \centering \includegraphics{reduction} \caption{The $(5,\preceq)$-reduction of a subset $D$ of $D(7)$, where $\preceq$ is the reverse lexicographic ordering with $X_1\prec X_0\prec X_2$. The points of $D$ are denoted by solid dots, and the points of $D'=\red_\preceq(D,5)$ are denoted by $\times$'s.}\label{reductionfig} \end{figure} \end{example} The following lemma justifies the re-use of the word ``reduction''. \begin{lemma}\label{redlemma} Given a triple $(2,D,\mathbf m)$, let $D'=\red_\preceq(D,m_r)$. Then for some $G\subseteq D(d)$ containing $D$, $D'\cup (G\smallsetminus D)$ is a reduction for $(2,G,\mathbf m)$. \end{lemma} \begin{remark}\label{redremark} In \cite{MR2325918}, only one monomial ordering is considered. The details of the proof of the corresponding lemma from that paper are given in \cite{MR2342565}. The ``bean-counting'' aspect of our proof is mostly the same, but in the end we appeal to Theorem \ref{corfirst} to prove that we end up with a reduction. \end{remark} \begin{proof} First, we prove the case where $\preceq$ is a lexicographic monomial order, and then note how the proof must be modified to cover the reverse lexicographic case. We use the notation of Definition \ref{reddef}.
We assume without loss of generality that $\preceq$ is such that $X_0\prec X_1\prec X_2$. For each row $R_i$, define $R_i'$ to be the row of $D(d)$ containing $R_i$. Notice that $R'_i=\{(a_0,a_1,a_2)\in D(d)|a_0=\alpha_i\}$ for some $$0\leq \alpha_k<\cdots<\alpha_1\leq d.$$ We note that $\#R'_i=d-\alpha_i+1\geq i$. Notice that $$u_j'\leq u_j+j-1\leq\max\Omega_j+j-1\leq\#R_1+j-1\leq\#R_1'+j-1\leq \#R_j'.$$ The first inequality is true because $\Omega_j$ must contain one of $u_j,u_j+1,\ldots,u_j+j-1$. Define $$G=D\cup\bigcup_{j=1}^k\{\text{an additional $u'_j-u_j$ points from $R_j'$}\}.$$ Now, let $E=\red_\preceq(D,m)\cup(G\smallsetminus D)$. The claim is then that $E$ is the minimal element of $\mathcal F(G,\#E,m)$. First notice that $\#(E\cap R_j')=u'_j$ since either \begin{enumerate} \item $u_j=\max\Omega_j$ so that $u_j=u_j'$ and then $G\cap R_j'=R_j$, or \item $u_j=\#R_j$ so that $u_j'\geq\#R_j$ and $G\cap R_j'=E\cap R_j'=R_j\cup\{\text{$u_j'-\#R_j$ points}\}$. \end{enumerate} Since the $u_j'$ are by definition distinct integers no more than $m$, we know that $E$ is a subset of a scrambled $2$-simplex of size $m$. Hence $E\in\mathcal F(G,\#E,m)$. Suppose that $F$ with $\#F=\#E$ has $F\prec E$. Assume $E=\{\mathbf a_1,\ldots,\mathbf a_c\}$ with $\mathbf a_i\prec\mathbf a_{i+1}$ and $F=\{\mathbf b_1,\ldots,\mathbf b_c\}$ with $\mathbf b_i\prec\mathbf b_{i+1}$. Let $i$ be minimal so that $\mathbf b_i\neq\mathbf a_i$, which implies $\mathbf b_i\prec\mathbf a_i$. Say $\mathbf a_i\in R'_j$. Then we know $\mathbf b_i$ is not in $R_j'$ because $E$ contains the minimal $u_j'$ elements of $G\cap R'_j$. Now $\mathbf a_{i-1}$ must be in either $R_j'$ or $R_{j-1}'$, but since $\mathbf b_i\succ\mathbf b_{i-1}=\mathbf a_{i-1}$ and $\mathbf b_i\notin R_j'$, we must have $\mathbf b_{i-1},\mathbf b_i\in R_{j-1}'$. 
In fact, $$R_{j-1}'\ni\mathbf a_{i-u_{j-1}'}=\mathbf b_{i-u_{j-1}'}\prec\cdots\prec\mathbf b_i.$$ This shows that $F\cap R_{j-1}'$ contains more than $u'_{j-1}$ elements. Now, because of this, we must have $\#(F\cap R_{j-1}')>\omega$, where $\omega=\max\Omega_{j-1}$. Since $\Omega_{j-1}$ does not contain $\omega+1,\ldots,m$, there must have been $m-\omega$ rows preceding $R_{j-1}'$ whose intersections with $F$ contain $\omega+1, \ldots, m$ elements respectively. But then the number of elements in the union of these rows together with $R_{j-1}'$ intersected with $F$ is more than $$\omega+(\omega+1)+\ldots+m={m+1\choose2}-{\omega\choose2}.$$ Hence by Condition \ref{no} of Proposition \ref{af}, $F$ is special, showing that $E$ is indeed minimal in $\mathcal F(G,\#E,m)$. Therefore we may apply Theorem \ref{corfirst}, and so $E$ is a reduction. In the reverse lexicographic case, we do not necessarily have $\#R_i'\geq i$. This requires us to choose $G$ more carefully, but the steps are the same as in the lexicographic case. With different notation, Dumnicki proves this case in \cite{MR2342565}. \end{proof} \begin{example} \begin{figure} \centering \includegraphics{augmented} \caption{The same illustration as in Figure \ref{reductionfig} with additional labels. $D$ and $D'$ are as above. $E$ is the set of points inside the polygon, and $G$ is $E\cup D$. $F_i$ is obtained by taking $E$ and exchanging the point at the tail of the arrow labeled $F_i$ for the point at the head ($i=1,2$).}\label{augmentedfig} \end{figure} We return to Example \ref{reductionex} to demonstrate the proof. First, notice that $u_2'-u_2=1$, $u_3'-u_3=1$, and $u_j'=u_j$ for $j=1,4,5$. Hence $G$ is defined to be $D$ plus one extra point from each of $R_2'$ and $R_3'$ as shown in Figure \ref{augmentedfig}. It does not matter which extra points are chosen, but we know that there are enough to choose from. Notice that $E=D'\cup (G\smallsetminus D)$ is a scrambled $2$-simplex of size $5$.
Hence $E\in\mathcal F(G,15,5)$. There are only two possible $F\subseteq G$ with $\#F=15$ and $F\prec E$; call them $F_1$ and $F_2$, as illustrated. In both cases, $\#(F_i\cap R_4')>u_4'=2$, which necessarily means that previous rows of $F_i$ must have contained $3$, $4$, and $5$ elements---this is clearly the case. Because of this fact, $\#\left(F_i\cap(R_1'\cup R_2'\cup R_3'\cup R_4')\right)=15>{6\choose2}-{2\choose2}=14$, showing that $F_i\notin\mathcal F(G,15,5)$ for $i=1,2$ by Condition \ref{no} of Proposition \ref{af}. Therefore $E$ is a reduction of $G$. \end{example} In light of Lemma \ref{redlemma}, we are justified in writing the following algorithm. \begin{algorithm}\label{algorithm1}~ INPUT: $d\in\N$, $\mathbf m\in\N^r$, an ordered $r$-tuple of lexicographic or reverse lexicographic monomial orderings $(\preceq_1,\ldots,\preceq_r)$. OUTPUT: An upper bound on $\dim V^2(d,\mathbf m)$, and either ``non-special'' or ``undecided''. ALGORITHM: Define $D_r=D(d)$. Then recursively let $D_{i-1}=D_i\smallsetminus\red_{\preceq_i}(D_i,m_i)$ for $i=r,\ldots,1$. Output $\dim V^2(d,\mathbf m)\leq \#D_0$. If $\#D_0=\edim(2,D(d),\mathbf m)$, then output ``non-special''. Otherwise, output ``undecided''. \end{algorithm} \begin{remark} Compared with Algorithm \ref{algorithm1}, Algorithm \ref{algorithm0} is more likely to detect the non-speciality of a triple because Algorithm \ref{algorithm1} does not necessarily maximize the number of elements in each reduction. However, Algorithm \ref{algorithm1} is much cheaper computationally. It requires no linear algebra calculations which appeal to Corollary \ref{nonspeccond}; only the combinatorial comparisons necessary to define the $(m,\preceq)$-reductions are needed. For calculations with $|\mathbf m|>100$, Algorithm \ref{algorithm0} becomes impractical. We also note that Lemma \ref{anynaf} can be used to produce higher-dimensional analogues of Definition \ref{reddef} leading to higher-dimensional analogues of Algorithm \ref{algorithm1}.
Finally, we remark that this algorithm could likely be improved by incorporating other techniques such as Cremona transformations, as in the algorithms developed in \cite{MR2325918}. We forgo the use of other methods for simplicity and to highlight the power of Lemma \ref{redlemma} on its own (see for example the results in Section \ref{nagatasubsection}). \end{remark} Notice that there are $12$ possible monomial orderings that we can use for each $\preceq_i$ in the input of the above algorithm. This means there are $12^r$ possible $r$-tuples of monomial orderings we could potentially test. We will use the notation $\lex(i_0,i_1,i_2)$ to denote the lexicographic ordering with $X_{i_0}\prec X_{i_1}\prec X_{i_2}$. We also denote by $\rlex(i_0,i_1,i_2)$ the reverse lexicographic ordering with $X_{i_2}\prec X_{i_1}\prec X_{i_0}$. \begin{example}\label{alg1ex} The full reduction algorithm for $(2,D(7),3^{\times6})$ in Figure \ref{redpic} can be produced by Algorithm \ref{algorithm1} using the sextuple of monomial orderings $$\big(\lex(1,2,0),\lex(1,2,0),\lex(1,2,0),\lex(0,1,2),\rlex(1,2,0),\rlex(1,2,0)\big).$$ \begin{figure} \centering \includegraphics{reductionanums} \caption{A full reduction algorithm for $(2,D(7),3^{\times 6})$. $D_i\smallsetminus D_{i-1}$ consists of the points labeled by $i$ for $i=1,2,3,4,5,6$.}\label{redpic} \end{figure} \end{example} \section{Application of Algorithms}\label{results} We will apply Theorem \ref{genredalgtheorem} to examples stemming from two different areas of study. First, we produce new bounds on the multi-point Seshadri constants of $\proj^2$. Second, we recover a result from Evain \cite{MR2125451} which generalizes the known cases of Nagata's Conjecture to higher dimensions, proving the emptiness of the linear systems associated to the triples $\left(n,D(d),m^{\times s^n}\right)$ when $d\leq ms$ (with a few well-known exceptions).
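Before turning to the applications, we note that the recursive bookkeeping of Definition \ref{reddef} is easily mechanized. The following sketch is ours (the function name, and the convention of passing the sizes $\#R_1,\ldots,\#R_k$ of the minimal rows as a list, are assumptions made for illustration); with row sizes consistent with Example \ref{reductionex}, it reproduces the table given there.

```python
def reduction_counts(m, row_sizes):
    """Compute the pairs (u_j, u'_j) from the definition of the
    (m, <=)-reduction, given the sizes #R_j of the k = min(m, #rows)
    minimal nonempty rows, listed in increasing row order.

    The reduction itself keeps the minimal u_j elements of each row R_j.
    """
    omega = set(range(1, m + 1))                 # Omega_1 = {1, ..., m}
    pairs = []
    for size in row_sizes[:m]:                   # at most k = min(m, #rows) rows
        u = min(max(omega), size)                # u_j = min{max Omega_j, #R_j}
        u_prime = min(s for s in omega if s >= u)  # least s in Omega_j with s >= u_j
        omega.remove(u_prime)                    # Omega_{j+1} = Omega_j \ {u'_j}
        pairs.append((u, u_prime))
    return pairs
```

For row sizes $(3,3,4,2,1)$ the call returns the pairs $(3,3),(3,4),(4,5),(2,2),(1,1)$, matching the columns $u_i$, $u_i'$ of the table in Example \ref{reductionex}, and the reduction keeps $3+3+4+2+1=13$ points.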
\subsection{Bounding multi-point Seshadri constants of $\proj^2$.}\label{nagatasubsection} \begin{definition}[See for example \cite{MR2555949}] Let $(X,L)$ be a smooth polarized variety, $p_1,\ldots,p_r\in X$. Then we define the \emph{multi-point Seshadri constant of $L$ at $p_1,\ldots,p_r$} to be $$\epsilon(L;p_1,\ldots,p_r):=\inf_{\text{curves $C$}}\frac{L.C}{\sum_{i=1}^r\mult_{p_i}C}.$$ If the points are taken to be very general and $(X,L)$ is understood, we simply write $\epsilon(r)$. \end{definition} Applying this to the polarized variety $(\proj^2,\ocal_{\proj^2}(1))$, we can equate curves with nonzero sections of $\ocal_{\proj^2}(d)$ for some $d$, up to nonzero scalar multiples. Then we can write $$\epsilon(r)=\inf_{\substack{d\geq1,\mathbf m\in\N^r\\0\neq V^2(d,\mathbf m)}}\frac{d}{m_1+\cdots+m_r}.$$ In this language, Nagata's Conjecture states that for $r\geq9$, $\epsilon(r)=\frac 1{\sqrt r}$. That $\epsilon(r)$ is no more than $\frac 1{\sqrt r}$ can be proved from first principles (see for example \cite{MR2555949,MR2098342}), but only for $r$ a perfect square has equality been shown---in fact by Nagata in \cite{MR0088034}. To show that $\epsilon(r)\geq e$ for some constant $e$, one must prove that $V^2(d,\mathbf m)=0$ whenever $\frac{d}{m_1+\cdots+m_r}\leq e$. One can check that for $r>9$, the condition $\frac{d}{m_1+\cdots+m_r}\leq e\leq\frac 1{\sqrt r}$ implies $(2,D(d),\mathbf m)$ is over-determined, and so showing $V^2(d,\mathbf m)=0$ is equivalent to showing that $(2,D(d),\mathbf m)$ is non-special. In \cite{MR2574368}, Harbourne and Ro\'e construct, for each non-square $r>9$, an increasing sequence of rational numbers converging to $\frac{1}{\sqrt r}$ with no other accumulation points, so that $\epsilon(r)$ must either be one of the values in that sequence or $\frac{1}{\sqrt r}$.
Furthermore, they show that to rule out any one of the rational values, it suffices to show the non-speciality of a finite number of ``candidate'' triples (we are using the word ``triple'' differently here than in \cite{MR2574368}). Hence, if enough candidate triples are shown to be non-special, one can produce a lower bound arbitrarily close to the conjectured value of $\frac 1{\sqrt r}$. If a candidate triple is special, then it is a counter-example to Nagata's Conjecture. \begin{example} \begin{figure} \centering \includegraphics{12case2} \caption{A full reduction algorithm for $(2,D(83),24^{\times12})$. To denote points in $D_i\smallsetminus D_{i-1}$, we use $i$'s for $i=1,\ldots,9$, and we use $A$'s, $B$'s and $C$'s for $i=10,11,12$ respectively.}\label{12casefig} \end{figure} $(2,D(83),24^{\times 12})$ is one of the candidate triples whose non-speciality must be shown in order to prove $\epsilon(12)\neq\frac{83}{288}$. We do so by showing that $$\dim V^2(83,24^{\times 12})=\edim(2,D(83),24^{\times 12})=0.$$ We use Algorithm \ref{algorithm1} with the following orderings: \begin{eqnarray*} \big(\lex(0,1,2),\lex(1,2,0),\lex(2,0,1),\lex(0,1,2),\lex(1,2,0),\lex(2,0,1),\;\\ \;\;\lex(0,1,2),\lex(1,2,0),\lex(2,0,1),\lex(0,2,1),\lex(1,0,2),\lex(1,0,2)\big). \end{eqnarray*} For $i=12,\ldots,2$, $\#(D_i\smallsetminus D_{i-1})={25\choose2}=300$, which is the maximum possible. $\#D_1=270$ and $D_0=\varnothing$. Hence, we obtain a reduction algorithm as pictured in Figure \ref{12casefig}. \end{example} While the above example was computed by hand, we used a Mathematica program to systematically perform Algorithm \ref{algorithm1} on a number of candidate triples using a random collection of $r$-tuples of monomial orderings. We summarize our results in Figure \ref{lowerbounds}.
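The arithmetic behind the candidate-triple example above, together with the $f$-value bookkeeping used in Figure \ref{lowerbounds} (defined by $e=\frac{1}{\sqrt r}\sqrt{1-\frac{1}{f}}$, equivalently $f=1/(1-re^2)$), can be checked exactly with rational arithmetic. The variable names below are ours, and the bound $277/960$ for $r=12$ is taken from the table.

```python
from fractions import Fraction
from math import comb

# Counts for the candidate triple (2, D(83), 24^{x12}):
total = comb(83 + 2, 2)            # #D(83) = C(85,2) = 3570
conditions = 12 * comb(24 + 1, 2)  # 12 points, C(25,2) = 300 conditions each
edim = max(0, total - conditions)  # 3570 - 3600 < 0, so expected dimension 0

# The reduction removes eleven full steps of 300 points and one of 270,
# exhausting D(83):
assert 11 * comb(25, 2) + 270 == total

# f-value of a lower bound e on epsilon(r): e = (1/sqrt(r)) sqrt(1 - 1/f),
# i.e. f = 1 / (1 - r e^2), computed here without floating-point error.
def f_value(r, e):
    return 1 / (1 - r * e * e)

f12 = f_value(12, Fraction(277, 960))
```

Here $\mathrm{edim}=0$, and $f_{12}=76800/71\approx1081.69$, matching the $r=12$ row of the table.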
As a measure of how close a bound $e$ is to the conjectured value $\frac{1}{\sqrt r}$ of $\epsilon(r)$, we use the $f$-value of our bound (as in \cite{MR2574368}) defined by $$e=\frac{1}{\sqrt r}\sqrt{1-\frac{1}{f}}.$$ A larger $f$-value corresponds to a better bound. In most cases tested, we were able to quickly produce the best known bounds on $\epsilon(r)$. \begin{figure} $$\begin{array}{|c|c|c|cc|c|} \hline &\multicolumn{2}{|c|}{\text{Using Algorithm \ref{algorithm1}}}&\multicolumn{3}{|c|}{\text{Previous best known}}\\ \cline{2-6} r&\epsilon(r)\geq&f(r)\geq&\multicolumn{2}{|c|}{\epsilon(r)\geq}&f(r)\geq\\ \hline 10&6/19 & 361& 117/370&\cite{MR2738381}&13690\\ 11&169/561&572.22&106/352&\cite{MR2574368}&402.28\\ 12&277/960& 1081.69&83/288&\cite{MR2574368}&300.52 \\ 13&191/689&1014.36&90/325&\cite{MR2574368}&325 \\ 14&187/700& 1129.03&86/322&\cite{MR2574368}&740.6 \\ 15&484/1875&1969.54&426/1651&\cite{MR2543429}&744.55 \\ 17&305/1258&1389.43 &136/561&\cite{MR1687571}&1089 \\ 18&509/2160& 2178.15&89/378&\cite{MR2574368}&466.94 \\ 19&584/2546& 3158.93&170/741&\cite{MR1687571}&28900 \\ 20&948/4240&5107.27&1617/7235&\cite{MR2543429}& 1017.5\\ 21&559/2562& 3765.83&142/620&\cite{MR2574368}&660.64 \\ 22&1074/5038& 5104.88&197/924&\cite{MR1687571}&38809 \\ 23&820/3933& 4703.1&115/552&\cite{MR2574368}&576 \\ 24&578/2832& 3632.35&8092/39657&\cite{MR2543429}& 1371.71\\ 26&673/3432& 4768.67&260/1326&\cite{MR1687571}&2601 \\ 27&239/1242& 5193.82&161/837&\cite{MR2574368}&997.96 \\ 28&582/3080& 4457.89&201/1064&\cite{MR2574368}&1304.25 \\ 29&350/1885& 4901&113/609&\cite{MR2574368}&639.45 \\ 30&586/3210&4641.49& 219/1200&\cite{MR2574368}&2130.76 \\ 31&746/4154&4638.63 &128/713&\cite{MR2574368}&1093.26 \\ 32&724/4096&4681.14 &147/832&\cite{MR2574368}&940.52 \\ 33&471/2706&4350.82 &178/1023&\cite{MR2574368}&1093.55 \\ 34&653/3808&4902.25 &239/1394&\cite{MR2574368}&1731.93 \\ 35&840/4970&5041 &136/805&\cite{MR2574368}&974.47 \\ 37&444/2701&5329 &444/2701&\cite{MR1687571}&5329 \\ 
38&715/4408&4964.35 &265/1634&\cite{MR2574368}&1898.97 \\ 39&843/5265&5641.07 &231/1443&\cite{MR2574368}&1779.7 \\ 40&449/2840&5170.26 &196/1240&\cite{MR2574368}&1601.66 \\ 41&493/3157&6077.22 &160/1025&\cite{MR1989646,MR2574368}&1025 \\ 42&985/6384&6785.79 &149/966&\cite{MR2574368}&1306.94 \\ 43&1036/6794&6881.1 &236/1548&\cite{MR2574368}&1741.5 \\ 44&650/4312&5560.21 &252/1672&\cite{MR2574368}&1985.5 \\ 45&872/5850&6556.03& 275/1845&\cite{MR2574368}&3782.25 \\ 46&746/5060&6626.19 &217/1472&\cite{MR2574368}&3140.26 \\ 47&473/3243&5888.61&994/6815&\cite{MR2574368}&7109.17 \\ 48&575/3984&7035.57 &187/1296&\cite{MR2574368}&1521.39 \\ 50&601/4250&7372.45&700/4950&\cite{MR1687571}&9801 \\ \hline \end{array}$$ \caption{Lower bounds on multi-point Seshadri constants of $\proj^2$ found using Algorithm \ref{algorithm1} and the previously best known lower bounds with citations. Notice that our bounds are equal to or better than the previous best for all $r$ except $r=10,19,22,47,50$.}\label{lowerbounds} \end{figure} \subsection{Non-speciality of $\left(n,D(d),m^{\times s^n}\right)$} Here we indicate how our methods may be used to recover a theorem of Evain confirming and strengthening certain cases of Iarrobino's Conjecture from \cite{MR1337187}. The conjecture states that, apart from a few known counter-examples, if $d^n<rm^n$, then $(n,D(d),m^{\times r})$ is non-special. Notice that the $2$-dimensional case is Nagata's Conjecture. Theorem \ref{nagatagen}, originally proved by Evain in \cite{MR2125451}, confirms the conjecture, in fact with weak inequality, for the case where $r=s^n$ for some $s\in\N$. \begin{theorem}\label{nagatagen} Suppose $n\geq2$, $s\geq\max\{2,6-n\}$, $m\geq1$, and $d\leq ms$. Then $\left(n,D(d),m^{\times s^n}\right)$ is non-special. In particular, the linear series of degree-$d$ hypersurfaces in $\proj^n$ based at $s^n$ general points with multiplicity at least $m$ is empty. 
\end{theorem} We start with the case where $d$ is strictly less than $ms$, and then sketch how to extend this to the case of equality---first in some base cases, and then by induction on $n$. \begin{proposition}\label{strictprop} If $n\geq1$, $s\geq1$, $m\geq1$, and $d<ms$, then $\left(n,D(d),m^{\times s^n}\right)$ admits a fully exceptional partition. Moreover, each part of this partition is a subset of a scrambled $n$-simplex of size $m$. \end{proposition} \begin{proof}[Sketch of Proof] Choose some irrational $0<\delta<m-\frac{d}{s}$, and let $\mu=\frac{d}{s}+\delta$. Then consider hyperplanes in $\tilde D(d)$ of the form $$H_{c,i,j}=\left\{\mathbf a\in\tilde D(d)\left|\sum_{k=i+1}^j a_k=c\mu\right.\right\},\text{ for integers $1\leq c<s$, $0\leq i<j\leq n$}.$$ These hyperplanes will partition $D(d)$ into $s^n$ parts as $\mathbf E=(E_1,\ldots,E_{s^n})$. Because $\mu$ is irrational, no integral points will lie on the hyperplanes. Because $\mu<m$ and because of the arrangement of the hyperplanes, each $E_i$ is a subset of a scrambled $n$-simplex of size $m$. Finally, one can use a simple recursive argument to show that $\mathbf E$ is (fully) exceptional. \end{proof} \begin{example}\label{strictex} In Figure \ref{strictfig}, we demonstrate Proposition \ref{strictprop} for the triple $(2,D(11),3^{\times16})$ by exhibiting the prescribed exceptional partition. \begin{figure} \centering \includegraphics{strict2} \caption{A fully exceptional partition for $(2,D(11),3^{\times16})$. $E_i$ is denoted by an $i$ for $i=1,\ldots,9$ and $A,\ldots,G$ for $10,\ldots,16$ respectively.}\label{strictfig} \end{figure} \end{example} Intuitively, the addition of $\delta$ in the proof is designed to eliminate the possibility that points of $D(d)$ lie on the hyperplanes---to avoid ``borderline'' points. When $d=ms$, we have no ``buffer'', so adding $\delta>0$ may give us parts of our partition which are too big and therefore not a subset of a scrambled $n$-simplex of size $m$.
So when $d=ms$, we use almost the same construction for our partitions, but we make careful choices about where to send the borderline points (compare Examples \ref{strictex} and \ref{knownnagataex}). In the cases excluded by Theorem \ref{nagatagen}, it is not possible to make these choices and end up with an exceptional partition, but in all other cases it is. We do this explicitly for our base cases, which sets us up for induction on $n$. \begin{lemma}\label{basecaselem} The triple $\left(n,D(sm),m^{\times s^n}\right)$ admits an exceptional partition in which each part is a subset of a scrambled $n$-simplex of size $m$ in the cases: \begin{enumerate} \item $n=2$, $s\geq4$ \item $n=3$, $s=3$ \item $n=4$, $s=2$. \end{enumerate} \end{lemma} \begin{proof}[Sketch of Proof] For the case of $n=2$, $s\geq4$, we show that one is able to reduce to Proposition \ref{strictprop}. We construct the partition in two steps. The first $6s-9$ parts form a ``border'' around $D(sm)$, whose complement is a translation of $D(sm-3m-3)$. The remaining $(s-3)^2$ parts can then be constructed using Proposition \ref{strictprop}, since $sm-3m-3<(s-3)m$. Rather than giving all of the details, we refer the reader to Example \ref{knownnagataex} for an illustration. For the other two cases, explicit descriptions of $\mathbf E$ are also possible. \end{proof} \begin{example}\label{knownnagataex} In Figure \ref{knownnagatafig} we demonstrate the first part of Lemma \ref{basecaselem} by illustrating the first step in the prescribed full generalized reduction algorithm for $\left(2, D(15),3^{\times 25}\right)$. We see that the partition is exceptional because no other sextuple of points has the same centroid as $E_1$, no pair of disjoint sextuples of points $F_2$, $F_3$ with $\sigma(F_2),\sigma(F_3)\neq0$ has the same centroids as $E_2$ and $E_3$ respectively, etc.
The remaining points form a translation of $D(3)$, and by Proposition \ref{strictprop}, there exists an exceptional partition of $\left(2,D(3),3^{\times4}\right)$. Notice that the key to making this construction work is that $s\geq4$, so that we can reduce to Proposition \ref{strictprop}. \begin{figure} \centering \includegraphics{knownnagata} \caption{An exceptional partition of $\left(2,D(15),3^{\times 21}\right)$.}\label{knownnagatafig} \end{figure} \end{example} Finally, we prove the inductive step, which gives rise to the theorem. \begin{proof}[Sketch of Proof of Theorem \ref{nagatagen}] The case of $d<sm$ already being covered, we fix $m$ and $s$, let $d=sm$, and induct on $n$. Given the base cases from Lemma \ref{basecaselem}, this will prove the theorem. We again construct the partition $\mathbf E$ in two steps. The first $s^n-(s-1)^n$ parts will partition $D_1=\{\mathbf a\in D(d)|a_n\leq m\}$. The complement $D(d)\smallsetminus D_1$ is then a translation of $D(sm-m-1)$, on which we can construct the remaining $(s-1)^n$ parts of the partition by Proposition \ref{strictprop}. So it remains to construct a partition of $D_1$ with each of the $s^n-(s-1)^n$ parts a subset of a scrambled $n$-simplex of size $m$. Let $D'=\{\mathbf a\in D^n(d)|a_n=0\}$. Then considering $D'$ as a subset of $\N^n$, we see that it is exactly $D^{n-1}(sm)$. By the inductive hypothesis, there exists an exceptional partition $(E'_1,\ldots,E_{s^{n-1}}')$ of $D'$ with each $E'_k$ a subset of a scrambled $(n{-}1)$-simplex of size $m$. Let $\pi$ be the projection of $D^n(sm)$ onto $D'$ via $(a_0,\ldots,a_n)\mapsto(a_0,\ldots,a_{n-1},0)$. Let $\hat E'_k=\pi^{-1}(E'_k)\cap D_1$. Each $\hat E'_k$ then has the form $$\hat E'_k\subseteq (\text{a scrambled $(n{-}1)$-simplex of size $m$})\times\{0,\ldots,m\}.$$ We can subdivide the set on the right-hand side into subsets of scrambled $n$-simplices of size $m$, which will induce a partition of $\hat E'_k$.
Collecting all such parts from all of the $\hat E'_k$, we end up with the desired fully exceptional partition of $D_1$. \end{proof} \bibliographystyle{amsplain}
\section{Youla Parameterization via REN}\label{sec:approach} In this section, we first recall a recently developed neural network architecture -- the recurrent equilibrium network \cite{revay2021recurrent}. Then, we use it to construct a nonlinear Youla parameterization for the uncertain linear system. Extensions to more general system settings are also discussed. \subsection{Recurrent equilibrium networks} \begin{figure}[!bt] \centering \includegraphics[width=0.5\linewidth]{feedback.pdf} \caption{A Lur'e system perspective of the REN}\label{fig:lure} \end{figure} An REN is a nonlinear dynamical system of the form: \begin{equation}\label{eq:ren} \begin{split} \renewcommand\arraystretch{1.1} \begin{bmatrix} x_{t+1} \\ v_t \\ y_t \end{bmatrix}&= \overset{W}{\overbrace{ \left[ \begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_{1} & D_{11} & D_{12} \\ C_{2} & D_{21} & D_{22} \end{array} \right] }} \begin{bmatrix} x_t \\ w_t \\ u_t \end{bmatrix}+ \overset{b}{\overbrace{ \begin{bmatrix} b_x \\ b_v \\ b_y \end{bmatrix} }}\\ w_t=\sigma&(v_t):= \begin{bmatrix} \sigma(v_{t}^1) & \sigma(v_{t}^2) & \cdots & \sigma(v_{t}^{n_v}) \end{bmatrix}^\top \end{split} \end{equation} where $ x_t\in \mathbb{R}^{n_x},u_t \in \mathbb{R}^{n_u},y_t\in \mathbb{R}^{n_y}$ are the state, input and output, respectively, and $ v_t,w_t\in \mathbb{R}^{n_v}$ are the input and output of the neuron layer. We assume that the activation function $ \sigma:\mathbb{R}\rightarrow\mathbb{R}$ is slope-restricted to $[0,1]$. In this work, we will use the rectified linear unit (ReLU) $\sigma(x)=\max(x,0)$ as the default activation for RENs. The learnable parameter is $\theta'=(W,b)$ where $W$ is the weight matrix and $b$ is the bias vector. The REN can also be viewed as a Lur'e system, see Fig.~\ref{fig:lure}.
The feedback structure forms an \emph{implicit} or \emph{equilibrium} neuron layer: \begin{equation}\label{eq:equilibrium} w_t=\sigma(D_{11}w_t+C_1x_t+D_{12}u_t+b_v), \end{equation} whose solutions are also the equilibrium points of the difference equation $w_t^{k+1}=\sigma(D_{11}w_t^k+b_w)$ or the ordinary differential equation $\frac{d}{ds}w_t(s)=-w_t(s)+\sigma(D_{11}w_t(s)+b_w) $, where $b_w=C_1x_t+D_{12}u_t+b_v$ is ``frozen'' for each time-step. Solving \eqref{eq:equilibrium} online is equivalent to running an infinite-depth feedforward network \cite{bai2019deep}. The matrix $D_{11}$ can be interpreted as the adjacency matrix of the graph defining interconnections between the neurons. By imposing different block structures on $D_{11}$, we can divide the implicit layer into many sub-layers and formulate complex network topologies, including DNNs, CNNs, ResNets, etc.\ \cite{el2021implicit}. Since the nonlinear activation function is slope-restricted in $[0,1]$, the neuron layer $\sigma$ satisfies the following incremental integral quadratic constraints (IQCs): \begin{equation} \begin{bmatrix} \Delta v_t \\ \Delta w_t \end{bmatrix}^\top \begin{bmatrix} 0 & \Lambda \\ \Lambda & -2\Lambda \end{bmatrix} \begin{bmatrix} \Delta v_t \\ \Delta w_t \end{bmatrix}\geq 0,\quad \forall t\in \mathbb{N} \end{equation} where $ \Lambda\in \mathbb{R}^{n_v\times n_v}$ is a positive diagonal matrix and $ (\Delta \bm{v},\Delta \bm{w})$ is the difference between any pair of input-output trajectories of $\sigma$.
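Before moving on, a minimal numerical sketch of the equilibrium layer \eqref{eq:equilibrium} may be helpful (this is not the parameterization of \cite{revay2021recurrent} itself; the helper names are illustrative). The layer can be solved by the fixed-point iteration above, and when $D_{11}$ is strictly lower triangular, as in the acyclic RENs used later, a single forward sweep gives the exact solution:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def equilibrium_layer(D11, b_w, iters=200):
    """Solve w = relu(D11 @ w + b_w) by iterating w <- relu(D11 w + b_w),
    with b_w = C1 x_t + D12 u_t + b_v 'frozen' for the time-step."""
    w = np.zeros_like(b_w)
    for _ in range(iters):
        w = relu(D11 @ w + b_w)
    return w

def equilibrium_layer_acyclic(D11, b_w):
    """Explicit solution when D11 is strictly lower triangular:
    each neuron depends only on earlier neurons, so one sweep suffices."""
    q = b_w.shape[0]
    w = np.zeros(q)
    for i in range(q):
        w[i] = max(D11[i, :i] @ w[:i] + b_w[i], 0.0)
    return w
```

For a strictly lower-triangular $D_{11}$ the iteration converges exactly after $n_v$ steps, and both routines return the same $w_t$.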
From the IQC theorem \cite{Megretski:1997}, we can conclude that the REN satisfies the incremental IQC defined by $(Q,S,R)$: \begin{equation} \sum_{t=0}^{T}\begin{bmatrix} \Delta y_t \\ \Delta u_t \end{bmatrix}^\top \begin{bmatrix} Q & S^\top \\ S & R \end{bmatrix} \begin{bmatrix} \Delta y_t \\ \Delta u_t \end{bmatrix}\geq 0,\quad \forall T\in \mathbb{N} \end{equation} where $ 0\succeq Q \in \mathbb{R}^{n_y\times n_y}$, $S\in \mathbb{R}^{n_u\times n_y}$ and $ 0 \preceq R \in \mathbb{R}^{n_u\times n_u}$, if there exist a positive-definite $ P\in \mathbb{R}^{n_x\times n_x}$ and a positive diagonal matrix $\Lambda\in \mathbb{R}^{n_v\times n_v}$ such that \begin{equation}\label{eq:lmi-qsr-explicit} \begin{split} \begin{bmatrix} P & -C_1^\top \Lambda & C_2^\top S^\top\\ -\Lambda C_1 & W & D_{21}^\top S^\top - \Lambda D_{12} \\ S C_2 & S D_{21} - D_{12}^\top \Lambda & R +SD_{22}+D_{22}^\top S^\top \end{bmatrix}- \\ \begin{bmatrix} A^\top \\ B_1^\top \\ B_2^\top \end{bmatrix}P \begin{bmatrix} A^\top \\ B_1^\top \\ B_2^\top \end{bmatrix}^\top - \begin{bmatrix} C_2^\top \\ D_{21}^\top \\ D_{22}^\top \end{bmatrix} Q \begin{bmatrix} C_2^\top \\ D_{21}^\top \\ D_{22}^\top \end{bmatrix}^\top \succ 0. \end{split} \end{equation} Important special cases of incremental IQCs include: \begin{itemize} \item $Q=-\frac{1}{\gamma}I, S=0, R=\gamma I$: the REN satisfies an $\ell_2$ Lipschitz bound, a.k.a. an incremental $\ell_2$-gain bound, of $\gamma$. \item $Q=0,S=I, R=0$: the REN satisfies the incremental passivity condition. \end{itemize} An intuitive way to learn RENs is through constrained optimization. However, the LMI \eqref{eq:lmi-qsr-explicit} quickly becomes the computational bottleneck as the model size increases. A central result of \cite{revay2021recurrent} is a \emph{direct parameterization} of all well-posed RENs. Roughly speaking, by applying a certain transformation $\theta'=f(\theta)$, Condition \eqref{eq:lmi-qsr-explicit} is automatically satisfied for any $\theta\in \mathbb{R}^N$.
That is, learning an REN becomes an unconstrained optimization problem in the new coordinates $\theta$. Throughout the rest of this paper, we will utilize a subclass of RENs, called \emph{acyclic} RENs (aRENs), where the weight $D_{11}$ is constrained to be strictly lower triangular. That is, the equilibrium layer \eqref{eq:equilibrium} has a feedforward structure. The major benefit of the aREN is its simple implementation, since \eqref{eq:equilibrium} admits an explicit solution. Various learning tasks in \cite{revay2021recurrent} show that aRENs often provide models of similar quality to RENs. \subsection{Youla-REN} First, we make the following assumption on the uncertain linear system \eqref{eq:system}. \begin{assumption}\label{asmp:1} There exists a robust controller of the form: \begin{equation}\label{eq:K} u=-Kx+v \end{equation} where $v\in\mathbb{R}^m$ is an additional control augmentation, such that system \eqref{eq:system} has a finite $\ell_2$-gain bound from $v$ to $x$. \end{assumption} This is equivalent to the uncertain linear system \eqref{eq:system} being robustly stabilizable by linear state-feedback control. With the extra control augmentation $v$, we are able to search for a policy with better control performance while maintaining the robust stability guarantee. The basic idea is to build on a standard method for \emph{linear} feedback optimization: the Youla-Kucera parameterization, a.k.a.\ $Q$-augmentation \cite{youla1976modern,zhou1996robust}. Letting $ z=(x,u,w)$ be the performance output, the closed-loop dynamics can be written as the transfer matrix \begin{equation} \begin{bmatrix} \bm{x} \\ \bm{z} \end{bmatrix}= \overset{\bm{G}(\rho)}{\overbrace{ \begin{bmatrix} \bm{G}_{xv}(\rho) & \bm{G}_{xw}(\rho) \\ \bm{G}_{zv}(\rho) & \bm{G}_{zw}(\rho) \end{bmatrix}}} \begin{bmatrix} \bm{v} \\ \bm{w} \end{bmatrix} \end{equation} where $ \bm{G}_{xv},\bm{G}_{xw},\bm{G}_{zv},\bm{G}_{zw}$ are stable for all $\rho \in \mathbb{P}$.
Now let us consider the scheme shown in Fig.~\ref{fig:youla}a, where the dynamics of $\bm{G}_\Delta$ can be described by \begin{equation} \begin{bmatrix} \tilde{\bm{x}} \\ \bm{z} \end{bmatrix}= \begin{bmatrix} \bm{G}_{xv}(\rho)-\bm{G}_{xv}(\hat{\rho}) & \bm{G}_{xw}(\rho) \\ \bm{G}_{zv}(\rho) & \bm{G}_{zw}(\rho) \end{bmatrix} \begin{bmatrix} \bm{v} \\ \bm{w} \end{bmatrix} \end{equation} where $\hat{\rho}$ is a nominal value chosen from $\mathbb{P}$. Note that the above system is incrementally stable. \begin{figure}[!bt] \centering \begin{tabular}{cc} \includegraphics[width=0.42\linewidth]{Youla-param} & \includegraphics[width=0.45\linewidth]{Youla-param-r} \\ (a) disturbance rejection & (b) reference tracking \end{tabular} \caption{Youla policy parameterization}\label{fig:youla} \end{figure} \begin{theorem}\label{thm:main} Suppose that Assumption~\ref{asmp:1} holds for system \eqref{eq:system} and $\bm{G}_\Delta$ admits the incremental IQC defined by \begin{equation*} \overset{Q}{\overbrace{\begin{bmatrix} Q_{xx} & 0 \\ 0 & Q_{zz} \end{bmatrix}}}\prec 0, \quad \overset{S}{\overbrace{\begin{bmatrix} S_{vx} & 0 \\ 0 & S_{wz} \end{bmatrix}}},\quad \overset{R}{\overbrace{\begin{bmatrix} R_{vv} & 0 \\ 0 & R_{ww} \end{bmatrix}}}\succ 0. \end{equation*} Let $\bm{Q}_\theta$ with $\theta\in \mathbb{R}^N$ be an REN satisfying the incremental IQC defined by $ (\overline{Q},\overline{S},\overline{R})$. Then, the closed-loop system is contracting and yields a finite Lipschitz bound from $ w$ to $z$ for all $\rho \in \mathbb{P}$ if \begin{equation}\label{eq:stability} \begin{bmatrix} Q_{xx}+\overline{R} & S_{vx}^\top + \overline{S} \\ S_{vx}+\overline{S}^\top & R_{vv}+\overline{Q} \end{bmatrix}\prec 0.
\end{equation} \end{theorem} \begin{remark} If $ Q_{xx}=-\frac{1}{\alpha}I, S_{vx}=0, R_{vv}=\alpha I$ with $\alpha>0$, i.e., $\bm{G}_\Delta$ has an incremental $\ell_2$-gain bound of $\alpha$ from $ v$ to $x$, then Condition~\eqref{eq:stability} reduces to the condition of the incremental small-gain theorem. That is, we can make the closed-loop system contracting by choosing $\bm{Q}_\theta$ with $\overline{Q}=-\frac{1}{\gamma}I, \overline{S}=0, \overline{R}=\gamma I$ for some positive constant $\gamma < 1/\alpha$. For the reference tracking problem, we can feed the reference $\bm{r}$ to the Youla parameter $\bm{Q}_\theta$, as shown in Fig.~\ref{fig:youla}b, and specify an arbitrarily large but finite gain bound from $\bm{r}$ to $\bm{v}$, that is, $ \overline{Q} = -\frac{1}{\gamma}I$ and $\overline{R}=\diag(\gamma I, \eta I)$ where $\eta\gg \gamma$, which can help $\bm{Q}_\theta$ learn the mapping between the reference and the nominal input. \end{remark} We call the following controller a \emph{Youla-REN} policy: \begin{equation}\label{eq:youla} \pi_\theta: \begin{cases} \hat{x}_{t+1} = [A(\hat{\rho})-BK]\hat{x}_t+Bv_t \\ v_t=\bm{Q}_\theta(x_t-\hat{x}_t) \\ u_t=-Kx_t+v_t \end{cases} \end{equation} where $\hat{x}_t$ is the state of $\bm{G}(\hat{\rho})$. By introducing nonlinearity in $\bm{Q}_\theta$, we can significantly increase the expressive power of the candidate policy set, which is useful for learning optimal policies subject to general cost functions. \subsection{Extensions to more complex systems} \label{sec:extension} \subsubsection{Nonlinear systems} For certain classes of continuous-time nonlinear systems, there exist several constructive methods for designing controllers that render the closed-loop system contracting \cite{manchester2017control}, virtually contracting \cite{wang2021nonlinear} and robustly contracting \cite{manchester2018robust}.
The proposed Youla-REN can be naturally integrated with those methods by introducing an augmented control input, which is similar to \cite{van2000l2}. \subsubsection{Partially observed systems} When only partial information $y=Cx$ is available for \eqref{eq:system}, we can construct a standard output-feedback structure with $v_t$ as an additional control augmentation \cite{boyd1991linear}: \begin{align} \hat{x}_{t+1}&=A(\hat{\rho})\hat{x}_t+Bu_t+L\tilde{y}_t \label{eq:observer}\\ \tilde{y}_t&=y_t-C\hat{x}_t \\ u_t &= -K\hat{x}_t+v_t \end{align} where the observer gain $L$ is designed such that \eqref{eq:observer} is robustly stable. By estimating the incremental $\ell_2$-gain bound for $\bm{G}_\Delta: \bm{v}\mapsto \tilde{\bm{y}}$, we can construct a robustly stabilizing policy set via RENs $\bm{Q}_\theta:\tilde{\bm{y}}\mapsto \bm{v}$. For partially observed nonlinear systems, \cite{yi2021reduced} developed constructive methods for building globally converging observers based on contraction analysis. \section{Conclusions} In this work, we have presented a novel control policy parameterization called Youla-REN, which has built-in stability guarantees for uncertain systems. The control policy is flexible and admits a direct parameterization, allowing learning via unconstrained optimization. We have illustrated the benefits of the new policy class via several simulation examples. Our future work will further explore uses of Youla-RENs for robust reinforcement learning and online control. \section{Examples} \label{sec:example-1} In this section, we will illustrate the proposed approach via a variety of numerical simulations. \subsection{System setup for linearized cart-pole system} Let $p_x$ be the cart position and $\psi$ be the angular displacement of the pendulum from its vertical position. The control task is to balance an inverted pendulum resting on top of a cart (i.e., $ \psi=0$) by exerting horizontal forces $u\in \mathbb{R}$ on the cart.
For a pendulum of length $\ell$ and mass $M_p$, and for a cart of mass $ M_c$, the linearized dynamics of the cart-pole system at the vertical position are \begin{equation}\label{eq:cp-ldyn} \begin{split} \dot{x}=& \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -\frac{M_pg}{M_c} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & \frac{(M_c+M_p)g}{M_c\ell} & 0 \end{bmatrix}x+ \begin{bmatrix} 0 \\ \frac{1}{M_c} \\ 0 \\ -\frac{1}{M_c} \end{bmatrix}u \\ :=& A(\rho)x+Bu \end{split} \end{equation} where $ x=(p_x,\dot{p}_x,\psi,\dot{\psi})$ and $ \rho=M_p$ are the state and uncertain parameter, respectively. Model parameters are given by $M_p\in [0.2,2]$, $ M_c=1$, $\ell=1$ and $ g=9.81 $. We design a robust controller \eqref{eq:K} by solving the following parametric LMIs: \begin{equation}\label{eq:lmi} \begin{split} \min_{\beta,X,Y}\; & \quad \beta \\ \mathrm{s.t.}\; & \begin{bmatrix} W(\rho) & B & X \\ B^\top & -\beta I & 0 \\ X & 0 & -\beta I \end{bmatrix} \preceq 0, \\ & W(\rho) + 2\lambda X \succeq 0,\ X \succeq 0 \end{split} \end{equation} where $ W(\rho)= XA^\top(\rho)+A(\rho)X+BY+Y^\top B^\top$. The first LMI implies that the closed-loop system achieves an $\mathcal{L}_2$-gain bound of $\beta$ from $v$ to $x$. By minimizing $\beta$, we wish to have a small $\mathcal{L}_2$-gain bound $\alpha$ for $\bm{G}_\Delta$ and a large set of $\bm{Q}_\theta$ by Thm.~\ref{thm:main}. The second LMI in \eqref{eq:lmi} means that the convergence rate of the closed-loop system is smaller than $\lambda$, avoiding an overly aggressive gain $K$. By choosing $\lambda=5$, we obtain a robust controller \eqref{eq:K} with \[ K=YX^{-1}=\begin{bmatrix} -7.40 & -14.96 & -125.82 & -27.73 \end{bmatrix} \] and the corresponding gain bound for $\bm{Q}_\theta$ can be estimated as $\gamma=60$ via Thm.~\ref{thm:main}. Finally, we discretize the linearized cart-pole system \eqref{eq:cp-ldyn} with sampling time $t_s=0.05$.
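The setup above is straightforward to reproduce numerically. The sketch below builds $A(\rho)$ and $B$ from \eqref{eq:cp-ldyn}, applies a forward-Euler discretization with $t_s=0.05$ (the text does not specify the discretization scheme, so Euler is an assumption here), and rolls out the Youla-REN policy \eqref{eq:youla}; with $\bm{Q}_\theta\equiv0$ the policy reduces to the robust linear controller $u=-Kx$. All helper names are illustrative:

```python
import numpy as np

M_c, ell, g, t_s = 1.0, 1.0, 9.81, 0.05

def cartpole_AB(M_p):
    """Continuous-time linearized cart-pole matrices A(rho), B, with rho = M_p."""
    A = np.array([
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, -M_p * g / M_c, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, 0.0, (M_c + M_p) * g / (M_c * ell), 0.0],
    ])
    B = np.array([0.0, 1.0 / M_c, 0.0, -1.0 / M_c])
    return A, B

def discretize_euler(A, B, ts=t_s):
    """Forward-Euler discretization x_{t+1} = (I + ts*A) x_t + ts*B u_t (an assumption)."""
    return np.eye(A.shape[0]) + ts * A, ts * B

K = np.array([-7.40, -14.96, -125.82, -27.73])  # robust gain from the LMI design

def youla_ren_rollout(Ad, Ad_hat, Bd, K, Q_theta, x0, T):
    """Roll out the Youla-REN policy: the internal model runs the nominal
    closed loop, and Q_theta maps the innovation x - x_hat to v."""
    x, x_hat = x0.astype(float), np.zeros_like(x0, dtype=float)
    xs, us = [x.copy()], []
    for _ in range(T):
        v = Q_theta(x - x_hat)                             # augmentation from the Youla parameter
        u = -K @ x + v                                     # u_t = -K x_t + v_t
        x_hat = (Ad_hat - np.outer(Bd, K)) @ x_hat + Bd * v  # nominal model update
        x = Ad @ x + Bd * u                                # true (uncertain) plant update
        xs.append(x.copy())
        us.append(u)
    return np.array(xs), np.array(us)
```

In practice `Q_theta` would be a trained (a)REN with an $\ell_2$-gain bound below $\gamma$; the zero map serves only as a sanity check.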
\subsection{REN vs RNN/LSTM}\label{sec:Q-learn} We first consider a quadratic regulation problem with cost \begin{equation}\label{eq:qr} c(x,u)=x^\top Q x + R u^2 \end{equation} where $Q=\diag(10,0.1,10,0.1)$ and $ R=0.01$. We will compare the performance of the Youla control policy \eqref{eq:youla} with the following four choices of $\bm{Q}_\theta$: REN, long short-term memory (LSTM) \cite{hochreiter1997long} and vanilla recurrent neural networks (RNN) \cite{elman1990finding} with ReLU and tanh activations, referred to as RNNr and RNNt, respectively. \paragraph{Training details} All Youla parameters have approximately $ 255,000$ trainable parameters. That is, the RNN has 500 neurons, the LSTM has 250 neurons and the REN has $n_x=40$ states and $ n_v=500 $ neurons. We train those policies for 600 epochs with an initial learning rate of $10^{-3}$, which is reduced to $10^{-4}$ after 400 epochs. During each epoch, we first take $M=10$ random samples of system setups from the uniform distribution, i.e., $ (\rho,x_0) \sim \mathcal{U}(\mathbb{P}\times \mathbb{X})$ where $ \mathbb{P}=[0.2,2]$ and $ \mathbb{X}=[-10,10]\times [-0.5,0.5]\times [-2,2] \times [-0.5,0.5]$, then compute $(\hat{J},\nabla \hat{J}) $ based on the closed-loop responses of \eqref{eq:youla} and \eqref{eq:cp-ldyn} over a horizon of $ T=60 $, and finally update the parameter $\theta$ via Adam \cite{Kingma:2017}. The test cost is calculated with $M=50$ system setups and a horizon of $ T = 100$. \paragraph{Results and discussion} We have plotted the test cost versus epochs in Fig.~\ref{fig:Q-train}. The black solid line shows the performance of the robust linear controller \eqref{eq:K} with zero augmented input (i.e., $ v=0$), which can be taken as an upper bound on the optimal cost. The black dashed line shows the performance of optimal LQR controllers with known uncertain parameter $\rho$, which serves as a lower bound on the optimal cost.
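The evaluation pipeline just described amounts to sampling scenarios uniformly and accumulating the quadratic cost \eqref{eq:qr} along rollouts. A minimal sketch (the function names are illustrative; any policy callable can be plugged in):

```python
import numpy as np

Q = np.diag([10.0, 0.1, 10.0, 0.1])
R = 0.01

def stage_cost(x, u):
    """Quadratic stage cost c(x, u) = x'Qx + R u^2 from the regulation problem."""
    return x @ Q @ x + R * u**2

def sample_setup(rng):
    """Draw (rho, x0) uniformly from P x X as in the training details."""
    rho = rng.uniform(0.2, 2.0)
    lo = np.array([-10.0, -0.5, -2.0, -0.5])
    x0 = rng.uniform(lo, -lo)  # X is symmetric about the origin
    return rho, x0
```

A batch of $M$ such setups, each rolled out for $T$ steps, gives the empirical cost $\hat J$ used for the Adam updates.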
Firstly, we observed that an unstable control policy is found for Youla-RNNr, since there is no $\ell_2$-gain regularization applied to $\bm{Q}_\theta$ and the ReLU activation is unbounded. Then, we observed that both Youla-RNNt and Youla-LSTM have stable responses, although the cost grows significantly larger than that of the robust linear controller during the first 100 epochs. This is mainly due to the fact that the states of the LSTM and RNNt live in some compact sets. But neither Youla-LSTM nor Youla-RNNt can guarantee global exponential stability as their Lipschitz bounds may grow larger than $\gamma$. This can be verified in Fig.~\ref{fig:Q-sample} where the Youla-RNNr yields multiple equilibrium points. After training, the Youla-LSTM and Youla-RNNt have performance gaps $ (\hat{J}-\hat{J}_{opt})/\hat{J}_{opt}$ of $7.72\%$ and $ 9.31\%$, respectively. Thanks to the prescribed Lipschitz bound, Youla-REN can ensure global exponential stability of the closed-loop system. Fig.~\ref{fig:Q-train} shows that its cost decreases quickly and reaches a $0.55\%$ performance gap after 250 epochs, which significantly outperforms the other $\bm{Q}$-parameterizations. \begin{figure}[!bt] \centering \includegraphics[width=\linewidth]{Q-train} \caption{Test cost versus epochs for the linearized cart-pole system.}\label{fig:Q-train} \end{figure} \begin{figure}[!bt] \centering \includegraphics[width=\linewidth]{Q-sample} \caption{State responses of different Youla policy parameterizations, where each case has a different uncertain parameter and initial condition. }\label{fig:Q-sample} \end{figure} We have also plotted the test cost versus uncertain parameters for both the training data distribution and an unseen data distribution in Fig.~\ref{fig:Q-test}. The Youla-REN achieves near-optimal performance for the training data and also generalizes to the unseen data. The performance gaps of Youla-RNNt and Youla-LSTM increase to $12.55\%$ and $16.14\%$, respectively, for the unseen data.
\begin{figure}[!bt] \centering \includegraphics[width=\linewidth]{Q-test} \caption{Test cost with training data distribution and unseen distribution versus uncertain parameters. Here the unseen data is from a uniform distribution over a shifted initial state set obtained by moving the center of $\mathbb{X}$ from the origin to $ (10, 0, 0, 0) $.}\label{fig:Q-test} \end{figure} \subsection{Youla vs natural control parameterization} Here we compare the proposed Youla policies with natural control policies (see Fig.~\ref{fig:ctrl}) of the form $ \bm{v}=\bm{C}_\theta(\bm{x})$, which are denoted by Ctrl-REN, Ctrl-RNNt and Ctrl-LSTM depending on the model used for $\bm{C}_\theta$. We choose the $\ell_2$-gain bound of $\bm{C}_\theta$ to be smaller than $ 1/\beta$ where $\beta$ is obtained by \eqref{eq:lmi}, which ensures closed-loop contracting behavior via the incremental small-gain theorem. By comparing the test costs in Figs.~\ref{fig:Q-train} and \ref{fig:C-train}, we observed that Ctrl-LSTM and Ctrl-RNNt have almost double the peak test cost and also require about twice as many epochs to learn a controller that outperforms the robust linear policy. Although the policies learned from the Ctrl-REN parameterization generally have decreasing test cost except for local spikes, their performance is still worse than that of Youla-REN. One potential reason is that the gain bound for $\bm{C}_\theta$ in Ctrl-REN is about 6, which is 10 times smaller than that of $\bm{Q}_\theta$ in Youla-REN, since the gain bound of $\bm{G}$ is usually much larger than that of $\bm{G}_\Delta$.
\begin{figure}[!bt] \centering \includegraphics[width=0.35\linewidth]{ctrl-param} \caption{Natural control policy parameterization}\label{fig:ctrl} \end{figure} \begin{figure}[!bt] \centering \includegraphics[width=\linewidth]{C-train} \caption{Test cost versus epochs for the quadratic regulation problem using Youla and Ctrl parameterizations.}\label{fig:C-train} \end{figure} \subsection{Non-linear vs linear $Q$-parameter} We now consider the quadratic regulation problem with a soft input constraint: \begin{equation} c(x,u)=x^\top Qx+Ru^2+\eta \max(|u|-\overline{u},0) \end{equation} with $ \overline{u}=5$ as the bound and $ \eta=50$ as the weighting coefficient. The purpose is to learn a controller which generates control signals $ |u_t|\leq \overline{u}$ when the state is sufficiently close to the set-point, while still being able to use $|u_t|> \overline{u}$ over a short window to stabilize the system when the state is far away. We train both linear and nonlinear $\bm{Q}$ using RENs with $n_x=50, n_v=0$ and $n_x=50, n_v=400$, respectively. For the remaining examples, the initial state set is changed to \[ \mathbb{X}=[-5,5]\times [-0.1,0.1]\times [-1,1] \times [-0.1,0.1]. \] The learning procedure is similar to that of Section~\ref{sec:Q-learn} except that it uses $T=100$ and $ T=120 $ for training and testing, respectively. As shown in Fig.~\ref{fig:ub-train}, the cost of the nonlinear $\bm{Q}$ decreases faster than that of the linear one. After 500 epochs, the nonlinear $\bm{Q}$ also outperforms the linear $\bm{Q}$ by 11.5\%. We have also plotted the closed-loop responses in Fig.~\ref{fig:ub-sample}. Their state responses are very close to each other. The main difference is in the input trajectories. When the state is far from the origin, both the linear and nonlinear $\bm{Q}$ produce large control actions ($|u|>\overline{u}$).
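The soft input constraint above is a one-line hinge penalty on top of the quadratic cost; a sketch (names illustrative):

```python
import numpy as np

Q = np.diag([10.0, 0.1, 10.0, 0.1])
R, u_bar, eta = 0.01, 5.0, 50.0

def soft_constrained_cost(x, u):
    """Quadratic cost plus a hinge penalty eta * max(|u| - u_bar, 0)
    that softly enforces |u| <= u_bar without a hard constraint."""
    return x @ Q @ x + R * u**2 + eta * max(abs(u) - u_bar, 0.0)
```

Because the penalty is zero inside the band $|u|\leq\overline{u}$ and grows linearly outside it, short excursions beyond the bound stay affordable during transients.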
After a few steps, the control signal generated by the nonlinear $\bm{Q}$ remains in the desired range while the linear $\bm{Q}$ still produces excessive input actions. \begin{figure}[!bt] \centering \includegraphics[width=\linewidth]{ub-train} \caption{Test cost versus epochs (left) and uncertain parameters (right) for the quadratic regulation problem with soft input constraint.}\label{fig:ub-train} \end{figure} \begin{figure}[!bt] \centering \includegraphics[width=0.9\linewidth]{ub-sample} \caption{Responses of linear and nonlinear $\bm{Q}$-parameters, where the dashed line indicates the upper and lower bounds.}\label{fig:ub-sample} \end{figure} \subsection{Disturbance rejection} We revisit the quadratic regulation problem \eqref{eq:qr}, but now the system is perturbed by an unknown input disturbance, i.e., \begin{equation} \dot{x}=A(\rho)x+B(u+w). \end{equation} If $w$ is Gaussian noise and $\rho$ is known, then the optimal policy is the LQR controller. For general disturbance types, the optimal controller may be nonlinear. Thus, it is natural to search for a better controller using the Youla policy parameterization. Here we consider two scenarios: constant and sinusoidal $ \bm{w}$. For the constant disturbance, we set $ w_t=w_0$ for $t\leq T$ where $ w_0\sim \mathcal{U}([-5,5])$. For the sinusoidal input, we choose $ w_t=A\sin(\omega t+\phi)$ where $ A\sim \mathcal{U}([0,10])$, $ \omega\sim \mathcal{U}([0.05\pi,0.5\pi])$ and $\phi \sim \mathcal{U}([-\pi/2,\pi/2])$. Fig.~\ref{fig:const-sample} shows that both the robust and LQR controllers rely on a steady-state error to cancel the constant disturbance. The Youla-REN can compensate for the input disturbance while maintaining the state around the desired equilibrium. For the sinusoidal disturbance, the Youla-REN exhibits smaller oscillation amplitudes in both the state and control input.
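The two disturbance classes can be sampled as follows (a sketch; function names are illustrative):

```python
import numpy as np

def constant_disturbance(rng, T):
    """w_t = w_0 for all t <= T, with w_0 ~ U([-5, 5])."""
    w0 = rng.uniform(-5.0, 5.0)
    return np.full(T + 1, w0)

def sinusoidal_disturbance(rng, T):
    """w_t = A sin(omega*t + phi) with the ranges used in this subsection."""
    A = rng.uniform(0.0, 10.0)
    omega = rng.uniform(0.05 * np.pi, 0.5 * np.pi)
    phi = rng.uniform(-np.pi / 2, np.pi / 2)
    t = np.arange(T + 1)
    return A * np.sin(omega * t + phi)
```

Each sampled sequence is simply added to the control channel of the rollout, matching $\dot{x}=A(\rho)x+B(u+w)$ after discretization.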
\begin{figure}[!bt] \centering \begin{tabular}{c} \includegraphics[width=0.76\linewidth]{const-sample} \\ (a) constant $\bm{w}$ \\ \includegraphics[width=0.8\linewidth]{sin-sample} \\ (b) sinusoidal $ \bm{w}$ \end{tabular} \caption{Responses of Youla-REN to external disturbance.}\label{fig:const-sample} \end{figure} \subsection{Non-quadratic cost} We apply the proposed approach to problems with non-quadratic costs. The first example is the economic cost $ c(x,u)=u^2 $. Note that the optimal policy is simply $ u=0$, which is an unstable controller. By searching within the Youla-REN parameterization, we are able to find a robust stabilizing controller that uses a small amount of control action to stabilize the system, where the state may slowly converge to some non-zero equilibrium point, as shown in Fig.~\ref{fig:non-quadatic}a. The second case is the weighted $\ell_1$ cost $c(x,u)=\|W_1x\|_1+\|W_2u\|_1$ where $ W_1=\begin{bmatrix} 20 & 0.1 & 5 & 0.1 \end{bmatrix}$ and $W_2=0.5$. The resulting closed-loop response shows that both the state and input quickly converge to zero. \begin{figure}[!bt] \centering \begin{tabular}{c} \includegraphics[width=0.8\linewidth]{eco-sample} \\ (a) economic cost \\ \includegraphics[width=0.8\linewidth]{L1-sample} \\ (b) $\ell_1$ cost \end{tabular} \caption{Responses of Youla-REN for non-quadratic costs.}\label{fig:non-quadatic} \end{figure} \section{Introduction} Neural networks have recently gained popularity in various control tasks due to their success in machine learning and artificial intelligence (e.g. \cite{silver2017mastering}). Much existing work focuses on learning neural network controllers for unknown dynamical systems in the framework of reinforcement learning (RL) \cite{sutton2018reinforcement}.
Despite their potential for solving hard control problems, there are still well-known issues with RL controllers, such as sample complexity and interpretability, which impede their application in complex nonlinear systems with critical safety requirements \cite{amodei2016concrete,brunke2021safe}. Even in the most classic control design setting, where a mathematical model of the system is available, learning provably stabilizing controllers is still a challenging problem \cite{chang2019neural}. An intuitive way is to parameterize both the stability certificate (i.e. Lyapunov function) and the control policy via deep neural networks (DNNs), and then use constrained optimization methods to ensure that the corresponding Lyapunov inequality holds for the training data \cite{mehrjou2019deep,berkenkamp2017safe,dai2021lyapunov}. The stability property usually depends on the training data and may not generalize to unseen data. Similar approaches have been applied to learn control barrier functions such that the system state remains in a safe set \cite{taylor2020learning}. Another approach is to project the neural network parameters into a stabilizing policy set based on classic model-based stability analysis methods rather than sampled data \cite{gu2021recurrent,kretchmar2001robust}. Although provable stability is guaranteed, this approach often has a higher computational cost for large-scale neural networks since the set of stabilizing policies is often highly non-convex. A convex inner approximation was developed in \cite{gu2021recurrent}. For RL problems in the linear system setting, \cite{roberts2011feedback} demonstrated that the Youla policy parameterization (\cite{youla1976modern}) can guarantee closed-loop stability and offer a number of performance advantages over some natural and naive parameterizations. More historical details about the Youla parameterization can be found in \cite{boyd1991linear}, and extensions to nonlinear systems can be found in \cite{van2000l2}.
In this paper, we consider designing robust state-feedback controllers for uncertain linear systems such that the accumulated running cost is also minimized. Although the system is linear, the optimal controller for general costs is nonlinear; e.g., model predictive control is a nonlinear policy in the presence of state/input constraints. Our approach can also be extended to other general system setups, such as nonlinear and partially observed systems. {\bf Contributions.} The main contribution of this work is a novel parameterization of nonlinear controllers, called \emph{Youla-REN}, which builds on a recently developed neural network architecture, the recurrent equilibrium network (REN) \cite{revay2021recurrent}, and a nonlinear version of the Youla parameterization. The proposed controller set has ``built-in'' guarantees of stability, that is, all policies in the search space result in a contracting (globally exponentially stable) closed-loop system. Such stability guarantees do not rely on the choice of cost function, the length of rollout trajectories, or the training data distribution, which makes the approach generalizable to unseen data and suitable for various control tasks. Another useful feature of this approach is that all policies are parameterized directly without any constraints, which allows for easy and scalable learning via many unconstrained optimization methods, e.g. stochastic gradient descent (SGD). Finally, we demonstrate the effectiveness of the proposed method via a variety of numerical examples. {\bf Paper outline.} Section~\ref{sec:problem} gives a formal problem formulation. Section~\ref{sec:approach} presents the proposed Youla-REN policy parameterization, followed by various simulation examples in Section~\ref{sec:example-1}. {\bf Notation.} We use $\mathcal{U}(X)$ to denote the uniform distribution over a compact set $X$.
We use bold uppercase letters $ \bm{G},\bm{Q},\ldots$ to represent dynamical systems and bold lowercase letters $ \bm{x},\bm{y},\ldots $ to denote discrete-time signals, i.e., $\bm{x}=(x_0,x_1,\ldots)$. The rest of the notation is standard. \section{Problem Formulation}\label{sec:problem} Consider uncertain linear dynamical systems of the form: \begin{equation}\label{eq:system} x_{t+1}=A(\rho)x_t+Bu_t+w_t \end{equation} with measured state $ x_t\in \mathbb{R}^{n}$, input $ u_t\in \mathbb{R}^{m}$, uncertain parameter $\rho \in \mathbb{P}$ and disturbance $ w_t\sim \mathcal{D}(\mathbb{W})$, where $\mathbb{P},\mathbb{W} $ are compact sets, and $\mathcal{D}$ is a known distribution. Our proposed method can be easily extended to more general nonlinear and partially observed systems; see the discussion in Section~\ref{sec:extension}. The control performance is the average value of a cost over trajectories of length $ T$, i.e., \begin{equation} \ell(\bm{x},\bm{u},\bm{w})=\frac{1}{T+1}\sum_{t=0}^T c(x_t,u_t) \end{equation} where $ \bm{x}=(x_0,x_1,\ldots,x_T),\ \bm{u}=(u_0,u_1,\ldots,u_T),\ \bm{w}=(w_0,w_1,\ldots,w_T)$ are the trajectories of \eqref{eq:system} over the horizon $[0,T]$. The stage cost function $ c$ is assumed to be piecewise differentiable. We wish to design a feedback controller of the form \begin{equation}\label{eq:policy} u=\pi_\theta(x) \end{equation} where $\theta\in \Theta\subseteq \mathbb{R}^N$ is the trainable parameter, such that it (at least approximately) solves the following problem \begin{equation}\label{eq:J-theta} \min_{\theta\in \Theta} \; J(\theta)=\E_{\substack{\rho\sim \mathcal{U}(\mathbb{P}) \\ w_t\sim \mathcal{D}(\mathbb{W})}} \bigl[\ell(\bm{x},\bm{u},\bm{w})\mid \pi_\theta\bigr]. \end{equation} For general stage costs, the optimal controller is nonlinear even if the system is linear and certain. Since it is generally hard to solve \eqref{eq:J-theta} exactly, an alternative way is to search for an approximate solution using data-driven approaches.
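The trajectory-averaged cost $\ell$ defined above is simple to evaluate along a simulated rollout; a sketch with illustrative names, for an arbitrary policy callable:

```python
import numpy as np

def rollout(A, B, policy, x0, ws):
    """Simulate x_{t+1} = A x_t + B u_t + w_t under u_t = policy(x_t)."""
    xs, us = [x0], []
    x = x0
    for w in ws:
        u = policy(x)
        us.append(u)
        x = A @ x + B * u + w
        xs.append(x)
    return xs, us

def average_cost(xs, us, stage_cost):
    """Average of c(x_t, u_t) over the paired (x_t, u_t) samples of a rollout."""
    return np.mean([stage_cost(x, u) for x, u in zip(xs, us)])
```

Averaging this quantity over independently sampled scenarios gives the Monte-Carlo estimate of $J(\theta)$ used in the data-driven approach described next.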
That is, starting with an initial guess $\theta^0$, during the $k$th iteration, we first generate $M$ independent scenarios $ (\rho^i, x_0^i, \bm{w}^i)$, where $\rho^i$ is the uncertain parameter, $x_0^i$ is the initial state and $ \bm{w}^i$ is a disturbance sequence. Then, we compute the empirical cost \begin{equation} \hat{J}(\theta)=\frac{1}{M}\sum_{i=1}^M \ell(\bm{x}^i,\bm{u}^i,\bm{w}^i) \end{equation} where $(\bm{x}^i,\bm{u}^i,\bm{w}^i)$ is the trajectory rollout. Finally, the control parameter is updated via \begin{equation}\label{eq:sgd} \theta^{k+1}=\theta^k-\alpha^k \nabla \hat{J}(\theta^k) \end{equation} where $\alpha^k>0$ is a step size. The basic requirement for the above data-driven approach is that $\pi_\theta $ is a robustly stabilizing controller, which is often achieved by imposing extra stability constraints on $\pi_\theta$. This usually leads to a complex and possibly non-convex constraint on $\theta$, and constrained optimization methods are computationally expensive for learning large-scale neural network controllers. To address those issues, this work mainly focuses on the following problem. \begin{problem}\label{prob:1} Construct an unconstrained robust policy parameterization (i.e., $\Theta=\mathbb{R}^N$) such that $ x_{t+1}=A(\rho)x_t+B\pi_\theta(x_t)$ is globally exponentially stable for all $\rho\in \mathbb{P}$ and $ \theta\in \mathbb{R}^N$. \end{problem} \subsection{Proof of Theorem~\ref{thm:main}} From the IQC condition for $\bm{G}_\Delta$, there exists an incremental storage function $ V_g(x^g,\Delta x^g)\geq 0$ with $ V_g(x^g,0)=0$ and $ x^g=(x,\hat{x})$ such that \begin{equation} \begin{split} V_g(x_{t+1}^g,&\Delta x_{t+1}^g)-V_g(x_{t}^g,\Delta x_{t}^g)\leq \\ &\left[ \begin{array}{c} \Delta \tilde{x}_t \\ \Delta z_t \\ \hline \Delta v_t \\ \Delta w_t \end{array} \right]^\top \begin{bmatrix} Q & S^\top \\ S & R \end{bmatrix} \left[ \begin{array}{c} \Delta \tilde{x}_t \\ \Delta z_t \\ \hline \Delta v_t \\ \Delta w_t \end{array} \right].
\end{split} \end{equation} Similarly, we can also find some incremental storage function $ V_q(x^q,\Delta x^q)\geq 0$ with $ V_q(x^q,0)=0$ where $x^q$ denotes the state of $\bm{Q}_\theta$ such that \begin{equation} \begin{split} V_q(x_{t+1}^q,\Delta &x_{t+1}^q) - V_q(x_{t}^q,\Delta x_{t}^q)\leq \\ & \begin{bmatrix} \Delta v_t \\ \Delta \tilde{x}_t \end{bmatrix}^\top \begin{bmatrix} \overline{Q} & \overline{S}^\top \\ \overline{S} & \overline{R} \end{bmatrix} \begin{bmatrix} \Delta v_t \\ \Delta \tilde{x}_t \end{bmatrix}. \end{split} \end{equation} By adding the above inequalities, we have \begin{equation} \begin{split} V_c(&X_{t+1},\Delta X_{t+1})-V_c(X_t,\Delta X_t) \leq \\ & \begin{bmatrix} \Delta z_t \\ \Delta w_t \end{bmatrix}^\top \begin{bmatrix} Q_{zz} & S_{wz}^\top \\ S_{wz} & R_{ww} \end{bmatrix} \begin{bmatrix} \Delta z_t \\ \Delta w_t \end{bmatrix} + \\ & \begin{bmatrix} \Delta \tilde{x}_t \\ \Delta v_t \end{bmatrix}^\top \begin{bmatrix} Q_{xx}+\overline{R} & S_{vx}^\top + \overline{S} \\ S_{vx}+\overline{S}^\top & R_{vv}+\overline{Q} \end{bmatrix} \begin{bmatrix} \Delta \tilde{x}_t \\ \Delta v_t \end{bmatrix} \\ \leq & \begin{bmatrix} \Delta z_t \\ \Delta w_t \end{bmatrix}^\top \begin{bmatrix} Q_{zz} & S_{wz}^\top \\ S_{wz} & R_{ww} \end{bmatrix} \begin{bmatrix} \Delta z_t \\ \Delta w_t \end{bmatrix} \end{split} \end{equation} where $ V_c=V_g+V_q$ and $ X=(x^g,x^q)$. Here the second inequality follows from Condition~\eqref{eq:stability}. From the above inequality, we can conclude that the closed-loop system is contracting and admits a finite Lipschitz bound from $w$ to $z$.
\section{Introduction} The inner AU around a pre-main sequence star is a complex region. Dust and gas flow inward through the disk, possibly accompanied by massive planets built in the outer disk \citep[e.g.][]{lin96}. Close to the star the dust will sublimate, creating an inner wall at a few tenths of an AU \citep{kam09}. The gas will flow within this radius \citep{eis09} until it is loaded onto the large dipolar stellar magnetic field and lifted out of the midplane. Once locked onto the stellar field this material free-falls onto the star, creating a shock as it strikes the stellar surface and heating a small spot to well above the photospheric temperature \citep{ing13}. Add in the possibility of a significant wind being driven outward from close to the star \citep[e.g.][]{zan13}, and one can begin to understand the wide range of physical processes at work close to the star. Synoptic observations point toward even more complexity. Models \citep[e.g.][]{kul08} and observations \citep[e.g.][]{fan13} of accretion variability indicate that the flow of material onto the star fluctuates on short timescales. Hot and cold spots dot the stellar surface, leading to rapid optical variability \citep[e.g.][]{alen12}. Large scale disk instabilities lead to outbursts lasting years to decades \citep[e.g.][]{zhu10,hil13}, and frequent fluctuations at a wide range of infrared wavelengths indicate structural changes at or near the sublimation edge of the circumstellar disk on weekly timescales \citep[e.g.][]{mor11,wol13,cod14}. The frequency of moderate X-ray flares, believed to arise from magnetic reconnection events in the coronae \citep{fei07}, indicates that the magnetic field is not only complex but constantly varying. Even single epoch measurements of the magnetic field find that not all young stars have strictly dipolar magnetic fields; some are dominated by the octupolar or quadrupolar components close to the stellar surface \citep{gre12}.
Here we attempt to take advantage of the infrared fluctuations to study the disk structure in more detail. By searching for correlations between the infrared variability and other forms of variability (X-ray, accretion, etc.) we can directly study the influence of various factors that may set the structure of the dusty disk. Recent observations have found that the occurrence rate of infrared variability increases with X-ray luminosity \citep{fla13}, suggestive of some sort of connection between the two. Enhanced X-ray emission was observed following the $\sim$few magnitude outburst of the FU Ori Object V1647 Ori \citep{ham10}, suggesting a direct connection between the change in disk structure and enhanced X-ray emission during these large accretion bursts. X-rays are also likely an important contributor to the clearing of the disk by photo-evaporation within the first few million years \citep{owe10}. High-energy X-ray emission may influence the structure of the disk directly through heating and ionization \citep{ski13} or may serve as an indirect tracer of rearrangements of the magnetic field \citep[e.g.][]{goo99}. Previous theoretical studies have focused mainly on the gas, where the X-ray illumination is most strongly felt \citep{are11}, but it may penetrate into the dust layer and heat and/or ionize the dust. The frequent flaring of X-ray emission, with factors of 10 increase in X-ray emission a common occurrence \citep{fei07}, means that this illumination of the dust is constantly, and strongly, variable. \citet{ste07} find that roughly half of the stars in Taurus exhibit X-ray variability, while \citet{wol05} find that solar-type stars in Orion spend $\sim$30\%\ of their time in an elevated state. This X-ray variability is a mix of rotational modulation \citep{fla05} and flares \citep{ste07}, with larger fluctuations among stars with disks \citep{fla12}.
Young stellar objects have been found to have very strong magnetic fields \citep{joh07,val04} that likely interact with the disk, although this interaction has been difficult to observe directly. The structure of the magnetic field can be very complex \citep[e.g.][]{don12,don11} and even among well studied stars it can be hard to characterize observationally \citep[e.g.][]{sch13}. Given the complex, but prominent, nature of the magnetic field, studying an indirect tracer like X-ray emission may shed light on its structure and how these structural changes affect the planet forming region in a large sample of young stellar objects. Contemporaneous changes in X-ray flux and infrared emission from the disk would point towards either a direct influence of the X-ray photons on the disk, or changes in magnetic field structure that affect both the high-energy flux and the emission from the disk. \citet{fla12} report a difference in the relative size of X-ray fluctuations between stars with disks and diskless systems. \citet{for07} did not find a direct correlation between X-ray and near-infrared variability in the Coronet cluster, but this survey was limited to one week of observations of a small handful of objects. The diversity in magnetic field and circumstellar disk properties among young stellar objects suggests that a small sample may not be able to capture the full range of interactions. By extending this type of analysis to a larger sample ($\sim$100 vs 10 objects) on longer timescales (40 days vs one week) at a wavelength that has less contamination from stellar photospheric emission (3-5\micron\ vs. 1-2\micron) we can broadly address the interactions of the dusty disk and the magnetic field/X-ray emission. To do this we obtained coordinated Chandra and Spitzer observations of the IC 348 cluster.
This 2-3 Myr old cluster contains $\sim$300 known cluster members, 50\%\ of which show an infrared excess indicative of circumstellar material \citep{lad06}. In Section 2 we present our observations, in Section 3 we search for correlated variability, and in Section 4 we discuss the implications of these results. \section{Data} \subsection{X-Ray Data Reduction} The field was observed by $Chandra$ 10 times between Oct 17, 2011 and Nov 17, 2011. The nominal 4-chip ACIS-I array was used in ``Very Faint'' mode. Each observation was about 10 ks and centered at 03:44:31.50 +32:08:33.70 (J2000). Roll was left nominal, so not all sources were in the field of view for all exposures (see Table~\ref{xray_log} for full details). The data were processed through the standard CIAO pipeline at the Chandra X-ray Center, using their software version D8.4. This version of the pipeline automatically incorporated a noise correction for low energy events. Background was nominal and non-variable. Since the focus was on the bright sources which may vary significantly, sources were identified in the individual frames. To identify point sources, photons with energies below 300 eV and above 8.0 keV were filtered out from the merged event list. This excluded energies which generally lack a stellar contribution. By filtering the data as described, contributions from hard, non-stellar sources such as X-ray binaries and AGN are attenuated, as is noise. A monochromatic exposure map was generated in the standard way using an energy of 1.49 keV, which is a reasonable match to the expected peak energy of the stellar sources and the $Chandra$ mirror transmission. The CIAO tool WavDetect was then run on a series of flux corrected images binned by 1, 2 and 4 pixels. The thresholds were set to limit false detections to about 1 per 100 detections.
The output resulted in the detection of between 119 and 139 sources (mean 135.5). At each source position an extraction ellipse was calculated following \citet{wol06}, updated for the appropriate roll. This provided an extraction ellipse containing 95\% of the source flux within the 0.3-8 keV range. For each of the sources, a background ellipse was identified. The background was an annular ellipse with the same center, eccentricity, and rotation as the source. The outer radius was 6 times the radius of the source and the inner radius was 3 times the radius of the source. From this region any nearby sources were subtracted with ellipses 3 times the size of the source ellipse. The net counts were calculated by subtracting the background counts (corrected for area) and multiplying the result by 1.053 to correct for the use of a 95\% encircled energy radius. \subsection{Spitzer} We also repeatedly observed IC 348 with the {\it Spitzer} Space Telescope using the IRAC instrument during the fall 2011 observing window (PID60160). The observing strategy was identical to that described in \citet{fla13}, and here we provide a brief summary. Both [3.6] and [4.5] photometry were obtained simultaneously over the course of 20 epochs from Oct 15, 2011 to Nov 23, 2011 (Table~\ref{obs_log}). The cadence was chosen to trace the daily to weekly fluctuations in infrared emission, with one observation every two days. The cluster was mapped in a 5 by 5 grid, with each grid point separated by 360" and the total field centered at 03:44:20 +32:03:01. Since the fields of view of the [3.6] and [4.5] arrays do not completely overlap, our final map covered by both bands was not a square and instead had a total area of roughly 0.4$^{\circ}$ by 0.25$^{\circ}$. Images were taken in HDR mode with a frame time of 12 seconds and 3 cycles at each position.
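The X-ray net-count calculation described above amounts to an area-scaled background subtraction followed by the 95\% encircled-energy correction. A minimal sketch, with illustrative count values rather than actual source data:

```python
# Area-scaled background subtraction with the 95% encircled-energy
# correction (factor 1.053 = 1/0.95). Count values are illustrative only.

def net_counts(src_counts, bkg_counts, src_area, bkg_area):
    """Background-subtracted source counts, corrected for the use of a
    95% encircled-energy extraction ellipse."""
    background_in_src = bkg_counts * (src_area / bkg_area)
    return (src_counts - background_in_src) * 1.053

# An annulus with outer radius 6x and inner radius 3x the source ellipse
# has 36 - 9 = 27 times the source area.
n_net = net_counts(src_counts=120.0, bkg_counts=90.0,
                   src_area=1.0, bkg_area=27.0)
```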
The IRAC data reduction pipeline is described in \citet{gut09} and \citet{mor12}, with updates appropriate for the warm Spitzer mission described in Gutermuth et al. in prep. \section{Results} \subsection{Demographics and selection of variables} As a nearby ($\sim$320 pc), young (2-3 Myr), relatively compact cluster, IC 348 has proven to be a valuable laboratory for studying young stellar object variability \citep{coh04,fla13}. With over 300 cluster members, only one of which is earlier than A0, this region of low mass star formation contains a wide variety of circumstellar structures from pre-natal envelopes to disks with many-AU wide gaps \citep{lad06}. In the infrared, almost 60\%\ of the sources with an infrared excess are variable at [3.6], [4.5] \citep{fla13}, while 89\%\ of the pre-main sequence stars brighter than I=14 are variable in the optical \citep{coh04} on timescales of weeks to months. The frequency of these fluctuations indicates that the majority of pre-main sequence stars experience variability on their stellar surface, probed by the optical data, and within an AU of the star, probed by the infrared data. In our search for correlated X-ray and infrared variability, we focus on cluster members that have the most complete infrared and X-ray time-series data. Cluster members, and their stellar properties (L$_*$, T$_{\rm eff}$, etc.), are drawn from \citet{luh03,lad06,mue07,luh05} as compiled by \citet{fla13}. There are a total of 133 cluster members detected on more than one X-ray epoch, with 107 detected on more than four epochs. We restrict ourselves to these 107 stars because they have enough epochs for studying variability. While we include X-ray fluxes from previous studies of IC 348 \citep{pre02,ale12}, our analysis focuses on the data that are contemporaneous with the infrared data. As a sub-sample of the cluster, the X-ray detected sources are incomplete below L=0.3L$_{\odot}$ and T$_{eff}$=3600 K (Figure~\ref{demographics}).
Within IC 348, the X-ray luminosity is typically 3 to 4 orders of magnitude lower than the bolometric luminosity \citep{pre02,ale12}. X-ray emission is known to be brighter among sources without active accretion \citep{ste01,fla03,pre05} and in our sample this will present itself as a detection efficiency that is a function of the infrared SED slope (defined as the logarithmic slope of $\lambda F_{\lambda}$ versus $\lambda$ over the 3-8\micron\ range) since the presence of a circumstellar disk as measured with the infrared excess is correlated with accretion. Using the dereddened flux, the SED slope varies from $\alpha_{IRAC}<-2.56$ for sources without a disk to $\alpha_{IRAC}>0$ for sources with an envelope and we use $\alpha_{IRAC}=-2.56$ as the boundary between a source with or without a circumstellar disk. \citet{ste12} find that within IC 348 the median X-ray luminosity among stars with disks is $\sim$0.3 dex lower than comparable stars without disks. Figure~\ref{demographics} shows the distribution of SED slope for X-ray detections compared with the entire sample, demonstrating our reduced sensitivity to sources with disks. For the majority of our analysis we restrict ourselves to the sources with a disk. The presence of infrared variability is assessed using the techniques described in \citet{fla13}. In short, we use the reduced chi-square statistic for each band, as well as the Stetson index $S$, which is sensitive to correlated variability in the two IRAC bands. The use of the Stetson index does require us to restrict the analysis to sources detected in both bands, but the added sensitivity of this index compared to the reduced chi-square offsets the reduction in sample size. A star is marked as an infrared variable if $\chi^2_{\nu}([3.6])>3$ or $\chi^2_{\nu}([4.5])>3$ or $S>0.45$.
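The two selection metrics can be sketched as follows. This is an illustration using the common two-band form of the Stetson index, which may differ in normalization details from the implementation of the cited analysis; the light curves are synthetic.

```python
import numpy as np

# Sketch of the two variability metrics: a per-band reduced chi-square
# about the weighted mean, and a two-band Stetson index sensitive to
# correlated deviations in the two bands.

def reduced_chi2(mag, err):
    """Chi-square per degree of freedom about the weighted mean magnitude."""
    w = 1.0 / err**2
    mean = np.sum(w * mag) / np.sum(w)
    return np.sum(((mag - mean) / err) ** 2) / (len(mag) - 1)

def stetson_index(mag1, err1, mag2, err2):
    """Two-band Stetson index S from paired normalized residuals."""
    n = len(mag1)
    scale = np.sqrt(n / (n - 1.0))
    d1 = scale * (mag1 - mag1.mean()) / err1
    d2 = scale * (mag2 - mag2.mean()) / err2
    p = d1 * d2
    return np.sum(np.sign(p) * np.sqrt(np.abs(p))) / n

# A correlated 0.1 mag sinusoid in both bands with 0.01 mag noise is
# easily flagged by these criteria.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 40.0, 20)
signal = 0.1 * np.sin(2.0 * np.pi * t / 10.0)
m36 = signal + 0.01 * rng.standard_normal(20)
m45 = signal + 0.01 * rng.standard_normal(20)
err = np.full(20, 0.01)
is_variable = (reduced_chi2(m36, err) > 3 or reduced_chi2(m45, err) > 3
               or stetson_index(m36, err, m45, err) > 0.45)
```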
Based on prior analysis, we are sensitive to $>99\%$ of fluctuations down to 0.04 mag for sources brighter than 14th magnitude \citep{fla13}; this sensitivity limit is relatively independent of source brightness because most of the uncertainty is due to systematic effects for these bright targets \citep{cod14}. While Spitzer monitoring data are available from multiple observing campaigns, we focus on the contemporaneous fall 2011 data when assessing variability since we are mainly concerned with the behavior during the same time frame as any X-ray fluctuations. The vast majority of stars variable in one season are also variable in other seasons, and our results would not significantly change if we included the entire dataset. Since the 3-5$\micron$ emission probed by this photometry is dominated by the inner edge of the dusty disk \citep{mcc13}, we are not sensitive to the dynamics of the regions of the disk beyond this point. Given that we are mainly concerned with the change in X-ray flux, rather than its overall level, we do not convert photon fluxes to luminosities, which requires assumptions about the distance to the cluster and the underlying X-ray spectrum. While the arrival time of each photon is recorded, we instead use the photon flux averaged over each 10 ks observing run, with each epoch comprising one point on our light curve. We chose to do this to increase sensitivity at the expense of knowledge about the X-ray variability on very short timescales. We restrict ourselves to sources detected on at least four X-ray epochs. Photon fluxes range from $10^{-6}$ to $10^{-4}$ photons/s/cm$^2$, or 1 to 1200 net photons per 10 ks block. We can roughly estimate the X-ray luminosities by assuming an average photon energy of 1.5 keV, to convert from photon flux to energy flux, and a distance of 320 pc; we find luminosities range from $\sim$10$^{28}$ to $\sim$10$^{31}$ erg/sec with a typical luminosity of $\sim$10$^{29}$ erg/sec, similar to previous studies \citep{ale12,ste12}.
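The rough luminosity estimate above is a simple unit conversion; a sketch under the stated assumptions (1.5 keV mean photon energy, 320 pc distance):

```python
import math

# Rough photon-flux-to-luminosity conversion: L_X = F_photon * <E> * 4 pi d^2.
KEV_TO_ERG = 1.602e-9   # erg per keV
PC_TO_CM = 3.086e18     # cm per parsec

def photon_flux_to_lx(photon_flux, mean_energy_kev=1.5, distance_pc=320.0):
    """X-ray luminosity [erg/s] from a photon flux [photons/s/cm^2]."""
    energy_flux = photon_flux * mean_energy_kev * KEV_TO_ERG  # erg/s/cm^2
    d_cm = distance_pc * PC_TO_CM
    return energy_flux * 4.0 * math.pi * d_cm**2

# A mid-range source at 1e-5 photons/s/cm^2 comes out near 3e29 erg/s,
# consistent with the typical luminosity quoted in the text.
lx = photon_flux_to_lx(1e-5)
```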
Our final sample consists of 39 stars (Table~\ref{var_table}) with circumstellar disks with sufficient X-ray photometry to examine variability, all of which were detected in the infrared. Figure~\ref{lc_examples} shows examples of light curves for stars with and without X-ray and infrared variability to demonstrate the fidelity of our data. Our sensitivity to infrared variability is fairly uniform over our entire sample because the uncertainties are dominated by systematics for the vast majority of sources. This is not the case for the X-ray emission, whose uncertainties are dominated by Poisson statistics and scale with the square root of the flux. This leads to substantially nonuniform sensitivities throughout our sample. As with the infrared flux we use the reduced chi-square to select variable X-ray emission, using $\chi^2_{\nu}=3$ as the boundary between variables and non-variables. This picks out variations that are significant relative to the noise, which varies substantially between the brightest and the faintest sources. As a result, the size of the fluctuations selected as variable will strongly depend on the mean X-ray flux. On average this $\chi^2_{\nu}$ boundary corresponds to a factor of 2 increase in the flux relative to the mean, with fluctuations reaching close to a factor of 10. Despite the non-continuous nature of our observations and the use of different criteria for selecting variables as compared to previous studies, we pick out stars with similar fluctuations. \citet{fla12} find that among stars with disks, on timescales of a week, fluctuations are typically a factor of 3, similar to our results. Overall our sample consists of 39 sources with (1) detections on four or more X-ray epochs, (2) detections in both IRAC bands on all epochs, and (3) an infrared excess indicative of a disk. Among these 39 stars, 31 have significant fluctuations in the infrared, while 28 have detectable X-ray variability.
Our observing cadence was designed to probe long term effects of X-ray fluctuations on disk emission, at the expense of knowledge about any short, instantaneous changes in disk emission. While very rapid changes in infrared emission have been observed \citep{sta14}, week to month long variations make up a substantial fraction of the infrared variability \citep{fla13,cod14}. The time between an X-ray epoch and the nearest infrared observation ranges from 0.1 to 2.1 days (12-180 ks) and we are completely insensitive to changes in infrared emission on timescales shorter than this. We instead focus on responses that cover more than 2 infrared epochs ($>$300 ks = 4 days). This caveat should be kept in mind when interpreting our results, especially as they apply to physical connections between X-ray and infrared variability. \subsection{The lack of correlation between X-ray and Infrared variability} We first look at the 28 stars with disks that are variable in both the X-ray and infrared to see if the light curves themselves are correlated. This is assessed using the Kendall's tau and Spearman's rho statistics. Both of these coefficients serve as non-parametric tests of whether increases or decreases in the X-ray are mirrored in the infrared. We find no significant correlation between the X-ray and infrared light curves, suggesting that there is no strong link between the two forms of variability. We also find that the size of the infrared fluctuations is not correlated with the size of the X-ray fluctuations among these variable stars. \citet{fla10} find correlated variations in optical flux and soft, but not hard, X-ray flux of CTTS in NGC 2264 between observations separated by two weeks, suggesting that there is a difference between the behavior of the soft and hard X-ray emission. We examine how the median X-ray photon energy changes over the course of the light curve to see if it changes in concert with the infrared emission.
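The rank-correlation tests used above can be reproduced with standard routines; the light curves below are made-up numbers chosen to be perfectly anti-correlated, with the source brightening in the infrared (magnitude decreasing) whenever the X-ray flux rises.

```python
import numpy as np
from scipy import stats

# Kendall's tau and Spearman's rho via scipy, on illustrative light curves
# sampled at the same epochs (not actual IC 348 data).
xray = np.array([1.0, 2.0, 3.5, 4.0, 6.0])    # photon fluxes (arbitrary units)
ir = np.array([10.2, 10.1, 9.9, 9.8, 9.5])    # [3.6] magnitudes

tau, p_tau = stats.kendalltau(xray, ir)       # tau = -1 for this monotone pair
rho, p_rho = stats.spearmanr(xray, ir)        # rho = -1 as well
```

For real light curves the corresponding p-values indicate whether any apparent correlation is significant.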
If the soft X-ray emission tracks the infrared emission while the hard X-ray flux is constant, then the median photon energy will decrease as the infrared emission increases. Given that we typically detect only 24 photons per source per 10 ks block, we use the median energy of all detected photons instead of just the soft X-ray photons to reduce the errors and maximize our sensitivity to fluctuations. Using both the Kendall's tau and Spearman's rho metrics, we do not find a correlation between the median photon energy and the infrared flux in any of our disk-bearing targets. We also find no significant correlation in the size of the infrared variability and the size of the variability in median photon energy. For the two brightest X-ray sources, LRLL 2 and 6, for which we measure $\sim$400 photons per 10 ks, there is some evidence for a change in median photon energy, but this change does not track with the infrared emission. The Kendall's tau and Spearman's rho statistics are effective at selecting correlated variations when there is no time-delay between the infrared and X-ray fluctuations and if the light curve is monotonic. \citet{fla05} find that roughly a tenth of the stars in Orion with known optical periods have detectable X-ray periods, and any corresponding, but phase shifted, variability in the infrared would be missed by the previous metrics. To account for a time delay between the two light curves we employ the cross-correlation function, defined as: \begin{equation} CCF(\tau)=\frac{1}{N}\Sigma^N_{i=1}\frac{[x(t_i)-\bar{x}][y(t_i-\tau)-\bar{y}]}{\sigma_x\sigma_y} \end{equation} where $x(t_i)$ and $y(t_i)$ represent the X-ray and infrared light curves, $\bar{x}$, $\sigma_x$, $\bar{y}$, $\sigma_y$ are the respective means and standard deviations of the two light curves and $\tau$ is the time lag between the two light curves \citep{gas87}. This effectively shifts the infrared light curve by $\tau$ days and calculates how well the two light curves match after this shift.
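The discrete CCF above can be implemented directly. The sketch below uses an integer epoch shift in place of the interpolation used in the actual analysis, and two synthetic light curves in which an event appears in the infrared three epochs after the X-ray, so the CCF peaks at a negative lag (X-ray leading).

```python
import numpy as np

# Normalized cross-correlation at an integer epoch lag: positive lag
# compares x(t) with y(t - lag), following the equation in the text.

def ccf(x, y, lag):
    """CCF of x (X-ray) and y (infrared) at an integer lag, truncating
    the non-overlapping ends of the two series."""
    if lag > 0:
        xs, ys = x[lag:], y[:-lag]
    elif lag < 0:
        xs, ys = x[:lag], y[-lag:]
    else:
        xs, ys = x, y
    return np.mean((xs - x.mean()) * (ys - y.mean())) / (x.std() * y.std())

t = np.arange(60, dtype=float)
xray = np.exp(-(t - 25.0) ** 2 / 18.0)   # synthetic X-ray event at epoch 25
ir = np.exp(-(t - 28.0) ** 2 / 18.0)     # same event seen 3 epochs later in the IR
best_lag = max(range(-10, 11), key=lambda lag: ccf(xray, ir, lag))  # peaks at -3
```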
A positive value of $\tau$ implies that the IR light curve leads the X-ray light curve, while a negative lag corresponds to the X-ray light curve leading the IR light curve. Such an effect might be expected if there is an appreciable time delay between the occurrence of an X-ray fluctuation and the response of the disk, or if the X-ray and infrared flux show a rotational modulation that is out of phase. Given that the data are not continuously sampled, we interpolate the more heavily sampled infrared light curve onto the X-ray light curve in our analysis. We assess the uncertainty in the CCF at different lags by computing the CCF of a particular X-ray light curve with infrared light curves from 100 random cluster members, and use the dispersion in the value of the CCF at each lag as its uncertainty. Figure~\ref{ccf} shows our results for LRLL 5 and 58 as examples. When examining all 28 cluster members with variability in both the X-ray and infrared, we find no evidence of a strong cross-correlation signal between a lag of -20 and 20 days, with our strongest sensitivity to lags between -10 and 10 days. Again there is no strong evidence for a direct connection between the infrared and X-ray variability. The CCF, while effective at picking out light curves with a time lag, breaks down in the presence of flares. \citet{ste07} find that roughly a quarter of stars within Taurus exhibit flares, with these flaring sources making up half of the variable sample. While these outlier points within a light curve can easily disrupt any underlying correlation, they could also be a source of changes in infrared emission if the flare is powerful enough. We define a source as having a flare if it is variable in the X-ray ($\chi^2_{\nu}>3$) and it has one epoch whose flux is more than three times the median flux. We can also look for X-ray flares within a single 10 ks epoch among the brightest sources since each photon is tracked when taking X-ray observations.
Only one source (LRLL 36) shows evidence for an X-ray flare during one of these epochs, and this star has been included in our analysis. There are a total of 13 cluster members with an X-ray flare, all but one of which is variable in the infrared. In none of these stars do we see strong evidence for an extended change in the IR flux following the X-ray flare. Figure~\ref{flare} shows the X-ray and infrared light curves of one star with an X-ray flare, demonstrating the lack of response in the infrared light curve. If the infrared flux is changing before the flare, it continues along its course after the flare. We examine in detail the change in IR flux associated with an X-ray flare by calculating the change in [3.6] and [4.5] magnitude between the two epochs (i.e. about 2 days) surrounding the X-ray flare and comparing this to all consecutive epoch magnitude changes in the infrared light curve. Figure~\ref{flare} shows an example of this analysis. This allows us to quantify the infrared fluctuation closest to the X-ray flare and how it compares to the fluctuations seen at other times. In none of the light curves do we see evidence that the IR fluctuation near the X-ray flare is substantially different than what is seen in any other part of the light curve. This is consistent with our visual inspection, which found no evidence for a change in infrared flux in response to an X-ray flare. Again we note that given our observing cadence we are severely limited in assessing the immediate response of the infrared flux to an X-ray flare. The time between an X-ray epoch and an infrared epoch ranges from 0.1 to 2.1 days (12-180 ks). The majority of X-ray flares have decay times less than 60 ks with an average decay time of 10 ks \citep{ste07}, and almost all X-ray flares would have decayed away by the time the system was observed in the infrared.
YSOs typically exhibit one flare per star per 500 ks, with the time between bright flares (L$_X>10^{32}$ erg/sec) a factor of a few longer than this \citep{ste07}. \citet{wol05} find similar results for solar-type stars, with roughly one flare per star per 650 ks. A theory for the week to month long infrared variability that relies solely on X-ray flares would require the infrared response to last at least $\sim$500 ks ($=5.8$ days) to extend the infrared variability throughout the quiescent period between flares. Even with our limited cadence we can exclude these types of large, long infrared responses. \section{Physical Mechanisms} We find no direct connection between X-ray fluctuations and infrared variability. There is no clear correlation between the shapes of the infrared and X-ray light curves and there are no large, long-term changes in infrared emission following X-ray flares. This is consistent with the lack of correlated X-ray/infrared variability seen in the Coronet cluster \citep{for07}. Given the cadence of our observations we cannot comment on short (less than a few days), immediate responses of the disk to the changing X-ray flux; we can only constrain the type of interactions that lead to sustained week-long fluctuations. This caveat should be kept in mind in the context of the different physical situations that we discuss below. Interpreting this result depends on how X-ray variability could be physically connected to changes in disk structure. Our [3.6], [4.5] infrared observations are dominated by emission from the inner edge of the disk, which is made up of a slightly rounded wall of dust a few tenths of an AU from the star at the boundary where the dust becomes hot enough to sublimate \citep{kam09}.
There are three main possibilities connecting X-ray variability to changes in the structure of this wall, which we will consider in turn: (1) heating of dust by X-ray photons, (2) ionization of dust by X-ray photons, and (3) interactions between the circumstellar disk and the hot plasma created during the flare. {\it X-ray Heating: } A rapid increase in the stellar flux illuminating the disk would lead to an almost instantaneous change in the temperature of the disk surface layers \citep{chi97}, causing the infrared emission to increase. Our observations show no evidence for a long-term change in infrared emission in response to changes in X-ray flux, suggesting that heating by X-rays is not an important factor in setting the dust temperature. This is not surprising since models predict that much of the X-ray flux will be absorbed by gas in the tenuous upper layers of the disk \citep{gla04,are11} and dust emission models are generally able to reproduce circumstellar disk emission in a wide range of stars without including illumination from stellar X-rays \citep{dal06}. In IC 348 the ratio of X-ray luminosity to bolometric luminosity ranges from 10$^{-3}-10^{-4.5}$ \citep{ste12}, again suggesting that X-ray heating does not contribute significantly to the dust temperature. {\it X-ray Ionization: } As with heating, an increase in X-ray flux may increase the ionization fraction of the dust through increased collisions with electrons and ions \citep{tie05}, leading to observable dynamical effects due to interactions between this ionized material and the magnetic field. Even if X-ray heating is minimal, the increased depth at which X-ray ionization can penetrate the disk \citep{ige99} leaves open another avenue for X-ray photons to change the dust structure. \citet{ke12} provide detailed models of such a scenario and find that a large flare (L$_X\sim10^{32}$ erg/s) that lasts for a day (=86 ks) can substantially change the height of the inner disk.
This effect was predicted to decay over the course of a week, leading to extended perturbations to the infrared emission. \citet{fav05} find that 8 out of 32 flares have L$_X>10^{32}$ erg/s, while \citet{wol05} find similar numbers for the occurrence of large flares. In our sample we find no strong evidence for an extended response to an X-ray flare, although we do not observe any flares as large as those employed in these models and cannot comment on any immediate response to such outbursts. {\it High-temperature plasma: } X-ray flares are associated with magnetic reconnection events injecting magnetic energy into the upper reaches of the stellar atmosphere \citep{ben10}. This energy creates a $\sim$100 MK plasma that expands into the surrounding coronal loop, injecting thermal energy into the area surrounding the star and decaying on a timescale that is proportional to the size of the magnetic loop \citep{fav05}. Based on detailed modeling of the decay time and emission properties of X-ray flares in Orion, \citet{fav05} find that 9 out of 22 flares reach sizes larger than 5R$_*$ and could potentially interact with the disk in systems with very close-in dust. \citet{orl11} model a different type of flare that originates at the interface between the disk and the stellar magnetic field, which provides a direct interaction between the flare and the disk structure. In this model the disk height can change by a factor of two in response to an L$_X\approx10^{32}$ erg/s flare. We see no evidence for long-term reactions to X-ray flares, suggesting that these types of interactions do not drive the majority of the observed infrared variability. \section{A brief discussion of variability among diskless sources} As with the disked sources, in the diskless stars we see no evidence for a correlation between the X-ray and infrared variability. Without excess emission, the infrared flux in the diskless stars traces the stellar flux rather than the dust in the inner disk.
Stellar variability in diskless stars is dominated by the rotation of star spots, which may be located at sites of amplified magnetic field strength. We see no response in the infrared light curve following X-ray flares, similar to the systems with protoplanetary disks. Analysis with the cross-correlation function, as well as with Kendall's tau and Spearman's rho statistics, confirms the lack of similarity in light curve shapes. Compared to stars with disks, diskless stars show weaker fluctuations in the infrared \citep{fla13} and in the X-ray \citep{fla12}. Previous studies have found that while X-ray and optical periods are sometimes similar \citep{fla05}, there is often not a direct correlation between the X-ray and optical fluctuations \citep{sta07,fla10}. \citet{sta07} conclude that even though X-ray production is connected to the presence of a magnetically active, spotted surface, as evidenced by the connection between optical variability and X-ray luminosity, the regions of X-ray production and the star spots are not spatially coincident. \section{Conclusion} We find no evidence for an increased frequency of infrared variability among stars with X-ray variability. We see no correlation in the shape of the light curves in stars that exhibit variability in both bands; this is especially true for the stars with moderate X-ray flares, for which we see no evidence of a change in infrared flux in response to the flare. We put limits on the relation between X-ray variability and the week- to month-long infrared variability of dust at the inner edge of the disk that is common among young stellar objects. Based on the observational evidence, the influence of X-ray heating on the temperature of the dust close to the star is relatively minor, which in turn has implications for the overall structure of the inner edge of the disk.
Below z/r$\sim$0.1 the densities are high enough that the gas and dust temperatures are strongly coupled through collisions \citep{gla04}, allowing the dust temperature to set the gas temperature. The gas temperature in turn sets the pressure scale height of the disk, which is the basis of the vertical structure of the disk in a region where planet formation is expected to occur. If the X-rays cannot influence the dust thermal structure, then they will not influence the overall structure in this area of the disk. Compared to FUV and EUV photons, X-rays have a higher penetration depth because the opacity of the gas decreases towards higher energies, and if X-ray photons cannot reach the dusty layers, then FUV and EUV photons contribute even less to the dust temperature. The dust temperature is instead set by a combination of viscous dissipation and irradiation by stellar optical emission \citep{dal99}, at least until the disk is significantly depleted \citep{ski13}. Beyond $\sim$1 AU, which is not probed by our infrared photometry, the X-rays may still have a strong influence on disk structure through e.g. MRI turbulence \citep{ige99}. In terms of constraining the origin of the long-term infrared fluctuations, we can rule out X-ray ionization, as well as interactions between the X-ray emitting plasma and the dust at the sublimation front close to the star, as explanations for the majority of the observed infrared variability. These factors may come into play on short timescales for a small handful of stars, or at levels below our detection limits, but generally they are not important in setting disk structure.
More likely possibilities include perturbations related to accretion heating \citep{fla13}, MRI-driven turbulence lifting dust out of the midplane \citep{tur10}, or perturbations from a companion \citep{fra10,nag12,ata13}, with observations pointing toward accretion bursts, variable obscuration of the central star, and changes in the inner disk structure \citep{cod14,sta14,wol13}. \acknowledgements We would like to thank the anonymous referee for their substantive comments that greatly improved this paper. The scientific results reported in this article are based in part on observations made by the Chandra X-ray Observatory. Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number G02-13028A issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for the work was provided by NASA through an award issued by JPL/Caltech. RG gratefully acknowledges funding support from NASA ADAP grants NNX11AD14G and NNX13AF08G and Caltech/JPL awards 1373081, 142329 and 1440160 in support of Spitzer Space Telescope observing programs.
\section{Introduction}\label{sec1} Ensuring the safety of control systems has received significant attention in the past two decades due to the increasing number of safety-critical real-life applications, such as unmanned aerial vehicles and autonomous transportation. When models of these applications are available, various model-based techniques can be applied for synthesizing safety controllers, see e.g.,~\cite{Reissig2017Feedback,Ames2016Control,Rungger2017Computing}, to name a few. Nevertheless, obtaining an accurate model requires a significant amount of effort~\cite{Hou2013model}, and even if a model is available, it may be too complex to be of any use. Such difficulties motivate researchers to enter the realm of data-driven control methods. In this paper, we focus on data-driven methods for constructing safety controllers, which enforce invariance properties over unknown linear systems affected by disturbances (i.e., systems are expected to stay within a safe set). In general, data-driven control methods can be classified into indirect and direct approaches. \emph{Indirect data-driven approaches} consist of a system identification phase followed by a model-based controller synthesis scheme. To achieve a rigorous safety guarantee, it is crucial to provide an upper bound for the error between the identified model and the real but unknown model (a.k.a. \emph{identification error}). Among different system identification approaches,~\emph{least-squares methods} (see e.g.~\cite{Ljung1999System}) are frequently used for identifying linear models. In this case, \emph{sharp error bounds}~\cite{Simchowitz2018Learning} relate the identification error to the cardinality of the finite data set which is used for the identification task. Computation of such bounds requires knowledge about the distributions of the disturbances (typically i.i.d. Gaussian or sub-Gaussian, see e.g.~\cite{Matni2019tutorial,Matni2019self}, and references therein).
Therefore, computation of these bounds is challenging when dealing with~\emph{unknown-but-bounded} disturbances~\cite{Bisoffi2021Trade}, i.e., the disturbances are only assumed to be contained within a given bounded set, but their distributions are fully unknown. Note that~\emph{set-membership identification approaches}~(see e.g.~\cite{Lauricella2020Set,Cerone2016Mimo}) can be applied to identify linear control systems with unknown-but-bounded disturbances. Nevertheless, it is still an open problem to provide an upper bound for the identification error when unknown-but-bounded disturbances are involved. Different from indirect data-driven approaches,~\emph{direct data-driven approaches} directly map data into the controller parameters without any intermediate identification phase. For systems that are not affected by exogenous disturbances, results in~\cite{DePersis2019Formulas} propose a data-driven framework to solve linear quadratic regulation (LQR) problems for linear systems. Later on, similar ideas were utilized to design model-reference controllers (see~\cite[Section 2]{Breschi2021Direct}) for linear systems~\cite{Breschi2021Direct}, and to stabilize polynomial systems~\cite{Guo2021Data}, switched linear systems~\cite{Rotulo2021Online}, and linear time-varying systems~\cite{Nortmann2020Data}. When exogenous disturbances are also involved in the system dynamics, recent results, e.g.,~\cite{DePersis2021Low,Berberich2020Robust,Berberich2020Combining,Waarde2020noisy}, can be applied to LQR problems and robust controller design. However, none of these results considers state and input constraints. Hence, they cannot be leveraged to enforce invariance properties.
When input constraints are considered, results in~\cite{Bisoffi2020Data,Bisoffi2020Controller} provide data-driven approaches for constructing state-feedback controllers to make a given \emph{C-polytope} (i.e., a \emph{compact} polyhedral set containing the origin~\cite[Definition 3.10]{Blanchini2015Set}) robustly invariant (see~\cite[Problem 1]{Bisoffi2020Controller}). However, when such controllers do not exist for the given C-polytope, one may still be able to find controllers making a subset of this polytope robustly invariant, which is not considered in~\cite{Bisoffi2020Data,Bisoffi2020Controller}. Additionally, the approaches in~\cite{Bisoffi2020Data,Bisoffi2020Controller} require an individual constraint for each vertex of the polytope (see~\cite[Section 4]{Bisoffi2020Data} and~\cite[Theorems 1 and 2]{Bisoffi2020Controller}). Unfortunately, given any arbitrary polytope, the number of its vertices grows, in the worst case, exponentially with respect to its dimension and the number of hyperplanes defining it~\cite[Section 1]{Dyer1983complexity}. In this paper, we focus on enforcing invariance properties over unknown linear systems affected by unknown-but-bounded disturbances. In particular, we propose a direct data-driven approach for designing safety controllers against these properties. To this end, we first propose so-called $\gamma$-robust safety invariant ($\gamma$-RSI) sets and their associated state-feedback controllers enforcing invariance properties modeled by (possibly \emph{unbounded}) polyhedral safety sets. Then, we propose a data-driven approach for computing such sets, in which the numbers of constraints and optimization variables grow linearly with respect to the number of hyperplanes defining the safety set and the cardinality of the finite data set.
Moreover, we also discuss the relation between our data-driven approach and the condition of~\emph{persistency of excitation}~\cite{Willems2005note}, which is a crucial concept in most of the literature on direct data-driven approaches. The remainder of this paper is structured as follows. In Section~\ref{sec2}, we provide preliminary discussions on notations, models, and the underlying problems to be tackled. Then, we propose in Section~\ref{sec3} the main results for the data-driven approach. Finally, we apply our methods to a 4-dimensional inverted pendulum in Section~\ref{sec4} and conclude our results in Section~\ref{sec5}. For a streamlined presentation, the proofs of all results in this paper are provided in the Appendix. \section{Preliminaries and Problem Formulation}\label{sec2} \subsection{Notations} We use $\mathbb{R}$ and $\mathbb{N}$ to denote the sets of real and natural numbers, respectively. These symbols are annotated with subscripts to restrict the sets in the usual way, e.g., $\mathbb{R}_{\geq0}$ denotes the set of non-negative real numbers. Moreover, $\mathbb{R}^{n\times m}$ with $n,m\in \mathbb{N}_{\geq 1}$ denotes the vector space of real matrices with $n$ rows and $m$ columns. For $a,b\in\mathbb{R}$ (resp. $a,b\in\mathbb{N}$) with $a\leq b$, the closed, open and half-open intervals in $\mathbb{R}$ (resp. $\mathbb{N}$) are denoted by $[a,b]$, $(a,b)$, $[a,b)$ and $(a,b]$, respectively. We denote by $\mathbf{0}_{n\times m}$ and $\mathbf{I}_n$ the zero matrix in $\mathbb{R}^{n\times m}$ and the identity matrix in $\mathbb{R}^{n\times n}$, respectively. Their indices are omitted if the dimension is clear from the context. Given $N$ vectors $x_i \in \mathbb R^{n_i}$, $n_i\in \mathbb N_{\ge 1}$, and $i\in\{1,\ldots,N\}$, we use $x = [x_1;\ldots;x_N]$ to denote the corresponding column vector of dimension $\sum_i n_i$.
Given a matrix $M$, we denote by $\text{rank}(M)$, $\text{det}(M)$, $M^\top$, $M(i)$, and $M(i,j)$ the rank, the determinant, the transpose, the $i$-th column, and the entry in the $i$-th row and $j$-th column of $M$, respectively. \subsection{System} In this paper, we focus on discrete-time linear control systems defined as \begin{equation}\label{eq:linear_subsys} x(k+1) = Ax(k) + Bu(k) + d(k), \quad k\in\mathbb N, \end{equation} with $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{n\times m}$ being some unknown constant matrices; $x(k)\in X$ and $u(k)\in U$, $\forall k\in\mathbb{N}$, being the state and the input vectors, respectively, in which $X\subseteq \mathbb{R}^n$ is the state set, \begin{align} U = \{u \in \mathbb{R}^m|b_ju\leq 1,j = 1,\ldots,\mathsf{j}\}\subset \mathbb{R}^m, \label{input_set} \end{align} is the input set of the system, with $b_j\in \mathbb{R}^m$ being some known vectors; $d(k)$ denotes the exogenous disturbances, where $d(k)\in \Delta(\gamma)$, $\forall k\in \mathbb{N}$, with \begin{align} \Delta(\gamma) = \{d\in\mathbb{R}^n | d^\top d\leq\gamma,\gamma\in \mathbb{R}_{\geq 0} \}\label{eq:disturbance_set}. \end{align} Note that disturbances of the form of~\eqref{eq:disturbance_set} are also known as~\emph{unknown-but-bounded disturbances with instantaneous constraints}~\cite{Bisoffi2021Trade}, with $\gamma$ being the disturbance bound that is assumed to be known a priori. Finally, we denote by \begin{align} X_{1,N} &:= \begin{bmatrix}x(1)&x(2)&\ldots&x(N)\end{bmatrix},\label{eq:state_seq1}\\ X_{0,N} &:= \begin{bmatrix}x(0)&x(1)&\ldots&x(N-1)\end{bmatrix},\label{eq:state_seq}\\ U_{0,N}&:=\begin{bmatrix}u(0) &u(1) & \ldots & u(N-1)\end{bmatrix},\label{eq:inputseqm} \end{align} the data collected offline, with $N\!\in\! \mathbb{N}$, in which $x(0)$ and $U_{0,N}$ are chosen by the users, while the rest are obtained by observing the state sequence generated by the system in~\eqref{eq:linear_subsys}.
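For concreteness, the data matrices in~\eqref{eq:state_seq1}-\eqref{eq:inputseqm} can be assembled by simulating a single trajectory of~\eqref{eq:linear_subsys}. A minimal sketch (assuming numpy; the 2-state system, the disturbance generation, and all numerical values are illustrative placeholders, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 1-input system, used only for illustration.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
gamma = 1e-4  # disturbance bound: d^T d <= gamma

def collect_data(A, B, x0, U, gamma, rng):
    """Simulate x(k+1) = A x(k) + B u(k) + d(k) with d(k) in Delta(gamma),
    and return the data matrices X_{1,N} and X_{0,N}."""
    n, N = A.shape[0], U.shape[1]
    X = np.zeros((n, N + 1))
    X[:, 0] = x0
    for k in range(N):
        v = rng.standard_normal(n)
        d = np.sqrt(gamma) * v / max(np.linalg.norm(v), 1.0)  # keeps d^T d <= gamma
        X[:, k + 1] = A @ X[:, k] + B @ U[:, k] + d
    return X[:, 1:], X[:, :-1]  # X_{1,N}, X_{0,N}

N = 10
U = rng.standard_normal((1, N))  # U_{0,N}, chosen by the user
X1, X0 = collect_data(A, B, np.zeros(2), U, gamma, rng)
```

Each column of $X_{1,N}$ then differs from the corresponding $AX_{0,N}(k)+BU_{0,N}(k)$ by a disturbance of Euclidean norm at most $\sqrt{\gamma}$.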
\subsection{Problem Formulation} In this paper, we are interested in invariance properties, which can be modeled by (possibly unbounded) safety sets defined as \begin{align} S := \{x\in\mathbb{R}^n| a_ix\leq 1, i = 1,\ldots,\mathsf{i}\}\subset X\label{safety_set}, \end{align} where $a_i\in \mathbb{R}^{n}$ are some known vectors. The main problem in this paper is formulated as follows. \begin{problem}\label{prob} Consider a linear control system as in~\eqref{eq:linear_subsys}, where matrices $A$ and $B$ are unknown, with input set as in~\eqref{input_set}, and safety set as in~\eqref{safety_set}. Using data in~\eqref{eq:state_seq1}--\eqref{eq:inputseqm}, design a \emph{safety envelope} $\bar{\mathcal{S}}\subseteq S$ along with a \emph{safety controller} $u=Kx$ (if one exists) such that $x(k)\in \bar{\mathcal{S}}$, $\forall k\in\mathbb{N}_{>0}$, if $x(0)\in\bar{\mathcal{S}}$. \end{problem} \section{Main Results}\label{sec3} \subsection{$\gamma$-Robust Safety Invariant Set} In this subsection, we propose the computation of $\gamma$-robust safety invariant ($\gamma$-RSI) sets assuming matrices $A$ and $B$ in~\eqref{eq:linear_subsys} are known. These sets will later be employed as safety envelopes as defined in Problem~\ref{prob}. Then, we utilize these results in the next subsection to provide the main direct data-driven approach to solve Problem~\ref{prob}. First, we present the definition of $\gamma$-RSI sets as follows. \begin{definition}\label{def:RSI} ($\gamma$-RSI set) Consider a linear control system as in~\eqref{eq:linear_subsys}.
A $\gamma$-RSI set $\mathcal{S}$ with respect to a safety set $S$ as in~\eqref{safety_set} is defined as \begin{equation} \mathcal{S}:=\{x\in\mathbb{R}^n|x^\top Px\leq 1 \}\subset S,\label{eq:safety_set} \end{equation} such that $\forall x\in \mathcal{S}$, one has $Ax + Bu + d\in \mathcal{S}$, $\forall d\in\Delta(\gamma)$, when the \emph{RSI-based controller} \begin{equation} u=Kx,\label{eq:safety_controller} \end{equation} associated with $\mathcal{S}$ is applied in the closed-loop, where $P\in \mathbb{R}^{n \times n}$ is a positive-definite matrix, and $K\in\mathbb{R}^{m\times n}$. \end{definition} With this definition, we present the next straightforward result for Problem~\ref{prob}, which can readily be verified according to Definition~\ref{def:RSI}. \begin{theorem}\label{thm:solveprob} Consider a system as in~\eqref{eq:linear_subsys}. If there exists a $\gamma$-RSI set $\mathcal{S}$ as in~\eqref{eq:safety_set}, then one has $x(k)\in \mathcal{S}$, $\forall k \!\in\!\mathbb{N}_{>0}$, when the RSI-based controller as in~\eqref{eq:safety_controller} associated with $\mathcal{S}$ is applied in the closed-loop, and $x(0)\in\mathcal{S}$. \end{theorem} \begin{remark} In this work, we focus on computing elliptical-type $\gamma$-RSI sets to solve Problem~\ref{prob}, while computing $\gamma$-RSI sets of more general forms, e.g., polyhedral-type sets, is left to future investigations. One of the difficulties of computing polyhedral-type $\gamma$-RSI sets is to cast the volume of a polyhedral set as a convex objective function~\cite[Section 2]{Khachiyan1993Complexity}, which is done easily in the elliptical case (cf. Remark~\ref{objective}). Additionally, consider an $\mathsf{n}$-dimensional polytope $\mathcal{P}\subseteq \mathbb{R}^{\mathsf{n}}$, which is defined by $\mathsf{m}$ hyperplanes.
The model-based approaches (see e.g.~\cite{Blanchini1990Feedback}) require an individual constraint for each vertex of $\mathcal{P}$ for synthesizing controllers that make $\mathcal{P}$ a $\gamma$-RSI set. Therefore, we suspect that the exponential growth in the number of vertices with respect to $\mathsf{n}$ and $\mathsf{m}$~\cite[Section 1]{Dyer1983complexity} could also be a burden for extending our data-driven approach to polyhedral-type $\gamma$-RSI sets. \end{remark} Given Theorem~\ref{thm:solveprob}, the remaining question is how to compute $\gamma$-RSI sets. To do so, we need the following result. \begin{theorem}\label{thm:LMI_1} Consider a system as in~\eqref{eq:linear_subsys}. For any matrix $K\in\mathbb{R}^{m\times n}$, positive-definite matrix $P\in \mathbb{R}^{n \times n}$, and $\gamma\in\mathbb{R}_{\geq 0}$, one has \begin{align} \Big((A+BK)x+d\Big)^\top P\Big((A+BK)x+d\Big)\leq 1,\label{ineq:safety_set} \end{align} $\forall d\in\Delta(\gamma)$, and $\forall x\in \mathbb{R}^n$ satisfying $ x^\top Px\leq 1$, if and only if $\exists \kappa \in (0,1]$, such that \begin{enumerate} \item (\textbf{Cond.1}) $x^\top (A+BK)^\top P(A+BK)x\leq \kappa$ holds $\forall x\in \mathbb{R}^n$ satisfying $x^\top Px\leq 1$; \item (\textbf{Cond.2}) $(y+\tilde{d})^\top P(y+\tilde{d})\leq 1$ holds $\forall y\in \mathbb{R}^n$ satisfying $y^\top Py\leq \kappa$, and $\forall \tilde{d}\in\Delta(\gamma)$. \end{enumerate} \end{theorem} The proof of Theorem~\ref{thm:LMI_1} is provided in the Appendix. In Figure~\ref{fig1:intuition}, we provide some intuitions for Theorem~\ref{thm:LMI_1}.
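For a candidate pair $(P,K)$ with known $A$ and $B$, both conditions of Theorem~\ref{thm:LMI_1} reduce to eigenvalue tests: Cond.1 is equivalent to $(A+BK)^\top P(A+BK)\preceq \kappa P$, and Cond.2 to $\gamma\lambda_{\max}(P)\leq(1-\sqrt{\kappa})^2$, since the worst case of $(y+\tilde{d})^\top P(y+\tilde{d})$ in the $P$-weighted norm is $(\sqrt{\kappa}+\sqrt{\gamma\lambda_{\max}(P)})^2$. A minimal numerical sketch of this check (assuming numpy; the reformulation and the function name are ours, not from the paper):

```python
import numpy as np

def is_gamma_rsi_pair(A, B, K, P, gamma, kappa):
    """Check Cond.1 and Cond.2 of the theorem for a candidate (P, K).

    Cond.1 <=> lambda_max(P^{-1} Acl^T P Acl) <= kappa, with Acl = A + B K.
    Cond.2 <=> gamma * lambda_max(P) <= (1 - sqrt(kappa))^2.
    """
    Acl = A + B @ K
    # Generalized eigenvalue test for Acl^T P Acl <= kappa * P.
    lam1 = np.max(np.real(np.linalg.eigvals(np.linalg.solve(P, Acl.T @ P @ Acl))))
    cond1 = lam1 <= kappa + 1e-12
    cond2 = gamma * np.max(np.linalg.eigvalsh(P)) <= (1 - np.sqrt(kappa)) ** 2 + 1e-12
    return cond1 and cond2
```

Such a check is useful for validating a candidate ellipsoid, e.g., one returned by a numerical solver, against the two conditions of the theorem.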
\begin{figure} \centering \includegraphics[width=4.2cm]{figures/kappa_intuition} \caption{An envelope $E:=\{x\in \mathbb{R}^n|x^\top Px\leq 1\}$ is a $\gamma$-RSI set, when there exists a controller $u=Kx$ that can steer any $x \in E$ into a smaller envelope $E':=\{x^+\in \mathbb{R}^n|(x^+)^\top Px^+\leq \kappa\}$ in which we assume $d=\mathbf{0}$, i.e., $\forall x\in E$, one gets $x^+\in E'$, with $x^+=(A+BK)x$.} \label{fig1:intuition} \end{figure} Next, we propose an optimization problem for computing a $\gamma$-RSI set for a linear control system as in~\eqref{eq:linear_subsys}, assuming that matrices $A$ and $B$ are known. \begin{definition}\label{opt1} Consider a linear system as in~\eqref{eq:linear_subsys} with input constraints in~\eqref{input_set}, a safety set $S$ in~\eqref{safety_set}, $\kappa\in(0,1]$, and $\gamma\geq 0$. We define an optimization problem, denoted by $OP_m$ as: \begin{align} OP_m: \min_{Q,\bar{K}} &-\text{log}(\text{det}(Q))~\label{eq:objective_function}\\ \mbox{s.t.}\ &\begin{bmatrix}\kappa Q & Q^\top A^\top +\bar{K}^\top B^\top \\AQ+B\bar{K} &Q\end{bmatrix}\succeq 0,\label{synab_cond1} \end{align} \begin{align} &Q\succeq c\mathbf{I},\label{synab_cond2}\\ &a_iQa^\top _i\leq 1,\,i=1,\ldots, \textsf{i},\label{synab_cond3}\\ &\begin{bmatrix}1 & b_j\bar{K}\\\bar{K}^\top b_j^\top &Q\end{bmatrix}\succeq 0,\,j=1,\ldots, \textsf{j}\label{synab_cond4}, \end{align} where $c=\frac{\gamma}{(1-\sqrt{\kappa})^2}$ if $\kappa \neq 1$, and $c=0$ otherwise; $Q\in \mathbb{R}^{n \times n}$ is a positive-definite matrix, and $\bar{K}\in\mathbb{R}^{m\times n}$. \end{definition} Based on Definition~\ref{opt1}, one can construct an RSI-based controller enforcing invariance properties as in the next result. \begin{theorem}\label{cor:synab} Consider the optimization problem $OP_m$ in Definition~\ref{opt1}. 
For any $\kappa\in(0,1]$ and $\gamma\geq 0$, the set $\mathcal{S}':=\{x\in X|x^\top Q^{-1}x\leq 1\}$ is a $\gamma$-RSI set with $u = \bar{K}Q^{-1}x$ being the associated RSI-based controller, if and only if $OP_m$ is feasible for the given $\gamma$ and $\kappa$. \end{theorem} The proof for Theorem~\ref{cor:synab} can be found in the Appendix. Note that the existence of $\kappa\in(0,1]$ is a necessary and sufficient condition for the existence of a $\gamma$-RSI set with respect to the safety set $S$ as in~\eqref{safety_set} according to Theorem~\ref{thm:LMI_1}. In practice, one can apply bisection to find the largest value of $\kappa$ while solving $OP_m$. \begin{remark}\label{objective} The objective function in~\eqref{eq:objective_function} maximizes the volume of the $\gamma$-RSI set in Theorem~\ref{cor:synab}, since its volume is proportional to $\text{det}(Q)$~\cite[p. 42]{Boyd1994Linear}. \end{remark} So far, we have proposed an approach for computing $\gamma$-RSI sets by assuming matrices $A$ and $B$ are known. Before proposing the direct data-driven approach with the help of the results in this subsection, we want to point out the challenge in solving Problem~\ref{prob} using indirect data-driven approaches. Following the idea of indirect data-driven approaches, one needs to identify unknown matrices $A$ and $B$ based on data, and then apply Theorem~\ref{cor:synab} to the identified model \begin{equation*} x(k+1)= \hat{A}x(k)+\hat{B}u(k)+\hat{d}(k), \end{equation*} where $\hat{A}$ and $\hat{B}$ are the estimates of $A$ and $B$, respectively, and $\hat{d}(k) := (A - \hat{A})x(k) + (B - \hat{B}) u(k) +d(k)$, with $d(k)\in\Delta(\gamma)$. Accordingly, one has $\lVert \hat{d}(k)\rVert\leq \Delta_A \lVert x(k)\rVert+\Delta_B\lVert u(k)\rVert+\sqrt{\gamma}$, with $\Delta_A\! :=\! \lVert A - \hat{A} \rVert$ and $\Delta_B \!:=\!\lVert B - \hat{B} \rVert$.
Here, $\Delta_A$ and $\Delta_B$ are known as \emph{sharp error bounds}~\cite{Simchowitz2018Learning}, which relate the identification error to the cardinality of the finite data set used for system identification. Note that the computation of these bounds requires some assumptions on the distribution of the disturbances (typically disturbances with symmetric density functions around the origin such as Gaussian and sub-Gaussian, see discussion in e.g.~\cite{Matni2019tutorial,Matni2019self} and references therein). To the best of our knowledge, it is still an open problem how to compute such bounds when considering unknown-but-bounded disturbances (also see the discussion in Section~\ref{sec1}). Such challenges in leveraging indirect data-driven approaches motivated us to propose a direct data-driven approach for computing $\gamma$-RSI sets, in which the intermediate system identification step is not required. \subsection{Direct Data-driven Computation of $\gamma$-RSI Sets}\label{sec4.1} In this subsection, we propose a direct data-driven approach for computing $\gamma$-RSI sets. To this end, the following definition is required. \begin{definition}\label{opt3} Consider a linear system as in~\eqref{eq:linear_subsys} with input constraints as in~\eqref{input_set}, a safety set $S$ as in~\eqref{safety_set}, $X_{1,N}$, $X_{0,N}$, and $U_{0,N}$, as in~\eqref{eq:state_seq1}--\eqref{eq:inputseqm}, respectively. Given $\kappa\in(0,1]$ and $\gamma\geq 0$, we define an optimization problem, denoted by $OP_d$ as: \begin{align} OP_d:\!\!\!\!\!\!\min_{Q,\bar{Z},\epsilon_1,\ldots,\epsilon_N}\!\!\!\!\! &-\text{log}(\text{det}(Q))\label{eq:objective_function2} \\ \mbox{s.t.}\ &Q\succeq c\mathbf{I},\label{LMI2_cond1}\\ & N_1\! -\!\sum_{p=1}^{N}\epsilon_p N_p \begin{bmatrix} \gamma \mathbf{I}_n &\mathbf{0} \\ \mathbf{0} &-1 \end{bmatrix} N_p^\top \!\succeq\!
0;\label{LMI2_cond2}\\ &a_iQa^\top _i\leq 1,\,i=1,\ldots, \textsf{i},\label{LMI2_cond3}\\ &\begin{bmatrix}1 & b_j\bar{Z}\\\bar{Z}^\top b_j^\top &Q\end{bmatrix}\succeq 0,\,j=1,\ldots, \textsf{j},\label{LMI2_cond4} \end{align} where $\epsilon_i>0$, $\forall i\in[1,N]$, \begin{align*} N_1 = \begin{bmatrix} \kappa Q &\mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} &-Q & -\bar{Z}^\top & \mathbf{0}\\ \mathbf{0} &-\bar{Z} & \mathbf{0}& \bar{Z} \\ \mathbf{0} &\mathbf{0} & \bar{Z}^\top &Q \end{bmatrix}; N_p = \begin{bmatrix} \mathbf{I}_n &X_{1,N}(p) \\ \mathbf{0} &-X_{0,N}(p) \\ \mathbf{0} &-U_{0,N}(p) \\ \mathbf{0} &\mathbf{0} \end{bmatrix}, \end{align*} $\forall p\in[1,N]$; $c=\frac{\gamma}{(1-\sqrt{\kappa})^2}$ if $\kappa \neq 1$, and $c=0$ otherwise; $Q\in\mathbb{R}^{n\times n}$ is a positive-definite matrix, and $\bar{Z}\in\mathbb{R}^{m\times n}$. \end{definition} With the help of Definition~\ref{opt3}, we propose the following result for building an RSI-based controller with respect to invariance properties. \begin{theorem}\label{thm:imp2} Consider an optimization problem $OP_d$ as in Definition~\ref{opt3} and the disturbance set $\Delta(\gamma)$ as in~\eqref{eq:disturbance_set}. For any $\kappa\in(0,1]$, if $OP_d$ is feasible, then the set $\mathcal{S}'_d:=\{x\in X|x^\top Q^{-1}x\leq 1\}$ is a $\gamma$-RSI set, with $u=\bar{Z}Q^{-1}x$ being the RSI-based controller associated with $\mathcal{S}'_d$. \end{theorem} The proof of Theorem~\ref{thm:imp2} is provided in the Appendix. It is also worth mentioning that the number of LMI constraints in $OP_d$ grows linearly with respect to the number of inequalities defining the safety set in~\eqref{safety_set} and input set in~\eqref{input_set}. Meanwhile, the sizes of the (unknown) matrices on the left-hand sides of~\eqref{LMI2_cond1}-\eqref{LMI2_cond4} are independent of the number of data, i.e., $N$, and grow linearly with respect to the dimensions of the state and input sets.
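The left-hand side of the data-dependent constraint~\eqref{LMI2_cond2} can be assembled mechanically from the recorded data. A minimal numpy sketch (the candidate $Q$, $\bar{Z}$, and slack values $\epsilon_p$ are illustrative placeholders, and no semidefinite-programming solver is invoked here):

```python
import numpy as np

def lmi_lhs(Q, Zbar, eps, gamma, kappa, X1, X0, U0):
    """Assemble N1 - sum_p eps[p] * N_p * diag(gamma*I_n, -1) * N_p^T,
    i.e., the left-hand side of the data-dependent LMI. Sizes: Q is
    n x n, Zbar is m x n, X1/X0 are n x N, U0 is m x N."""
    n, m, N = Q.shape[0], Zbar.shape[0], X0.shape[1]
    Z = np.zeros
    N1 = np.block([
        [kappa * Q, Z((n, n)), Z((n, m)), Z((n, n))],
        [Z((n, n)), -Q,        -Zbar.T,   Z((n, n))],
        [Z((m, n)), -Zbar,     Z((m, m)), Zbar],
        [Z((n, n)), Z((n, n)), Zbar.T,    Q],
    ])
    mid = np.block([[gamma * np.eye(n), Z((n, 1))],
                    [Z((1, n)), -np.ones((1, 1))]])
    lhs = N1.copy()
    for p in range(N):
        Np = np.block([
            [np.eye(n),  X1[:, [p]]],
            [Z((n, n)), -X0[:, [p]]],
            [Z((m, n)), -U0[:, [p]]],
            [Z((n, n)),  Z((n, 1))],
        ])
        lhs -= eps[p] * Np @ mid @ Np.T
    return lhs
```

In $OP_d$ itself, $Q$, $\bar{Z}$, and $\epsilon_p$ are decision variables, so this construction would be carried out symbolically inside an SDP solver; the sketch only illustrates the block structure of $N_1$ and $N_p$ and the $(3n+m)\times(3n+m)$ size of the resulting matrix.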
Additionally, the number of slack variables, i.e., $\epsilon_i$, grows linearly with respect to $N$. As a result, the optimization problem $OP_d$ in Definition~\ref{opt3} can be solved efficiently. \begin{remark} Although in Theorem~\ref{cor:synab} (where matrices $A$ and $B$ are assumed to be known) the feasibility of $OP_m$ for given $\gamma$ and $\kappa$ is a necessary and sufficient condition for the existence of $\gamma$-RSI sets, Theorem~\ref{thm:imp2} only provides a sufficient condition on the existence of such sets. As a future direction, we plan to work on a direct data-driven approach that provides necessary and sufficient conditions for computing $\gamma$-RSI sets, but this is beyond the scope of this work. \end{remark} In the remainder of this section, we discuss our proposed direct data-driven approach in terms of the condition of \emph{persistency of excitation}~\cite{Willems2005note} regarding the offline-collected data $X_{0,N}$ and $U_{0,N}$. We first recall this condition, which is adapted from~\cite[Corollary 2]{Willems2005note}. \begin{lemma}\label{PEassumption} Consider the linear system in~\eqref{eq:linear_subsys} with $(A,B)$ being controllable, $X_{0,N}$ as in~\eqref{eq:state_seq}, and $U_{0,N}$ as in~\eqref{eq:inputseqm}. One has \begin{align} \text{rank}\Big(\begin{bmatrix} X_{0,N}\\U_{0,N} \end{bmatrix}\Big)=n + m,\label{condPE} \end{align} with $n$ and $m$ being the dimensions of state and input sets, respectively, if $U_{0,N}$ is a \emph{persistently exciting} input sequence of order $n+1$, i.e., $\text{rank}(U_{0,n+1,N})=m(n+1)$, where \begin{align*} U_{0,n+1,N}:=\begin{bmatrix}U_{0,N}(1)& U_{0,N}(2)&\ldots&U_{0,N}(N-n)\\ U_{0,N}(2)& U_{0,N}(3)&\ldots&U_{0,N}(N-n+1)\\ \vdots&\vdots&\ddots&\vdots\\U_{0,N}(n+1)& U_{0,N}(n+2)&\ldots&U_{0,N}(N)\\\end{bmatrix}.
\end{align*} \end{lemma} The condition of \emph{persistency of excitation} in Lemma~\ref{PEassumption} is common among direct data-driven approaches, since it ensures that the data in hand encode all the information necessary for synthesizing controllers~\emph{directly} based on data~\cite{Willems2005note}. Although Definition~\ref{opt3} and Theorem~\ref{thm:imp2} do not require this condition, the next result points out the difficulties in obtaining a feasible solution for $OP_d$ whenever condition~\eqref{condPE} does not hold. \begin{corollary}\label{cor:PEreq} Consider the optimization problem $OP_d$ in Definition~\ref{opt3}, and the set \begin{align} \mathcal{F}:=\bigcap_{p=1}^{N}\mathcal{F}_p,\label{eq:Fins} \end{align} where $\mathcal{F}_p:=\Big\{(\tilde{A},\tilde{B})\in \mathbb{R}^{n\times n}\times \mathbb{R}^{n\times m}~\Big|~X_{1,N}(p) =\tilde{A}X_{0,N}(p)+\tilde{B}U_{0,N}(p)+d,d\in\Delta(\gamma)\Big\}$, in which $p\in[1,N]$. The set $\mathcal{F}$ is unbounded if and only if \begin{align} \text{rank}\Big(\begin{bmatrix} X_{0,N}\\U_{0,N} \end{bmatrix}\Big)<n + m.\label{cond} \end{align} \end{corollary} The proof of Corollary~\ref{cor:PEreq} can be found in the Appendix. As a key insight, given data of the form of~\eqref{eq:state_seq1} to~\eqref{eq:inputseqm}, failure to fulfill condition~\eqref{condPE} indicates that these data do not contain enough information about the underlying unknown system dynamics for solving the optimization problem $OP_d$, since the set of systems of the form of~\eqref{eq:linear_subsys} that can generate the same data is unbounded. Concretely, the optimization problem $OP_d$ aims at finding a common $\gamma$-RSI set for any linear system as in~\eqref{eq:linear_subsys} such that $(A,B)\!\in\!\mathcal{F}$, with $\mathcal{F}$ as in~\eqref{eq:Fins}. The unboundedness of the set $\mathcal{F}$ makes it very challenging to find a common $\gamma$-RSI set which works for all $(A,B)\!\in\!\mathcal{F}$.
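The rank condition $\text{rank}(U_{0,n+1,N})=m(n+1)$ in Lemma~\ref{PEassumption} can be verified directly from the recorded inputs. A minimal sketch (assuming numpy; the function name is ours):

```python
import numpy as np

def is_persistently_exciting(U, n):
    """Return True iff the input data U (an m-by-N array) is persistently
    exciting of order n+1, i.e., the depth-(n+1) block-Hankel matrix
    U_{0,n+1,N} has full row rank m*(n+1)."""
    m, N = U.shape
    depth, cols = n + 1, N - n
    if cols <= 0:
        return False
    # Stack the shifted input windows into the block-Hankel matrix.
    H = np.vstack([U[:, i:i + cols] for i in range(depth)])
    return np.linalg.matrix_rank(H) == m * depth
```

Condition~\eqref{condPE} itself amounts to checking the rank of the stacked matrix $[X_{0,N};U_{0,N}]$ in the same way.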
In practice, to avoid the unboundedness of $\mathcal{F}$ and ensure that~\eqref{condPE} holds, one can increase the duration of the single input-state trajectory until the condition of persistency of excitation is fulfilled (cf. case studies). Before introducing the case study of this paper, we summarize in Figure~\ref{fig:alg_chart} a flowchart for applying the proposed direct data-driven approach. \begin{figure} \centering \includegraphics[width=9.5cm]{figures/al_chart} \caption{Flowchart of the proposed direct data-driven approach, with $OP_d$ and $\kappa$ as in Definition~\ref{opt3}, $\kappa_{int}\in(0,1]$, $e\in \mathbb{R}_{>0}$, and $i_{max}\in \mathbb{N}_{>0}$ being parameters which are manually selected by users, and the PE condition referring to the condition of persistency of excitation as in Lemma~\ref{PEassumption}.} \label{fig:alg_chart} \end{figure} \section{Case Studies}\label{sec4} To demonstrate the effectiveness of our results, we apply them to a four-dimensional linearized model of the inverted pendulum as in Figure~\ref{fig:IPS}. \begin{figure} \centering \includegraphics[width=3cm]{figures/ips} \caption{Inverted pendulum, where $m=0.1314\,$kg is the mass of the pendulum, $l=0.68\,$m is the length of the pendulum, $g=9.81\,$m/s$^2$ is the gravitational acceleration, and $B_{\theta}=0.06\,$Nm/s is the damping coefficient of the connection between the cart (the blue part) and the pendulum (the green part).} \label{fig:IPS} \end{figure} Although the direct data-driven approach proposed in Section~\ref{sec4.1} does not require any knowledge about matrices $A$ and $B$ of the model, we consider a model with known $A$ and $B$ in the case study mainly for collecting data, simulation, and computing the model-based $\gamma$-RSI sets in Theorem~\ref{cor:synab} as baselines to evaluate the effectiveness of our direct data-driven approach (cf. Figures~\ref{fig1:IP12} and~\ref{fig1:IP34}).
When leveraging the direct data-driven method, we assume that $A$ and $B$ are fully unknown and treat the system as a black-box one. The model of the inverted pendulum can be described by the difference equation as in~\eqref{eq:linear_subsys}, in which \begin{align} A \!=\! \begin{small}\begin{bmatrix}1&0.02&0&0\\ 0&1&0&0\\ 0&0&1.0042&0.0194\\0&0&0.4208&0.9466\end{bmatrix}\end{small}, B \!=\! \begin{small}\begin{bmatrix}0.0002\\0.0200\\-0.0004\\[0.3em]-0.0429\end{bmatrix}\end{small},\label{dym} \end{align} where $x(k)=[x_1(k);x_2(k);x_3(k);x_4(k)]$ is the state of the system, with $x_1(k)$ being the position of the cart, $x_2(k)$ being the velocity of the cart, $x_3(k)$ being the angular position of the pendulum with respect to the upward vertical axis, and $x_4(k)$ being the angular velocity of the pendulum; $u(k)\in[-5,5]\,$m/s$^2$ is the acceleration of the cart that is used as the input to the system. The safety objective for the inverted pendulum case study is to keep the position of the cart within $[-1,1]$ m, and the angular position of the pendulum within $[-\pi/12,\pi/12]$ rad. This model is obtained by discretizing a continuous-time linearized model of the inverted pendulum as in Figure~\ref{fig:IPS} with a sampling time $\tau = 0.02\,$s, and including disturbances $d(k)$ that encompass unexpected interferences and model uncertainties. The disturbances $d(k)$ belong to the set $\Delta(\gamma)$ as in~\eqref{eq:disturbance_set}, with $\gamma = (0.05\tau)^2$, which are generated based on a non-symmetric probability density function: \begin{align} f(d):=\left\{ \begin{aligned} \frac{5}{\pi^2\gamma^2}&,\text{ for }d\in D_1;\\ \frac{9}{5\pi^2\gamma^2}&,\text{ for }d\in \Delta(\gamma)\backslash D_1, \end{aligned} \right.\label{noise1} \end{align} with $D_1:=\{ [d_1;d_2;d_3;d_4]\in \Delta(\gamma)| d_i\in\mathbb{R}_{\geq 0}, i\in[1,4]\}$.
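For illustration, the difference equation~\eqref{eq:linear_subsys} with the matrices in~\eqref{dym} can be stepped forward in a few lines of Python. The disturbance sample below is a crude bounded stand-in and does not follow the non-symmetric density in~\eqref{noise1}:

```python
import numpy as np

# matrices of the discretized inverted-pendulum model, copied from the paper
A = np.array([[1.0, 0.02, 0.0,    0.0],
              [0.0, 1.0,  0.0,    0.0],
              [0.0, 0.0,  1.0042, 0.0194],
              [0.0, 0.0,  0.4208, 0.9466]])
B = np.array([0.0002, 0.0200, -0.0004, -0.0429])

tau = 0.02
gamma = (0.05 * tau) ** 2

def step(x, u, d):
    """One step of x(k+1) = A x(k) + B u(k) + d(k)."""
    return A @ x + B * u + d

rng = np.random.default_rng(1)
x = np.zeros(4)
for _ in range(100):
    d = gamma * (2.0 * rng.random(4) - 1.0)   # crude bounded disturbance sample
    x = step(x, 0.0, d)
print(x)
```

With the zero input used here the pendulum states slowly drift away from the upright equilibrium, which is why a stabilizing feedback is needed in closed loop.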
Here, we select the distribution as in~\eqref{noise1} mainly to illustrate the difficulties in identifying the underlying unknown system dynamics when the exogenous disturbances are subject to a non-symmetric distribution, even though they are bounded. Meanwhile, our proposed direct data-driven approaches can handle such disturbances since we do not require any assumption on the disturbance distribution, e.g., being Gaussian or sub-Gaussian. Moreover, this distribution is only used for collecting data and simulation, while the computation of data-driven $\gamma$-RSI sets does not require any knowledge of it. The experiments are performed in MATLAB 2019b on a machine with a Windows 10 operating system (Intel(R) Xeon(R) E-2186G CPU (3.8 GHz)) and 32 GB of RAM. The optimization problems in Section~\ref{sec3} are solved by using the optimization toolboxes \texttt{YALMIP}~\cite{Lofberg2004YALMIP} and~\texttt{MOSEK}~\cite{ApS2019MOSEK}. First, we show the difficulties in applying indirect data-driven approaches to solve Problem~\ref{prob} in our case study when the bounded disturbances are generated based on a non-symmetric probability density function as in~\eqref{noise1}. Here, we adopt the least-squares approach as in~\cite{Jiang2004revisit} to identify matrices $A$ and $B$. We collect data as in~\eqref{eq:state_seq1}-\eqref{eq:inputseqm} with $N\!=\!500$, and we obtain the estimates of $A$ and $B$ as \begin{align*} \hat{A} \!=\! \begin{small}\begin{bmatrix}1&0.02&0&0\\ 0&1&0&0\\ 0.0764&-0.0888&2.3439&-0.3745\\0.0687&-0.0798&1.6255&0.5924\end{bmatrix}\end{small}\!\!, \hat{B} \!=\! \begin{small}\begin{bmatrix}0.0002\\0.0200\\-0.0003\\[0.3em]-0.0422\end{bmatrix}\end{small}\!\!, \end{align*} respectively. Based on the estimated model, we obtain a controller $u=K_ix$ by applying Theorem~\ref{cor:synab}, with $K_i= \big[-9.8089;-3.3176;$ $-112.7033;25.7470\big]^\top $.
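The least-squares identification used above amounts to a pseudoinverse fit $[\hat{A}\;\hat{B}]=X_{1,N}[X_{0,N};U_{0,N}]^{\dagger}$. The sketch below demonstrates it on a hypothetical noiseless toy system, where exact recovery is possible (the disturbances in~\eqref{noise1} are what spoils this in the case study):

```python
import numpy as np

def least_squares_id(X0, X1, U0):
    """Estimate (A, B) from data X1 ~ A X0 + B U0 via [A B] = X1 pinv([X0; U0])."""
    Z = np.vstack([X0, U0])
    AB = X1 @ np.linalg.pinv(Z)
    nx = X0.shape[0]
    return AB[:, :nx], AB[:, nx:]

# hypothetical noiseless toy system for a sanity check
A_toy = np.array([[0.9, 0.1],
                  [0.0, 0.8]])
B_toy = np.array([[0.0],
                  [1.0]])
rng = np.random.default_rng(2)
N = 30
U0 = rng.standard_normal((1, N))
X0 = np.empty((2, N))
X1 = np.empty((2, N))
x = rng.standard_normal(2)
for k in range(N):
    X0[:, k] = x
    x = A_toy @ x + B_toy[:, 0] * U0[0, k]
    X1[:, k] = x

A_hat, B_hat = least_squares_id(X0, X1, U0)
print(np.allclose(A_hat, A_toy), np.allclose(B_hat, B_toy))
```

With bounded but non-symmetrically distributed noise added to the data, the same fit acquires a persistent bias, consistent with the fluctuating estimates reported in the case study.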
With this controller, we initialize the system at $x =\big[0;0;0;0\big]^\top $, and simulate the system within the time horizon $H=70$. The projections of closed-loop state trajectories on the $x_1-x_3$ plane are shown in Figure~\ref{ind_traj}, which indicate that the desired safety constraints are violated. Additionally, we depict in Figure~\ref{prob_ident} the evolution of the entry $\hat{A}(3,3)$ as an example to show that some of the entries in $\hat{A}$ keep fluctuating as the number of data used for system identification increases. In other words, $\hat{A}$ does not seem to converge to the real value in~\eqref{dym} as the number of data used for system identification increases. \begin{figure} \centering \includegraphics[width=7.3cm]{figures/ind_traj} \caption{Projections of 1000 closed-loop trajectories on the $x_1-x_3$ plane when applying the controller obtained by the indirect data-driven approach.}\label{ind_traj} \end{figure} \begin{figure} \centering \includegraphics[width=7.3cm]{figures/prob_ident} \caption{Evolution of the entry $\hat{A}(3,3)$ as the number of data used for system identification increases.}\label{prob_ident} \end{figure} Next, we proceed with demonstrating our direct data-driven approach. To compute the data-driven $\gamma$-RSI set using Theorem~\ref{thm:imp2}, we first collect data as in~\eqref{eq:state_seq1}-\eqref{eq:inputseqm} with $N=107$. Note that we pick $N=107$ such that condition~\eqref{condPE} holds. Then, we obtain a data-driven $\gamma$-RSI set within $4.165\,$s. Here, we denote the data-driven $\gamma$-RSI set by $\mathcal{S}_d := \{x\in\mathbb{R}^4 |x^\top P_dx\leq 1\}$, with \begin{align*} P_d = Q^{-1} =\begin{small} \begin{bmatrix}3.3950 & 2.8786& 12.1264& 1.9861\\2.8786 & 3.8224 & 15.6826&2.7404\\12.1264&15.6826&81.9169&12.4079\\1.9861&2.7404&12.4079&2.4531\end{bmatrix} \end{small}, \end{align*} in which $Q$ is the solution of $OP_d$ with $\kappa=0.9813$.
The RSI-based controller associated with $\mathcal{S}_d$ is $u=K_dx$, where $K_d= \big[3.2672;4.9635;$ $38.1223;4.9989\big]^\top $. As for the simulation, we first randomly select $100$ initial states from $\mathcal{S}_d$ following a uniform distribution. Then, we apply the RSI-based controller associated with $\mathcal{S}_d$ in closed loop and simulate the system within the time horizon $H=200$. In the simulation, the disturbance at each time instant is injected following the distribution in~\eqref{noise1}. The projections\footnote{Here, the projections of the $\gamma$-RSI sets are computed by leveraging \texttt{Ellipsoidal Toolbox}~\cite{Kurzhanskiy2021Ellipsoidal}.} of the data-driven $\gamma$-RSI sets and closed-loop state trajectories on the $x_1-x_2$ and $x_3-x_4$ planes are shown in Figures~\ref{fig1:IP12} and~\ref{fig1:IP34}, respectively. For comparison, we also compute the model-based $\gamma$-RSI set with Theorem~\ref{cor:synab}, denoted by $\mathcal{S}_m$, and project it onto relevant coordinates. One can readily verify that all trajectories are within the desired safety set, and input constraints are also respected, as displayed in Figure~\ref{fig1:input_IP}. It is also worth noting that, as shown in Figure~\ref{fig1:IP34}, the data-driven $\gamma$-RSI set does not necessarily need to be inside the model-based one, since the $\gamma$-RSI set with the maximal volume (cf. Remark~\ref{objective}) does not necessarily contain all other possible $\gamma$-RSI sets with smaller volume.
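The uniform selection of initial states from the ellipsoid $\mathcal{S}_d=\{x\;|\;x^\top P_d x\leq 1\}$ can be implemented by pushing uniform unit-ball samples through a Cholesky factor of the shape matrix; a sketch with a hypothetical $2\times 2$ matrix $P$:

```python
import numpy as np

def sample_ellipsoid(P, n_samples, rng):
    """Uniform samples from {x : x^T P x <= 1}: draw uniformly from the unit
    ball and map through L^{-T}, where P = L L^T is a Cholesky factorization."""
    n = P.shape[0]
    L = np.linalg.cholesky(P)
    g = rng.standard_normal((n_samples, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)   # uniform on the unit sphere
    r = rng.random(n_samples) ** (1.0 / n)          # radii for a uniform ball
    ball = g * r[:, None]
    return np.linalg.solve(L.T, ball.T).T           # x = L^{-T} b

rng = np.random.default_rng(3)
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])                          # hypothetical shape matrix
X = sample_ellipsoid(P, 1000, rng)
inside = np.einsum('ij,jk,ik->i', X, P, X) <= 1.0 + 1e-9
print(inside.all())
```

Since $x^\top P x = b^\top L^{-1}LL^\top L^{-\top}b = b^\top b \le 1$, every mapped sample lies in the ellipsoid, and the linear map preserves uniformity of the ball samples.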
\begin{figure} \centering \includegraphics[width=7.3cm]{figures/IP12} \caption{Projections of the data-driven $\gamma$-RSI set $\mathcal{S}_d$, the model-based $\gamma$-RSI set $\mathcal{S}_m$, initial states, and state trajectories on the $x_1-x_2$ plane.}\label{fig1:IP12} \end{figure} \begin{figure} \centering \includegraphics[width=7.3cm]{figures/IP34} \caption{Projections of the data-driven $\gamma$-RSI set $\mathcal{S}_d$, the model-based $\gamma$-RSI set $\mathcal{S}_m$, initial states, and state trajectories on the $x_3-x_4$ plane.}\label{fig1:IP34} \end{figure} \begin{figure} \centering \includegraphics[width=7.3cm]{figures/input_IP} \caption{Input sequences for the inverted pendulum example.}\label{fig1:input_IP} \end{figure} \section{Conclusions}\label{sec5} In this paper, we proposed a direct data-driven approach to synthesize safety controllers, which enforce invariance properties over unknown linear systems affected by unknown-but-bounded disturbances. To do so, we proposed a direct data-driven framework to compute $\gamma$-robust safety invariant ($\gamma$-RSI) sets, which is the main contribution of this paper. Moreover, we discussed the relation between our proposed data-driven approach and the condition of persistency of excitation, explaining the difficulties in finding a suitable solution when the collected data do not fulfill such a condition. To show the effectiveness of our results, we applied them to a four-dimensional inverted pendulum. Providing a data-driven approach for computing control barrier functions to enforce invariance properties is under investigation as future work. \section{Acknowledgment} This work was supported in part by the H2020 ERC Starting Grant AutoCPS (grant agreement No 804639) and by an Alexander von Humboldt Professorship endowed by the German Federal Ministry of Education and Research. \bibliographystyle{my-elsarticle-num}
\section{Introduction} \hspace*{0.3cm} Turbulence is a complex state of fluid motion. The flow field varies randomly both in space and in time. An individual flow field is too complicated to extract any simple and useful information from, and does not exhibit any universal laws. The mean flow fields, on the other hand, obtained by spatial, temporal or ensemble averaging, exhibit simpler behaviour and allow for the extraction of universal statistical properties. Useful information, if any, is expected to be seen more clearly in the mean flow. In fact, the celebrated statistical laws of turbulence, such as the Kolmogorov universal law for the energy spectrum at small scales (see \citet{MoYa75}) and the logarithmic velocity profile in wall turbulence (see \citet{Schl79}) were confirmed experimentally by ensemble averages of many measured data. \hspace*{0.3cm} In contrast to the statistical ones, the dynamical properties of turbulence are blurred in the mean field and must be analysed in the instantaneous flow. The fact that the fluid motion is chaotic and never repeats, however, makes it extremely difficult to extract any universal dynamical properties. There is no way to pick up, with confidence, any representative parts of turbulent flows from a finite series of temporal evolution. Thus, it would be desirable if there were some reproducible flows, or skeletons of turbulence, which represent the turbulent state well. This is reminiscent of unstable periodic orbits in chaotic dynamical systems. The chaotic attractor contains infinitely many unstable periodic orbits. Some statistical properties associated with a strange attractor are described in terms of the unstable periodic orbits embedded in it. Rigorous results are provided by the {\em cycle expansion} theory \cite{artuso}. However, these results only seem to apply to a certain class of dynamical systems with a chaotic attractor of a dimension less than three, a far cry from developed turbulence.
The dimension of the attractor of turbulent flow is expected to grow with the number of modes in the inertial range. For turbulent Poiseuille flow, for instance, the attractor dimension has been estimated to be $\mbox{O}(100)$ at a wall-unit Reynolds number of $R_{\tau}=80$ \cite{keefe}. \hspace*{0.3cm} In such high-dimensional chaos it is unknown whether an infinite number of periodic orbits is necessary to describe the statistical properties of the strange attractor or a finite number of them is sufficient. In this respect, two key papers have recently been published. \citet{kawahara} found two periodic solutions in the plane Couette system with $15,422$ degrees of freedom and showed that they represent the quiescent and turbulent phases of the flow. The latter periodic solution represents the generation cycle of turbulent activity, i.e. the repetition of alternate generation and breakdown of streamwise vortices and low-speed streaks. Moreover, the phenomenon of bursting is explained as the state point wandering back and forth between these solutions. This provides the first example showing that a single periodic motion can represent the properties of the turbulent state well. The second example is the discovery of a periodic solution in shell model turbulence with $24$ degrees of freedom \cite{KaYa03}. A one parameter family of the solution exhibits the scaling exponents of the structure function of the velocity field similar to real turbulence. \hspace*{0.3cm} Inspired by these discoveries of periodic solutions which represent the turbulent state by themselves, we were led to the present search of periodic motion in isotropic turbulence, hoping to find one which reproduces turbulent statistics such as the Kolmogorov energy spectrum in the universal range. Such periodic orbits are asymptotically unstable and are not found by simple forward integration. They can only be captured by Newton-Raphson iterations or similar methods.
Here, we encounter a hard practical problem, namely that the computation time required for the performance of Newton-Raphson iterations increases rapidly as the square of the number of degrees of freedom, which is enormous in a simulation of the turbulent state. Our present computer resources limit the available number of degrees of freedom to $\rm{O}(10^4)$. \hspace*{0.3cm} In the next section, we impose the high symmetry on the flow to reduce the number of degrees of freedom in simulations \cite{kida1}. The onset of developed turbulence at micro-scale Reynolds number $R_{\lambda}=67$, described in section \ref{sec:turbulent}, can then be resolved by taking account of about $10^4$ degrees of freedom. The localisation of periodic solutions in such large sets of equations is a hard task indeed. In section \ref{sec:periodic}, we take the approach of regarding periodic solutions as fixed points of a Poincar\'e map. Newton-Raphson iterations can then be used to find such fixed points. The iterations, however, converge only if a good initial guess is provided. We filter initial data from a turbulent time series by looking for approximately periodic time segments. This works well at fairly low $R_{\lambda}$, where the flow is only weakly turbulent. Subsequently we use arc-length continuation to track the periodic solutions into the regime of developed turbulence. We present several periodic solutions of different period and compare them to the turbulent state in a range of $R_{\lambda}$. Then we show in section \ref{sec:embedded} that the solution of longest period considered here, about two to three times the large-eddy-turnover time, represents the turbulence remarkably well. In particular we compare the time-averaged energy-dissipation rate, the energy spectrum and the largest Lyapunov exponent. Further, we examine the dynamical properties of this particular periodic motion and show that it exhibits the energy-cascade process by itself.
It consists of a low-active period and a high-active period, and the turbulent state approaches it selectively in the low-active part at a rate of about once every several eddy-turnover times. We compute a part of the Lyapunov spectrum of the periodic motion and the corresponding Kaplan-Yorke dimension and Kolmogorov-Sinai entropy. These values can be considered as an approximation of the values found in isotropic turbulence under high-symmetry conditions. The local Lyapunov exponents are shown to have systematic correlations with the energy-input rate and dissipation rate of the periodic motion, which leads to the conjecture that the ordering of the Lyapunov vectors by the magnitude of the corresponding exponents corresponds to an ordering of spatial scales of the perturbation fields they describe. Finally, future perspectives of turbulence research based on unstable periodic motion will be discussed in section \ref{sec:conclusion}. \section{High-Symmetric Flow} \label{sec:highsymm} \hspace*{0.3cm} We consider the motion of an incompressible viscous fluid in a periodic box given by $0<x_1,x_2,x_3\leq 2\pi$. The velocity field $\bm{u}(\bm{x},t)$ and the vorticity field $\bm{\omega}(\bm{x},t)=\nabla\times\bm{u}(\bm{x},t)$ are expanded in the Fourier series of $N^3$ terms as \begin{eqnarray} \bm{u}(\bm{x},t) &=& \mbox{i} \sum_{\bm{\scriptstyle k}}\widetilde{\bm{u}}(\bm{k},t) \mbox{e}^{{\rm i} \bm{\scriptstyle k}\cdot\bm{\scriptstyle x}}, \\ \bm{\omega}(\bm{x},t) &=& \sum_{\bm{\scriptstyle k}} \widetilde{\bm{\omega}} (\bm{k},t)\mbox{e}^{{\rm i}\bm{\scriptstyle k}\cdot\bm{\scriptstyle x}}, \end{eqnarray} where $\bm{k} =(k_1,k_2,k_3)$ is the wavenumber and the summations are taken over all triples of integers satisfying $-\frac{1}{2}N<k_1, k_2, k_3 \le \frac{1}{2}N$.
Then, the Navier-Stokes and the continuity equations are respectively written as \begin{eqnarray} \frac{\mbox{d}}{\mbox{dt}}\widetilde{\omega}_{i}(\bm{k},t) &=& \epsilon_{ijk}k_{j}k_{l} \,\widetilde{u_{k}u_{l}}(\bm{k},t) -\nu k^2 \widetilde{\omega}_{i}(\bm{k},t), \label{NS}\\ [0.5cm] k_{i}\widetilde{u}_{i}(\bm{k},t)&=& 0, \label{cont} \end{eqnarray} where $\nu$ is the kinematic viscosity, $\epsilon_{ijk}$ is the unit anti-symmetric tensor, and the tilde denotes the Fourier transform. The summation convention is assumed for the repeated subscripts. By definition, the Fourier transforms of the velocity and vorticity fields are related by \begin{equation} \widetilde{\omega}_{i}(\bm{k},t)=-\epsilon_{ijk}k_{j} \widetilde{u}_{k}(\bm{k},t). \end{equation} \hspace*{0.3cm} In order to reduce the number of degrees of freedom we impose the high symmetry on the flow field \cite{kida1}, in which the Fourier components of vorticity are real, and satisfy \begin{equation} \widetilde{\omega}_1(k_1,k_2,k_3;t)= \begin{cases} +\widetilde{\omega}_2(k_3,k_1,k_2;t),\\ +\widetilde{\omega}_3(k_2,k_3,k_1;t),\\ +\widetilde{\omega}_1(-k_1,k_2,k_3;t),\\ -\widetilde{\omega}_1(k_1,-k_2,k_3;t),\\ -\widetilde{\omega}_1(k_1,k_2,-k_3;t),\\ +\widetilde{\omega}_1(k_1,k_3,k_2;t) &\hbox{(if $k_1$, $k_2$, $k_3$ are all even),}\\ -\widetilde{\omega}_1(k_1,k_3,k_2;t) &\hbox{(if $k_1$, $k_2$, $k_3$ are all odd),}\\ 0 & \hbox{(unless $k_1$, $k_2$, $k_3$ are all even or odd).} \end{cases} \label{hsymmetry} \end{equation} Under these conditions, only a single component of the vorticity field has to be computed in a volume fraction $1/64$ of the periodic domain and the number of degrees of freedom is reduced by a factor of $192$. \hspace*{0.3cm} The flow is maintained by fixing the magnitude of the smallest wavenumber components of velocity, which otherwise tends to decay in time due to transfer of energy to larger wavenumbers leading to ultimate dissipation by viscosity. 
The magnitude of the smallest wavenumber of a nonzero velocity component under the high-symmetry condition is $k_{f}=\sqrt{11}$, and the magnitude of the velocity of the fixed components is set to be \begin{equation} |\widetilde{u}_{i}(\bm{k},t)|=\textstyle{\frac{1}{8}}\qquad \hbox{($i=1,2,3$)\hspace*{1cm} for $|\bm{k}|=k_f$.} \label{eq:forcing} \end{equation} Since the magnitude of these components of velocity decreases, on average, in each time step of the numerical simulation, this manipulation results in an energy supply to the system. As will be discussed in subsection \ref{subsec:structure}, the energy-input rate, \begin{equation} e(t)=\sum_{|\bm{\scriptstyle k}|=k_{f}} \widetilde{u}_{i}(\bm{k},t) \frac{\mbox{d}}{\mbox{dt}}{\widetilde{u}}_{i}(\bm{k},t), \end{equation} changes in time depending on the state of the flow. \hspace*{0.3cm} Equations (\ref{NS}) and (\ref{cont}) are solved numerically starting with some appropriate initial condition. The nonlinear terms are evaluated by the spectral method in which the aliasing interaction is suppressed by eliminating all the Fourier components beyond the cut-off wavenumber $k_{\rm max}=[N/3]$, the maximum integer not exceeding $N/3$. In the following, we fix $N=128$ so that the number $n$ of degrees of freedom of the present flow is about $10^4$. The fourth-order Runge-Kutta-Gill scheme with step size $\Delta t=0.005$ is employed for time stepping.
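The suppression of aliasing by the cut-off $k_{\rm max}=[N/3]$ can be illustrated in one dimension. The following numpy sketch (a one-dimensional stand-in for the three-dimensional high-symmetric code) computes a dealiased pseudo-spectral product:

```python
import numpy as np

def dealiased_product(f_hat, g_hat):
    """Pseudo-spectral product of two 1-D fields: modes beyond the cut-off
    k_max = [N/3] are zeroed before and after the pointwise multiplication
    in physical space, so no aliased contribution survives."""
    N = f_hat.size
    k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers
    mask = np.abs(k) <= N // 3
    f = np.fft.ifft(f_hat * mask)
    g = np.fft.ifft(g_hat * mask)
    return np.fft.fft(f * g) * mask

# check: e^{ix} times e^{ix} puts all of the product into the k = 2 mode
N = 16
x = 2.0 * np.pi * np.arange(N) / N
f_hat = np.fft.fft(np.exp(1j * x))
out = dealiased_product(f_hat, f_hat)
print(np.isclose(out[2], N))
```

Retaining only modes with $|k|\leq N/3$ guarantees that products of retained modes which alias back onto the grid fall outside the retained range and are discarded.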
\hspace*{0.3cm} For later use, we introduce several global quantities which characterise the flow properties, namely, the total kinetic energy of fluid motion, \begin{eqnarray} {\mathcal E}(t) &=& \frac{1}{(2\pi)^3}\int\frac{1}{2} |\bm{u}(\bm{x},t)|^2{\rm d}\bm{x} = \frac{1}{2}\sum_{\bm{\scriptstyle k}} |\widetilde{\bm{u}}(\bm{k},t)|^2, \end{eqnarray} the enstrophy, \begin{eqnarray} {\mathcal Q}(t) &=& \frac{1}{(2\pi)^3}\int\frac{1}{2} |\bm{\omega}(\bm{x},t)|^2{\rm d}\bm{x} = \frac{1}{2}\sum_{\bm{\scriptstyle k}} |\widetilde{\bm{\omega}}(\bm{k},t)|^2, \label{enstrdef} \end{eqnarray} the energy-dissipation rate, \begin{equation} \epsilon(t) = 2\nu {\mathcal Q}(t), \end{equation} and the Taylor micro-scale Reynolds number, \begin{equation} R_{\lambda}(t) = \sqrt{\frac{10}{3}}\frac{1}{\nu} \frac{{\mathcal E}(t)}{\sqrt{{\mathcal Q}(t)}}, \end{equation} where the integration is carried out over the whole periodic box. In the following, time-averaged quantities are denoted by an over bar. \section{Turbulent State} \label{sec:turbulent} \hspace*{0.3cm} The reduction by symmetry introduced above makes it possible to describe the laminar-turbulent transition and the statistics of fully developed turbulence in terms of relatively few degrees of freedom. With the forcing as described by Eq. (\ref{eq:forcing}), the following scenario is observed for decreasing viscosity \cite{kida2,veen}. \hspace*{0.3cm} The flow is steady at large viscosity $\nu$, or small micro-scale Reynolds number $R_{\lambda}$, and remains so down to $\nu\approx 0.01$ ($R_{\lambda}\approx 25$), where a Hopf bifurcation takes place and the flow becomes periodic with a period of about $2.2$. This period is identical to the Poincar\'e return time $T_{\rm{\scriptscriptstyle R}}$ which will be introduced in section \ref{sec:periodic}. The stable periodic motion subsequently undergoes a torus bifurcation and the motion becomes quasi-periodic. 
In a range of viscosity, $0.008>\nu >0.005$ ($35<R_{\lambda}<50$), we observe the breakdown and creation of invariant tori, and the behaviour alternates between quasi-periodic and chaotic. In this chaotic regime the spatial structure of the flow remains simple so that we can speak of `weak turbulence'. Around $\nu=0.005$ the flow becomes chaotic through the Ruelle-Takens scenario, and for lower viscosity ($\nu<0.005$) only disordered behaviour is found. Then, for $\nu<0.004$ ($R_{\lambda}>60$), the time-averaged energy-dissipation rate $\overline{\epsilon}$ hardly changes as a function of viscosity and fully developed turbulence sets in. \begin{figure}[h] \begin{center} \includegraphics[width=.8\textwidth]{turbulent1.eps} \end{center} \scaption{The time-averaged energy-dissipation rate $\overline{\epsilon}$ against viscosity $\nu$ in the turbulent state. As $\nu$ decreases, $\overline{\epsilon}$ seems to saturate around $0.1$.} \label{turbulent1} \end{figure} \hspace*{0.3cm} In Fig. \ref{turbulent1}, we display $\overline{\epsilon}$ against $\nu$ over the range $0.0035<\nu <0.005$, where a transition from weak to fully developed turbulence takes place. Observe that $\overline{\epsilon}$ seems to saturate around $0.1$ at smaller viscosity. This property will play a key role in identifying periodic motion that represents the turbulent state in section \ref{sec:periodic}. As is common to many kinds of turbulence, quite large fluctuations are observed in time series of $\epsilon(t)$. The standard deviation $\sigma_{\epsilon}$ is about $10\sim 20$\% of the mean value $\overline{\epsilon}$ in the present flow. For example, $\sigma_{\epsilon}$ takes the value 0.009 at $\nu=0.0045$ and 0.016 at $\nu=0.0035$, too large to be drawn in the figure. For a plot of $R_{\lambda}$ against $\nu$, see \citet{kida2}. 
\hspace*{0.3cm} The energy spectrum, which represents the scale distribution of turbulent activity, is one of the most fundamental statistical quantities characterising turbulence. Since the longitudinal velocity correlation is relatively easy to measure in experiments, the one-dimensional longitudinal energy spectrum is frequently compared between different kinds of turbulence. In the high-symmetric flow, it is calculated by \begin{equation} E_{\parallel}(k,t)=\frac{1}{2}\sum_{k_2,k_3}|\widetilde{u}_1(k,k_2,k_3;t)|^2. \label{eq:longitudinal} \end{equation} In Fig. \ref{turbulent2}, we plot the time-averaged one-dimensional longitudinal energy spectrum $\overline{E}_{\parallel}(k)$ at the maximal micro-scale Reynolds number $R_{\lambda}=67$ attained in our numerical experiments. The straight line indicates the Kolmogorov $-5/3$ power law with a Kolmogorov constant of $1.4$. The inertial range appears only marginally at this Reynolds number. \begin{figure}[h] \begin{center} \includegraphics[width=.8\textwidth]{turbulent2.eps} \end{center} \scaption{One-dimensional longitudinal energy spectrum for the turbulent state at $R_{\lambda}=67$ ($\nu=0.0035$) with error bars denoting standard deviation. The straight line denotes the Kolmogorov $-5/3$ power law with a Kolmogorov constant of 1.4. Both axes are normalised by the Kolmogorov characteristic scales.} \label{turbulent2} \end{figure} \hspace*{0.3cm} In order to go to larger Reynolds numbers, we need to increase the truncation level to maintain $k_{\rm max}\eta\approx 1$, where $\eta=({\nu^3/\overline{\epsilon}})^{1/4}$ is the Kolmogorov length. The main impediment for increasing the truncation level is the computation time and memory requirement of the continuation of periodic orbits, as will be described in section \ref{sec:periodic}. In previous work by \citet{kida4} it was shown that the high-symmetric flow reproduces the Kolmogorov spectra accurately at large Reynolds numbers ($R_{\lambda}\sim 100$).
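Given an array of Fourier coefficients, Eq.~(\ref{eq:longitudinal}) is a plain partial sum over the transverse wavenumbers; a minimal sketch (the array layout, and real coefficients as in the high-symmetric flow, are assumed for simplicity):

```python
import numpy as np

def longitudinal_spectrum(u1_hat):
    """E_par(k) = 0.5 * sum over k2, k3 of |u1_hat[k, k2, k3]|^2."""
    return 0.5 * np.sum(np.abs(u1_hat) ** 2, axis=(1, 2))

rng = np.random.default_rng(5)
u1_hat = rng.standard_normal((8, 8, 8))     # hypothetical coefficient array
E_par = longitudinal_spectrum(u1_hat)
# summing the spectrum over k recovers the total contribution of u_1
print(np.isclose(E_par.sum(), 0.5 * np.sum(u1_hat ** 2)))
```

The time-averaged spectrum $\overline{E}_{\parallel}(k)$ then follows by averaging such snapshots over the stored time series.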
The intermittency effects were investigated by \citet{kida5} and \citet{boratav}. \hspace*{0.3cm} The turbulent flow is composed of various vortical motions of different spatial and temporal scales. The dominant characteristic time-scale of turbulence is the large-eddy-turnover time $T_{\rm{\scriptscriptstyle T}}$, which may be estimated from the root-mean-square velocity and the domain size, and is $\rm{O}(1)$ in the present flows. A more precise value of $T_{\rm{\scriptscriptstyle T}}$ may be obtained by the frequency spectra of energy $\mathcal{E}(t)$ and enstrophy $\mathcal{Q}(t)$, which will be useful for grouping of the periodic orbits studied in the next section. Time series of $\mathcal{E}(t)$ and $\mathcal{Q}(t)$, taken over $0<t <10^4$ in the turbulent flow at $\nu=0.0035$, are Fourier transformed, and their spectra are plotted in Fig. \ref{enefreqspec}. The dominant peak corresponds to the large-eddy-turnover time of $T_{\rm{\scriptscriptstyle T}}\approx 4.4$. The second peak near the left end shows variations on time scales around $7T_{\rm{\scriptscriptstyle T}}$ and is not discussed here. A weaker peak is visible at $T_{\rm{\scriptscriptstyle R}}\approx 2.2$, which corresponds to the period of oscillation of the flow observed at larger viscosity (see section \ref{sec:turbulent}) as well as to the most probable return time of the Poincar\'e map (see Fig. \ref{PDFt_r}) and will be used to label the periodic solutions. \begin{figure}[h] \begin{center} \includegraphics[width=.8\textwidth]{freq-spectra.eps} \end{center} \scaption{Frequency spectra of energy (solid line) and enstrophy (dotted line) at $\nu=0.0035$. 
Dashed lines are drawn at the peaks corresponding to the large-eddy-turnover time $T_{\rm{\scriptscriptstyle T}}$ and the most probable return time $T_{\rm{\scriptscriptstyle R}}$ of the Poincar\'e map, described in section \ref{sec:periodic}.} \label{enefreqspec} \end{figure} \section{Extracting periodic motion} \label{sec:periodic} \hspace*{0.3cm} The state of the vorticity field is represented by a point in the phase space spanned by $n$ Fourier components \{$\widetilde{\bm{\omega}}(\bm{k})$\} of the vorticity field, which are independent under the high-symmetry condition (\ref{hsymmetry}). Here, $n$ is the number of degrees of freedom in the truncated system, about $10^4$ for $N=128$ as stated earlier. We specify an $(n-1)$-dimensional hyperplane $S$ by fixing one of the small wavenumber components of the vorticity field to a constant. Periodic orbits are then fixed points of $m$ iterations of the Poincar\'e map $\mathcal{P}_{\nu}$ on $S$: \begin{equation} \mathcal{P}^{\ m}_{\nu}(\bm{y})-\bm{y}=\bm{0}\qquad \hbox{($m=1,2,3,\cdots$)}, \label{fixed} \end{equation} where $\bm{y}\in S$. Equation (\ref{fixed}) is highly nonlinear and can be solved by Newton-Raphson iterations. For large $n$, the initial guess should be rather close to the fixed point to guarantee convergence. \hspace*{0.3cm} In order to find initial points, we performed a long time integration of Eqs. (\ref{NS}) and (\ref{cont}) with $\nu=0.0045$, i.e. in the weakly turbulent regime. We computed the intersection points with the plane $S$ given by $\widetilde{\omega}_1(0,2,4)=-0.04$, the time mean value at $\nu=0.0045$. If a point was mapped close to itself after $m$ iterations of the Poincar\'e map, i.e. \begin{equation} \| \mathcal{P}_{\nu}^{\ m}(\bm{y})-\bm{y} \|_{\rm{\scriptscriptstyle Q}} < \delta, \end{equation} it was marked as an initial point. Here, $\|\cdot\|_{\rm{\scriptscriptstyle Q}}$ stands for the enstrophy norm, i.e. the enstrophy computed according to Eq. (\ref{enstrdef}).
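The scan for near-recurrences just described, marking intersection points that return to within $\delta$ of themselves after $m$ iterations, can be sketched as follows (with the Euclidean norm standing in for the enstrophy norm):

```python
import numpy as np

def near_recurrences(Y, m_max, delta):
    """Return all pairs (i, m) with ||Y[i+m] - Y[i]|| < delta, where Y holds
    the successive intersection points with the Poincare plane S."""
    hits = []
    for m in range(1, m_max + 1):
        for i in range(len(Y) - m):
            if np.linalg.norm(Y[i + m] - Y[i]) < delta:
                hits.append((i, m))
    return hits

# synthetic sequence of intersections with a planted period-5 near-recurrence
rng = np.random.default_rng(4)
Y = rng.standard_normal((10, 3))
Y[7] = Y[2] + 1e-3
hits = near_recurrences(Y, 6, 0.01)
print((2, 5) in hits)
```

Each hit $(i,m)$ supplies the Newton-Raphson iterations with an initial guess $Y[i]$ for a candidate period-$m$ orbit.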
A suitable threshold value $\delta$ for the distance was chosen as $0.2$, about 10\% of the standard deviation of enstrophy. Thus we found a collection of candidates for periodic orbits with $m$ ranging from $1$ to $12$. The same approach with $\nu =0.0035$, where turbulence was fully developed, did not yield any candidates in a time integration of length $10^4 T_{\rm{\scriptscriptstyle T}}$. \hspace*{0.3cm} \begin{figure}[h] \begin{center} \includegraphics[width=.8\textwidth]{PDFt_r.eps} \end{center} \scaption{The probability density function of the return time $t_{\rm{\scriptscriptstyle R}}$ of the Poincar\'e map. Obtained from $2,000$ iterations of the Poincar\'e map at $\nu=0.0035$. The most probable return time is $T_{\rm{\scriptscriptstyle R}}$. The probability density function shows little dependence on the viscosity and the choice of the Poincar\'e plane $S$. } \label{PDFt_r} \end{figure} \hspace*{0.3cm} Fig. \ref{PDFt_r} shows the probability density function of the return time $t_{\rm{\scriptscriptstyle R}}$ of the Poincar\'e map, computed at $\nu=0.0035$. Two large peaks are prominent around $T_{\rm{\scriptscriptstyle R}}$ and $2T_{\rm{\scriptscriptstyle R}}$. This implies that $\widetilde{\omega}_1(0,2,4)$ oscillates with a period of about $T_{\rm{\scriptscriptstyle R}}$ and that it crosses the prescribed value $-0.04$ every oscillation, occasionally missing a crossing. Two or more successive missed crossings are very rare. Recall that the most probable return time $T_{\rm{\scriptscriptstyle R}}$ is the same as the characteristic time of turbulence identified in section \ref{sec:turbulent} as a peak in the frequency spectra of energy and enstrophy. The probability density function shows little dependence on the viscosity. The periodic orbits identified as fixed points of $\mathcal{P}^{\ m}_{\nu}$ have a period roughly equal to $m$ times $T_{\rm{\scriptscriptstyle R}}$ in the whole range $0.0035<\nu<0.0045$.
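A toy version of the Newton-Raphson search for fixed points of $\mathcal{P}^{\ m}_{\nu}$, with the Jacobian approximated column by column by forward differences, is sketched below; in the actual computation each of the $\rm{O}(10^4)$ Jacobian columns requires one integration of the flow:

```python
import numpy as np

def newton_fixed_point(P, y0, m=1, tol=1e-10, max_iter=30, h=1e-7):
    """Solve P^m(y) - y = 0 by Newton-Raphson with a forward-difference
    Jacobian (toy sketch of the procedure described in the text)."""
    def Pm(y):
        for _ in range(m):
            y = P(y)
        return y
    y = np.asarray(y0, dtype=float)
    n = y.size
    for _ in range(max_iter):
        base = Pm(y)
        F = base - y
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((n, n))
        for j in range(n):               # one extra map evaluation per column
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (Pm(y + e) - base) / h
        y = y - np.linalg.solve(J - np.eye(n), F)
    return y

# hypothetical two-dimensional map with a fixed point near (0.41, 0.05)
P = lambda y: np.array([0.5 * y[0] + 0.1 * y[1] + 0.2,
                        0.2 * y[0] ** 2 + 0.3 * y[1]])
y_star = newton_fixed_point(P, [1.0, 1.0])
print(np.allclose(P(y_star), y_star))
```

Since the Poincar\'e map itself is a full Navier-Stokes integration between crossings of the plane $S$, the finite differencing of the Jacobian dominates the cost, which is why it is parallelized.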
In the following we refer to them as period-$m$ orbits and denote their period by $T_{m{\rm p}}$. \hspace*{0.3cm} From the periodic orbits found in the weakly turbulent regime, we select orbits with periods $1$ up to $5$ and continue them down to $\nu=0.0035$. For continuation of the periodic orbits, we use the arc-length method, a prediction-correction method which requires solving an equation similar to Eq. (\ref{fixed}) at each continuation step. The most time-consuming part of this algorithm is the computation of derivatives $\mbox{D}_{{\bm{\scriptstyle y}},\nu}\mathcal{P}_{\nu}$ of the Poincar\'e map with respect to the $(n-1)$ components of $\bm{y}$ and $\nu$. Finite differencing is employed for the derivatives so that for each Newton-Raphson iteration we have to run $(n+1)$ integrations, which can conveniently be done in parallel. We use $128$ processors simultaneously on a Fujitsu GP7000F900 parallel computer. The computation of one iteration of the Poincar\'e map and its derivatives takes about $25$ minutes of CPU time on each processor. The average step size in the parameter is $\Delta \nu \approx 0.00004$ and about three Newton-Raphson iterations are taken at each continuation step before the residue is smaller than $10^{-9}$ in the enstrophy norm. This brings the total computation time for continuation of a period one ($m=1$) orbit down to $\nu=0.0035$ to about $31$ hours. Note that there is no guarantee that an orbit can be continued all the way. In fact, about half the continuations we ran ended in a bifurcation point before reaching the maximal micro-scale Reynolds number. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\textwidth]{EDRnu.eps} \end{center} \scaption{Energy-dissipation rate averaged over the periodic orbits as a function of viscosity. The label $m$p of the individual curves indicates an orbit corresponding to a fixed point of $\mathcal{P}^{\ m}_{\nu}$ and having a period roughly equal to $mT_{\rm{\scriptscriptstyle R}}$. 
The dotted line denotes the values in the turbulent state.} \label{continuation} \end{figure} \hspace*{0.3cm} It is our primary concern to find out whether the periodic orbits may represent the turbulent state or not. For this purpose we compute the mean energy-dissipation rate $\overline{\epsilon}$, averaged along the periodic orbits at each point on the continuation curve, and compare these values to that of the turbulent state. As seen in the preceding section, the time-averaged energy-dissipation rate $\overline{\epsilon}$ tends to saturate around $0.1$ in the turbulent state for $\nu < 0.004$ (see Fig. \ref{turbulent1}). In Fig. \ref{continuation}, we compare $\overline{\epsilon}$ averaged over the periodic motion to that for the turbulent state. Clearly, the values given by the short periodic orbits diverge from that of the turbulent state, decreasing monotonically with viscosity. The value produced by the period-5 orbit, however, stays close to the one found for the turbulent state. \hspace*{0.3cm} The period-2 orbit is the only one found at a somewhat lower viscosity, $\nu=0.004$, by the method described above. At the time of writing, the continuation curves for the period-3 and period-4 solutions were incomplete. These continuations are currently running on a shared memory system which is considerably slower than the 128 CPU parallel machine. \section{Embedded periodic motion} \label{sec:embedded} \hspace*{0.3cm} The results of the preceding section suggest that the period-5 orbit represents the turbulent state. We now analyse the properties of this orbit for $\nu=0.0035$ in detail. \subsection{Structure in Phase Space} \label{subsec:structure} \begin{figure}[h] \begin{center} \includegraphics[width=1.0\textwidth]{eineout_BW.eps} \end{center} \scaption{The period-5 orbit and the probability density function of the turbulent state projected on the ($e$, $\epsilon$)-plane at $\nu=0.0035$.
The periodic orbit of period $T_{5{\rm p}}=4.91T_{\rm{\scriptscriptstyle R}}$ is represented by a closed curve with solid ($t/T_{\rm{\scriptscriptstyle R}}<3$) and dashed ($t/T_{\rm{\scriptscriptstyle R}}>3$) parts in the low-activity and high-activity periods, respectively. Dots are attached at every $0.1 T_{\rm{\scriptscriptstyle R}}$. Numbers indicate the time in units of $T_{\rm{\scriptscriptstyle R}}$. Contours of the probability density function are drawn at $80\%$ of its peak value and successive factors of $0.5$ with larger values in darker areas. On the axes the energy input rate and dissipation rate are shown as deviations from their temporal mean normalised by their standard deviation in turbulence.} \label{eineout} \end{figure} \hspace*{0.3cm} It is impossible to show how close this orbit is to the turbulent state in the $n$-dimensional phase space, but we can get an impression by looking at its projection on the two-dimensional $(e, \epsilon)$-plane spanned by the energy-input rate and the energy-dissipation rate. In Fig. \ref{eineout}, we plot this projection of the orbit for $\nu=0.0035$ by a closed curve with dots at every $0.1T_{\rm{\scriptscriptstyle R}}$. The solid and dashed parts of the curve respectively indicate the low-activity and high-activity periods described below. Numbers attached to the orbit are measured from an arbitrary reference time near the beginning of the low-activity period and normalised by the return time $T_{\rm{\scriptscriptstyle R}}$. Contours of grey scale show the probability density function of the turbulent state with larger values in darker areas. Both axes are normalised by the standard deviation around the temporal mean of the respective quantities in turbulence. \hspace*{0.3cm} This figure has several interesting features. First, the probability density function of the turbulent state is slightly skewed towards high $e$ and high $\epsilon$, and the peak is located at the lower-left side of the origin.
This is due to bursting events in which anomalous amounts of kinetic energy are injected and dissipated. The periodic orbit makes a large excursion to high $\epsilon$ corresponding to such a burst. Secondly, the distance of the periodic orbit from the origin remains of the order of the standard deviations of turbulence. This is consistent with the picture that this orbit is embedded in the turbulent state. In fact, both the mean values and the standard deviations of $e(t)$ and $\epsilon(t)$ are strikingly close for the turbulence and the periodic motion; namely, they are $\overline{e}=\overline{\epsilon}=0.0998$, $\sigma_{e}=0.0352$, $\sigma_{\epsilon}=0.0155$ for the former and $\overline{e}^{5{\rm p}}=\overline{\epsilon}^{5{\rm p}}=0.107$, $\sigma_{e}^{5{\rm p}}=0.0348$, $\sigma_{\epsilon}^{5{\rm p}}=0.0141$ for the latter. The standard deviation of $e(t)$ is larger than that of $\epsilon(t)$ by about a factor of $2$. The magnitude of fluctuations of the present turbulence is about 35\% of the mean values in the energy-input rate ($\sigma_{e}/\overline{e}=0.353$), and 16\% in the energy-dissipation rate ($\sigma_{\epsilon}/\overline{\epsilon}=0.156$). Thirdly, although the trajectory of the periodic orbit is not simple, we can see that it generally rotates counter-clockwise. In other words, peaks of $\epsilon(t)$ come after those of $e(t)$, which is consistent with the picture of energy cascade to larger wavenumbers (see Fig. \ref{kshells}). Fourthly, the orbit may be divided into two periods. During the first period (solid line) of about $3T_{\rm{\scriptscriptstyle R}}$, $e(t)$ and $\epsilon(t)$ are near or below their mean values. This is the period of low activity. It is followed by a period of high activity (dashed line) of about $2T_{\rm{\scriptscriptstyle R}}$. Thus, the transitions between the low-activity and the high-activity phase take place on a time scale $T_{\rm{\scriptscriptstyle T}}$, the large-eddy-turnover time.
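The statement that peaks of $\epsilon(t)$ trail those of $e(t)$ can be checked numerically by locating the lag that maximises the cross-correlation of the two signals. The following sketch is purely illustrative: it uses sinusoidal stand-ins for the data, and the function name `peak_lag` is ours.

```python
import numpy as np

def peak_lag(a, b):
    """Lag (in samples) at which b best matches a, from the circular
    cross-correlation of the mean-removed signals; a positive value
    means the peaks of b trail those of a."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real
    k = int(np.argmax(corr))
    return k if k <= len(a) // 2 else k - len(a)

# Synthetic signals: the "dissipation" trails the "input" by 10 samples.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
e = np.sin(t)
eps = np.roll(e, 10)
lag = peak_lag(e, eps)          # -> 10
```

Applied to sampled $e(t)$ and $\epsilon(t)$, a positive lag would quantify, in samples, the delay introduced by the cascade.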
Dots are drawn on the periodic orbit at equal time intervals so that we can get an impression of the speed of the state point along the orbit, which tends to be higher during the high-activity phase. \begin{figure}[t] \begin{center} (a) \includegraphics[width=0.7\textwidth]{epse5p.eps} \vspace*{1.0cm} (b) \includegraphics[width=0.7\textwidth]{en5p.eps} \end{center} \scaption{Energy characteristics of the period-5 orbit. Temporal variations of (a) energy-input rate $e(t)$ (solid line) and energy-dissipation rate $\epsilon(t)$ (dashed line) and (b) energy $\mathcal{E}(t)$ (solid line) and $\epsilon(t)$ (dashed line), the latter normalised to have the same time mean. The horizontal lines indicate the mean values of the respective quantities. The abscissa is the time normalised by $T_{\rm{\scriptscriptstyle R}}$. } \label{fig:time-series} \end{figure} \hspace*{0.3cm} The time series of $e(t)$ and $\epsilon(t)$, shown in Fig. \ref{fig:time-series}(a), is another representation of the periodic orbit. We see that this periodic motion is composed of five enhanced actions of energy input and dissipation every $T_{\rm{\scriptscriptstyle R}}$. The input rate is stronger in amplitude than the dissipation rate. The oscillation phase is anti-correlated between the two. It is clearly seen from the behaviour of $\epsilon(t)$ that the periods of low activity and high activity are the intervals of $t/T_{\rm{\scriptscriptstyle R}}<3$ and $t/T_{\rm{\scriptscriptstyle R}}>3$, respectively. Both $\epsilon(t)$ and $e(t)$ oscillate around lower (or higher) values than their mean values (denoted by the horizontal line) in the former (or latter) interval. In Fig. \ref{fig:time-series}(b) is shown the time series of energy $\mathcal{E}(t)$, the time-derivative of which is equal to the difference $e(t)-\epsilon(t)$. For comparison, the time series of $\epsilon(t)$ is also plotted after shifting and scaling appropriately.
It is interesting that the energy and energy-dissipation rate change quite similarly and that the peaks of the former slightly precede those of the latter. Furthermore, comparison with Fig. \ref{fig:time-series}(a) tells us that peaks of $e(t)$ precede those of $\mathcal{E}(t)$. This order of the peaks represents the energy cascade process. \hspace*{0.3cm} Another convenient projection to capture the structure of the periodic orbit in the phase space is given by taking an arithmetic average of the square of those components of \{$\widetilde{\bm{\omega}}(\bm{k})$\} that have the same magnitude of wavenumber $|\bm{k}|$. This is nothing but the enstrophy spectrum, identical to the energy spectrum multiplied by the wavenumber squared. Among others, the one-dimensional longitudinal energy spectrum can readily be compared to laboratory experiments. In Fig. \ref{1Dspectra2}, we plot the time-averaged energy spectrum of our simulations with open circles for turbulence and with pluses for the period-5 motion. For comparison, also shown are the laboratory data in shear flow at $R_{\lambda}=130$ with solid circles \cite{champ} and the asymptotic form at the infinite Reynolds number derived theoretically with a solid line \cite{KiGo97}. It is remarkable that the data of the periodic motion and turbulence agree with each other almost perfectly, providing further support for the closeness in phase space of this periodic motion and turbulence. The data nicely collapse onto a single curve beyond the energy containing range, which shows that the agreement between the spectra of periodic and turbulent motion is not an artifact of the high symmetry of the present numerical flow. \begin{figure}[h] \begin{center} \includegraphics[width=0.9\textwidth]{1Dcompare2.eps} \end{center} \scaption{One-dimensional longitudinal energy spectrum. Open circles and pluses represent respectively the turbulent state and the period-5 motion in high-symmetric flow at $R_{\lambda}=67$.
Solid circles denote experimental data at $R_{\lambda}=130$ taken from \citet{champ}. The solid line represents the asymptotic form at $R_{\lambda}\rightarrow \infty$ derived theoretically by the sparse direct-interaction approximation \cite{KiGo97}. The axes are normalised according to the Kolmogorov scaling.} \label{1Dspectra2} \end{figure} \hspace*{0.3cm} So far we have seen that the period-5 motion reproduces the temporal mean energy spectrum of the turbulent state remarkably well. A yet more detailed comparison between the period-5 motion and turbulence is provided by the temporal evolution of the energy spectral function. In Fig. \ref{kshells}(a), we show the time series of the three-dimensional energy spectral function calculated as \begin{equation} E(k,t)=\frac{1}{2} \sum_{k-\frac{1}{2}\le |\bm{\scriptstyle k}'|<k+\frac{1}{2}} |\widetilde{\bm{u}}(\bm{k}',t)|^2. \label{eq:3D} \end{equation} In order to emphasize the fluctuations, the departure from the temporal mean, normalised by the standard deviation of the spectrum, is plotted by contours with positive parts shaded. The abscissa, the wavenumber normalised by the Kolmogorov length, is scaled logarithmically to illuminate the cascade process, thought to be a series of breakdowns of coherent vortical structures into parts about half their size. \hspace*{0.3cm} It is not straightforward to compare the pattern of $E(k,t)$ of the periodic orbit shown in Fig. \ref{kshells}(a) to that of turbulence because we do not know {\it a priori} which parts of a turbulent time sequence it should be compared with. We can, however, select portions of a time series of turbulence that are close to the periodic orbit, as will be explained in the next subsection. In Fig. \ref{kshells}(b), we show such a portion of a time series of $E(k,t)$ of turbulence over the same time interval as the periodic motion. The time variable is the same as in Figs. \ref{eineout} and \ref{fig:time-series}. The pattern of the energy spectrum of Figs.
\ref{kshells}(a) and (b) is remarkably similar, which adds to the evidence that this period-5 orbit represents the turbulent state well. Note especially the inclination and mutual spacing of the streaks which show the cascade process and the relative duration of the \begin{figure}[tbp] \begin{center} (a) \includegraphics[width=0.9\textwidth]{ene_p.eps} \vspace{0.5cm} (b) \includegraphics[width=0.9\textwidth]{ene_t.eps} \end{center} \scaption{Temporal evolution of energy spectrum. The excess from the temporal mean normalised by the standard deviation of the three-dimensional energy spectrum $E(k,t)$ is shown by contours at the levels of $0$, $\pm 0.25$, $\pm 0.5$, $\pm 1$, $\pm 2$, the positive parts being shaded. The abscissa is the logarithm of the wavenumber normalised by the Kolmogorov length $\eta$ and the ordinate is the time normalised by $T_{\rm{\scriptscriptstyle R}}$. (a) Period-5 orbit. (b) Fragment from a turbulent time series. The tilted streaks represent anomalies cascading into the dissipation range.} \label{kshells} \end{figure} \noindent low-activity period ($t/T_{\rm{\scriptscriptstyle R}}<3$) and the high-activity period ($t/T_{\rm{\scriptscriptstyle R}}>3$). Two wide streaks at larger wavenumbers ($k\eta > 0.4$) in the later phase of the periodic motion correspond to the excursion to high $\epsilon$ seen in Fig. \ref{eineout}, whereas the other, narrower streaks in the earlier phase correspond to the slower excursion. Initial conditions corresponding to other local minima give similar pictures. See \citet{kida3} for a detailed discussion on the energy dynamics at a larger Reynolds number $R_{\lambda}=186$.
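The shell sum of Eq. (\ref{eq:3D}) amounts to binning Fourier modes by the nearest integer to $|\bm{k}'|$ and accumulating half their squared amplitudes. A minimal sketch, assuming an integer wavenumber grid and made-up array names (not the actual simulation arrays):

```python
import numpy as np

def shell_spectrum(u_hat):
    """Shell-summed 3D energy spectrum in the style of Eq. (eq:3D):
    E(k) = 1/2 * sum over modes with k-1/2 <= |k'| < k+1/2 of |u_hat|^2.
    u_hat: complex array (3, N, N, N) of velocity Fourier coefficients."""
    N = u_hat.shape[1]
    k1d = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.rint(kmag).astype(int)           # nearest-integer shell
    energy = 0.5 * np.sum(np.abs(u_hat) ** 2, axis=0)
    return np.bincount(shells.ravel(), weights=energy.ravel())

# Toy check: a single mode at |k| = 2 puts all its energy in shell 2.
N = 8
u_hat = np.zeros((3, N, N, N), dtype=complex)
u_hat[0, 2, 0, 0] = 1.0
E = shell_spectrum(u_hat)
```

Repeating this at every stored time gives the $E(k,t)$ pattern plotted in Fig. \ref{kshells}.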
\subsection{Periodic motion as the skeleton of turbulence} \hspace*{0.3cm} Motivated by the speculation that the turbulent state approaches the periodic orbit frequently, we introduce a measure of closeness by the `distance' $D(t)$, in the $(e,\epsilon)$ plane, between the periodic orbit and a finite, turbulent time sequence of length $T_{\rm{\scriptscriptstyle T}}$ as \begin{eqnarray} D(t)^2=\frac{1}{2T_{\rm{\scriptscriptstyle T}}} \min_{0\leq t^* <T_{5\rm{\scriptscriptstyle p}}} \int_{-\frac{1}{2}T_{\rm{\scriptscriptstyle T}}} ^{\frac{1}{2}T_{\rm{\scriptscriptstyle T}}}&& \Big[\frac{1}{\sigma_{\epsilon}^2} (\epsilon^{5{\rm p}}(t^*+t')-\epsilon(t+t'))^2\nonumber\\[0.2cm] &&+\frac{1}{\sigma_{e}^2}(e^{5{\rm p}}(t^*+t') -e(t+t'))^2\Big]\mbox{d}t', \label{D} \end{eqnarray} \vspace*{0.2cm} \begin{figure}[h] \begin{center} \includegraphics[width=0.8\textwidth]{dist.eps} \end{center} \scaption{Temporal variation of the distance $D(t)$ between the turbulent state and the period-5 motion over an arbitrary time interval. The horizontal lines denote the time mean $\overline{D}$ (solid) and $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ (dashed). The abscissa is the time normalised by the large-eddy-turnover time $T_{\rm{\scriptscriptstyle T}}$. Observe that sharp minima appear at intervals of $O(T_{\rm{\scriptscriptstyle T}})$. } \label{fig:distance} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.75\textwidth]{close_shaded.eps} \end{center} \scaption{Frequency distribution of the approach time of the turbulent state to the period-5 motion. The histograms of the time during which $D(t) < \overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ and $D(t) < \overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ are shown by white and grey steps, respectively. The area of the former histogram is normalised to unity.
The mean values of the approach time are $1.33 T_{\rm{\scriptscriptstyle R}}$ ($=0.66T_{\rm{\scriptscriptstyle T}}$) and $0.91 T_{\rm{\scriptscriptstyle R}}$ ($=0.45 T_{\rm{\scriptscriptstyle T}}$) for the respective cases. } \label{fig:approach-time} \end{figure} \noindent where $\epsilon^{5{\rm p}}(t)$ and $e^{5{\rm p}}(t)$ denote the energy-dissipation rate and energy-input rate along the orbit, respectively. This distance is normalised such that, if we replace $\epsilon^{5{\rm p}}(t)$ and $e^{5{\rm p}}(t)$ by their respective mean values in the turbulent state, the temporal mean of $D(t)$ is unity. A time series of $D(t)$ taken from a long integration is shown in Fig. \ref{fig:distance} together with the temporal mean $\overline{D}$ ($=0.776$) and the temporal mean minus the standard deviation $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ ($=0.535$). It can be seen that $D(t)$ takes sharp minima about once per period of $O(T_{\rm{\scriptscriptstyle T}})$, implying that the turbulent state approaches the period-5 orbit at intervals of about one large-eddy-turnover time. \hspace*{0.3cm} In order to discuss the approach frequency of the turbulent state to the period-5 orbit quantitatively, we consider the statistics of intersections between $D(t)$ and the two horizontal lines. We may say that the turbulent state is located within $\overline{D}$ (or $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$) distance from the periodic orbit when $D(t) < \overline{D}$ (or $D(t) < \overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$). The intervals between two consecutive intersection times with $D(t)$ below the horizontal lines are called the approach times, which are regarded as the periods when the turbulent state is close to the periodic orbit. The approach time differs depending on the threshold distance. In Fig.
\ref{fig:approach-time}, we plot their histograms, obtained from a time series of about $7,000$ non-normalised time units, for two threshold distances, $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ (white steps) and $\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ (grey steps). The area is normalised to be unity for the former histogram. The mean approach times are $1.33 T_{\rm{\scriptscriptstyle R}}$ ($0.66 T_{\rm{\scriptscriptstyle T}}$) and $0.91 T_{\rm{\scriptscriptstyle R}}$ ($0.45 T_{\rm{\scriptscriptstyle T}}$) for the respective thresholds, implying that the turbulent state is likely to stay around the period-5 orbit over a time of $\text{O}(T_{\rm{\scriptscriptstyle T}})$ every time it approaches. \hspace*{0.3cm} How frequently the turbulent state approaches the periodic orbit may be measured by the time intervals between consecutive approach periods, which we call the approach interval. Note that this measure is more appropriate than counting the local minimum times of $D(t)$ in Fig. \ref{fig:distance} because two or more minima may occur in one approach interval. In Fig. \ref{fig:approach-interval}, we plot the histograms of the approach interval constructed using the same data as for Fig. \ref{fig:approach-time}. Again, the white and grey steps indicate the histograms for the threshold distances of $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ and $\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$, respectively. The area is normalised to be unity for the former one. The mean approach intervals are $5.8 T_{\rm{\scriptscriptstyle R}}$ ($=2.9 T_{\rm{\scriptscriptstyle T}}$) and $28.0 T_{\rm{\scriptscriptstyle R}}$ ($=14.0 T_{\rm{\scriptscriptstyle T}}$) for the respective thresholds.
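All of these statistics rest on the phase-minimised distance (\ref{D}), whose discrete analogue is easy to state: slide the periodic signals in phase against a turbulent window and keep the minimum rms mismatch. The sketch below is illustrative only; it uses synthetic data, our own names, and for simplicity a window equal to one period rather than $T_{\rm{\scriptscriptstyle T}}$.

```python
import numpy as np

def orbit_distance(e_turb, eps_turb, e_p, eps_p, sig_e, sig_eps):
    """Discrete analogue of Eq. (D): minimise over the phase shift t*
    the mean squared mismatch of the normalised signals, then take
    the square root (the factor 1/2 mirrors the 1/(2 T_T) prefactor)."""
    best = np.inf
    for shift in range(len(e_p)):                # minimise over phase t*
        ep = np.roll(e_p, shift)
        xp = np.roll(eps_p, shift)
        d2 = np.mean(((xp - eps_turb) / sig_eps) ** 2
                     + ((ep - e_turb) / sig_e) ** 2) / 2.0
        best = min(best, d2)
    return np.sqrt(best)

# Toy check: a turbulent window that is a phase-shifted copy of the
# periodic signals has zero distance.
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
e_p, eps_p = np.sin(t), np.cos(t)
D = orbit_distance(np.roll(e_p, 7), np.roll(eps_p, 7), e_p, eps_p, 1.0, 1.0)
```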
This result tells us that the turbulent state approaches the period-5 orbit about once every few eddy-turnover times (or about the period of this periodic orbit) within a distance of $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ and that a closer approach within a distance of $\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ is observed much less frequently, i.e. about once every $14$ eddy-turnover times. \begin{figure}[t] \begin{center} \includegraphics[width=0.75\textwidth]{far_shaded.eps} \end{center} \scaption{Frequency distribution of approach intervals of the turbulent state to the period-5 motion. The histograms of the time interval between midpoints of segments with $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ and $\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ are shown by white and grey steps, respectively. The area of the former histogram is normalised to unity. The mean values of the interval are $5.8T_{\rm{\scriptscriptstyle R}}$ ($=2.9T_{\rm{\scriptscriptstyle T}}$) and $28.0 T_{\rm{\scriptscriptstyle R}}$ ($=14.0 T_{\rm{\scriptscriptstyle T}}$) for the respective segments. } \label{fig:approach-interval} \end{figure} \hspace*{0.3cm} Which parts of the periodic orbit is the turbulent state likely to approach more frequently? This information is provided by the phase time $t^*$ that defines the distance $D(t)$, i.e. that gives the minimum value of the integral in (\ref{D}). In Fig. \ref{fig:phase}, we show the histograms of the phase time of approach of the turbulent state for threshold distances of $\overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ (white steps) and $\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ (grey steps). The area is normalised to be unity for the former histogram. For comparison, the PDF of realisation of the turbulent state along the period-5 orbit is drawn with a dotted curve. This density is obtained by integrating the PDF shown in Fig.
\ref{eineout} over a small neighbourhood (a disk with a radius much smaller than the standard deviations $\sigma_{e}$ and $\sigma_{\epsilon}$) of a given point on the period-5 orbit and multiplying by the local speed of the state point. It is interesting that the approach phase is localised in the low-activity period ($t^*/T_{\rm{\scriptscriptstyle R}}< 3$), but hardly observed in the high-activity period ($t^*/T_{\rm{\scriptscriptstyle R}}> 3$). This tendency of non-uniform approach suggests that the movement of the state point of turbulence may be more violent in the high-activity period than in the low-activity period. The stability characteristics, in the phase space, of the state point will be examined by the local Lyapunov analysis in the next subsection. Incidentally, the totally different behaviour between the histograms (steps) of the approach phase and the existing probability (dotted curve) of the turbulent state indicates that the non-uniformity of the approach phase may be due to that of the dynamical properties along the periodic orbit. \begin{figure}[t] \begin{center} \includegraphics[width=0.75\textwidth]{phase_shaded.eps} \end{center} \scaption{Frequency distribution of the phase of the period-5 motion at which the turbulent state approaches. The histograms of the phase at the mid-point of segments with $D(t) < \overline{D}-\sigma_{\rm{\scriptscriptstyle D}}$ and $D(t) < \overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ are shown by white and grey steps, respectively. The area of the former histogram is normalised to unity. The dotted curve indicates the existing probability of the turbulent state on the period-5 orbit projected on the $(e, \epsilon)$-plane. } \label{fig:phase} \end{figure} \hspace*{0.3cm} In order to get an idea of what the turbulent orbit looks like when it is approaching the period-5 motion, we show in Fig.
\ref{fig:close} such segments of length $T_{\rm{\scriptscriptstyle T}}$ that satisfy $D(t)<\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ in the $(e, \epsilon)$ plane, selected arbitrarily from a long turbulent orbit. Observe how well the turbulent state is attracted around the period-5 motion, though such clear examples are not so frequent, occurring only about once every $24T_{\rm{\scriptscriptstyle T}}$. \begin{figure}[h] \begin{center} \includegraphics[width=0.48\textwidth]{5p_tur_close_1.eps} \includegraphics[width=0.48\textwidth]{5p_tur_close_2.eps} \end{center} \scaption{Turbulent orbits close to the period-5 motion. Such segments of length $T_{\rm{\scriptscriptstyle T}}$ that satisfy $D(t)<\overline{D}-1.5\sigma_{\rm{\scriptscriptstyle D}}$ in the $(e, \epsilon)$ plane are selected arbitrarily from a long turbulent orbit, and four examples are drawn in the respective figures. The dotted closed line indicates the period-5 orbit. } \label{fig:close} \end{figure} \vspace*{0.5cm} \subsection{Lyapunov characteristics} \label{subsection:lyapunov} \hspace*{0.3cm} Lyapunov exponents describe the growth or decay of perturbations with respect to a given reference solution of a dynamical system. The Lyapunov exponents are a cornerstone of chaos theory. If at least one exponent is positive, corresponding to a growing perturbation, the system is chaotic: there is sensitive dependence on initial conditions and a strange attractor with a fractal dimension. The rate at which information about the initial condition is lost and the dimension of the chaotic attractor can be computed from the Lyapunov exponents. Although turbulence can be regarded as a form of high-dimensional chaos, its Lyapunov characteristics are far from understood. Here, we will investigate the Lyapunov characteristics of isotropic turbulence by means of the embedded periodic solution.
\hspace*{0.3cm} The Navier-Stokes equation (\ref{NS}) can be written symbolically as \begin{equation} \frac{\rm d}{{\rm d}t}{\bm{x}}=\bm{f}(\bm{x},\nu), \label{symb} \end{equation} where $\bm{x}\in \mathbb{R}^n$ is a vector that holds the Fourier transform of the vorticity, $\widetilde{\bm{\omega}}$, and $\bm{f}$ denotes the right-hand side of Eq. (\ref{NS}). The linearised equations are then given by \begin{equation} \frac{\rm d}{{\rm d}t}{\bm{v}}=\bm{J}\bm{v}, \label{symb2} \end{equation} where $\bm{v}$ denotes a perturbation vorticity field $\delta\widetilde{\bm{\omega}}$, and $\bm{J}$ is the Jacobian matrix, i.e. $J_{ij}=\partial f_{i}(\bm{x}(t),\nu)/\partial x_j$. The average rate of growth or decay of a perturbation is measured by the Lyapunov exponent \begin{equation} \itLambda = \lim_{t\rightarrow \infty} \frac{1}{2t}\ln \frac{\| \bm{v}(t)\|_{\rm{\scriptscriptstyle{Q}}}}{\|\bm{v}(0)\|_{\rm{\scriptscriptstyle{Q}}}}, \label{defLambda} \end{equation} where $\|\cdot\|_{\rm{\scriptscriptstyle{Q}}}$ again denotes the enstrophy norm and a factor of $1/2$ is included because this norm is quadratic. \hspace*{0.3cm} In general, the value of $\itLambda$ depends on the reference solution $\bm{x}(t)$ and on the initial perturbation $\bm{v}(0)$. However, in systems with a chaotic attractor the Oseledec theorem guarantees that there is a spectrum of limit values, $\{\itLambda_{i}\}_{i=1}^{n}$, unique for the attractor. At {\em almost every} initial point $\bm{x}(0)$ there are $n$ initial perturbations $\bm{v}_{i}(0)$ such that $\itLambda_i = \lim_{t\rightarrow \infty} (1/2t)\ln \left( \| \bm{v}_i(t)\|_{\rm{\scriptscriptstyle{Q}}}/ \|\bm{v}_i(0)\|_{\rm{\scriptscriptstyle{Q}}}\right)$. The vectors $\bm{v}_{i}(t)$ are called the Lyapunov vectors and depend on the initial point in a complicated manner. The Oseledec theorem holds for {\em almost every} initial point in the basin of attraction of the chaotic attractor in a measure theoretic sense. 
This means that starting from any generic initial condition we will find the same Lyapunov spectrum, but for certain special initial points the spectrum may differ. Examples of such special initial points are points lying on periodic solutions. \hspace*{0.3cm} The Lyapunov spectrum of chaotic motion can be used to measure the `strength' of the chaos or the complexity of the motion. Suppose that the Lyapunov exponents are ordered such that $\itLambda_{1}>\itLambda_{2}>\ldots>\itLambda_{n}$, then the Kolmogorov-Sinai entropy is defined by \begin{equation} H_{\rm{\scriptscriptstyle KS}} =\sum_{i=1}^{k}\itLambda_{i}, \qquad \text{where}\ \ \itLambda_{k}>0 \ \text{and} \ \ \itLambda_{k+1}<0 , \label{KSentropy} \end{equation} i.e. the sum of positive Lyapunov exponents, and the Kaplan-Yorke dimension is defined by \begin{equation} D_{\rm{\scriptscriptstyle KY}} =k +\frac{1}{|\itLambda_{k+1}|}\sum_{i=1}^{k}\itLambda_{i}, \qquad \text{where}\ \ \sum_{i=1}^{k}\itLambda_{i}>0 \ \text{and} \ \ \sum_{i=1}^{k+1}\itLambda_{i}<0 . \label{KYdimension} \end{equation} One way to interpret these definitions is to realise that a volume element contained in the subspace spanned by any number of Lyapunov vectors will grow or decay at a rate given by the sum of the corresponding Lyapunov exponents. Thus, the Kolmogorov-Sinai entropy is the maximal rate of expansion for any volume element. It quantifies the unpredictability of the dynamics. The Kaplan-Yorke dimension can be thought of as the dimension of the chaotic attractor. Its integer part is the dimension of the largest volume element that will grow in time, and the fractional part is added to render the function continuous in the Lyapunov exponents. \hspace*{0.3cm} The numerical computation of more than only the leading Lyapunov exponent is troublesome in systems with many degrees of freedom. 
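For the leading exponent alone, the standard recipe is simple: advance a tangent vector with the local Jacobian and renormalise it at every step. A toy sketch on the H\'enon map (a stand-in for the flow, with our own function names; the factor $1/2$ of Eq. (\ref{defLambda}) is absent here because the ordinary Euclidean norm is used):

```python
import numpy as np

def leading_lyapunov(f, jac, x0, n_steps=20000, n_skip=100):
    """Leading Lyapunov exponent of a map: push a tangent vector with
    the Jacobian and renormalise it each step (Benettin's method)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_skip):                  # relax onto the attractor
        x = f(x)
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n_steps):
        v = jac(x) @ v                       # tangent dynamics at x_n
        x = f(x)
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
    return total / n_steps

# Henon map with a = 1.4, b = 0.3; the known exponent is about 0.42.
a, b = 1.4, 0.3
f = lambda z: np.array([1.0 - a * z[0] ** 2 + z[1], b * z[0]])
jac = lambda z: np.array([[-2.0 * a * z[0], 1.0], [b, 0.0]])
lam1 = leading_lyapunov(f, jac, [0.1, 0.1])
```

Computing several exponents requires evolving several such vectors and re-orthogonalising them, which is where the difficulties discussed next arise.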
The algorithms at hand require simultaneous integration of several perturbation vectors and the repeated application of Gram-Schmidt orthogonalisation (see e.g. \citet{wolf}). This introduces numerical error, especially when applied to truncations of the Navier-Stokes equation with small amplitude fluctuations in the large wavenumber components. The limit in Eq.(\ref{defLambda}) has to be replaced by an average of the growth rate over a finite time interval, and the convergence of $\itLambda$ with time can be rather slow, of order $\mbox{O}(1/\sqrt{t})$. Consequently, early attempts to compute a part of the Lyapunov spectrum for turbulent flows were restricted to simulations at low resolution. Results for isotropic turbulence \cite{grappin} and shear turbulence \cite{keefe} indicate that the Kaplan-Yorke dimension is at least of order $\mbox{O}(100)$ even at low Reynolds number. \hspace*{0.3cm} In simulations at high resolution, the computation of a few hundred Lyapunov exponents is hard if not impossible. One way around this problem is to inspect the {\em local} rather than the {\em time average} growth rates. The local Lyapunov exponent can be defined by \begin{equation} \lambda(t)=\frac{1}{2}\frac{\mbox{d}}{\mbox{d}t}\ln \|\bm{v}(t)\|_{\rm{\scriptscriptstyle{Q}}} , \label{deflocal} \end{equation} such that, taking the time average, we have $\overline{\lambda}=\itLambda$. The evolution of the Lyapunov vectors and the associated local Lyapunov exponents was studied in the case of weakly turbulent Taylor-Couette flow by \citet{vastano}. They managed to tie the local Lyapunov exponents and vectors to physical instabilities in the transition to chaotic behaviour. \hspace*{0.3cm} In the same spirit we seek to investigate the Lyapunov characteristics of developed isotropic turbulence. For this purpose we use the period-5 orbit as the reference solution. The choice of a periodic reference solution greatly facilitates the analysis.
Let $\bm{x}(t)$ be a solution of Eq.(\ref{symb}) such that $\bm{x}(t+T)=\bm{x}(t)$ for all $t$ and some period $T$. By Floquet theory the solution of Eq.(\ref{symb2}) can then be written as \begin{equation} \bm{v}(t)=\bm{M}(t)\mbox{e}^{\bm{\scriptstyle A}t}\bm{v}(0), \label{Floquet} \end{equation} where $\bm{M}(t)=\bm{M}(t+T)$ is a periodic matrix satisfying $\bm{M}(0)=\mathbb{I}$ (unit matrix), and $\bm{A}$ is a constant matrix. Thus we find that the Lyapunov spectrum $\{\itLambda_i,\bm{v}_{i}(t)\}$ is determined by the eigenspectrum $\{\mu_i,\bm{w}_i\}$ of $\bm{A}$. For each real eigenvalue $\mu_i$ we have $\itLambda_i=\mu_i$, $\bm{v}_i(0)=\bm{w}_i$, and for the local exponent we find \begin{equation} \lambda_i (t)=\itLambda_i + \frac{1}{2}\frac{\mbox{d}}{\mbox{d}t}\ln \|\bm{M}(t)\bm{v}_i(0)\|_{\rm{\scriptscriptstyle{Q}}}. \label{locallyap} \end{equation} For each complex pair $\{\mu_i,\mu_{i+1}\}$ we have $\itLambda_i=\itLambda_{i+1}=\text{Re}\left(\mu_i\right)$, $\bm{v}_i(0)=\text{Re}\left(\bm{w}_i\right)$, $\bm{v}_{i+1}(0)=\text{Im}\left(\bm{w}_i\right)$ and \begin{equation} \lambda_i (t)=\lambda_{i+1}(t)=\itLambda_i + \frac{1}{2}\frac{\mbox{d}}{\mbox{d}t}\ln \left( \|\bm{M}(t)\bm{v}_i(0)\|_{\rm{\scriptscriptstyle{Q}}} +\|\bm{M}(t)\bm{v}_{i+1}(0)\|_{\rm{\scriptscriptstyle{Q}}} \right). \end{equation} \hspace*{0.3cm} The matrix $\mbox{e}^{\bm{\scriptstyle A}T}$ is computed in much the same way as we computed the matrix of derivatives of the Poincar\'e map as described in section \ref{sec:periodic}. We then solve the eigenvalue problem to find any number of Lyapunov exponents and vectors. In order to find the $\lambda_i (t)$ we integrate the linearised Navier-Stokes equations along the period-5 orbit with the eigenvectors $\bm{v}_{i}(0)$ as initial condition. In this integration numerical errors in $\bm{v}_{i}(t)$ tend to grow as $\exp([\itLambda_1-\itLambda_i]t)$, which puts a limit to the number of local Lyapunov exponents we can compute. 
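The relation between the monodromy matrix and the Lyapunov spectrum used here can be sketched as follows (a minimal illustration under the assumption that $\mbox{e}^{\bm{\scriptstyle A}T}$ has already been computed; a complex eigenvalue pair automatically yields two equal exponents):

```python
import numpy as np

def lyapunov_from_monodromy(M, T):
    """Lyapunov exponents of a periodic orbit from its monodromy matrix
    M = exp(A*T): each eigenvalue mu of M gives Lambda = ln|mu| / T."""
    mu = np.linalg.eigvals(M)
    return np.sort(np.log(np.abs(mu)) / T)[::-1]
```

For instance, a diagonal monodromy matrix $\mathrm{diag}(e^{0.2T}, e^{-0.5T})$ returns the exponents $0.2$ and $-0.5$.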
The results presented below are based on analysis of the first $50$ exponents. The largest and smallest average exponents are $\itLambda_1=0.238$ and $\itLambda_{50}=-0.584$. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\textwidth]{LLE.eps} \end{center} \scaption{The largest Lyapunov exponent $\itLambda_1$ measured along the periodic orbits, labeled as in figure \ref{continuation}. The value found for turbulent motion is represented by the dotted line. } \label{LLEnu} \end{figure} \hspace*{0.3cm} As mentioned above, the Lyapunov spectrum of periodic motion is different from that of turbulent motion. However, as argued in the preceding subsections we consider the periodic motion as the skeleton of turbulence, and its qualitative properties as an approximation of the corresponding properties of turbulence. Thus the Lyapunov characteristics of the periodic motion are expected to be close to those of turbulence. A direct comparison is given by the leading Lyapunov exponent, which can be computed for the turbulent motion as described in \citet{kida3}. Fig. \ref{LLEnu} shows $\itLambda_1$ for the turbulent motion and for the five periodic orbits in the parameter range $0.0035<\nu<0.004$. As we saw when comparing the energy-dissipation rate of periodic and turbulent motion in Fig. \ref{continuation}, the period-5 orbit reproduces the values found for turbulent motion well, whereas the shorter periodic orbits deviate. At the time of writing, the continuation curves for the period-3 and period-4 orbits were incomplete. Further computations are in progress. The numerical values at $\nu=0.0035$ are $\itLambda_{1}=0.2$ and $\itLambda_{1}^{5\rm{p}}=0.238$ for the turbulent and the period-5 motion, respectively. \hspace*{0.3cm} As we know only the leading Lyapunov exponent for turbulent flow, we cannot directly estimate $H_{\rm{\scriptscriptstyle KS}}$ and $D_{\rm{\scriptscriptstyle KY}}$ for the chaotic attractor. 
For the period-5 motion we find that $H_{\rm{\scriptscriptstyle KS}}^{5\rm{p}}=0.992$ and $D_{\rm{\scriptscriptstyle KY}}^{5\rm{p}}=19.7$. Note that these values cannot be compared directly to those for general isotropic turbulence because we can only compute the contribution of perturbations that satisfy the high-symmetry constraints described in section \ref{sec:highsymm}. In the full phase space, without any symmetry constraints, these values are likely to be a factor of order $\mbox{O}(100)$ times higher. The {\it local} Kolmogorov-Sinai entropy $h_{\rm{\scriptscriptstyle KS}}(t)$ and {\it local} Kaplan-Yorke dimension $d_{\rm{\scriptscriptstyle KY}}(t)$ can be computed from the local Lyapunov exponents, substituting the $\lambda_i(t)$ for the $\itLambda_i$ in Eqs. (\ref{KSentropy}) and (\ref{KYdimension}). Thus, we get an impression of how the complexity of the flow changes with time. The graph is shown in Fig. \ref{KYSK}. Note that, strictly speaking, we can only compute a lower bound for the local quantities as we know only the leading 50 local Lyapunov exponents. However, $\lambda_{50}(t)$ is negative at all times and we expect only minor contributions, if any, from higher exponents. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\textwidth]{KYdimKSen.eps} \end{center} \scaption{The local Kaplan-Yorke dimension $d_{\rm{\scriptscriptstyle KY}}(t)$ (solid line) and the local Kolmogorov-Sinai entropy $h_{\rm{\scriptscriptstyle KS}}(t)$ (nondimensionalised by $T_{\rm{\scriptscriptstyle T}}$ and drawn with a dotted line) as computed from the leading 50 local Lyapunov exponents $\{\lambda_{i}(t)\}_{i=1}^{50}$ along the period-5 orbit. Note the rapid oscillations and large amplitude.
} \label{KYSK} \end{figure} \hspace*{0.3cm} The local Lyapunov exponents and the derived quantities $h_{\rm{\scriptscriptstyle KS}}(t)$ and $d_{\rm{\scriptscriptstyle KY}}(t)$ show large fluctuations on a time scale as short as the Kolmogorov dissipation time scale $\tau_{\eta}=\sqrt{\nu/\bar{\epsilon}}\approx 0.2$. Around $t/T_{\rm{\scriptscriptstyle R}}=4$ in the active phase identified in section \ref{subsec:structure}, $d_{\rm{\scriptscriptstyle KY}}(t)$ jumps from $0$ to a wide maximum larger than $50$ and back. This peak coincides with the dominant peak of the energy-dissipation rate. The second wide maximum lies around $t/T_{\rm{\scriptscriptstyle R}}=4.9$ and coincides with a peak of the energy-input rate. Thus it seems that the local Lyapunov exponents and the complexity of the flow are correlated to physical, spatial mean quantities. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\textwidth]{correlation.eps} \end{center} \scaption{Coefficient of correlation of the local Lyapunov exponents $\lambda_{i}(t)$ with the energy-input rate (filled circles) and the energy-dissipation rate (open circles). Despite a fair amount of scatter we can see that the exponents $\lambda_{i}(t)$ with $1\leq i <D_{\rm{\scriptscriptstyle KY}}$ behave differently from those of higher indices. } \label{correlation} \end{figure} \hspace*{0.3cm} In order to check this conjecture we compute the correlation between the $\lambda_{i}(t)$ on one hand, and $e(t)$ and $\epsilon(t)$ on the other. 
The correlation coefficients are defined by \begin{eqnarray} c^{i}_{e}=\frac{1}{T_{5\rm{p}}\sigma_{e}^{5\rm{p}}\sigma_{\lambda_i}} \int_{0}^{T_{5\rm{p}}}(e^{5\rm{p}}(t)-\bar{e}^{5\rm{p}}) (\lambda_i(t)-\itLambda_i) \mbox{d}t, \nonumber \\[0.5cm] c^{i}_{\epsilon}=\frac{1}{T_{5\rm{p}} \sigma_{\epsilon}^{5\rm{p}}\sigma_{\lambda_i}} \int_{0}^{T_{5\rm{p}}}(\epsilon^{5\rm{p}}(t)-\bar{\epsilon}^{5\rm{p}}) (\lambda_i(t)-\itLambda_i) \mbox{d}t, \end{eqnarray} where $\sigma_{\lambda_i}$ is the standard deviation of $\lambda_i(t)$. Fig. \ref{correlation} shows $c^{i}_{e}$ and $c^{i}_{\epsilon}$ for the first 50 local Lyapunov exponents. Although there is a lot of scatter in the data, a structural difference between the local Lyapunov exponents with a small and a large index is obvious. Those with a small index have a negative correlation with the energy-dissipation rate and a positive correlation with the energy-input rate, whereas those with a large index have a positive correlation with the energy-dissipation rate and a correlation of either sign with the energy-input rate. This suggests that the Lyapunov vectors have a preferred spatial scale. In particular, we conjecture that the Lyapunov vectors with a small index describe perturbation fields with a large spatial scale, directly excited by the energy input. Those with a larger index describe smaller scale perturbation fields and are more strongly correlated to energy dissipation. \begin{figure}[t] \begin{center} (a) \includegraphics[width=0.7\textwidth]{1-19.eps} \vspace{0.5cm} (b) \includegraphics[width=0.7\textwidth]{20-38.eps} \end{center} \scaption{ Temporal variation of (a) $\widetilde{\lambda}_{\rm{\scriptscriptstyle I}}(t)$ and (b) $\widetilde{\lambda}_{\rm{\scriptscriptstyle II}}(t)$. The running average is taken over $\tau_{\rm{av}} = 0.54 T_{\rm{\scriptscriptstyle R}}$. 
For comparison, the energy-input and dissipation rates, shifted and scaled by equal amounts, are drawn with a dotted and a dashed line, respectively. The horizontal lines indicate the time mean values, $\overline{\lambda}_{\rm{\scriptscriptstyle I}}=0.122$ and $\overline{\lambda}_{\rm{\scriptscriptstyle II}}=-5.57$. } \label{fig:local-lyapunov} \end{figure} \hspace*{0.3cm} In order to test this conjecture we divide the Lyapunov spectrum into two parts with an equal number of exponents. As indicated in Fig. \ref{correlation}, we choose the integer part of the Kaplan-Yorke dimension, computed from the time-averaged Lyapunov exponents, to separate the two. Thus, group I comprises $\{\itLambda_i,\bm{v}_i\}_{i=1}^{19}$ and group II comprises $\{\itLambda_i,\bm{v}_i\}_{i=20}^{38}$. The growth rate of volumes in these two subspaces is given by $\lambda_{\rm{\scriptscriptstyle I}}(t)=\sum_{i=1}^{19}\lambda_{i}(t)$ and $\lambda_{\rm{\scriptscriptstyle II}}(t)=\sum_{i=20}^{38}\lambda_{i}(t)$, respectively. As mentioned above, the $\lambda_{i}(t)$ fluctuate rapidly. In order to see a possible correlation with the energy-input and dissipation rates we compute the running mean of $\lambda_{\rm{\scriptscriptstyle I}}(t)$ and $\lambda_{\rm{\scriptscriptstyle II}}(t)$ over a time interval $\tau_{\rm{av}}$ such that $\tau_{\eta}<\tau_{\rm{av}}<T_{\rm{\scriptscriptstyle R}}$. The running mean is indicated by a tilde. Figs. \ref{fig:local-lyapunov}(a) and (b) show the time series of $\widetilde{\lambda}_{\rm{\scriptscriptstyle I}}(t)$ and $\widetilde{\lambda}_{\rm{\scriptscriptstyle II}}(t)$ along with the energy-input and dissipation rates, shifted to have the same time mean value and scaled by equal factors. Clearly, $\widetilde{\lambda}_{\rm{\scriptscriptstyle I}}(t)$ has a strong positive correlation with the energy-input rate and a weaker, negative correlation with the energy-dissipation rate.
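The running means and correlation coefficients used in this analysis amount to standard operations on uniformly sampled time series. A compact sketch (with our own variable names, not the production code) is:

```python
import numpy as np

def correlation(x, y):
    """Normalised time-average correlation of two sampled signals,
    analogous to the coefficients c_e^i and c_eps^i in the text."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std()))

def running_mean(x, window):
    """Running mean over `window` samples (the tilde quantities)."""
    return np.convolve(np.asarray(x, dtype=float),
                       np.ones(window) / window, mode="same")
```

A signal correlates perfectly with itself ($+1$) and with its negative ($-1$), which provides a quick sanity check of the normalisation.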
Most of the peaks of $\widetilde{\lambda}_{\rm{\scriptscriptstyle I}}(t)$ coincide with peaks of the energy-input rate, the latter leading in phase. Only in the interval $2<t/T_{\rm{\scriptscriptstyle R}}<3$ is the correlation less clear. In contrast, $\widetilde{\lambda}_{\rm{\scriptscriptstyle II}}(t)$ shows a strong positive correlation with the energy-dissipation rate and tends to lead in phase. On the interval $0<t/T_{\rm{\scriptscriptstyle R}}<1$ the correlation with the energy-dissipation rate is weaker, and locally there is a positive correlation with the energy-input rate. \hspace*{0.3cm} Finally, we consider the orientation of the Lyapunov vectors. Take the vectors scaled to unit length, $\hat{\bm{v}}_i=\bm{v}_i(t)/\|\bm{v}_i(t)\|_{\rm{\scriptscriptstyle{Q}}}^{1/2}$, and denote the corresponding perturbation vorticity field by $\delta\hat{\widetilde{\bm{\omega}}}_i$. We compute the enstrophy spectrum of the scaled perturbation fields as \begin{equation} Q_i(k)=\frac{1}{2}\sum_{k-\frac{1}{2} <\|\bm{k}\|<k+\frac{1}{2}}|\delta\hat{\widetilde{\bm{\omega}}}_i (\bm{k})|^2. \end{equation} If the Lyapunov vectors have a preferred length scale we expect to see a structural difference between the enstrophy spectra of the vectors in group I and group II, the former having a larger amplitude in the smaller wavenumbers and the latter in the larger wavenumbers. In Fig. \ref{support}, we plot the average spectrum over the perturbation fields in group I, $Q_{\rm{\scriptscriptstyle I}}(k)$, and group II, $Q_{\rm{\scriptscriptstyle II}}(k)$. All perturbation fields have the maximal amplitude around $k\eta=0.4$, corresponding to a spatial scale in between that of the fixed modes, $k_{f}^{-1}$, and the Kolmogorov dissipation scale $\eta$ in the current simulations. The average spectrum $Q_{\rm{\scriptscriptstyle I}}(k)$ is larger for all wavenumbers below $k_{\rm{c}}$ ($\approx 0.32$) and smaller for most larger wavenumbers.
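The shell summation defining $Q_i(k)$ can be sketched for a vorticity field stored on a Fourier grid (a simplified illustration; the array layout and normalisation are our assumptions, not those of the original code):

```python
import numpy as np

def enstrophy_spectrum(omega_hat, kmax):
    """Shell-binned enstrophy spectrum: Q(k) = 1/2 * sum of |omega_hat|^2
    over modes with k - 1/2 < |k_vec| < k + 1/2.  `omega_hat` holds the
    Fourier coefficients of the vorticity, shape (3, N, N, N), with
    integer wavenumbers ordered as in np.fft.fftfreq(N, 1/N)."""
    n = omega_hat.shape[1]
    k1 = np.fft.fftfreq(n, d=1.0 / n)                    # 0, 1, ..., -1
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    density = 0.5 * np.sum(np.abs(omega_hat) ** 2, axis=0)
    q = np.bincount(shell.ravel(), weights=density.ravel(),
                    minlength=kmax + 1)
    return q[: kmax + 1]
```

A single mode at wavevector $(2,0,0)$ with amplitude $3$ contributes $\tfrac{1}{2}\cdot 9 = 4.5$ to the shell $k=2$ and nothing elsewhere.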
\hspace*{0.3cm} These results indicate that the Lyapunov vectors indeed have preferred length scales associated with them, and that the local Lyapunov exponents are correlated with the physical quantities that dominate these spatial scales. We have checked that the results do not depend critically on the choice of the two groups, i.e. the highest index in group I, here fixed to the integer part of the Kaplan-Yorke dimension $D_{\rm{\scriptscriptstyle KY}}$. As far as we know, this is the first time that evidence is found for the localisation of Lyapunov vectors and the correlation of (local) Lyapunov exponents and physical quantities in developed turbulence. The localisation of Lyapunov vectors has been found in shell model turbulence by \citet{yamada} (and references therein). However, their results are derived at much larger Reynolds number, in the presence of a large inertial range. We expect that the presence of a developed inertial range in the case of isotropic turbulence would yield an even clearer separation of spatial scales than seen in our present results. \begin{figure}[t] \begin{center} \includegraphics[width=0.7\textwidth]{support.eps} \end{center} \scaption{Enstrophy spectra $Q_{\rm{\scriptscriptstyle I}}(k)$ and $Q_{\rm{\scriptscriptstyle II}}(k)$ of the perturbation vorticity fields $\delta\hat{\bm{\omega}}_{i}$ in the groups I and II. The filled circles represent the average profile of Lyapunov vectors 1 through 19, the open circles represent the average profile of Lyapunov vectors 20 through 38. The average profile of the leading 19 Lyapunov vectors is larger for $k\eta < k_{\rm{c}}\eta\approx 0.32$ and mostly smaller for larger wave numbers.} \label{support} \end{figure} \section{Concluding Remarks} \label{sec:conclusion} \hspace*{0.3cm} We have identified temporally periodic motion which reproduces the dynamics and statistics of isotropic turbulence well in high-symmetric flow. 
The period of the periodic motion is of the order of the eddy-turnover time of turbulence. The mean properties of various physical quantities, e.g. the energy spectral function and the Lyapunov exponent, calculated as time averages over one period of the periodic motion, approximate those of the turbulence taken over a long time series. This agreement may be understood by noting that the turbulent motion spends much of the time in the same, or similar, spatio-temporal state as the periodic motion. In fact, we have seen that the state point of the turbulent motion approaches the orbit of the periodic motion in phase space at a rate of roughly once every several eddy-turnover times. In other words, the orbit of this periodic motion is embedded in turbulence. Thus, we regard it as the skeleton of turbulence. \hspace*{0.3cm} Such a periodic motion embedded in turbulence is useful as a reference field with respect to which the mechanisms of various turbulence phenomena, including turbulent mixing and the energy-cascade process, can be analysed. The reason is as follows. Turbulence is intrinsically chaotic and the fluid flow varies quite randomly both in space and in time. The fluid motion is unpredictable and never repeats, though the statistical properties are rather universal. This chaotic nature makes it difficult to study the general properties of turbulence. There is no way to confirm that the turbulence data used in an analysis represent typical properties of turbulence. On the other hand, the periodic motion, whose dynamical properties can be understood much more clearly than those of the turbulence itself, repeats its temporal variation exactly and without limit. Then, by analysing the repeated, periodic time series we may be able to extract the typical mechanisms of turbulence dynamics as well as calculate the statistics of any physical quantities with high accuracy. This line of study is now under way.
\hspace*{0.3cm} In the present study the inertial range is captured only marginally. In order to capture it fully it is necessary to simulate high-Reynolds-number turbulence at higher resolution. The difficulties then arise in the computation time and memory requirements of the algorithm used to find periodic motion. The calculation of the iteration matrix of the Newton-Raphson procedure is the most time-consuming step. The number of degrees of freedom in our computations, $\rm{O}(10^4)$, seems to be the largest tackled in the continuation of periodic orbits at the time of writing. It proved possible to use the conventional arc-length method with the Newton-Raphson iteration because of the high efficiency of parallelization. In order to go to higher truncation levels, and thus larger Reynolds numbers and a developed inertial range, it may be necessary to switch to matrix-free methods for continuation. Such methods use inexact linear solvers for equations like (\ref{fixed}), avoiding the computation and orthogonal decomposition of the matrix of derivatives. Recently this approach has successfully been applied to the computation of periodic solutions of the Navier-Stokes equations \cite{sanchez}. \hspace*{0.3cm} It is conjectured that there are infinitely many periodic orbits in the turbulent regime. Some of them represent the turbulent state and others do not. From the present study we cannot infer a general rule for selecting periodic motion which represents the turbulence well. It is interesting to see that at low micro-scale Reynolds number, where we distill the periodic orbits from a turbulent time series, their time-mean energy-dissipation rates and largest Lyapunov exponents are all close to those of turbulence. If we decrease the viscosity, however, only the orbit of longest period reproduces the average values of turbulence. Future work should aim at a better understanding of this selection process as well as of the uniqueness of such solutions. \section*{Acknowledgments} Author L.
van Veen was supported by a grant of the Japan Society for Promotion of Science. The parallel computations were done at the Media Center of Kyoto University.
\section{Introduction} The focus of this paper is a novel Domain Model of Revita,\footnote{revita.cs.helsinki.fi --- {\href{https://www.dropbox.com/s/jftlrr1ixcmg9az/eacl-demo.mp4?dl=0}{Link to a short demo here.}}} an Intelligent Tutoring System (ITS) for language learning~\cite{katinskaia-etal-2018-revita,katinskaia:2017-nodalida:revita}. The structure of Revita follows the classic design of ITS, with a Domain, Student, and Instruction model. The {\em Domain Model} describes what must be mastered by the learner: concepts, rules, etc.---known as {\em skills} in ITS literature---and {\em relationships} among them~\cite{wenger2014artificial,Polson1988foundations}. We represent the Domain Model as a system of {\em linguistic constructs}---a wide range of linguistic phenomena, including inflexion of various word paradigms, government relations, collocations, syntactic constructions, etc. The system of constructs is developed in collaboration with experts in language teaching. It impacts all components of Revita---the variety of exercises that it generates automatically, intelligent feedback, the modeling of learner knowledge and evaluation of learner progress. The {\em Student model} represents the learner's proficiency. It is based on the history of answers given by the learner to exercises. The {\em Instruction model} embodies the pedagogical principles that lie behind the decisions: which exercises the learner is best prepared to do next, and which feedback should be provided to guide the learner toward the right answer. The models are interconnected in the ITS. Revita is currently piloted with real-world learners and teachers at several universities~\cite{stoyanova2021integration}. It is developed as a tool for learners and teachers of several languages: Finnish and Russian are currently the most developed languages; several ``beta'' languages including Italian, German, Swedish, Kazakh, Sakha, Tatar, English, Erzya, and others, are in early stages of development. 
The system is not meant to replace the teacher. For students, it provides 24/7 access to an unlimited number of practice exercises that fit the learner's current level, with immediate feedback and progress estimation. The platform also offers support for teachers: they can delegate the mundane work of creating hundreds of exercises as needed for each topic for students at different levels. It provides teachers with a range of instruments for sharing learning materials, working with groups of students, and monitoring progress and evaluation. The paper is organized as follows: Section~\ref{sec:prior-work} briefly reviews work on intelligent computer-assisted language learning (ICALL). The principles and ideas behind Revita are described in Section~\ref{sec:revita-principles}. It also describes its main components: linguistic constructs, automatic generation of exercises and feedback, and modeling of learner knowledge. Section~\ref{sec:tools} describes tools for learners and teachers. \section{Prior Work} \label{sec:prior-work} Computer-aided language learning (CALL)\comment{emerged in the 1960s and} has been gaining interest with the rapid development of language technology. It is defined as ``the search for and study of applications of the computer in language teaching and learning''~\cite{levy1997computer}. Applying ITS techniques to language learning, and supporting CALL systems with intelligent and/or adaptive methodologies such as expert systems (ES), natural language processing (NLP), and automatic speech recognition (ASR), defines the domain of intelligent CALL, or ICALL. The goal of ICALL is building advanced applications for language learning using NLP and language resources---corpora, lexicons, etc.~\cite{volodina2014flexible}. The number of academic and commercial tools for language learning is growing rapidly; popular commercial systems include Duolingo, Rosetta Stone, Babbel, Busuu, iTalki, etc.
Some CALL systems aim to give learners access to authentic materials~\cite{white2010theory}, the opportunity to interact with teachers and native speakers (e.g., the learning app {\em Lingoda} is a platform for live video classes), and provide text or sound feedback based on learner needs and knowledge~\cite{bodnar2017learner}. Modern CALL systems are also mobile, which increases their accessibility~\cite{derakhshan2011call,rosell2018autonomous,kacetl2019use}. While some research points out that CALL systems, in their experiments, do not achieve increases in learner proficiency~\cite{golonka2014technologies,bodnar2017learner,rachels2018effects}, other work showed actual improvements in learner motivation and attitudes, retention of various learning concepts, communication between students and teachers, and overall language skills~\cite{yeh2019speaking,zhang2022types}. It has been suggested that in developing CALL system, pedagogical goals---rather than technological means---should be the primary focus~\cite{gray2008effective}. \begin{figure}[t] \includegraphics[scale=0.21]{home.png} \caption{Revita's home page, with the main activities.} \label{fig:home} \end{figure} \section{Core Components of Revita} \label{sec:revita-principles} \subsection{Main Principles} \begin{table*}[t] \centering \scalebox{0.90}{\begin{tabular}{ll} \hline \textbf{Constructs} & \textbf{Examples} \\ \hline \textbf{Finnish} \\ (1) Necessive construction: Present & {\em Energiakriisin lähestyessä kaikki keinot \underline{on \textbf{otettava}} käyntiin.} \\ passive participle, with {\em -ttava} ending & (With the energy crisis approaching, all means \underline{must \textbf{be taken}} into action.) \\ (2) Transitive vs. 
intransitive verbs & {\em Voisitko \underline{\textbf{sammuttaa}} valon?} (Could you \underline{\textbf{turn off}} the light?)\comment{{\em Voisitko \underline{\textbf{herättää}} minut huomenna?} (Could you \underline{\textbf{wake me up}} tomorrow?)} \\ (3) Verb government: translative case & {\em Kaupungit \underline{eivät ole muuttuneet\comment{muuttuvat} \textbf{energiatehokkaammiksi}}.} \\ & (Cities \underline{have not become \textbf{more energy efficient}}.) \\ (4) Present participle substitute for & {\em Maija \underline{kertoi \textbf{vanhempien asuvan}} kaupungissa.} \\ ``that''-relative clause\comment{({\em Että-lauseenvastike}), with different subjects} & (Maija \underline{said that \textbf{her parents live}} in the city.)\\[1ex] \hline \textbf{Russian} \\ (5) Verb: II conjugation & {\em \textcyr{Мы скоро \underline{\textbf{увидим}} восход.}} (We \underline{\textbf{will see}} the sunrise soon.) \\ (6) Pronoun: joint vs.~hyphenated\comment{Joint vs. hyphenated spelling} & {\em \textcyr{Нам нужно \underline{\textbf{кое о чем}} поговорить.}} (We need to talk \underline{\textbf{about something}}) \\ (7) Perfective vs. imperfective aspect & {\em \textcyr{Страны \underline{\textbf{согласовали}} проект о будущих отношениях.}} \\ & (The countries \underline{\textbf{agreed on}} a draft on future relations.) \\ (8) Dative subject \& impersonal verb & {\em \textcyr{\underline{\textbf{Мне} необходимо поговорить} с врачом.}} (\underline{\textbf{I} need to talk} to a doctor.) \\ [1ex] \hline \textbf{German} \\ (9) Past perfect tense & {\em Ich \underline{\textbf{wäre}} mit ihm \underline{\textbf{gekommen}}, aber er wurde krank.}\\ & (I \underline{\textbf{would have come}} with him, but he got sick.) \\ (10) Weak masculine nouns & {\em Ich möchte \underline{\textbf{den Jungen}} kennenlernen.} (I want to meet \underline{\textbf{the boy}}.) 
\\ (11) Prepositions governing dative case & {\em Wir sind \underline{\textbf{aus dem} Haus} gelaufen.} (We ran \underline{\textbf{out of the} house}.) \\ \end{tabular} } \caption{Examples of \underline{\em grammatical constructs} found in sentences (underlined). {\em \textbf{Candidates}} are words that will be chosen for exercises about the constructs (marked in bold).} \label{constructs} \end{table*} Revita's approach is founded on the following primary principles: \begin{enumeratorCompact} \item {\em Practice should be based on authentic content}: the learner (or teacher) can upload any text from the Internet or a file to the system. \item {\em Exercises are automatically generated}, based on any authentic text chosen by the learner. \item {\em Exercises are personalized}: they match the learner’s current skill levels, so that each new exercise is selected to be a challenge that the learner is ready to meet. \item {\em Immediate feedback}: rather than saying only ``right/wrong'', the tutor {\em gradually guides} the learner toward finding the correct answer by providing hints, which begin as general hints and give more and more specific information about the answer. \item {\em Continual assessment} of skills allows the tutor to select exercises optimally personalized for each learner based on past performance. \end{enumeratorCompact} The first principle is the bedrock of the philosophy behind Revita---the story-based approach. All learning activities are based on authentic texts, which should be inherently interesting for the learner to read and motivate her to practice longer. A few sample texts are available in the system's ``public'' library for each language; also, several new stories are recommended as so-called ``Stories of the day''---which are crawled daily from several websites. 
But the main idea is that texts be selected and uploaded by the learners themselves (or teachers); see Figure~\ref{fig:home}, the button ``Add new stories'' allows one to upload new material into Revita. \begin{figure*}[t] \hspace{-0.3cm} \includegraphics[trim=0mm 37mm 0mm 0mm, clip, scale=0.4]{read.png} \caption{Preview mode for a story (before practice). All violet words may appear in an exercise. Noun phrases and prepositional phrases are circled in red. All government relations and constructions are underlined. Top-right corner---the list of constructs found in the story. Bottom-right corner---translation of the clicked word: ``asennetaan'', into English (target language can be selected). The green box over the clicked word lists all constructs related to it.} \label{fig:read} \end{figure*} \subsection{Linguistic Constructs} The central aspect of Revita's approach is the system of linguistic constructs that are represented in the Domain model. {\em Constructs} are linguistic phenomena or rules, that vary in specificity: e.g., a construct (in Finnish) may be {\em verb government}: the verb {\em tutustua} (``to become acquainted'') requires its argument to be in the illative case ({\em ``into something''}), while {\em tykätä} (``to like'') requires its argument in the elative case ({\em ``from something''\comment{jostakin}}), etc. Constructs also include all {\em constructions}, as construed in Construction Grammar (CG). CG treats many phenomena---grammatical constructs, multi-word expressions (MWEs), collocations, idioms, etc.---within a unified formalism. Examples of constructs for several languages are shown in Table~\ref{constructs}. When customizing the system for a new language, we engage experts in language teaching in creating the inventory of constructs, which need to be learned by students. Currently, Finnish and Russian have the most developed system of constructs, each with over 200 constructs. (Potentially, the number can be much larger.) 
The Russian construct system evolved from the grammatical constructs covered in tests for L2 learners developed at the University of Helsinki over 20 years. The Finnish constructs are based on inventories of grammatical topics developed by experts in teaching Finnish as L2. As shown in the examples in Table~\ref{constructs}, each construct needs to be identified in the text. For this we use HFST morphological analyzers, neural dependency parsers, and rule-based pattern detection. In Example (1), for the construct ``Present passive participle with {\em -ttava} ending,'' the rule matches the participle {\em ``otettava''} by morphological features: participle, present tense, passive voice. This is then recognized as the head of the ``necessive'' construction {\em ``on otettava''} (``must be taken''), detected by a rule that matches the singular 3rd person present form of the modal verb {\em ``olla''} (``to be'') and the present passive participle, in the nominative case. In Example (2), the construct ``Transitive vs.~intransitive verbs'' is detected by using dictionaries of verb lemmas or by rules that detect regular ending patterns in verb lemmas {(e.g., {\em sammu\textbf{ttaa}} vs {\em samm\textbf{ua}}, ``turn something off'' vs. ``turn itself off'')}. \begin{figure*}[t] \includegraphics[scale=0.4]{practice1.png} \caption{Practice mode with a story. Figure shows the second paragraph of a story with three exercises: two clozes (``halukas'' and ``aurinkopaneeli'') and one MC. Previous answers are marked green and blue---correct and incorrect, respectively. The green box shows the set of opened hints for a cloze exercise.} \label{fig:practice} \end{figure*} Verb government relations are detected by several components: large sets of government patterns (2000--3000 per language); pattern matching of noun phrases, prepositional phrases, and analytic verb forms; and dependency relations detected by parsers.
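A minimal sketch of such rule-based detection, for the necessive construction of Example (1), might look as follows (the Token representation and tag names are our own simplification, not Revita's internal format):

```python
from dataclasses import dataclass

@dataclass
class Token:
    surface: str
    lemma: str
    tags: frozenset  # morphological features from the analyzer

def find_necessive(tokens):
    """Match the Finnish necessive construction: the 3rd-person singular
    present of 'olla' followed by a present passive participle in the
    nominative case."""
    hits = []
    for a, b in zip(tokens, tokens[1:]):
        if (a.lemma == "olla" and {"PRS", "SG3"} <= a.tags
                and {"PCP", "PASS", "NOM"} <= b.tags):
            hits.append((a.surface, b.surface))
    return hits
```

Applied to an analysis of {\em ``kaikki keinot on otettava''}, the rule would return the pair {\em ``on otettava''}.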
In Example (3), a government pattern for the \nocomment{intransitive} verb {\em ``muuttua''} (``to change'') requires an argument in the translative case---here, the comparative adjective {\em energiatehokkaammiksi} (``more energy-efficient''). The government detector will find an argument of {\em ``muuttua''} regardless of its position in the sentence, and for any form of the verb, including complex analytic forms, e.g., the negative perfect tense {\em ``eivät ole muuttuneet.''} Detecting longer and more complex syntactic constructions relies on all of the components mentioned above. In (4), to match the construction {\em ``kertoi vanhempien asuvan''}, the government rule states that the verb ``kertoi'' (``said'') must govern a subordinate clause starting with {\em ``että''} (``that''); the {\em substitute} clause contains a noun phrase in the genitive case, which acts as the subject ({\em ``vanhempien''}), and a genitive active participle ({\em ``asuvan''}). The user can preview all constructs identified in a story in the Preview Mode prior to practice; see Figure~\ref{fig:read}. All noun phrases and prepositional phrases are circled; government relations and syntactic constructions are underlined. A list of all constructs found in the story is shown in the top-right corner: one can click on the list to highlight all instances of the construct found in the story. Clicking on any word in the story will also show all constructs linked to it in a green box above the clicked word. This lets the learner (or teacher) see what can be exercised in the given text. \subsection{Exercise Generation Based on Constructs} Revita offers several practice modes; the main activity is the Grammar Practice Mode based on a story; see Figure~\ref{fig:practice}. Revita offers ``cloze'' (fill-in-the-blank) and multiple-choice (MC) exercises. A cloze exercise is shown as a text box, with the lemma of the expected answer given as a hint to the learner.
In Figure~\ref{fig:practice}, the lemma in the box is \mybox{{\em aurinkopaneeli}} (``solar panel''). The learner is expected to insert the correct form of this word that suits the context; here, it is the plural partitive case ({\em ``aurinkopaneeleja''}). Each word picked to be exercised must be disambiguated---we have to know the correct lemma to show to the learner. Disambiguation is performed by agreement rules and by dependency parsers. For analytic verb forms, such as {\em ``on otettava''} (``should be taken''), the cloze box will show the lemma of the {\em head} verb: \mybox{{\em ottaa}} (``to take''). All {\em candidates}---potential exercises in practice mode---are based on the detected constructs. In Example (3) for Finnish in Table~\ref{constructs}, an exercise on the construct ``Verb government'' is in bold: the learner will see the lemma \mybox{{\em energiatehokas}} (``energy-efficient''). To insert the correct form in the translative case, the learner needs to know which case is required by the governing verb. MC exercises are more targeted: the options---known as ``distractors''---to choose from are generated based on the exercised construct. Therefore, the same word or construction may have more than one set of distractors, since more than one construct may be linked to the candidate. Distractors are created by rules and morphological generators. In Example (6), for the construct ``Pronoun: joint vs.~hyphenated spelling,'' a rule generates distractors like: {\em \textcyr{``кое о чем''}}, {\em \textcyr{``кое-о-чем''}}, {\em \textcyr{``о кое-чем''}} (``about something''). For transitive vs.~intransitive verbs, Revita uses dictionaries of lemma pairs\comment{or generates distractors' lemmas using a rule}. However, the distractors must be {\em inflected} forms that fit the context, not lemmas. We use morphological generators to produce the required inflected forms\comment{ based on given lemmas and sets of grammatical tags}.
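A minimal sketch of this distractor pipeline, assuming a morphological generator that maps a (lemma, tag) pair to a surface form; the paradigm table below is a hand-written stand-in for such a generator, and all names are hypothetical:

```python
# Sketch: MC distractors for a case-based construct are the inflected
# forms of the target lemma in every case except the expected one.
# PARADIGM is a hand-written stand-in for a real morphological generator.

PARADIGM = {
    ("vanhempi", "nom.pl"): "vanhemmat",
    ("vanhempi", "gen.pl"): "vanhempien",
    ("vanhempi", "part.pl"): "vanhempia",
}

def case_distractors(lemma, correct_tag, tags, generate=PARADIGM.get):
    """Inflected forms for every tag except the correct one."""
    return [generate((lemma, tag)) for tag in tags if tag != correct_tag]

opts = case_distractors("vanhempi", "gen.pl",
                        ["nom.pl", "gen.pl", "part.pl"])
assert opts == ["vanhemmat", "vanhempia"]
```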
MC distractors are often an effective way of learning a particular construct. In Example (4), e.g., the construction requires the subject to be in the genitive case; it is useful to offer the lemma {\em ``vanhemmat''} (``parents'') in other cases (nominative, partitive, etc.). These forms, which differ only by case, are produced by the morphological generator. \subsection{Feedback} Feedback is the second essential feature of Revita. The learner gets {\em multiple attempts} for every exercise. Feedback is designed to gradually guide the learner toward the correct answer by providing a sequence of hints that depend on the context, the constructs linked to the exercise, and the answer that was given by the learner. Hints are ordered so they become more specific on subsequent attempts. For example, if a verb governs the partitive case, the feedback sequence may be: {\em ``This is the object of the verb 'lisätä'.''} \textrightarrow {\em ``Use another case.''} \textrightarrow {\em ``Use the partitive case.''} The learner can also request hints {\em before} inserting an answer: as shown in the green box in Figure~\ref{fig:practice}, four of the available hints are already ``used up'' (one white heart remaining). Requesting hints indicates that the learner has not mastered the concept, and affects the learner's scores. Feedback that depends on the context gives information on whether the word in question is part of some construction or depends on a governing head (verb, noun, or adjective), etc. Hints also appear as {\em underlining} of syntactically related elements in the context.\comment{: this information is available after analyzing a story, as well as other grammatical features (case, number, tense, etc). } When the learner inserts an answer which does not match the expected one (the same as in the original story), Revita analyses the answer and checks which grammatical features are not correct.
To give feedback on these features in the order of increasing specificity, Revita uses a language-specific hierarchy of features. For example, in Russian, the hierarchy specifies that the hint about an incorrect gender of an adjective is shown before hints about incorrect number or case.\comment{---noun lemmas are given and their gender cannot be modified. At the same time, a gender hint is shown for adjectives.} Some feedback messages are generated at the stage when the construct is mapped to the text. For example, for an exercise with the participle {\em ``asuvan''} in the {\em substitute that-clause} construction (see Example (4) in Table~\ref{constructs}), we generate the feedback: {\em This is equivalent to ``...kertoi että vanhemmat asuvat...''} (``...said that parents live...'')---by generating the actual clause which is substituted by the participle. To generate this feedback, Revita uses information about the syntactic roles of each word in the original construction {\em ``kertoi vanhempien asuvan''} and the required grammatical features of the forms in the feedback---to obtain these forms, we use the morphological generator. All mechanisms that define the order and the content of feedback hints, and the algorithms for sampling exercises for students, are part of the Instruction Model of Revita. \subsection{Learner Modeling and Exercise Sampling} All learner answers and all requested hints for each exercise are recorded. A learner may attempt to answer each exercise multiple times. For each attempt, Revita analyzes the answers and the requested hints to calculate {\em credits and penalties} for the corresponding language constructs. The collected information on performance with constructs is used to model the learners' skill and the difficulty of the constructs. To model students' skills and exercise difficulty, we employ Item Response Theory (IRT)~\cite{embretson2013item,van2013handbook}.
IRT comes from psychometrics and has a wide range of applications in education~\cite{klinkenberg2011computer}. The {\em Item} in IRT is a task that the learner should solve. Most IRT applications have a clear definition of an {\em item}, and a clear credit standard. The classic example of an item is a test question in mathematics: it is unambiguous and there is a clear judgment of the answer---correct or wrong. Our major challenge is that language constructs are not directly judged, unlike test items in other learning domains.\comment{ (such as mathematics, physics, etc.)} It is challenging to determine the credit and penalty for each construct based on the student's answer, because the link from an exercise to constructs is {\em one-to-many}.\comment{ and requested hints.} To date, we have collected 570K answers for Russian exercises. Experiments with this data show a strong correlation between students' proficiency as estimated by IRT and by the teachers. \nocomment{This suggests that IRT is able to converge on a reliable estimate of learner proficiency.} Interestingly, the estimates of exercise difficulty do not correlate with teachers' judgments, which agrees with the findings of other researchers~\cite{lebedeva-2016-placement-test}. Exercises are sampled for practice based on the difficulty of the hardest construct linked to each exercise.\footnote{At present, we assume that the difficulty of an exercise depends on the {\em hardest} construct linked to it.} The difficulty of constructs is modeled by IRT. We aim to provide exercises that are best suited to each student's proficiency level. For each possible exercise, IRT first estimates the probability that the student will answer the exercise correctly---then the probability of picking this exercise for practice is sampled from a normal distribution centered at a $50\%$ chance that the learner would answer correctly.
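The selection step just described can be sketched as follows. The two-parameter logistic (2PL) response function and the Gaussian weighting around a $50\%$ success probability are our illustrative assumptions, not necessarily the exact model used in Revita:

```python
import math
import random

def p_correct(theta, difficulty, discrimination=1.0):
    """2PL item-response probability that a learner of skill `theta`
    answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

def pick_exercise(theta, difficulties, sigma=0.15, rng=random):
    """Sample an exercise index, preferring items whose predicted
    success probability is close to 50%."""
    weights = [
        math.exp(-(p_correct(theta, d) - 0.5) ** 2 / (2 * sigma ** 2))
        for d in difficulties
    ]
    return rng.choices(range(len(difficulties)), weights=weights, k=1)[0]

# An exercise matched to the learner's level gets the highest weight:
ws = [math.exp(-(p_correct(0.0, d) - 0.5) ** 2 / (2 * 0.15 ** 2))
      for d in (-3.0, 0.0, 3.0)]
assert ws[1] > ws[0] and ws[1] > ws[2]
```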
Thus, on average, the exercises are not too difficult and not too easy. For languages with insufficient learner data for training IRT, we ask teachers to manually assign CEFR difficulty levels to constructs. Earlier experiments using specialized ELO ratings for assessing learner skills and evaluating the difficulty of linguistic constructs are presented in~\citet{hou-etal-2019-modeling}. \section{Tools for Students and Teachers} \label{sec:tools} At any time, the student can set her CEFR proficiency level manually, or take an {\em adaptive} placement test to estimate proficiency. The test draws on a bank of questions prepared by teachers; the sampling of questions is driven by an IRT model trained on learner data. \comment{by teachers and each question is marked by a CERF level.} After that, the estimate of the learner's proficiency level is adjusted according to the correctness of answers to exercises. The learner can upload a story from any website or a local file. To each uploaded text, Revita applies a semantic topic classifier---culture, science, sport, politics---and a difficulty classifier. \comment{that run over each text that is added to the platform. After that, the student is offered a variety of learning activities. there is also an option to practice with a random story with a defined topic} The {\em Preview mode} (see Figure~\ref{fig:read}) allows the user to read a story, edit it in case of inaccuracies, and review the grammatical topics that can be learnt through practicing with this story. Clicking on each word provides its translation into a number of languages. The learner can mark whether she knows a word or not. All unknown words are added to the learner's personal set of flashcards, which are used for Vocabulary Practice. The {\em Practice mode} presents grammar and listening exercises---the learner can hear a segment of text in context, and is expected to type the answer correctly in the practice box.
The user can also practice with a story in the Competition Mode, against a bot: the difference from the standard practice is that the learner needs to do the exercises faster than the bot---whose skill level approximately matches that of the learner. Another option is to practice with a Crossword built on the authentic text---the translations of words are used as hints. Revita offers various statistics and info-graphics to track progress on grammar constructs and vocabulary. These analytics are available to the learner and to the teacher. Revita allows teachers to build groups of students, share texts with them, and create tailored exercises that can be shared with the group. Revita also allows teachers to track how their students practice and how well they perform on various tasks. \section{Conclusions and Future Work} This paper presents an in-depth discussion of the novel core component of the Revita language learning system---the Domain model embodied in a system of linguistic constructs. This system of constructs shapes all aspects of the learning experience in Revita and improves the quality of exercises and feedback. It also enables more accurate modeling of learner knowledge, informative progress analytics, and exercises most appropriate for the learner's current level. We have results from pilot studies with Finnish and Russian L2 learners using the new Domain Model, but the discussion of the results is beyond the scope of this paper. In the future, we will improve the Domain Model by adding more information about the {\em interactions} and dependencies among the constructs---which will enable the creation of more intelligent learning paths. We will also add new types of activities, e.g., speech exercises. \comment{We are also working on developing a new mode of interaction with Revita, i.e., a system of lessons---sets of grammatical and lexical topics that can be practiced together in a form of different activities.
} \section*{Limitations} Revita works with many languages; however, at present only Finnish and Russian have inventories of constructs sufficiently developed that they can actually be used by students in real-world scenarios. Most other languages have a limited set of constructs, which limits the quality and variety of the exercises, as well as the feedback. Developing a substantial inventory of constructs is a complex task that requires expertise in computational linguistics, as well as in language pedagogy. As mentioned above, Finnish and Russian have on the order of 200 constructs. Meanwhile, ``the Great Finnish Grammar'' has over 1500 articles \cite{vilkuna-2004-iso}, each of which introduces at least one construct, which, in principle, constitutes an aspect of the linguistic competency of a native speaker. A fascinating research challenge is determining the ``essential'' core inventory of constructs, which can support effective learning. Our experience so far with the rather modest inventories suggests that they already bring enormous value to learners and teachers~\cite{stoyanova2021integration}. The approach relies on arbitrary authentic texts being uploaded from the web; sometimes these texts cannot be extracted from the web site without some inaccuracies. Also, the original texts may contain typos, mistakes, etc. These problems should be fixed manually by editing the text. Of course, learners with a low proficiency level cannot do that independently. To avoid having these mistakes negatively affect the learning, the stories can be checked by a human teacher / tutor. We also plan to employ strong language models for grammatical error detection to identify such potential problems and highlight them to alert the user that additional checking may be needed. Revita relies on the text when checking the learner's answers. Currently, only one correct answer is allowed---the one that is present in the story.
Sometimes the wordform entered by the learner may also be valid in the given story context---``alternative correct'' answers. In such cases, Revita may still tell the learner that the answer is not correct. This is one of the important problems that we are researching at present, using neural models for detection of grammatical errors \cite{katinskaia-2019-ACL-multi-admiss,katinskaia2021assessing}. Revita also has certain limitations related to the use of external tools and services: dependency parsers, morphological analyzers, and external dictionaries---all may contain inaccuracies and errors. All of these factors can be a source of mistakes in the intelligent tutor: wrong analyses, incorrectly disambiguated lemmas, missing translations, etc. The system tries to collect {\em multiple sources of evidence} for its predictions to raise the confidence in---and precision of---the predictions. When the confidence is low---e.g., in the presence of conflicting evidence---the exercise, feedback, etc., is {\em not} offered to the learner. \section*{Ethics Statement} Revita is designed to carefully guard the privacy of its users---learners and teachers. It does not share any personal information collected during the learner's practice with any third parties. The teacher can track the learner's performance only if the learner has explicitly accepted the invitation to join the teacher's group. Any authentic text material uploaded into the system is visible only in the user's personal {\em private} library. If the teacher shares a story with a group of students, it is visible only inside the group library, never to anyone outside the group. Texts pre-loaded into Revita's public library come either from sources that have given us explicit permission to use their content, or from the public domain. \comment{only separate short paragraphs from texts.} \section*{Acknowledgements}
\section{Introduction and statement of the main result} In the qualitative theory of real planar differential systems, one important open problem is the determination of limit cycles. The study of limit cycles for smooth polynomial differential systems originates from the well-known Hilbert 16th Problem, and has produced a rich body of excellent work; see the survey \cite{L} and the references therein. Nevertheless, it is still open even for the quadratic case. As piecewise smooth differential systems have emerged widely in control theory, electronic circuits with switches, and mechanical engineering with impacts and dry friction, the investigation of limit cycles for piecewise smooth differential systems has attracted widespread attention from mathematicians. They attempt to develop the theory of piecewise smooth differential systems, and to generalise the tools for studying the number of limit cycles from smooth differential systems to piecewise smooth differential systems. To the best of our knowledge, the Melnikov function method \cite{LH,LCZ} and the Averaging method \cite{LNT} are the two main tools extended to study the number of limit cycles for piecewise smooth differential systems. A center of a real planar polynomial differential system is called an isochronous center if there exists a neighborhood of it such that all periodic orbits in this neighborhood have the same period. Owing to this special property, isochronous centers have attracted considerable attention; see the survey \cite{C}. The quadratic polynomial differential systems with an isochronous center were first classified into four kinds by Loud in \cite{L}. Using the notation of \cite{MRT}, we exhibit the four classes of quadratic isochronous centers and their first integrals in Table \ref{Tab:S}.
\begin{table}[h] \caption{Quadratic isochronous centers and first integrals} \vspace{2pt} \centering \doublerulesep=0.4pt \begin{tabular}{cll} \hline\hline\\[-8pt] Name & \quad System & \quad First integral\\[1ex] \hline\\[-8pt] $S_1$ & \quad $\dot{x}=-y+\frac{1}{2}x^{2}-\frac{1}{2}y^{2}$ & \quad $H=\frac{x^2+y^2}{1+y}$\\[1ex] & \quad $\dot{y}=x(1+y)$ &\\[1ex] $S_2$ & \quad $\dot{x}=-y+x^{2}$ & \quad $H=\frac{x^2+y^2}{(1+y)^2}$\\[1ex] & \quad $\dot{y}=x(1+y)$ &\\[1ex] $S_3$ & \quad $\dot{x}=-y+\frac{1}{4}x^{2}$ & \quad $H=\frac{(x^2+4y+8)^2}{1+y}$\\[1ex] & \quad $\dot{y}=x(1+y)$ &\\[1ex] $S_4$ & \quad $\dot{x}=-y+2x^{2}-\frac{1}{2}y^{2}$ & \quad $H=\frac{4x^2-2(y+1)^2+1}{(1+y)^4}$\\[1ex] & \quad $\dot{y}=x(1+y)$ &\\[1ex] \hline\hline \end{tabular} \label{Tab:S} \end{table} Studies on the number of limit cycles bifurcated from the period annuli of quadratic isochronous centers, when they are perturbed inside all smooth polynomial differential systems of degree $n$, have yielded relatively complete results. For $n=2$, in \cite{CJ}, by the first order bifurcation, Chicone and Jacobs proved that at most 1 limit cycle bifurcates from the periodic orbits of $S_1$, and at most 2 limit cycles bifurcate from the periodic orbits of $S_2,S_3$ and $S_4$; Iliev obtained in \cite{I} that the cyclicity of the period annulus around $S_1$ is also 2. Li et al in \cite{LLLZ} presented a linear estimate for the number of limit cycles with respect to the four quadratic isochronous centers for any natural number $n$, where the upper bounds for systems $S_1,S_2$ and $S_3$ are sharp, while the upper bound for system $S_4$ is not. Shao and Zhao gave an improved upper bound for system $S_4$ in \cite{SZ}. Currently, research on the number of limit cycles bifurcated from the period annuli of quadratic isochronous centers, when they are perturbed inside all piecewise smooth polynomial differential systems of degree $2$, has also produced some results.
Llibre and Mereu used the averaging method of first order to study the number of limit cycles bifurcated from the period annuli of $S_1$ and $S_2$, when they are perturbed inside a class of piecewise smooth quadratic polynomial differential systems \cite{LM} with the straight line of discontinuity $y=0$, and found that at least 4 and 5 limit cycles can bifurcate from the period annuli of $S_1$ and $S_2$, respectively. Li and Cen obtained in \cite{LC} that there are at most 4 limit cycles bifurcating from the periodic orbits of $S_3$ by the averaging method of first order and the \emph{Chebyshev criterion}, when it is perturbed inside a class of discontinuous quadratic polynomial differential systems with the straight line of discontinuity $x=0$. Cen et al applied the same methods and proved in \cite{CLZ} that there are at most 5 limit cycles bifurcating from the periodic orbits of $S_4$. In the present paper, we consider piecewise smooth polynomial perturbations of degree $n$ for all four quadratic isochronous centers, and investigate the number of zeros of the first order Melnikov functions associated with these quadratic isochronous centers to study the maximum number of limit cycles bifurcated from the period annuli. As far as we know, studies on the number of limit cycles for quadratic polynomial differential systems with a center under piecewise smooth polynomial perturbations of degree $n$ are rare; see for instance \cite{LL}. Suppose that $H=H(x,y)$ is a first integral of the quadratic isochronous center, and $R=R(x,y)$ is the corresponding integrating factor.
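For instance, for $S_2$ with $H=(x^2+y^2)/(1+y)^2$, a direct computation (given here as an illustration) shows that $R=2/(1+y)^3$ serves as the integrating factor:

```latex
% Illustrative check for S_2: with H=(x^2+y^2)/(1+y)^2 one finds
H_x=\frac{2x}{(1+y)^2},\qquad
H_y=\frac{2y(1+y)-2(x^2+y^2)}{(1+y)^3}=\frac{2(y-x^2)}{(1+y)^3},
% so R=2/(1+y)^3 is an integrating factor:
-\frac{H_y}{R}=x^2-y=\dot{x},\qquad \frac{H_x}{R}=x(1+y)=\dot{y}.
```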
We consider the piecewise smooth polynomial perturbations of a quadratic isochronous center: \begin{equation} \left(\begin{array}{ll}\dot{x}\\[2ex] \dot{y}\end{array}\right)=\label{S}\left\{\begin{array}{ll} \left(\begin{array}{ll}-\dfrac{H_y}{R}+\varepsilon P^+(x,y)\\ \dfrac{H_x}{R}+\varepsilon Q^+(x,y)\end{array}\right), & \mbox{ $x> 0$,} \\[2ex] \left(\begin{array}{ll}-\dfrac{H_y}{R}+\varepsilon P^-(x,y)\\ \dfrac{H_x}{R}+\varepsilon Q^-(x,y)\end{array}\right), & \mbox{ $x< 0$,} \end{array} \right. \end{equation} where $0<|\varepsilon|\ll1$ and $P^{\pm}(x,y),Q^{\pm}(x,y)$ are polynomials in the variables $x$ and $y$ of degree $n$, given by \begin{equation}\label{PQ}\begin{split} P^{\pm}(x,y)=\sum_{i+j=0}^{n}a_{ij}^{\pm}x^iy^j,\quad Q^{\pm}(x,y)=\sum_{i+j=0}^{n}b_{ij}^{\pm}x^iy^j. \end{split}\end{equation} Adopting the first order Melnikov function method for piecewise smooth integrable non-Hamiltonian systems \cite{LCZ}, we have the following main results. \begin{theorem}\label{th:S} Denote the least upper bound for the number of zeros (taking into account their multiplicity) of the first order Melnikov function associated with the quadratic isochronous center $S_i$ by $H_i(n)$, $i=1,2,3,4$. Then \vspace{-10pt} \begin{itemize} \item[(a)]$H_1(0)=1$; $H_1(n)=n+3$ for $n=1,2,3$; and $H_1(n)=2n$ for $n\geq4$; \item[(b)]$H_2(n)=n+1$ for $n=0,1$; and $H_2(n)=2n+2$ for $n\geq2$; \item[(c)]$H_3(0)=1$; $H_3(n)=2n+1$ for $n=1,2$; and $H_3(n)=2n+2$ for $n\geq3$; \item[(d)]$H_4(0)=1$; $H_4(1)=4$; $H_4(2)=7$; $H_4(n)\leq12n+4$ for $n=3,4,5$; and $H_4(n)\leq20n-10-2(1+(-1)^n)$ for $n\geq6$. \end{itemize} \vspace{-10pt} Notice that ``$=$'' indicates that the upper bound is sharp. \end{theorem} \begin{remark} (i) In \cite{LC} and \cite{CLZ}, the authors respectively studied the case of $n=2$ when $a_{00}^{\pm}=b_{00}^{\pm}=0$ for systems $S_3$ and $S_4$.
They proved that at most 4 and 5 limit cycles can bifurcate from the period annuli of these two quadratic isochronous centers by the averaging method of first order. Compared to the results shown in Theorem \ref{th:S}, the piecewise smooth polynomial perturbations with the constant term can make the perturbed systems produce at least one more limit cycle. (ii) In the estimation of the number of zeros of the Melnikov functions for systems $S_3$ and $S_4$, we find that using the first order Melnikov method leads to the same results as using the first order Averaging method for piecewise smooth polynomial differential systems when $n=2$. Han et al showed in \cite{HRZ} the equivalence between the Melnikov method and the Averaging method for studying the number of limit cycles bifurcated from the period annulus of planar analytic differential systems. Here some evidence demonstrates that the equivalence may also hold for piecewise smooth analytic differential systems. (iii) If $a_{ij}^+=a_{ij}^-$ and $b_{ij}^+=b_{ij}^-$, then the perturbed systems are smooth. Li et al have studied these systems in \cite{LLLZ}, and have proven that: \begin{theorem}\cite{LLLZ} The least upper bound for the number of zeros (taking into account their multiplicity) of the first Melnikov function (Abelian integral) associated with the system: \begin{itemize} \item[(a)]$S_1$ is 0 if $n = 0$; 1 if $n = 1, 2, 3$; $n-2$ for $n\geq4$; \item[(b)]$S_2$ is 0 if $n = 0, 1$; and $n$ for $n\geq2$; \item[(c)]$S_3$ is $n$ for all $n\geq0$; \item[(d)]$S_4$ is $\leq14n + 11$. \end{itemize} \end{theorem} Applying Abelian integrals and complete elliptic integrals of the first and second kinds, Shao and Zhao gave a smaller upper bound on the number of zeros of the first Melnikov function for system $S_4$ in \cite{SZ}.
That is, the least upper bound for the number of zeros is $0$ for $n=0$; is not greater than $35$ for $n=1,2,3$; is not greater than $59$ for $n=4,5,6$; and is not greater than $12n-1$ for $n\geq7$. We give a further investigation of the upper bound with respect to $S_4$, and obtain a better result as follows. \begin{theorem}\label{th:S4} The least upper bound for the number of zeros (taking into account their multiplicity) of the first order Melnikov function associated with the system $S_4$ is $\leq [\frac{5n-5}{2}]$. \end{theorem} \end{remark} As demonstrated above, the main objective of this paper is to provide the least upper bound for the number of zeros of the first order Melnikov functions with respect to the quadratic isochronous centers $S_1, S_2, S_3$ and $S_4$, when they are perturbed by piecewise smooth polynomials of degree $n$, and to give an upper bound for $S_4$ improved over \cite{LLLZ,SZ}, when it is perturbed by smooth polynomials of degree $n$. Incidentally, we contrast our results with those obtained in \cite{LC} and \cite{CLZ}, which deal with the quadratic isochronous centers $S_3$ and $S_4$ respectively by the averaging method of first order, when $n=2$ and $a_{00}^{\pm}=b_{00}^{\pm}=0$. We find that (i) if the piecewise smooth polynomial perturbations include the constant term, the perturbed systems can produce at least one more limit cycle, and (ii) the first order Melnikov method and the first order averaging method are equivalent in studying the number of limit cycles bifurcated from the period annuli of the centers. We exploit the same technique as in \cite{LLLZ} to compute the first order Melnikov functions; that is, we calculate the Melnikov function through a double integral by using Green's theorem. This has great advantages in obtaining a more accurate expression, and the double integrals are very easy to compute.
For $S_4$, a more precise representation of the first order Melnikov function than in \cite{LLLZ} is obtained, and thus a better upper bound for the smooth case can be acquired. Since the Melnikov functions include different kinds of elementary functions, or elliptic integrals, to obtain a better estimate for the number of zeros of the first order Melnikov function we eliminate these elementary functions step by step: we first get rid of the logarithm function, and then eliminate the functions with polynomial numerators by multiplying by nonzero factors and taking derivatives, so that it suffices to consider the derived function. A useful lemma is proposed, which is helpful for determining the exact upper bound for systems $S_1$, $S_2$ and $S_3$, and a better upper bound for system $S_4$. For small $n$, the common \emph{Chebyshev criterion} and the properties of extended Chebyshev systems with positive accuracy are used to obtain a sharp upper bound. When the polynomial perturbations are smooth, the Chebyshev property of two-dimensional Fuchsian systems also plays a key role in establishing Theorem \ref{th:S4}. The present paper is organized as follows. First, some useful preliminary results are given in Section \ref{sec:pre}. Then, we estimate the number of zeros of the first order Melnikov function for the quadratic isochronous centers $S_1, S_2, S_3$ and $S_4$ in Sections \ref{sec:S1}-\ref{sec:S4} respectively, when they are perturbed inside piecewise smooth polynomial differential systems. The proof of Theorem \ref{th:S4} is provided in Section \ref{sec:S4s}, in which an improved result on the number of zeros of the first order Melnikov function for the quadratic isochronous center $S_4$ under smooth polynomial perturbations is obtained. Finally, some important proofs and results are given in the Appendix for reference.
\section{Preliminary results}\label{sec:pre} In this section, we introduce the main method that we will use to study the piecewise smooth polynomial systems \eqref{S}, and some useful tools and results to estimate the number of zeros of the first Melnikov function. From Theorem 1 of \cite{LCZ}, the first order Melnikov function with respect to system \eqref{S} is \begin{equation}\label{M0} \begin{split} M(h)=\int_{L_h^+}RP^+\mathrm{d}y-RQ^+\mathrm{d}x+\int_{L_h^-}RP^-\mathrm{d}y-RQ^-\mathrm{d}x, \end{split} \end{equation} where $L_h^+=\{x\geq0|H=h, h\in(h_c,h_s)\}$ and $L_h^-=\{x\leq0|H=h, h\in(h_c,h_s)\}$, see Figure \ref{fig1}. Here $H=h$ is one periodic orbit of system \eqref{S}$|_{\varepsilon=0}$, and $h_c$ and $h_s$ correspond to the center and the separatrix polycycle, respectively. Inspired by the idea of \cite{LLLZ}, we will use Green's theorem to compute $M(h)$ through two double integrals. Let $\mathcal{A}_h$ and $\mathcal{B}_h$ be the two intersection points of $L_h=L_h^+\bigcup L_h^-$ and $y$-axis, and $D_h^+$ and $D_h^-$ be the regions formed by $L_h^+\bigcup\overrightarrow{\mathcal{A}_h\mathcal{B}_h}$ and $L_h^-\bigcup\overrightarrow{\mathcal{B}_h\mathcal{A}_h}$, respectively. Then by Green's theorem, the first order Melnikov function \eqref{M0} can be expressed as \begin{equation}\label{Mh} \begin{split} M(h)=&\iint_{D_h^+}\left[\frac{\partial(RP^+)}{\partial x}+\frac{\partial(RQ^+)}{\partial y}\right]\mathrm{d}x\ \mathrm{d}y-\int_{\overrightarrow{\mathcal{A}_h\mathcal{B}_h}}RP^+\mathrm{d}y-RQ^+\mathrm{d}x\\ &+\iint_{D_h^-}\left[\frac{\partial(RP^-)}{\partial x}+\frac{\partial(RQ^-)}{\partial y}\right]\mathrm{d}x\ \mathrm{d}y-\int_{\overrightarrow{\mathcal{B}_h\mathcal{A}_h}}RP^-\mathrm{d}y-RQ^-\mathrm{d}x. \end{split} \end{equation} In the subsequent sections, we will exploit formula \eqref{Mh} to obtain the specific expressions of the first order Melnikov functions for the four quadratic isochronous centers. 
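As a toy sanity check of the Green's-theorem identity underlying \eqref{Mh} (our own example, unrelated to the systems above), take $R\equiv1$, $P=x^3$, $Q=y^3$, and let the curve be the unit circle bounding the disc $D$:

```latex
\oint_{\partial D}x^{3}\,\mathrm{d}y-y^{3}\,\mathrm{d}x
=\int_{0}^{2\pi}\bigl(\cos^{4}t+\sin^{4}t\bigr)\,\mathrm{d}t
=\frac{3\pi}{2},
\qquad
\iint_{D}\bigl(3x^{2}+3y^{2}\bigr)\,\mathrm{d}x\,\mathrm{d}y
=\int_{0}^{2\pi}\!\!\int_{0}^{1}3r^{3}\,\mathrm{d}r\,\mathrm{d}\theta
=\frac{3\pi}{2}.
```

Both sides agree, as required.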
\begin{figure}[h] \centering \includegraphics[width=.45\textwidth]{figure2} \caption{\small{The periodic orbit of \eqref{S}$|_{\varepsilon=0}$.}} \label{fig1} \end{figure} For a complicated function, it is not easy to determine the exact number of its zeros. Here we provide some effective results for obtaining lower and upper bounds on the number of zeros of such a function. The next result, which gives a lower bound, is well known. \begin{lemma}\label{le:CGP}\cite{CGP} Consider $n$ linearly independent analytical functions $f_i(x):D\rightarrow\mathbb{R},i=1,2,\cdots,n$, where $D\subset\mathbb{R}$ is an interval. Suppose that there exists $k\in\{1,2,\cdots,n\}$ such that $f_k(x)$ has constant sign. Then there exist $n$ constants $c_i, i=1,2,\cdots,n$, such that $c_1f_1(x)+ c_2f_2(x)+\cdots+c_nf_n(x)$ has at least $n-1$ simple zeros in $D$. \end{lemma} To obtain a better upper bound for the number of zeros of the first order Melnikov function, we give the following formula, which plays a key role in determining the least upper bound. The proof is given in Appendix A.1. \begin{lemma}\label{lem:P} If $a\neq0$, $p, q\not\in \mathbb{Z}$ and $p+q\in \mathbb{Z}$, then \begin{equation}\begin{split} \left(\displaystyle\frac{P_{n}(x)}{x^{p}(a\pm x)^{q}}\right)^{(n+1-(p+q))}=\dfrac{\widetilde P_{n}(x)}{x^{n+1-q}(a\pm x)^{n+1-p}}, \quad n\geq p+q-1, \end{split}\end{equation} where $P_n(x)$ and $\widetilde P_{n}(x)$ are polynomials of degree $n$, and $f^{(k)}$ denotes the $k$th derivative of the function $f$. \end{lemma} In addition, for small $n$, the theory of \emph{Extended Complete Chebyshev systems} (in short, ECT-systems) is useful for obtaining an exact upper bound. Let $\mathcal{F}=(f_1, f_2, \cdots, f_n)$ be an ordered set of $\mathcal{C}^\infty$ functions on $L$.
We call it an ECT-system on $L$ if, for all $i=1,2,...,n$, any nontrivial linear combination $\lambda_{1}f_{1}(x)+\lambda_{2}f_{2}(x)+...+\lambda_{i}f_{i}(x)$ has at most $i-1$ isolated zeros on $L$ counted with multiplicities. Moreover, this bound can be reached \cite{GV}. \begin{lemma}\label{ECT3} \cite{MV} $\mathcal{F}$ is an ECT-system on $L$ if, and only if, for each $i=1,2,...,n$, \begin{equation*} W_{i}(x)= \begin{vmatrix} f_{1}(x) & f_{2}(x) & \cdots & f_{i}(x)\\ f_{1}^{\prime}(x) &f_{2}^{\prime}(x) & \cdots & f_{i}^{\prime}(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_{1}^{(i-1)}(x) & f_{2}^{(i-1)}(x) & \cdots & f_{i}^{(i-1)}(x) \\ \end{vmatrix}\neq{0}, \quad \mbox{$x\in L$} . \end{equation*} \end{lemma} If the ordered set $\mathcal{F}$ is not an ECT-system, i.e., some Wronskian determinant has zeros, then the following two lemmas are powerful. The first one provides a sharp upper bound, while the second one provides a lower bound, which extends Lemma \ref{le:CGP}. \begin{lemma}\label{le:NT}\cite{NT} Let $\mathcal{F}$ be an ordered set of $\mathcal{C}^\infty$ functions on $[a, b]$. Assume that all the Wronskians are nonvanishing except $W_n(x)$, which has exactly one zero on $(a, b)$ and this zero is simple. Then $Z(\mathcal F)=n$ and for any configuration of $m\leq n$ zeros there exists an element in $\mathrm{Span}(\mathcal F)$ realizing it, where $Z(\mathcal F)=n$ denotes the maximum number of zeros counting multiplicity that any nontrivial function $F\in\mathrm{Span}(\mathcal F)$ can have. \end{lemma} \begin{lemma}\label{le:NT2}\cite{NT} Let $\mathcal{F}$ be an ordered set of real $\mathcal{C}^\infty$ functions on $(a, b)$ satisfying that all the Wronskians are nonvanishing except $W_{n-1}(x)$ and $W_n(x)$, such that there exists $\xi\in(a, b)$ with $W_{n-1}(\xi)\neq0$.
If $W_n(\xi)=0$ and $W'_n(\xi)\neq0$, then for each configuration of $m\leq n$ zeros, taking into account their multiplicities, there exists $F\in\mathrm{Span}(\mathcal F)$ with this configuration of zeros. \end{lemma} Some results on two-dimensional Fuchsian systems and the Chebyshev property from \cite{GI}, which will be used to obtain an improved upper bound for the quadratic isochronous center $S_4$ when it is perturbed inside all smooth polynomial differential systems of degree $n$, are given in Appendix A.4 for reference. \section{Zeros of $M(h)$ for system $S_1$}\label{sec:S1} Consider the piecewise smooth polynomial perturbations of degree $n$ of system $S_1$: \begin{equation} \left(\begin{array}{ll}\dot{x}\\[2ex] \dot{y}\end{array}\right)=\label{S1}\left\{\begin{array}{ll} \left(\begin{array}{ll}-y+\frac{1}{2}x^{2}-\frac{1}{2}y^{2}+\frac{\varepsilon }{2}P^+(x,y)\\ x(1+y)+\frac{\varepsilon }{2}Q^+(x,y)\end{array}\right), & \mbox{ $x> 0$,} \\[2ex] \left(\begin{array}{ll}-y+\frac{1}{2}x^{2}-\frac{1}{2}y^{2}+\frac{\varepsilon }{2}P^-(x,y)\\ x(1+y)+\frac{\varepsilon }{2}Q^-(x,y)\end{array}\right), & \mbox{ $x< 0$,} \end{array} \right. \end{equation} where $P^{\pm}(x,y)$ and $Q^{\pm}(x,y)$ are given by \eqref{PQ}. For ${\varepsilon=0}$, a first integral of system \eqref{S1} is \[ H=\frac{x^2+y^2}{1+y}, \] and the integrating factor is $R=\frac{2}{(1+y)^2}$. Here $L_h^+=\{x\geq0|H=h,h>0\}$ and $L_h^-=\{x\leq0|H=h,h>0\}$ are the right part and the left part of the periodic orbits surrounding the origin, respectively. $\mathcal{A}_h=(0,\alpha(h))$ and $\mathcal{B}_h=(0,\beta(h))$, where \begin{equation} \alpha(h)=\frac{h+\sqrt{h(4+h)}}{2}, \quad \beta(h)=\frac{h-\sqrt{h(4+h)}}{2}. \end{equation} \subsection{Expression of $M(h)$}\label{sub:S11} This subsection is devoted to obtaining the expression of the first order Melnikov function of system \eqref{S1}.
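The stated pair $(H,R)$ can be verified symbolically; the short \texttt{sympy} sketch below (our own check, not part of the argument) confirms that $H$ is constant along orbits of \eqref{S1}$|_{\varepsilon=0}$ and that $R$ is an integrating factor.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Unperturbed S_1 field and the stated first integral / integrating factor.
P = -y + x**2/2 - y**2/2
Q = x*(1 + y)
H = (x**2 + y**2) / (1 + y)
R = 2 / (1 + y)**2

# H is constant along orbits: grad(H) . (P, Q) = 0.
assert sp.cancel(sp.diff(H, x)*P + sp.diff(H, y)*Q) == 0
# R is an integrating factor: div(R*P, R*Q) = 0.
assert sp.cancel(sp.diff(R*P, x) + sp.diff(R*Q, y)) == 0
```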
By \eqref{Mh}, \begin{equation}\label{M} \begin{split} M(h)=M^+(h)+M^-(h), \end{split} \end{equation} where \begin{equation*}\label{M1} \begin{split} M^+(h)=&\iint_{D_h^+}\left[\frac{\partial}{\partial x}\left(\frac{P^+}{(1+y)^2}\right)+\frac{\partial}{\partial y}\left(\frac{Q^+}{(1+y)^2}\right)\right]\mathrm{d}x\ \mathrm{d}y+\int_{\beta(h)}^{\alpha(h)}\frac{P^+(0,y)}{(1+y)^2}\mathrm{d}y,\\ M^-(h)=&\iint_{D_h^-}\left[\frac{\partial}{\partial x}\left(\frac{P^-}{(1+y)^2}\right)+\frac{\partial}{\partial y}\left(\frac{Q^-}{(1+y)^2}\right)\right]\mathrm{d}x\ \mathrm{d}y-\int_{\beta(h)}^{\alpha(h)}\frac{P^-(0,y)}{(1+y)^2}\mathrm{d}y.\\ \end{split} \end{equation*} To acquire the expression of $M(h)$, it suffices to compute $M^+(h)$, and $M^-(h)$ can be obtained in the same way. \begin{equation*}\label{M^+1} \begin{split} M^+(h)=&\iint_{D_h^+}\left[\frac{1}{(1+y)^2}\left(\frac{\partial P^+ }{\partial x}+\frac{\partial Q^+ }{\partial y}\right)-\frac{2Q^+}{(1+y)^3}\right]\mathrm{d}x\ \mathrm{d}y+\int_{\beta(h)}^{\alpha(h)}\frac{P^+(0,y)}{(1+y)^2}\mathrm{d}y\\ =&\iint_{D_h^+}\left[\frac{1}{(1+y)^2}\sum_{i+j\leq n}\left(ia^+_{ij}x^{i-1}y^j+jb^+_{ij}x^{i}y^{j-1}\right)-\frac{2}{(1+y)^3}\sum_{i+j\leq n}b^+_{ij}x^{i}y^j\right]\mathrm{d}x\ \mathrm{d}y\\ &+\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^2}\sum_{j=0}^na^+_{0j}y^{j}\mathrm{d}y\\ =&\iint_{D_h^+}\left(\frac{1}{(1+y)^2}\sum_{i+j\leq n}ia^+_{ij}x^{i-1}y^j+\frac{1}{(1+y)^3}\sum_{i+j\leq n}b^+_{ij}x^{i}\left(j(1+y)y^{j-1}-2y^j\right)\right)\mathrm{d}x\ \mathrm{d}y\\ &+\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^2}\sum_{j=0}^na^+_{0j}y^{j}\mathrm{d}y\\ =&\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^2}\sum_{2i+j\leq n}a^+_{2i,j}(h+hy-y^2)^iy^j\mathrm{d}y\\ &+\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^2}\sum_{2i+1+j\leq n}a^+_{2i+1,j}(h+hy-y^2)^iy^j\sqrt{h+hy-y^2}\mathrm{d}y\\ &+\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^3}\sum_{2i+j\leq n}\frac{b^+_{2i,j}}{2i+1}(h+hy-y^2)^i(jy^{j-1}+(j-2)y^j)\sqrt{h+hy-y^2}\mathrm{d}y\\ 
&+\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^3}\sum_{2i+j+1\leq n}\frac{b^+_{2i+1,j}}{2i+2}(h+hy-y^2)^{i+1}(jy^{j-1}+(j-2)y^j)\mathrm{d}y\\ =&\sum_{k=0}^{n}\widetilde{m}_{ak}(h)\int_{\beta(h)}^{\alpha(h)}(1+y)^{k-2}\mathrm{d}y+ \sum_{k=0}^{n-1}\overline{m}_{ak}(h)\int_{\beta(h)}^{\alpha(h)}(1+y)^{k-2}\sqrt{h+hy-y^2}\mathrm{d}y\\ &+ \sum_{k=0}^n\overline{m}_{bk}(h)\int_{\beta(h)}^{\alpha(h)}(1+y)^{k-3}\sqrt{h+hy-y^2}\mathrm{d}y +\sum_{k=0}^{n+1}\widetilde{m}_{bk}(h)\int_{\beta(h)}^{\alpha(h)}(1+y)^{k-3}\mathrm{d}y\\ =&\sum_{k=0}^n\overline{m}_k(h)\int_{\beta(h)}^{\alpha(h)}(1+y)^{k-3}\sqrt{h+hy-y^2}\mathrm{d}y+ \sum_{k=0}^{n+1}\widetilde{m}_k(h)\int_{\beta(h)}^{\alpha(h)}(1+y)^{k-3}\mathrm{d}y, \end{split} \end{equation*} where $\overline{m}_{ik}$, $\widetilde{m}_{ik}$, $i=a,b$, and $\overline{m}_{k}$, $\widetilde{m}_{k}$ are polynomials of $h$ with degree \begin{equation}\begin{split}\label{mk1} &\deg\overline{m}_{ak}\leq\min\{k,n-1-k\},\quad \deg\widetilde{m}_{ak}\leq\min\{k,n-k\},\\ &\deg\overline{m}_{bk}\leq\min\{k,n-k\},\quad \deg\widetilde{m}_{bk}\leq\min\{k,n+1-k\},\\ &\deg\overline{m}_k\leq\min\{k,n-k\},\quad \deg\widetilde{m}_k\leq\min\{k,n+1-k\}, \end{split}\end{equation} which are determined by Newton's formula and some qualitative analysis, see \cite{LLLZ} for details. It is worth noting that when $n=0$, $\widetilde{m}_{bk}(h)=0$, for $k=0,1$, and thus $\widetilde{m}_0(h)=0$. Let \[ I_k(h)=\int_{\beta(h)}^{\alpha(h)}(1+y)^{k}\sqrt{h+hy-y^2}\mathrm{d}y,\quad J_k(h)=\int_{\beta(h)}^{\alpha(h)}(1+y)^{k}\mathrm{d}y. \] Then \begin{equation}\label{M1+} M(h)=\sum_{k=0}^n\overline{m}_k(h)I_{k-3}(h)+ \sum_{k=0}^{n+1}\widetilde{m}_k(h)J_{k-3}(h). \end{equation} Next, we need to compute $I_k(h)$ and $J_k(h)$, respectively. 
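Both integrals can also be evaluated by direct numerical quadrature, which is a convenient cross-check on the closed forms obtained below. This pure Python sketch (our own, at an arbitrarily chosen $h=1$) uses the substitution $y=\beta(h)+(\alpha(h)-\beta(h))\sin^2 t$, under which $\sqrt{h+hy-y^2}=(\alpha(h)-\beta(h))\sin t\cos t$, so the integrands become smooth:

```python
import math

def I_J(k, h, n=20000):
    # Midpoint-rule quadrature of I_k(h) and J_k(h) after the substitution
    # y = beta + (alpha - beta) sin^2 t, t in (0, pi/2), which removes the
    # square-root singularity of I_k at the endpoints.
    d = math.sqrt(h * (4 + h))                 # alpha - beta
    beta = (h - d) / 2
    I = J = 0.0
    step = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * step
        s, c = math.sin(t), math.cos(t)
        y = beta + d * s * s
        dy = 2 * d * s * c * step
        J += (1 + y) ** k * dy
        I += (1 + y) ** k * (d * s * c) * dy   # sqrt(h + h*y - y^2) = d*s*c
    return I, J

h = 1.0
I0, J0 = I_J(0, h)
assert abs(I0 - math.pi * h * (4 + h) / 8) < 1e-6   # I_0(h) = pi*h*(4+h)/8
assert abs(J0 - math.sqrt(h * (4 + h))) < 1e-6      # J_0(h) = sqrt(h(4+h))
```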
For $k>0$, let $u=1+y$, then \begin{equation*}\begin{split} I_k(h)=&\int_{u_1}^{u_2}u^k\sqrt{-1+(2+h)u-u^2}\mathrm{d}u\\ =&\int_{u_1}^{u_2}u^k\sqrt{(u-u_1)(u_2-u)}\mathrm{d}u, \end{split}\end{equation*} where $u_1=\beta(h)+1$ and $u_2=\alpha(h)+1$ are the two roots of $-1+(2+h)u-u^2=0$. Two different transformations $\sqrt{(u-u_1)(u_2-u)}=t(u-u_1)$ and $\sqrt{(u-u_1)(u_2-u)}=t(u_2-u)$ lead to \begin{equation*} I_k(h)=2(u_1-u_2)^2\int_{0}^{\infty}\frac{t^2(u_2+u_1t^2)^k}{(1+t^2)^{k+3}}\mathrm{d}t =2(u_1-u_2)^2\int_{0}^{\infty}\frac{t^2(u_1+u_2t^2)^k}{(1+t^2)^{k+3}}\mathrm{d}t. \end{equation*} It follows that \begin{equation}\begin{split}\label{Ik1} I_{k}(h)&=\displaystyle\ (u_{1}-u_{2})^{2}\int_{0}^{\infty}\dfrac{t^{2}[(u_{2}+u_{1}t^{2})^{k}+(u_{1}+u_{2}t^{2})^{k}]}{(1+t^{2})^{k+3}}\mathrm{d}t\\ &=\displaystyle\ (u_{1}-u_{2})^{2}\sum_{i+2j=k}r_{ij}(u_{1}+u_{2})^{i}(u_{1}u_{2})^{j}\\ &=\displaystyle h(4+h) \sum_{i+2j=k}r_{ij}\left(2+h\right)^{i}\\ &=h(4+h)\sum_{i=0}^{k}C_{i,k}h^{i},\quad \quad \quad \quad k>0, \end{split}\end{equation} where $r_{ij}, i+2j=k$, and $C_{i,k}, i=0,1,\cdots,k$ are constants. Moreover \[ C_{k,k}=\int_{0}^{\infty}\frac{t^{2}(1+t^{2k})}{(1+t^2)^{k+3}}\mathrm{d}t>0. \] Direct computations show that \begin{equation}\begin{split}\label{I1} &I_{-3}(h)=I_{0}(h)=\frac{h(4+h)\pi}{8},\\ &I_{-2}(h)=I_{-1}(h)=\frac{h\pi}{2}. 
\end{split}\end{equation} We obtain $J_k(h)$ by direct computation: \begin{equation}\begin{split}\label{Jk1} J_k(h)&=\frac{(1+\alpha(h))^{k+1}-(1+\beta(h))^{k+1}}{k+1}\\ &=\frac{(2+h+\sqrt{h(4+h)})^{k+1}-(2+h-\sqrt{h(4+h)})^{k+1}}{(k+1)2^{k+1}}\\ &=\frac{\sqrt{h(4+h)}}{(k+1)2^{k}}\sum_{j=0}^{[k/2]}C_{k+1}^{2j+1}(2+h)^{k-2j}h^j(4+h)^j\\ &=\sqrt{h(4+h)}\sum_{j=0}^{k}s_{j,k}h^j, \quad k>0, \end{split}\end{equation} where $s_{j,k}, j=0,1,\cdots,k$ are constants, \[ s_{k,k}=\frac{1}{(k+1)2^{k}}\sum_{j=0}^{[k/2]}C_{k+1}^{2j+1}=\frac{1}{k+1}>0, \] and \begin{equation}\begin{split}\label{J1} &J_{-3}(h)=\dfrac{1}{2}\sqrt{h(4+h)}(2+h),\\ &J_{-2}(h)=J_0(h)=\sqrt{h(4+h)},\\ &J_{-1}(h)=2\ln(1+\alpha(h)). \end{split}\end{equation} Using \eqref{mk1}-\eqref{J1}, we have the following result. \begin{proposition}\label{PS11} The first order Melnikov function $M(h)$ for system $S_1$ is: \begin{equation}\label{M13} \begin{split} M(h) =&\alpha_0h(4+h)+\beta_0\sqrt{h(4+h)},\ n=0,\\ M(h) =&h(\alpha_{0}+\alpha_{1}h)+\sqrt{h(4+h)}(\beta_0+\beta_1h)+\gamma_0\ln(1+\alpha(h)), \ n=1,\\ M(h) =&h(\alpha_{0}+\alpha_{1}h)+\sqrt{h(4+h)}(\beta_0+\beta_1h)+(\gamma_0+\gamma_1h)\ln(1+\alpha(h)), \ n=2,\\ M(h) =&h(\alpha_{0}+\alpha_{1}h)+\sqrt{h(4+h)}(\beta_0+\beta_1h)+(\gamma_0+\gamma_1h+\gamma_2h^2)\ln(1+\alpha(h)),\ n=3,\\ M(h) =&h\sum_{k=0}^{n-2}\alpha_kh^k+\sqrt{h(4+h)}\sum_{k=0}^{n-2}\beta_kh^k+(\gamma_0+\gamma_1h+\gamma_2h^2)\ln(1+\alpha(h)),\ n\geq4, \end{split} \end{equation} where $\alpha(h)=(h+\sqrt{h(4+h)})/2$, and $\alpha_k, \beta_k, k=0,1,\cdots,n-2$, and $\gamma_0, \gamma_1, \gamma_2$ are constants. \end{proposition} \begin{proof} For $n\leq3$, the computation of $M(h)$ is straightforward.
For $n\geq4$, \begin{equation*} \begin{split} \deg\left(\dfrac{\overline{m}_k(h)I_{k-3}(h)}{h}\right)&=\deg\overline{m}_k(h)+\deg \left(\dfrac{I_{k-3}(h)}{h}\right)\\ &\leq\min\{k,n-k\}+k-3+1\\ &\leq n-2,\quad\quad \mbox{for}\ k\geq4, \end{split} \end{equation*} \begin{equation*} \begin{split} \deg\left(\frac{\widetilde{m}_k(h)J_{k-3}(h)}{\sqrt{h(4+h)}}\right)&=\deg\widetilde{m}_k(h)+\deg \left(\frac{J_{k-3}(h)}{\sqrt{h(4+h)}}\right)\\ &\leq\min\{k,n+1-k\}+k-3\\ &\leq n-2,\quad\quad \mbox{for}\ k\geq4,\\ \end{split} \end{equation*} therefore, \begin{equation*}\label{M1n} \begin{split} M(h)=&\sum_{k=0}^n\overline{m}_k(h)I_{k-3}(h)+ \sum_{k=0}^{n+1}\widetilde{m}_k(h)J_{k-3}(h)\\ =&h(\alpha_{0}+\alpha_{1}h)+\sum_{k=4}^n\overline{m}_k(h)I_{k-3}(h)\\ &+\sqrt{h(4+h)}(\beta_0+\beta_1h)+(\gamma_0+\gamma_1h+\gamma_2h^2)\ln(1+\alpha(h))+\sum_{k=4}^{n+1}\widetilde{m}_k(h)J_{k-3}(h)\\ =&h\sum_{k=0}^{n-2}\alpha_kh^k+\sqrt{h(4+h)}\sum_{k=0}^{n-2}\beta_kh^k+(\gamma_0+\gamma_1h+\gamma_2h^2)\ln(1+\alpha(h)). 
\end{split} \end{equation*} \end{proof} \subsection{Independence of the coefficients}\label{sub:S12} To determine the independence of the coefficients in $M(h)$ obtained in \eqref{M13}, we give the following lemma. The proof is similar to the computation of $M^{+}(h)$, and is thus omitted here. \begin{lemma}\label{lem:S1} The following equalities hold. \begin{equation*} \begin{split} &\iint_{D_h^+}\frac{\partial}{\partial x}\left(\frac{x^{2m+1}}{(1+y)^2}\right)\mathrm{d}x\ \mathrm{d}y=\left\{\begin{array}{ll} h(c_{2m-1}h^{2m-1}+\cdots) & \mbox{if $m>0$,} \\ \frac{\pi}{2}h& \mbox{if $m=0$,} \end{array} \right.\\ &\iint_{D_h^+}\frac{\partial}{\partial y}\left(\frac{x^{2m}}{(1+y)^2}\right)\mathrm{d}x\ \mathrm{d}y=\left\{\begin{array}{lll} h(c_{2m-2}h^{2m-2}+\cdots) & \mbox{if $m>1$,} \\ -\frac{\pi}{4}h^2 & \mbox{if $m=1$,}\\ -\frac{\pi}{4}h(h+4) & \mbox{if $m=0$,} \end{array} \right.\\ &\iint_{D_h^+}\frac{\partial}{\partial y}\left(\frac{x^{2m+1}}{(1+y)^2}\right)\mathrm{d}x\ \mathrm{d}y\\ =&\left\{\begin{array}{ll} \sqrt{h(4+h)}(d_{2m-1}h^{2m-1}+\cdots)+(-1)^{m}\left(m(h+2)^2+2\right)\ln(1+\alpha(h)) & \mbox{if $m>0$,} \\ \sqrt{h(4+h)}(-\frac{1}{2}h-1)+2\ln(1+\alpha(h))& \mbox{if $m=0$,} \end{array} \right.\\ &\iint_{D_h^+}\frac{\partial}{\partial x}\left(\frac{x^{2m}}{(1+y)^2}\right)\mathrm{d}x\ \mathrm{d}y\\ =&\left\{\begin{array}{ll} \sqrt{h(4+h)}(d_{2m-2}h^{2m-2}+\cdots)+2(-1)^{m-1}m(h+2)\ln(1+\alpha(h)) & \mbox{if $m>0$,} \\ 0& \mbox{if $m=0$,} \end{array} \right.\\ &\int_{\beta(h)}^{\alpha(h)}\frac{1}{(1+y)^2}\mathrm{d}y=J_{-2}(h)=\sqrt{h(4+h)},\\ \end{split} \end{equation*} \begin{equation*} \begin{split} &\int_{\beta(h)}^{\alpha(h)}\frac{y}{(1+y)^2}\mathrm{d}y=J_{-1}(h)-J_{-2}(h)=2\ln(1+\alpha(h))-\sqrt{h(4+h)},\quad\quad\quad\quad\quad\quad\quad\quad \end{split} \end{equation*} where \begin{equation*}\begin{split} &c_{2m-1}=\sum_{k=m}^{2m}C_m^{2m-k}(-1)^{k-m}C_{k-2,k-2}= \frac{\sqrt{\pi}\ \Gamma(m+\frac{3}{2})}{(2m-1)2^{2m-1}\Gamma(m+1)}, \ m\geq2, \quad c_1=\frac{3\pi}{8},\\
&c_{2m-2}=-\frac{2}{1+2m}\sum_{k=m}^{2m}C_m^{2m-k}(-1)^{k-m}C_{k-3,k-3}= -\frac{2 \Gamma(m-\frac{3}{2}) \Gamma(m+\frac{3}{2})}{(2m+1) \Gamma(2m)},\ m\geq2, \quad c_0=-\frac{\pi}{8},\\ &d_{2m-1}=-\frac{1}{1+m}\sum_{k=m+1}^{2m+2}C_{m+1}^{2m+2-k}(-1)^{k-m-1}\frac{1}{k-2}= -\frac{\Gamma (m-1) \Gamma (m+2)}{(m+1) \Gamma (2m+1)},\ m\geq2, \quad d_1=\frac{3}{2},\\ &d_{2m-2}=\sum_{k=m}^{2m}C_{m}^{2m-k}(-1)^{k-m}\frac{1}{k-1}= \frac{m\sqrt{\pi }\ \Gamma (m-1)}{2^{2m-1}\Gamma (m+\frac{1}{2})},\ m\geq2, \quad d_0=-2 \end{split}\end{equation*} are nonzero constants, and the dots denote the lower-order terms of $h$. \end{lemma} \begin{proposition}\label{PS12} The coefficients in $M(h)$ given in Proposition \ref{PS11} are independent. \end{proposition} \begin{proof} By Lemma \ref{lem:S1}, for $n=0$, \[ \frac{\partial(\alpha_0,\beta_0)} {\partial(b^+_{0,0},a^+_{0,0})} =-\dfrac{\pi}{4}\neq0, \] for $n=1$, \[ \frac{\partial(\alpha_0,\alpha_1,\beta_0,\beta_1,\gamma_0)} {\partial(a^+_{1,0},b^+_{0,0},a^+_{0,0},b^+_{1,0},a^+_{0,1})} =\dfrac{\pi^2}{8}\neq0, \] for $n=2$, \[ \frac{\partial(\alpha_0,\alpha_1,\beta_0,\beta_1,\gamma_0,\gamma_1)} {\partial(a^+_{1,0},b^+_{0,0},a^+_{0,0},b^+_{1,0},a^+_{0,1},a^+_{2,0})} =\dfrac{\pi^2}{4}\neq0, \] for $n\geq3$ odd, \[ \frac{\partial(\alpha_0,\alpha_1,\alpha_2,\cdots,\alpha_{n-2},\beta_0,\beta_1,\beta_2,\cdots,\beta_{n-2},\gamma_0,\gamma_1,\gamma_2)} {\partial(a^+_{1,0},a^+_{3,0},b^+_{4,0},\cdots,a^+_{n,0}, a^+_{2,0},b^+_{3,0},a^+_{4,0}\cdots,b^+_{n,0},b^+_{1,0},a^+_{0,0},a^+_{0,1})} =-\pi c_1\prod_{k=2}^{n-2}c_{k}d_k\neq0, \] and for $n\geq4$ even, \[ \frac{\partial(\alpha_0,\alpha_1,\alpha_2,\cdots,\alpha_{n-2},\beta_0,\beta_1,\beta_2,\cdots,\beta_{n-2},\gamma_0,\gamma_1,\gamma_2)} {\partial(a^+_{1,0},a^+_{3,0},b^+_{4,0},\cdots,b^+_{n,0}, a^+_{2,0},b^+_{3,0},a^+_{4,0},\cdots,a^+_{n,0},b^+_{1,0},a^+_{0,0},a^+_{0,1})} =-\pi c_1\prod_{k=2}^{n-2}c_{k}d_k\neq0, \] thus the parameters $\alpha_k, \beta_k, k=0,1,\cdots,n-2$, and $\gamma_0, \gamma_1, 
\gamma_2$ are independent. \end{proof} \subsection{Zeros of $M(h)$} Finally, we estimate the number of zeros of $M(h)$ obtained in Proposition \ref{PS11}. \begin{proposition} $H_1(0)=1$, $H_1(n)=n+3$ for $n=1,2,3$, and $H_1(n)=2n$ for $n\geq4$. \end{proposition} \begin{proof} For $n=0$, it is easy to verify that $M(h)$ has at most 1 zero in $(0,+\infty)$, i.e., $H_1(0)\leq1$. For $n\geq1$, we eliminate the logarithmic function first by taking derivatives. \begin{equation*} M^{(3)}(h)=\dfrac{1}{h^2(4+h)^2\sqrt{h(4+h)}}\sum_{i=0}^{n+1}\widetilde{\beta}_ih^i,\quad \mbox{for} \ n=1,2,3, \end{equation*} and \begin{equation*} M^{(3)}(h)=\sum_{i=0}^{n-4}\widetilde{\alpha}_ih^i+\dfrac{1}{h^2(4+h)^2\sqrt{h(4+h)}}\sum_{i=0}^{n+1}\widetilde{\beta}_ih^i, \quad \mbox{for}\ n\geq4. \end{equation*} Obviously, $M^{(3)}(h)$ has at most $n+1$ zeros when $n=1,2,3$. Then, it follows from $M(0)=0$ that \[ H_1(n)\leq n+1+3-1=n+3. \] For $n\geq4$, let $F(h)=M^{(3)}(h)$, then it follows from Lemma \ref{lem:P} that \begin{equation*} F^{(n-3)}(h)=\dfrac{1}{h^{n-1/2}(4+h)^{n-1/2}}\sum_{i=0}^{n+1}\bar{\beta}_ih^i. \end{equation*} Obviously, $F^{(n-3)}(h)$ has at most $n+1$ zeros. Thus, by Rolle's Theorem, $F(h)$, as well as $M^{(3)}(h)$, has at most $2n-2$ zeros in $(0,+\infty)$. Note that $M(0)=0$, thus for $n\geq4$, \[ H_1(n)\leq 2n-2+3-1=2n. \] Note that the functions $h^{k+1}, h^k\sqrt{h(4+h)}$, $k=0,1,\cdots, n-2$ and $h^k\ln(1+\alpha(h)), k=0,1,2$ are linearly independent. Hence by Lemma \ref{le:CGP} and Proposition \ref{PS12}, $H_1(0)\geq1$, $H_1(n)\geq n+3$ for $n=1,2,3$ and $H_1(n)\geq 2n$ for $n\geq4$. The Proposition follows. 
\end{proof} \section{Zeros of $M(h)$ for system $S_2$}\label{sec:S2} Consider the piecewise smooth polynomial perturbations of degree $n$ of system $S_2$: \begin{equation} \left(\begin{array}{ll}\dot{x}\\[2ex] \dot{y}\end{array}\right)=\label{S2}\left\{\begin{array}{ll} \left(\begin{array}{ll}-y+x^2+\frac{\varepsilon }{2}P^+(x,y)\\ x(1+y)+\frac{\varepsilon }{2}Q^+(x,y)\end{array}\right), & \mbox{ $x> 0$,} \\[2ex] \left(\begin{array}{ll}-y+x^2+\frac{\varepsilon }{2}P^-(x,y)\\ x(1+y)+\frac{\varepsilon }{2}Q^-(x,y)\end{array}\right), & \mbox{ $x< 0$,} \end{array} \right. \end{equation} where $P^{\pm}(x,y)$ and $Q^{\pm}(x,y)$ are given by \eqref{PQ}. For ${\varepsilon=0}$, the first integral of system \eqref{S2} is \[ H=\frac{2y+1-x^2}{(1+y)^2}, \] and the integrating factor is $R=\frac{2}{(1+y)^3}$. Here $L_h^+=\{x\geq0|H=h,h\in(0,1)\}$, $L_h^-=\{x\leq0|H=h,h\in(0,1)\}$, $\mathcal{A}_h=(0,\alpha(h))$ and $\mathcal{B}_h=(0,\beta(h))$, where \begin{equation} \alpha(h)=\frac{1-h+\sqrt{1-h}}{h}, \quad \beta(h)=\frac{1-h-\sqrt{1-h}}{h}. 
\end{equation} \subsection{Expression of $M(h)$ and independence of coefficients} Similarly to the computation of $M^+(h)$ in subsection \ref{sub:S11}, by \eqref{Mh}, the first order Melnikov function of system \eqref{S2} is \begin{equation}\label{M2} \begin{split} M(h)=&\iint_{D_h^+}\left[\frac{\partial}{\partial x}\left(\frac{P^+}{(1+y)^3}\right)+\frac{\partial}{\partial y}\left(\frac{Q^+}{(1+y)^3}\right)\right]\mathrm{d}x\ \mathrm{d}y+\int_{\beta(h)}^{\alpha(h)}\frac{P^+(0,y)}{(1+y)^3}\mathrm{d}y\\ &+\iint_{D_h^-}\left[\frac{\partial}{\partial x}\left(\frac{P^-}{(1+y)^3}\right)+\frac{\partial}{\partial y}\left(\frac{Q^-}{(1+y)^3}\right)\right]\mathrm{d}x\ \mathrm{d}y-\int_{\alpha(h)}^{\beta(h)}\frac{P^-(0,y)}{(1+y)^3}\mathrm{d}y\\ =&\sum_{k=0}^n\overline{m}_k(h)I_{k-4}(h)+ \sum_{k=0}^{n+1}\widetilde{m}_k(h)J_{k-4}(h), \end{split} \end{equation} where \[ I_k(h)=\int_{\beta(h)}^{\alpha(h)}(1+y)^{k}\sqrt{1+2y-h(1+y)^2}\mathrm{d}y,\quad J_k(h)=\int_{\beta(h)}^{\alpha(h)}(1+y)^{k}\mathrm{d}y, \] and $\overline{m}_{k}$, $\widetilde{m}_{k}$ are polynomials in $h$ with degrees \begin{equation}\begin{split}\label{mk} \deg\overline{m}_k\leq[\frac{k}{2}],\quad \deg\widetilde{m}_k\leq[\frac{k}{2}], \end{split}\end{equation} which are determined by Newton's formula and some qualitative analysis, see \cite{LLLZ} for details. It is worth noting that when $n=0$, $\widetilde{m}_0(h)=0$. We exploit the same method as in subsection \ref{sub:S11} to obtain the expressions of $I_k(h)$ and $J_k(h)$ first.
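As in Section \ref{sec:S1}, the closed forms below can be cross-checked numerically; this pure Python sketch (our own, at an arbitrarily chosen $h=1/2$) uses the factorization $1+2y-h(1+y)^2=h(\alpha(h)-y)(y-\beta(h))$ and the same $\sin^2 t$ substitution:

```python
import math

def I_J_S2(k, h, n=20000):
    # Quadrature for the S_2 integrals, using
    # 1 + 2y - h(1+y)^2 = h*(alpha - y)*(y - beta) and y = beta + d sin^2 t.
    s1 = math.sqrt(1 - h)
    beta = (1 - h - s1) / h
    d = 2 * s1 / h                              # alpha - beta
    sqh = math.sqrt(h)
    I = J = 0.0
    step = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * step
        s, c = math.sin(t), math.cos(t)
        y = beta + d * s * s
        dy = 2 * d * s * c * step
        J += (1 + y) ** k * dy
        I += (1 + y) ** k * (sqh * d * s * c) * dy
    return I, J

h = 0.5
I2, J2 = I_J_S2(-2, h)
assert abs(I2 - math.pi * (1 - math.sqrt(h))) < 1e-6   # I_{-2} = (1 - sqrt h)*pi
assert abs(J2 - 2 * math.sqrt(1 - h)) < 1e-6           # J_{-2} = 2 sqrt(1-h)
```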
\begin{equation}\begin{split}\label{Ik2} I_{k}(h)&=\sqrt{h}(1-h)\sum_{i=[(k+1)/2]}^{k}C_{i,k}h^{-i-2},\quad \quad \quad \quad k>0, \end{split}\end{equation} where $C_{i,k}, i=[(k+1)/2],[(k+1)/2]+1,\cdots,k$ are constants, and \[ C_{k,k}=2^{k+2}\int_{0}^{\infty}\frac{t^{2}(1+t^{2k})}{(1+t^2)^{k+3}}\mathrm{d}t>0,\quad C_{[(k+1)/2],k}\neq0, \] \begin{equation}\begin{split}\label{I2} &I_{-4}(h)=I_{-3}(h)=\frac{(1-h)\pi}{2},\\ &I_{-2}(h)=(1-\sqrt h)\pi,\\ &I_{-1}(h)=\frac{(1-\sqrt h)\pi}{\sqrt h},\\ &I_0(h)=\frac{(1-h)\pi}{2h^{3/2}}. \end{split}\end{equation} \begin{equation}\begin{split}\label{Jk2} J_k(h)&=\frac{2\sqrt{1-h}}{(k+1)h^{k+1}}\sum_{j=0}^{[k/2]}C_{k+1}^{2j+1}(1-h)^j, \quad k>0, \end{split}\end{equation} where $C_{k+1}^{2j+1}, j=0,1,\cdots,[k/2]$ are combinatorial numbers and \begin{equation}\begin{split}\label{J2} &J_{-4}(h)=\frac{2}{3}\sqrt{1-h}(4-h),\\ &J_{-3}(h)=J_{-2}(h)=2\sqrt{1-h},\\ &J_{-1}(h)=\ln\left(\frac{1+\sqrt{1-h}}{1-\sqrt{1-h}}\right),\\ &J_0(h)=\frac{2\sqrt{1-h}}{h}. \end{split}\end{equation} Thus, using \eqref{M2}-\eqref{J2}, we have the following result. \begin{proposition}\label{PS21} The first order Melnikov function $M(h)$ for system $S_2$ is: \begin{equation*} \begin{split} M(h)=&\alpha_0(1-h)+\beta_0\sqrt{1-h},\quad n=0,\\ M(h)=&\alpha_{0}(1-h)+\sqrt{1-h}(\beta_0+\beta_1h),\quad n=1,\\ M(h)=&\sum_{k=0}^3\alpha_{k}h^{k/2}+\sqrt{1-h}(\beta_0+\beta_1h) +J_{-1}(h)(\gamma_0+\gamma_1h),\quad n=2,\\ M(h)=&\sum_{k=0}^3\alpha_{k}h^{k/2}+\sum_{k=2-n}^{-1}\alpha_{k}h^{k+1/2}+\sqrt{1-h}\sum_{k=2-n}^1\beta_k h^k+J_{-1}(h)(\gamma_0+\gamma_1h), \quad n\geq3,\\ \end{split} \end{equation*} where $J_{-1}(h)$ is given in \eqref{J2}, and $\alpha_k, k=2-n,3-n,\cdots,3$, $\beta_k, k=2-n,3-n,\cdots,1$ and $\gamma_0, \gamma_1$ are constants; $\alpha_0,\beta_0$ are independent for $n=0$, $\alpha_0,\beta_0,\beta_1$ are independent for $n=1$, and for $n\geq2$ all the coefficients in $M(h)$ are independent except for some of the $\alpha_k$.
\end{proposition} \begin{proof} The computation of $M(h)$ is straightforward, and is thus omitted here. In the following, we prove the independence of the coefficients. Since $I_k(1)=0$ and $J_k(1)=0$ for any $k\geq-4$, we have \[ M(1)=\sum_{k=2-n}^3\alpha_k=0, \] which implies that $\alpha_k, k=2-n,3-n,\cdots,3$ are linearly dependent, and some $\alpha_k$ can be expressed in terms of the others. More concretely, similarly to subsection \ref{sub:S12}, we obtain \[ \frac{\partial(\alpha_0,\beta_0)} {\partial(b^+_{0,0},a^+_{0,0})} =-3\pi\neq0,\quad \mbox{for}\ n=0, \] \[ \frac{\partial(\alpha_0,\beta_0,\beta_1)} {\partial(b^+_{0,0},a^+_{0,0},b^+_{1,0})} =-6\pi\neq0,\quad \mbox{for}\ n=1, \] \[ \frac{\partial(\alpha_1,\alpha_2,\alpha_3,\beta_0,\beta_1,\gamma_0,\gamma_1)} {\partial(b^+_{0,2},b^+_{0,0},b^+_{2,0},a^+_{0,0},b^+_{1,0},a^+_{0,2},a^+_{2,0})} =6\pi^3\neq0,\quad \mbox{for}\ n=2, \] and for $n\geq3$, \[ \frac{\partial(\alpha_3,\alpha_2,\alpha_1,\alpha_{-1},\alpha_{-2},\cdots,\alpha_{2-n}, \beta_1,\beta_0,\beta_{-1},\cdots,\beta_{2-n},\gamma_0,\gamma_1)} {\partial(b^+_{2,0},b^+_{0,0},b^+_{0,2},a^+_{1,2},b^+_{0,4},\cdots,b^+_{0,n}, b^+_{1,0},a^+_{0,0},a^+_{0,3}\cdots,a^+_{0,n},a^+_{0,2},a^+_{2,0})} \neq0. \] \end{proof} \subsection{Zeros of $M(h)$} Consider the first order Melnikov function $M(h)$ obtained in Proposition \ref{PS21}. Then \begin{proposition} $H_2(n)=n+1$ for $n=0,1$, and $H_2(n)=2n+2$ for $n\geq2$. \end{proposition} \begin{proof} It is easy to verify that $M(h)$ has at most 1 zero in $(0,1)$ for $n=0$, and at most 2 zeros in $(0,1)$ for $n=1$, which shows that $H_2(n)\leq n+1$ for $n=0,1$. Since the functions $1-h, \sqrt{1-h}$ and $h\sqrt{1-h}$ are linearly independent, by Lemma \ref{le:CGP} and Proposition \ref{PS21}, $H_2(n)\geq n+1$ for $n=0,1$. The first conclusion of the proposition holds. For $n\geq2$, we compute the second derivative of $M(h)$ first.
\[ M''(h)=\frac{1}{\sqrt{h}}\sum_{k=1-n}^{0}\widetilde\alpha_{k}h^{k} +\frac{1}{(1-h)^{3/2}}\sum_{k=-n}^{1}\widetilde\beta_{k}h^{k}, \] where $\widetilde\alpha_{k}, \widetilde\beta_{k}$ are linear combinations of $\alpha_{k}, \beta_{k}$, and are independent. Let $F(h)=h^{n-\frac{1}{2}}M''(h)$, then \[ F(h)=\sum_{k=0}^{n-1}\widetilde\alpha_{k-n+1}h^{k} +\frac{1}{h^{1/2}(1-h)^{3/2}}\sum_{k=0}^{n+1}\widetilde\beta_{k-n}h^{k}, \] which has the same zeros as $M''(h)$ in $(0,1)$. By Lemma \ref{lem:P}, \[ F^{(n)}(h)=\frac{1}{h^{n+1/2}(1-h)^{n+3/2}}\sum_{k=0}^{n+1}\overline\beta_{k}h^{k}. \] Obviously, $F^{(n)}(h)$ has at most $n+1$ zeros in $(0,1)$. Thus, $F(h)$, as well as $M''(h)$, has at most $2n+1$ zeros in $(0,1)$. It follows that $M(h)$ has at most $2n+3$ zeros. Since $M(1)=0$, $M(h)$ has at most $2n+2$ zeros in $(0,1)$. That is, $H_2(n)\leq 2n+2$ for $n\geq2$. On the other hand, $\sum_{k=2-n}^3\alpha_k=0$ shows that $$\alpha_0=-\sum_{k=2-n}^{-1}\alpha_k-\sum_{k=1}^3\alpha_k,$$ and $M(h)$ can be rewritten as \begin{equation}\label{Mr} M(h)=\sum_{k=1}^3\alpha_{k}(h^{k/2}-1)+\sum_{k=2-n}^{-1}\alpha_{k}(h^{k+1/2}-1)+\sqrt{1-h}\sum_{k=2-n}^1\beta_k h^k+J_{-1}(h)(\gamma_0+\gamma_1h). \end{equation} Since $h^{k/2}-1, k=1,2,3$, $h^{k+1/2}-1, k=2-n,\cdots,-1$, $h^k\sqrt{1-h}, k=2-n,\cdots,1$ and $h^kJ_{-1}(h), k=0,1$ are linearly independent, it follows from Lemma \ref{le:CGP} and Proposition \ref{PS21} that $H_2(n)\geq2n+2$ for $n\geq2$. The second part of the proposition follows. Specifically, $H_2(2)=6$.
\end{proof} \section{Zeros of $M(h)$ for system $S_3$}\label{sec:S3} Consider the piecewise smooth polynomial perturbations of degree $n$ of system $S_3$: \begin{equation} \left(\begin{array}{ll}\dot{x}\\[2ex] \dot{y}\end{array}\right)=\label{S3}\left\{\begin{array}{ll} \left(\begin{array}{ll}-y+\frac{1}{4}x^{2}+\varepsilon P^+(x,y)\\ x(1+y)+\varepsilon Q^+(x,y)\end{array}\right), & \mbox{ $x> 0$,} \\[2ex] \left(\begin{array}{ll}-y+\frac{1}{4}x^{2}+\varepsilon P^-(x,y)\\ x(1+y)+\varepsilon Q^-(x,y)\end{array}\right), & \mbox{ $x< 0$,} \end{array} \right. \end{equation} where $P^{\pm}(x,y)$ and $Q^{\pm}(x,y)$ are given by \eqref{PQ}. For ${\varepsilon=0}$, a first integral of system \eqref{S3} is \[ H=\dfrac{(x^2+4y+8)^2}{1+y}, \] and the integrating factor is $R=\dfrac{4(x^2+4y+8)}{(1+y)^2}$. \subsection{Expression of $M(h)$ and independence of the coefficients} Letting $x=2\bar x$ and $y=\bar y(2+\bar y)$, system \eqref{S3} is transformed into the following perturbation of system $S_1$ (we omit ``\ $\bar{}$\ '' below for convenience): \begin{equation} \left(\begin{array}{ll}\dot{x}\\[2ex] \dot{y}\end{array}\right)=\label{S31}\left\{\begin{array}{ll} \left(\begin{array}{ll}-y+\frac{1}{2}x^{2}-\frac{1}{2}y^{2}+\frac{\varepsilon}{2} P^+\left(2x,y(2+y)\right)\\ x(1+y)+\frac{\varepsilon}{2(1+y)} Q^+\left(2x,y(2+y)\right)\end{array}\right), & \mbox{ $x> 0$,} \\[2ex] \left(\begin{array}{ll}-y+\frac{1}{2}x^{2}-\frac{1}{2}y^{2}+\frac{\varepsilon}{2} P^-\left(2x,y(2+y)\right)\\ x(1+y)+\frac{\varepsilon}{2(1+y)} Q^-\left(2x,y(2+y)\right)\end{array}\right), & \mbox{ $x< 0$.} \end{array} \right.
\end{equation} Thus, the first order Melnikov function of system \eqref{S31} is \begin{equation*}\label{M} \begin{split} M(h)=& \iint_{D_h^+}\left[\frac{\partial}{\partial x}\left(\frac{P^+}{(1+y)^2}\right)+\frac{\partial}{\partial y}\left(\frac{Q^+}{(1+y)^3}\right)\right]\mathrm{d}x\ \mathrm{d}y+\int_{\beta(h)}^{\alpha(h)}\frac{P^+|_{x=0}}{(1+y)^2}\mathrm{d}y\\ &+\iint_{D_h^-}\left[\frac{\partial}{\partial x}\left(\frac{P^-}{(1+y)^2}\right)+\frac{\partial}{\partial y}\left(\frac{Q^-}{(1+y)^3}\right)\right]\mathrm{d}x\ \mathrm{d}y+\int_{\beta(h)}^{\alpha(h)}\frac{P^-|_{x=0}}{(1+y)^2}\mathrm{d}y,\quad h>0,\\ \end{split} \end{equation*} where $P^{\pm}$ and $Q^{\pm}$ can be rewritten as \begin{equation*}\begin{split} P^{\pm}=\sum_{i+j=0}^{n}c_{ij}^{\pm}x^i(1+y)^{2j},\quad Q^{\pm}=\sum_{i+j=0}^{n}d_{ij}^{\pm}x^i(1+y)^{2j}, \end{split}\end{equation*} with $c_{ij}^{\pm}$ and $d_{ij}^{\pm}$ being linear combinations of $a_{ij}^{\pm}$ and $b_{ij}^{\pm}$, respectively. By the results obtained in section \ref{sec:S1}, we have the following statements. \begin{proposition}\label{PS31}The first order Melnikov function $M(h)$ for system $S_3$ is: \begin{equation*}\label{M3} \begin{split} M(h)=&\alpha_0h(4+h)(2+h)+\beta_0\sqrt{h(4+h)},\quad\ n=0,\\ M(h)=&h(\alpha_{0}+\alpha_{1}(4+h)(2+h))+\sqrt{h(4+h)}(\beta_0+\beta_1(2+h)^2),\quad\ n=1,\\ M(h)=&h(\alpha_{0}+\alpha_{1}h+\alpha_{2}h^2)+\sqrt{h(4+h)}(\beta_0+\beta_1(2+h)^2)+\gamma_0(2+h)\ln(1+\alpha(h)),\ n=2,\\ M(h) =&h(\alpha_{-2}+\alpha_{-1}h+\alpha_{0}h^2)+h(4+h)\sum_{i=1}^{n-2}\alpha_i(2+h)^{2i}\\ &+\sqrt{h(4+h)}\sum_{i=0}^{n-1}\beta_i(2+h)^{2i}+\gamma_0(2+h)\ln(1+\alpha(h)),\quad\ n=3,4,\\ M(h) =&h(\alpha_{-2}+\alpha_{-1}h+\alpha_{0}h^2)+h(4+h)\sum_{i=1}^{n-2}\alpha_i(2+h)^{2i}\\ &+\sqrt{h(4+h)}\sum_{i=0}^{n-1}\beta_i(2+h)^{2i}+(2+h)(\gamma_0+\gamma_1(2+h)^2)\ln(1+\alpha(h)), \quad\ n\geq5, \end{split} \end{equation*} where $\alpha(h)=(h+\sqrt{h(4+h)})/2$. Moreover, all the coefficients of $M(h)$ are independent. 
\end{proposition} \subsection{Zeros of $M(h)$} Consider the first order Melnikov function $M(h)$ obtained in Proposition \ref{PS31}. Then \begin{proposition} $H_3(0)=1$, $H_3(n)=2n+1$ for $n=1,2$, and $H_3(n)=2n+2$ for $n\geq3$. \end{proposition} \begin{proof} For $n=0$, $M(h)$ has at most 1 zero in $(0,+\infty)$ by a direct computation. Since the two generating functions are linearly independent, Lemma \ref{le:CGP} and Proposition \ref{PS31} give $H_3(0)\geq1$; thus $H_3(0)=1$. For $n=1$, let \begin{equation*}\begin{split} &f_1=h, \quad f_2=h(4+h)(2+h),\quad f_3=\sqrt{h(4+h)}, \quad f_4=\sqrt{h(4+h)}(2+h)^2. \end{split}\end{equation*} Then \begin{equation*}\begin{split} W_1&=h>0,\\ W_2&=2h^2(3+h)>0,\\ W_3&=\frac{4(18+16h+3h^2)\sqrt{h(4+h)}}{(4+h)^2}>0,\\ W_4&=-\frac{192(6+h)}{h(4+h)^3}<0, \end{split}\end{equation*} hence $M(h)$ has at most 3 zeros in $(0,+\infty)$ by the \emph{Chebyshev} criterion. For $n=2$, it follows from \begin{equation}\label{S32} M^{(4)}(h)=-\frac{2\left(8(3\beta_0+48\beta_1+4\gamma_0)+2(12\beta_0+12\beta_1+13\gamma_0)(2+h)^2-\gamma_0 (2+h)^4\right)}{(h(4+h))^{7/2}} \end{equation} that $M^{(4)}(h)$ has at most $2$ zeros in $(0,+\infty)$. Thus by Rolle's Theorem, $M(h)$ has at most $6$ zeros. Since $M(0)=0$, $M(h)$ has at most $5$ zeros in $(0,+\infty)$. Since the generating functions of $M(h)$ are linearly independent, $H_3(n)=2n+1$ for $n=1,2$ follows from Lemma \ref{le:CGP} and Proposition \ref{PS31}. For $n\geq3$, \[ M^{(4)}(h)=\sum_{k=0}^{n-3}\widetilde{\alpha}_k(2+h)^{2k}+\frac{1}{(h(4+h))^{7/2}}\sum_{k=0}^{n+1}\widetilde{\beta}_k(2+h)^{2k}. \] Let $F(h)=M^{(4)}(h)$ and $z=(2+h)^2$; then \[ F(z)=\sum_{k=0}^{n-3}\widetilde{\alpha}_k z^{k}+\frac{1}{(z-4)^{7/2}}\sum_{k=0}^{n+1}\widetilde{\beta}_kz^{k}, \quad z>4. \] It is easy to obtain that \[ F^{(n-2)}(z)=\frac{1}{(z-4)^{n+3/2}}\sum_{k=0}^{n+1}\bar{\beta}_kz^{k}, \quad z>4.
\] Hence, by Rolle's Theorem $F(z)$ has at most $2n-1$ zeros in $(4,+\infty)$, which implies that $M^{(4)}(h)$ has at most $2n-1$ zeros in $(0,+\infty)$. Thus $M(h)$ has at most $2n+3$ zeros. It follows from $M(0)=0$ that $M(h)$ has at most $2n+2$ zeros in $(0,+\infty)$. We remark that the upper bound is sharp when $n\geq5$ by Lemma \ref{le:CGP} and Proposition \ref{PS31}, i.e., $H_3(n)=2n+2$ for $n\geq5$. In what follows, we show that the upper bound can also be reached when $n=3,4$. Here the \emph{Chebyshev} criterion does not apply directly, since the last Wronskian determinant has zeros. Lemma \ref{le:NT} is needed for $n=3$, in which case the last Wronskian determinant has a simple zero. For $n=4$, however, the last Wronskian determinant has two simple zeros, and we apply Lemma \ref{le:NT2} to show that the upper bound can be achieved. For $n=3$, let \begin{equation*}\begin{split} &f_i=h^{i}, \ i=1,2,3,\\ &f_4=h(4+h)(2+h)^2,\\ &f_i=\sqrt{h(4+h)}(2+h)^{2(i-5)},\ i=5,6,7,\\ &f_8=(2+h)\ln(1+\alpha(h)), \end{split}\end{equation*} then \begin{equation*}\begin{split} W_1=&h>0,\\ W_2=&h^2>0,\\ W_3=&2h^3>0,\\ W_4=&12h^4>0,\\ W_5=&\frac{576(35+30h+9h^2+h^3)\sqrt{h(4+h)}}{(4+h)^4}>0,\\ W_6=&-\frac{138240(2+h)(70 + 56 h + 14 h^2 + h^3)}{h^3(4+h)^7}<0,\\ W_7=&-\frac{99532800(2+h)^3} {h^6(4+h)^{10}\sqrt{h(4+h)}}Y_7<0,\\ W_8=&\frac{4777574400 (2 + h)^6((1 - 2 h)^2 + 11 h^2 + 8 h^3 + h^4)} {h^{13}(4+h)^{13}\sqrt{h(4+h)}}Y_8, \end{split}\end{equation*} where \begin{equation*}\begin{split} Y_7=&112(1-h)^2+560h^2+1192h^3+676h^4+178h^5+22h^6+h^7>0,\\ Y_8=&1680\ln(1+\alpha(h))\\ &-h\frac{1057 + 30452 h^2 + 47 (20 h - 7)^2+ 31164 h^3 + 5334 h^4 + 336 h^5 + 88 h^6 + 18 h^7 + h^8}{\sqrt{h(4+h)}((1 - 2 h)^2 + 11 h^2 + 8 h^3 + h^4)}. \end{split}\end{equation*} Since \[ Y'_8=-\frac{2h^3(2 + h)(-1 + 8 h + 2 h^2)Y_7}{(4 + h)(1 - 4 h + 15 h^2 + 8 h^3 + h^4)^2\sqrt{h(4+h)}}, \] which has a zero at $h^*=(3\sqrt{2}-4)/2$, we obtain that $Y_8$ increases when $h\in(0,h^*)$ and decreases when $h\in(h^*,+\infty)$.
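Several of the closed-form Wronskians in this proof, and the critical point $h^*$ of $Y_8$, can be double-checked mechanically. A minimal sketch, assuming sympy is available (it verifies the $n=1$ determinants $W_2$, $W_3$ displayed earlier in the proof, the sign of $W_4$, and $h^*$):

```python
import sympy as sp

h = sp.symbols('h', positive=True)

# ordered set (f_1, ..., f_4) used in the n = 1 part of the proof
fs = [h, h*(4 + h)*(2 + h), sp.sqrt(h*(4 + h)), sp.sqrt(h*(4 + h))*(2 + h)**2]

# W_2 agrees symbolically with the displayed closed form
W2 = sp.wronskian(fs[:2], h)
assert sp.simplify(W2 - 2*h**2*(3 + h)) == 0

# spot-check the displayed W_3 numerically at h = 2
W3 = sp.wronskian(fs[:3], h)
target3 = 4*(18 + 16*h + 3*h**2)*sp.sqrt(h*(4 + h))/(4 + h)**2
assert abs(sp.N((W3 - target3).subs(h, 2))) < 1e-9

# W_4 is negative on (0, +oo), as the Chebyshev-criterion argument needs
W4 = sp.wronskian(fs, h)
assert sp.N(W4.subs(h, 1)) < 0

# the positive zero of -1 + 8h + 2h^2 (the relevant factor of Y_8')
hstar = sp.solve(2*h**2 + 8*h - 1, h)[0]
assert sp.simplify(hstar - (3*sp.sqrt(2) - 4)/2) == 0
```

The same pattern extends to the higher-order determinants, at the cost of longer symbolic simplifications.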
Note that $\lim_{h\rightarrow0^{+}}Y_8=0$ and $\lim_{h\rightarrow +\infty}Y_8=-\infty$. Thus, $Y_8$ has a simple zero in $h\in(0,+\infty)$; equivalently, $W_8$ has a simple zero in $h\in(0,+\infty)$. It follows from Lemma \ref{le:NT} that $M(h)$ can have $8$ zeros in $h\in(0,+\infty)$. For $n=4$, let \begin{equation*}\begin{split} f_i&=h^{i},\ i=1,2,3, \\ f_i&=h(4+h)(2+h)^{2(i-3)},\ i=4,5,\\ f_i&=\sqrt{h(4+h)}(2+h)^{2(i-6)},\ i=6,7,8,9,\\ f_{10}&=(2+h)\ln(1+\alpha(h)), \end{split}\end{equation*} then $W_i$, $i=1,2,3,4$, are the same as for $n=3$ and are positive for $h>0$, and \begin{equation*}\begin{split} W_5=&288 h^5 (12 + 5 h)>0,\\ W_6=&-\frac{69120 (2 + h) (756 + 847 h + 364 h^2 + 73 h^3 + 6 h^4)\sqrt{h(4+h)}}{(4+h)^5}<0,\\ W_7=&-\frac{696729600 (2 + h)^3 (3 + h) (84 + 63 h + 15 h^2 + h^3)}{h^4 (4 + h)^9}<0,\\ W_8=&\frac{501645312000(2 + h)^6} {h^8(4+h)^{13}\sqrt{h(4+h)}}\big(1038 + 588 (h - 1)^2 + 3384 h^3 + 390 (h^2 - 1)^2 + 2794 h^4\\& + 1268 h^5 + 258 h^6 + 26 h^7 + h^8\big)>0,\\ W_9=&\frac{20226338979840000 (2 + h)^{10} } {h^{13} (4 + h)^{18}}Y_9>0,\\ W_{10}=&\frac{5825185626193920000 (2 + h)^{15}Z_1} {h^{22} (4 + h)^{22}} \left(-5040 \ln(1+\alpha(h)) +\frac{hZ_2}{\sqrt{h(4+h)}Z_1}\right), \end{split}\end{equation*} where \begin{equation*}\begin{split} Y_9=&700 + 25424 h^2 + 257 (12 h - 2)^2 + 47228 h^3 + 15520 h^2 (h^2 - 1)^2+27588 h^6 \\ & + 1348 h^3 (h^2 - 1)^2 + 36222 h^7+ 15253 h^8 + 3520 h^9 + 472 h^{10} + 34 h^{11} + h^{12}>0,\\ Z_1=&15 - 160 h + 1304 h^2 - 1248 h^3 - 76 h^4 + 920 h^5 + 450 h^6 + 80 h^7 + 5 h^8>0,\\ Z_2=&151200 - 1600200 h + 13008660 h^2 - 11470860 h^3 - 1925118 h^4 + 9318900 h^5 + 5293110 h^6\\ & + 1120770 h^7 + 102665 h^8 + 6340 h^9 + 1110 h^{10} + 130 h^{11} + 5 h^{12}>0, \end{split}\end{equation*} by Sturm Theorem.
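Positivity claims of the kind ``$Z_1>0$ by Sturm Theorem'' are mechanical to reproduce; a sketch for $Z_1$, assuming sympy is available:

```python
import sympy as sp

h = sp.symbols('h')
# Z_1 as displayed above
Z1 = (15 - 160*h + 1304*h**2 - 1248*h**3 - 76*h**4 + 920*h**5
      + 450*h**6 + 80*h**7 + 5*h**8)
# Sturm-type real-root isolation: no real root lies in (0, +oo),
# and Z_1(0) = 15 > 0, hence Z_1 > 0 for all h > 0
positive_roots = [r for r in sp.real_roots(Z1) if r.evalf() > 0]
assert positive_roots == []
assert Z1.subs(h, 1) == 1290
```

$Y_9$ and $Z_2$ can be certified in exactly the same way.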
Similarly, denote the function in the parentheses of $W_{10}$ by $Y_{10}$; then \[ Y'_{10}=\frac{20h^4(2 + h) (3 - 60 h + 65 h^2 + 40 h^3 + 5 h^4)Y_9 }{(4 + h)Z_1^2\sqrt{h(4+h)}} \] has two simple zeros $h_1^*$ and $h_2^*$ in $h\in(0,+\infty)$ with $h_1^*<h_2^*$. Therefore $Y_{10}$ increases when $h\in(0,h_1^*)\bigcup(h_2^*,+\infty)$ and decreases when $h\in(h_1^*,h_2^*)$. Notice that $\lim_{h\rightarrow0^{+}}Y_{10}=0$, $Y_{10}(1/2)\approx-0.58<0$, and $\lim_{h\rightarrow +\infty}Y_{10}=+\infty$. Hence $Y_{10}$ has two simple zeros $h_1$ and $h_2$. Obviously, the ordered set $(f_1,f_2,\cdots,f_{10})$ satisfies $W_9(h_1)\neq0$, $W_{10}(h_1)=0$ and $W'_{10}(h_1)\neq0$. It follows from Lemma \ref{le:NT2} that $M(h)$ can have $10$ zeros in $(0,+\infty)$. \end{proof} \begin{remark} If $a_{00}^{\pm}=b_{00}^{\pm}=0$, then $\beta_0=-4\beta_1-\gamma_0$ (see Appendix A.2 for the specific expressions of $\beta_0, \beta_1$ and $\gamma_0$), and from \eqref{S32}, \[ M^{(4)}(h)=\frac{2(72\beta_1+2\gamma_0+\gamma_0(2+h)^2)}{(h(4+h))^{5/2}}. \] Thus, $M(h)$ has at most $4$ zeros in $(0,+\infty)$. This result is consistent with that in \cite{LC}. In fact, by the change \[ h=2\left(-1+\frac{1}{\sqrt{1-r^2}}\right), \] $M(h)$ can be transformed into \[ M(r)=-\frac{F(r)}{(1-r^2)^{3/2}}, \] where $F(r)$ is the averaged function in \cite{LC}, up to a difference in the coefficients. This shows that the first order Melnikov function and the first order averaged function may be equivalent in investigating the number of limit cycles of piecewise smooth polynomial differential systems.
\end{remark} \section{Zeros of $M(h)$ for system $S_4$}\label{sec:S4} Consider the piecewise smooth polynomial perturbations of degree $n$ of system $S_4$: \begin{equation} \left(\begin{array}{ll}\dot{x}\\[2ex] \dot{y}\end{array}\right)=\label{S4}\left\{\begin{array}{ll} \left(\begin{array}{ll}-y+2x^{2}-\frac{1}{2}y^{2}+\frac{\varepsilon }{8}P^+(x,y)\\ x(1+y)+\frac{\varepsilon }{8}Q^+(x,y)\end{array}\right), & \mbox{ $x> 0$,} \\[2ex] \left(\begin{array}{ll}-y+2x^{2}-\frac{1}{2}y^{2}+\frac{\varepsilon }{8}P^-(x,y)\\ x(1+y)+\frac{\varepsilon }{8}Q^-(x,y)\end{array}\right), & \mbox{ $x< 0$,} \end{array} \right. \end{equation} where $P^{\pm}(x,y)$ and $Q^{\pm}(x,y)$ are given by \eqref{PQ}. For ${\varepsilon=0}$, the first integral of system \eqref{S4} is \[ H=\frac{4x^2-2(y+1)^2+1}{(1+y)^4}, \] and the integrating factor is $R=\frac{8}{(1+y)^5}$. Here $L_h^+=\{x\geq0|H=h,-1<h<0\}$ and $L_h^-=\{x\leq0|H=h,-1<h<0\}$ are the right part and the left part of the periodic orbits surrounding the origin. $\mathcal{A}_h=(0,\alpha(h))$ and $\mathcal{B}_h=(0,\beta(h))$, where \begin{equation} \alpha(h)=-1+\sqrt{\frac{1+\sqrt{1+h}}{-h}}, \quad \beta(h)=-1+\sqrt{\frac{1-\sqrt{1+h}}{-h}}. 
\end{equation} \subsection{Expression of $M(h)$} By \eqref{Mh}, the first order Melnikov function of system \eqref{S4} is \begin{equation}\label{M4} \begin{split} M(h)=&\iint_{D_h^+}\left[\frac{\partial}{\partial x}\left(\frac{P^+}{(1+y)^5}\right)+\frac{\partial}{\partial y}\left(\frac{Q^+}{(1+y)^5}\right)\right]\mathrm{d}x\ \mathrm{d}y+\int_{\beta(h)}^{\alpha(h)}\frac{P^+(0,y)}{(1+y)^5}\mathrm{d}y\\ &+\iint_{D_h^-}\left[\frac{\partial}{\partial x}\left(\frac{P^-}{(1+y)^5}\right)+\frac{\partial}{\partial y}\left(\frac{Q^-}{(1+y)^5}\right)\right]\mathrm{d}x\ \mathrm{d}y-\int_{\beta(h)}^{\alpha(h)}\frac{P^-(0,y)}{(1+y)^5}\mathrm{d}y\\ =&\sum_{k=0}^{n+2[\frac{n}{2}]}\overline{m}_k(h)I_{k-6}(h)+ \sum_{k=0}^{n+3+2[\frac{n-1}{2}]}\widetilde{m}_k(h)J_{k-6}(h), \end{split} \end{equation} where \[ I_k(h)=\int_{\beta(h)}^{\alpha(h)}(1+y)^{k}\sqrt{-1+2(1+y)^2+h(1+y)^4}\mathrm{d}y,\quad J_k(h)=\int_{\beta(h)}^{\alpha(h)}(1+y)^{k}\mathrm{d}y, \] and $\overline{m}_{k}$, $\widetilde{m}_{k}$ are polynomials of $h$ with degree \begin{equation}\begin{split}\label{mk4} &\deg\overline{m}_k\leq[\frac{k}{4}],\quad \ \deg\widetilde{m}_k\leq[\frac{k}{4}], \end{split}\end{equation} which are determined by Newton's formula and some qualitative analysis, see \cite{LLLZ} for details. It is worth noting that $\widetilde{m}_0(h)=0$ for $n=0$, $\overline{m}_{2n-1}=0$ for $n$ even, and $\widetilde{m}_{2n+1}=0$ for $n$ odd, and \begin{equation}\label{mkn} \begin{split} \overline{m}_k(h)&=\left\{\begin{array}{ll} \displaystyle\sum_{i=0}^{[k/4]}c_ih^i & \mbox{for $k\leq n$,} \\ \displaystyle\sum_{i=[(k+1-n)/2]}^{[k/4]}c_ih^i & \mbox{for $k\geq n+1$,} \end{array} \right.\\ \widetilde{m}_k(h)&=\left\{\begin{array}{ll} \displaystyle\sum_{i=0}^{[k/4]}c_ih^i \quad & \mbox{for $k\leq n+1$,} \\ \displaystyle\sum_{i=[(k-n)/2]}^{[k/4]}c_ih^i \quad & \mbox{for $k\geq n+2$.} \end{array} \right. \end{split} \end{equation} Formula \eqref{mkn} is essential to obtain a more precise expression of $M(h)$. 
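Before expanding $M(h)$, the claimed first integral $H$ and integrating factor $R$ of the unperturbed system \eqref{S4} are easy to confirm symbolically; a minimal sketch, assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
# unperturbed S_4 vector field
xdot = -y + 2*x**2 - sp.Rational(1, 2)*y**2
ydot = x*(1 + y)
H = (4*x**2 - 2*(y + 1)**2 + 1)/(1 + y)**4
R = 8/(1 + y)**5

# first integral: the derivative of H along the flow vanishes identically
assert sp.simplify(sp.diff(H, x)*xdot + sp.diff(H, y)*ydot) == 0
# integrating factor: the divergence of R*(xdot, ydot) vanishes identically
assert sp.simplify(sp.diff(R*xdot, x) + sp.diff(R*ydot, y)) == 0
```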
Using the results in \cite{LLLZ}, the expressions of $I_k(h)$ are as follows. For $k$ odd, i.e. $k=2m+1$, \begin{equation}\begin{split}\label{Ik4} I_{2m+1}(h) =4(-h)^{-\frac{3}{2}}(1+h)\sum_{i=0}^{[m/2]}C_{i,m}(-h)^{i-m}, \quad m\geq0, \end{split}\end{equation} and \begin{equation}\begin{split}\label{I4} &I_{-5}(h)=\frac{(1+h)\pi}{4},\\ &I_{-3}(h)=\frac{1}{2}(1-\sqrt{-h})\pi,\\ &I_{-1}(h)=\frac{1}{2}((-h)^{-\frac{1}{2}}-1)\pi. \end{split}\end{equation} For $k$ even, i.e. $k=2m$, \begin{equation}\begin{split}\label{Ik41} I_{2m}(h)=&(-h)^{-m-1}(f_m \bar I_{2}+g_m \bar I_0), \quad m\geq1, \end{split}\end{equation} and \begin{equation}\begin{split}\label{I41} &I_{-6}(h)=\bar I_2,\\[1ex] &I_{-4}(h)=\bar I_0,\\ &I_{-2}(h)=\dfrac{4\bar I_0-5\bar I_2}{h},\\ &I_{0}(h)=-\dfrac{\bar I_0}{h}, \end{split}\end{equation} where $f_m$ and $g_m$ are polynomials with respect to $h$ with $\deg f_m=[(m-1)/2]$ and $\deg g_m=[m/2]$, and \begin{equation}\label{barIk} \bar I_k=\int_{u_1}^{u_2}u^k\sqrt{h+2u^2-u^4}\mathrm{d}u. \end{equation} Here $u_1=\sqrt{-h}(1+\beta(h))$ and $u_2=\sqrt{-h}(1+\alpha(h))$ are the roots of $h+2u^2-u^4=0$, and $\bar I_0$ and $\bar I_2$ satisfy the Picard-Fuchs equation: \begin{equation}\label{I02} 4h(1+h)\left(\begin{array}{ll}\bar I_0'\\[1ex] \bar I_2'\end{array}\right)= \left(\begin{array}{ll}4+3h &-5\\[1ex] -h &5h\end{array}\right) \left(\begin{array}{ll}\bar I_0\\[1ex] \bar I_2\end{array}\right). \end{equation} We get $J_k(h)$ by direct computations, \begin{equation*}\begin{split} J_k(h)&=\frac{(1+\alpha(h))^{k+1}-(1+\beta(h))^{k+1}}{k+1}\\ &=\frac{\left(1+\sqrt{1+h}\right)^{\frac{k+1}{2}}-\left(1-\sqrt{1+h}\right)^{\frac{k+1}{2}}}{(k+1)(-h)^{\frac{k+1}{2}}},\quad k\neq-1. 
\end{split}\end{equation*} If $k$ is odd, i.e., $k=2m+1$, then \begin{equation}\begin{split}\label{Jk4} J_{2m+1}(h)=&\frac{\left(1+\sqrt{1+h}\right)^{m+1}-\left(1-\sqrt{1+h}\right)^{m+1}}{(2m+2)(-h)^{m+1}}\\ =&\frac{\sqrt{1+h}}{(m+1)(-h)^{m+1}}\sum_{j=0}^{[\frac{m}{2}]}C_{m+1}^{2j+1}(1+h)^{j},\quad \quad m\geq0, \end{split}\end{equation} and if $k$ is even, i.e., $k=2m$, then \begin{equation}\begin{split}\label{Jk5} J_{2m}(h)=&\frac{\left(1+\sqrt{1+h}\right)^{m+\frac{1}{2}}-\left(1-\sqrt{1+h}\right)^{m+\frac{1}{2}}}{(2m+1)(-h)^{m+\frac{1}{2}}}\\ =&\frac{\left[\left(1+\sqrt{1+h}\right)^{m+\frac{1}{2}}-\left(1-\sqrt{1+h}\right)^{m+\frac{1}{2}}\right] \left[\left(1+\sqrt{1+h}\right)^{\frac{1}{2}}+\left(1-\sqrt{1+h}\right)^{\frac{1}{2}}\right]} {(2m+1)(-h)^{m+\frac{1}{2}}\left[\left(1+\sqrt{1+h}\right)^{\frac{1}{2}}+\left(1-\sqrt{1+h}\right)^{\frac{1}{2}}\right]}\\ =&\frac{\left(1+\sqrt{1+h}\right)^{m+1}-\left(1-\sqrt{1+h}\right)^{m+1}+ \sqrt{-h}\left[\left(1+\sqrt{1+h}\right)^{m}-\left(1-\sqrt{1+h}\right)^{m}\right]} {(2m+1)(-h)^{m+\frac{1}{2}}\sqrt{2(1+\sqrt{-h})}} \\ =&\frac{\sqrt{2(1-\sqrt{-h})}} {(2m+1)(-h)^{m+\frac{1}{2}}}\left(\displaystyle\sum_{j=0}^{[\frac{m}{2}]}C_{m+1}^{2j+1}(1+h)^{j} +\sqrt{-h}\displaystyle\sum_{j=0}^{[\frac{m-1}{2}]}C_{m}^{2j+1}(1+h)^{j}\right)\\ =&\frac{\sqrt{2(1-\sqrt{-h})}} {(2m+1)(-h)^{m+\frac{1}{2}}}\displaystyle\sum_{j=0}^{m}C_{j,m}\left(\sqrt{-h}\right)^j, \quad \quad m>0, \end{split}\end{equation} where $C_{j,m}, j=0,1,\cdots,m$ are constants, and \begin{equation}\begin{split}\label{J4} &J_{-6}(h)=\dfrac{\sqrt{2(1-\sqrt{-h})}}{5}(4+2\sqrt{-h}+h),\\ &J_{-5}(h)=J_{-3}(h)=\sqrt{1 + h},\\ &J_{-4}(h)=\dfrac{\sqrt{2(1-\sqrt{-h})}}{3}(2+\sqrt{-h}),\\ &J_{-2}(h)=\sqrt{2(1-\sqrt{-h})},\\ &J_{-1}(h)=\ln\frac{1+\sqrt{1+h}}{\sqrt{-h}},\\ &J_{0}(h)=\dfrac{\sqrt{2(1-\sqrt{-h})}}{\sqrt{-h}}. 
\end{split}\end{equation} Using \eqref{M4}-\eqref{I41} and \eqref{Jk4}-\eqref{J4}, we have \begin{proposition} The first order Melnikov function $M(h)$ for system $S_4$ is: \begin{equation*} \begin{split} M(h)=&\alpha_0\bar I_2+\beta_0\sqrt{1 + h},\quad\ n=0,\\ M(h)=&\alpha_0\bar I_2+\delta_0(1+h)+\sqrt{1-\sqrt{-h}}(\beta_0(2+\sqrt{-h})+\beta_1h) +\gamma_0\sqrt{1 + h},\quad\ n=1,\\ M(h)=&\alpha_0\bar I_2+\xi_0\bar I_0+\delta_0(1+h)+\sqrt{1-\sqrt{-h}}(\beta_0(2+\sqrt{-h})+\beta_1h) +\gamma_0\sqrt{1 + h}\\ &+\eta_0hJ_{-1}(h),\quad\ n=2,\\ M(h) =&\alpha_0\bar I_2+\xi_0\bar I_0+(1-\sqrt{-h})(\delta_0+\delta_1\sqrt{-h})+\sqrt{1-\sqrt{-h}}P_2(\sqrt{-h}) +\gamma_0\sqrt{1 + h}\\ & +\eta_0hJ_{-1}(h),\quad\ n=3,\\ M(h) =&\alpha_{-1}h^{-1}(4\bar I_0-5\bar I_2)+\alpha_0\bar I_2+\xi_0\bar I_0+(1-\sqrt{-h})(\delta_0+\delta_1\sqrt{-h})\\&+\sqrt{1-\sqrt{-h}}P_2(\sqrt{-h}) +\gamma_0\sqrt{1 + h} +(\eta_0+\eta_1h)J_{-1}(h),\quad\ n=4,\\ M(h) =&\alpha_{-1}h^{-1}(4\bar I_0-5\bar I_2)+\alpha_0\bar I_2+\xi_0\bar I_0+((-h)^{-\frac{1}{2}}-1)P_2(\sqrt{-h})\\&+\sqrt{1-\sqrt{-h}}(-h)^{-\frac{1}{2}} P_3(\sqrt{-h}) +\gamma_0\sqrt{1 + h} +(\eta_0+\eta_1h)J_{-1}(h),\quad\ n=5, \end{split} \end{equation*} and for $n\geq6$ even, \begin{equation*} \begin{split} M(h)=&((-h)^{-\frac{1}{2}}-1)P_2(\sqrt{-h}) +(-h)^{\frac{5-n}{2}}(1+h)P_{\frac{n-6}{2}}(h)+(-h)^{\frac{4-n}{2}}\left(P_{\frac{n-4}{2}}(h)\bar I_2+\bar P_{\frac{n-4}{2}}(h)\bar I_0\right)\\[1ex] &+(\eta_0+\eta_1h)J_{-1}(h)+\sqrt{1 + h}(-h)^{\frac{4-n}{2}}P_{\frac{n-4}{2}}(h) +\sqrt{1-\sqrt{-h}}(-h)^{\frac{5-n}{2}}P_{n-3}(\sqrt{-h}), \end{split} \end{equation*} for $n\geq7$ odd, \begin{equation*} \begin{split} M(h) =&((-h)^{-\frac{1}{2}}-1)P_2(\sqrt{-h}) +(-h)^{\frac{4-n}{2}}(1+h)P_{\frac{n-5}{2}}(h)+(-h)^{\frac{5-n}{2}}\left(P_{\frac{n-5}{2}}(h)\bar I_2+\bar P_{\frac{n-5}{2}}(h)\bar I_0\right)\\ &+(\eta_0+\eta_1h)J_{-1}(h)+\sqrt{1+h}(-h)^{\frac{5-n}{2}}P_{\frac{n-5}{2}}(h) +\sqrt{1-\sqrt{-h}}(-h)^{\frac{4-n}{2}}P_{n-2}(\sqrt{-h}), \end{split} 
\end{equation*} where $J_{-1}(h)$ is given in \eqref{J4}, and $P_k$ and $\bar P_k$ represent polynomials of degree $k$. \end{proposition} \begin{proof} The computations of $M(h)$ for $n\leq5$ are straightforward and are thus omitted here. We focus mainly on the case of even $n\geq6$; the case of odd $n\geq7$ can be treated in a similar way. For $n\geq6$ even, \begin{equation*} \begin{split} M(h)=&\sum_{k=0}^{2n}\overline{m}_k(h)I_{k-6}(h)+ \sum_{k=0}^{2n+1}\widetilde{m}_k(h)J_{k-6}(h)\\ =&\sum_{i=0}^{n-1}\overline{m}_{2i+1}(h)I_{2i-5}(h)+\sum_{i=0}^{n}\overline{m}_{2i}(h)I_{2i-6}(h)+ \sum_{i=0}^{n}\widetilde{m}_{2i+1}(h)J_{2i-5}(h)+\sum_{i=0}^{n}\widetilde{m}_{2i}(h)J_{2i-6}(h). \end{split} \end{equation*} Note that for $i\geq3$, \begin{equation*} \begin{split} \overline{m}_{2i+1}(h)I_{2i-5}(h)=\left\{\begin{array}{ll} (-h)^{\frac{3}{2}-i}(1+h)P_{i-2}(h) & \mbox{for $i\leq \frac{n-2}{2}$,} \\ (-h)^{\frac{5-n}{2}}(1+h)P_{\frac{n-6}{2}}(h) & \mbox{for $i\geq \frac{n}{2}$,} \end{array} \right.\\ \end{split} \end{equation*} \begin{equation*} \begin{split} \widetilde{m}_{2i+1}(h)J_{2i-5}(h)=\left\{\begin{array}{ll} \sqrt{1+h}(-h)^{2-i}P_{i-2}(h) & \mbox{for $i\leq \frac{n}{2}$,} \\ \sqrt{1+h}(-h)^{\frac{4-n}{2}}P_{\frac{n-4}{2}}(h) & \mbox{for $i\geq \frac{n+2}{2}$,} \end{array} \right.\\ \end{split} \end{equation*} and for $i\geq4$, \begin{equation*} \begin{split} \overline{m}_{2i}(h)I_{2i-6}(h)=\left\{\begin{array}{ll} (-h)^{2-i}\left(P_{[\frac{i-4}{2}]+[\frac{i}{2}]}(h)\bar I_2+P_{i-2}(h)\bar I_0\right) & \mbox{for $i\leq \frac{n}{2}$,} \\ (-h)^{\frac{4-n}{2}}\left(P_{[\frac{i-4}{2}]+\frac{n}{2}-[\frac{i+1}{2}]}(h)\bar I_2+P_{\frac{n-4}{2}}(h)\bar I_0\right) & \mbox{for $i\geq \frac{n+2}{2}$,} \end{array} \right.\\ \end{split} \end{equation*} \begin{equation*} \begin{split} \widetilde{m}_{2i}(h)J_{2i-6}(h)=\left\{\begin{array}{ll} \sqrt{(1-\sqrt{-h})}(-h)^{\frac{5}{2}-i} P_{2[\frac{i}{2}]+i-3}(\sqrt{-h}) & \mbox{for $i\leq \frac{n}{2}$,} \\
\sqrt{(1-\sqrt{-h})}(-h)^{\frac{5-n}{2}}P_{n-3+2[\frac{i}{2}]-i}(\sqrt{-h}) & \mbox{for $i\geq \frac{n+2}{2}$.} \end{array} \right.\\ \end{split} \end{equation*} Therefore, for $n\geq6$ even, \begin{equation*} \begin{split} M(h)=&\sum_{i=0}^{(n-2)/2}\overline{m}_{2i+1}(h)I_{2i-5}(h) +\sum_{i=n/2}^{n-1}\overline{m}_{2i+1}(h)I_{2i-5}(h)\\ &+\sum_{i=0}^{n/2}\overline{m}_{2i}(h)I_{2i-6}(h) +\sum_{i=(n+2)/2}^{n}\overline{m}_{2i}(h)I_{2i-6}(h)\\ &+\sum_{i=0}^{n/2}\widetilde{m}_{2i+1}(h)J_{2i-5}(h) +\sum_{i=(n+2)/2}^{n}\widetilde{m}_{2i+1}(h)J_{2i-5}(h)\\ &+\sum_{i=0}^{n/2}\widetilde{m}_{2i}(h)J_{2i-6}(h) +\sum_{i=(n+2)/2}^{n}\widetilde{m}_{2i}(h)J_{2i-6}(h)\\ =&(1-\sqrt{-h})(\delta_{-1}(-h)^{-\frac{1}{2}}+\delta_0+\delta_1\sqrt{-h}) +(-h)^{\frac{5-n}{2}}(1+h)P_{\frac{n-6}{2}}(h)\\[1ex] &+h^{-1}(P_1(h)\bar I_0+\bar P_1(h)\bar I_2) +(-h)^{\frac{4-n}{2}}\left(P_{\frac{n-4}{2}}(h)\bar I_2+\bar P_{\frac{n-4}{2}}(h)\bar I_0\right)\\[1ex] &+\gamma_0\sqrt{1 + h}+(\eta_0+\eta_1h)J_{-1}(h)+\sqrt{1+h}(-h)^{\frac{4-n}{2}}P_{\frac{n-4}{2}}(h)\\[1ex] &+\sqrt{1-\sqrt{-h}}(-h)^{-\frac{1}{2}}P_3(\sqrt{-h}) +\sqrt{1-\sqrt{-h}}(-h)^{\frac{5-n}{2}} P_{n-3}(\sqrt{-h})\\[1ex] =&((-h)^{-\frac{1}{2}}-1)P_2(\sqrt{-h}) +(-h)^{\frac{5-n}{2}}(1+h)P_{\frac{n-6}{2}}(h)+(-h)^{\frac{4-n}{2}}\left(P_{\frac{n-4}{2}}(h)\bar I_2+\bar P_{\frac{n-4}{2}}(h)\bar I_0\right)\\[1ex] &+(\eta_0+\eta_1h)J_{-1}(h)+\sqrt{1 + h}(-h)^{\frac{4-n}{2}}P_{\frac{n-4}{2}}(h) +\sqrt{1-\sqrt{-h}}(-h)^{\frac{5-n}{2}}P_{n-3}(\sqrt{-h}). \end{split} \end{equation*} \end{proof} Now, we begin to estimate the number of zeros of $M(h)$ obtained above. By the definition of $\bar I_0(h)$ (see \eqref{barIk}), it is easy to know that \[ \bar I_0(h)=-\frac{1}{2}\oint_{u^4-2u^2+y^2=h}y\mathrm{d}u=\frac{1}{2}\iint_{u^4-2u^2+y^2\leq h}\mathrm{d}\sigma>0 \] in $h\in(-1,0)$. 
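The positivity of $\bar I_0$, and the bounds on the ratio $\bar I_2/\bar I_0$ stated in Lemma \ref{le:v} below, are easy to confirm numerically from \eqref{barIk}; a minimal sketch, assuming mpmath is available:

```python
import mpmath as mp

def bar_I(k, h):
    # u_1, u_2 are the positive roots of h + 2u^2 - u^4 = 0 for -1 < h < 0
    s = mp.sqrt(1 + h)
    u1, u2 = mp.sqrt(1 - s), mp.sqrt(1 + s)
    # clip tiny negative rounding errors near the endpoints before the sqrt
    f = lambda u: u**k * mp.sqrt(max(h + 2*u**2 - u**4, mp.mpf(0)))
    return mp.quad(f, [u1, u2])

vals = []
for hv in ('-0.9', '-0.5', '-0.1'):
    hv = mp.mpf(hv)
    I0, I2 = bar_I(0, hv), bar_I(2, hv)
    assert I0 > 0
    vals.append(I2/I0)

# the ratio v = bar_I_2 / bar_I_0 stays in (4/5, 1) and decreases in h
assert all(mp.mpf('0.8') < v < 1 for v in vals)
assert vals[0] > vals[1] > vals[2]
```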
Let \begin{equation} v(h)=\frac{\bar I_2(h)}{\bar I_0(h)}, \end{equation} then by \eqref{barIk} and the Picard-Fuchs equation \eqref{I02}, we have \begin{lemma}\label{le:v} $v=v(h)$ is the solution of the differential system \begin{equation}\label{vh} \dot h=4h(1+h),\quad \dot v=-h+2(-2+h)v+5v^2, \end{equation} satisfying $v'(h)<0$ for $h\in(-1,0)$, $\lim_{h\rightarrow-1^+}v(h)=1$ and $\lim_{h\rightarrow0^-}v(h)=4/5$. \end{lemma} The proof of Lemma \ref{le:v} is given in Appendix A.3. \begin{proposition} $H_4(0)=1$ and $H_4(1)=4$. \end{proposition} \begin{proof} For $n=0$, let $f_1=\sqrt{1+h},\ f_2=\bar I_2$. Using \begin{equation*}\begin{split} W_1=&\sqrt{1+h}>0,\\ W_2=&\frac{ \bar I_0}{4\sqrt{1+h}}(3v(h)-1)>0,\\ \end{split}\end{equation*} in $h\in(-1,0)$, we know that $M(h)$ has at most one zero in $(-1,0)$ for $n=0$ by the \emph{Chebyshev criterion}. For $n=1$, let \[ f_1=\sqrt{1+h}, \quad f_2=(2+\sqrt{-h})\sqrt{1-\sqrt{-h}}, \quad f_3=h\sqrt{1-\sqrt{-h}},\quad f_4=1+h, \quad f_5=\bar I_2. \] We get \begin{equation*}\begin{split} W_1=&\sqrt{1+h}>0,\\ W_2=&-\frac{ (1-\sqrt{-h})^{3/2}}{4\sqrt{1+h}}<0,\\ W_3=&\dfrac{3(1-\sqrt{-h})^{2}}{32(1+\sqrt{-h})\sqrt{-h(1+h)}}>0,\\ W_4=&\dfrac{3(-7+\sqrt{-h})}{1024h(1+\sqrt{-h})^2\sqrt{-h(1+h)}} >0,\\ W_5=&\dfrac{9(1-\sqrt{-h})(-5+2h-h\sqrt{-h})\bar I_0}{131072h^{4}(1+h)^{11/2}} \left(\dfrac{40+7\sqrt{-h}-27h+17h\sqrt{-h}+3h^2}{-5+2h-h\sqrt{-h}}+10v(h)\right)>0, \end{split}\end{equation*} thus by the \emph{Chebyshev criterion}, $M(h)$ has at most four zeros in $(-1,0)$ for $n=1$. In fact, denote the function in the parentheses of $W_5$ by $Z_5$. It is easy to verify that $Z_5$ is not a monotone function in $(-1,0)$.
From \eqref{vh}, \[ Z'_5|_{Z_5=0}=\dfrac{-140+251\sqrt{-h}+642h+332(-h)^{3/2}+70h^2-15(-h)^{5/2}} {8\sqrt{-h}(-5+2h-h\sqrt{-h})^2} <0,\] moreover, we have the asymptotic expansions of $Z_5$ near $h=-1$ and $h=0$: \begin{equation*}\begin{split} Z_5&=-\dfrac{1}{4}(h+1)-\dfrac{41}{384}(h+1)^2+o\left((h+1)^2\right),\quad\quad h\rightarrow-1^+,\\ Z_5&=-\dfrac{7}{5}\sqrt{-h}-\dfrac{1}{10}h(-82+90\log2-15\log(-h))+o\left(-h\right),\quad\quad h\rightarrow0^-. \end{split}\end{equation*} Since $\lim_{h\rightarrow-1^+}Z_5=\lim_{h\rightarrow0^-}Z_5=0$ and $Z_5$ is always negative near $h\rightarrow-1^+$ and $h\rightarrow0^-$, if $Z_5$ had zeros in $(-1,0)$, then it would have at least two (taking multiplicity into account), and at one of them $Z'_5\geq0$ would hold, which contradicts $Z'_5|_{Z_5=0}<0$. Thus, $Z_5$ has no zeros in $(-1,0)$, which implies that $W_5$ is positive in $(-1,0)$. \end{proof} \begin{proposition} $H_4(2)=7$. \end{proposition} \begin{proof} For $n=2$, notice that by the change \[ \cos\theta=\frac{1-u^2}{\sqrt{1+h}}, \quad \theta\in[0,\pi],\] \begin{equation}\label{barI01} \begin{split} &\bar I'_0=\int_{u_1}^{u_2}\frac{1}{2\sqrt{h+2u^2-u^4}}\mathrm{d}u =\frac{1}{4}\displaystyle\int_0^{\pi}\frac{1}{\sqrt{1-\sqrt{1+h}\cos\theta}}\mathrm{d}\theta =\frac{1}{4}J(\sqrt{1+h}),\\ &\bar I'_2=\int_{u_1}^{u_2}\frac{u^2}{2\sqrt{h+2u^2-u^4}}\mathrm{d}u =\frac{1}{4}\displaystyle\int_0^{\pi}\sqrt{1-\sqrt{1+h}\cos\theta}\mathrm{d}\theta =\frac{1}{4}I(\sqrt{1+h}), \end{split} \end{equation} where \begin{equation}\begin{split}\label{IJ} I(r)&=\displaystyle\int_0^{\pi}\sqrt{1-r\cos\theta}\mathrm{d}\theta =2\sqrt{1+r}E\left(\frac{2r}{1+r}\right),\\ J(r)&=\displaystyle\int_0^{\pi}\frac{1}{\sqrt{1-r\cos\theta}}\mathrm{d}\theta =\frac{2}{\sqrt{1+r}}K\left(\frac{2r}{1+r}\right) \end{split}\end{equation} are the functions defined in \cite{CLZ}.
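The closed forms in \eqref{IJ} use the complete elliptic integrals in the parameter convention $E(m)$, $K(m)$ with $m=k^2$; they can be checked against direct quadrature (a sketch, assuming mpmath is available, whose `ellipe`/`ellipk` use the same convention):

```python
import mpmath as mp

def I_direct(r):
    return mp.quad(lambda t: mp.sqrt(1 - r*mp.cos(t)), [0, mp.pi])

def J_direct(r):
    return mp.quad(lambda t: 1/mp.sqrt(1 - r*mp.cos(t)), [0, mp.pi])

for rv in ('0.3', '0.5', '0.9'):
    r = mp.mpf(rv)
    m = 2*r/(1 + r)   # parameter m = k^2
    assert abs(I_direct(r) - 2*mp.sqrt(1 + r)*mp.ellipe(m)) < mp.mpf('1e-10')
    assert abs(J_direct(r) - 2/mp.sqrt(1 + r)*mp.ellipk(m)) < mp.mpf('1e-10')
```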
Then, using \eqref{I02} and letting $r=\sqrt{1+h}$, we obtain \begin{equation}\begin{split} F(r)=M(h)|_{h=r^2-1}=rf(r)+k_0f_0, \end{split}\end{equation} where $f_0=r$ and $f(r)$ is the averaged function obtained in \cite{CLZ}, with \begin{equation}\begin{split} &k_0=\frac{1}{2}(3\sqrt{2}\beta_0-\sqrt{2}\beta_1+2\gamma_0-2\eta_0)=a^+_{00}-a^-_{00},\\ &k_1=\delta_0,\quad k_2= -\frac{\beta_0-\beta_1}{\sqrt{2}},\quad k_3=\frac{\beta_1}{\sqrt{2}},\quad k_4=\frac{\eta_0}{2}, \quad k_5=\frac{\alpha_0}{5},\quad k_6=\frac{\alpha_0+5\xi_0}{15}. \end{split}\end{equation} We exploit the same approach as in \cite{CLZ}, and eliminate the logarithm function first. Since \begin{equation} \left(\dfrac{F(r)}{1-r^2}\right)'=\dfrac{G(r)}{(1-r^2)^2}, \end{equation} and $G(r)$ has as many zeros as $F(r)$ in $r\in(0,1)$, we consider $G(r)$ in the following, where \begin{equation}\label{G} G(r)=m_1 g_1+m_2 g_2+m_3 g_3+m_4 g_4+m_5 g_5+m_6 g_6+m_7g_7, \end{equation} with \begin{equation}\begin{split}\label{g} g_1&=r,\\ g_2&=r^2,\\ g_3&=6+4r(\sqrt{1-r}-\sqrt{1+r})-(3+r^2)(\sqrt{1-r}+\sqrt{1+r}),\\ g_4&=-4+2(1+r^2)(\sqrt{1-r}+\sqrt{1+r})+r(-5+r^2)(\sqrt{1-r}-\sqrt{1+r}),\\ g_5&=r\left((-5+r^2)I(r)+(1-r^2)J(r)\right),\\ g_6&=r\left(4I(r)-(1-r^2)J(r)\right),\\ g_7&=1, \end{split}\end{equation} and \begin{equation*}\begin{split} m_1=2k_1,\,\,\, m_2=3k_2-2k_3+4k_4+k_0,\,\,\, m_3=\dfrac{k_2}{2},\,\,\,m_4=\dfrac{k_3}{2},\,\,\, m_5=-\dfrac{k_5}{2},\,\,\, m_6=\dfrac{k_6}{2},\,\, m_7=k_0. \end{split}\end{equation*} By the results in \cite{CLZ}, we know that the Wronskian determinants $W_1, W_2, \cdots, W_6$ on the ordered set $(g_1, g_2,\cdots, g_7)$ do not vanish in $r\in(0,1)$.
Thus it suffices to consider the Wronskian determinant \[ W_7=-\frac{6075 (1-s)J^2(\sqrt{1-s^2})}{8192 s^{18} \left(1-s^2\right)^3}\left(Y_{70}+Y_{71}w(s)+Y_{72}w^2(s)\right), \] where $s=\sqrt{1-r^2}$, \begin{equation}\begin{split} Y_{70}=&s^2 \left(-960-960 s+305136 s^2+314496 s^3-291576 s^4-288351 s^5+28820 s^6\right.\\ &\left.-1205 s^7+4170 s^8+46375 s^9+14000 s^{10}+525 s^{11}+1050 s^{12}\right),\\ Y_{71}=&2 \left(1920+1920 s-17184 s^2-35424 s^3-464928 s^4-469008 s^5+364162 s^6\right.\\ &\left.+396377 s^7-37560 s^8-110965 s^9-44870 s^{10}+14175 s^{11}+6300 s^{12}+ 525 s^{13}\right),\\ Y_{72}=&-18624-22464 s+101040 s^2+125280 s^3+524168 s^4+592453 s^5-259404 s^6\\ &-365129 s^7-104350 s^8-15645 s^9+84000 s^{10}+19425 s^{11}-3150 s^{12}, \end{split}\end{equation} and $w(s)=I(r)/J(r)|_{r=\sqrt{1-s^2}}$ is the solution of the following differential system \begin{equation}\label{ws} \dot{s}=2s(1-s^2),\quad \dot{w}=s^2-2s^2w+w^2, \end{equation} satisfying $w'(s)>0$, and \[ \lim_{s\rightarrow 0^+} w(s)=0, \quad \quad \lim_{s\rightarrow1^-}w(s)=1. \] Define \[ \Psi(s,w)=Y_{70}+Y_{71}w+Y_{72}w^2. \] Then the number of zeros of $W_7$ in $(0,1)$ equals the number of intersection points of the curve $C=\{(s,w)|\Psi(s,w)=0,s\in(0,1)\}$ and the curve $\Gamma=\{w=w(s),s\in(0,1)\}$ in the $(s,w)$-plane. First, we show that the curves $C$ and $\Gamma$ intersect in at least one point. It is easy to see that $Y_{72}$ has a unique zero $s_0$ in $(0,1)$ by Sturm Theorem, and $17/50<s_0<7/20$. If $s=s_0$, then $\Psi=0$ implies that $w_0=-Y_{70}(s_0)/Y_{71}(s_0)$. In the following, we consider $\Psi$ in $s\in (0,s_0)\bigcup(s_0,1)$.
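The root isolation for $Y_{72}$ can again be reproduced by a Sturm-type count; a sketch, assuming sympy is available (the coefficients are those of $Y_{72}$ above):

```python
import sympy as sp

s = sp.symbols('s')
# Y_72 as displayed above
Y72 = (-18624 - 22464*s + 101040*s**2 + 125280*s**3 + 524168*s**4
       + 592453*s**5 - 259404*s**6 - 365129*s**7 - 104350*s**8
       - 15645*s**9 + 84000*s**10 + 19425*s**11 - 3150*s**12)
P = sp.Poly(Y72, s)
# a unique zero s_0 in (0, 1), isolated inside (17/50, 7/20)
assert P.count_roots(0, 1) == 1
assert P.count_roots(sp.Rational(17, 50), sp.Rational(7, 20)) == 1
```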
Let $C_{+}$ and $C_{-}$ be two branches of the curve $C$, denoted by \begin{equation} C_{+}=\{w_{+}(s)=\dfrac{-Y_{71}+\sqrt{\Delta(s)}}{2Y_{72}}\}, \quad C_{-}=\{w_{-}(s)=\dfrac{-Y_{71}-\sqrt{\Delta(s)}}{2Y_{72}}\}, \end{equation} where $\Delta(s)=Y_{71}^2-4Y_{70}Y_{72}>0$ in $s\in(0,1)$ by Sturm Theorem, with \begin{equation*}\begin{split} \Delta(s)=&900 (1 + s)^4 (16384 - 32768 s - 323584 s^2 + 352256 s^3 + 19012608 s^4 - 29902848 s^5\\ & - 47736576 s^6 + 165632640 s^7 + 3358208 s^8 - 270031520 s^9 + 144902688 s^{10}\\ & + 18987696 s^{11} + 278405247 s^{12} - 359686304 s^{13} + 66366873 s^{14} + 15590624 s^{15}\\ & +11191110 s^{16} + 12549152 s^{17} - 8746430 s^{18} - 960400 s^{19} + 376075 s^{20}\\ & - 117600 s^{21} + 15925 s^{22}). \end{split}\end{equation*} A direct computation shows that \begin{equation*}\begin{split} &\lim_{s\rightarrow 0^+} w_{+}(s)=0, \quad \lim_{s\rightarrow s_0^-} w_{+}(s)=-\infty, \quad \lim_{s\rightarrow s_0^+}w_{+}(s)=+\infty, \quad \lim_{s\rightarrow1^-}w_{+}(s)=1,\\ & \lim_{s\rightarrow 0^+}w_{-}(s)=\frac{20}{97}, \quad \lim_{s\rightarrow s_0^-} w_{-}(s)=\lim_{s\rightarrow s_0^+} w_{-}(s)=w_0, \quad \lim_{s\rightarrow1^-}w_{-}(s)=\frac{1}{5}, \end{split}\end{equation*} which implies that $w_{+}(s)$ is not continuous in $s_0$, while $w_{-}(s)$ is continuous in $s_0$, and thus is continuous in $(0,1)$. Further, $w_{+}(s)$ and $w(s)$ have the asymptotic expansions when $s\rightarrow0^+$, \begin{equation*}\begin{split} w_{+}(s)=&\frac{1}{4}s^2 - \frac{4923}{64}s^4+o(s^4),\\ w(s)=&\frac{4}{6\log2-2\log s}+o\left(\frac{1}{6\log2-2\log s}\right), \end{split}\end{equation*} and when $s\rightarrow1^-$, \begin{equation*}\begin{split} w_{+}(s)=&1-\frac{1}{2}(1-s)+\frac{7}{16}(1-s)^2+\frac{22407}{8768}(1-s)^3+o((1-s)^3),\\ w(s)=&1-\frac{1}{2}(1-s)-\frac{1}{32}(1-s)^2-\frac{3}{128}(1-s)^3+o((1-s)^3). 
\end{split}\end{equation*} Comparing these results, we have \begin{equation*}\begin{split}\label{0} &w_{-}(s)>w(s)>w_{+}(s),\quad \mbox{$s\rightarrow0^+$},\\ &w_{+}(s)>w(s)>w_{-}(s), \quad \mbox{$s\rightarrow1^-$}, \end{split}\end{equation*} and hence the curve $\Gamma$ intersects $C_{-}$ in at least one point $(s^*,w^*)$. Next, we show that there exist exactly two points of $C$ at which the vector field \eqref{ws} is tangent to $C$. We call them contact points. A direct computation shows that \[ \Phi(s,w)=\left(\frac{\partial \Psi(s,w)}{\partial s},\frac{\partial \Psi(s,w)}{\partial w}\right)\cdot(\dot{s},\dot{w})=-2\sum_{k=0}^{3}\phi_k(s)w^k, \] where \begin{equation*}\begin{split} \phi_0(s)=& s^3 (960 - 1205280 s - 1539936 s^2 + 3434928 s^3 + 4059945 s^4 - 2344178 s^5\\ & - 2403989 s^6 + 226420 s^7 - 410005 s^8 - 81430 s^9 + 489125 s^{10}\\ & + 147000 s^{11} + 6300 s^{12} + 14700 s^{13}),\\ \phi_1(s)=& s(-3840 + 91200 s + 242688 s^2 + 3515280 s^3 + 4281408 s^4 - 9543392 s^5\\& - 11769827 s^6 + 5958632 s^7 + 8704531 s^8 + 325670 s^9 - 2515505 s^{10}\\& - 1222340 s^{11} + 307125 s^{12} + 166950 s^{13} + 14700 s^{14}),\\ \phi_2(s)=& -1920 + 20544 s - 222144 s^2 - 407808 s^3 - 1227584 s^4 - 1866857 s^5 \\&+ 4337270 s^6 + 6306697 s^7 - 1202872 s^8 - 3034391 s^9 - 1838630 s^{10}\\& - 399945 s^{11} + 1039500 s^{12} + 252000 s^{13} - 44100 s^{14},\\ \phi_3(s)=&-Y_{72}. \end{split}\end{equation*} Obviously, $\phi_3(s)$ does not vanish in $(0,s_0)\bigcup(s_0,1)$.
Thus, the resultant $R$ of $\Psi(s,w)$ and $\Phi(s,w)$ with respect to $w$, has the form $R=810000(1-s)^2s^4(1+s)^{10}Y_{72}R_1(s)R_2(s)$, where \begin{equation*}\begin{split} R_1(s)=&-15360 - 30720 s + 690176 s^2 + 257920 s^3 + 800384 s^4 + 1928800 s^5- 1741120 s^6\\& - 1638704 s^7 + 1429155 s^8 + 79254 s^9 - 266350 s^{10} - 42140 s^{11} + 42875 s^{12} + 7350 s^{13},\\ R_2(s)=&10764288 - 43057152 s + 1390657536 s^2 - 5132058624 s^3 + 1851543552 s^4 \\& + 16976596992 s^5- 22820136960 s^6 - 11403103872 s^7 + 17152685568 s^8 \\& + 58829080800 s^9 - 117428834304 s^{10} + 82319936688 s^{11} - 32444436073 s^{12}\\& - 8041553870 s^{13} + 49398619998 s^{14} - 23623365500 s^{15} - 20007160159 s^{16}\\& + 13090741902 s^{17} + 90897156 s^{18} - 1222410840 s^{19} + 772710057 s^{20} + 175441070 s^{21}\\& - 135536450 s^{22} + 7923300 s^{23} + 3301375 s^{24} + 120050 s^{25}. \end{split}\end{equation*} By Sturm Theorem, $R_1(s)$ has a unique zero $s_1$ in $(4/25,17/100)$, and $R_2(s)$ has a unique zero $s_2$ in $(12/25,49/100)$, which means that there exist $w_1$ and $w_2$, such that \[ \Psi(s_1,w_1)=\Phi(s_1,w_1)=\Psi(s_2,w_2)=\Phi(s_2,w_2)=0. \] Besides, \[ \lim_{s\rightarrow s_0^+}\left(w'_{-}(s)-\frac{\mathrm{d}w}{\mathrm{d}s}\Big|_{w=w_{-}(s)}\right)= \lim_{s\rightarrow s_0^-}\left(w'_{-}(s)-\frac{\mathrm{d}w}{\mathrm{d}s}\Big|_{w=w_{-}(s)}\right)\approx0.34\neq0. \] This confirms that there are exactly two points of the curve $C$ at which the vector field \eqref{ws} is tangent to $C$. To prove that $\Psi(s,w(s))$ has a unique zero in $s\in(0,1)$, we introduce an auxiliary straight line $w=2/5$. 
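The negativity of $\Psi(s,2/5)$ established below by Sturm Theorem can also be certified mechanically; a sketch, assuming sympy is available (the polynomial is $25\,\Psi(s,2/5)$, as displayed in the proof):

```python
import sympy as sp

s = sp.symbols('s')
# 25 * Psi(s, 2/5), as displayed in the proof
Q = (-36096 - 51456*s + 36480*s**2 - 231360*s**3 + 426512*s**4
     + 852052*s**5 - 1043776*s**6 - 741751*s**7 - 448100*s**8
     - 2312005*s**9 - 457150*s**10 + 1520575*s**11 + 463400*s**12
     + 23625*s**13 + 26250*s**14)
# no real root in [0, 1] and a negative value at s = 0,
# hence Psi(s, 2/5) < 0 on (0, 1)
assert sp.Poly(Q, s).count_roots(0, 1) == 0
assert Q.subs(s, 0) < 0
```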
By Sturm Theorem, \begin{equation}\begin{split} \Psi(s,2/5)=&\frac{1}{25}(-36096 - 51456 s + 36480 s^2 - 231360 s^3 + 426512 s^4 + 852052 s^5\\& - 1043776 s^6 - 741751 s^7 - 448100 s^8 - 2312005 s^9 - 457150 s^{10}\\& + 1520575 s^{11} + 463400 s^{12} + 23625 s^{13} + 26250 s^{14})<0, \end{split}\end{equation} which demonstrates that the straight line $w=2/5$ is above the curve $C_{-}$ in $(0,1)$. It follows from the values of the endpoints of $w(s)$ and the monotonicity of $w(s)$ that the straight line $w=2/5$ and the curve $\Gamma$ intersect at a unique point $(s_*,2/5)$ with $1/25<s_*<1/10$. Thus $w_{-}(s_*)<2/5=w(s_*)$, and $w(s)>2/5>w_{-}(s)$ holds in $s\in(s_*,1)$, which shows that the curves $\Gamma$ and $C_{-}$ intersect in at least one point $(s^*,w^*)$ when $s\in(0,s_*)$ and cannot intersect when $s\in(s_*,1)$. Since \begin{equation*}\begin{split} &\lim_{s\rightarrow0^+}\left(w'_{-}(s)-\frac{\mathrm{d}w}{\mathrm{d}s}\Big|_{w=w_{-}(s)}\right)=-\infty,\\ &\lim_{s\rightarrow\frac{3}{10}}\left(w'_{-}(s)-\frac{\mathrm{d}w}{\mathrm{d}s}\Big|_{w=w_{-}(s)}\right)\approx0.39,\\ &\lim_{s\rightarrow1^-}\left(w'_{-}(s)-\frac{\mathrm{d}w}{\mathrm{d}s}\Big|_{w=w_{-}(s)}\right)=-\infty, \end{split}\end{equation*} there exist two points on $C_{-}$ at which the vector field \eqref{ws} is tangent to the curve $C_{-}$. By the result above, the curve $\Gamma$ cannot intersect $C_{-}$ or $C_{+}$ at any other point; otherwise, an extra contact point would emerge, which results in a contradiction. Hence, $\Psi(s,w(s))$, as well as $W_7$, has a unique zero in $s\in(0,1)$. Finally, note that $s^*<s_*<s_1<s_2$; thus the unique zero of $W_7$ is simple. Combined with $W_i\not\equiv0, i=1,2,\cdots,6$, it follows from Lemma \ref{le:NT} that there exists a choice of the coefficients in \eqref{G} such that $G(r)$ has exactly $7$ zeros. Thus, $F(r)$ can have at most $7$ zeros in $r\in(0,1)$, which is equivalent to $M(h)$ having at most $7$ zeros in $h\in(-1,0)$ for $n=2$.
\end{proof} \begin{figure}[h] \centering \includegraphics[width=.45\textwidth]{ws} \caption{\small{The curve $\Gamma$ has a unique common point with $C_{-}$.}} \label{fig} \end{figure} For $n\geq3$, we eliminate the different kinds of functions by taking derivatives. First, we get rid of the logarithm function by taking a second order derivative and then classify the derived function $M''(h)$ into four kinds of functions. Next, we eliminate two kinds of functions which include polynomials of $h$ as numerators by multiplying nonzero factors and taking derivatives. Finally, it suffices to consider the derived function, which is described in more detail in the following. Take the case $n\geq6$ even for example. \noindent$(i)$ Eliminate the logarithm function \begin{equation}\begin{split}\label{d1} M'' =&\dfrac{P_{\frac{n-4}{2}}(h)}{(-h)^{\frac{n-1}{2}}} +\dfrac{P_{\frac{n-2}{2}}(h)\bar I_2+\bar P_{\frac{n-2}{2}}(h)\bar I_0}{(-h)^{\frac{n}{2}}(1+h)}+\dfrac{P_{\frac{n}{2}}(h)}{(1+h)^{\frac{3}{2}}(-h)^{\frac{n}{2}}} +\dfrac{P_{n-1}(\sqrt{-h})}{(1-\sqrt{-h})^{\frac{3}{2}}(-h)^{\frac{n-1}{2}}}, \end{split}\end{equation} $(ii)$ eliminate the first part of $M''$ by induction \begin{equation}\begin{split}\label{d2} F=&\left((-h)^{\frac{n-1}{2}}M''\right)^{(\frac{n-2}{2})}\\ =& \dfrac{P_{n-2}(h)\bar I_2+\bar P_{n-2}(h)\bar I_0}{(-h)^{\frac{n-1}{2}}(1+h)^{\frac{n}{2}}} +\dfrac{P_{\frac{n}{2}}(h)}{(1+h)^{\frac{n+1}{2}}(-h)^{\frac{n-1}{2}}} +\dfrac{P_{\frac{3n}{2}-3}(\sqrt{-h})}{(1-\sqrt{-h})^{\frac{n+1}{2}}(-h)^{\frac{n-3}{2}}}, \end{split}\end{equation} $(iii)$ eliminate the second part of $F$ by induction \begin{equation}\begin{split}\label{d3} G=&\left((1+h)^{\frac{n+1}{2}}(-h)^{\frac{n-1}{2}}F\right)^{(\frac{n+2}{2})}\\ =&\left(\sqrt{1+h}(P_{n-2}(h)\bar I_2+\bar P_{n-2}(h)\bar I_0) +(-h)(1+\sqrt{-h})^{\frac{n+1}{2}}P_{\frac{3n}{2}-3}(\sqrt{-h})\right)^{(\frac{n+2}{2})}\\ =& \dfrac{P_{\frac{3n}{2}-1}(h)\bar I_2+\bar P_{\frac{3n}{2}-1}(h)\bar 
I_0}{(-h)^{\frac{n+2}{2}}(1+h)^{\frac{n+1}{2}}} +\dfrac{P_{2n-3}(\sqrt{-h})}{(1+\sqrt{-h})^{\frac{1}{2}}(-h)^{\frac{n-1}{2}}}. \end{split}\end{equation} Notice that $M(-1)=0$, thus, \begin{equation}\begin{split}\label{N1} H_4(n)&\leq \#\{-1<h<0|G(h)=0\}+\dfrac{n+2}{2}+\dfrac{n-2}{2}+2-1\\ &\leq \#\{-1<h<0|G(h)=0\}+n+1, \end{split}\end{equation} where $\#$ denotes the number of elements of a finite set. We need to consider the zeros of $G$ in $(-1,0)$, and we will give a rough estimate of the number of zeros of $G$ using the method in \cite{LLLZ}. Let \begin{equation} \begin{split} G_1&=\dfrac{P_{2n-3}(\sqrt{-h})}{(1+\sqrt{-h})^{\frac{1}{2}}(-h)^{\frac{n-1}{2}}},\\ G_2&=\dfrac{P_{\frac{3n}{2}-1}(h)}{(-h)^{\frac{n+2}{2}}(1+h)^{\frac{n+1}{2}}},\\ G_0&=\dfrac{\bar P_{\frac{3n}{2}-1}(h)}{(-h)^{\frac{n+2}{2}}(1+h)^{\frac{n+1}{2}}}. \end{split} \end{equation} Obviously, $G_1$ has at most $2n-3$ zeros in $(-1,0)$. \begin{equation*} \begin{split} \left(\frac{G}{G_1}\right)'=U_2\bar I_2+U_0\bar I_0, \end{split} \end{equation*} where \begin{equation*} \begin{split} U_2&=\dfrac{1}{4G_1^2h(h+1)}(4 h (h+1)(G_1G_2'-G_2G_1')+5hG_1G_2-5G_0G_1)\\ &=\dfrac{(-h)^{-1/2-n}(1+\sqrt{-h})^{-1/2}(1+h)^{-(n+1)/2} P_{5n-4}(\sqrt{-h})}{4G_1^2h(h+1)},\\ U_0&=\dfrac{1}{4G_1^2h(h+1)}(4 h (h+1)(G_1G_0'-G_0G_1')+(3h+4)G_0G_1-hG_1G_2)\\ &=\dfrac{(-h)^{-1/2-n}(1+\sqrt{-h})^{-1/2}(1+h)^{-(n+1)/2}\bar P_{5n-3}(\sqrt{-h})}{4G_1^2h(h+1)}.\\ \end{split} \end{equation*} Let $g=P_{5n-4}(\sqrt{-h})\bar I_2+\bar P_{5n-3}(\sqrt{-h})\bar I_0$. Then \begin{equation}\label{N1G} \#\{\left(\frac{G}{G_1}\right)'=0\}\leq\#\{g=0\}. \end{equation} Using the method in \cite{LLLZ}, consider the function \[ U=\frac{\bar I_2}{\bar I_0}+\frac{\bar P_{5n-3}(\sqrt{-h})}{P_{5n-4}(\sqrt{-h})}, \] and compute the number of zeros of $U$ via a Riccati equation. Then we have \[ \#\{g=0\}\leq 15n-8. \] With \eqref{N1} and \eqref{N1G}, \[ H_4(n)\leq 15n-8+1+2n-3+2n-3+n+2-1=20n-12.
\] Similarly, for $n\geq7$ odd, \[ H_4(n)\leq 15n-8+1+2n-2+2n-2+n+2-1=20n-10, \] and for $n=3,4,5$, $H_4(n)\leq 12n+4. $ \section{Zeros of $M(h)$ for system $S_4$ in the smooth case}\label{sec:S4s} This section is devoted to giving an improved result on the number of zeros of the first Melnikov function $M(h)$ for quadratic isochronous center $S_4$ in \cite{LLLZ}. When the perturbation polynomials are smooth, by the result in Section \ref{sec:S4}, we have \begin{equation*} \begin{split} M(h)=((-h)^{-\frac{1}{2}}-1)P_2(\sqrt{-h}) +(-h)^{\frac{5-n}{2}}(1+h)P_{\frac{n-6}{2}}(h)+(-h)^{\frac{4-n}{2}}\left(P_{\frac{n-4}{2}}\bar I_2+\bar P_{\frac{n-4}{2}}\bar I_0\right), \end{split} \end{equation*} for $n\geq6$ even, and $n\geq7$ odd, \begin{equation*} \begin{split} M(h)=&((-h)^{-\frac{1}{2}}-1)P_2(\sqrt{-h}) +(-h)^{\frac{4-n}{2}}(1+h)P_{\frac{n-5}{2}}(h)+(-h)^{\frac{5-n}{2}}\left(P_{\frac{n-5}{2}}\bar I_2+\bar P_{\frac{n-5}{2}}\bar I_0\right). \end{split} \end{equation*} We will estimate the number of zeros of $M(h)$ for $n\geq6$ even in the following. The case of $n\geq7$ odd can be obtained in a similar way. Using the result in \eqref{d1} and \eqref{d2}, \begin{equation*} \left((-h)^{\frac{n-1}{2}}M''\right)^{(\frac{n-2}{2})}= \dfrac{P_{n-2}(h)\bar I_2+\bar P_{n-2}(h)\bar I_0}{(-h)^{\frac{n-1}{2}}(1+h)^{\frac{n}{2}}}. \end{equation*} Let $g=P_{n-2}(h)\bar I_2+\bar P_{n-2}(h)\bar I_0$, and notice that $M(-1)=0$, then \begin{equation}\begin{split}\label{N2} \#\{-1<h<0|M(h)=0\}&\leq \#\{g=0\}+\dfrac{n-2}{2}+2-1=\#\{g=0\}+\dfrac{n}{2}. \end{split}\end{equation} Let $\mathbf{I}(h)=(\bar I_0,\bar I_2)^\top$, then by \eqref{I02}, $\mathbf{I}(h)$ satisfies a two-dimensional first-order Fuchsian system \begin{equation}\label{Ih} \mathbf{I}(h)=\mathbf{A}(h)\mathbf{I}'(h), \end{equation} where \begin{equation*}\label{A} \mathbf{A}(h)= \left(\begin{array}{cc}\frac{4h}{3} &\frac{4}{3}\\[1ex] \frac{4h}{15} & \frac{4(4+3h)}{15} \end{array}\right). 
\end{equation*} It is easy to verify that system \eqref{Ih} satisfies the assumptions $(\mathrm{H1})$-$(\mathrm{H3})$, see Appendix A.4 or \cite{GI} for details. That is, \vspace{-10pt} \begin{itemize} \item[(1)]$\mathbf{A}'=\left(\begin{array}{cc}\frac{4}{3} &0\\[1ex] \frac{4}{15} & \frac{4}{5} \end{array}\right)$ is a constant matrix which has real distinct eigenvalues $\frac{4}{3}$ and $\frac{4}{5}$. \item[(2)]$\det \mathbf{A}(h)=\frac{16}{15}h(1+h)$ has real distinct zeros $h_0=-1, h_1=0$, and the identity $\operatorname{tr}\mathbf{A}(h)\equiv(\det \mathbf{A}(h))'=\frac{16}{15}(1+2h)$ holds. \item[(3)]$\mathbf{I}(h)$ is analytic in a neighborhood of $-1$. \end{itemize} Thus $\lambda=3/4$, and $\lambda^*=3/4$. It follows from Theorem \ref{th:GI} and $\dim g=2n-2$ that an upper bound of the number of zeros of $g$ is $(2n-3)+1=2n-2$. Moreover, $-1$ is a trivial zero of $g$, hence \[ \#\{g=0\}\leq 2n-3, \] and it follows from \eqref{N2} that \[ \#\{-1<h<0|M(h)=0\}\leq \frac{5n-6}{2}. \] Similarly, for $n\geq7$ odd, \[ \#\{-1<h<0|M(h)=0\}\leq 2n-3+\frac{n-1}{2}+2-1=\frac{5n-5}{2}. \] Thus, we have that the upper bound of the number of zeros of $M(h)$ is $[\frac{5n-5}{2}]$. In particular, this upper bound is applicable to the cases of $n=2,3,4,5$, except that the maximum number of zeros of $M(h)$ is $i$ for $n=i, i=0,1$. \section{Acknowledgements} We appreciate the helpful suggestions of Professor Changjian Liu. The first author is partially supported by NSF of China (Grant No. 11401111, No. 11571379, No. 11571195 and No. 11601257). The second author is partially supported by NSF of China (Grant No. 11571195). The third author is partially supported by NSF of China (Grant No. 11231001 and No. 11371213).
\section{Introduction}\label{sec:intro} Fluid-particle systems widely exist in both natural and industrial processes. Examples range from drug delivery within human bodies~\cite{faraji2009nanoparticles}, sediment transport~\cite{yuan2018water} in oceans and rivers, and debris flows~\cite{trujillo2020smooth}, to particle mixing in a fluidized-bed reactor~\cite{derksen2014simulations}. One common feature of these systems is that the involved particles often have complex non-spherical shapes (Fig.~\ref{fig:round_particle}). Particle shape not only affects fluid-particle and particle-particle interactions at the individual particle level but can also dramatically influence the behaviour of fluid-particle systems at the macroscopic scale. For instance, the jamming of dense suspensions is highly sensitive to particle shapes~\cite{guazzelli2018rheology}. Despite the importance of particle shape for fluid-particle systems, it remains a challenging task to quantitatively describe its role and explain the underlying mechanisms. Difficulties arise from the fact that current experimental observations cannot always provide enough information, particularly about the mechanics occurring at the particle scale, due to measurement limitations. Therefore, numerical modelling becomes increasingly important for understanding fluid-particle systems. With increasing computational power, particle-scale resolved numerical methods have become promising tools to explore the details of flows and particle motions at both the microscopic and macroscopic scales. One popular approach to simulate particle motions is the Discrete Element Method (DEM)~\cite{luding2008introduction}. DEM uses a bottom-up strategy where individual particle motions are tracked directly and particle-particle interactions are modelled at the particle scale.
Classical DEM can only indirectly take shape effects into account through rolling resistance, since all particles are approximated as spheres~\cite{ai2011assessment}. Thus, many shape description methods have been developed to address this issue. For example, complex shapes can be approximated by gluing spheres together~\cite{garcia2009clustered, ashmawy2003evaluating}; the collisions between particles are then simplified into sphere-sphere contacts. Although this approach is widely used, the sphere-clustering technique introduces an artificial surface roughness, and a significant number of prime spheres are required to obtain a decent shape approximation~\cite{kruggel2008study}. Polyhedral DEM also draws a lot of attention due to its capability of describing complex shapes efficiently. However, it suffers from numerical instability due to the non-smooth nature of each polyhedron. The sphero-polyhedron approach~\cite{alonso2008spheropolygons, alonso2009efficient, galindo2010minkowski,galindo2012breaking, galindo2009molecular} overcomes this issue by smoothing particle surfaces with a sphere. However, not every shape can be efficiently represented by a polyhedral mesh, a drawback that will be discussed further on. Particle shapes can also be described by distance functions such as the level set function. Level set DEM~\cite{kawamoto2016level} can handle particles with realistic shapes by directly cooperating with CT scan results. The main limitation is the high computational cost, since each particle is represented by a point cloud. The recently developed metaball DEM~\cite{zhang2021metaball} describes particle shapes by a metaball function analytically. The contact between particles is modelled by solving an optimization problem. It shows great potential for handling non-spherical particles with rounded features without discretization of particle surfaces. To solve fluid-particle interactions, DEM needs to be coupled with Computational Fluid Dynamics (CFD) methods.
The Lattice Boltzmann Method (LBM) has emerged as an effective CFD solver during the last decades, and it has attracted enormous interest in simulating complex flows including fluid-particle systems~\cite{ladd1994numerical,zhang2016lattice, cui20122d,wang2013lattice, galindo2013coupled, zhang2017efficient}. LBM has several unique advantages that make it suitable to couple with DEM. First, LBM enjoys high parallelization efficiency due to the locality of the collision operator, which matters because computational cost is always a bottleneck of fully resolved fluid-particle simulations. Furthermore, the kinetic nature of LBM ensures its capability in handling complex moving boundary conditions with simple algorithms. The DEM-LBM coupling schemes can be classified into two categories: diffuse interface approaches and sharp interface approaches. Diffuse interface schemes handle the discontinuity at solid-fluid boundaries by smoothing the interface. Most Immersed Boundary Methods (IBM)~\cite{peskin2002immersed, feng2004immersed, luo2007modified, niu2006momentum, wu2009implicit} belong to this category, where the influence of solid boundaries is replaced by a smoothed external force field. In the partially saturated cells method (PSM)~\cite{noble1998lattice, cook2004direct, feng2007coupled}, the solid boundaries are introduced through the solid volume fraction; therefore, the exact boundary position does not exist in the fluid solver. Although diffuse interface approaches benefit from smooth transitions between solid and fluid nodes and fewer fluctuations in hydrodynamic forces, the non-physical diffuse interface representation limits their accuracy. For instance, it is found that IBM can only achieve first-order accuracy when simulating porous media flows and that PSM underestimates the permeability systematically~\cite{chen2020intercomparison}. On the other hand, sharp interface approaches treat solid boundaries without smoothing.
Within the LBM framework, it is straightforward to handle sharp interfaces by applying the bounce-back rule: fluid molecules that contact the solid surface are reflected back to the fluid domain with opposite velocity. The simple bounce-back scheme approximates interfaces as stairwise boundaries, which may damage overall accuracy~\cite{ladd1994numerical}. Thus, Bouzidi et al.~\cite{bouzidi2001momentum} introduced an improved bounce-back scheme where the missing distribution functions are interpolated. It was further developed by Yu et al.~\cite{yu2003viscous} into a unified scheme by estimating the distribution at the boundary. It is found that the interpolated bounce-back schemes (IBB) have second-order accuracy in space~\cite{peng2016implementation, chen2020efficient}. The trade-off of sharp interface representations is reduced numerical stability: the hydrodynamic forces are considerably noisier than with diffuse interface schemes~\cite{peng2016implementation}. Special treatments are also required for new fluid nodes created by moving boundaries~\cite{lallemand2003lattice, fang2002lattice, zhang2021coupled}. Therefore, diffuse interface approaches are widely used for DEM-LBM coupling despite sharp interface schemes having better accuracy in general~\cite{zhang2022random}. Recently, Peng et al.~\cite{peng2019comparative,peng2019comparative2} conducted comprehensive comparisons between IBM and IBB for both laminar and turbulent flows. It is shown that IBB is second-order accurate for velocity, hydrodynamic force/torque, and stress, whereas diffuse interface IBM only holds first-order accuracy when simulating laminar flows~\cite{peng2019comparative}. Furthermore, IBM fails to correctly capture the velocity gradient within the diffuse interface for turbulent flows~\cite{peng2019comparative2}. In terms of particle shape, the majority of DEM-LBM schemes use spheres with diffuse interface coupling schemes~\cite{feng2010combined, feng2007coupled, zhang2016lattice}.
Galindo-Torres~\cite{galindo2013coupled} extended the DEM-LBM model to generally shaped particles (even non-convex ones) by using the sphero-polyhedron technique. Recently, Wang et al.~\cite{wang2021coupled} introduced a polygonal DEM-LBM model with an energy-conserving contact algorithm. Although these latest developments can handle fluid-particle interactions with complex shapes, it is still not an easy task to handle particles with round shapes such as river pebbles, due to the large number of vertices required by the surface mesh. \begin{figure}[b] \centering \subfigure[Pebbles]{\label{fig:a}\includegraphics[width=60mm]{pebbles.jpeg}} \subfigure[New Zealand’s Moeraki boulders]{\label{fig:b}\includegraphics[width=60mm]{boulders.png}} \caption{Some examples of generally shaped particles with round features found in nature. (Image source: Google and Geological Society of Glasgow.)} \label{fig:round_particle} \end{figure} The goal of this work is to provide a sharp interface coupling model that combines the efficiency of LBM in solving flows with the capability of MDEM in handling non-spherical particles to simulate fluid-particle systems. The structure of the paper is organized as follows: Sec.~\ref{sec:mathod} describes the basics of MDEM and LBM, and the ideas, approximations and detailed implementation of the coupling scheme. The presented model is validated against the settling of a sphere and of a non-spherical metaball in Sec.~\ref{sec:validation}. The significance of capturing particle shapes is demonstrated in Sec.~\ref{sec:examples} by interactions of multiple non-spherical particles. Finally, Sec.~\ref{sec:conclusion} presents conclusions for the present work. \section{Numerical model}\label{sec:mathod} \subsection{Metaball Discrete Element Method}\label{sec:dem} \subsubsection{Discrete Element Method} DEM is a method that solves individual particle (element) motions directly~\cite{luding2008introduction,solov2017mbn}.
The translational motions are described by Newton's equation and the rotations are governed by the angular momentum conservation equation: \begin{equation} \begin{cases} m_i\bm{a}_i = m_i\bm{g} + \sum \limits_{j=0}^{N-1} \bm{F}_{ij}^c + \bm{F}_i^h,\\[2ex] \frac{d}{dt}\left(\textbf{I}_i \bm{\omega}_i\right) = \sum \limits_{j=0}^{N-1} \bm{T}_{ij}^c + \bm{T}_i^h, \end{cases} \label{eq:DEM} \end{equation} where $N$ is the total number of particles, and $m_i$ and $\bm{a}_i$ are the mass and acceleration of particle $i$, respectively. Forces acting on particle $i$ include the gravitational force $m_i\bm{g}$, the hydrodynamic force $\bm{F}_i^h$ and the contact force $\bm{F}_{ij}^c$ between particles $i$ and $j$. $\textbf{I}_i$ is the inertia tensor and $\bm{\omega}_i$ is the angular velocity. $\bm{T}_i^h$ and $\bm{T}_{ij}^c$ represent the torques due to the hydrodynamic and contact forces. It is worth mentioning that the angular momentum conservation equation in Eq.~\ref{eq:DEM} is handled by solving Euler's equation~\cite{solov2017mbn} in the body frame. Within the DEM formalism, there are various kinds of contact laws to determine the contact force $\bm{F}^c$. One of the most widely used models is the linear spring dashpot model introduced by Cundall and Strack~\cite{cundall1979discrete}. The normal component of $\bm{F}^c$ between particles $i$ and $j$ is given by: \begin{equation} F_n^{c} = k_n\delta+\eta_n \left(\bm{v}_j-\bm{v}_i \right)\cdot\bm{n}, \end{equation} where $\delta$ is the particle overlapping distance, $k_n$ is the normal spring stiffness and $\eta_n$ is the normal damping coefficient. The unit normal vector $\bm{n}$ points from particle $j$ to particle $i$.
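A minimal Python sketch of the linear spring-dashpot normal force above (the stiffness and damping values are illustrative, not taken from the paper):

```python
import numpy as np

def normal_contact_force(delta, v_i, v_j, n, k_n=1.0e4, eta_n=5.0):
    """F_n^c = k_n*delta + eta_n*(v_j - v_i).n (illustrative k_n, eta_n)."""
    return k_n * delta + eta_n * np.dot(v_j - v_i, n)

# Example: overlap of 1e-3 with the two particles approaching along n.
n = np.array([1.0, 0.0, 0.0])        # unit normal, pointing from j to i
v_i = np.array([-0.1, 0.0, 0.0])
v_j = np.array([0.1, 0.0, 0.0])
F_n = normal_contact_force(1e-3, v_i, v_j, n)
print(F_n)  # spring part 10.0 plus damping part 1.0 -> 11.0
```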
The tangential contact force $F_t^{c}$ follows Coulomb's law, $F_t^{c} \leq \mu_s F_n^{c}$, and can be determined as: \begin{equation} \begin{cases} F_t^{c} &= \min(\mu_s F_n^{c}, F_{t0}^{c}),\\[2ex] F_{t0}^{c} &= \norm{-k_t \bm{\xi} - \eta_t \left(\bm{v}_j-\bm{v}_i \right)\cdot\bm{t}}, \end{cases} \end{equation} where $\mu_s$ is the static friction coefficient, and $k_t$ and $\eta_t$ are the tangential spring stiffness and damping coefficient. The unit tangential vector $\bm{t}$ and the tangential spring $\bm{\xi}$ can be determined from the relative velocity. More details can be found in~\cite{luding2008introduction,solov2017mbn,chen2020efficient}. The second-order Velocity Verlet scheme is employed to solve Eq.~\ref{eq:DEM} numerically. \subsubsection{Metaball function} One key aspect of modern DEM schemes is the shape descriptor for non-spherical particles. Recently, the authors introduced a novel shape descriptor, the metaball function~\cite{zhang2021metaball}, which can be used for non-spherical particles with round features, such as river pebbles. The metaball function describes particle shapes by an analytical expression; it can be considered a natural extension of the sphere, ellipsoid, etc. The metaball equation used in this study is defined as: \begin{equation} M(\bm{x})=\sum_{i=0}^{n-1}{\frac{K_i}{\Vert\bm{x}-\bm{x}_i\Vert^2}}=1 \label{eq:metaball} \end{equation} where $\bm{x}_i$ is the $i$th control point, which determines the skeleton of the shape, $K_i$ is the corresponding weight, which controls the influence range of $\bm{x}_i$, and $n$ is the total number of control points. It is clear that a sphere can be described by Eq.~\ref{eq:metaball} with a single control point as the centre and $\sqrt{K_0}$ as the radius. One advantage of Eq.~\ref{eq:metaball} is that there are no constraints on choosing the control points, and the weights can take any non-negative value. Because of this flexibility, the above metaball equation can be used to describe many complex shapes.
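Eq.~\ref{eq:metaball} is cheap to evaluate directly. A small Python sketch illustrating the sphere special case (a single control point with weight $K_0=r^2$ gives $M=1$ exactly on a sphere of radius $r$):

```python
import numpy as np

def metaball(x, points, weights):
    """Metaball function M(x) = sum_i K_i / ||x - x_i||^2."""
    x = np.asarray(x, dtype=float)
    return sum(K / np.sum((x - np.asarray(p))**2)
               for p, K in zip(points, weights))

# Single control point at the origin with K0 = r^2: M = 1 on ||x|| = r.
r = 0.5
points, weights = [(0.0, 0.0, 0.0)], [r**2]
on_surface = metaball((r, 0, 0), points, weights)    # -> 1.0
outside    = metaball((2*r, 0, 0), points, weights)  # -> 0.25, M < 1 outside
print(on_surface, outside)
```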
Although there are no limitations on using metaballs for non-convex shapes, only convex shapes are considered here as a first step. A 2D metaball particle and its control points are illustrated in Fig.~\ref{fig:2d_meta}. The contour plot of $M(\bm{x})$ also highlights that its value decreases as the enclosed area (volume) increases when $M(\bm{x})<1$. This property will be used to handle collisions between metaballs later. \begin{figure}[b] \begin{centering} \includegraphics{2d_meta.eps} \caption{2D graphic illustration of a metaball particle and its control points; the dashed lines refer to Eq.~\ref{eq:metaball} with different values.} \label{fig:2d_meta} \end{centering} \end{figure} \subsubsection{Collision between two metaballs} The collision algorithm for two metaballs requires defining collision properties including the contact point, contact normal and overlap. To avoid intersections between two metaballs, a sphero-metaball approach is developed in~\cite{zhang2021metaball}, where the original metaball is eroded to an internal metaball with a similar shape and then dilated by a sphere with radius $R_s$. The task of finding the closest points of the internal metaballs can be solved as an optimization problem with the help of the analytical expression of the metaballs (Eq.~\ref{eq:metaball}). The optimization problem is defined as follows: \begin{equation} \begin{aligned} &Minimize \quad M_0(\bm{x})+M_1(\bm{x}) \\ &subject \text{ } to \quad c_{tol}<\abs{M_0(\bm{x})}<1, \quad c_{tol}<\abs{M_1(\bm{x})}<1 \end{aligned} \label{eq:opt} \end{equation} where $M_0(\bm{x})$ and $M_1(\bm{x})$ are the metaball functions of the two particles, and $c_{tol}$ is a small tolerance to avoid the solution of $M_0(\bm{x})+M_1(\bm{x})=0$ when $\norm{\bm{x}}\to \infty$.
If a local minimum exists, the gradient of the objective in Eq.~\ref{eq:opt} must vanish: \begin{equation} \nabla (M_0(\bm{x})+M_1(\bm{x}))=\bm{0} \label{eq:grad} \end{equation} Eq.~\ref{eq:grad} is solved numerically by the Newton-Raphson method in this study; more details can be found in~\cite{zhang2021metaball}. Once the local minimum point $\bm{x}_m$ is found, the closest points on the metaballs ($\bm{x}_{c0}$ and $\bm{x}_{c1}$) are approximated by the intersections of the metaball surfaces with the lines through $\bm{x}_m$ in the directions $\nabla M_0(\bm{x}_m)$ and $\nabla M_1(\bm{x}_m)$, as shown in Eq.~\ref{eq:cp}. \begin{equation} \begin{cases} & \bm{x}_{c0} = \bm{x}_m + q_0\nabla M_0(\bm{x}_m) \\ & \bm{x}_{c1} = \bm{x}_m + q_1\nabla M_1(\bm{x}_m) \end{cases} \label{eq:cp} \end{equation} By using Eq.~\ref{eq:cp} and the Taylor series expansion of $M_0(\bm{x}_{c0})$ about the point $\bm{x}_m$, ignoring second and higher order terms, and combining with $M_0(\bm{x}_{c0})=1$, $q_0$ can be expressed explicitly as: \begin{equation} q_0 = \frac{1-M_0(\bm{x}_m)}{\norm{\nabla M_0(\bm{x}_m)}^2} \label{eq:k0} \end{equation} $q_1$ can be calculated in the same way. Eqs.~\ref{eq:cp} and \ref{eq:k0} are fairly accurate when particles are close to contact and tend to overestimate the minimum distance when particles are far away, since $M(\bm{x})$ decreases quadratically with distance. In other words, no collision happens when the error of Eq.~\ref{eq:cp} is large.
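The whole procedure can be checked on the simplest configuration, two spherical metaballs on a common axis, where the minimiser and the exact surface points are known. A Python sketch (illustrative values; $K=1$ spheres of radius $1$ with a small gap of $0.1$):

```python
# Two unit spherical metaballs with centres at x = 0 and x = d on the x-axis;
# by symmetry the minimiser x_m of M_0 + M_1 lies at d/2, so the Newton
# iteration on the gradient and the projection of Eqs. (cp)-(k0) can be
# checked against the exact surface point at x = 1.
d = 2.1                                      # near contact: gap of 0.1
g  = lambda x: -2.0/x**3 + 2.0/(d - x)**3    # d/dx (M_0 + M_1)
dg = lambda x:  6.0/x**4 + 6.0/(d - x)**4

x = 1.0                                      # initial guess
for _ in range(20):                          # Newton-Raphson on the gradient
    x -= g(x) / dg(x)

M0    = 1.0 / x**2                           # M_0 at the minimiser
gradM = -2.0 / x**3                          # dM_0/dx at the minimiser
q0 = (1.0 - M0) / gradM**2                   # Eq. (k0)
x_c0 = x + q0 * gradM                        # Eq. (cp): estimated closest point
print(x, x_c0)  # x ~ 1.05; x_c0 ~ 0.996, close to the exact value 1.0
```

With a larger gap the same projection lands well inside the surface, illustrating the overestimation of the minimum distance mentioned above.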
Finally, the overlap $\delta$, contact direction $\bm{n}$ and contact point $\bm{x}_{cp}$ are defined as: \begin{equation} \begin{cases} & \delta = R_{s0}+R_{s1}-\norm{\bm{x}_{c1} - \bm{x}_{c0}} \\ & \bm{n} = \frac{\bm{x}_{c0} - \bm{x}_{c1}}{\norm{\bm{x}_{c0} - \bm{x}_{c1}}} \\ & \bm{x}_{cp} = \bm{x}_{c0} + (R_{s0}-0.5\delta)\bm{n} \end{cases} \label{eq:contactprop} \end{equation} \begin{figure}[t] \begin{centering} \includegraphics[width=0.6\linewidth]{sphere_meta.eps} \caption{An illustration of a sphero-metaball; the solid and dashed lines refer to the original metaball and the internal metaball. The original metaball is approximated by the Minkowski sum of the internal metaball and a sphere.} \label{fig:sphere_meta} \end{centering} \end{figure} \subsubsection{Collision between metaball and plane} The collisions between a metaball and a plane are handled similarly to metaball-metaball collisions. The task becomes finding the closest points between the metaball and the plane, which is equivalent to finding a point on the metaball whose normal is perpendicular to the plane and points towards the inside of the plane. This problem can be simplified by rotating the coordinate system around an internal point (usually the mass centre) of the metaball to make sure that the normal of the plane is parallel to the x-axis (Fig.~\ref{fig:metawall}). Since the distance between the metaball and the plane is small, the problem can be further modified as finding a point $\bm{x}_{cw}$ on the plane at which the normalized gradient of the rotated metaball function $M^{R}$ equals $(-1,0,0)$. Thus we have: \begin{equation} \begin{cases} & \frac{\partial}{\partial y} M^{R}(\bm{x}^{R}_{cw}) = 0\\ & \frac{\partial}{\partial z} M^{R}(\bm{x}^{R}_{cw}) = 0\\ & \frac{\partial}{\partial x} M^{R}(\bm{x}^{R}_{cw}) < 0 \end{cases} \label{eq:metawall} \end{equation} Eq.~\ref{eq:metawall} is also solved by the Newton-Raphson method.
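A Python sketch of this Newton-Raphson solve for a single spherical metaball against the plane $x=0$ (illustrative centre and weight; for a sphere the exact solution is the projection of the centre onto the plane, so the iteration can be verified):

```python
import numpy as np

# Spherical metaball M(x) = K/||x - c||^2 against the plane x = 0
# (already-rotated frame, plane normal along the x-axis).
K = 1.0
c = np.array([-2.0, 0.3, -0.4])        # centre on the negative-x side

def grad_yz(yz):
    """(dM/dy, dM/dz) at the plane point (0, y, z)."""
    p = np.array([0.0, yz[0], yz[1]]) - c
    r2 = np.dot(p, p)
    return -2.0 * K * p[1:] / r2**2    # grad M = -2K (x - c) / ||x - c||^4

yz = np.array([0.0, 0.0])              # initial guess on the plane
for _ in range(30):                    # Newton with finite-difference Jacobian
    F = grad_yz(yz)
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = 1e-6
        J[:, j] = (grad_yz(yz + e) - F) / 1e-6
    yz = yz - np.linalg.solve(J, F)

# At the solution, dM/dx < 0 also holds, consistent with Eq. (metawall).
print(yz)  # -> approximately (0.3, -0.4), the foot of the centre
```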
The previous $\bm{x}^{R}_{cw}$ is used as the initial point if this potential collision already existed at the previous time step. Otherwise, the initial point is determined by projecting the control point with the smallest distance to the plane. \begin{figure}[t] \begin{centering} \includegraphics[width=0.6\linewidth]{metawall.eps} \caption{Illustration of the contact scheme for metaball-plane collision.} \label{fig:metawall} \end{centering} \end{figure} Once $\bm{x}^{R}_{cw}$ is determined, the corresponding closest point on the metaball, $\bm{x}^{R}_{cm}$, is found as: \begin{equation} \begin{aligned} \bm{x}^{R}_{cm} & = \bm{x}^{R}_{cw} + q^R\nabla M^{R}(\bm{x}^{R}_{cw}) \end{aligned} \label{eq:metawallcp} \end{equation} Using $M^{R}(\bm{x}^{R}_{cm})=1$ and a Taylor series expansion, we have: \begin{equation} q^R = \frac{1-M^R(\bm{x}^{R}_{cw})}{\norm{\nabla M^R(\bm{x}^{R}_{cw})}^2} \label{eq:k} \end{equation} The closest points are then rotated back to the global coordinate system, and the overlap $\delta$, contact direction $\bm{n}$ and contact point $\bm{x}_{cp}$ are defined as: \begin{equation} \begin{cases} & \delta = R_{s}-\norm{\bm{x}_{cm} - \bm{x}_{cw}} \\ & \bm{n} = \frac{\bm{x}_{cm} - \bm{x}_{cw}}{\norm{\bm{x}_{cm} - \bm{x}_{cw}}} \\ & \bm{x}_{cp} = \bm{x}_{cw} + 0.5\delta\bm{n} \end{cases} \label{eq:contactprop2} \end{equation} It is worth mentioning that this collision algorithm could potentially be used to couple the metaball DEM with traditional polyhedral DEM to enhance the modelling capability of this method. \subsection{Lattice Boltzmann Method}\label{sec:lbm} The fluid flow is simulated by the lattice Boltzmann equation (LBE), a discretized form of the Boltzmann equation~\cite{galindo2012numerical,galindo2013coupled, galindo2013lattice}, and the D3Q15 lattice model is used, where space is divided into cubic lattices. The velocity domain is discretized into fifteen velocity vectors as shown in Fig.~\ref{fig:lbmcell}.
The discrete velocity vectors are defined as follows: \begin{equation*} \resizebox{0.8\hsize}{!} {$\bm{e}_i = \left\{ \begin{array}{l l l} 0, & \quad \text{$i$ = 0,}\\[0.5ex] (\pm C,0,0),(0,\pm C,0),(0,0,\pm C), & \quad \text{$i$ = 1 to 6,}\\[0.5ex] (\pm C,\pm C,\pm C), & \quad \text{$i$ = 7 to 14,}\\ \end{array} \right.$} \end{equation*} where $C=\Delta{x}_{LBM}/\Delta{t}_{LBM}$ is the characteristic lattice velocity, and $\Delta{x}_{LBM}$ and $\Delta{t}_{LBM}$ are the lattice size and time step of LBM. \begin{figure}[b] \begin{centering} \includegraphics[width=0.6\linewidth]{lbmcell} \caption{Discrete velocity vectors for D3Q15~\cite{galindo2013coupled}.} \label{fig:lbmcell} \end{centering} \end{figure} Based on the Chapman-Enskog expansion of the LBE, an evolution rule is applied to every distribution function~\cite{guo2002discrete}: \begin{equation} f_i(\bm{x}+\bm{e}_i\Delta{t}_{LBM},t+\Delta{t}_{LBM}) = f_i(\bm{x},t) + \Omega_{col}, \end{equation} where $f_i$ is the probability distribution function, $\bm{x}$ is the position of the local lattice node, and $\Omega_{col}$ is the collision operator. The well-known Bhatnagar-Gross-Krook (BGK) collision operator is used in this study, \begin{equation} \Omega_{col} = \frac{\Delta{t}_{LBM}}{\tau}(f^{eq}_{i}-f_i), \label{eq:lbm_collide} \end{equation} where $\tau$ is the relaxation time and $f^{eq}_{i}$ is the equilibrium distribution given by \begin{equation} f^{eq}_{i} = \omega_i\rho_f \bigg(1 + 3\frac{\bm{e}_i \cdot \bm{u}_f}{C^2} + \frac{9(\bm{e}_i \cdot \bm{u}_f)^2}{2C^4} - \frac{3u_f^2}{2C^2}\bigg), \end{equation} The weights are $\omega_0=2/9$, $\omega_i=1/9$ for $i=$1 to 6, and $\omega_i=1/72$ for $i=$7 to 14. The kinematic viscosity is related to the relaxation time by \begin{equation} \nu = \frac{\left(\Delta x_{LBM}\right)^2}{3\Delta t_{LBM}}\bigg(\tau - \frac{1}{2}\bigg), \end{equation} and the Mach number is defined as the ratio of the maximum fluid velocity to $C$.
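As a consistency check, the D3Q15 equilibrium above reproduces the density and momentum moments exactly. A minimal Python sketch in lattice units ($C=1$; the density and velocity values are illustrative):

```python
import numpy as np

C = 1.0
e = np.array([[0,0,0],
              [1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1],
              [1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1],
              [-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1]], dtype=float) * C
w = np.array([2/9] + [1/9]*6 + [1/72]*8)

def f_eq(rho, u):
    """D3Q15 BGK equilibrium distribution (second order in u)."""
    eu = e @ u
    return w * rho * (1 + 3*eu/C**2 + 4.5*eu**2/C**4 - 1.5*(u @ u)/C**2)

rho, u = 1.2, np.array([0.05, -0.02, 0.01])
f = f_eq(rho, u)
print(np.isclose(f.sum(), rho))        # zeroth moment recovers the density
print(np.allclose(e.T @ f, rho * u))   # first moment recovers the momentum
```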
When $Ma\ll1$, the LBE recovers the Navier-Stokes equations. More details can be found in~\cite{mohamad2011lattice}. The macroscopic properties of the fluid such as the density $\rho_f$ and flow velocity $\bm{u}_f$ can be determined by the zeroth and first-order moments of the distribution function: \begin{equation} \begin{array}{ll} \rho_f(\bm{x}) &= \sum_{i=0}^{14} f_i(\bm{x}),\\[2ex] \bm{u}_f(\bm{x}) &= \frac{1}{\rho_f(\bm{x})} \sum_{i=0}^{14} f_i(\bm{x})\bm{e}_i . \end{array} \label{eq:blm_rhov} \end{equation} \subsection{Coupling scheme between MDEM and LBM} To successfully model fluid-structure interactions, the no-penetration non-slip boundary conditions need to be imposed on the fluid-solid interface, and the hydrodynamic forces acting on particles are also required. \subsubsection{Interpolated bounce-back scheme for moving boundary condition} The LBM nodes are divided into fluid nodes and solid nodes; the fluid nodes which are next to the solid boundary are further identified as boundary nodes ($f$ in Fig.~\ref{fig:ibb}). Since a uniform mesh is used in classic LBM, curved boundaries are generally located between boundary nodes and solid nodes. Thus, the distribution functions at boundary nodes that streamed from solid nodes are missing; the critical task is to determine these missing distribution functions properly. \begin{figure}[t] \begin{centering} \includegraphics[width=0.8\linewidth]{ibb} \caption{Schematic of the interpolated bounce-back rule at the fluid-structure interface, where ``s" stands for the closest solid node, ``w" for the wall, ``f" for the boundary node, and ``ff" for the neighbouring fluid node of ``f".} \label{fig:ibb} \end{centering} \end{figure} The simplest solution is the bounce-back rule, where molecules departing from $\bm{x}_f$ with velocity $\bm{e}_{i'}$ hit the wall and return to $\bm{x}_f$ with the opposite discrete velocity $\bm{e}_{i}$.
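Implementing the bounce-back rule requires the opposite-direction index $i'$ with $\bm{e}_{i'}=-\bm{e}_i$; a small Python sketch building this map for the D3Q15 velocity set:

```python
import numpy as np

# D3Q15 discrete velocity set (lattice units).
e = np.array([[0,0,0],
              [1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1],
              [1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1],
              [-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1]])

# opposite[i] is the index i' such that e_{i'} = -e_i.
opposite = [int(np.flatnonzero((e == -ei).all(axis=1))[0]) for ei in e]
print(opposite)  # e.g. opposite[1] == 2 and opposite[7] == 14
```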
It is clear that the wall is assumed to be located at the middle point between $\bm{x}_f$ and $\bm{x}_s$ regardless of its actual position, where $\bm{x}_s$ is the neighbouring solid node. This assumption leads to stairwise boundaries which damage the second-order accuracy of LBM. Therefore, interpolated bounce-back (IBB) schemes~\cite{bouzidi2001momentum} are proposed to reduce geometrical errors. The idea is to interpolate the missing distribution functions from existing ones, and the interpolation weights depend on the distance $q=\Vert\bm{x}_f-\bm{x}_w\Vert/\Vert\bm{x}_f-\bm{x}_s\Vert$, where $\bm{x}_w$ is the intersection point between the solid surface and the discrete velocity direction. The original IBB scheme needs to treat the cases $q \leqslant 0.5$ and $q>0.5$ separately. Yu et al.~\cite{yu2003viscous} proposed a unified IBB scheme where the distribution at the solid boundary $f_{i'}(\bm{x}_w,t+\Delta{t}_{LBM})$ is evaluated first, then the bounce-back rule is applied, and the missing distribution at $\bm{x}_f$ after streaming, $f_{i}(\bm{x}_f,t+\Delta{t}_{LBM})$, is interpolated between $f_{i}(\bm{x}_w,t+\Delta{t}_{LBM})$ and $f_{i}(\bm{x}_{ff},t+\Delta{t}_{LBM})$. However, it is found that classical IBB schemes cannot guarantee non-slip conditions at solid surfaces, particularly at high Reynolds numbers. Recently, a velocity interpolation-based bounce-back scheme (VIBB) was proposed to reduce the slipping error~\cite{zhang2019velocity}. The VIBB scheme is based on the following observation: we can always find a point $\bm{x}_d$ such that the distribution departing from $\bm{x}_d$ will arrive at $\bm{x}_f$ after streaming and bounce-back.
The unknown $f_{i}(\bm{x}_f,t+\Delta{t}_{LBM})$ is determined as: \begin{equation} f_{i}(\bm{x}_f,t+\Delta{t}_{LBM}) = f^{+}_{i'}(\bm{x}_d,t) + 6\omega_{i'}\rho_{f}\frac{\bm{e}_i \cdot \bm{u}_{w}}{C^2}, \label{eq:vibb} \end{equation} where the particle surface velocity is given as $\bm{u}_{w} = \bm{v}_{pj} + \bm{w}_{pj} \times (\bm{x}_w-\bm{x}_{pj})$, with $\bm{v}_{pj}$ and $\bm{w}_{pj}$ the translational and angular velocities at the $j$th particle's centroid $\bm{x}_{pj}$, respectively. $f^{+}_{i'}(\bm{x}_d,t)$ is decomposed into an equilibrium part $f^{eq}$ and a non-equilibrium part $f^{neq}$: \begin{equation} f^{+}_{i'}(\bm{x}_d,t) = f^{eq}_{i'}(\rho_d,\bm{u}_d) + f^{neq}_{i'}(\bm{x}_d,t); \label{eq:eqneq} \end{equation} notice that the distributions are dominated by the equilibrium part, since the variations of $f^{neq}$ are one order of magnitude smaller than those of $f^{eq}$. Thus, it is safe to interpolate/extrapolate $f^{neq}$ and $\rho$ with second-order accuracy~\cite{guo2002extrapolation}: \begin{equation} f^{neq}_{i'}(\bm{x}_d,t) = 2q\left(f^{+}_{i'}(\bm{x}_f,t)-f^{eq}_{i'}(\bm{x}_f,t)\right) + (1-2q)\left(f^{+}_{i'}(\bm{x}_{ff},t)-f^{eq}_{i'}(\bm{x}_{ff},t)\right), \label{eq:neqd} \end{equation} and the density at $\bm{x}_d$ is evaluated as: \begin{equation} \resizebox{0.6\hsize}{!} {$\rho_d = \left\{ \begin{array}{l l} \text{$2q\rho_f + (1-2q)\rho_{ff}$}, & \quad \text{$q \leqslant 0.5$,}\\[3ex] \text{$\rho_f$}, & \quad \text{$q > 0.5$.}\\[0.ex] \end{array} \right.$} \label{eq:rhod} \end{equation} Based on the above analysis, $\bm{u}_d$ plays the most important role in determining the unknown distributions. Fortunately, $\bm{u}_w$, $\bm{u}_f$ and $\bm{u}_{ff}$ are all known.
$\bm{u}_d$ in Eq.~\ref{eq:eqneq} can be evaluated by piecewise linear interpolation: \begin{equation} \resizebox{0.6\hsize}{!} {$\bm{u}^{*}_d = \left\{ \begin{array}{l l} \text{$2q\bm{u}_f + (1-2q)\bm{u}_{ff}$}, & \quad \text{$q \leqslant 0.5$,}\\[3ex] \text{$\frac{1-q}{q}\bm{u}_f + \frac{2q-1}{q}\bm{u}_w$}, & \quad \text{$q > 0.5$,}\\[0.ex] \end{array} \right.$} \end{equation} or by linear interpolation between $\bm{u}_w$ and $\bm{u}_{ff}$, without using $\bm{u}_{f}$: \begin{equation} \bm{u}^{**}_d = \frac{1-q}{1+q}\bm{u}_{ff} + \frac{2q}{1+q}\bm{u}_w. \label{eq:ud2} \end{equation} Here, we determine $\bm{u}_d$ by a weighted average of $\bm{u}^{*}_d$ and $\bm{u}^{**}_d$: \begin{equation} \bm{u}_d = \frac{1}{3}\bm{u}^{*}_d + \frac{2}{3}\bm{u}^{**}_d. \label{eq:ud} \end{equation} To couple MDEM with LBM, the parameter $q$ needs to be determined. The intersection point $\bm{x}_{w}$ between a metaball $M(\bm{x})$ and an LBM discrete velocity $\bm{e}_i$ must satisfy $M(\bm{x}_{w}) = c_0$, where $c_0$ is a special metaball function value that depends on the spherical radius $R_s$. In practice, $c_0$ is determined by the minimum function value on the particle surface (see Fig.~\ref{fig:sphere_meta}). $q$ can be calculated by solving the following equation: \begin{equation} M(\bm{x}_{f}+q \bm{e}_i) = c_0. \label{eq:calq} \end{equation} Unfortunately, the solution of Eq.~\ref{eq:calq} does not have an explicit form in general. Although iterative methods such as the Newton-Raphson method can be used, their high computational cost makes them less favourable, since $q$ has to be updated at every time step. Here, we propose a simple approximation for $q$: \begin{equation} q = \frac{c_0-M(\bm{x}_{f})}{M(\bm{x}_{s})-M(\bm{x}_{f})}. \label{eq:appq} \end{equation} As illustrated in Fig.~\ref{fig:meta_q}, if the LBM lattice size is considerably smaller than the particle size, it is reasonable to assume that the metaball function varies linearly with the distance to the solid surface.
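As a quick illustration of Eq.~\ref{eq:appq} (our own minimal sketch, not the implementation used in this work; the node positions and radius are hypothetical, in lattice units), the approximation can be evaluated for a spherical metaball, where the exact intersection is known in closed form:

```python
def metaball_sphere(x, xp, k0):
    """Metaball function of a single sphere: M(x) = k0 / |x - xp|^2."""
    r2 = sum((a - b) ** 2 for a, b in zip(x, xp))
    return k0 / r2

def approx_q(xf, xs, M, c0):
    """Linear approximation of q: the fraction of the link from the boundary
    fluid node xf to the solid node xs that is cut by the surface M = c0."""
    Mf, Ms = M(xf), M(xs)
    return (c0 - Mf) / (Ms - Mf)

# Hypothetical setup: a sphere of radius 7.5 centred at the origin, so the
# surface is the level set M(x) = c0 = 1 with k0 = 7.5**2.
xp = (0.0, 0.0, 0.0)
k0 = 7.5 ** 2
M = lambda x: metaball_sphere(x, xp, k0)

xf = (8.2, 0.0, 0.0)   # boundary (fluid) node, just outside the surface
xs = (7.2, 0.0, 0.0)   # neighbouring solid node, just inside the surface
q_lin = approx_q(xf, xs, M, c0=1.0)

# For a sphere the surface sits at r = 7.5, so the exact q along this
# axis-aligned link is 8.2 - 7.5 = 0.7; the linear estimate is close.
q_exact = 8.2 - 7.5
print(q_lin, q_exact)
```

On this example the linear estimate deviates from the exact value by a few percent, which is in line with the accuracy reported in the text for the comparison against Newton's method.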
Eq.~\ref{eq:appq} also guarantees that $q\in [0,1]$. The accuracy of Eq.~\ref{eq:appq} is examined by comparison with Newton's method (with tolerance $10^{-7}$ and a maximum of 100 iterations); the relative error is found to be around 5\%. The settling velocities obtained with both methods are identical, implying that Eq.~\ref{eq:appq} is a good approximation for determining $q$. \begin{figure} \centering \subfigure{\includegraphics{2d_a}} \caption{Illustration of the intersection between an LBM discrete velocity (black arrow) and a metaball (blue curve). The contour plot shows that the metaball function value can be used to calculate the intersection points.} \label{fig:meta_q} \end{figure} \subsubsection{Momentum exchange method for hydrodynamic forces} The influence of the solid particles on the fluid is modelled by the above no-penetration non-slip boundary conditions; the particles interact with the fluid through the hydrodynamic force $\bm{F}^h$ and torque $\bm{T}^h$, which appear in Eq.~\ref{eq:DEM}. Accurate and efficient calculation of $\bm{F}^h$ and $\bm{T}^h$ is essential for a successful coupling scheme. One widely used scheme is the momentum exchange method (MEM)~\cite{ladd1994numerical}, where the hydrodynamic forces are calculated as the sum of the momentum exchanges along every discrete velocity that intersects a solid surface. MEM is extensively used for fluid-particle interactions. However, it suffers from numerical noise, which introduces strong fluctuations in the hydrodynamic forces~\cite{peng2016implementation}. Wen et al.~\cite{wen2014galilean} showed that the original momentum exchange method does not obey the Galilean invariance principle. They further proposed a Galilean invariant momentum exchange method which relieves the numerical noise considerably.
Therefore, the Galilean invariant momentum exchange method is adopted in this work; the hydrodynamic force and torque acting on the $j$th particle are given as: \begin{equation} \bm{F}^{h}_j = \sum_{i \in \Gamma_j} \left[ (\bm{e}_i-\bm{u}_w)f_i(\bm{x}_f, t) - (\bm{e}_{i'}-\bm{u}_w)f_{i'}(\bm{x}_f,t) \right], \label{eq:mem} \end{equation} \begin{equation} \bm{T}^{h}_j = \sum_{i \in \Gamma_j} \left( \bm{x}_w - \bm{x}_{pj} \right) \times \left[ (\bm{e}_i-\bm{u}_w)f_i(\bm{x}_f, t) - (\bm{e}_{i'}-\bm{u}_w)f_{i'}(\bm{x}_f,t) \right], \label{eq:mem_t} \end{equation} where $\Gamma_j$ represents the set of all discrete velocities that intersect the $j$th particle. Compared with the original momentum exchange method, $\bm{e}_i$ is shifted by the solid velocity. \subsubsection{Local refilling algorithm for new fluid nodes} One drawback of sharp solid boundaries is that solid nodes may switch to fluid nodes carrying no fluid information, since particles can move freely within the fluid domain. Therefore, these new fluid nodes need to be initialized with proper distribution functions. This procedure is often referred to as the refilling algorithm. Peng et al.~\cite{peng2016implementation} discussed the influence of different refilling algorithms on numerical stability and accuracy; their results showed that refilling may contribute significantly to the numerical noise in the fluctuating hydrodynamic forces. Most refilling algorithms require interpolating/extrapolating information from neighbouring nodes. Here, a local refilling algorithm is proposed. It is based on the following observation: a new fluid node is always close to the solid surface, due to the low Mach number requirement of LBM.
Therefore, it is reasonable to apply the bounce-back rule to those missing distribution functions whose opposite distributions are known after streaming: \begin{equation} f_{i}(\bm{x}_{new}, t) = f_{i'}(\bm{x}_{new}, t) + 6\omega_{i'}\rho_{f}\frac{\bm{e}_i \cdot \bm{u}_{w}}{C^2}, \end{equation} where $\bm{x}_{new}$ is the position of the new fluid node. If $f_{i'}(\bm{x}_{new}, t)$ does not exist, equilibrium refilling is used: $f_{i}(\bm{x}_{new}, t) = f^{eq}(\rho_0, \bm{u}_{new})$, where $\bm{u}_{new} = \bm{v}_{pj} + \bm{w}_{pj} \times (\bm{x}_{new}-\bm{x}_{pj})$. The proposed refilling algorithm is evidently a local scheme and does not depend on interpolations/extrapolations. After the distribution functions are refilled, the macroscopic properties such as density and velocity are calculated via Eq.~\ref{eq:blm_rhov}. \subsubsection{Sub-cycling time integration} There are two time steps involved in the DEM-LBM coupling scheme. The DEM time step $\Delta t_{DEM}$ is typically around~$10^{-6}$ \si{s}, and it can easily reach~$10^{-7}$ \si{s} to make sure contacts are properly resolved in time. On the other hand, the LBM time step $\Delta t_{LBM}$ is often several orders of magnitude larger than the DEM one, since it is a function of the viscosity as in Eq.~\ref{eq:lbm_collide}. Therefore, the sub-cycling time integration proposed by Feng et al.~\cite{feng2007coupled} is used in this study. After one LBM computational step, the hydrodynamic force and torque are assumed unchanged and sub-cycling is used to update contact forces, particle positions, and velocities. The number of sub-cycling steps is defined as $n_s=\Delta t_{LBM}/\Delta t_{DEM}$. We found that $n_s$ has little influence on the overall accuracy if $\Delta t_{DEM}$ is small enough to guarantee a decent contact resolution.
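The sub-cycling procedure above can be sketched for a single particle as follows. This is a schematic illustration under stated assumptions, not the actual implementation: a constant scalar stands in for the hydrodynamic force, a generic position-dependent force stands in for the contact model, and a semi-implicit Euler update stands in for the DEM integrator; all names and numerical values are hypothetical.

```python
def subcycle(x, v, m, F_h, F_ext, dt_lbm, dt_dem):
    """Advance one particle over a single LBM step using n_s DEM sub-cycles.
    The hydrodynamic force F_h is frozen during the sub-cycles, while the
    position-dependent force F_ext(x) (e.g. contacts) is re-evaluated at
    every sub-step."""
    n_s = int(round(dt_lbm / dt_dem))   # number of sub-cycling steps
    for _ in range(n_s):
        a = (F_h + F_ext(x)) / m        # total acceleration this sub-step
        v = v + a * dt_dem              # semi-implicit Euler update
        x = x + v * dt_dem
    return x, v

# Free fall with a constant drag-like hydrodynamic force and no contacts;
# the hypothetical time steps give n_s = 100 sub-cycles per LBM step.
x, v = subcycle(x=0.0, v=0.0, m=1.0, F_h=0.2, F_ext=lambda x: -9.8,
                dt_lbm=2.0e-4, dt_dem=2.0e-6)
```

In the coupled solver the same loop simply runs after each LBM collision--streaming step, with $\bm{F}^h$ and $\bm{T}^h$ from the momentum exchange held fixed over the $n_s$ DEM sub-steps.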
\subsubsection{Special treatments to handle low resolution between particles}\label{sec:treatment} IBB and MEM enjoy high accuracy due to the sharp interface representation but suffer from numerical stability issues for the same reason, particularly when simulations involve multiple particles. Therefore, diffuse interface based schemes such as the Immersed Boundary Method and the Immersed Moving Boundary method are often used for DEM-LBM coupling despite their non-physical interface representation. Here, we show that the numerical stability of IBB and MEM can be significantly enhanced with proper treatments. 1. When an LBM node lies between two particles, there may not be enough fluid nodes to conduct VIBB; the halfway bounce-back is used in this case. \begin{figure}[t] \begin{centering} \includegraphics[width=0.8\linewidth]{force_point} \caption{When the gap between particles is small, some neighbouring nodes of a particle are covered by other particles, causing an anisotropic distribution of force points.} \label{fig:force_point} \end{centering} \end{figure} 2. When the particle-particle or particle-wall gap is small, the distribution of force points on the particle surface becomes less isotropic, since only the momentum exchange from fluid nodes is considered (see Fig.~\ref{fig:force_point}). This anisotropy can introduce a significant disturbance to the particle dynamics. We found that the key to restoring isotropy is to take into account the momentum exchange at the missing force points. Due to the limited fluid information, the distribution functions at the missing force points are reconstructed as equilibrium distributions with the initial density $\rho_0$ and velocity $\bm{u}_w$.
The hydrodynamic force and torque at the missing force points are then evaluated as: \begin{equation} \bm{F}^{h,i}_j = (\bm{e}_i-\bm{u}_w)f^{eq}_i(\rho_0,\bm{u}_w) - (\bm{e}_{i'}-\bm{u}_w)f^{eq}_{i'}(\rho_0,\bm{u}_w), \label{eq:fh_s_issue} \end{equation} \begin{equation} \bm{T}^{h,i}_j = \left( \bm{x}_w - \bm{x}_{pj} \right) \times \left[ (\bm{e}_i-\bm{u}_w)f^{eq}_i(\rho_0,\bm{u}_w) - (\bm{e}_{i'}-\bm{u}_w)f^{eq}_{i'}(\rho_0,\bm{u}_w) \right]. \label{eq:th_s_issue} \end{equation} It is worth mentioning that Rettinger and R\"ude~\cite{rettinger2022efficient} also reported similar issues recently. In their treatment, the missing hydrodynamic force is given by $\bm{F}^{h,i}_j=2w_i \rho_0 \bm{e}_i$, which is consistent with Eq.~\ref{eq:fh_s_issue}. In fact, Eq.~\ref{eq:fh_s_issue} becomes identical to their treatment if $\bm{u}_w=\bm{0}$. \section{Validation}\label{sec:validation} \subsection{Settling of a single sphere with metaball equation}\label{sec:settling_sphere} To validate the proposed model, the settling of a single sphere in a viscous fluid is simulated to examine the dynamic behaviour of the sphere and the associated fluid motion. The time evolution of the settling velocity is compared with experimental results. The domain size is $0.1 \times 0.1 \times 0.6$ \si{m}. A spherical particle with diameter $d_p=0.015$ \si{m} and density $\rho_p=1120$ \si{kg/m^3} is placed at a height of $0.12$ \si{m} above the bottom. Four different fluids are used, with fluid densities $\rho_f= 970, 965, 962, 960$ \si{kg/m^3} and kinematic viscosities $\nu= 3.85 \times 10^{-4}, 2.2 \times 10^{-4}, 1.17 \times 10^{-4}, 0.6 \times 10^{-4}$ \si{m^2/s}, respectively. Non-slip boundary conditions are applied on all boundary walls. Note that in the simulations the gravitational body force is only applied to the particle; thus, a relative gravity $(1-\rho_f/\rho_p)g$ is used, as suggested by Feng and Michaelides~\cite{feng2004immersed}, where $g=9.8$ \si{m/s^2} is the gravitational acceleration.
The sphere is handled as a metaball instead of using the sphere equation directly. We choose the spherical radius $R_s=1.0 \times 10^{-4}$ \si{m}; the metaball function of a sphere is then given as $M(\bm{x})=k_0/\Vert\bm{x}-\bm{x}_p\Vert^2=1$, where $\bm{x}_p$ is the mass centre of the sphere and $k_0=(0.5 d_p-R_s)^2$. Ladd~\cite{ladd1994numerical} suggested that the sphere diameter should be larger than 9 LBM cells to ensure sufficient accuracy. Here, the space step is set to $\Delta x_{LBM}=0.001$ \si{m}; thus, $d_p=15$ in lattice units. The LBM and DEM time steps are $\Delta t_{LBM}=2.0 \times 10^{-4}$ \si{s} and $\Delta t_{DEM}=2.0 \times 10^{-6}$ \si{s}. The Reynolds number $Re$ is defined as $Re=d_p u_t /\nu$, where $u_t$ is the terminal settling velocity. The time series of simulated settling velocities are compared with the experimental data in Fig.~\ref{fig:settling_vel}. The good agreement between simulation results and experimental observations suggests that the proposed coupling scheme can accurately capture the fluid-particle interactions. \begin{figure}[t] \begin{centering} \includegraphics[width=0.8\linewidth]{time_vs_velocity_sphere} \caption{Comparison between simulated settling velocities and experimental measurements for a sphere during the settling process.} \label{fig:settling_vel} \end{centering} \end{figure} \subsection{Settling of a non-spherical metaball}\label{sec:settling_metaball} One advantage of metaball DEM is that it can describe non-spherical particles with rounded surfaces. To further validate the proposed MDEM-LBM model, we conducted both experiments and simulations for the settling of a non-spherical metaball in a viscous fluid. The experimental setup is shown in Fig.~\ref{fig:exp}, where a rectangular container with dimensions of $0.15 \times 0.15 \times 0.2$ \si{m} is used.
The shape of the metaball is shown in Fig.~\ref{fig:particle_shape}; its control points form a square, and the shape can be quantitatively described by the elongation $f_{elong}=0.84375$, defined as the ratio of the bounding box width to its length. The metaball is 3D printed using a high-resolution surface mesh to conserve its geometrical properties and keep a smooth surface. The surface mesh contains 159602 vertices, whereas the metaball descriptor for the same shape requires only 4 control points. It is worth pointing out that the surface mesh is only used for visualizing particle shapes and for 3D printing; the proposed collision and coupling algorithms do not require any discretization of the particles. Since the control points and weights of the metaball are predefined and the metaball is then manufactured by 3D printing, the particle shape in the simulations is exactly the same as the one in the experiments (except for errors from 3D printing). The density and volume of the 3D printed particle are $1134.156$ \si{kg/m^3} and $1.648 \times 10^{-6}$ \si{m^3}, respectively. The metaball is released with a pair of tweezers from a completely submerged position, and the initial orientation is controlled to make sure that the particle's maximum projection area is perpendicular to the settling direction. Two types of dimethyl silicone oil with viscosities of $4.22 \times 10^{-4}$ and $8.91 \times 10^{-5}$ \si{m^2/s} (measured by a LICHEN NDJ-8S viscometer) are used in the experiments. The trajectory of the metaball is recorded by a camera (CANON EOS 5D Mark IV with a 24-105mm lens) at 1080p resolution and 50 fps. The videos are post-processed into binary images, and the metaball centroid is determined with the MATLAB function ``regionprops''. The metaball settling velocity is calculated from the pixel displacement of the centroid between consecutive frames.
The same parameters are used in the simulations; the space and time steps are set as $\Delta x_{LBM}=0.001$ \si{m}, $\Delta t_{LBM}=2.0 \times 10^{-4}$ \si{s} and $\Delta t_{DEM}=2.0 \times 10^{-6}$ \si{s}. \begin{figure}[t] \begin{centering} \includegraphics[width=0.8\linewidth]{exp_setup.png} \caption{Experimental setup for a single metaball settling in fluid.} \label{fig:exp} \end{centering} \end{figure} Fig.~\ref{fig:meta_settling_vel} shows the time series of the settling velocity for both simulations and experiments. Since $Re$ is relatively low, no rotations are observed if the particle is released with its maximum projection area perpendicular to the settling direction. This is also confirmed by the monotonically increasing settling velocity. Overall, the simulation results match well with the experimental ones. Small deviations can be found in the early stage at $Re=0.57$; this can be explained by the fact that the initial release orientation of the metaball is not strictly controllable, so the particle needs to adjust towards the maximum projection area at the beginning. At low $Re$, the fluid velocity field surrounding the metaball is similar to that of a sphere, as shown in Fig.~\ref{fig:a_meta_snapshot}, which is consistent with previous studies~\cite{zhang2016lattice}. The time evolution of the hydrodynamic force acting on the metaball is plotted in Fig.~\ref{fig:meta_fh} for $Re$=8.74. It is clear that the sharp interface coupling scheme indeed introduces observable fluctuations in the hydrodynamic force, but the fluctuations are still within a reasonable range and do not significantly affect the overall accuracy in terms of particle motion. Table~\ref{tab} shows the averaged computational time per step for the different functions without parallelization; it is found that LBM is the most time-consuming part when the number of particles is small.
\begin{figure} \centering \subfigure[Metaball particle in simulation]{\includegraphics[width=50mm]{p1n.png}} \subfigure[Metaball particle in experiment]{\includegraphics[width=50mm]{p1.png}} \caption{Particle shapes used in the experiments and simulations.} \label{fig:particle_shape} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=0.8\linewidth]{meta_time_vs_velocity} \caption{Comparison between simulated settling velocities and experimental measurements for a metaball during the settling process.} \label{fig:meta_settling_vel} \end{centering} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[width=55mm]{10.png}} \subfigure{\includegraphics[width=55mm]{30.png}} \subfigure{\includegraphics[width=55mm]{50.png}} \subfigure{\includegraphics[width=55mm]{80.png}} \caption{Snapshots of the metaball settling simulation for $Re=8.74$ at 0.2, 0.6, 1 and 1.6 s; colour indicates the fluid velocity magnitude.} \label{fig:a_meta_snapshot} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=0.8\linewidth]{fh} \caption{Time evolution of the hydrodynamic force acting on a metaball during settling, $Re$=8.74.} \label{fig:meta_fh} \end{centering} \end{figure} \begin{table} \centering \begin{tabular}{ |c|c|c|c| } \hline DEM step & LBM collision & LBM stream & IBB boundary condition\\ \hline $6.65 \times 10^{-6}$ \si{s} & 1.13 \si{s} & 0.89 \si{s} & $9.46 \times 10^{-3}$ \si{s}\\ \hline \end{tabular} \caption{\label{tab} Averaged computational time per step for different functions without parallelization.} \end{table} \subsection{Instability issues for multiple particle simulations} It is well known that sharp interface boundary conditions suffer from spurious hydrodynamic force oscillations~\cite{seo2011sharp}. This issue can be even more pronounced for multiple particle simulations due to the lack of sufficient resolution between particles.
The oscillating forces and torques can cause numerical instabilities and even crash simulations. Our analysis in the previous section shows that the lack of isotropy in the force point distribution is an important source of nonphysical oscillations. To illustrate this problem, the settling of two spheres is simulated with the same parameters as in section~\ref{sec:settling_sphere}. The time series of particle positions and the fluid velocity field are shown in Fig.~\ref{fig:issue}, where the left panel shows simulations without the treatments and the right panel those with the treatments described in section~\ref{sec:treatment} (Eq.~\ref{eq:fh_s_issue} and Eq.~\ref{eq:th_s_issue}). The simulations are identical until the two particles reach the bottom. Without the treatments, however, the particles develop nonphysical spins and the numerically introduced energy cannot be dissipated (see the last two figures in the left panel of Fig.~\ref{fig:issue}). On the contrary, both particle and fluid velocities reach zero if the isotropy of the force point distribution is restored. In fact, the simulation shown in the left panel becomes unstable if continued, whereas no stability issue is found with the treatments. \begin{figure} \centering \subfigure{\includegraphics[width=65mm]{i1500.png}} \subfigure{\includegraphics[width=65mm]{c1500.png}} \subfigure{\includegraphics[width=65mm]{i2500.png}} \subfigure{\includegraphics[width=65mm]{c2500.png}} \subfigure{\includegraphics[width=65mm]{i3500.png}} \subfigure{\includegraphics[width=65mm]{c3500.png}} \subfigure{\includegraphics[width=65mm]{i5000.png}} \subfigure{\includegraphics[width=65mm]{c5000.png}} \caption{Snapshots of the two-sphere settling simulation at 0.3, 0.5, 0.7 and 1 s, without the treatments (left) and with the treatments (right); colour indicates the fluid velocity magnitude.} \label{fig:issue} \end{figure} \section{Numerical examples}\label{sec:examples} In this section, a simulation of two settling metaballs is conducted in a closed box with dimensions of $0.15 \times 0.15 \times 0.25$ \si{m}.
The same metaballs as in section~\ref{sec:settling_metaball} are used, but with density $1200$ \si{kg/m^3}. The fluid density and kinematic viscosity are $927$ \si{kg/m^3} and $7.55 \times 10^{-5}$ \si{m^2/s}. The two metaballs are initially placed at $0.22$ and $0.205$ \si{m} above the bottom, and the lower metaball is placed $0.002$ \si{m} off the centreline. To highlight the importance of capturing particle shapes, the same simulation is also conducted with spheres of the same volume as the metaballs. It is well known that the settling of two spheres under gravity exhibits complex dynamics often referred to as ``drafting, kissing and tumbling'' (DKT), which was first studied numerically by Feng and Michaelides~\cite{feng2004immersed}. In DKT, the trailing particle catches up with the leading particle due to the drag reduction provided by the leading particle's wake. A similar trend is found for the settling of two metaballs in this simulation. Fig.~\ref{fig:dkt_h},~\ref{fig:dkt_v} and~\ref{fig:dkt_w} show the time series of the particle centre height, the particle vertical velocity and the angular velocity magnitude for both the metaballs and the spheres. The drag coefficient for non-spherical particles is generally larger than that for volume-equivalent spheres~\cite{zhang2016lattice}. Surprisingly, the metaballs reach the bottom before the spheres, although the initial projection area of a metaball is larger than that of a sphere. Fig.~\ref{fig:dkt} reveals the detailed flow patterns around the metaballs as well as the particle positions and orientations. It is clear that the shape-induced rotations have a significant influence on the particle dynamics: rotations can reduce the projection area and result in a lower drag force. The effects of the variable projection area can also be observed in the time series of the settling and angular velocities, where the fluctuations of the metaball settling and angular velocities are considerably larger than those of the spheres (Fig.~\ref{fig:dkt_v}).
\begin{figure}[t] \begin{centering} \includegraphics[width=0.5\linewidth]{dkt_time_vs_height} \caption{Time series of the particle centre height. Solid and dotted lines represent the leading and the following particle, respectively; black for the sphere and red for the metaball.} \label{fig:dkt_h} \end{centering} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=0.5\linewidth]{dkt_time_vs_velocity} \caption{Time series of the particle vertical velocity. Solid and dotted lines represent the leading and the following particle, respectively; black for the sphere and red for the metaball.} \label{fig:dkt_v} \end{centering} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[width=0.5\linewidth]{dkt_time_vs_ang} \caption{Time series of the angular velocity magnitude. Solid and dotted lines represent the leading and the following particle, respectively; black for the sphere and red for the metaball.} \label{fig:dkt_w} \end{centering} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[width=30mm]{dkt_0.png}} \subfigure{\includegraphics[width=30mm]{dkt_25.png}} \subfigure{\includegraphics[width=30mm]{dkt_40.png}} \subfigure{\includegraphics[width=30mm]{dkt_45.png}} \subfigure{\includegraphics[width=30mm]{dkt_48.png}} \subfigure{\includegraphics[width=30mm]{dkt_54.png}} \subfigure{\includegraphics[width=30mm]{dkt_61.png}} \subfigure{\includegraphics[width=30mm]{dkt_70.png}} \caption{Snapshots of the two-metaball settling simulation at 0, 0.5, 0.8, 0.9, 0.96, 1.08, 1.22 and 1.4 s; colour indicates the fluid velocity magnitude.} \label{fig:dkt} \end{figure} The last example is the settling of 30 metaballs with complex shapes. Side views of a randomly generated particle shape are shown in Fig.~\ref{fig:s30_p}; it is clear that the shape is non-isotropic.
The same parameters are used as in the previous section, but periodic boundary conditions are applied in the horizontal directions and the particle volume is $2.186 \times 10^{-6}$ \si{m^3}. The 30 metaballs are randomly placed within the domain with zero initial velocity and start to settle under gravity. Fig.~\ref{fig:s30} shows the time evolution of the fluid-particle system with detailed flow structures, which demonstrates the capability of the proposed model in simulating fluid-particle systems with complex particle shapes. \begin{figure} \centering \subfigure{\includegraphics[width=40mm]{p_x.png}} \subfigure{\includegraphics[width=40mm]{p_y.png}} \subfigure{\includegraphics[width=40mm]{p_z.png}} \caption{Side views of the irregularly shaped metaball.} \label{fig:s30_p} \end{figure} \begin{figure} \centering \subfigure{\includegraphics[width=40mm]{s30_0.png}} \subfigure{\includegraphics[width=40mm]{s30_20.png}} \subfigure{\includegraphics[width=40mm]{s30_40.png}} \subfigure{\includegraphics[width=40mm]{s30_50.png}} \subfigure{\includegraphics[width=40mm]{s30_70.png}} \subfigure{\includegraphics[width=40mm]{s30_100.png}} \caption{Snapshots of the 30-metaball settling simulation at 0, 0.4, 0.8, 1, 1.4 and 2 s; colour indicates the fluid velocity magnitude.} \label{fig:s30} \end{figure} \section{Concluding remarks}\label{sec:conclusion} In this paper, we proposed a coupled metaball DEM-LBM model to simulate fluid-particle and particle-particle interactions with complex particle shapes. By introducing a proper sharp interface coupling scheme, the efficiency of LBM in solving flows and the capability of metaball DEM in handling non-spherical particles are integrated. To preserve the high accuracy of the sharp interface moving boundary conditions, their numerical instability issues are addressed by a local refilling algorithm and the re-evaluation of hydrodynamic forces at solid nodes. Implementations of the metaball DEM, the LBM, and the coupling scheme are presented in detail.
The proposed model is first validated by comparing the settling velocities of a sphere (with metaball representation) in fluid at various $Re$ with experimental results; good agreement is observed for the settling velocities. By simulating the settling of a non-spherical metaball and comparing with our experimental results, the model shows its capability of accurately handling fluid-particle interactions with complex particle shapes. The treatments of the instability issues and their effects are illustrated by multiple-particle simulations, which suggest that the proposed coupling scheme can efficiently suppress the non-physical spin when two particles are close to each other. To demonstrate the capability of the model for fluid-particle systems with complex particle shapes, the classic DKT phenomenon is reproduced with non-spherical shapes. It is found that shape has significant effects on particle dynamics, so it is essential to capture particle shapes accurately. The model is then applied to the settling of 30 metaballs, which clearly shows that the model can handle complex particle geometries. In conclusion, the presented results demonstrate the potential of the metaball DEM-LBM model as a powerful numerical tool for simulating a wide range of fluid-particle systems, particularly those involving the non-spherical particles found in many engineering and science disciplines. \section*{Acknowledgement}\label{sec:Acknowledgement} We gratefully acknowledge the funding from the Zhejiang Provincial Key Research and Development Program (2021C02048), the Natural Science Foundation of Zhejiang Province, China (LHZ21E090002) and the National Natural Science Foundation of China (12172305). \bibliographystyle{elsarticle-num}
\section{Introduction} In \cite{CPR} Corti, Pukhlikov and Reid proved that a general quasismooth complex variety \linebreak $X = X_d \subset \mathbb{P}(1,a_1,\ldots,a_4)$ in one of the `famous~95 families' of $\mathbb{Q}$-Fano 3-fold weighted hypersurfaces is \emph{birationally rigid} --- that is, if $X$ is birational to some Mori fibre space $Y/S$ then in fact $Y \simeq X$. A related problem is to classify elliptic and K3 fibrations birational to general hypersurfaces in these families; it was Ivan Cheltsov \cite{Ch00} who first proved classification results of this kind for several birationally rigid smooth Fano varieties, including a general quartic 3-fold $X_4 \subset \mathbb{P}^4$ and a double cover of $\mathbb{P}^3$ branched in a general sextic surface, i.e., $X = X_6 \subset \mathbb{P}(1,1,1,1,3)$. In \cite{Ry02} the classification of elliptic and K3 fibrations birational to general members of the remaining 93 families was addressed, but completed only for family~5, $X_7 \subset \mathbb{P}(1,1,1,2,3)$. In the present paper we aim firstly to give concise proofs of some of the more generally applicable results of \cite{Ry02} and secondly to present a complete proof of the following theorem for family 75, which is the family referred to in the abstract. Furthermore, we state similar theorems for families 34, 88 and 90; these can be proved using essentially the same techniques. \begin{thm} \label{thm:75main} Let $X = X_{30} \subset \mathbb{P}(1,4,5,6,15)_{x,y,z,t,u}$ be a general member of family~75 of the~95. \begin{itemize} \item[(a)] Suppose $\Phi \colon X \dashrightarrow Z/T$ is a birational map from $X$ to a K3 fibration $g \colon Z \to T$ (see~\ref{defns:fibr} below for our assumptions on K3 fibrations, and also on elliptic fibrations and Fano 3-folds). Then there exists an isomorphism $\mathbb{P}^1 \to T$ such that the diagram below commutes, where $\pi = (x^4,y) \colon X \dashrightarrow \mathbb{P}^1$. 
\[ \xymatrix@C=1.6cm{ X \ar@{-->}[r]^{\Phi} \ar@{-->}[d]_{\pi} & Z \ar[d]^g \\ \mathbb{P}^1 \ar[r]^{\simeq} & T \\ } \] \item[(b)] There does not exist an elliptic fibration birational to $X$. \item[(c)] If $\Phi \colon X \dashrightarrow Z$ is a birational map from $X$ to a Fano 3-fold $Z$ with canonical singularities then $\Phi$ is actually an isomorphism (so in particular $Z \simeq X$ has terminal singularities). \end{itemize} \end{thm} Part (b) of this theorem was recently proved independently by Cheltsov and Park~\cite{CP05} using somewhat different methods. It is an interesting result because of its relevance to the question of whether $\mathbb{Q}$-rational points of $X$ are potentially dense: a birational elliptic fibration is one key geometric construction used to prove potential density (see~\cite{HT00}, \cite{Ha03} and \cite{HT01}). The proof presented here requires close examination of $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ (see~\ref{comments_on_excl_sec}), where $\mathcal{H}$ is the linear system associated to a putative birational map and $n$ is its anticanonical degree; in~\cite{CP05} more general methods are used. Our approach has the advantage that the other parts of Theorem~\ref{thm:75main} follow immediately from our complete classification of possible sets of strictly canonical centres. The following results are analogous to Theorem~\ref{thm:75main}; they can be proved with the same techniques, though we do not include all the details here. From now on we abbreviate conclusions such as that in Theorem~\ref{thm:75main}(a) by stating that \emph{up to a birational twist of the base,} $g \circ \Phi = (x^4,y) \colon X \dashrightarrow \mathbb{P}^1$. The birational twist $\mathbb{P}^1 \to T$ of the base is an isomorphism in the above case because $T$ is a smooth curve. (We assume all fibrations are morphisms of normal varieties: see~\ref{defns:fibr}.) 
\begin{thm} \label{thm:34main} Let $X = X_{18} \subset \mathbb{P}(1,1,2,6,9)_{x_0,x_1,y,z,t}$ be a general member of family~34 of the~95. \begin{itemize} \item[(a)] If $\Phi \colon X \dashrightarrow Z/T$ is a birational map from $X$ to a K3 fibration $g \colon Z \to T$ then, up to a birational twist of the base (see above for explanation), $g \circ \Phi = (x_0,x_1) \colon X \dashrightarrow \mathbb{P}^1$. \item[(b)] Suppose $\Phi \colon X \dashrightarrow Z/T$ is a birational map from $X$ to an elliptic fibration $g \colon Z \to T$. Then, up to a birational twist of the base, $g \circ \Phi = (x_0,x_1,y) \colon X \dashrightarrow \mathbb{P}(1,1,2)$. \item[(c)] If $\Phi \colon X \dashrightarrow Z$ is a birational map from $X$ to a Fano 3-fold $Z$ with canonical singularities then $\Phi$ is actually an isomorphism (so in particular $Z \simeq X$ has terminal singularities). \end{itemize} \end{thm} \begin{thm} \label{thm:88main} Let $X = X_{42} \subset \mathbb{P}(1,1,6,14,21)_{x_0,x_1,y,z,t}$ be a general member of family~88 of the~95. Under assumptions corresponding to those in Theorem~\ref{thm:34main} we can conclude as follows. \begin{itemize} \item[(a)] Up to a birational twist of the base, $g \circ \Phi = (x_0,x_1) \colon X \dashrightarrow \mathbb{P}^1$. \item[(b)] Up to a birational twist of the base, $g \circ \Phi = (x_0,x_1,y) \colon X \dashrightarrow \mathbb{P}(1,1,6)$. \item[(c)] $\Phi$ is actually an isomorphism. \end{itemize} \end{thm} \begin{thm} \label{thm:90main} Let $X = X_{42} \subset \mathbb{P}(1,3,4,14,21)_{x,y,z,t,u}$ be a general member of family~90 of the~95. Under assumptions corresponding to those in Theorem~\ref{thm:34main} we can conclude as follows. \begin{itemize} \item[(a)] Up to a birational twist of the base, $g \circ \Phi = (x,y) \colon X \dashrightarrow \mathbb{P}(1,3)$. \item[(b)] Up to a birational twist of the base, $g \circ \Phi = (x,y,z) \colon X \dashrightarrow \mathbb{P}(1,3,4)$. \item[(c)] $\Phi$ is actually an isomorphism. 
\end{itemize} \end{thm} \subsection*{Outline of paper} Our proof of Theorem~\ref{thm:75main} has essentially three parts; this division of the argument is closely modelled on the approach of Cheltsov to similar problems for smooth varieties --- see e.g.~\cite{Ch00}. In brief, the parts are: constructing the K3 fibration birational to our $X$ in family~75 (see~\S\ref{sec:constr}); proving a technical result, Theorem~\ref{thm:75aux}, using exclusion arguments (\S\ref{sec:excl}); and deriving Theorem~\ref{thm:75main} from Theorem~\ref{thm:75aux} (in~\S\ref{sec:class}) using an analogue of the Noether--Fano--Iskovskikh inequalities (\ref{nfi}) together with an adaptation of the framework of~\cite{Ch00}. We now make some comments on each of these three. \begin{emp} In \S\ref{sec:constr} we show that the projection $(x^4,y) \colon X \dashrightarrow \mathbb{P}^1$ is indeed a K3 fibration, after resolution of indeterminacy. The construction is Mori-theoretic: we make a Kawamata blowup of $X$ and play out the two ray game. We also outline constructions of the elliptic fibrations in Theorems~\ref{thm:34main}(b),~\ref{thm:88main}(b) and~\ref{thm:90main}(b). \end{emp} \begin{comments_on_excl_sec} \label{comments_on_excl_sec} First let us state the technical theorem mentioned above, which is proved in~\S\ref{sec:excl}. We need the following. \begin{notn*} Let $X$ be a normal complex projective variety, $\mathcal{H}$ a mobile linear system on $X$ and $\alpha \in \mathbb{Q}_{\ge 0}$. We denote by $\operatorname{CS}(X,\alpha\mathcal{H})$ the set of centres on $X$ of valuations that are strictly canonical or worse for $K_X + \alpha\mathcal{H}$ --- that is, \[ \operatorname{CS}(X,\alpha\mathcal{H}) = \{\operatorname{Centre}_X(E) \mid a(E,X,\alpha\mathcal{H}) \le 0\}. \] Occasionally we also use $\operatorname{LCS}(X,\alpha\mathcal{H})$, which is defined similarly as \[ \operatorname{LCS}(X,\alpha\mathcal{H}) = \{\operatorname{Centre}_X(E) \mid a(E,X,\alpha\mathcal{H}) \le {-1}\}. 
\] \end{notn*} \begin{thm} \label{thm:75aux} Let $X = X_{30} \subset \mathbb{P}(1,4,5,6,15)_{x,y,z,t,u}$ be a general member of family~75 of the~95. Suppose $\mathcal{H}$ is a mobile linear system of degree $n$ on $X$ with $K_X + \textstyle\frac{1}{n}\HH$ nonterminal. Then in fact $K_X + \textstyle\frac{1}{n}\HH$ is strictly canonical and $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right) = \{Q_1,Q_2\}$, where $Q_1,Q_2 \sim \frac{1}{5}(1,4,1)_{x,y,t}$ are the two singularities of $X$ on the $zu$-stratum. \end{thm} For an introduction to the relationship between this result and Theorem~\ref{thm:75main}, see~\ref{comments_on_final_sec} below. Our proof of~\ref{thm:75aux} in~\S\ref{sec:excl} is by exclusion arguments, as mentioned above --- e.g., we show in Theorem~\ref{thm:smpts} that no smooth point can be a centre on $X$ of a valuation strictly canonical for $K_X + \textstyle\frac{1}{n}\HH$; we refer to this as \emph{excluding\/} any smooth point. As well as \emph{absolute\/} exclusions such as this, there is an interesting \emph{conditional\/} exclusion result for singular points, Theorem~\ref{thm:Tmethodcond}. Some of the exclusion results of~\S\ref{sec:excl} are extensions of arguments in~\cite{CPR}, but Theorem~\ref{thm:Tmethodcond} is an example of strikingly new behaviour, and there are substantial differences in method also for exclusion of curves~(\S\ref{subsec:curves}). It should be noted that, though the main aim of~\S\ref{sec:excl} is to prove Theorem~\ref{thm:75aux}, several of the results obtained apply to many of the~95 families other than number~75 (in particular, numbers~34,~88 and~90) --- and the techniques of proof apply more generally still. \cite[App.\ A]{Ry02} contains a detailed list of results analogous to Theorem~\ref{thm:75aux} for most of the~95 families, including many for which only a conjectural birational classification of elliptic and K3 fibrations is currently known. 
In the case of family~34, for instance, we have the following. \begin{thm} \label{thm:34aux} Let $X = X_{18} \subset \mathbb{P}(1,1,2,6,9)_{x_0,x_1,y,z,t}$ be a general member of family~34 of the~95. Suppose $\mathcal{H}$ is a mobile linear system of degree $n$ on $X$ with $K_X + \textstyle\frac{1}{n}\HH$ nonterminal. Then in fact $K_X + \textstyle\frac{1}{n}\HH$ is strictly canonical and $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ is either $\{C,P,Q_1,Q_2,Q_3\}$ or $\{P\}$. Here $C = \{x_0=x_1=0\}\cap X$ is irreducible by generality of $X$, and $P$, $Q_1$, $Q_2$ and $Q_3$ are the singularities of $X$; $P \sim \frac{1}{3}(1,1,2)_{x_0,x_1,y}$ lies on the $zt$-stratum and $Q_1,Q_2,Q_3 \sim \frac{1}{2}(1,1,1)_{x_0,x_1,t}$ lie on the $yz$-stratum. \end{thm} \noindent Like Theorem~\ref{thm:34main}, this result is not proved in this paper, but the techniques for doing so, and for proving analogues for families~88 and~90, are essentially those we use below for Theorem~\ref{thm:75aux}. \end{comments_on_excl_sec} \begin{comments_on_final_sec} \label{comments_on_final_sec} In a sense the relationship between Theorems~\ref{thm:75aux} and~\ref{thm:75main} is obvious: the K3 fibration given by $(x^4, y)$ has the singularities $Q_1$ and $Q_2$ of~\ref{thm:75aux} as centres, and no other set of centres is possible --- so if, say, we try to grow an elliptic fibration birational to $X$, we must start with an extremal extraction of $Q_1$ or $Q_2$, but then we find we have to extract the other $Q_i$ as well. It turns out that to make this rigorous we need some abstract machinery --- in particular, results concerning the log Kodaira dimension of $(X,(\frac{1}{n} + \varepsilon)\mathcal{H})$ for small $\varepsilon$. 
It should be noted that for many of the~95 families an argument such as the above does not apply directly, because sets of centres do not distinguish objects of interest: for example, \cite{Ry02} contains many examples of $\mathbb{Q}$-Fanos $X$ birational to, say, an elliptic fibration and also a Fano with canonical singularities, and with the two linear systems having the same $\operatorname{CS}$ on $X$. To describe the approach of~\S\ref{sec:class} we consider a more abstract setup. Let $X$ be a Mori Fano variety (see~\ref{defns:Mfs} below), $Z$ a variety with canonical singularities and $\Phi \colon X \dashrightarrow Z$ a birational map. Assume furthermore that one of the following holds. (a) $g \colon Z \to T$ is a $K$-trivial fibration (see~\ref{defns:fibr}) with $0 < \dim T < \dim Z$. Let $\mathcal{H}_Z = g^*|H|$ be the pullback of a very ample complete linear system of Cartier divisors on $T$ and $\mathcal{H}$ its birational transform on $X$. (b) $Z$ is a Fano variety with canonical singularities (see~\ref{defns:Mfs}) and $\Phi$ is not an isomorphism. Let $\mathcal{H}_Z = |H|$ be a very ample complete linear system of Cartier divisors on $Z$ and $\mathcal{H}$ its birational transform on $X$. In either case (a) or (b), define $n \in \mathbb{Q}$ by $K_X + \textstyle\frac{1}{n}\HH \sim_\QQ 0$. \begin{nfi}[{\cite{Ch00}}] \label{nfi} In either of the above situations, $K_X + \textstyle\frac{1}{n}\HH$ is nonterminal, that is, strictly canonical or worse. \end{nfi} This is a standard result used in~\cite{Ch00} and~\cite{Is01}; but we give a proof of it in~\S\ref{sec:class}, under the assumptions~(a), because in this situation it follows from results we need anyway. Clearly~\ref{nfi} is motivation enough to address Theorem~\ref{thm:75aux} on the way towards proving Theorem~\ref{thm:75main}, but more work is needed to complete the proof of the latter. 
\S\ref{sec:class} contains the necessary arguments; two of the propositions are results of Cheltsov (though we give a proof of one of them), but in order to conclude we need a rather delicate argument that traces a log Kodaira dimension through a two ray game diagram. \end{comments_on_final_sec} \subsection*{What is special about families 34, 75, 88 and 90?} It is natural to ask why we are able to prove Theorems~\ref{thm:75main},~\ref{thm:34main},~\ref{thm:88main} and~\ref{thm:90main} for families~75,~34,~88 and~90, but are not able to prove similar theorems for other families out of the~95 with the same methods. There is no one answer to this, but the following are major factors. \begin{itemize} \item General members of families~34,~75,~88 and~90 are \emph{superrigid}, i.e., there are no nonautomorphic birational selfmaps. For nonsuperrigid families (the majority of the~95) one obtains much bigger anticanonical rings after Kawamata blowups of the singularities that are centres of involutions: see~\cite{CPR}. This makes impossible any direct generalisation of arguments such as that in our final proof of Theorem~\ref{thm:75main}(b). \item As already mentioned, it frequently occurs for other families out of the~95 that sets of centres on $X$ do not distinguish between different birational maps to elliptic or K3 fibrations or to Fanos with canonical singularities. \cite[Ch.\ 4]{Ry02} discusses this phenomenon in some detail using family~5, $X_7 \subset \mathbb{P}(1,1,1,2,3)$, as an extended example. Whilst the analogue of Theorem~\ref{thm:75main} is eventually proved for this family in \cite{Ry02}, the proof requires complicated exclusion arguments on blown up models of $X$, and there is no obvious way of avoiding these. See~\cite{CM04} for a situation that is in some ways analogous. 
\end{itemize} The recent work of Cheltsov and Park~\cite{CP05} proves a number of results that complement those here: they show, for example, that a general member of any of the~95 families is birational to a K3 fibration, but do not classify the K3 fibrations so obtained; they also prove that a general member of family~$N$ is not birational to an elliptic fibration if and only if $N \in \{3,75,84,87,93\}$ and, for some 23 values of $N$ not in this set, they show that up to a birational twist of the base there is a unique birational elliptic fibration: it is obtained by projection $(x_0,x_1,x_2)$ onto the first three coordinates~\cite[4.13]{CP05}. They do not, however, prove full analogues of our Theorem~\ref{thm:75aux} for these families; this is one reason why they cannot classify birational K3 fibrations. Our methods do permit K3 fibrations to be classified, but only in cases where blowing up the canonical centres not excluded by general results yields small, manageable anticanonical rings. With our current technology this restricts us to families such as~34,~75,~88 and~90, where there are few birational maps to elliptic and K3 fibrations and no nontrivial birational maps to Fanos with canonical singularities. \subsection*{Conventions and assumptions} Our notations and terminology are mostly as in, for example,~\cite{KM}, but we list here some conventions that are nonstandard, together with assumptions that will hold throughout. \begin{emp} All varieties considered are complex, and they are projective and normal unless otherwise stated. \end{emp} \begin{emp} For details on the famous~95 families, see~\cite{Fl00} and~\cite{CPR}.
In brief, $X_d \subset \mathbb{P}(1,a_1,\ldots,a_4)$ belongs to one of the families if (1) $X$ is \emph{quasismooth}, i.e., its singularities are all quotient singularities forced by the weighted $\mathbb{P}^4$; (2) the singularities are \emph{terminal} --- 3-fold terminal quotient singularities are necessarily of the form $\frac{1}{r}(1,a,r-a)$ with $r \ge 1$ and $(a,r) = 1$; and (3) $a_1 + \cdots + a_4 = d$, so by the adjunction formula ${-K_X} = \mathcal O_X(1)$ is ample. Whenever $X$ is a member of one of the~95 families, we let $A = {-K_X} = \mathcal O_X(1)$ denote the positive generator of the class group; moreover, if $f \colon Y \to X$ is a birational morphism then $B$ denotes ${-K_Y}$. \end{emp} \begin{emp} As in~\cite{CPR} and~\cite{Ry02}, we refer to the weighted blowup with weights $\frac{1}{r}(1, a, r-a)$ of a 3-fold terminal quotient singularity $\frac{1}{r}(1, a, r-a)$ with $r \ge 2$ and $(a, r) = 1$ as the \emph{Kawamata blowup\/} --- see~\cite{Ka96}, the main theorem of which is reproduced here as Theorem~\ref{thm:Ka}. \end{emp} \begin{defns} \label{defns:Mfs} A \emph{Mori fibre space\/} $f \colon X \to S$ is a Mori extremal contraction of fibre type, that is, $\dim S < \dim X$. This means that $X$ and $S$ are projective varieties, $X$ has $\mathbb{Q}$-factorial, terminal singularities, $f_*\mathcal O_X = \mathcal O_S$, $\rho(X/S) = 1$ and ${-K_X}$ is $f$-ample. If $S = \{*\}$ is a point then $X$ is a \emph{Mori Fano variety}. We use the term \emph{Fano variety\/} more generally to refer to any normal, projective variety $X$ with $-K_X$ ample and $\rho(X) = 1$. \end{defns} \begin{defns} \label{defns:fibr} Let $Z$ be a normal projective variety with canonical singularities. A \emph{fibration\/} is a morphism $g \colon Z \to T$ to another normal projective variety $T$ such that $\dim T < \dim Z$ and $g_*\mathcal O_Z = \mathcal O_T$. We say the fibration is \emph{$K$-trivial\/} if and only if $K_Z C = 0$ for every contracted curve~$C$.
$g$ is an \emph{elliptic fibration}, resp.\ a \emph{K3 fibration}, if and only if its general fibre is an elliptic curve, resp.\ a K3 surface. \end{defns} \begin{emp} Usually when we write an equation explicitly or semi-explicitly in terms of coordinates we omit scalar coefficients of monomials; this is the `coefficient convention'. \end{emp} \subsection*{Acknowledgments} Most of the techniques in this paper, and some of the theorems, are from my PhD thesis,~\cite{Ry02}. I would like to thank my supervisor, Miles Reid, and also Gavin Brown, Alessio Corti and Hiromichi Takagi, for their help and generosity with their ideas; I would also like to thank Ivan Cheltsov for his helpful comments during the final stages of preparing this paper. My PhD studies were supported financially by the British EPSRC. \section{Constructions} \label{sec:constr} \subsection{K3 fibrations} The following observation does not apply to family~75, our main object of study, but it does apply to families~34 and~88; in any case, it needs to be noted because it describes all `easy' K3 fibrations birational to members of the famous 95 --- cf.~Lemma~\ref{lem:75tworay} for the `hard' case. \begin{prop} \label{prop:easyK3s} Let $X = X_d \subset \mathbb{P}(1,1,a_2,a_3,a_4)$ be general in one of the families with $a_1 = 1$ and $a_2 > 1$. Then a general fibre $S$ of $\pi = (x_0,x_1) \colon X \dashrightarrow \mathbb{P}^1$ is a quasismooth Du Val K3 surface and, setting $\mathcal P$ to be the pencil $\left<x_0,x_1 \right>$, we have \[ \operatorname{CS}(X,\mathcal P) = \{C,P_1,\ldots,P_r\}, \] where $C$ is the curve $\{x_0 = x_1 = 0\} \cap X$, which is irreducible by generality of $X$, and $P_1,\ldots,P_r$ are all the singularities of $X$. \end{prop} \begin{proof}[\textsc{Proof}] Because $S$ is a general element of $|\mathcal O_X(1)|$, it is certainly quasismooth. 
The adjunction formula for $S_d \subset \mathbb{P}(1,a_2,a_3,a_4) =: \mathbb{P}$ gives $K_S = 0$ and the cohomology long exact sequence from \[ 0 \to \mathcal I_{S,\mathbb{P}}=\mathcal O_{\mathbb{P}}(-d) \to \mathcal O_{\mathbb{P}} \to \mathcal O_S \to 0, \] with the standard cohomology results for weighted projective space, gives $h^1(S,\mathcal O_S) = 0$. Therefore $S$ is a quasismooth Du Val K3 surface. Let $f \colon Y \to X$ be the blowup of the ideal sheaf $\mathcal I_{C,X}$ of $C$ and $E \subset Y$ the unique exceptional divisor of $f$ which dominates $C$. Then clearly $m_E(\mathcal P) = a_E(K_X) = 1$, so $C \in \operatorname{CS}(X,\mathcal P)$. The fact that $P_1,\ldots,P_r \in \operatorname{CS}(X,\mathcal P)$ is a consequence of Corollary~\ref{cor:Kalemma:2} of Kawamata's Lemma (below). Therefore \[ \operatorname{CS}(X,\mathcal P) \supset \{C,P_1,\ldots,P_r\}; \] the reverse inclusion follows from Theorem~\ref{thm:smpts}. \end{proof} \begin{rk} If $X = X_d \subset \mathbb{P}(1,1,1,a_3,a_4)$ is general in a family with $a_1 = a_2 = 1$ then clearly a general element $S$ of any pencil $\mathcal P \subset |\mathcal O_X(1)|$ is a Du Val K3 surface, provided one can prove it is quasismooth. This can be fiddly, the problem being that while $X$ is general and $S \in \mathcal P$ is general, $\mathcal P$ must be able to be \emph{any\/} pencil inside $|\mathcal O_X(1)|$. \end{rk} In contrast to the situation considered above, for families $X_d \subset \mathbb{P}(1,a_1,\ldots,a_4)$ with $a_1 > 1$ it is not immediately clear whether there exist K3 fibrations birational to $X$: to construct them, or at least to make sense of the construction, we need Mori theory. Here we consider only family~75, but the technique applies to many other families --- see~\cite{Ry02} --- and, in particular, to family~90, the subject of our Theorem~\ref{thm:90main}. 
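Before stating the lemma we record the numerology that drives the two ray game; the only input is the standard intersection formula $A^3 = d/(a_1 a_2 a_3 a_4)$ for a quasismooth hypersurface $X_d \subset \mathbb{P}(1,a_1,\ldots,a_4)$. Here
\[
A^3 = \frac{30}{4 \cdot 5 \cdot 6 \cdot 15} = \frac{1}{60} , \qquad \text{whereas} \qquad \frac{1}{ra(r-a)} = \frac{1}{5 \cdot 4 \cdot 1} = \frac{1}{20}
\]
for a point of type $\frac{1}{5}(1,4,1)$. Thus the Kawamata blowup of such a point makes the anticanonical cube negative, and it is this negativity that forces the antiflip in the lemma below.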
\begin{lemma} \label{lem:75tworay} Let $X=X_{30} \subset \mathbb{P}(1,4,5,6,15)_{x,y,z,t,u}$ be a general member of family~75 of the~95; for the notations $P,Q_1,Q_2,R_1,R_2$ for the singularities of $X$, see Theorem~\ref{thm:75aux}. Let $Q_i \in X$ be either $Q_1$ or $Q_2$ and $f \colon Y \to X$ the Kawamata blowup of $Q_i$. \begin{itemize} \item[(1)] Let $R \subset \operatorname{\overline{NE}} Y$ be the ray with $\operatorname{cont}_R = f$. Then the other ray $Q \subset \operatorname{\overline{NE}} Y$ is contractible and its contraction $g = \operatorname{cont}_Q \colon Y \to Z$ is antiflipping. \item[(2)] The antiflip $Y \dashrightarrow Y'$ of $g$ exists and $Y'$ has canonical singularities. \item[(3)] Let $Q' \subset \operatorname{\overline{NE}} Y'$ be the ray whose contraction is $g' \colon Y' \to Z$ and $R' \subset \operatorname{\overline{NE}} Y'$ the other ray. Then $R'$ is contractible and its contraction $f' \colon Y' \to \mathbb{P}^1$, which is in fact the anticanonical morphism $\fie_{|{-4K_{Y'}}|} = \fie_{|4B'|}$, is a K3 fibration --- that is, a general fibre $T'$ of $f'$ has Du Val singularities, $K_{T'} =0$ and $h^1(T',\mathcal O_{T'}) = 0$. \end{itemize} It follows that the total composite $X \dashrightarrow \mathbb{P}^1$ of the two ray game we have played, illustrated below, is $\pi = (x,y) \colon X \dashrightarrow \mathbb{P}(1,4) = \mathbb{P}^1$. \[ \xymatrix@C=0.4cm{ & Y \ar[dl]_f \ar[dr]^g & & Y' \ar[dl]_{g'} \ar[dr]^{f'} & \\ X & & Z & & \mathbb{P}^1 \\ } \] Therefore $R(Y,B) = R(Y',B') = k[x,y]$. \end{lemma} \begin{proof}[\textsc{Proof of \ref{lem:75tworay}}] (1) The first part of the following argument --- showing that the curve $C$ defined below generates $Q \subset \operatorname{\overline{NE}} Y$ --- is one case of \cite[5.4.3]{CPR}; but in \cite{CPR} this point $Q_i$ is excluded as a maximal centre by the test class method, so there is no need for the two ray game to be played out. 
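Throughout the proof we use the fact, part of Kawamata's theorem (Theorem~\ref{thm:Ka}), that the Kawamata blowup of the $\frac{1}{5}(1,4,1)$ point $Q_i$ extracts a divisor $E$ of discrepancy $\frac{1}{5}$:
\[
K_Y = f^*K_X + \tfrac{1}{5}E , \qquad \text{so that} \qquad B = {-K_Y} = f^*A - \tfrac{1}{5}E .
\]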
By generality of $X$, the curve $\{x=y=0\} \cap X$ is irreducible. $f \colon Y \to X$ is the $\frac{1}{5}(1,4,1)_{x,y,t}$ weighted blowup of the $\frac{1}{5}(1,4,1)_{x,y,t}$ point $Q_i$. Let $S \in \left|B\right|$ be the unique effective surface and $T \in \left|4B\right|$ a general element, where as always $B = {-K_Y}$. One can check explicitly, by looking at the three affine pieces of $Y$ locally over $Q_i$, that $C := S \cap T$ is an irreducible curve inside $Y$; this uses the generality of $X$. We also know that \[ B^3 = A^3 - \frac{1}{ra(r-a)} = \frac{1}{60} - \frac{1}{20} < 0, \] so in particular $BC = 4B^3 < 0$. It follows that the ray $Q \subset \operatorname{\overline{NE}} Y$ is generated by $C$ --- indeed, suppose this is not the case; then $C$ is in the interior of the 2-dimensional cone $\operatorname{\overline{NE}} Y$, so we can pick an effective 1-cycle $\sum_{i = 0}^p \alpha_i C_i$ that lies strictly between $Q$ and the half-line generated by $C$. This 1-cycle is $B$-negative, because $R$ is $B$-positive (i.e., $K$-negative) and $BC < 0$; but $\operatorname{Bs}\left|4B\right|$ is supported on $C$ and therefore one of the $C_i$, say $C_0$, is in fact $C$. The geometry of the cone now implies that, after we subtract off $\alpha_0 C_0$, $\sum_{i = 1}^p \alpha_i C_i$ is again strictly between $Q$ and the half-line generated by $C$; so we can repeat the argument to deduce that some other $C_i$ is $C$ --- and of course this contradicts our initial (implicit) assumption that the $C_i$ were distinct. This argument has also shown that $C$ is the only irreducible curve in the ray $Q$. We show $Q$ is contractible with a general Mori-theoretic trick --- afterwards, clearly, $g = \operatorname{cont}_Q$ is antiflipping, because $C$ is the only contracted curve and $K_Y C = {-BC} > 0$. Firstly note that $S$ has canonical singularities and so in particular is klt. 
We apply Shokurov's inversion of adjunction (see \cite[5.50]{KM}) to deduce that the pair $(Y,S)$ is plt. (In fact $K_Y + S$ is Cartier so the log discrepancy of any valuation for $(Y,S)$ is an integer, and therefore $(Y,S)$ is canonical.) But plt is an open condition so for $\varepsilon \in \mathbb{Q}_{>0}$, $\varepsilon \ll 1$, the pair $(Y,S + \varepsilon T)$ is plt as well. Now \[ (K_Y + S + \varepsilon T)C = \varepsilon TC = 16\varepsilon B^3 < 0, \] so $Q$ is contractible by the (log) contraction theorem for $(Y,S + \varepsilon T)$. (2) This follows immediately from Mori's result \cite[20.11]{FA}: $S$ and $T$ are effective divisors, $T \sim 4S$ and $T \cap S = C$ is precisely the exceptional set of $g$. Since $S \sim B = {-K_Y}$ the antiflip of $g$ is precisely its `opposite with respect to $S$', to use the language of \cite{FA}. This can be constructed as the normalisation of the closure of the image of \[ g \times \pi_Y \colon Y \dashrightarrow Z \times \mathbb{P}^1, \] where $\pi_Y = \pi \circ f \colon Y \dashrightarrow \mathbb{P}(1,4) = \mathbb{P}^1$ corresponds to the pencil $\left<4S,T\right> = \left|4B\right|$. Alternatively, since we have already observed that the ray $Q$ is $(K_Y + S + \varepsilon T)$-negative, (2) follows from Shokurov's general result that log flips of lc pairs exist in dimension 3. (3) From the construction of the antiflip, the transforms $S'$ and $T'$ of $S$ and $T$ on $Y'$ are disjoint, so $\operatorname{Bs}\left|{-4K_{Y'}}\right| = \operatorname{Bs}\left|4B'\right| = \emptyset$. Therefore $f' = \operatorname{cont}_{R'}$ exists and is (the Stein factorisation of) $\fie_{\left|4B'\right|}$; $T'$ is a general fibre of $f'$. In the diagram below, $\nu \colon U \to g(T)$ is the normalisation of $g(T)$.
\[ \xymatrix@C=0.1cm{ & Y & \supset & T \ar[dl]_f \ar@/_0.3cm/[ddrr]_g \ar[drr]^h & & & & T' \ar[dll]_{h'} \ar@/^0.3cm/[ddll]^{g'} \ar[dr]^{f'} & \subset & Y'& \\ X & \supset & T_X & & & U \ar[d]^{\nu} & & & \{*\} & \subset & \mathbb{P}^1 \\ & & & & & g(T) & & & & & \\ } \] Now $K_T = (K_Y + T)|_T = 3B|_T = 3C$, so $K_U = h_*(3C) = 0$. Furthermore, the Leray spectral sequence for $f\colon T\to T_X$, \[ 0 \to 0 = H^1\left(T_X,\mathcal O_{T_X}\right) \to H^1\left(T,\mathcal O_T\right) \to H^0\left(T_X,R^1f_*\mathcal O_T\right) = 0 , \] shows that $h^1(T,\mathcal O_T) = 0$ (here $h^0(T_X,R^1f_*\mathcal O_T) = 0$ because the singularity $\frac{1}{5}(1,1)$ of $T_X$ at $Q_i$ is rational); and now the Leray spectral sequence for $h \colon T \to U$, \[ 0 \to H^1\left(U,\mathcal O_U\right) \to H^1\left(T,\mathcal O_T\right) = 0 , \] shows that $h^1(U,\mathcal O_U) = 0$. The only thing left to do is to show that $U$ has Du Val singularities --- it is then clear that $T'$ is a Du Val K3 surface because \[ K_{T'} = \left(K_{Y'} + T'\right)|_{T'} = 3B'|_{T'} = 0 , \] so the minimal resolution $\widetilde{U} \to U$ of $U$ factors through $h' \colon T' \to U$. To show $U$ has Du Val singularities we observe first that $T$ has two singularities: a $\frac{1}{5}(1,1)_{x,t}$ point over $Q_j \in T_X$, where $\{i,j\} = \{1,2\}$, and a $\frac{1}{3}(1,2)_{x,z}$ point over $P \in T_X$. The curve $C$ passes through both and is locally defined by $x = 0$ in a neighbourhood of each. Let $\xi \colon \widetilde{T} \to T$ be the minimal resolution of $T$, with exceptional curves $E_1$, $E_2$ lying over $P$ and $E_3$ lying over $Q_j$, where $E_1^2 = E_2^2 = {-2}$, $E_1 E_2 = 1$, $E_1 \widetilde{C} = 0$, $E_2 \widetilde{C} = 1$, $E_3^2 = {-5}$ and $E_3 \widetilde{C} = 1$ (here $\widetilde{C}$ is the birational transform of $C$ on $\widetilde{T}$). We can calculate that $(\widetilde{C})^2 < 0$ and $K_{\widetilde{T}}\widetilde{C} < 0$, from which it follows that $\widetilde{C}$ is a $-1$-curve. 
(To do this, first show that $C^2_T < 0$ and $K_T C < 0$; then express $\widetilde{C}$ as the pullback $\xi^*C$ minus nonnegative multiples of $E_1,E_2,E_3$; then use the fact that $(E_i E_j)_{i,j = 1}^2$ is negative definite, $K_{\widetilde{T}}(\xi^*C) = K_T C < 0$ and $K_{\widetilde{T}}$ is $\xi$-nef by minimality of $\xi$.) Now the minimal resolution $\widetilde{U}$ of $U$ is obtained from $\widetilde{T}$ by running a minimal model program over $U$ --- so we start by contracting $\widetilde{C}$, which is exceptional over $U$, and then $E_2$, which has become a $-1$-curve, and finally $E_1$. $E_3$ is left as a $-2$-curve. This MMP can be summarised by \[ (2,2,1,5) \to (2,1,4) \to (1,3) \to (2). \] It follows that $U$ has a single $A_1$ Du Val singularity, as required. \end{proof} \subsection{Elliptic fibrations} Our main aim is to prove Theorem~\ref{thm:75main}, part of which is the statement that there is no elliptic fibration birational to a general $X$ in family~75. In this subsection, however, we digress briefly to discuss the construction of elliptic fibrations birational to hypersurfaces in other families out of the~95. In particular we are concerned with families~34,~88 and~90, the subjects of Theorems~\ref{thm:34main},~\ref{thm:88main} and~\ref{thm:90main}. \begin{ex} Let $X = X_{18} \subset \mathbb{P}(1,1,2,6,9)_{x_0, x_1, y, z, t}$ be a general member of family~34 of the~95. We claim that \begin{itemize} \item[(a)] the projection $\pi = (x_0, x_1, y) \colon X \dashrightarrow \mathbb{P}(1,1,2)$ gives an elliptic fibration birational to $X$ after resolution of indeterminacy; and \item[(b)] the indeterminacy may be resolved as shown below, where $f \colon Y \to X$ is the Kawamata blowup of the unique singularity $P \sim \frac{1}{3}(1,1,2)_{x_0,x_1,y}$ on the $zt$-stratum, and $\pi_Y := \pi \circ f$ is the anticanonical morphism $\fie_{|2B|}$, $B = {-K_Y}$. 
\[ \xymatrix@C=1.6cm{ Y \ar[rd]^(.4){\pi_Y} \ar[d]_{f} & \\ X \ar@{-->}[r]^(.4){\pi} & \mathbb{P}(1,1,2) \\ } \] \end{itemize} \begin{proof}[\textsc{Proof}] (a) This is easy to see: a general fibre of the rational map $\pi$ is a curve $E_{18} \subset \mathbb{P}(1,6,9)_{x,z,t}$ and, when we write down the Newton polygon of the defining equation of such a curve, there is a unique internal monomial (namely $x^3 z t$, the vertices being $t^2$, $x^{18}$ and~$z^3$). Consequently $E_{18}$ is birational to an elliptic curve, by standard toric geometry. (b) The linear system $\LL$ defining $\pi$ is $|2A| = \left< x_0^2, x_1^2, y \right>$ and one can calculate directly that its birational transform $\LL_Y$ on $Y$ is free. Furthermore \[ \LL_Y = f^*\LL - \textstyle\frac{2}{3} E \sim 2B \] and $\LL_Y$ is clearly a complete linear system. \end{proof} \end{ex} As well as exhibiting one elliptic fibration birational to a general hypersurface in family~34 (actually, according to Theorem~\ref{thm:34main}, the only such), this example shows us one way to look for elliptic fibrations when we consider other families: namely, find a singular point $P$ with $B^3 = 0$ (for $B = {-K_Y}$, $f \colon Y \to X$ the Kawamata blowup of $P$), take the anticanonical morphism on $Y$ (if $B$ is eventually free) and see if it maps to a surface. It turns out that this method works for families~88 and~90, as well as~34, so as far as the present paper is concerned, we are done. There are, however, other ways in which elliptic fibrations occur birational to hypersurfaces in the~95 families. Here is a brief list; see~\cite{Ry02} for more details. \begin{itemize} \item Sometimes elliptic fibrations have more than one singular point of $X$ in $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$; they can be constructed by blowing up all these points and taking the anticanonical morphism. 
\item {\sloppy It can also occur that $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ consists of only one singular point $P$, but \linebreak $\operatorname{CS}\left( Y, \frac{1}{n}\mathcal{H}_Y \right) \ne \emptyset$, where $Y = \operatorname{B}_P X$ is the Kawamata blowup; in this case further blowups of $Y$ are necessary before taking the anticanonical morphism.} \item Finally, there are examples of elliptic fibrations with a curve in $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ --- but only for families~1 and~2; see~\cite{Ry02}. \end{itemize} \section{Exclusions, absolute and conditional} \label{sec:excl} For an initial introduction to the contents of this section, see~\ref{comments_on_excl_sec}. We divide the material for proving Theorem~\ref{thm:75aux} into three subsections: in~\S\ref{subsec:curves} we show that all curves are excluded absolutely and in~\S\ref{subsec:smpts} we prove the corresponding result for smooth points; finally in~\S\ref{subsec:singpts} we deal with singular points. Much of the material in this section applies more widely than to family~75: for example, out of all the singular points on members of the~95 families, we show that those satisfying a certain condition turn out to be excluded absolutely, while those satisfying a different (but closely related) condition are excluded \emph{conditionally} --- that is, we prove that if they belong to $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ for some $\mathcal{H}$ then other centres must also exist in $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. \subsection{Curves} \label{subsec:curves} First we state the main curve exclusion theorem proved in \cite{Ry02}. \begin{thm}[{\cite[Curves Theorem A]{Ry02}}] \label{thmA} Let $X = X_d \subset \mathbb{P} = \mathbb{P}(1,a_1,a_2,a_3,a_4)$ be a general hypersurface in one of the 95 families and $C \subset X$ a reduced, irreducible curve. 
Suppose $\mathcal{H}$ is a mobile linear system of degree $n$ on $X$ such that $K_X + \frac{1}{n}\mathcal{H}$ is strictly canonical and $C \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. Then there exist two linearly independent forms $\ell,\ell'$ of degree $1$ in $(x_0,\ldots,x_4)$ such that \begin{equation} C \subset \{\ell = \ell' = 0\} \cap X. \label{CinPi} \end{equation} \end{thm} We do not reproduce the proof in full here, but instead restrict ourselves to the case we need, namely $X_{30} \subset \mathbb{P}(1,4,5,6,15)$. This follows immediately from the following lemmas, the first of which is standard. \begin{lemma} \label{lem:degC} Let $X$ be any hypersurface in one of the 95 families and $C \subset X$ a curve, reduced but possibly reducible. Suppose $\mathcal{H}$ is a mobile linear system of degree $n$ on $X$ such that $K_X + \frac{1}{n}\mathcal{H}$ is strictly canonical and each irreducible component of $C$ belongs to $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. Then $\deg C = AC \le A^3$. \end{lemma} \begin{lemma} \label{curvesinwps1} Let $X = X_d \subset \mathbb{P} = \mathbb{P}(1,a_1,a_2,a_3,a_4)$ be a hypersurface in one of the families with $a_1 > 1$ and suppose that either \begin{itemize} \item[(a)] $d < a_1a_4$ or \item[(b)] $d < a_2a_4$ and the curve $\{x = y = 0 \}\cap X$ is irreducible (which holds for general $X$ in a family with $a_1 > 1$ by Bertini's theorem). \end{itemize} Then any curve $C \subset X$ that is not contracted by $\pi_4 \colon X \dashrightarrow \mathbb{P}(1,a_1,a_2,a_3)$ has $\deg C > A^3$. Consequently $C$ is excluded absolutely by Lemma \ref{lem:degC}. \end{lemma} For the proofs of Lemmas~\ref{lem:degC} and~\ref{curvesinwps1}, see below. It is straightforward to check that they imply the following. \begin{cor} \label{cor:75curves} Let $X = X_{30} \subset \mathbb{P}(1,4,5,6,15)_{x,y,z,t,u}$ be any (quasismooth) member of family~75. 
Then Theorem~\ref{thmA} holds for $X$, that is, no reduced, irreducible curve $C \subset X$ can belong to $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ for any mobile system $\mathcal{H}$ of degree $n$ with $K_X + \textstyle\frac{1}{n}\HH$ strictly canonical. \end{cor} \begin{proof}[\textsc{Proof of Lemma \ref{lem:degC}}] Let $s$ be a natural number such that $sA$ is Cartier and very ample, and pick general members $H,H' \in \mathcal{H}$. Now by assumption \[ \operatorname{mult}_{C_i}(H) = \operatorname{mult}_{C_i}(H') = n \] for each irreducible component $C_i$ of $C$, so for a general $S \in \left|sA \right|$ \[ A^3 sn^2 = SHH' \ge sn^2 AC = sn^2 \deg C, \] which proves $\deg C \le A^3$. \end{proof} \begin{proof}[\textsc{Proof of Lemma \ref{curvesinwps1}}] We prove this lemma under the additional assumption that $(a_1,a_2) = 1$ --- which is the case for family 75 ($a_1 = 4$, $a_2 = 5$); if $(a_1,a_2) > 1$ then a little trick, described in \cite{Ry02}, is needed. Suppose that $C \subset X$ has $\deg C \le A^3$ and is not contracted by $\pi_4$; let $C' \subset \mathbb{P}(1,a_1,a_2,a_3)$ be the set-theoretic image $\pi_4(C)$. Note that $\deg C' \le \deg C$ --- indeed, if $H$ denotes the hyperplane section of $\mathbb{P}(1,a_1,\ldots,a_4)$ and $H'$ that of $\mathbb{P}(1,a_1,a_2,a_3)$, we pick $s \ge 1$ such that $\left|sH\right|$ and $\left|sH'\right|$ are very ample, and calculate that \begin{eqnarray*} s\deg C & = & (sH)C = \pi_4^*(sH')C \\ & = & sH'(\pi_4)_*C = srH'C' = sr\deg C' \ge s\deg C', \end{eqnarray*} where $r \ge 1$ is the degree of the induced morphism $\pi_4|_C \colon C \to C'$. So in fact $\deg C$ is a multiple of $\deg C'$ by the positive integer $r$. (The point of $\left|sH\right|$ being very ample is that we can move it away from $P_4$, where $\pi_4$ is undefined, and apply the projection formula to the morphism $\pi_4|_{\mathbb{P}(1,a_1,\ldots,a_4) \smallsetminus \{P_4\}}$.) Now form the diagram below. 
\[ \xymatrix@C=0.1cm{ C \ar[d] & \subset & \mathbb{P}(1,a_1,a_2,a_3,a_4) \ar@{-->}[d]^{\pi_4}\\ C' \ar[d] & \subset & \mathbb{P}(1,a_1,a_2,a_3) \ar@{-->}[d]^{\pi_3}\\ \{*\} & \subset & \mathbb{P}(1,a_1,a_2)\\ } \] $C'$ is contracted by $\pi_3$ --- indeed, if its image were a curve $C''$ we would have \[ \deg C'' \le \deg C' \le \deg C \le A^3, \] but $A^3 = d/(a_1a_2a_3a_4) < 1/(a_1a_2)$, since $d < a_3a_4$ in either case (a) or (b), and on the other hand $1/(a_1a_2) \le \deg C''$ simply because $C'' \subset \mathbb{P}(1,a_1,a_2)$ --- contradiction. Now by our extra assumption $(a_1,a_2) = 1$, the point $\{*\} \subset \mathbb{P}(1,a_1,a_2)$ is, up to coordinate change, one of \[ \{y = z = 0\}, \quad \{y^{a_2} + z^{a_1} = x = 0\}, \quad \{x = z = 0\} \quad \mbox{and} \quad \{x = y = 0\}, \] using the coefficient convention in $y^{a_2} + z^{a_1} = 0$. It follows that the curve $C' \subset \mathbb{P}(1,a_1,a_2,a_3)$ is defined by the same equations. In the first case, this means that $\deg C' = 1/a_3 > d/(a_1a_2a_3a_4) = A^3$, contradiction. In the second case $\deg C' = 1/a_3$ again, because $C' \simeq \{y^{a_2} + z^{a_1} = 0\} \subset \mathbb{P}(a_1,a_2,a_3)$ passes only through the singularity $(0,0,1)$, using $(a_1,a_2) = 1$ --- so we obtain a contradiction as in the first case. In the case $C' = \{x = z = 0\}$, we have $\deg C' = 1/(a_1a_3)$ and we easily obtain a contradiction from $a_2a_4 > d$. In the final case, $C' = \{x = y = 0\}$, if the assumptions in part (a) of the statement hold then we have \[ \deg C' = 1/(a_2a_3) > d/(a_1a_2a_3a_4) = A^3, \] contradiction; while if the assumptions in part (b) hold then \[ C = \{x = y = 0 \}\cap X \] (because the right hand side is irreducible), but \[ \deg\left(\{x = y = 0 \}\cap X \right) = a_1A^3 > A^3, \] since we also assumed $a_1 > 1$ --- contradiction. 
\end{proof} Note that for the case $a_1 = 1, a_2 > 1$ there is the following analogue of Lemma~\ref{curvesinwps1} --- see~\cite[3.3]{Ry02} for the proof, which is similar to but much shorter than the one we have just seen. \begin{lemma} \label{curvesinwps2} Let $X = X_d \subset \mathbb{P} = \mathbb{P}(1,1,a_2,a_3,a_4)$ be a hypersurface in one of the families with $a_1 = 1$ and $a_2 > 1$; suppose that $d < a_2a_4$. Then any curve $C \subset X$ that is not contracted by $\pi_4$ and that satisfies $\deg C \le A^3$ is contained in $\{x_0 = x_1 = 0\} \cap X$. \end{lemma} \noindent When $a_1 = a_2 = 1$, however, the situation is different and more work is required to prove sufficiently strong results. We note also for completeness that in the case of family~75 we have $P_4 = P_u \not\in X$ so the question of whether $C$ is contracted by $\pi_4 \colon X \dashrightarrow \mathbb{P}(1,a_1,a_2,a_3)$ never arises. For families that do contain curves contracted by $\pi_4$, \cite[3.5]{Ry02} shows that in almost all cases these curves are of degree greater than $A^3$, so they are excluded by Lemma~\ref{lem:degC}. The remaining few families to which this result does not apply are also dealt with in \cite{Ry02}. \begin{crvs_fams34etc} We have dealt with curves inside a general $X$ in family~75. The arguments presented above, or variants of them, deal also with curves in general members of families~88 and~90, given the following observation: family~88 has $a_1 = 1$ and $a_2 > 1$, and consequently there is a birational K3 fibration obtained by the projection $(x_0, x_1) \colon X \dashrightarrow \mathbb{P}^1$. The corresponding pencil $\mathcal P = \left< x_0, x_1 \right>$ has $C \in \operatorname{CS}(X, \mathcal P)$, where $C$ is the curve $\{ x_0 = x_1 = 0 \} \cap X$. The important point is that, by taking $X$ general in its family, $C$ is \emph{irreducible}. 
If it were not, and we had \[ \{ x_0 = x_1 = 0 \} \cap X = C_0 \cup \cdots \cup C_r, \] we would have to prove a conditional exclusion result of the form `if $C_0 \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ then all $C_i \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$'. This can be done --- see~\cite[3.12]{Ry02} --- but we avoid it using our generality assumption on $X$. The case of family~34 is more problematic because Lemma~\ref{curvesinwps2} above does not apply. We omit the argument for curve exclusion for this family; it can be found in~\cite[\S3.4]{Ry02}. \end{crvs_fams34etc} \subsection{Smooth points} \label{subsec:smpts} In this subsection we present a proof of the following theorem. \begin{smptsthm} \label{thm:smpts} Let $X = X_d \subset \mathbb{P} = \mathbb{P}(1,a_1,\ldots,a_4)$ be a general hypersurface in one of families $3,4,\ldots,95$ and $P \in X$ a smooth point. For any $n \in \mathbb{Z}_{\ge1}$ and any mobile linear system $\mathcal{H}$ of degree $n$ on $X$ we have $P \not\in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. \end{smptsthm} The proof closely follows the argument used in \cite{CPR} to exclude smooth points as maximal centres of a pair $\XnH$. First we need to quote some theoretical results. \begin{thm}[Shokurov's inversion of adjunction] \label{invadj} Let $P \in X$ be the germ of a smooth 3-fold and $\mathcal{H}$ a mobile linear system on $X$. Assume $n \in \mathbb{Z}_{\ge 1}$ is such that $P \in \operatorname{CS}\left(X,\frac{1}{n}\mathcal{H}\right)$. Then for any normal irreducible divisor $S$ containing $P$ such that $\mathcal{H}|_S$ is mobile we have $P \in \operatorname{LCS}\left(S,\frac{1}{n}\mathcal{H}|_S\right)$. \end{thm} For a readable account of the proof see \cite[5.50]{KM}. 
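Before putting these results to work, it is worth a quick sanity check of the family-75 degree numerology from \S\ref{subsec:curves}, since the constant $A^3$ reappears throughout the smooth-point argument below. The following sketch assumes only the formula $A^3 = d/(a_1a_2a_3a_4)$ used in the proof of Lemma~\ref{curvesinwps1}; it verifies that $X_{30} \subset \mathbb{P}(1,4,5,6,15)$ satisfies hypothesis (a) of that lemma, together with the intermediate inequalities used in the proof.

```python
from fractions import Fraction

# Family 75: X_30 in P(1,4,5,6,15); weights a_0, ..., a_4.
d = 30
a0, a1, a2, a3, a4 = 1, 4, 5, 6, 15

# A^3 = d/(a_1 a_2 a_3 a_4), as in the proof of Lemma curvesinwps1.
A3 = Fraction(d, a1 * a2 * a3 * a4)
assert A3 == Fraction(1, 60)

# Hypothesis (a) of the lemma: a_1 > 1 and d < a_1 * a_4.
assert a1 > 1 and d < a1 * a4            # 30 < 60

# Mid-proof inequality: d < a_3 a_4, hence A^3 < 1/(a_1 a_2).
assert d < a3 * a4                        # 30 < 90
assert A3 < Fraction(1, a1 * a2)          # 1/60 < 1/20
```

These are arithmetic checks only, of course, not a substitute for the proofs; but they confirm that Corollary~\ref{cor:75curves} follows from Lemmas~\ref{lem:degC} and~\ref{curvesinwps1} for family~75.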
Note that under the given assumptions, but without assuming $\left.\mathcal{H}\right|_S$ is mobile, \cite[5.50]{KM} says $K_S + \frac{1}{n}\mathcal{H}|_S$ is not klt near $P$, which does not preclude the centre on $S$ of the relevant valuation being a \emph{curve\/} containing $P$ rather than $P$ itself. This curve would of course be in $\operatorname{Bs}(\mathcal{H})$, so the problem is eliminated by assuming $\mathcal{H}|_S$ is mobile --- and it will be clear we may assume this in our application. With the extra assumption, $K_S + \frac{1}{n}\mathcal{H}|_S$ is not plt near $P$, as required. \begin{thm}[Corti] \label{Cothm} Let $\left(P \in \Delta_1 + \Delta_2 \subset S \right) \simeq \left(0 \in \{xy = 0 \} \subset \mathbb C^2 \right)$ be the analytic germ of a normal crossing curve on a smooth surface; let $\LL$ be a mobile linear system on $S$ and $L_1,L_2 \in \LL$ general members. Suppose there exist $n \in \mathbb{Z}_{\ge 1}$ and $a_1,a_2 \in \mathbb{Q}_{\ge 0}$ such that \[ P \in \operatorname{LCS}\left(S, (1-a_1)\Delta_1 + (1-a_2)\Delta_2 + \textstyle\frac{1}{n}\LL \right). \] Then \[ (L_1 \cdot L_2)_P \ge \left\{ \begin{array}{l@{\quad\mbox{if}\hspace{1.5ex}}l} 4a_1a_2n^2 & a_1 \le 1 \mbox{\hspace{1ex} or \hspace{1ex}} a_2 \le 1; \\ 4(a_1+a_2-1)n^2 & \mbox{both\hspace{1.5ex}} a_1,a_2 > 1. \end{array} \right. \] \end{thm} This is proved as in the original \cite{Co00}, but replacing `log canonical' by `purely log terminal' and strict inequalities by $\le$ or $\ge$ as appropriate. Now combining Theorems \ref{invadj} and \ref{Cothm} we obtain the following. \begin{cor} \label{multthm} Let $P \in X$ be the germ of a smooth 3-fold and $\mathcal{H}$ a mobile linear system on $X$. Assuming as in Theorem \ref{invadj} that $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ for some $n \in \mathbb{Z}_{\ge 1}$, we have \[ \operatorname{mult}_P(H \cdot H') \ge 4n^2 \] where $H,H' \in \mathcal{H}$ are general and $H \cdot H'$ is their intersection cycle.
\end{cor} Now we need to borrow an additional result from \cite{CPR}. First we recall the following definition. \begin{defn}[cf. {\cite[5.2.4]{CPR}}] \label{defn:Gammaisol} Let $L$ be a Weil divisor class in a 3-fold $X$ and $\Gamma \subset X$ an irreducible curve or a closed point. We say that \emph{$L$ isolates $\Gamma$}, or is a \emph{$\Gamma$-isolating class\/}, if and only if there exists $s \in \mathbb{Z}_{\ge1}$ such that the linear system $\LL_{\Gamma}^s := \left|\mathcal I_{\Gamma}^s(sL) \right|$ satisfies \begin{itemize} \item $\Gamma \in \operatorname{Bs} \LL_{\Gamma}^s$ is an isolated component (i.e., in some neighbourhood of $\Gamma$ the subscheme $\operatorname{Bs} \LL_{\Gamma}^s$ is supported on $\Gamma$); and \item if $\Gamma$ is a curve, the generic point of $\Gamma$ appears with multiplicity 1 in $\operatorname{Bs} \LL_{\Gamma}^s$. \end{itemize} \end{defn} \begin{thm}[{\cite[5.3.1]{CPR}}] \label{Pisol} Let $X = X_d \subset \mathbb{P}(1,a_1,\ldots,a_4)$ be a general hypersurface in one of families $3,4,\ldots,95$ and $P \in X$ a smooth point. Then for some positive integer $l < 4/A^3$ the class $lA$ is $P$-isolating. \end{thm} \begin{rk} \label{Pisolrk} \cite[5.3.1]{CPR} says $l \le 4/A^3$, but the statement there is for all the families except number 2 --- that is, including number 1 --- and a trivial check shows that in fact $l = 4/A^3$ only for number 1. \end{rk} \begin{proof}[\textsc{Proof of Theorem \ref{thm:smpts}}] We know that $P \in \operatorname{Bs}\left|\mathcal I_P^s(slA) \right|$ is an isolated component for some $l,s \in \mathbb{Z}_{\ge1}$ with $l < 4/A^3$. Take a general surface $S \in \left|\mathcal I_P^s(slA) \right|$ and general elements $H,H' \in \mathcal{H}$. If we assume that $P$ belongs to $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ then Corollary \ref{multthm} tells us that $\operatorname{mult}_P \left(H\cdot H' \right) \ge 4n^2$, so \[ {S \cdot H \cdot H'} \ge \left(S \cdot H \cdot H' \right)_P \ge 4sn^2. 
\] But we know \[ {S \cdot H \cdot H'} = sln^2A^3 < \frac{4}{A^3}sn^2A^3 = 4sn^2, \] contradiction. \end{proof} \subsection{Singular points} \label{subsec:singpts} The following two results are fundamental. \begin{thm}[Kawamata, \cite{Ka96}] \label{thm:Ka} Let $P \in X \simeq \frac{1}{r}(1,a,r-a)$, with $r \ge 2$ and $(a,r) = 1$, be the germ of a 3-fold terminal quotient singularity, and \[ f \colon (E \subset Y) \to (\Gamma \subset X) \] a divisorial contraction such that $P \in \Gamma$ (so $Y$ has terminal singularities, $\operatorname{Exc} f = E$ is an irreducible divisor and $-K_Y$ is $f$-ample). Then $\Gamma = P$ and $f$ is isomorphic over $X$ to the $(1,a,r-a)$ weighted blowup of $P \in X$. \end{thm} \begin{lemma}[Kawamata, \cite{Ka96}] \label{lem:Ka} Let $P \in X \simeq \frac{1}{r}(1,a,r-a)$ be as in Theorem \ref{thm:Ka} and $f \colon (E \subset Y) \to (P \in X)$ the $(1,a,r-a)$ weighted blowup; let $g \colon \widetilde{X} \to X$ be a resolution of singularities with exceptional divisors $\{E_i \}$. Fix an effective Weil divisor $H$ on $X$ and define $a_i = a_{E_i}\left(K_X\right)$ and $m_i = m_{E_i}(H)$ in the usual way via \begin{eqnarray} K_{\widetilde{X}} & = & g^*K_X + \textstyle\sum a_iE_i, \nonumber \\ g^{-1}_*H & = & g^*H - \textstyle\sum m_iE_i; \nonumber \end{eqnarray} define $a_E$ and $m_E$ similarly using $f$. Then $m_i / a_i \le m_E / a_E$ for all $i$. \end{lemma} In \cite{Ka96} Lemma \ref{lem:Ka} is used to prove Theorem \ref{thm:Ka}, but it is an interesting result in its own right; in particular, it has two corollaries that are of great importance for our problem. \begin{cor} \label{cor:Kalemma:1} Let $P \in X \simeq \frac{1}{r}(1,a,r-a)$ and $f \colon (E \subset Y) \to (P \in X)$ the Kawamata blowup as in Lemma \ref{lem:Ka}. Suppose $\mathcal{H}$ is a mobile linear system on $X$ and $n \in \mathbb{Z}_{\ge1}$ is such that $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. 
Then the valuation $v_E$ of $E$ is strictly canonical or worse for $\XnH$. \end{cor} \begin{proof}[\textsc{Proof}] Let $g \colon \widetilde{X} \to X$ be any resolution of singularities with exceptional divisors $\{E_i\}$ and $H \in \mathcal{H}$ a general element. The assumption $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ means that $n \le m_i/a_i$ for some $i$, so by Lemma \ref{lem:Ka} $n \le m_E / a_E$ as well. \end{proof} This Corollary \ref{cor:Kalemma:1} tells us that we can exclude a singular point from any $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ simply by excluding the valuation $v_E$; it has other uses as well. \begin{cor} \label{cor:Kalemma:2} Let $P \in X \simeq \frac{1}{r}(1,a,r-a)$ and $f \colon Y \to X$ be the Kawamata blowup of $P$ as in Lemma \ref{lem:Ka}. Suppose $C \subset X$ is a curve containing $P$, $\mathcal{H}$ is a mobile linear system on $X$ and $n \in \mathbb{Z}_{\ge1}$ is such that $C \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. Then $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ also. \end{cor} \begin{proof}[\textsc{Proof}] Let $g \colon \widetilde{X} \to X$ be a resolution of singularities with exceptional divisors $\{E_i\}$ at least one of which has centre $C$ on $X$ and is strictly canonical or worse for $\XnH$. The rest of the proof is the same as that of Corollary~\ref{cor:Kalemma:1}. \end{proof} \begin{absexclsingpts} Suppose $P$ is a singular point of a hypersurface $X$ in one of the~95 families and $P \in X$ is locally isomorphic to $\frac{1}{r}(1,a,r-a)$. Let $f \colon (E \subset Y) \to (P \in X)$ be the Kawamata blowup and suppose $B^3 < 0$ (where as always $B = {-K_Y}$). We denote by $S$ the surface $f^{-1}_*\{x_0=0\}$, which is an element of $\left|B\right|$ and is irreducible, assuming $X$ is general. 
\end{absexclsingpts} \begin{lemma}[{see~\cite[5.4.3]{CPR}}] \label{Texists} If $B^3 < 0$ then there exist integers $b,c$ with $b > 0$ and $b/r \ge c \ge 0$ and a surface $T \in \left|bB + cE \right|$ such that \begin{itemize} \item[(a)] the scheme theoretic complete intersection $\Gamma = S \cap T$ is a reduced, irreducible curve and \item[(b)] $T\Gamma \le 0$. \end{itemize} \end{lemma} \begin{thm} \label{thm:Tmethodabs} Suppose $B^3 < 0$ and the integer $c$ provided by Lemma \ref{Texists} is strictly positive. Then $P$ is excluded absolutely, that is, there does not exist a mobile linear system $\mathcal{H}$ of degree $n$ on $X$ such that $K_X + \frac{1}{n}\mathcal{H}$ is canonical and $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. \end{thm} For the proof we need only the following two lemmas. \begin{lemma} \label{lem:test_class} Let $X$ be a Fano 3-fold hypersurface in one of the 95 families and $\mathcal{H}$ a mobile linear system of degree $n$ on $X$ such that $K_X + \frac{1}{n}\mathcal{H}$ is strictly canonical; suppose $\Gamma\subset X$ is an irreducible curve or a closed point satisfying $\Gamma\in\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$, and furthermore that there is a Mori extremal divisorial contraction \[ f\colon (E\subset Y) \to (\Gamma\subset X), \quad \operatorname{Centre}_X E = \Gamma, \] such that $E\in\operatorname{V_0}\left(X,\textstyle\frac{1}{n}\HH\right)$. Then $B^2 \in \operatorname{\overline{NE}} Y$. \end{lemma} \begin{proof}[\textsc{Proof}] We know that \[ K_Y + \textstyle\frac{1}{n}\mathcal{H}_Y \sim_\QQ f^*\left(K_X + \textstyle\frac{1}{n}\mathcal{H}\right) \sim_\QQ 0 . \] It follows that $B \sim_\QQ \frac{1}{n}\mathcal{H}_Y$, and therefore $B^2 \in \operatorname{\overline{NE}} Y$, because $\mathcal{H}_Y$ is mobile. \end{proof} \begin{lemma}[{see~\cite[5.4.6]{CPR}}] \label{NEbarY} If $B^3 < 0$, let $T$ and $\Gamma = S \cap T$ be as in the conclusion of Lemma \ref{Texists}. 
Write $R$ for the extremal ray of $\operatorname{\overline{NE}} Y$ contracted by $f \colon Y \to X$ and let $Q \subset \operatorname{\overline{NE}} Y$ be the other ray. Then $Q = \mathbb R_{\ge0}[\Gamma]$. \end{lemma} \begin{proof}[\textsc{Proof of Theorem \ref{thm:Tmethodabs}}] Suppose $\mathcal{H}$ is a mobile linear system of degree $n$ on $X$ such that $K_X + \frac{1}{n}\mathcal{H}$ is canonical and $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. Corollary~\ref{cor:Kalemma:1} of Kawamata's Lemma tells us that the Kawamata blowup $f \colon Y \to X$ of $P$ extracts a valuation $v_E$ (where $E = \operatorname{Exc} f$) which is strictly canonical for $\XnH$. The test class Lemma~\ref{lem:test_class} now implies that $B^2 \in \operatorname{\overline{NE}} Y$. But the ray $Q \subset \operatorname{\overline{NE}} Y$ is generated by $(bB+cE)B$ for some $b,c > 0$, and certainly $EB \in \operatorname{\overline{NE}} Y$, so if $B^2 \in \operatorname{\overline{NE}} Y$ we have both $B^2,EB \in Q$ (by definition of `extremal'). It follows that $EB$ is numerically equivalent to $\alpha B^2$ for some positive $\alpha \in \mathbb{Q}$; but \[ EB \cdot B = E \Big(A-\frac{1}{r}E \Big)^2 = \frac{1}{r^2}E^3 = \frac{1}{a(r-a)} > 0, \] while $B^2 \cdot B = B^3 < 0$ by assumption --- contradiction. \end{proof} \begin{cor} \label{cor:75singpts} Let $X = X_{30} \subset \mathbb{P}(1,4,5,6,15)_{x,y,z,t,u}$ be a general member of family~75 of the~95 and $\mathcal{H}$ a mobile linear system of degree $n$ on $X$ with $K_X + \textstyle\frac{1}{n}\HH$ strictly canonical. Then no singular point of $X$ other than the two $\frac{1}{5}(1,4,1)$ points in the $zu$-stratum can belong to $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. \end{cor} \begin{proof}[\textsc{Proof}] Here is the complete list of singular points of $X$, together with the sign of $B^3$ and $bB + cE \sim T$ (see Lemma~\ref{Texists}) for each of them. 
\begin{tabbing} \hspace*{1cm} \=$P_y P_t \cap X = R_1, R_2 \sim \frac{1}{2}(1,1,1)_{x,z,u}$ space \= sample text \= \kill \>$P_y \sim \frac{1}{4}(1,1,3)_{x,z,u}$ \>$B^3 < 0$ \>$10B + E$ \\ \>$P_t P_u \cap X = P \sim \frac{1}{3}(1,1,2)_{x,y,z}$ \>$B^3 < 0$ \>$5B + E$ \\ \>$P_z P_u \cap X = Q_1, Q_2 \sim \frac{1}{5}(1,4,1)_{x,y,t}$ \>$B^3 < 0$ \>$4B$ \\ \>$P_y P_t \cap X = R_1, R_2 \sim \frac{1}{2}(1,1,1)_{x,z,u}$ \>$B^3 < 0$ \>$5B + 2E$ \end{tabbing} Clearly all these apart from $Q_1$ and $Q_2$ satisfy the hypotheses of Theorem~\ref{thm:Tmethodabs}, and consequently are excluded absolutely. \end{proof} \begin{condexclsingpts} Assume $P \in X$ is a singular point of a hypersurface in one of the~95 families which is locally isomorphic to $\frac{1}{r}(1,a,r-a)$ with $B^3 < 0$. We keep the notations $T \sim bB + cE$, $S = f^{-1}_*\{x_0=0\}$ and $\Gamma = S \cap T$ of Lemma~\ref{Texists}; in the following paragraphs we consider the case where the integer $c$ provided by Lemma \ref{Texists} is zero. Out of all such singular points, the vast majority live in a family with $b = a_1 < a_2$, where $a_1,a_2$ are the weights of $x_1,x_2$. For such points we have the following result. \end{condexclsingpts} \begin{thm} \label{thm:Tmethodcond} Let $P \in X$ be a singular point satisfying $B^3 < 0$, $T \sim bB$ and $b = a_1 < a_2$. Assume that the curve $C = \{x_0 = x_1 = 0\}\cap X$ is irreducible. Then $R(Y,B) = k[x_0,x_1]$. It follows that if $\mathcal{H}$ is a mobile linear system of degree $n$ on $X$ such that $K_X + \textstyle\frac{1}{n}\HH$ is canonical and $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ then in fact $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right) = \operatorname{CS}\left(X,\frac{1}{b}f_*\left|bB\right|\right)$, where $f \colon Y \to X$ is the Kawamata blowup of $P$. \end{thm} \begin{proof}[\textsc{Proof for family 75}] We do not prove this theorem here for every case. 
As explained in~\cite{Ry02}, the first statement, namely $R(Y, B) = k[x_0, x_1]$, follows from two ray game calculations such as that in the proof of our Lemma~\ref{lem:75tworay}; we have already shown this for family~75. The second part of the theorem follows easily from the first: let $\mathcal{H}$ be mobile of degree $n$ on $X$ with $K_X + \textstyle\frac{1}{n}\HH$ canonical and $P \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$. Then $\mathcal{H}_Y \subset |nB| = \left<x_0^n,x_0^{n-b}x_1,\ldots,x_0^rx_1^q\right>$, where $n = qb + r$, $0 \le r < b$ --- so $\mathcal{H} \subset f_*|nB| = \left<x_0^n,\ldots,x_0^rx_1^q\right>$, while of course $f_*|bB| = \left<x_0^b,x_1\right>$, and therefore $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right) = \operatorname{CS}\left(X,\frac{1}{b}f_*|bB|\right)$. \end{proof} We are now in a position to put together all the exclusion results obtained so far to prove Theorem~\ref{thm:75aux} for family~75. \begin{proof}[\textsc{Proof of Theorem~\ref{thm:75aux}}] First, since $X$ is superrigid by~\cite[\S6]{CPR}, $K_X + \textstyle\frac{1}{n}\HH$ nonterminal implies $K_X + \textstyle\frac{1}{n}\HH$ strictly canonical; therefore $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ is nonempty. Corollary~\ref{cor:75curves} tells us that no curve belongs to $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$, Theorem~\ref{thm:smpts} says that the same is true for smooth points and Corollary~\ref{cor:75singpts} shows the same for all singular points other than $Q_1, Q_2 \sim \frac{1}{5}(1,4,1)_{x,y,t}$. Therefore at least one $Q_i \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$; without loss of generality we may assume this holds for $Q_1$. 
Now by Theorem~\ref{thm:Tmethodcond} \[ \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right) = \operatorname{CS}\left( X, \textstyle\frac{1}{b} f_* |bB| \right) \] where $b = 4$ because $T \sim 4B$ (in the notation above) and $f \colon Y \to X$ is the Kawamata blowup of $Q_1$. But $R(Y, B) = k[x_0, x_1] = k[x, y]$, so $f_* |bB|$ is just $\left< x^4, y \right>$; and both $x^4$ and $y$ have local vanishing order $4/5$ at $Q_2$, so $Q_2 \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$ also, as required. \end{proof} \section{Birational classification of elliptic and K3 fibrations} \label{sec:class} We have now proved Theorem~\ref{thm:75aux} for family~75; all that remains is to deduce the main Theorem~\ref{thm:75main} from it. For this we need the following theorem--definition and propositions. \begin{thmdefn} \label{Kod} Let $X$ be a variety, normal and projective over $\mathbb C$ as always, and $\mathcal{H}$ a mobile linear system on $X$. Fix $\alpha\in\mathbb{Q}_{\ge0}$ and let $f \colon Y \to X$ be a birational morphism such that $K_Y + \alpha\mathcal{H}_Y$ is canonical. We define the \emph{log Kodaira dimension $\kappa(X,\alpha\mathcal{H})$} to be the $D$-dimension of $K_Y + \alpha\mathcal{H}_Y$, that is, \[ \kappa(X,\alpha\mathcal{H}) = D(K_Y + \alpha\mathcal{H}_Y) = \max \{\dim(\operatorname{im}\fie_{\left|m(K_Y + \alpha\mathcal{H}_Y)\right|}) \} , \] taking the max over all $m\ge1$ such that $m(K_Y + \alpha\mathcal{H}_Y)$ is integral; if all the linear systems $\left|m(K_Y + \alpha\mathcal{H}_Y)\right|$ are empty, by definition \[\kappa(X,\alpha\mathcal{H}) = D(K_Y + \alpha\mathcal{H}_Y) = {-\infty}.\] Then $\kappa(X,\alpha\mathcal{H})$ is independent of the choice of $Y$ and is attained for a particular $Y$ using any sufficiently large $m$ such that $m(K_Y + \alpha\mathcal{H}_Y)$ is integral. \end{thmdefn} \begin{proof}[\textsc{Proof}] This result is standard and is used in, e.g.,~\cite{Ch00} and~\cite{Is01}. 
The methods employed in \cite{Sh96} to show uniqueness of a log canonical model can be used to prove it; alternatively see \cite[Ch.\ 2]{FA}, particularly Theorem 2.22 and the Negativity Lemma 2.19, which is an essential ingredient. \end{proof} \begin{emp} Now let $X$ be a Mori Fano variety and $\mathcal{H}$ a mobile linear system of degree $n$ on $X$, that is, $n \in \mathbb{Q}$ is such that $K_X + \frac{1}{n}\mathcal{H} \sim_\QQ 0$ --- of course $n\in\mathbb{Z}_{\ge 1}$ if $X$ is a hypersurface in one of the 95 families. \end{emp} \begin{prop} \label{prop:varyep} Assume that $K_X + \frac{1}{n}\mathcal{H}$ is canonical. \begin{itemize} \item[(a)] Let $\varepsilon \in \mathbb{Q}$. Then \[ \kappa\left(X, \left(\textstyle \frac{1}{n} + \varepsilon \right)\mathcal{H} \right) = \left\{\begin{array}{l@{\quad\mbox{if}\hspace{1.5ex}}l} {- \infty} & \varepsilon < 0 \\ 0 & \varepsilon = 0 \\ d \ge 1 & \varepsilon > 0 \end{array} \right.\] \item[(b)] If $1 \le \kappa(X, (\frac{1}{n} + \varepsilon )\mathcal{H}) \le \dim X - 1$ for some $\varepsilon \in \mathbb{Q}_{>0}$ then $K_X + \frac{1}{n}\mathcal{H}$ is nonterminal, i.e., strictly canonical (so $K_X + (\frac{1}{n} + \varepsilon)\mathcal{H}$ is noncanonical). \end{itemize} \end{prop} \begin{proof}[\textsc{Proof}] See the survey~\cite[III.2.3--2.4]{Is01}. \end{proof} Recall that the NFI-type inequality~\ref{nfi} was stated under two alternative sets of assumptions: either \begin{itemize} \item[(a)] $X$ is a Mori Fano and $\Phi \colon X \dashrightarrow Z/T$ a birational map to the total space $Z$ of a $K$-trivial fibration $g \colon Z \to T$; or \item[(b)] $X$ is a Mori Fano and $\Phi \colon X \dashrightarrow Z$ a birational map to a Fano variety with canonical singularities. \end{itemize} \begin{prop} \label{dimT} In situation (a) above, assume that $K_X + \textstyle\frac{1}{n}\HH$ is canonical. Then for any $\varepsilon \in \mathbb{Q}_{>0}$, $\kappa\XnepH = \dim T$. 
\end{prop} \begin{proof}[\textsc{Proof}] Fix $\varepsilon\in\mathbb{Q}_{>0}$. Since $K_X + \frac{1}{n}\mathcal{H}$ is canonical, $\kappa(X,\frac{1}{n}\mathcal{H}) = 0$, and therefore $\kappa(Z,\frac{1}{n}\mathcal{H}_Z) = 0$ by the birational invariance of log Kodaira dimension (\ref{Kod}). But in fact $K_Z + \frac{1}{n}\mathcal{H}_Z$ is canonical (as is $K_Z + (\frac{1}{n} + \varepsilon)\mathcal{H}_Z$, because $K_Z$ is canonical and $\mathcal{H}_Z$ is free), so $\kappa(Z,\frac{1}{n}\mathcal{H}_Z)$ (and $\kappa(Z,(\frac{1}{n} + \varepsilon)\mathcal{H}_Z)$) can be computed on $Z$ as the ordinary $D$-dimension. Consequently, for $m \gg 0$ such that $m(K_Z + \frac{1}{n}\mathcal{H}_Z)$ is integral, the linear system $\left|m(K_Z + \frac{1}{n}\mathcal{H}_Z)\right|$ consists of a single effective divisor. Fix such an $m$ with the additional property that $m\varepsilon \in \mathbb N$, so that $m(K_Z + (\frac{1}{n}+\varepsilon)\mathcal{H}_Z)$ is integral as well; let $F \in \left|m(K_Z + \frac{1}{n}\mathcal{H}_Z)\right|$ be the unique element. Now for any curve $C$ contracted by $g$, $FC = 0$, because by assumption $K_Z C = 0$ and $\mathcal{H}_Z = g^*\left|H\right|$. But $F$ is effective, so it must be a pullback $g^* F_T$ of some effective $F_T$ on $T$. Furthermore, for any $m' \in \mathbb N$, \[ H^0(T,m'F_T) = H^0(Z,g^*(m'F_T)) = H^0(Z,m'F), \] so $F_T$ is fixed, $D(F_T) = 0$, and it is easy to see that $D(F_T + m\varepsilon H) = \dim T$, because $H$ is ample and $F_T$ is effective. Now $g^*(F_T + m\varepsilon H) = F + m\varepsilon\mathcal{H}_Z$, so \begin{eqnarray*} \textstyle \kappa\left(X,\left(\frac{1}{n} + \varepsilon \right)\mathcal{H} \right) & = & \textstyle D\left(m\left(K_Z + \left(\frac{1}{n} + \varepsilon \right)\mathcal{H}_Z\right) \right) \\ & = & D(F+ m\varepsilon\mathcal{H}_Z) = \dim T \end{eqnarray*} as required.
\end{proof} \medskip \begin{proof}[\textsc{Proof of NFI-type inequality \ref{nfi} in situation} \emph{(a)}] We note that under the assumptions (a),~\ref{nfi} is an immediate consequence of Propositions~\ref{prop:varyep} and~\ref{dimT}. Under assumptions~(b), the proof of~\cite[4.2]{Co95} can be easily adapted to give an argument. Like Theorem--Definition~\ref{Kod}, this is a standard result used in~\cite{Ch00} and~\cite{Is01}, so we omit the details. \end{proof} \bigskip All that remains is to prove the main theorem for family~75. Arguments similar to the following also prove Theorems~\ref{thm:34main},~\ref{thm:88main} and~\ref{thm:90main}. \begin{proof}[\textsc{Proof of Theorem \ref{thm:75main}}] It is simplest to prove part (b) first; we then indicate how the argument can be easily adapted to demonstrate (a) and (c) also. {(b) \sloppy Suppose that $\Phi \colon X \dashrightarrow Z/T$ is a birational map from $X$ to an elliptic fibration \linebreak $g \colon Z \to T$. By the NFI-type inequality~\ref{nfi}, $K_X + \textstyle\frac{1}{n}\HH$ is nonterminal, where as usual the system \linebreak[4] $\mathcal{H} = \Phi^{-1}_*\mathcal{H}_Z = \Phi^{-1}_*g^*|H|$ is the transform of a very ample complete system of Cartier divisors on $T$. By Theorem~\ref{thm:75aux}, $K_X + \textstyle\frac{1}{n}\HH$ is strictly canonical and $\operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right) = \{Q_1,Q_2\}$. } Let $Q$ be either $Q_1$ or $Q_2$. As in Lemma~\ref{lem:75tworay} we blow up $Q$ and play out the two ray game; in the notation of the lemma, \[ R(Y,B) = R(Y',B') = k[x,y], \] $f'$ is the anticanonical morphism of $Y'$ and the composite $\pi \colon X \dashrightarrow \mathbb{P}^1$ is $(x^4,y)$. 
Now because $Q \in \operatorname{CS}\left(X,\textstyle\frac{1}{n}\HH\right)$, we have that \[ \mathcal{H}_Y \subset |{-nK_Y}| = |nB| = k[x,y]_n = \left<x^n,x^{n-4}y,\ldots,x^{n \bmod 4}y^{\lfloor n/4\rfloor}\right>; \] the same is true of $Y'$ (since $Y$ and $Y'$ are isomorphic in codimension one) and therefore \linebreak $\mathcal{H}_{Y'} = (f')^*\mathcal{H}_{\mathbb{P}^1}$ is the pullback of a mobile system on $\mathbb{P}^1$. But any mobile system on $\mathbb{P}^1$ is free, so we can deduce (as in the proof of Proposition~\ref{dimT}) that for any $\varepsilon \in \mathbb{Q}$ with $0 < \varepsilon \ll 1$, \begin{equation} \label{kale1} \kappa\XnepH = \kappa\left(Y',\left(\textstyle\frac{1}{n} + \varepsilon\right)\mathcal{H}_{Y'}\right) = D(F + m\varepsilon\mathcal{H}_{\mathbb{P}^1}) \le 1 \end{equation} where $F$ is a fixed effective divisor on $\mathbb{P}^1$ (so in fact $F = 0$) and $m \in \mathbb{Z}_{>0}$. But by Proposition~\ref{dimT} applied to $\Phi \colon X \dashrightarrow Z/T$ we have $\kappa\XnepH = \dim T = 2$, which contradicts (\ref{kale1}). This proves (b). (a) We can follow the proof of (b) but in the end, rather than a contradiction, we deduce that the system $\mathcal{H} = \Phi^{-1}_*g^*|H|$ is actually a pullback $\pi^{-1}_*\mathcal{H}_{\mathbb{P}^1}$ via the map $\pi = (x^4,y) \colon X \dashrightarrow \mathbb{P}^1$. This induces an isomorphism $\mathbb{P}^1 \to T$ such that the specified diagram commutes. (c) If we assume $\Phi$ is not an isomorphism, we can follow the argument for (b) to deduce that $\kappa\XnepH = 1$, which is obviously a contradiction since $\mathcal{H}_Z = |H|$ is very ample. (Note that the NFI-type inequality~\ref{nfi} requires us to assume $\Phi$ is not an isomorphism in the Fano case; for the elliptic and K3 cases this is of course not necessary.) This completes the proof. \end{proof} \small
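Finally, as a sanity check on the singular-point table in the proof of Corollary~\ref{cor:75singpts}, one can verify the sign of $B^3$ at each point. The sketch below assumes the formula $B^3 = A^3 - 1/\bigl(ra(r-a)\bigr)$ for the Kawamata blowup of a $\frac{1}{r}(1,a,r-a)$ point, which follows from $\frac{1}{r^2}E^3 = \frac{1}{a(r-a)}$ as computed in the proof of Theorem~\ref{thm:Tmethodabs}; the point labels repeat those of the table.

```python
from fractions import Fraction

# Family 75: A^3 = 30/(4*5*6*15) = 1/60.
A3 = Fraction(30, 4 * 5 * 6 * 15)

# The 1/r(1, a, r-a) types from the table in Corollary cor:75singpts,
# recorded as (r, a).
points = {
    "P_y:    1/4(1,1,3)": (4, 1),
    "P:      1/3(1,1,2)": (3, 1),
    "Q_1,2:  1/5(1,4,1)": (5, 4),
    "R_1,2:  1/2(1,1,1)": (2, 1),
}
for name, (r, a) in points.items():
    # Assumed formula: B^3 = A^3 - 1/(r*a*(r-a)) for the Kawamata blowup.
    B3 = A3 - Fraction(1, r * a * (r - a))
    assert B3 < 0, name  # matches the "B^3 < 0" column of the table
```

This confirms, at the level of arithmetic, that every singular point of a member of family~75 satisfies the hypothesis $B^3 < 0$ of Lemma~\ref{Texists}.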